
So now Elon makes not only boring products, but also others that just suck?

Definitely 1! As I commented before here, https://www.linkedin.com/advice/0/heres-how-you-can-stay-ahe...:

Many roboticists overlook the need for JavaScript in their skill set and tech stack. This is because of the evolution of robotics projects and companies: first you build a prototype and only then do you deploy and scale your fleet. The first part is hard and doesn't require any JS. But the second part is even harder and requires a ton of JS, because without web tooling you won't be able to operate efficiently. This offers a great opportunity for roboticists to distinguish themselves from others by mastering JS for both the backend (rosnodejs, rclnodejs) and the frontend (React, Transitive Robotics). This sets you apart from both "regular" roboticists and from "pure" web developers.
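
To make the backend part concrete: a minimal rclnodejs node might look roughly like this (a sketch; the topic and message type are just placeholders, not from any particular robot):

    // Minimal ROS 2 node in JavaScript using rclnodejs.
    const rclnodejs = require('rclnodejs');

    rclnodejs.init().then(() => {
      const node = new rclnodejs.Node('battery_monitor');

      // Subscribe to a (hypothetical) battery topic and log the charge level.
      node.createSubscription(
        'sensor_msgs/msg/BatteryState',
        '/battery_state',
        (msg) => console.log(`battery at ${(msg.percentage * 100).toFixed(0)}%`)
      );

      rclnodejs.spin(node);
    });

From there it's a small step to pipe that same data into a web dashboard, which is where the frontend skills come in.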


What I'm trying to say: robotics is a very broad field and your current skills are needed just as much. And while working on those aspects of the robotics stack you can learn about the rest if you want. But you don't even have to in order to progress in your career in the robotics industry.

Learn ROS. It has tutorials using just turtlesim. Also, https://www.theconstruct.ai/ is a place designed for people who want to learn robotics (based on ROS). https://www.youtube.com/@mikelikesrobots also has some nice beginner videos.
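
For a flavor of how little code a first ROS exercise takes, here is a sketch that drives the turtlesim turtle in a circle (again in JavaScript via rclnodejs rather than the Python/C++ the official tutorials use; it assumes turtlesim's standard /turtle1/cmd_vel topic):

    const rclnodejs = require('rclnodejs');

    rclnodejs.init().then(() => {
      const node = new rclnodejs.Node('turtle_driver');
      const pub = node.createPublisher('geometry_msgs/msg/Twist', '/turtle1/cmd_vel');

      // Publish a constant forward + angular velocity roughly 10 times a second.
      node.createTimer(100, () => {
        pub.publish({
          linear:  { x: 1.0, y: 0.0, z: 0.0 },
          angular: { x: 0.0, y: 0.0, z: 1.0 },
        });
      });

      rclnodejs.spin(node);
    });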

This is similar to ROSboard, but with some significant differences. Let me know if you'd like to hear more about them.

Sorry to hear, Kyle! But keep in mind: good ideas always survive, you just need to find the right business model for them. Maybe Basis didn't work as a VC-funded business from day one. But take Foxglove again: they came out of the open-source release of webviz from Cruise. That's how they "bootstrapped". Would love to see what would happen if you worked inside another company that wants to build its own middleware but also knows that it doesn't need to own it and would benefit from more users (because more "test coverage", more developers, more extensions). Then, after a couple of years, it might be mature enough that other companies could try it and get value very quickly (and much faster than they could now).


I'd definitely love to take the tech or ideas to another company. I'm happy for now to sit on it and make slow improvements for fun.


It's Moravec's paradox: many of the things that are easy for humans (like doing dishes) are still very hard for computers/robots. That is why so far no one has been able to build a robot that can do the things people want help with at home: dishes, laundry, taking out the trash, cleaning. And by cleaning I mean cleaning everything, including toilets, window shades, and the shower, not just vacuuming and mopping the floor. It's hard enough to build a robot that can do one thing really, really well and 99.9% autonomously, which is why we still see innovation in things like robot vacuuming (see Matic).

https://en.wikipedia.org/wiki/Moravec%27s_paradox


Also, the wealthy can always just hire a person to cook and clean for them, and that will probably always be cheaper.

Even Elon Musk style robots puppeted for pennies an hour by gig workers and AI wouldn't be very cost effective.



Comments moved thither. Thanks!


I've heard about you guys before. Nice application! What are you using for teleop/remote-monitoring? Did you build that yourself?


Teleop and monitoring are systems we've built ourselves and are pretty happy with. Since we use MuJoCo for simulation/visualization and some kinematics subroutines, for visualization I just keep the MuJoCo GL context open after rendering and throw all of our sensor data into it - it's very performant and low latency!

We've since introduced a message-bus layer that makes it possible to do it all over the internet etc, but adds the associated serialization and transport latency.
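
To illustrate where that latency comes from (a purely hypothetical sketch, not how their bus is actually built; it uses the common 'ws' npm package), every hop through such a layer pays for serialization, the network, and deserialization:

    const WebSocket = require('ws');
    const server = new WebSocket.Server({ port: 9090 });

    server.on('connection', (socket) => {
      socket.on('message', (data) => {
        const msg = JSON.parse(data);            // deserialization cost
        msg.relayedAt = Date.now();              // crude latency bookkeeping
        // fan the message out to every other connected client
        for (const client of server.clients) {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(JSON.stringify(msg));    // serialization cost again
          }
        }
      });
    });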


> to do it all over the internet, but adds the associated serialization and transport latency.

I wrote this blog post on that topic a while back after having seen various approaches robotics companies take and their shortcomings: https://transitiverobotics.com/blog/streaming-video-from-rob...


Excellent post! Curious if WebRTC can be adapted for 3D sensor data and would love to chat more about it - I'll send an email!
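
One direction that can work (a browser-side sketch only, not something from the post; signaling is omitted and sendPointCloud is a hypothetical helper) is to put 3D frames on an unordered, unreliable data channel, so stale point clouds get dropped rather than retransmitted:

    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('pointcloud', {
      ordered: false,      // tolerate reordering for lower latency
      maxRetransmits: 0,   // drop stale frames instead of retransmitting
    });

    channel.onopen = () => console.log('point-cloud channel open');

    function sendPointCloud(points /* Float32Array of x,y,z triples */) {
      // SCTP messages have size limits, so send the cloud in chunks.
      const bytes = new Uint8Array(points.buffer);
      const CHUNK = 64 * 1024;
      for (let i = 0; i < bytes.length; i += CHUNK) {
        channel.send(bytes.subarray(i, i + CHUNK));
      }
    }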


There should be a fully fledged, robust and comprehensive open source robotics OS.

I imagine most of this code is being reinvented on a daily basis at countless companies around the world - what a waste of human resources.


For a robotics company, their code is the "secret sauce" that makes their company valuable. It wouldn't make sense to open-source it all and let their competitors do the same thing without having had to spend so much money and time developing it.

Open source works great for shared code that isn't part of the "value added" by a company. So for a modern robotics company, it makes a lot more sense to use Linux for instance rather than rolling their own proprietary OS. And to use an open-source compiler for building the code. They're in the business of providing solutions using robotics, not selling operating systems and compilers, just like countless other companies build their products on top of these infrastructural tools, and sometimes contribute bug fixes and improvements back. But the code that actually makes the robot work (vision, motion planning, etc.) is what they spent most of their funding building, so giving it away makes no business sense.

Basically, you're complaining about all companies having trade secrets, and ultimately you're complaining that competition exists instead of a single company holding a monopoly over a whole market.


There is, and by some estimates 1.3M people use it: https://docs.ros.org/en/rolling/


WebRTC is end-to-end encrypted, too, and unlike OpenMLS it is designed for video streaming, so it's great for low-latency live viewing, even over terrible network connections. We are using it in robotics and built a component you can just embed in any web page/app (not open-source, sorry): https://transitiverobotics.com/caps/transitive-robotics/webr... On the device side it supports hardware acceleration on Nvidia, RockChip, and Intel devices with VA-API support (e.g., NUCs).
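
On the receiving end the browser does most of the work; a generic sketch using nothing but the standard WebRTC API (not the component above; signaling omitted, and the video element id is made up) looks like:

    const pc = new RTCPeerConnection();
    pc.ontrack = (event) => {
      // Attach the incoming, encrypted video stream to a <video> element.
      const video = document.querySelector('video#robot-cam');
      video.srcObject = event.streams[0];
      video.play();
    };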


Funny, just yesterday I was complaining about OpenAI's GPTBot's stupidity costing site operators too much: https://x.com/chfritz/status/1863689365740020012


Hey, sorry I didn't get back to you sooner - I guess HN doesn't send reply notifications.

Want to talk about it? Would be good to connect with someone else interested in this.

