
If you can generate a realistic video stream that responds to player movements and interactions, you can train your robot on that stream. That is far more scalable than building physical environments and performing real-world training.
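A minimal sketch of the idea: the "environment" is a model that, given an action, generates the next frame, and the robot's policy is rolled out against it exactly as it would be against a real simulator. Here the world model is a toy stand-in (`ToyWorldModel` is hypothetical; a real system would be a learned neural video model conditioned on actions):

```python
import numpy as np

class ToyWorldModel:
    """Stand-in for a learned, action-conditioned video model.

    A real world model would be a neural network trained on video;
    this toy version just renders an agent's position into a small
    grayscale frame so the interaction loop is concrete and runnable."""

    def __init__(self, size=8):
        self.size = size
        self.pos = size // 2  # agent starts in the middle

    def step(self, action):
        # action: -1 (move left), 0 (stay), +1 (move right)
        self.pos = int(np.clip(self.pos + action, 0, self.size - 1))
        frame = np.zeros((self.size, self.size), dtype=np.float32)
        frame[:, self.pos] = 1.0  # draw the agent as a bright column
        reward = 1.0 if self.pos == self.size - 1 else 0.0  # goal: right edge
        return frame, reward

def collect_rollout(model, policy, horizon=20):
    """Roll a policy inside the generated environment and record
    (action, frame, reward) transitions for training."""
    transitions = []
    for _ in range(horizon):
        action = policy()
        frame, reward = model.step(action)
        transitions.append((action, frame, reward))
    return transitions

model = ToyWorldModel()
rollout = collect_rollout(model, policy=lambda: 1)  # always move right
print(len(rollout), rollout[-1][2])
```

The point is that the training loop is identical whether `step` is backed by a physics engine, a real robot, or a generative video model; only the fidelity and cost of the frames change.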

Of course, the alternative is to use game engines, but it's possible that AI could generate a more realistic video stream for the same money spent. Recent AI-generated videos certainly look far more realistic than any game footage I've ever seen.



Game engines require a lot of additional work to make them suitable for that task, too: deep integration for sensor data, importing maps and assets, plus the basic mismatch that these workflows are centered around Windows GUI tools, whereas robotics happens on the Linux command line.



