Hacker News

Doesn't this then turn into a problem of sample quantity? You would need to shift into a quality mindset, because with a robot you can't perform a billion iterations; you're locked into a much more complex world with unavoidably real-time interactions. Failure is suddenly very costly.


With a million robots you can perform a billion iterations. We won't need a billion iterations on every task; we will start to see generalization and task transfer just as we did for LLMs once we have LLM-scale data.

You are right that failure is costly with today's robots. We need to reduce the cost of failure. That means cheaper and more robust robots. Robots that, like a toddler, can jump off a couch and fall over and still be OK.

Tying back to the article, this is the real evolutionary advantage that humans have over AIs. Not innate language skills or anything about the brain. It's our highly optimized, perceptive, robust, reliable, self-repairing, fail-safe, and efficient bodies, allowing us to experiment and learn in the real physical world.


> robust, reliable, self-repairing, fail-safe, and efficient bodies

you must be young and healthy because I cannot imagine using any of these words to describe this continuously decaying mortal coil in which we are all trapped and doomed


I wish! Hopefully AI can help with that too, but (contra Kurzweil) I fear medicine moves too slowly and it is already too late to save our generation from aging. Hopefully our kids can reap the benefits.


AIs' advantage would be that their learning can be shared.

For example, if Robot 0002 learns that trying to move a pan without using the handle is a bad idea, Robot 0001 would get that update too (even if it came off the line earlier).
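The sharing described above can be sketched as a fleet-wide experience store that every robot writes to and reads from. This is a minimal illustrative sketch; all names (`SharedExperience`, the tasks and outcomes) are hypothetical, not any real robot API.

```python
# Hypothetical sketch of fleet learning: every robot logs outcomes to a
# shared store, and every robot (including ones built earlier) reads from it.

class SharedExperience:
    def __init__(self):
        # (task, action) -> observed outcome, pooled across the whole fleet
        self.lessons = {}

    def record(self, robot_id, task, action, outcome):
        self.lessons[(task, action)] = outcome

    def lookup(self, task, action):
        return self.lessons.get((task, action))

fleet = SharedExperience()

# Robot 0002 learns the hard way...
fleet.record("0002", "move_pan", "grab_rim", "burned_gripper")

# ...and Robot 0001 benefits without repeating the failure.
print(fleet.lookup("move_pan", "grab_rim"))  # burned_gripper
```

In practice the "update" would be a gradient or policy checkpoint rather than a lookup table, but the key property is the same: one robot's failure becomes every robot's training data.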


But that ends up with weirdly dogmatic rules, because moving a pan without using the handle isn't always a bad idea; it's only bad in some situations. It still takes a ton of potentially destructive iterations to be sure of something.


Yeah, it's tricky and costly. I believe we should bet on specificity to make this more tractable.

I know the trend with AI is to keep the scope generic so it can tackle different domains and look more like us, but I believe that even if we reach that, we'll always come back to make it better for a specific skill set, because we also do that as humans. No reason for an AI driver to know how to cook.

If we narrow the domain as much as possible, it will cut the number of experiments the robot needs to run significantly.

Edit: I wonder if it's even going to be useful to devote so many resources to making a machine as similar to us as possible. We don't want a plane to fly like a bird, even if we could build one.


Then we will continue to have a Temperature variable in the Action Models.
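A temperature variable in an action model usually means softmax sampling over action logits, where higher temperature means more exploratory (less dogmatic) behavior. A minimal sketch, assuming a simple discrete action space; the function name and logits are made up for illustration:

```python
import math
import random

def sample_action(logits, temperature=1.0):
    """Softmax sampling: higher temperature -> flatter distribution,
    so the model explores lower-scoring actions more often."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 0.5, -1.0]
# Near-zero temperature: effectively greedy, almost always action 0.
print(sample_action(logits, temperature=0.01))
# High temperature: actions drawn nearly uniformly.
print(sample_action(logits, temperature=100.0))
```

This is the knob that trades off the dogmatic-rules problem above against costly exploration: turn it down once the domain is narrow and well understood.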



