
I think it's reasonable to argue that data acquired via a sensorimotor loop in an embodied agent will go beyond what you can learn passively from a trove of internet data, but this argument goes beyond that - the "data" in evolution is "learned" (in a fashion) not just from a single agent, but from millions of agents, even those that didn't survive to replicate (the "selection", of course, being a key part of evolution).

A neat thing about the kind of artificial robots we build now is that the process can be massively sped up compared to the plodding trial and error of natural evolution.



Exactly. We have huge advantages over evolution in some regards. All of the experience from every robot can be combined into a single agent, so even if AI is not as sample efficient as human brains it could still far surpass us. And honestly the jury is still out on sample efficiency. We haven't yet attempted to train models on the same kind of data a human child gets, and once we do we may find that we are not as far away from the brain's sample efficiency as we thought.


> All of the experience from every robot can be combined into a single agent

I'm not so sure. It's not obvious that experience combines linearly, so you'd have to figure out how to make the combination work in a way that doesn't mess up the receiver too badly--you still want some individuality among the robot fleet, right?


That's interesting to think about. I'm not familiar with the literature on this, but I'm sure there's interesting work on it (and in related fields such as distributed and federated learning). I guess the simplest solution would be "centralized": periodically aggregate all the raw data from all robots, train a model on all of it, and redistribute the model. In that case there wouldn't be any "individuality", but (again, by analogy with evolution) one could argue it'd be advantageous to have some. Even if all the models are the same, the robots might be different types and operate in different environments, which raises issues of generalizability, transferability and specialization. Either way, the centralized approach would naturally have scaling problems - some way to transfer/aggregate experience (possibly "peer to peer") without retraining from raw data then becomes attractive, and I'm sure it's something people are working on. It does turn out that in at least some recent LLMs the weights appear to be roughly linear, and people have been merging them with fairly naive methods and getting good results.
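The "fairly naive methods" for merging weights are often just a (possibly weighted) element-wise average of the parameters, as in model soups or federated averaging. A minimal sketch of that idea, with illustrative names and plain Python lists standing in for real tensors:

```python
# Naive weight merging: element-wise average of parameter dicts.
# All names here are illustrative; real systems would use framework
# state dicts (e.g. PyTorch) and proper tensors.

def merge_weights(state_dicts, coeffs=None):
    """Average a list of parameter dicts (name -> list of floats)."""
    if coeffs is None:
        # Uniform averaging by default, as in basic federated averaging.
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(c * sd[name][i] for c, sd in zip(coeffs, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two "robots" with locally trained weights for the same layer:
robot_a = {"layer.w": [0.25, 0.5, 0.75]}
robot_b = {"layer.w": [0.75, 0.0, 0.25]}

merged = merge_weights([robot_a, robot_b])
print(merged["layer.w"])  # [0.5, 0.25, 0.5]
```

The surprising empirical finding is that this kind of interpolation between separately fine-tuned checkpoints often works at all, which is what the rough linearity of the weights refers to.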


It's possible that different entities would experience each other's experiences differently... that is, if you were to magically teleport your experience of reading this post into my brain, it might be overpoweringly disorienting and even painful. On the other hand, it could just be "a little weird". Or would everything that differentiates my mind from yours be instantly overwritten? That would probably catastrophically reduce my fitness, because I'd have to--or more like you'd have to--learn how to operate my body.



