Hacker News

What I find particularly interesting is the sort of "superstition" many of these kinds of robots show. What I'm referring to in this case is how, after 50 tries, the robot arm here moves to its right before every flip attempt. I believe the same thing shows up with solutions from genetic algorithms and neural networks.

Seems like there is always some non-negligible probability that, among the factors a learning robot credits for its success, there is one that is actually totally irrelevant. That's probably somewhat how our own brains operate as well.



Perhaps running the learning process multiple times and averaging(?) the results would eliminate irrelevant motions. Or perhaps the irrelevant parts are critical to the rest of the movements, and removing them would break things.


If the motions are truly irrelevant, then yes, they should eventually disappear. But they may only disappear slowly as a function of the number of trials, so anything that still persists after 50 trials could take a long time to get rid of.

It is generally a hard problem with adaptive learning algorithms that they only produce results, not explanations. You can't tell the superstition from the intuition.
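A toy sketch of both points (all names and numbers here are invented for illustration, not from the article): the genome below has five genes, but the fitness function only ever looks at the first three. Every accepted hill-climbing step still perturbs all five, so the irrelevant genes drift along for the ride and end up holding arbitrary values. Comparing several independent runs, the relevant genes agree while the irrelevant ones scatter.

```python
import random

# Hypothetical setup: only the first 3 genes matter to fitness;
# the last 2 are pure "superstition" slots.
TARGET = [1.0, -2.0, 0.5]

def fitness(genome):
    # The last two genes are never evaluated.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def hill_climb(steps=5000, seed=None):
    rng = random.Random(seed)
    genome = [rng.uniform(-3, 3) for _ in range(5)]
    best = fitness(genome)
    for _ in range(steps):
        # Mutate all 5 genes; accept only strict improvements.
        child = [g + rng.gauss(0, 0.1) for g in genome]
        f = fitness(child)
        if f > best:  # each accepted step also drags the irrelevant genes along
            genome, best = child, f
    return genome

runs = [hill_climb(seed=s) for s in range(5)]
for i in range(5):
    spread = max(r[i] for r in runs) - min(r[i] for r in runs)
    print(f"gene {i}: spread across runs = {spread:.2f}")
# The first three spreads are small (runs agree on the relevant genes);
# the last two are large (each run kept whatever values drift left behind).
```

This also illustrates the comparison-across-runs idea: averaging or comparing independent runs exposes which components are load-bearing, something a single run's result can never tell you.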


That's because this is an evolutionary (in a broad sense) method of learning, i.e. it does not involve analytical understanding of what it's trying to optimize, but rather it uses feedback from previous iterations to achieve the objective.

Evolution works in the same way: there's no analysis from the phenotypical world back to the genes ("oh, we should be taller; let's just tweak this gene", i.e. Lamarckism). Instead, it's just massively scalable trial and error.

In fact, the logical disconnect between effect and cause is probably a strength rather than a weakness.
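That trial-and-error loop can be sketched in a few lines (a hypothetical toy, not anyone's actual experiment): candidates are scored, the better half are copied with random mutations, and at no point does the algorithm analyze *why* any particular gene helped.

```python
import random

random.seed(1)  # for reproducibility of this toy example

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genes):
    # Feedback is a single score; the learner never sees *which* genes helped.
    return sum(g == t for g, t in zip(genes, TARGET))

def evolve(pop_size=50, generations=100, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection, keeps the best as-is
        children = [
            [1 - g if random.random() < mut_rate else g for g in p]
            for p in parents
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The only information flowing back from the "world" to the genome is one fitness number per candidate; there is no channel for "let's tweak this gene", which is exactly the non-Lamarckian point.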


> In fact, the logical disconnect between effect and cause is probably a strength rather than a weakness.

Absolutely. I remember reading about an artificial life program which had evolving creatures try to survive in a very harsh artificial world. The strategies developed seemed less than optimal on the face of it, but when the writers of the program hand-coded their own supposedly "perfect" strategy, the increased efficiency of their strategy actually led to a lower overall survival rate. Only after that could they see why.


We probably make a "pointless move to the right" in some of our own cognition and never notice it.




