How many bits of all that sensory information--note that our vision is nearly optimally sensitive--actually make it to the brain? I think there's got to be a large amount of low-effort (structurally baked-in, automatic) filtering, aggregation, and lossy compression happening, otherwise it would just be way too much, right?


While I'm sure a lot of noise is filtered out, the opposite is true as well. Much of what we perceive is interpolated, pattern-matched filler.

I don't know if this is due to source input limitations, or if it is a compression > processing > decompression technique for efficiency. Either way, it does imply that the amount of data desired is more than what makes it through the bottleneck.
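A loose analogy in code (entirely my own illustration, not something from the comment above): push a signal through a bottleneck, then interpolate the missing samples back. The reconstruction looks plausible, but it's filler, not measurement:

    import numpy as np

    # Crude analogy: throw away 3 of every 4 samples (the bottleneck),
    # then reconstruct the gaps by linear interpolation (the "filler").
    signal = np.sin(np.linspace(0, 4 * np.pi, 65))
    kept = signal[::4]                 # lossy: keep 1 sample in 4
    xs = np.arange(65)
    reconstructed = np.interp(xs, xs[::4], kept)

    print(f"max filler error: {np.abs(signal - reconstructed).max():.3f}")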


Given evolution's "modularity/abstraction/engineering principles be damned" approach to problem solving, I'd wager the shape of the bottleneck and every other little detail is actually important.

Put another way, evolution writes the worst side-effecting spaghetti code you've ever seen, which somehow (seemingly miraculously) does exactly the right thing, robustly, more efficiently than you can possibly imagine doing it.


> right thing, robustly, more efficiently than you can possibly imagine doing it.

Well... actually evolution doesn't tend to do that well against an engineered process, even though it has far higher bandwidth than human engineers do.

For example, engineers can move humans around faster and with better energy efficiency [0] than evolution managed directly. We've also figured out more effective ways to organise society (laws & principles) than nature managed (many of our instincts lead to measurably stupid outcomes; e.g., mob-forming instincts are just a disaster). There are quite a lot of examples where it turns out engineering > evolution in a quite strict sense.

[0] https://en.wikipedia.org/wiki/Energy_efficiency_in_transport


> There are quite a lot of examples where it turns out engineering > evolution in a quite strict sense.

Only if you consider human agents more important than anything else. When you consider the system as a whole, nature seems very efficient.


Evolution made the engineers, though. And the politicians. We are just as much a "part of nature" as, say, rodents.


Practically it has to be higher than the bitrate of audio-visual data presented on a computer. Call that 1 MB/s for a video stream (which, I'd suggest, greatly undercounts the amount of data human vision reports). That puts a lower bound of around 50 GB/day of new data, or roughly 20 TB/year. Of course, computers can train with more than two eyes at one location and perspective. We aren't anywhere near that data cap with current training.
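For what it's worth, the back-of-envelope arithmetic, assuming a 1 MB/s stream and roughly 14 waking hours a day (the waking-hours figure is my assumption; it's what gets you ~50 GB rather than the ~86 GB of continuous capture):

    # Rough estimate only; both inputs are assumptions, not measurements.
    bytes_per_second = 1_000_000          # 1 MB/s video-stream stand-in
    waking_seconds_per_day = 14 * 3600    # ~14 waking hours

    gb_per_day = bytes_per_second * waking_seconds_per_day / 1e9
    tb_per_year = gb_per_day * 365 / 1e3

    print(f"{gb_per_day:.0f} GB/day")     # ~50 GB/day
    print(f"{tb_per_year:.0f} TB/year")   # ~18 TB/year, order of 20 TB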

Although, to be fair, I do suspect that most of that data is repetitive, boring, and of little use. In my opinion, some sort of check for novel data is probably going to be the next big breakthrough in machine learning.
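As a toy illustration of what such a check might look like (my own sketch, nothing established; the cosine-similarity gate and the 0.9 cutoff are arbitrary choices), pass a sample to training only if it isn't too close to anything already seen:

    import numpy as np

    rng = np.random.default_rng(0)

    def is_novel(x, memory, threshold=0.9):
        # Treat a sample as redundant if it is too similar (cosine)
        # to anything already kept; the cutoff is illustrative.
        if not memory:
            return True
        mem = np.stack(memory)
        sims = mem @ x / (np.linalg.norm(mem, axis=1) * np.linalg.norm(x))
        return sims.max() < threshold

    memory = []
    for _ in range(1000):
        x = rng.normal(size=2)     # stand-in for a frame embedding
        if is_novel(x, memory):
            memory.append(x)       # only novel samples reach training

    print(f"kept {len(memory)} of 1000 samples")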


> (which, I'd suggest, greatly undercounts the amount of data human vision reports)

It doesn't matter because the amount of meaningful data for learning is accessible at a lower resolution.

If someone is born with bad vision such that they effectively see at half or a quarter of the resolution of a normal person, it's not like they will grow up to be stupid (as long as they can sit at the front of the class to see the chalkboard).

For visual quality to impair learning, it would have to be bad enough that you couldn't make out objects or symbols at a reasonable range.


Building the noise filter is part of the learning. I don't see any reason to believe that it's "structural". Babies aren't known for their stellar information processing.



