
It has to evolve, not revolve. Like cars did (and they're still -far- from perfect).

Our remarkable bodies weren't designed by engineers; they survive now because, earlier, so many didn't. Are we smarter than evolution, because we like to think we are? Time will tell.



> Are we smarter than evolution, because we like to think we are?

Of course we are. We went from "natural state" to spaceflight in a mere couple thousand years, of which the most meaningful were the last 300.

We design, build and test things on timescales orders of magnitude shorter than gene-driven biological evolution. Still, that doesn't mean everything gets perfected instantly, and we humans are quite an impatient bunch (no surprise there, given our short lifespans).


Cars (assuming you mean Level 5 autonomous ones) are another one of those things we won't really be getting for at least 20 more years. 90% of the problem is solved; the remaining 10% are exponentially harder, so no one has the foggiest clue how to solve them, let alone solve them economically enough to make the cars viable on the market.


I grew up hearing "we will eventually get computers that can play chess, but Go is exponentially more complicated." People were talking about a century.

I don't know that cars will happen soon (5-10 years), but I wouldn't want to bet one way or the other past that. We don't need perfection, we just need to equal humans, and humans are actually pretty bad; we are just used to it.


Humans are _amazing_ because they are able to correct the deficiencies of their perception with high level cognition augmented with memory of past experience. Machines can’t do cognition, and they can’t effectively use past experience either, to say nothing of doing a combination of those two things.

Current “AI” is basically function approximation and nothing else. And humans do everything they do in a 20W power envelope.


"they can’t effectively use past experience either"

While I'm unsure whether the on-board computer of your autonomous car will be able to leverage past experience, I thought it was a foregone conclusion that the telemetry from all the cars on the road will be used to iteratively improve the core model, which would then be dispersed as an OS upgrade, effectively teaching your individual unit from the past experience of all the units on the road so far.
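For what it's worth, a hand-wavy sketch of that fleet-learning loop might look like the following (Python; the classes and the "fine-tune" step are stand-ins invented for illustration, not any vendor's actual pipeline):

    # Hypothetical sketch: cars upload telemetry, a central model is refit on the
    # pooled data, and the new model ships back out with the next software update.
    import random

    class Car:
        def __init__(self):
            self.model_version = 0

        def upload_recent_telemetry(self):
            # stand-in for sensor logs, disengagement reports, etc.
            return [random.random() for _ in range(5)]

        def install_update(self, version):
            self.model_version = version

    class CentralModel:
        def __init__(self):
            self.version = 0

        def fine_tune(self, telemetry):
            # stand-in for retraining on the pooled fleet experience
            self.version += 1

    def fleet_learning_cycle(fleet, central_model):
        telemetry = []
        for car in fleet:                                  # 1. gather fleet telemetry
            telemetry.extend(car.upload_recent_telemetry())
        central_model.fine_tune(telemetry)                 # 2. improve the core model
        for car in fleet:                                  # 3. disperse as an OS upgrade
            car.install_update(central_model.version)      #    every unit learns from all units

    fleet = [Car() for _ in range(3)]
    fleet_learning_cycle(fleet, CentralModel())
    print([c.model_version for c in fleet])                # -> [1, 1, 1]

The point of the design is that the learning happens centrally and only the result is pushed back out, so an individual unit never has to generalize from its own limited experience alone.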


But it’s not memory. Currently you just show your neural net a million examples of a thing and it derives a function which, given an example input, minimizes the output error. That’s it. It’s not like “last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition”, all within 20 milliseconds, before you even fully understand you’re about to catch a ball. That’s not to mention that you also maintain the illusion of a continuous and static visual field, without even noticing, in stereo.
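To make the "show it a million examples, minimize the output error" characterization concrete, here is a toy version of that loop (plain NumPy; the layer sizes and the target function are made up for illustration):

    # Toy supervised "function approximation" loop: a one-hidden-layer net fit to
    # noisy data by gradient descent on mean squared error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(1000, 1))          # the "million examples", scaled down
    y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)

    W1 = rng.normal(size=(1, 32)) * 0.5             # hidden layer weights
    b1 = np.zeros(32)
    W2 = rng.normal(size=(32, 1)) * 0.5
    b2 = np.zeros(1)

    lr = 0.1
    for step in range(2000):
        h = np.tanh(X @ W1 + b1)                    # forward pass
        pred = h @ W2 + b2
        err = pred - y                              # the "output error" being minimized
        loss = (err ** 2).mean()

        # backward pass: gradients of the mean squared error
        d_pred = 2 * err / len(X)
        dW2 = h.T @ d_pred
        db2 = d_pred.sum(axis=0)
        d_h = d_pred @ W2.T * (1 - h ** 2)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(f"final mse: {loss:.4f}")                 # the net has approximated sin(3x), nothing more

Everything the net "knows" after this loop is baked into a few weight matrices; there is no individual episode it can point back to and refine, which is the contrast being drawn here.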


> last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition

There are types of neural networks (and other algorithms) that work literally like that. Just because a simple deep perceptron does not work like that does not mean no network does.

This one is just off the top of my head; it's quite recent, but the memory part is based on older prior work. https://medium.com/applied-data-science/how-to-build-your-ow...

UPD: without even digging deep into the different types of networks, even AlphaGo(/Zero) works like that.
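As a rough illustration of the episodic-memory idea (not the method from the linked article; the similarity measure and the correction rule here are invented): store past (situation, action) pairs, start from what worked in the most similar past situation, then correct it from feedback.

    # Toy episodic memory: nearest-neighbour lookup over past experience,
    # used as a starting point that then gets corrected.
    import numpy as np

    class EpisodicMemory:
        def __init__(self):
            self.situations = []   # past observations
            self.actions = []      # what we did then

        def remember(self, situation, action):
            self.situations.append(np.asarray(situation, dtype=float))
            self.actions.append(np.asarray(action, dtype=float))

        def recall(self, situation):
            """Return the action taken in the most similar past situation."""
            if not self.situations:
                return None
            dists = [np.linalg.norm(s - situation) for s in self.situations]
            return self.actions[int(np.argmin(dists))]

    def act(memory, situation, feedback_error):
        prior = memory.recall(situation)
        if prior is None:
            action = np.zeros(2)                   # no experience yet: start from scratch
        else:
            action = prior - 0.5 * feedback_error  # start from memory, correct from feedback
        memory.remember(situation, action)
        return action

    mem = EpisodicMemory()
    print(act(mem, situation=[1.0, 0.2], feedback_error=np.array([0.0, 0.0])))
    print(act(mem, situation=[1.1, 0.2], feedback_error=np.array([0.1, -0.1])))

Memory-augmented approaches (e.g. episodic control) build on essentially this kind of nearest-neighbour lookup, just over learned embeddings rather than raw distances.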


> Machines can’t do cognition

Assuming you mean that machines can’t do cognition at present, why do you think we won’t solve this problem in the next 20 years?


Because nobody knows how to even begin researching something like that.


Most humans don't do cognition or learning from experience well either. It is certainly DONE, but we tend not to pay attention to how often we fail. If you check the actual text of an average conversation, people are speaking past each other the majority of the time. We correct, but the vast majority of the time we have no awareness of it.

We rewrite our memories to fit our mental schemas, to the point where someone describing what they saw is more likely wrong than right, even in dramatic ways (see a stabbing? Did the man in the suit or the man in rags commit it?). We suffer change blindness, confirmation bias, prejudice. We rationalize and justify to a ridiculous degree, and in the few cases where we become aware of this, that awareness does not allow us to change the behaviors. If someone is wrong, the worst way to get them to change their stance is to show them they are wrong.

We're born helpless and spend a significant portion of our lives learning how to not die. We transfer information inefficiently and inaccurately, with every generation biologically starting from scratch. We spend a third of our lives unconscious (in addition to that helpless period), and almost two decades becoming ready to function independently, at which point most people have only a few years before they dedicate an even larger portion of their life to bootstrapping the next generation.

The Turing test exists because we can't even define what we are describing as obvious (and as I mentioned previously, humans fail the Turing test often). Almost everyone who drives has been in some form of car accident, the overwhelming majority of which were caused by human error.

We burn plants so we can inhale the (toxic) vapors, we overestimate rare risks and underestimate inevitable ones, we drink poison for fun and enjoy it because it dulls our thought processes, and we gamble money with the intent of winning more when it is well known the odds of winning are terrible. We entertain ourselves with habits that target innate thinking fallacies and call it "gamification". We ignore issues we are confident will arrive, and then react with panic when they do because we've made no preparations. We declare human life to be so precious we don't want to end its potential, even to the extent of stopping people from preventing that potential, but we don't take action to support that life once it is born. We look at a list of flaws like this and shrug it off. We oversimplify, stereotype, and categorize even when errors in those systems are pointed out to us. We don't like being wrong SO MUCH that we'd often rather continue being wrong than accept that we were. We eat unhealthy foods in unhealthy quantities, and produce and purchase foods that directly encourage those habits. We have short attention spans and short (and inaccurate) memories.

Comparing current AI approaches and human thought is apples and oranges, but to mock AI efforts as function approximation ignores how much function approximation we do. We function, and the diversity of tasks we function at is indeed amazing. The complexity and adaptability of the human species is awe-inspiring. But doing amazing things is still not the same as doing them _well_.

I don't say this to claim humans are terrible. I'm pointing out that we are poor judges of quality and that any system following different fundamental restrictions will have different emergent behaviors. I expect that a car that can drive more safely and more consistently than a human is both a complex problem and much easier than most assume. Driving _well_ is harder, but driving better than a human? Not nearly as hard. What percentage of drivers do you think consider themselves to be "above average"?


That’s another reason why humans are so amazing: we correct so well we don’t even notice we’ve corrected anything. Our eyes see a continuous visual field in color, even though we only see color in the center of each eye, our gaze jumps around all the time, the image is heavily distorted, blood vessels interfere with capture, and the nose obstructs part of our peripheral vision. And yet you see none of that. We can’t individually control any of our muscles, yet we have fine motor skills that require strict control. We achieve this through a visual and proprioceptive feedback loop, which corrects our previous memory of doing the same thing.

Driving better than a human from vision alone is extremely hard. Driving better than a human in an area for which you don’t have a 3d capture is extremely hard. Driving better than a human when it’s raining or snowing is extremely hard, etc, etc. Don’t be so eager to discount humans.


Waymo is going to have a service on the road in Arizona for the general public this year.

Sure, it isn't cars driving in every environment without having seen the territory first, but it is an incredibly useful product that can be extended to more cities as the technology improves. And plenty of cities around the world have conditions similar to what is being used in Arizona.

Low-speed, AI-driven electric buses on limited routes are already all over the place too.

Driverless trucks on certain runs are likely within a decade, too.


Buses on predefined routes aren't AI.


> no one has the foggiest clue how to solve them

Would you care to cite any sources for that? Sounds like something a couch expert would say. Have you talked to every Waymo engineer?


Testing autonomous cars in areas with inclement weather would be a good indicator of progress. So far no autonomous car can remain autonomous in heavy rain or moderate snow, and no car can predict whether a kid standing on the sidewalk will suddenly dash in front of the car, or whether the thing being blown across the road is a plastic bag or something more substantial, or where to drive if road markings have worn out or otherwise become invisible, or how to avoid a pothole, etc, etc. All that stuff which you do without thinking, all of it is unsolved.

Wake me up when they’re testing L5 in Alaska in winter, using a car with no steering wheel. Then I might consider trusting my life to it.


So what you are saying is that it takes time to perfect the technology? How is that the same as "no one has the foggiest clue how to do it"? There are many people with lots of clues.


No, what I’m saying is LIDAR doesn’t work when it snows, and lane localization doesn’t work when there’s sleet on the road.



