Hacker News

> If AlphaGo eventually beats humans at go I think it says more about the game (or the way humans play the game) than about AI in general.

AI problems are always mysterious until they're solved. People say this sort of thing every time an AI achieves some task which had previously been deemed impossible.



> AI problems are always mysterious until they're solved.

There's truth there but this line is over-used. Playing chess better than a human was never deemed impossible, and the ultimate solution doesn't seem to contain any crazy insights that would stun a researcher from the 1960s. (You can read the source for Stockfish, which is state of the art and open source.) The improvements came from more and more horsepower through hardware; more and more efficient innovations in the tree search very specific to the game of chess (bitboard representation, move ordering); and a simplified evaluation whose parameters were tweaked according to unsupervised learning. Correct me if I'm wrong?
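To make the tree-search techniques mentioned above concrete, here is a minimal sketch of minimax with alpha-beta pruning and naive move ordering. This is an illustrative toy, not Stockfish's actual code; the `ToyGame` interface (`moves`/`apply`/`evaluate`/`is_terminal`) is entirely hypothetical.

```python
def alpha_beta(state, depth, alpha, beta, maximizing, game):
    """Minimax with alpha-beta pruning; good move ordering prunes more."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    # Move ordering: examine the most promising moves first so the
    # alpha/beta bounds tighten early and more branches get cut off.
    moves = sorted(game.moves(state),
                   key=lambda m: game.evaluate(game.apply(state, m)),
                   reverse=maximizing)
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alpha_beta(game.apply(state, m), depth - 1,
                                        alpha, beta, False, game))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # opponent already has a better option: prune
        return best
    best = float("inf")
    for m in moves:
        best = min(best, alpha_beta(game.apply(state, m), depth - 1,
                                    alpha, beta, True, game))
        beta = min(beta, best)
        if beta <= alpha:
            break  # we already have a better option elsewhere: prune
    return best


class ToyGame:
    """A made-up two-ply game over a fixed tree of leaf scores."""
    def __init__(self, tree):
        self.tree = tree

    def _node(self, state):
        node = self.tree
        for i in state:
            node = node[i]
        return node

    def is_terminal(self, state):
        return not isinstance(self._node(state), list)

    def moves(self, state):
        return range(len(self._node(state)))

    def apply(self, state, move):
        return state + (move,)

    def evaluate(self, state):
        node = self._node(state)
        while isinstance(node, list):  # crude static estimate for ordering
            node = node[0]
        return node


game = ToyGame([[3, 5], [6, 9]])
# Root is the maximizer: value = max(min(3, 5), min(6, 9)) = 6
print(alpha_beta((), 2, float("-inf"), float("inf"), True, game))  # 6
```

The point of the comment survives the sketch: nothing here is mysterious; the engineering wins came from making exactly this loop faster and from ordering moves so the pruning bites sooner.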

At the end of the day, chess (and go) are discrete games with perfect information, played one turn at a time according to a tree of simple and trivially determined possible moves, with clear criteria for winning. I don't see why we'd put this on a pedestal as the example of something generally considered uniquely human; we'd better expand our imaginations of what we can achieve with AI. As the parent said, the mistake may have been in supposing there wasn't a solvable evaluation function for Go positions when in fact, through some more human ingenuity, there is.
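A back-of-the-envelope calculation shows why Go resisted this kind of search far longer than chess did, even though both fit the "discrete tree of moves" description. The branching factors below (~35 for chess, ~250 for Go) are commonly cited rough estimates, not exact values.

```python
def tree_size(branching_factor, depth):
    """Approximate node count of a full-width game tree to a given depth."""
    return branching_factor ** depth


# Ten plies of full-width search, using rough average branching factors.
chess = tree_size(35, 10)   # ~2.8e15 positions
go = tree_size(250, 10)     # ~9.5e23 positions
print(f"Go's 10-ply tree is ~{go // chess:,}x larger than chess's")
```

Exponential blowup like this is why raw horsepower plus pruning sufficed for chess, while Go needed a further trick: a workable evaluation function over positions.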


So what happens if (when?) we come up with a system that does everything a human can do, better than a human, but doesn't contain any 'crazy insights', just a bunch of incremental improvements on what we have now?

Does that mean we aren't intelligent? Or does it mean that the system isn't intelligent but that we are "because we do it differently"? Or do we accept at that point that intelligence is composed of simple building blocks interacting in complex ways (which we already know, if we eschew Cartesian dualism)?


We would declare it intelligent? Certainly the people of today would call it intelligent. What I suspect you are claiming is that the people of that future would not call it intelligent, and this is the basis for arguing why that objection is not valid today. But that extrapolation to the future is just your speculation.


Or, most likely in my view, that isn't in fact possible, and the system we eventually arrive at that does do that _will be_ extremely different from what we have now.

Although I actually think that we'll never make a system that does _everything_ a human can do at all, simply because that would be silly.

And of course, there also has never been a human who can do everything that humans can do, so this bar is way too high anyway.


Perfect information isn't the whole story: you don't exactly know the opponent's next move, and that broadens the search tree exponentially. Generally, with many hard problems, the sheer size of the problem is itself the problem when memory is limited.


Playing chess better than humans was certainly deemed impossible by some people (not the ones who wrote chess programs, of course). See for example Hubert Dreyfus:

https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_ar...

While progress didn't come as quickly as AI researchers (or their universities' publicity departments) had hoped, computers can now do a lot of the things that he wrote about, for example, in "What Computers Can't Do".


Dreyfus does not appear to have claimed that computers would never be able to play chess well. At least, not in that book.

He reacted with skepticism when Newell and Simon said in 1957 that a computer would be world chess champion by 1967 and, well, he was right to.

He said that the computational techniques in use for computer chess in the 1970s wouldn't be capable of producing a world-class player, and he was probably wrong about that -- largely, I guess, because he didn't foresee how big an impact a performance improvement of ~10000x could have.

If he actually claimed that playing chess better than humans was impossible, can you say where?


Dreyfus was defeated at chess by MacHack in 1967: https://www.chess.com/article/view/machack-attack


There is another way of saying the same thing: a lot of seemingly groundbreaking progress in AI happens when people (people, not machines) discover a clever way of mapping a new, unsolved problem onto another, well-solved one. That's a valid and insightful observation that you shouldn't dismiss so easily.


Why 'seemingly'? Why is that not 'actual' groundbreaking progress? The achievement of any level of AI will by necessity require the chaining together of processes which are not themselves 'intelligent'. It has to be bootstrapped somehow.

On the day they build a walking, talking AI who can converse fully with a human, write a symphony, design a building, feel and express emotions, and all the rest of the things we define as being essentially in the domain of human intelligence, everyone will say, "But of course, none of those things required intelligence at all. This is all an elaborate collection of illusions and ugly hacks."

And they'll be right, but I suspect that the brain is the same way.


> Why 'seemingly'? Why is that not 'actual' groundbreaking progress?

Because mapping problems to pre-existing algorithms is the bread and butter of computer science and software engineering. To be groundbreaking, a work needs to change our understanding of the underlying issues. In a lot of popularized cases, that doesn't happen.


i tend to agree with that. it does seem like there's some general instinct in people that other things (animals, "artificially" intelligent machines) could possess human-like intelligence and subjective experience. i mean, we both seem to be tacitly agreeing to that here. there's also obviously a general instinct that's completely the opposite, and i'd guess that's probably the more prevalent instinct (in the population at large and in many conflicted individuals).

i think the opinion that humans are less special than we once thought, especially on expansive time scales, will only become more widespread.


The fact that people in the past have used the line "that's not really intelligence" doesn't at all invalidate the point that is being made, which is that games like Go/Chess/etc. have little to do with the real world. The real world is exponentially harder than those games (imperfect info, imperfect sensors, an infinite state space, a large - if not infinite - set of possible moves, an infinite time horizon).



