Interesting read & puts the Google Go bot in some needed perspective.
From the article:
> In the real world, the answer to any given question could be just about anything, and nobody has yet figured out how to scale AI to open-ended worlds at human levels of sophistication and flexibility.
One doesn't have to shoot for the moon in order to find useful applications for AI or cognitive technology. If you can restrict the domain of knowledge of an expert system, it doesn't need to handle 'open-ended worlds' in order to provide value. It just has to beat human effort, or augment human cognition to enable scale, for it to be useful - or provide business value.
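As a toy illustration of that point, here is a minimal sketch of a domain-restricted expert system. Everything in it (the rules, keywords, and categories) is a hypothetical example, not anything from the article:

```python
# A minimal sketch of a domain-restricted "expert system": a few
# hand-written rules for triaging support tickets. All rule text and
# categories here are hypothetical illustrations.

RULES = [
    # (keyword to match, category to assign)
    ("refund", "billing"),
    ("password", "account-access"),
    ("crash", "engineering"),
]

def triage(ticket_text: str) -> str:
    """Return the first matching category, or a fallback for humans."""
    text = ticket_text.lower()
    for keyword, category in RULES:
        if keyword in text:
            return category
    return "needs-human-review"  # the open-ended cases stay with people

print(triage("The app keeps crashing on startup"))  # -> engineering
print(triage("Why do I dream?"))                    # -> needs-human-review
```

Nothing here copes with an open-ended world, and it doesn't have to: within its narrow domain it saves human effort, and everything outside the domain falls through to a person.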
Maybe we should take a cue from John Searle and consider AI an extension of human intelligence? Often, what we call "AI" is really a codification, automation, and scaling of human intelligence. Machine translation is a good example of this.
Is what people do 'actually intelligent'? If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.
Intelligence is not some kind of Aristotelian substance that permeates brain matter. At some level, anything that is intelligent has to be built from parts which are not intelligent.
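To make that concrete, here is a sketch of one such non-intelligent part, a single artificial neuron. The specific inputs, weights, and activation function are arbitrary choices for illustration:

```python
# A minimal sketch of a single artificial "neuron": just a weighted sum
# and a nonlinearity. None of these arithmetic steps is intelligent on
# its own; whatever intelligence exists is a property of the assembled
# system, not of any one part.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, squashed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

print(neuron([0.5, 0.2], [0.9, -0.4], 0.1))  # ~0.615
```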
> If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.
If you break matter down to a low enough level, everything is just elementary particles. Now, would you please trade me some gold for an equal mass of aluminum?
A search tree will work better than an artificial neural network for a lot of domains. But you don't expect it to magically change behavior and become intelligent if you scale it up, do you?
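For concreteness, here is a minimal minimax search over a hypothetical toy game tree. Scaling it up (a deeper or wider tree) is just more of the same mechanical recursion; nothing qualitatively new appears:

```python
# A minimal sketch of tree search: minimax over a toy game tree given
# as nested lists, where leaves are payoffs. The tree and its values
# are made-up examples. Deeper trees mean more recursion, not a
# different kind of behavior.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: a payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The maximizer picks a branch, then the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3: max of (min(3,5)=3, min(2,9)=2)
```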
>If you break down the processes of the brain to a low enough level, all of the 'intelligence' will disappear, just as it does in a computer neural network.
Well, no. Free-energy minimizing, multi-information maximizing generative causal modeling is what appears when you break down the processes of the brain (at least as we best understand them right now).
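As a very loose toy, and emphatically not a brain model: the simplest caricature of that story is an internal estimate being nudged to reduce prediction error against observations. The update rule, learning rate, and data below are all illustrative assumptions:

```python
# A toy stand-in for prediction-error minimization: an internal belief
# mu is repeatedly nudged to reduce squared error against incoming
# observations. All numbers here are made up for illustration.

observations = [2.1, 1.9, 2.4, 2.0]
mu, lr = 0.0, 0.1  # internal belief and step size

for _ in range(200):
    for obs in observations:
        error = obs - mu   # prediction error
        mu += lr * error   # gradient step on squared error

print(round(mu, 2))  # settles near the observations' mean (~2.1)
```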
That's right, and yet: "I think, therefore I am." I can prove to myself that I exist and have conscious thought, beyond your observation that my actions are low-level survival instincts. I would define intelligence as knowing "I think, therefore I am." Unfortunately, it's impossible (as far as I know) to prove this for anyone outside myself :)
Also, doesn't each human require some 20-30 years of training from birth in order to be able to answer such questions? This fact seems to be constantly ignored.
Humans do require training, they're slow, they sleep, rest, lose concentration, vary greatly in performance, etc. All true and economically significant.
That is, however, distinct from the fact that no machine can currently do some of the things humans do. It just means that most things machines can do, they do better than humans. But first they must be able to do them at all.