
> Except that AI research rarely tries to replicate how people think

It does not matter (and, strictly speaking, «how people think» is not necessarily "intelligence", a special feature). If it «mimic[s] the typical outcomes of human effort», it still falls within McCarthy's broad proposal to «solve kinds of problems now reserved for humans», which substantiates the general idea of an "artificial intelligence" quite literally:

___ there is a phenomenon called 'intelligence': can we replicate it through computing?

The difference between the various areas: if it springs from some consideration of the phenomenon of intelligence, call it AI; if it can manage a process unsupervised, call it Automation; if it refines its operation through feedback, call it Cybernetics; if it manages a process unsupervised, refining its operation through feedback, in a system inspired by the phenomenon of intelligence, call it Artificially Intelligent Cybernetic Automation - I do not see the problem. "It is not really intelligent": yes, we know. We study intelligence and see where that brings us. We have posed a problem ("can we replicate intelligence through computing?"), and now we reap the outcomes of the effort.

The big issue here is that the "unsupervised" part becomes very risky after the feedback, if responsibilities are delegated to it beyond what its reliability warrants. This is especially relevant if the """intelligence""" part is oracular - a black box, non-transparent - which is apparently something the authors observe from a different perspective when they note that «the algorithm was able to spot aspects of reality that humans had not contemplated, might not be able to detect and may never comprehend».

So, as "the strategist" said, "there are things we know we know, there are things we know we do not know, and there are things we do not know we do not know". We have a fair idea of the reliability of a deterministic system; we do not know exactly what specific exceptions to our expectations can be raised by automation through less transparent systems; and we know those systems are out there for anyone to implement and delegate to - and that is a problem.
