There is no general AI: Why Turing machines cannot pass the Turing test (arxiv.org)
39 points by sel1 on June 15, 2019 | hide | past | favorite | 88 comments


I am skeptical of any paper with this result, because several very plausible developments would likely prove by example that computers can respond as a human would. It requires accepting certain assumptions, though.

1: Physicalism is true. Nothing exists that is not part of the physical world.

2: The physical world obeys mathematical laws, and those laws can be learned in time.

2.1: The physical contents of the human body can eventually be learned with arbitrary/sufficient fidelity.

3: Any mathematical rule can be computed by a sufficiently advanced computer. (Edit: or maybe a better assumption: the mathematical laws that underlie the universe are all computable.)

4: Computational power will continue to increase.

Subject to these assumptions, we will eventually gain the ability to simulate full physical human beings within computers. Perhaps with some amount of slowdown, but in the end, these simulated humans would be able to converse with entities outside the computer. In all likelihood, computers will pass the Turing test long before this. But if they don't, simulated humans seem at least possible, perhaps even probable, and therefore the result of this paper is likely incorrect.


I, too, was of the opinion that general AI might be reachable in the future, but now I will take the time to read this paper. I'm not as skeptical about it as you are, because we know your assumption number 3 to be false: there are mathematical functions that aren't computable. For a proof you can look here [1] or google it. [1] https://www.hse.ru/mirror/pubs/share/198271819


Thanks. My assumptions are imprecise, because as I mentioned I am no academic. So I appreciate your correction. To make the post stronger, replace assumption 3 with "All of the mathematical rules which underlie physics are computable." IMO, this doesn't materially affect the plausibility of these assumptions.


>> "All of the mathematical rules which underlie physics are computable."

I really have to ask- what are those mathematical rules that underlie physics and how do you know that they are computable?


All of chemistry and physics as far as we're aware are described by mathematical rules that can be computed. Things like the strong and weak forces, gravity, etc. all have mathematical descriptions. There may be other, non-computable aspects to physics, but I am not aware of them if so.


>> There may be other, non-computable aspects to physics, but I am not aware of them if so.

In that case, what do you base your assumption (3) on?


Let's consolidate this discussion into the other thread, since it seems like you're offering the same objections here and there. :)


Wait, you forgot one interesting question. This paper claims that a Turing machine cannot be a general AI, but it says nothing about other models of computation. The notion that (quantum) Turing machines are the most general/powerful model of computation is only a conjecture, although a very plausible one (usually called the (quantum) extended Church-Turing thesis).

But even with that in mind, this paper does not seem convincing, including for the reasons you already mentioned. Hopefully it will spark a conversation though.


I am not an academic neurologist, but I have never read anything convincing that suggests quantum effects are critical to the behavior of human cognition. It may yet be true, but I suspect that classical physics/chemistry is probably sufficiently close to what actually happens to convincingly model human behavior.


I completely agree with you. I put quantum in parentheses because quantum behavior is important in complexity/computability, and I wanted to be complete. However, yes, there is absolutely no reason to think a human brain possesses that computational capability (and plenty of reasons to suspect it does not).


How does quantum behavior affect computability?


It does not. Quantum effects are a smokescreen for clinging onto the idea that "only humans can be human". BQP is quite easily contained in PSPACE, and more generally adding quantum operations does not move one beyond Turing machines (realistically - only something like a halting problem oracle does).


I think you missed the point of my comment. BQP (i.e. "stuff that is efficiently solvable by a quantum computer") is almost certainly bigger than P (i.e. "stuff that is efficiently solvable by classical computers"). There was not even remotely anything in my argument that was claiming "only humans can be human" or "only humans can be intelligent" or anything like that (that would have been a silly claim). And nothing remotely related to claiming quantum effects have anything to do with human brains.


It affects complexity. Sure, it is fascinating to learn there is a separation between computable and not computable tasks, but there is an important separation among the computable tasks as well. There are "practically computable" tasks (computable with a reasonable complexity) and "computable tasks that are still infeasible in the real universe" (e.g. tasks with exponential complexity). Amusingly, it seems that there are also tasks on the border between practical and infeasible, that can be practically solved on a quantum computer, but cannot be solved in a reasonable time on a classical computer (even one as big as the whole universe).

P.S. There was some abuse of naming conventions and nomenclature above.

P.P.S. While computability is a well proven concept, claims about complexity (i.e. practical vs infeasible "computable" tasks) are frequently only conjectures (although they have some supporting empirical observations).

P.P.P.S. The proper terms to google for as a starting point would be (really, this barely scratches the surface):

- complexity class P: computable tasks that can be efficiently solved by a classical computer

- complexity class BQP: computable tasks that can be efficiently solved by a quantum computer (and includes tasks that are conjectured to not be efficiently solvable by a classical computer)

- complexity class NP-hard: computable tasks that are conjectured not to be efficiently solvable by either a classical or a quantum computer (i.e. even a medium-sized problem would take a computer bigger than the universe to solve)
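A toy example makes the exponential blowup in that last bullet concrete. Below is a minimal sketch (my own illustration, not from the thread) of the obvious brute force for subset-sum, a classic NP-hard problem: it scans up to 2^n subsets, so each extra element doubles the worst-case work.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: try every subset (2**len(nums) of them in the worst case)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # → (8, 7)
print(subset_sum([3, 9, 8], 100))           # → None, after checking all 8 subsets
```

Sixty-odd subsets are nothing, but at n = 300 the subset count already exceeds the number of atoms in the observable universe, which is the sense in which "a computer bigger than the universe" enters the picture.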


I think your premises are fair, but assumption #3 ("Any mathematical rule can be computed by a sufficiently advanced computer") is effectively ruled out by Gödel's incompleteness theorem[1] and/or the Church-Turing thesis[2].

The problem then becomes finding an approach to general AI that avoids hitting incompleteness/undecidability[3] issues. My feeling is that this would be difficult. One way to try to avoid these issues is to avoid notions of self-reference, since self-reference spawns a lot of undecidable stuff (e.g., "this statement is false" is neither true nor false). It seems to me, though, that the notions of the self and self-awareness are central to human consciousness, and so unavoidable when developing a complete simulation of human consciousness. The self is probably not computable.
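The diagonal trick behind those undecidability results can even be run in miniature. The sketch below is my own toy construction, not from the paper: given any candidate "predictor" of a program's output, we build an input that does the opposite of whatever the predictor says, so no total predictor can be right about every program.

```python
def make_contrarian(predictor):
    """Given any total function claiming to predict whether f() returns True,
    build an f whose actual behavior contradicts that prediction."""
    def f():
        return not predictor(f)   # do the opposite of what was predicted
    return f

f = make_contrarian(lambda g: True)    # predictor says "f() returns True"
print(f())                             # → False

g = make_contrarian(lambda h: False)   # predictor says "g() returns False"
print(g())                             # → True
```

The halting problem's undecidability is the same construction with "halts" in place of "returns True", which is why self-referential statements keep showing up at the boundary of what is computable.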

Obviously there could be approaches that avoid these pitfalls, but every year that goes by without much progress towards general AI makes me feel more confident in this intuition. I do think there will be lots of useful progress in specialized AIs, but I see this as analogous to developing algorithms to decide the halting problem for special classes of algorithms. General AI is a whole different beast.

But if general AI is physically impossible, how does the human brain "compute" general intelligence at all? It could be that your assumption #1 ("Physicalism is true. Nothing exists that is not part of the physical world.") is not correct. Maybe reality has "layers" and our world is some kind of simulation in another layer. Or maybe there is only one consciousness like many spiritual people and Boltzmann[4] suggest. Or maybe the human experience could be a process of trying to solve an undecidable problem and failing...

1. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

2. https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

3. https://en.wikipedia.org/wiki/Undecidable_problem

4. https://en.wikipedia.org/wiki/Boltzmann_brain


>> But if general AI is physically impossible, how does the human brain "compute" general intelligence at all?

Who says that the brain "computes" general intelligence? We don't know enough about the brain to know what it is, but it's certainly nothing like a computer. Only by analogy is intelligence something that can be computed and the only reason we have this analogy in the first place is because we have computers. But isn't the accuracy of the analogy what we would like to know with some certainty, in the first place?

This is just another big assumption that is taken for granted: that the brain is a computational device. It seems an easy assumption to make, given all we know about computation. And yet, like you say, several generations of AI researchers have failed to reproduce intelligence with computers. Perhaps the reason for this is that the brain is not a computer, intelligence is not a program, and that's why we do not BSOD when confronted with paradoxical statements like "this statement is false".


In another reply, I modified point three to be the assumption that all of the physical laws of the universe are defined by computable maths. I believe this is the case to the best of our knowledge, but please let me know if I'm wrong.


Unfortunately, restricting to only computable maths means disallowing the natural numbers, basic arithmetic, or any equivalent structure, since Gödel incompleteness would apply. I doubt any system without access to the full set of natural numbers or basic arithmetic could qualify as "general AI".


Pardon my ignorance. Computers appear to be able to perform basic arithmetic. For example, you can open up the console in your browser and find that the sum of two and two is indeed four. So it is not entirely obvious to me how basic arithmetic is non-computable.


If you permit infinitely many integers it becomes problematic. If you are dealing with a finite entity (e.g. the finite part of the universe that can affect us), then there are no problems.
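For what it's worth, arbitrary-precision arithmetic makes the finite case unproblematic in practice. Python's built-in integers are unbounded (limited only by memory), so any two finite integers, however large, sum exactly:

```python
# Python ints are arbitrary-precision: no overflow, memory is the only limit.
a = 10**100 + 7          # a 101-digit number
b = 10**100 + 5
print(a + b == 2 * 10**100 + 12)   # → True, the sum is exact
print(len(str(a + b)))             # → 101 (digits in the result)
```

The question only becomes problematic for *arbitrarily* large inputs, i.e. with no finite bound fixed in advance, since no finite machine can hold every integer.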


Can you correctly sum two arbitrarily large integers?


I don't think it matters, right? Since arbitrarily large integers are not things that occur in the physical world.


How do you know all those things about the physical world? For example- you say that "all of the physical laws of the universe are defined by computable maths". Do you really know what all the physical laws of the universe are?


We don't know. We just think it likely. We are unaware of counterexamples, or reasons to suspect the existence of counterexamples.


Again I have to ask- who is this "we"?

Apologies if my question sounds too contrarian, but I think you are making some very big assumptions about the computability of the laws of physics that are not really based on anything concrete, like a strong knowledge of the mathematics of modern physics.


We is humanity, as far as I know and as far as brief Googling is able to determine. I am not a physicist, so I have good knowledge of physics up to the high school level, and a dabbler's knowledge of what lies beyond. I am open to correction, so feel free to offer some contradictory evidence if you have any.


What exactly did you google for?

Are we really communicating here? I'm saying that there is a lot that physicists don't know about physics and that therefore it's impossible to make the assumption that you make, that every law of physics is computable. Because nobody knows all of them, and nobody knows what nobody knows, or how much of it there is.

And you're saying that, given high-school physics and "dabbling", we know all of it and it's all computable.

Is that a good summary of our discussion so far?


> Is that a good summary of our discussion so far?

Hrm, I wouldn't say so, and if that is your impression, I don't think it's going to be very productive to continue.

FWIW: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."


Yes, I'm aware of the guidelines, thank you. They are not a tool to passive-aggressively end conversations by accusing other commenters of bad faith.

But I agree this is not a productive conversation. I don't see that you have a very clear idea of what you are trying to say.


Even if real infinities existed, it would be impossible to tell. What do you measure it against?


Infinite computations are not the only computations that are impossible to perform. For example, suppose I asked you to enumerate (not merely calculate) the number of X-sized time units in all the time from the start to the end of the universe, where X is the time unit closest to how long one operation takes on your chosen hardware (past, present or future). You would not be able to complete this computation.

For instance, if the fastest hardware available to you performed about one operation each femtosecond, it would not have the time to enumerate all femtoseconds from the birth of the universe to its death. And that number is a finite quantity.
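The back-of-the-envelope arithmetic behind that claim is easy to check (my own rough figure for the age of the universe):

```python
# Order-of-magnitude check: femtoseconds elapsed since the Big Bang.
SECONDS_PER_YEAR = 3.15e7          # ~365.25 * 24 * 3600
UNIVERSE_AGE_YEARS = 1.38e10       # ~13.8 billion years so far
fs_elapsed = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR * 1e15
print(f"{fs_elapsed:.1e}")         # → roughly 4.3e+32 femtoseconds

# A machine doing one operation per femtosecond spends one operation per
# item just to count them, so the enumeration can never outpace the clock:
# finishing the count takes exactly as long as the interval being counted.
```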


Gödel incompleteness applies to any system capable of basic arithmetic.


I'm unsure how this matters? The physical universe does not prove itself and does not need to. Gödel's theorems just say that certain types of mathematical systems can't prove themselves, which seems quite irrelevant to simulating the universe. Please explain if I'm missing something.


How many particles make up a human, and how many transistors does it take to simulate the behavior and interactions of all those particles? How much energy does it take to power that many transistors? How does that compare to the Sun’s output? Those are a few of the questions we need to answer before confidently predicting that we will ever be able to make a physics-based human simulator.


Yep. This is not a strong proof that humans are simulable, and I don't intend it to be. And I don't mean to suggest that simulating a human particle for particle will ever be an efficient way to model human behavior. I suspect that even medium-fidelity neural simulations will someday be sufficient.

Instead, the list is meant to inspire an engagement at the level of: "which of these assumptions do I disbelieve?" For it to be the case that computers cannot mimic human-level intelligence, at least one of these items has to be false. And I consider the last one to be the weakest to attack, because all you're really saying there is that it will never be practical, not that it will never be possible. I think disbelieving any of the other assumptions makes a stronger statement, but I also think they are the harder ones to disbelieve.


I’ve thought along these lines myself, and I do wonder if we’re going to approach general AI by finding ways to “compress” a simulation of a human brain, or just by coming up with new neural net architectures. The second approach is the one everybody’s working on, and to me it seems more practical.


This would be a very good paper if it were titled, "What makes general AI hard", and it didn't try to make any claims about uncomputability.

Beyond the somewhat useful collection of some of the prickly points of whatever it is that humans do that we call Intelligence, this particular discussion isn't bringing much to the table in support of its incredibly strong claims. It is functionally an extended application of Searle's Chinese Room argument to these hard points, usually built on question-begging premises (for example, regarding "biography" as a component of dialogue, quote, "Because machines lack an inner mental life – as we do not know how to engineer such a thing – they also lack those capabilities".)

The paper addresses the traditional response to Searle thus: "How, then, do humans pass the Turing test? By using language, as humans do. Language is a unique human ability that evolved over millions of years of evolutionary selection pressure... machines cannot use language in this sense because they lack any framework of intentions". This is even blunter than Searle's actual counter, that there's something specific about biological machinery that makes it more capable in this regard than digital machinery. Instead, we're simply told that language is a special Human thing, Humans are not Turing-computable, and thus it's probably something computers can't do.

I am a big proponent of anti-hype in AI technology and of the idea that language cannot be separated from the general human experience of Intelligence. I'm very frustrated when people assume we've solved a given problem in AI because we've been able to tackle some toy examples. And I'm a big fan of proving what can't be done. But this is not a particularly valuable exercise in any of those things, perhaps beyond prodding some of the hubris of the current cult of "we're almost there".


Part 2 of the article is really good. It’s a shame if people are put off by the premise of the article. The authors have already pre-judged the outcome though.

“Because machines lack an inner mental life...”

Right, well that’s it then. Case closed. No point in researching general AI any more, might as well put all those researchers on unemployment benefits.

The authors do say we don’t know how to engineer a machine with an inner mental life and consciousness, and this is true. It’s why like you (dmreedy) I’m a skeptic of claims that general AI is just round the corner. It isn’t and the Singularity is a good long way off. Our current efforts in AI are pitifully primitive, at best many orders of magnitude dumber than a fruit fly. That doesn’t lead me to believe therefore that we will never learn to solve this problem, or that this problem is in principle not solvable.


For those skeptical that you even need an inner mental life to be intelligent, Peter Watts's novels "Blindsight"* and "Echopraxia" are a must-read.

* https://rifters.com/real/Blindsight.htm


These are great novels, but they are just novels. To the best of our knowledge, all intelligent species in the universe have an "inner sight": the ability to observe at least some of their own mental machinery. Although the starfish of Blindsight seem possible, whether they are likely seems like another question entirely.


The paper seems to basically say, as I read it, "the current approaches for modeling human behavior are unlikely to be perfect enough so no approach will ever work." I find that to be filled with a lot of unsupported strong assumptions. Specifically, it talks about modeling language with machine learning based on input-output pairs.

But, for example, if you took a human brain, deconstructed its physics down to individual chemical reactions, then you're no longer trying to predict a black box with input-output pairs. You literally have a copy of the black box in mathematical terms.

Like most of these papers it basically boils down to positing a solution to a problem as the only solution and then claiming that solution doesn't work so no solution would work.


> if you took a human brain, deconstructed its physics down to individual chemical reactions

Frankly, the idea that we can mathematically model 10^21 simultaneous [unobserved] chemical reactions in an individual's head in real time sufficiently well to result in a generalised model of cognition which can be applied to other uses seems more of a stretch than modelling language with ML...


Why on earth does it have to be real time? "simulate a human brain" isn't a practical suggestion for a path forward, it's an existence proof that says at least one route is possible (and thus others probably are too).


> Why on earth does it have to be real time?

Because you're not going to pass a Turing test with an approach which takes days to simulate the process that generates a reply. (And since otherwise you've got to align the speed of the inputs to the brain simulation with the speed it simulates the chemical reactions in response to them, slowing things down may not be simplifying things anyway). It's not an existence proof if a simulation of a human brain depends on technology and processes no more grounded in things which exist than "ask the omnipotent deity to do it for us". If you can simulate each individual chemical reaction in a byte, you're still using a year's worth of internet traffic to simulate the number of simultaneous reactions going on, so even if you Moore's Law away the issue with the required processing power and assume into existence the technology to observe whole brain activity at the molecular level, you probably need Yahweh or Krishna to program the simulation...


I think we're using different values of "existence proof" here. You seem to be taking it as "we can't build this, therefore it is not a proof". It was intended as "it is theoretically possible to build this", and "it will run really, really slow" isn't a disqualifier there. (And it could still pass a Turing test - you'd have to give the human equivalent constraints, but that's true for any "text on a screen" test anyways)

How about this. Would being able to perfectly simulate a Caenorhabditis elegans brain be an existence proof for whole-brain simulation, or do you think that there's some sort of discontinuity before you get to human brains?


Not really, I'm arguing that "it is theoretically possible for humans to build and program Turing machines to undertake human whole brain simulation" relies entirely upon magical thinking. It's not so much a case of "we don't have the processing power", though we don't, as "there's no theoretical basis for assuming the human mind[1] has the capability to parse and understand the information content of a molecular structure so complex it contains the human mind within it in sufficient detail to program an accurate simulation of a human mind" (the real time stuff is moot, but since the whole point of a Turing test is a human can be convinced that another human is sat the other side of a terminal, an extended delay whilst the machine parses the complexity of "what is your name?" is a pretty hard fail).

I think it's pretty obvious there's a discontinuity between simulating mechanical responses of a nematode worm to stimulation of fewer neurons than I've lost typing this sentence and achieving AGI by human brain emulation, though we're finding the worm pretty tough. Nobody is arguing nematode worms have intelligence, for a start...

[1]obviously one could assume sufficiently powerful AGI could do it, but if the prior existence of AGI is a prerequisite for a particular approach to AGI, we can safely ignore it as a proof of routes to AGI.


Skimmed a bit and found some snippets; based on them I can't take this paper seriously, as it dismisses unsupervised learning / language models over large datasets. Yes, sec 4.3.4 briefly discusses recent work in this area, but only briefly, and dismisses it by cherry-picking the least positive result of many.

"Only if we have a sufficiently large collection of input-output tuples, in which the outputs have been appropriately tagged, can we use the data to train a machine so that it is able, given new inputs sufficiently similar to those in the training data, to predict corresponding outputs"

This ignores recent work with large language models that do generalize, zero-shot, to novel tasks.

"supervised learning with core technology end-to-end sequence-to-sequence deep networks using LSTM (section 4.2.5) with several extensions and variations, including use of GANs"

This reads like something generated from a LM (e.g. GPT-2):

- Where is any mention of attention or Transformer?

- GANs? Have any recent works used GANs successfully for text? There are a few, e.g. CycleGAN, but not widespread afaict.


>> This ignores of recent work with large language models that do generalize, zero-shot, to novel tasks.

Which work is that?


OpenAI trained a large (1.5B parameter) Transformer model called GPT-2 on a diverse set of pages from the web. From their paper, GPT-2 "achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting"

Blog entry with link to the paper: https://openai.com/blog/better-language-models/


Thank you for the link.

I'm not sure I'm convinced by OpenAI's claim that their model performs zero-shot learning. It depends on what exactly they mean by zero-shot learning. My understanding, from reading the linked article (again; I remember it from when it was first published) is that, although their GPT-2 model was not trained on task-specific datasets, there was no attempt to ensure that testing instances for the various tasks used to evaluate its zero-shot accuracy were not included in the training set. The training set was a large corpus of 40 gigs of internet text. The test set for e.g. the Winograd Schema challenge was a set of 140 Winograd schemas (i.e. short sentences followed by a shorter question), so it's very likely that the training set had comprehensive coverage of the testing set, for this task anyway. I don't know about the other tasks.


This sentence sums up the paper:

"Turing machines can only compute what can be modelled mathematically, and since we cannot model human dialogues mathematically, it follows that Turing machines cannot pass the Turing test."


It feels more like an argument that chatbots will never exhibit general AI.

Which ought not to be controversial. The point of the Turing test isn't to provide a blueprint (just optimize human dialogue and you'll eventually get general AI) but a test to see that your general AI works. You might build a general AI that fails the Turing test, but you won't be able to pass the test without a general AI. That's the idea.

Unfortunately people have taken the wrong idea from the Turing Test and decided to attack the "faking human communication" thing directly. Which is fun! But anyone who in 2019 thinks that better chatbots will eventually develop general AI is delusional. I don't know if anyone with more than a passing interest actually does believe this, so it feels like this paper is arguing against a straw man.


Quite, the idea that mathematical modelling of language will lead to general AI is absurd. The simplest way to defeat chatbots and mathematical language generation models is to teach them something new, like a game or other rules based system and ask them questions about it and then play it. They fall flat on their face immediately because they have no ability to build, interrogate and adapt models of systems.

The authors’ credence in Searle’s Chinese Room argument is telling. The Chinese Room is misdirection. We are invited to consider an agent in a room manipulating symbols on cards and asked whether such a system could be considered conscious. In fact there might need to be trillions of these agents in rooms covering an area many orders of magnitude larger than the Earth, manipulating millions of trillions of symbols every millisecond. Asking if a system like that could be conscious is a whole different question.

“Here however Turing commits the fallacy of petitio principii, since he presupposes an equivalence between dialogue-ability (as established on the basis of his criterion) and possession of consciousness, which is precisely what his argument is setting out to prove.”

Sigh, no. Dialogue ability isn’t claimed to be _equivalent_ to possession of consciousness; that’s putting the cart before the horse. It’s a possible product of consciousness. You could have a conscious system incapable of sensible dialogue, but the point of the test is that you can’t have sensible dialogue without consciousness. That’s a claim and it’s arguable, sure, but dialogue ability doesn’t lead to consciousness. That’s daft. They and Searle look at this from entirely the wrong direction.


And everyone knows there are conscious agents who can't hold a sensible conversation: toddlers. They're more conscious than any chatbot could ever be, but they'd fail the Turing test. So would dogs, and dogs show more recognizably cognitive ability than a chatbot. Let alone nonhuman primates, who are all much, much smarter than a chatbot and would all fail the Turing test.

It would be one thing if we had built an apelike intelligence and found it impossible to make something smarter, but as we can't model them either, worrying about not entirely understanding language seems beside the point.


> Unfortunately people have taken the wrong idea from the Turing Test and decided to attack the "faking human communication" thing directly. Which is fun! But anyone who in 2019 thinks that better chatbots will eventually develop general AI is delusional.

Absolutely. It’s Goodhart's law in action, targeting the behaviour that wins the prize money rather than the intelligence that the test was about. That said, I don’t think the targeting was very good last time I looked (a few years ago now), as the chatbots mostly didn’t have any long-term memory and would have conversations along the lines of “Where do you live?” “Paris” “What city do you live in?” “Newark” “What country do you live in?” “The Former Yugoslav Republic of Macedonia”. If we build something intelligent, it should pass the Turing test, but that doesn’t necessarily mean that something which passes the Turing test is intelligent.
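That failure mode is easy to reproduce. A hypothetical sketch (the names and the toy "fact store" are mine, not from any real chatbot): a stateless bot samples each answer independently, while even one dictionary of remembered facts keeps paraphrased follow-ups consistent.

```python
import random

CITIES = ["Paris", "Newark", "Skopje"]

def stateless_bot(question):
    """No memory: each answer is sampled fresh, so paraphrased follow-ups
    about the same fact can contradict earlier answers."""
    return random.choice(CITIES)

class StatefulBot:
    """Minimal long-term memory: commit to a fact the first time it's asked."""
    def __init__(self):
        self.facts = {}

    def answer(self, fact_key):
        if fact_key not in self.facts:
            self.facts[fact_key] = random.choice(CITIES)
        return self.facts[fact_key]

bot = StatefulBot()
# "Where do you live?" and "What city do you live in?" hit the same stored fact:
print(bot.answer("home city") == bot.answer("home city"))   # → True
```

Consistency is of course necessary rather than sufficient; the point is only how little the prize-chasing chatbots were doing.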


>It feels more like an argument that chatbots will never exhibit general AI.

I feel it's even more specific: that the current approach to training and building chat bots will never exhibit general AI. Theoretically, you could make chat bots in other ways, such as directly simulating a human brain down to some arbitrary level. Maybe there are things in between as well.


Oh, yes, that's what I'm referring to as a chatbot- something built specifically to converse by modeling natural language, rather than trying to model cognition at any deeper level. Not that we can do that successfully either, but at least trying to tackle that has an honest chance of working.

This sort of feels like an updated version of Searle's argument... a chatbot is an awful lot like his 'Chinese room.'


Got it. I think the paper actually makes a reference to 'Chinese rooms.'


The next paragraph has

"Passing the strong form of the test would indeed be clear evidence of general Artificial Intelligence. But this will not happen in the short- or mid-term."

To me this is a more realistic claim, but undermines the rest of the paper. The title and abstract claim that Turing machines cannot pass the Turing test (with the implication being that they can /never/ pass the Turing test), while that quote says that computers cannot pass the Turing test now or in the near future. The latter is a much weaker claim, but seems to actually be supported by the paper. As a disclaimer I only skimmed the paper.


And it is why I find the paper very unconvincing. We do not have a good model right now, but it is shortsighted to claim we will never have one. The whole statement is completely circular and tautological.


How does that sentence apply to playing chess, go or StarCraft II?

We can easily simulate humans using an extended version of Lattice QCD [1] that considers the other forces, and get an accurate simulation of a human that can talk. It is discrete, so it is easy to model. The only problem is the scale [2], so we can model humans mathematically as well as we can model playing chess, go or StarCraft II.

[1] https://en.wikipedia.org/wiki/Lattice_QCD

[2] I'm not sure about the state of the art here, but I guess the biggest models have a few dozen particles. For a human you need something like 10^28 particles, and a human with a home needs more [3]. And the complexity of the calculation grows exponentially, so the run time is something like e^(10^28) times bigger than the current calculations, but mathematically it doesn't matter.

[3] Do you think that's air you're breathing now?
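Just to put numbers on the scaling claim above (a back-of-the-envelope sketch; the particle counts and the e^N cost model are rough assumptions from this comment, not measured values):

```python
import math

# Assumed figures from the comment above, not measured values.
particles_human = 10**28   # order-of-magnitude particle count for a human
particles_today = 100      # assumed scale of today's biggest lattice runs

# If the cost grows like e^N in the particle count N, the runtime ratio
# is e^(N_human - N_today); we can only sensibly report its log10.
log10_ratio = (particles_human - particles_today) * math.log10(math.e)
print(f"runtime ratio ~ 10^{log10_ratio:.3g}")

# For comparison, the age of the universe is only ~4e17 seconds, so even
# the *exponent* of the ratio dwarfs any physically meaningful number.
print(log10_ratio > 4e17)
```

So "mathematically it doesn't matter" is exactly right: the model is well-defined, it just has a run time whose exponent alone is about 4 x 10^27.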


I like much of his work in formal ontology and it's weird and personally disappointing to me that Barry Smith is a co-author of such a bad paper. :(


"We don't know how to do it, therefore it is impossible" is silly.

A real result would be "We prove that human-equivalent intelligence is impossible", which would be quite a shocker since we have the existence proof of actual humans.


We do not have a workable definition of intelligence.


Since 1950, when Alan Turing proposed what has since come to be called the Turing test, the ability of a machine to pass this test has established itself as the primary hallmark of general AI.

That's... not true. I mean, to the general public at large, sure, they think "the Turning test is the hallmark of AI." But I don't think any serious AI researchers actually agree with that sentiment. And for good reason: among others, the fact that "programming" a machine to pass the Turing test is basically programming it to lie effectively. A useful skill to have in some contexts, perhaps, but not exactly the defining trait of intelligence. Beyond that, the "Turing Test" (or "Imitation Game") as originally specified, if memory serves correctly, was fairly under-specified with regards to rules, constraints, time, etc.

This whole thing also blurs the distinction between "human level intelligence" and "human like intelligence". It seems reasonable to think that we could build a computer with intelligence every bit as general as that of a human being, and which would still fail the Turing Test miserably. Why? Because it wouldn't actually have human experiences and therefore - unless trained to lie - would never be able to answer questions about human experiences. "Have you ever fallen down and busted your face?" "Did it hurt like hell?" "Did you ever really like somebody and then they blew you off and you felt really depressed for like a week?", "Have you ever been really pissed off when you caught a friend lying to you?" etc. An honest computer with "human level" intelligence would be easily distinguishable as a computer when faced with questions like that, but it might still be just as intelligent as you or I.


The paper does not have any redeeming qualities and the title and abstract do not even align with the content.

In my opinion, conversation and other high level skills are sort of the icing on the cake of general intelligence. I believe that the key abilities that enable general intelligence are those that humans share with many other animals.

So I think that a research goal of animal-like intelligence will give the most progress as long as the abilities of more intelligent animals like mammals are the goal.

I think that people who have worked closely with animals or had a pet will more easily recognize that.

Animals adapt to complex environments. They take in high bandwidth data of multiple types. They have a way to automatically create representations that allow them to understand, model and predict their environment. They learn in an online manner.

No software approaches true emulation of the subtleties of behavior and abilities of an animal like a cat or a dog.

Obviously it's another step to say that leads to human intelligence. I'm not trying to prove it, but will just say that it seems mainly to be a matter of degree rather than quality. If cats and dogs are not convincing for you, look at the complexity of chimpanzee behavior.

So this is just a half baked comment on a thread, and I would not try to publish it, but I don't think that the paper is actually much more rigorous and yet we are supposed to take it seriously.

arXiv is amazing and we should not change it, but you have to keep in mind that there is literally zero barrier to entry, and anyone's garbage essay can get on there with the trappings of real academic work. So you just have to read carefully and judge on the merit or total lack thereof.


Unless this work disproves the Church–Turing thesis, I suppose it can safely be disregarded.

Well, unless you 1) want to ascribe supernatural powers to the human brain, or 2) assert that human intelligence is not general. The little cynic in me is gleefully considering option 2 right now...


By this argument, prop planes, jets, helicopters and rockets don't fly because they don't flap their wings like general flying creatures do.

To me, it seems the question of general AI is bordering on semantic word games. We'll always come up with new reasons something isn't generally intelligent this way.


Was this written by GPT-2?


Is this paper any good? The way the abstract is written makes me wary of the results claimed.


It is. It is a thorough study of the necessary components of real human dialogue, and a well-defended claim that there is no model, nor even any existing TYPE of model, which can model human dialogue. Human dialogue, the paper says, is a temporal process, and the two mathematical models for such processes--differential and stochastic--are insufficient.

From the paper: "For example, it is not conceivable that we could create a mathematical model that would enable the computation of the appropriate interpretation of interrupted statements, or of statements made by people who are talking over each other, or of the appropriate length of a pause in a conversation, which may depend on context (remembrance dinner or cocktail party), on emotional loading of the situation, on knowledge of the other person’s social standing or dialogue history, or on what the other person is doing – perhaps looking at his phone – when the conversation pauses."

Optimists in the comments here have hope for advances in mathematics that would give us a new method for modeling that could be applied. Maybe their hope isn't unfounded. I'm just a dude who read an academic paper. But I did enjoy it.


>Optimists in the comments here have hope for advances in mathematics that would give us a new method for modeling that could be applied. Maybe their hope isn't unfounded. I'm just a dude who read an academic paper. But I did enjoy it.

I'd say most of the arguments in the comments boil down to: human brains do it, so the paper's argument requires human brains to work on mechanisms other than non-quantum physics (which is mathematically modelable), and that is a strong assumption.


As far as I can tell, the strongest assumption around is that, because we can model some physical processes, we can model all physical processes.

We have some maths. We haven't solved all of physics. We don't even know if solving all of physics will help us model intelligence. We have no way to know whether, even if we had an accurate model of the function of the brain and of our intelligence, we would be able to run it as a program, and on an actual computer.

We know so little about what intelligence is and the dead certainty that, because we have maths, we should be able to reproduce it is just as unfounded as the opposite assumption, championed by the article.


I only skimmed through it, but I cannot take it seriously. They talk about general mathematical proofs, but it feels more like a few very arbitrarily chosen definitions, without addressing the most standard argument ("what is so difficult about simulating a human brain in principle, except for the (not fundamental) problem of having a big computer and precise classical measurements?").


No


This rubbish does not deserve to be on arXiv, let alone HN.


> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

https://news.ycombinator.com/newsguidelines.html


Hasn't the Turing test been discredited as a test of general AI?

Overall from reading the abstract it seems like a pretty obvious conclusion. Same applies to robotics where the only successful cases are very constrained.

It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving any other seemingly easier robotics problems. Other than robo-vacuums, which aren't particularly complex, we have jumped straight to trying to solve one of the hardest unconstrained robotics and AI challenges in automotive environments.

edit: Getting downvoted, if you disagree could you reply with why?


Self-driving R&D is rewarded in stock valuations. First, because you can create great-looking demos and hide the real difficulties.

Second, because you can say you will be selling trips, and have network effects, which seems to be a much more profitable proposition than selling robots.

Can you build a similar story for domestic robots? It would be much harder. And without that, you can't invest long term.

So robotics is advancing incrementally.


>Hasn't the Turing test been discredited as a test of general AI?

I'm not sure discredited so much as 1.) Turing didn't introduce the test as an explicit test of intelligence and 2.) there have always been criticisms of trying to use the test for this purpose.

Wikipedia has a pretty good run-down: https://en.wikipedia.org/wiki/Turing_test#Weaknesses


>It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving any other seemingly easier robotics problems. Other than robo-vacuums, which aren't particularly complex, we have jumped straight to trying to solve one of the hardest unconstrained robotics and AI challenges in automotive environments.

I'd say it's because other problems are actually difficult in hidden ways or lack much of a value proposition given existing mechanical aids. Cars are in some ways simple because they have plenty of space for electronics and have an easy to automate set of controls. That's not even getting into industrial robotics which is very popular.


Industrial robots are very well constrained, and the environment can be adapted to them, which is why they're successful. Was thinking more about consumer robotics, of which the robo-vacuum is the only real autonomous product right now.

IMO the answer is that it's very hard to build reliable robots + AI even in simple environments, and AVs are still very dependent on the driver. The size of the potential market if it pays off is huge, but removing the driver completely is going to take a long time.


Automobiles are some of the most constrained consumer problems in my opinion. There's strong laws on how cars can operate and the environments they operate in. Cars operate in constrained and relatively simple ways. Most other consumer problems are optimized around a human body's abilities which are very complex.


It's still incredibly complex. Reliably handling an area like downtown SF requires a robot to detect, understand and predict a lot of diverse human behavior. All the edge cases lead me to think there will be a supervisory driver in the car for a long time, unless we constrain the environment further.


>> It has recently occurred to me to ask why we have decided to try to solve autonomous driving before solving any other seemingly easier robotics problems.

As far as I can tell, a few years ago, Google decided it was a good idea and then everyone else followed suit, because it's Google so they must be on to something.

Mind, there was earlier work on self-driving cars that was just as impressive as modern efforts, but you rarely hear about it.

Here; You-Again Schmidhuber got the works:

http://people.idsia.ch/~juergen/robotcars.html


Apart from the obvious value of solving the problem, part of the point of taking moonshots is that you invent a whole lot of valuable stuff along the way, even if you fail.

And in this case I mean moonshots literally, because NASA was one.


Sure. I get the value proposition. Just seems strange that we don't have more examples of constrained autonomous robotics products beyond factory/warehouse robots and vacuums. IMO the answer is that it's very hard to build reliable robots + AI even in simple environments, and AVs are still very dependent on the driver.


There are lots of other examples, depending on what you consider to be autonomous robotics - grass mowers, delivery drones, autonomous trains, elevators, vending machines.

Do you have an application in mind which is easier than autonomous cars but where there is no current product?



