A Conversation for John Searle's 'Chinese Room' Argument

The Turing Test

Post 1

Awix

Searle's argument refutes the Turing Test as a valid test of true intelligence, but surely the Test itself has now been pretty much discredited anyway? Quite apart from all the pattern-recognition software that's out there now, it's simple (in theory, at least) to conceive of a machine with every sentence of up to, say, 100 words programmed into it, along with an appropriate response to each sentence. Potentially a fluent conversationalist, but still a wholly mechanistic one.


The Turing Test

Post 2

Lear (the Unready)

I tend to agree (that's why I wrote the article), but I don't think you'll ever get a hardline behaviourist to accept that the human mind is anything other than an unusually complex machine. They've had the idea 'programmed' into them, so to speak, and it'll take more than empirical evidence to get them to think otherwise (if they're capable of thinking at all... smiley - winkeye )

(That was just a joke, before I get flamed by angry AI researchers, or reported to the editors for breaking the House Guide to Being Nice, or whatever.)

I think the really interesting distinction (Searle touches on this) is not between human and machine but between biological and non-biological. The task for AI is not so much to find a way of emulating human 'intelligence', but rather to design a machine that is capable of emulating the organic, 'self-organising' quality inherent in biological life. Experiments with neural networks would seem to indicate that this is a long way off in the future, if it ever happens at all. A fascinating area of research, though...


The Turing Test

Post 3

Martin Harper

Since there's a fairly hefty reward available to anyone who builds a machine which actually passes the Turing Test, and it's still unclaimed, I would suggest that it's not as simple as you think. In particular, conversations are often shaped by what has been said previously, for example:

Alice: "how are you?"
Marvin: "I've got this terrible pain."
Alice: "where?"
Marvin: "In all the diodes down my left side."

Now, what is the 'appropriate response' to the sentence 'where?'? In this case, as with so many, it's all about context, and your simple conversationalist doesn't have any concept of that.
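
The point about context can be made concrete with a toy sketch (the canned replies here are my own invented examples, not anyone's actual program):

```python
# A toy stateless responder: it maps each sentence to a canned reply
# and never sees the conversation history.
canned = {
    "how are you?": "I've got this terrible pain.",
    "where?": "Where what?",  # no single right answer exists without context
}

def respond(sentence):
    # Only the current sentence is consulted, so every occurrence of
    # "where?" gets the identical reply, whatever was said before.
    return canned.get(sentence.strip().lower(), "I don't understand.")

print(respond("Where?"))  # same reply regardless of the preceding exchange
```

However large you make the lookup table, a sentence-by-sentence responder of this shape can never distinguish the two meanings of 'where?' in two different conversations.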

Standard comments apply - Searle's argument does not convince a large number of people. In particular, while it appears to convince some philosophers, it doesn't appear to convince computer scientists one bit. Of course, they just make the dang things, so what do they know? smiley - winkeye


The Turing Test

Post 4

Playboy Reporter

Well, I don't agree with Searle's argument, but I believe that his conclusion that a computer can never possess 'understanding' is correct.

Are you aware, Lear, that a powerful new argument against AI has recently come to light in the form of the work of Roger Penrose?
(Penrose has published two absolutely brilliant books - 'The Emperor's New Mind' & 'Shadows Of The Mind')

Penrose's argument is based on an analysis of Gödel's Theorem, which applies to any 'formal system' (Church and Turing proved that any system which can be simulated on a computer is a formal system, and vice versa). His argument appears to show that it is logically impossible for any formal system to capture the workings of a mind.

Now it turns out that all the known laws of physics are entirely 'computational' - that is, anything working according to the known laws of science could be represented as a formal system.

Thus no process operating according to known law (i.e. neural nets, computers, etc) can possibly create a mind! smiley - aliensmile

He concludes that the operations of the human brain are working according to some as yet unknown laws of physics - laws of physics which are 'non-computational' in nature!


The Turing Test

Post 5

Martin Harper

In extremely compacted essence, Penrose's argument is that any computational system has a Gödel sentence which is true, but cannot be proved within that system. He then claims that intelligent people could always perform the 'Gödelisation' procedure which proves such sentences. He therefore claims that there is an unbridgeable gulf between computational systems and humanity.

There are a few problems, though:
1) The Gödel sentence for any system complex enough to have a shot at intelligence is going to be *huge*. It is probable that the Gödel sentence for humans, while perhaps possible for us to write down, is too big for us to prove accurately.
2) There are many intelligent humans who would fail this test - it is unlikely that Shakespeare would have been able to perform the Gödelisation procedure, for example.
3) It is not clear what relation, if any, there is between the existence of a Gödel sentence and intelligence. There is no reason why the one should in any way affect the other. In particular, there appear to be few practical applications of the Gödelisation procedure.


There may well be non-computational laws of physics. If so, this is going to severely screw the world up when we discover them. One example of a non-computational function is the following:

Consider Turing machines with a given (limited) number of states. Among all such machines that terminate when run on an empty tape, find the one which leaves the longest run of unbroken '1's. The function maps the number of states to the length of that run of '1's.

Now, this function increases extremely fast, and it has been proven that it does so faster than any computational function. In practice humans, whether we are computational or not, have only found the values for X=1, 2, 3, and 4. We've found a pretty good machine for X=5, but nobody so far can prove whether it is the best or whether there are better ones.

The series runs 1, 4, 6, 13, and at least 4098 for five states (perhaps much higher). If any of you happen to know of Ackermann's function, this one can be proved to grow (much) faster than that.
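
For the curious, the function described here is usually called the 'busy beaver' function, and the small cases can be checked by direct simulation. Below is a minimal sketch of a Turing machine simulator running the standard three-state champion machine, which halts leaving six '1's on the tape; the transition table is the well-known published one, while the simulator itself is just my own illustrative code:

```python
# (state, symbol) -> (symbol to write, head move, next state); "H" halts.
# This is the standard 3-state busy beaver champion: it halts with six 1s.
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "H"),
    ("B", 0): (0, +1, "C"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "C"), ("C", 1): (1, -1, "A"),
}

def run(rules, max_steps=10_000):
    # Sparse tape: unwritten cells read as 0.
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps

ones, steps = run(RULES)
print(ones)  # 6
```

Finding the champion for a given number of states is the hard part, of course: you would have to prove that every better-scoring candidate machine never halts, which is exactly where the non-computability bites.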


The Turing Test

Post 6

Lear (the Unready)

Playboy Reporter... Searle wasn't really arguing that a computer can never possess understanding - the question of whether or not an 'intelligent' machine can be designed was one that he deliberately left open. The purpose of his 'Chinese Room' thought experiment was to demonstrate that a Turing-style test would be no adequate way of assessing a machine for understanding, one way or the other. That isn't just a pedantic distinction. I think the thing that makes Searle interesting, philosophically (perhaps not so much from the computer scientist's point of view), is his appreciation of the wide diversity of functions that go together to constitute 'thinking' in a biological organism (notice I don't say 'human' - trying to avoid speciesism here). We do more than just crunch numbers and think logically and mathematically.

And, of course, the really important thing - I didn't develop this much in the article - is that the brain is *biological*, not mechanical. As I say above, the real task for AI is not to develop a 'thinking machine', but to design a machine that is capable of emulating the organic, 'self-organising' quality inherent in biological life. The 'machine' paradigm simply misses the point altogether, in my opinion...

Thanks for the Penrose reference, by the way. I'll chase that one up when I get a spare minute... smiley - smiley


The Turing Test

Post 7

Awix

Isn't there a paradox inherent in what you propose here, though? You talk about the idea of designing a self-organising machine. Now to me the very use of the word 'design' implies organisation and order imposed from without, which is surely at odds with the self-organisational ability being searched for - i.e. the ability to organise oneself.

And on a related point - why should the brain's biological nature be so significant? Surely it's the way the brain functions rather than its composition that matters, or else we're looking at a variant form of the speciesism you tried to avoid in your earlier post. Why shouldn't a synthetic brain (as opposed to a biological one) operate just as well?


The Turing Test

Post 8

Lear (the Unready)

Perhaps a better word than 'design' would be 'facilitate'. It would, presumably, be more a case of setting up a system in such a way that it could then evolve without further intervention, rather than designing - or 'programming' - it to follow specific instructions (as with a machine). That's what I was trying to get at before, although I think I could have chosen my words more carefully. smiley - smiley

Obviously, since such a thing doesn't exist outside of the natural world, it would involve human intervention to bring it into the world initially. But I was trying to draw a distinction between a machine which faithfully carries out a set of pre-given instructions, and a biological organism that has the capacity to 'self-organise' according to changes in its environment.


The Turing Test

Post 9

Awix

We seem to be moving towards a position where intelligence is a property that can only be exhibited by living, or quasi-living organisms. The self-organising system you describe above can scarcely be described as either inert or mechanical. I don't necessarily disagree with this proposition, I just want to check that it's where we're pointed.

If so, it seems we're possibly letting some kind of dualism in through the back door, in that we're allocating 'living' organisms potentialities denied to inert ones - potentialities which don't seem necessarily related to the question of whether they're 'alive' or not.


The Turing Test

Post 10

Playboy Reporter

Thanks for the thoughts Lucinda - interesting, interesting... smiley - coffee

Well, you know, until very recently I was never really convinced by Penrose's ideas either, but after thinking about it for a long, long time I have finally changed my mind and decided that Penrose has a good chance of being right after all smiley - donut

It's true that the great complexity of any algorithm capturing intelligence might be a real problem. Maybe it's just SO complex that no one could ever fully specify it and carry out Gödelisation in their lifetime.

But Penrose counters this by saying that each individual step in the algorithm-specifying/Gödelisation procedure is something we should be able to do, and thus the whole process is something that IN PRINCIPLE we should be able to do, even if it's not possible in practice.

As to the second point, that many people would fail the test, I'm not sure that that is relevant. Penrose was trying to show that there is something a suitably trained mathematician can do that indicates non-computational processes in human thinking. So what he is really saying is: if something as logical as maths needs non-computational processes, isn't it reasonable to assume that many aspects of thinking that are less logical and more mysterious (such as, for instance, enjoying a sunset) need non-computational processes as well?

Now to the third point, that the link between Gödelisation and intelligence is not proven. This is probably the most serious objection, but Penrose does, in my opinion, make quite a good case for a link towards the end of 'The Emperor's New Mind'.

He points out that Gödelisation involves self-reference - a process of 'stepping outside' a fixed system in order to model it or understand it. Isn't this exactly what consciousness is all about, asks Penrose? Consciousness DOES indeed seem to be a sort of 'self-reflective' ability whereby one 'steps outside' fixed ways of thinking.

So I now think that, on balance, Penrose's arguments are actually quite good ones. smiley - coffee

So... non-computable laws of physics. Well, it sure would screw up existing physics alright! smiley - biggrin

Now, to Lear: OK, I made a mistake... thanks for pointing out that in fact Searle was simply showing that the Turing Test is not a good measure of intelligence.

But I have to agree with A. I can't really see why the 'biological' aspect could make a difference as to whether something is intelligent or not. Remember that all the known laws of physics are entirely computational. That means, if true, that a non-biological system such as a computer COULD perfectly model the workings of any biological system. And if a computer could 'perfectly mimic' a biological system, then shouldn't we conclude that the computer is conscious as well? After all, if the simulation is perfect then, according to scientific method, the difference between the biological and computer systems is meaningless (i.e. the question of consciousness would not be falsifiable) smiley - coffee

heh.. even Arpeggio couldn't have reasoned that well smiley - tongueout


The Turing Test

Post 11

Awix

Certainly consciousness is a tricky issue. I say 'I am conscious' and you all believe me (I hope...) as I am like you and you all know you are personally conscious. There's a kind of argument from analogy going on here, where we all have our common ground of being human beings to draw upon.

If on the other hand a computer says 'I am conscious' we are automatically less likely to believe it, simply because it's a computer. There's no way of scientifically measuring consciousness per se (that I'm aware of - I'm sure I'm about to be corrected). Perhaps the whole of the AI field is riddled with this institutionalised - what? speciesism? base-elementism? I don't know - where the standard of proof expected from artificial systems is much higher than that expected of everyday Homo sapiens...


The Turing Test

Post 12

botogol

I think a lot of people really miss the point of the Turing Test.

Turing wasn't saying that the definition of intelligence is the ability to pass as a human.

What he said was: here's a really tough test - and anything that could pass this test would have to be intelligent to manage it.

In other words, Turing wasn't answering the question "what is intelligence?"; he was answering the question "what would be a good test for the presence of intelligence?"

Not the same at all. Note that

- failure to pass the test doesn't mean lack of intelligence (a hypothetical alien space traveller would probably be acknowledged as having intelligence, but would have a tough time with this test). Perhaps there will one day be software in the same situation.

- ability to pass the test isn't the DEFINITION of intelligence - it's an excellent INDICATOR (thought Turing) that intelligence (whatever that is) must be present.


I'd say the Turing test is a clever and intriguing idea. Certainly it's hard to think of a better test for the presence of intelligence...


The Turing Test

Post 13

Awix

Well - provided you think the ability to express ideas is the main component of intelligence. There's non-verbal intelligence, and there's a vast body of empirical data demonstrating quite advanced intelligence in apes and dolphins, most of which would struggle with the Turing Test. smiley - smiley

It's been said that in theory one could build an extremely simple machine quite capable of passing the Turing Test (although it's not something one could or would ever actually do, which for me weakens this argument).

It seems to me that Turing got a bit hung up on the significance of 'speech' when it comes to intelligence. It seems strange that measuring the *degree* of intelligence is relatively simple, while devising a test to decide whether it exists at all seems so difficult...


The Turing Test

Post 14

Martin Harper

In theory, yes - provided you ignore things like, say, the speed of light, the number of atoms in the universe, and so forth. In practice, since speed of response is one component of intelligence, a simple-but-large machine won't cut it.


The Turing Test

Post 15

Awix

The example I was thinking of was the one of a machine with every possible 50-word-or-less sentence that could be said to it held in a file, with an appropriate response pegged to each. Okay, the actual system would have to be a good deal more complex than that, and it would be very tedious to create, but the point being made was that if a transparently non-intelligent system could beat the test, even purely theoretically, it isn't a good enough test for real life.


The Turing Test

Post 16

Martin Harper

Suppose there are around 10,000 words in English. Suppose we want to store a response for every five word sentence. Suppose each response is only a single word. That would require approximately 500 million terabytes of data. Put another way, if you gave everyone on the planet a 100 GB hard disk and wired them all together, then you'd have about enough data.
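
The arithmetic is easy to check; here is a quick back-of-envelope sketch (the bytes-per-row and population figures are my own rough assumptions):

```python
VOCAB = 10_000            # assumed English vocabulary
SENTENCE_LEN = 5          # a stored reply for every 5-word sentence
BYTES_PER_ROW = 5         # a short one-word reply

rows = VOCAB ** SENTENCE_LEN               # 10^20 possible sentences
terabytes = rows * BYTES_PER_ROW / 10**12
print(terabytes)                           # ~500 million TB

# Cross-check: everyone on the planet donating a 100 GB disk
planet_tb = 6_000_000_000 * 100 / 1000     # 100 GB each, in TB
print(planet_tb)                           # ~600 million TB -- about enough
```

And that is with the wildly generous assumption that a five-word response table would fool anyone at all.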

That's just storage: to pass the Turing Test you also need to be as fast as a human in response time - just indexing those 500 million terabytes is going to take you over that. Creating it would be more than tedious: if every person could fill in one row a second, and we got everyone on the planet working on the task 24/7, without stopping to eat or sleep, it would still take over five hundred years to finish the task.

In any case, the resulting behemoth would only be able to manage a six-word conversation before failing the Turing Test. Indeed, *any* finite non-intelligent machine will eventually fail the Turing Test, given a sufficiently skilled interrogator and sufficient time.


The Turing Test

Post 17

Awix

Well, the point that this argument was making all along wasn't that you could build such a machine and beat the test - as you say, the sheer logistics are rather daunting.

The point of the argument is that if it's *theoretically possible* to build a non-AI device that would beat the test, the test is invalidated. I stress theoretically possible because, yes, data storage and response time and so on make it impossible to do for real. You can argue that this kind of machine wouldn't beat the test anyway, and I'll happily admit that the fact that we'll never be able to tell sort of lends power to your elbow...

The test is seriously flawed anyway, I thought we'd established that.


The Turing Test

Post 18

Martin Harper

I did address the theoretical point too: ANY finite non-intelligent device will fail the Turing Test, given sufficient time, and a sufficiently skilled interrogator. Even merely theoretical ones.

The number of possible conversations in a given time grows exponentially. Any simplistic lookup-table based machine will eventually be overwhelmed by the exponential growth. In practice this occurs VERY FAST, but even in theory it will always occur eventually.

The test is flawed, but only in that it is too hard to pass. It is a sufficient test for intelligence, but not a necessary one.


The Turing Test

Post 19

Awix

I was thinking more in terms of the way that the test, as formulated by Turing, seems to me to be both subjective and relativist.

IIRC, Turing's idea was not that someone should carry out a text conversation and then decide whether or not they'd been talking to (for simplicity's sake) a computer. The idea was that someone should 'talk' to a computer and a person, without knowing which was which, and then make a decision afterwards.

Very subjective, surely, and therefore not very good science. Plus, logically, if the judge guessing right as to the human's identity means the computer isn't intelligent, surely the judge guessing wrong must, under the terms of the test, mean the *human* isn't intelligent? But (argument from analogy time again) we 'know' all humans are intelligent, even if they can't pass the test. The test is thus biased in favour of humans.

(Forgive me, I'm just doodling with ideas here, I'm a martyr to it.)

The test isn't an objective measure of intelligence. It's a subjective test of one's ability to pass oneself off as human, as compared to one random human. You may well need intelligence to pass, but it's a very particular and possibly quite limited type of intelligence.


The Turing Test

Post 20

Martin Harper

Turing's original test, and the version I describe, are essentially identical. I think my arguments apply to Turing's original version just as well, unless you disagree?

It is a very limited form of intelligence, but there's a lot of confidence in the concept of "AI-hard" problems: a certain class of problems that all involve significant advances in AI (with the implication that if you solve one, you solve them all). Passing the Turing Test is AI-hard.

If the judge fails to tell the difference (and repeatedly so), it doesn't prove that the judge is unintelligent. Turing starts by talking of a similar test where men have to pretend to be women, and vice versa - you can play this game yourself (IRC is a fun place to try). Sometimes people succeed at this game, sometimes they fail, but we know (heh) that humans are intelligent.

The test isn't a MEASURE of intelligence, which implies a scale, but a test for the PRESENCE of intelligence - one which gives many false negatives, but (it is believed) no false positives.

