A Conversation for John Searle's 'Chinese Room' Argument

Not flawless

Post 1

Gnomon - time to move on

"Searle's argument is not flawless". Indeed. It is so full of flaws that it is not an argument at all and should not be taken seriously. It is quite amazing to me how this so called refutation of artificial intelligence has been reprinted so often as if it were important.

Searle's argument in a nutshell is: "The individual components of the system do not understand, therefore the system does not understand". This is codswallop. The second part of this sentence in no way follows from the first. You might as well say, "the neurons in your brain do not understand, therefore you do not understand".

Understand?


Not flawless

Post 2

Lear (the Unready)

Yes, very clearly. As it says in the footnote, Searle's argument is that, in his Chinese Room, neither the system nor any of its constituent parts has any understanding at all of what they are doing. The comparison with the brain is therefore irrelevant - in the brain, as you say, while the individual neurons have no understanding of what they are doing, the individual still has conscious intelligence.

So the systems theory objection to Searle is - now, how can I put this? - a load of codswallop.



Not flawless - I agree

Post 3

Dredwerker

I agree that neurons don't have an understanding, just as ants in an ant hill don't have an understanding. See Hofstadter.


Not flawless

Post 4

Gnomon - time to move on

Searle claims that the system doesn't understand. He gives two reasons, both equally spurious:

1. The individual components don't understand so the total system cannot. This is totally illogical.

2. The system couldn't possibly understand because that would be ridiculous. Equally illogical.

If you look at the Hofstadter references in detail, you will find that he agrees with me. Searle's argument is just a way of presenting woolly thinking in a guise that covers up the logical leap.


Not flawless

Post 5

Lear (the Unready)

Gnomon, let me repeat what I said in my previous posting, which presumably you didn't read very carefully. Searle does not argue that "The individual components don't understand so the total system cannot," as you put it. Searle argues that, in his Chinese Room, *neither* the individual components *nor* the system as a whole 'understands' anything. Whereas, in a human brain, while the component parts do not understand anything, the system as a whole demonstrates conscious intelligence. We don't know how, yet, but it does. Maybe we'll figure it out some day. Then maybe we can start trying to build machines that emulate or even supersede human intelligence. But until then it is desperately arrogant to suggest that these little number-crunching machines we call computers are anywhere near matching the complexity of the human brain.

Of course Hofstadter is going to disagree with Searle. Hofstadter is a behaviourist with a tremendous amount of intellectual capital invested in the notion that logic can answer every problem under the sun. What you call 'woolly thinking' is actually an attempt to persuade such people that what we think of as conscious, rational intelligence involves other factors besides logic. Various examples are given in the article - metaphor is one. Lateral thinking may be mentioned in the article as well - this would constitute another example of rational thinking that 'leaps' beyond mere logic. Then of course there is emotional intelligence - the ability to step outside one's own limited worldview and understand the world from the perspective of other people.

Machines cannot do any of these things. They can crunch numbers very quickly. I think it's really quite brainless to pretend otherwise.


Not flawless

Post 6

Smiley Ben

As a founding member of the Cambridge University Searle Society (how's that for credentials?) I thought I'd point out why Searle is totally wrong. There are two major problems with Searle's argument.

1) The first problem is that, according to Turing, the only set of rules that would allow someone in the Chinese Room to respond correctly to any question given to him would be an intelligent one. Any algorithm capable of parsing speech, interpreting exactly what question is being asked, and giving an answer as well as a human could would be very complicated indeed - many believe creating such an algorithm just isn't possible with a computer. Bear in mind that people use exactly the same words in VERY similar contexts to mean a whole host of different questions - and these are just the sorts of things that give the computers away (and do give them away, whenever such a test is done) when people attempt to have a continuous conversation with them, since it is precisely the /development/ of a conversation that we're all used to dealing with, and tend to get a lot of practice at. Context makes it much easier for people to understand what answer is wanted for a question - but that is because they have VERY complicated ways of dealing with contexts; for want of a better word, 'intelligent' ways. It may be that programming such a gift into a computer is simply not possible.

2) The second problem is far more serious than the first. Searle hasn't given an argument; he has just restated his denial of the opposing view at greater length. There are, of course, many who believe that a computer could be built that would pass Searle's Chinese Room test, but the usual conclusion drawn from this is not that we have created life, imbued computers with souls, played God, etc. - it is that humans themselves are merely (granted, VERY complex) Turing machines.

The jury is still out on what a 'mind' is (it's a whole area of philosophy), but in recent years Functionalist accounts have tended to hold the upper hand - accounts that claim we are simply our bodies and that our brains control our movement - and frequently this is tied in with the suggestion that there is no such thing as free will, only the appearance of it. If the make-up of our brains (or perhaps our genes), combined with experience, always produces the same result, then it seems fairly plausible that we are Turing machines with a well-seeded randomizer.
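To make 'a Turing machine with a well-seeded randomizer' concrete, here is a minimal Python sketch - the seed and the 'experiences' are invented for illustration; the only point is that the same seed plus the same inputs always produce exactly the same behaviour:

    import random

    def run_life(seed, experiences):
        # The 'well-seeded randomizer': the same seed (brain make-up,
        # genes) plus the same experiences yields identical choices.
        rng = random.Random(seed)
        return [rng.choice(["act on " + e, "ignore " + e]) for e in experiences]

    # Identical make-up plus identical experience gives an identical life.
    assert run_life(42, ["rain", "praise"]) == run_life(42, ["rain", "praise"])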

Searle misses this point totally - he fails to see that the more obvious reply to the Chinese Room is to ask, 'If a computer can do it, how can we claim to be any more than a complex computer?'. I'd say we can't, and that the burden is on the Searleans to show why we could. 'Because we understand and they don't' is no answer at all, because understanding appears to be defined as the property that allows someone to correctly address any question about a topic - and if that is the case, then computers clearly /have to/ understand before they could possibly pass Searle's test.


Does that clear that up? As with the rest of Searle's arguments, this appears fatally flawed.


Not flawless

Post 7

Gnomon - time to move on

I think Smiley Ben has said this, but in answer to Lear, I'm going to repeat it. Searle does not argue that the Chinese Room doesn't understand what's going on. He just states it. He seems to think that it is ridiculous and therefore does not need to be proved. But this is the whole point. Nobody denies that the individuals in the room don't have a clue. Searle deliberately uses people who don't speak Chinese to point this out. He could just as easily have used stupid robots. But he assumes that the room itself doesn't understand what's going on and deduces from this that the room doesn't understand what's going on, so it is different from a human brain. I'm not saying the room does understand. I'm saying that this argument throws no light on the subject.


Not flawless

Post 8

Martin Harper

We have a CU Searle Society? Clearly people have too much time on their hands... ;-)

But it is interesting how much people want to believe that they are somehow different from machines... I thought I'd go one better than those who argue that humans are no more complicated than a Turing machine - and say that we are no more complicated than a lookup table.

Consider a table which maps inputs to a specific output, where the inputs are "all external input up to some time X - hearing, seeing, feeling, etc." and the outputs are "the action taken at time X", where an action may be "50% chance of A1 and 50% chance of A2". Such a table could, if it were filled with the correct values, perfectly simulate a human with a lifespan of 20,000 years or less, and still be finite.
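A toy Python sketch of the idea - every entry here is invented, and a real table would of course be astronomically large, but the shape of it is:

    import random

    # A toy lookup-table 'mind'. The key is the entire sensory history
    # up to time X; the value is a probability distribution over actions.
    table = {
        ("hear: hello",): [("say: hi", 0.5), ("wave", 0.5)],
        ("hear: hello", "see: smile"): [("say: nice to meet you", 1.0)],
    }

    def act(history):
        # Look up the full input history, then sample an action -
        # e.g. "50% chance of A1, and 50% chance of A2".
        actions, weights = zip(*table[tuple(history)])
        return random.choices(actions, weights=weights)[0]

    print(act(["hear: hello"]))  # 'say: hi' or 'wave', 50/50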

But hugely impractical... ;-)


Not flawless

Post 9

Smiley Ben

Gnomon - that's what I'm saying in point 2. Point 1 is that it seems unlikely that the person in the room /could/ do what Searle suggests without understanding Chinese.

And Lucinda, CUJESS (Searle Soc) has had at least FOUR formal halls at Churchill (amongst other places) - KEEP UP!!!


Intelligence by lookup table

Post 10

Gnomon - time to move on

I'd hate to be the one that had to fill in the values of the lookup table. It would need a God to think of all the possible values, because we humans never cease to amaze each other with the surprising things we come up with. Yet the amazing thing is that most of us are up and running and talking back in about 2 years without any divine intervention.


Intelligence by lookup table

Post 11

Spoadface

I know this is a little late to join in with the argument here, but I think the problem with Searle's statement is that he is arguing against an algorithm having intelligence - which seems fair enough.

His later assertion - that because there is no algorithm for intelligence, there can be no artificial intelligence - is, I think, a false one.

If AI is ever going to come to the fore, it will be because we have been able to model a system on existing, natural structures - these structures would generate intelligence as a function of their complexity, not out of some kind of 'program'.

As such, the idea of intelligence by lookup table is indeed a silly one - but that should be obvious. The secret lies in emergence.
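To put some flesh on 'emergence', here is a standard toy example (not one from the article or from Searle): Wolfram's Rule 110 cellular automaton, in a few lines of Python. Each cell follows a trivially simple local rule, yet the global behaviour has been proved Turing-complete - the complexity does the work, not any clever 'program':

    # Rule 110: a cell's next state depends only on itself and its two
    # neighbours, yet the global behaviour is Turing-complete.
    RULE = 110

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 63 + [1]  # start from a single live cell
    for _ in range(20):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)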


Intelligence by lookup table

Post 12

Lear (the Unready)

Hello Spoadface...

It's a while since I wrote this article, but if I remember rightly Searle was never arguing against the *possibility* of artificial intelligence. His target, really, was the Turing Test - the frankly ridiculous notion that something can be said to be 'thinking' simply because it can fool a human observer. He was pointing out that simulating certain very narrow aspects of human thought is hardly the same as emulating it.

I think Searle would pretty much agree with what you say, about intelligence being a function of complexity rather than simply a case of mindlessly following a set of instructions. And I think a lot of AI researchers would as well. I do, too. As I wrote in the article, "If machines of the future are to show intelligent life, they will have to be qualitatively different from anything we have ever attempted to build before. They will have to be self-organising and essentially 'animate', capable of evolving to meet the changing needs of their environment." Not machines at all, then, really, by any conventional definition...


Thanks for posting,

Lear


Intelligence by lookup table

Post 13

Smiley Ben

"His target, really, was the Turing Test - the frankly ridiculous notion that something can be said to be 'thinking' simply because it can fool a human observer."

Well, gosh. Given that that's the only basis on which anyone reading this would credit you, or Searle, or me with intelligence, I'd watch where you're pointing your 'ridiculouses'!

There's a question that is famously asked about functionalism (particularly about the mind): 'Is the functionalist committed to the suggestion that a mind can be constructed out of beer cans and string?'. Given that we can construct a Turing machine out of these things, and Turing machines could act, for all intents and purposes, intelligently, how can a functionalist deny that we could make a mind out of them? The strength of the argument lies in the common-sense belief that no amount of beer cans and string is going to make an intelligent being.
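To see how little a Turing machine actually asks of its substrate, here is a minimal Python sketch (the 'flip' program is a made-up example) - nothing in it cares whether the tape and states are realised in silicon, neurons, or beer cans and string:

    def run_tm(program, tape, state="start"):
        # Minimal Turing machine: 'program' maps (state, symbol) to
        # (new_state, symbol_to_write, head_move); None is the blank.
        cells, head = dict(enumerate(tape)), 0
        while state != "halt":
            state, cells[head], move = program[(state, cells.get(head))]
            head += move
        return [cells[i] for i in sorted(cells) if cells[i] is not None]

    # A made-up one-state program: flip every bit, halt at the blank.
    flip = {
        ("start", 0): ("start", 1, +1),
        ("start", 1): ("start", 0, +1),
        ("start", None): ("halt", None, 0),
    }
    print(run_tm(flip, [1, 0, 1]))  # -> [0, 1, 0]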

Sadly, I'd taken another paper in place of the one in which they actually asked that question, so I didn't get a chance to see if I dared give the obvious answer: 'Is the non-functionalist committed to the suggestion that a mind cannot be constructed out of neurons and chemicals?'. Basically, the functionalist can give a good answer when asked what it takes for something to be considered intelligent - that it should act intelligently. So the functionalist can explain why we credit people with intelligence. But Searle et al cannot. It's no use saying 'Well, clearly people are intelligent and clearly the Chinese Room isn't' - that's no argument at all...


Intelligence by lookup table

Post 14

Gnomon - time to move on

So is it only me and Douglas Hofstadter that see through the farce that is Searle's Chinese Room?


Intelligence by lookup table

Post 15

Smiley Ben

I'm agreeing with you, dagnamit.

And I assure you that with one mad goth exception the CU Searle Society is 100% behind you. And really does exist just to mock him. Honest.


Intelligence by lookup table

Post 16

Gnomon - time to move on

Sorry, Smiley Ben! I should have read your reply more carefully. It was really Lear's reply that prompted my remark.

All of Searle's argument seems to be "we all know that a ROOM can't be intelligent, therefore the room isn't intelligent". And this rubbish is repeated in book after book on intelligence. Hofstadter is the only person I've seen who has stuck his neck out and said, not just that he doesn't agree, but that Searle is completely off track.


Intelligence by lookup table

Post 17

Smiley Ben

Oh don't worry, lots of people do disagree with him. I think many feel that it's so obviously wrong that they can't bring themselves to argue against it. I mean, how do you really argue with:
1) People are intelligent, duh.
2) Chinese rooms aren't intelligent, duh.
Therefore:
3) Chinese rooms can't be people, DUH!

(Though to be fair, his argument only really gets to point 2...!)


Intelligence by lookup table

Post 18

Joe Otten


OK, so resurrecting this thread again...

What the anti-Searle arguments don't seem to admit is the possibility of a difference between simulation and emulation.

Flight simulators don't fly, and Turing-Test-passing software (if it can exist) doesn't think.

There seems to be an assumption of equivalence going on here between thinking and number-crunching. There may be a necessary functional difference, and there is certainly a subjective difference, in that I would object to being killed and replaced by a functionally equivalent machine. (Such an objection would only be irrational if I believed in this functionalism.)


Intelligence by lookup table

Post 19

Gnomon - time to move on

And presumably, a functionally equivalent machine would object to being decommissioned and replaced by you. :-)


Intelligence by lookup table

Post 20

Joe Otten


Yes, by definition. So what?

