A Conversation for John Searle's 'Chinese Room' Argument

Nice article...

Post 1

Azimuth

Saw your post on the 'Peer Review' page and thought I'd pop over. You asked for comments, so here they are smiley - winkeye

I think the first important point to mention is that there are considered to be two 'levels' of artificial intelligence. 'Strong' AI is the one you discuss in your article; if it was demonstrating strong AI, a computer could truly be said to be thinking. 'Weak' or 'soft' AI, on the other hand, is a simpler affair; the computer just appears to be thinking. The Turing test can only indicate that a computer can demonstrate 'soft' artificial intelligence.

As far as my research goes, Searle's argument was a direct rebuttal to research performed by Schank and Abelson in 1977. Schank had written a computer algorithm to analyse simple stories in plain English; this program would then allow the computer to answer a simple question about the story with a 'yes' or 'no' that would make the computer appear to have understood the story. Searle was suggesting that he could replicate the algorithm without having understood the story, which is fair enough. However, it's very easy for us to say that, as humans, we 'understand' the story, but what do we mean when we say 'understand'? The neurons in our brains can't possibly understand individual concepts, but the way in which they interact creates the impression of comprehension. How many neurons do you have to have before this happens? If we were able to biologically clone a brain cell-for-cell, would we have cloned the way that it thinks? Do we all 'understand' one particular concept in the same way? Does the language you speak influence the way you think?
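
Just to make the flavour of that kind of program concrete, here's a toy sketch (purely illustrative - it's not Schank's actual algorithm, and the 'restaurant script' and its rules are invented by me): the program matches phrases in the story against a canned script and answers yes/no without anything you would want to call comprehension.

```python
# A toy, invented sketch of a script-matching question answerer in the spirit
# of Schank and Abelson's story programs (not their actual system).

RESTAURANT_SCRIPT = {
    # events the script recognises, and what each one is taken to imply
    "ordered a hamburger": {"order food": True},
    "ate it all": {"like the food": True},
    "stormed out without paying": {"like the food": False},
}

def answer(story: str, question: str) -> str:
    """Answer a yes/no question by looking up inferences triggered by the story."""
    inferences = {}
    for event, implications in RESTAURANT_SCRIPT.items():
        if event in story:
            inferences.update(implications)
    # crude keyword match between the question and the inferred 'facts'
    for fact, value in inferences.items():
        if fact in question:
            return "yes" if value else "no"
    return "don't know"

story = "A man went in, ordered a hamburger, and stormed out without paying."
print(answer(story, "Did the man like the food?"))   # -> "no"
```

The point, of course, is that the program produces a plausible answer while 'understanding' nothing at all - which is exactly the sort of thing Searle is needling.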

I'm not sure about the paragraph that describes simulation versus emulation. You suggest that computer simulations of fire or water wouldn't burn or make things wet; of course, they wouldn't in the physical world, but if your simulation is complete enough, then objects would burn or get wet in the simulated world. I'm involved in molecular simulation; obviously when we run a calculation to simulate a ligand docking with a protein it's a completely theoretical calculation that has no effect on the physical world, but if our parameters are accurate enough, that simulation can precisely mimic the real-world event. It all depends on your point of reference...

You've written a good, thought-provoking article (the alchemy reference is a little provocative, though - one of the major skills of modern science is in approaching a problem you don't understand in a rational and systematic way that allows you to break it down piece by piece - not strictly equivalent smiley - winkeye); it would definitely be interesting to pair it up with one directed more towards 'weak' AI. An additional reference to those who are interested is "The Emperor's New Mind", by Roger Penrose (Vintage Books, 1990).

Anyway, good luck, and I hope it gets accepted!

Azimuth


Nice article...

Post 2

Lear (the Unready)

Azimuth,

Hi, thanks for the comments. Yes, Searle himself makes a distinction between 'strong' and 'weak' AI, but I don't think this is of too much significance for the purposes of the Chinese Room argument. His argument is clearly directed at the strong AI claims that a machine can be said to understand and have cognitive states. 'Weak' AI seems to amount to little more than a recognition that computers can provide a useful aid to human intelligence - an unexceptionable claim that doesn't really need to be refuted because it doesn't really say a lot to begin with.


>'The neurons in our brains can't possibly understand individual concepts, but the way in which they interact creates the impression of comprehension. How many neurons do you have to have before this happens? If we were able to biologically clone a brain cell-for-cell, would we have cloned the way that it thinks?...'

You seem to be relying here on the 'systems theory' objection to Searle's argument - the notion that, while the individual components of a system do not 'understand' what they are doing, the system as a whole does. Searle's response to this is that, even if he were to tackle, on his own, *all* of the instructions that led to the production of correct answers to the Chinese questions - in other words, if he took on all the aspects of the system by himself - he would still have absolutely *zero* understanding of what he was doing. But if he were doing the same things in English, he would of course be able to understand what he was doing. Therefore, thinking in terms of systems as a whole, or of subsystems creating a whole, does not make any difference if there is no possibility of understanding to begin with.

As I say in the article, I think the crucial difference between the machine and the sentient mind is that the latter is more than merely a complex set of neurons arranged together in a network. A sentient being has a mind that will continue to evolve, organically, throughout its lifespan. It will develop and change as it learns from experience and evolves to meet changing circumstances. As far as anybody seems to be aware, machines are simply not capable of doing this at present.


I agree that the 'simulations' / 'emulations' stuff doesn't read too clearly, and could either do with editing or else replacing with something a bit more specific. I think, though, that Searle was basically trying to illustrate the limited nature of mechanical 'ingenuity' in comparison with the varied complexity of human intelligence. The logical / problem-solving approach is just one of many varieties of intellectual resource that the human mind is capable of drawing on. Ask yourself, for example, why Searle chose to express his argument by way of analogy. This is precisely the kind of intelligence - metaphorical intelligence, understanding something by way of comparison with other things - that the human mind uses habitually, but which a machine has no grasp of whatsoever. A machine would not even have any way of understanding the 'Chinese Room' argument - the analogy simply wouldn't compute!... smiley - winkeye


The alchemy reference wasn't really meant to be technically accurate. I was using alchemy as a metaphor (another analogy) to show how, in my opinion, the dream of AI (strong AI at any rate) is comparable to many other great 'pipe dreams' that humanity has been prey to over the centuries. Maybe I could reword it a bit so it sounds less harsh...


Once again, thanks for your comments.

All the best, Lear.


Nice article...

Post 3

IanG

I don't think Searle's refutation of the 'systems theory' argument holds any water actually. The fact that he may be tackling *all* of the instructions on his own doesn't make any difference. What this fails to understand is that the thing that follows a set of rules and maintains some state is a totally different thing from the behaviour that emerges from those rules and that state. I would find it hard to see how anyone who has ever written a moderately complex computer program could not come to this conclusion. The program is one thing, and the machine that executes it is another, but the resulting behaviour is a third thing, an artefact that is clearly related to the program and the machine, but also quite clearly something different from them. Likewise, the brain is not the mind; the brain is what enables the mind to exist. You say that you "think the crucial difference between the machine and the sentient mind is that the latter is more than merely a complex set of neurons arranged together in a network". I agree with that entirely, but I would point out that in this context "the machine" could just as well refer to the brain as to a computer. So, to return to Searle: he might not have any understanding of Chinese, but the system he is hosting does; changing who or what is responsible for the mechanics of it all is irrelevant, because it's not the mechanism that is intelligent, it's the mind.

It would be interesting to hear Alan Turing's reaction to this argument, but I'm guessing that he was dead by the time Searle proposed it? From what I've read about Turing (and in particular about his take on the so-called Turing test) I suspect he'd be somewhat nonplussed and wonder why anyone thought that this Chinese room added anything to the debate. As I understood it the whole reason he suggested the Turing test in the first place was because he thought the mechanical specifics were a matter of supreme irrelevance, and that the *only* thing that mattered in determining 'intelligence' was the nature of the observed behaviour; to suggest otherwise was to imbue 'intelligence' with a somewhat mystical but unmeasurable (and hence not exactly scientifically sound) quality. The Chinese Room argument is nothing more than the Turing Test couched in different terms, and is in fact trying to make exactly the same distinction as the Turing test was (although Turing made the distinction to highlight the fact that he thought it was irrelevant, so their motivations were totally opposite even though the two thought experiments are effectively identical).

The notion that computers cannot evolve and adapt seems nonsensical, and contrary to experience - I just installed a new piece of software this evening, and now my machine is capable of doing stuff it couldn't before. Clearly this is a world apart from how evolution produces progress in the natural world - this didn't require natural selection (nor millions of years for that to work over). Granted, the logic of any single program is typically mostly hard-wired, and the only way a machine learns new behaviour is through the highly specialised stimulus/response behaviour that most PCs are capable of, known as 'installing new software'... But (a) this clearly demonstrates a capability to learn new tricks - there is no denying that today I can do things with my computer that I couldn't do yesterday, and (b) don't forget that really rather a lot of the brain's functionality gives every impression of being hard-wired: look at just how consistent languages across the world turn out to be - there are a huge number of grammatical rules that are totally consistent across *all* languages, and such variations as there are appear to come from a relatively small fixed set of options.

Don't get me wrong - I don't think today's computers are intelligent. But I don't buy arguments that say they cannot possibly be (any more than I buy into arguments that say that they definitely can be - I think the jury's still out), and in particular I don't buy the argument that says Searle's Chinese room isn't intelligent. The machine isn't, but the machine isn't the issue; it's what it hosts that matters. (E.g. the mind, in the case of the brain. And whatever name you care to give to some entity self-evidently capable of intelligent behaviour which happens not to be hosted by a human brain.) The only thing that's not really clear with the Chinese room is whether it is actually possible to build such a thing. If such a thing is possible to build, then refusing to call it 'intelligent' simply means we have to invent a new word that means 'apparently intelligent but not human'.


One other comment on the article: "The current direction of AI - developing neural networks..." I think the vast majority of AI researchers would go apoplectic if they read that! Neural networks are but one area of AI research. Personally I'm far more interested in linguistic areas of AI research - anyone who's taken a detailed look at a compiler for a programming language (and today's compilers owe a *huge* debt to Noam Chomsky, and have taken advantage of a lot of early AI research; of course because these techniques are now well understood and widely used, nobody considers them to be evidence of intelligence smiley - smiley) would be happy to say that the compiler "understands" the program it is compiling. It doesn't understand what it's for in any grand scheme, but then it doesn't really need to. Now, use of language such as "understands" could be dismissed as part of the tendency of computer science types towards personification. But even this is no accident - in this case a proto-understanding seems hard to deny once one has understood not only the amount of interconnected and context-aware knowledge the program accumulates, but the fact that it is able to act on this knowledge and make deductions about the program's behaviour. Of course people usually shout "but it's just following rules", to which the only possible response is "Yes. So what?" Also it's clearly highly domain-specific understanding - a compiler can't create anything outside of what it's meant to do, but again, so what? It's not trying to be a complete mind, it's designed to solve a particular problem. What's wrong with saying that it understands the problem at hand?
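
To make the 'proto-understanding' point a bit more concrete, here's a toy sketch (invented for illustration - no real compiler is this crude) of the kind of deduction I mean: the program accumulates facts about a tiny piece of code and uses them both to simplify it and to notice a branch that can never run.

```python
# A toy, invented illustration of compiler-style 'deduction': constant folding
# and dead-branch detection over a tiny program representation.

def fold(expr, env):
    """Reduce an expression to a constant if the accumulated facts allow it."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):                      # a variable name
        return env.get(expr, expr)
    op, left, right = expr                         # e.g. ("+", "x", 2)
    left, right = fold(left, env), fold(right, env)
    if isinstance(left, int) and isinstance(right, int):
        return {"+": left + right, "*": left * right, "<": int(left < right)}[op]
    return (op, left, right)

def analyse(program):
    """Walk the program, accumulating facts (env) and deductions (notes)."""
    env, notes = {}, []
    for stmt in program:
        if stmt[0] == "assign":                    # ("assign", name, expr)
            _, name, expr = stmt
            env[name] = fold(expr, env)
        elif stmt[0] == "if":                      # ("if", condition, body)
            if fold(stmt[1], env) == 0:
                notes.append("branch never taken: " + repr(stmt[1]))
    return env, notes

prog = [("assign", "x", 2),
        ("assign", "y", ("+", "x", 3)),
        ("if", ("<", "y", "x"), ["..."])]
print(analyse(prog))   # deduces y == 5, and that the if-branch is dead code
```

Just following rules, certainly - but the rules leave it holding a web of facts about the program and able to act on them, which is all I mean by a proto-understanding.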


Nice article...

Post 4

Lear (the Unready)


>'it's not the mechanism that is intelligent, it's the mind' :-

The truth is we know so little about the mind that we can't really say how it works or what is in it, or even what it *is*, for that matter. I don't quite follow. If you're saying that the intelligence is not in the physical entity but in the 'mind', then how can it be ascertained through means of empirical observation? You seem to me to be undermining your own argument here, because you clearly depend on the notion that intelligence *can* be measured.

An insubstantial, non-material 'something or other' that, nevertheless, is the repository and source for all intelligent thought. Hmmm... A bit 'mystical' and 'unscientific', n'est-ce pas? And yet you're saying this intelligence can be measured empirically. I think there is a contradiction in here...


>Turing :-

Searle's 'Chinese Room' argument was conceived to prove a completely different point from the Turing Test. The premise of the latter is that, if a machine can 'fool' a human being into thinking it is also human, then to all intents and purposes it *is* thinking. Searle's argument was conceived with the intention of refuting this very limited, 'behaviourist' concept of thought, by making the distinction that simulating intelligence is not necessarily the same thing as emulating it. As far as I can see, the two viewpoints negate one another.

Turing died a long while ago, by the way, in 1954.


>evolution / adaptation :-

I don't think your comparison is valid. Installing new software into a computer will make it perform different tasks, but the computer has not 'learnt' these new tasks as a result of any experiences it has had in the real world - it has not 'evolved' these skills. When I learn a new task I learn it because I need to, because there is some factor in my environment requiring me to make some adjustments to my abilities. I am the one who decides whether I want to do this, how best to go about it, and so forth. The computer, by comparison, has simply been given, by its owner, a new set of instructions to follow, and it follows them whenever its owner presses the buttons that tell it to do so. I don't see how you can really compare the two scenarios.


I'm not saying that I don't think machines will ever be able to display 'intelligence' in a form comparable to the human mind. But there seems to be a long way to go before they can match the sheer variety of activities that can go together to comprise human thinking. As I said above, the ability to think logically and solve problems is merely one aspect of human intelligence - we also draw on other resources for understanding, for example metaphorical intelligence, the ability to understand by means of comparison. Another example is lateral thinking, the ability to make connections between apparently unrelated events and arrive at new insights as a result of this. The computer, as far as I can see, has nothing of any of this - basically, all it can really do is follow a pre-set sequence of instructions very quickly and very efficiently (actually, this doesn't apply to my computer smiley - winkeye )...


Nice article...

Post 5

IanG

To clarify my point of view, I think that intelligence can definitely be ascertained empirically. (At least I don't have a philosophical problem with that point of view... Not sure I could come up with a convincing test other than Turing's.) What I was trying to say (not very well, evidently smiley - smiley) is that the empirical evidence is all that matters - I don't think the mechanism is particularly important when it comes to saying "is this intelligence or not?"

If you think that insubstantial non-material something or others are in any sense mystical, then either you don't work with computers much, or you have an utterly different understanding of how they work from mine! What I'm saying is that there can be something which is very real but which does not have any *direct* physical manifestation. To keep it concrete, let's examine the state of a computer program (e.g. Internet Explorer 5.5, which I'm using right now). The physical bits and pieces in my machine are exactly the same as they were before I installed IE 5.5 a few weeks ago; there's no single thing I can point to and say "That there is IE 5.5". I can point to icons on the screen, but these are clearly ephemeral - I can just drag a window over them, and suddenly their physical manifestation has gone away. Likewise, the web page I'm looking at doesn't really exist in any direct physical sense - a program on the h2g2 servers generated some data that represents what the page is, my web browser processed this, and built a whole model of this internally. Now from a user's point of view this page feels real - I can read it, I can move it around, I know that if something scrolls off the top it will still be there when I scroll it back down. However nowhere will you find *anything* that is a direct physical representation of what I experience as 'the page'. The closest thing is what's on screen - this is indeed a direct physical representation of the bit of the page I'm looking at right now. However, as a programmer, I know that this is just the result of running a bunch of algorithms on some diverse bits of data.

The whole point of IE5.5 is to make this idea of a page seem real to the user. But it is an abstraction. There isn't any single real thing that is 'the page'. There are lots of pieces of knowledge about the page that the program has assembled internally that it can use to draw what looks like the page, but the page itself isn't real. And yet to anyone who uses a web browser, it's patently obvious that the page is real.

This is what I mean by an insubstantial non material thing that is nevertheless very real. Not a bit mystical, surely. (Unless you have some bizarre theories about how Internet Explorer works. smiley - smiley)
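
If it helps, here's a toy sketch of what I mean (purely illustrative - obviously nothing like how IE actually works): the 'page' exists only as a data structure, and its only physical manifestation is whatever the drawing routine happens to produce from a slice of it at any given moment.

```python
# Purely illustrative: the 'page' exists only as an internal model; what you
# 'see' is produced on demand by rendering a slice of that model.

page = {                      # the insubstantial-but-real thing
    "title": "A Conversation about the Chinese Room",
    "lines": [f"paragraph {n} ..." for n in range(1, 101)],
}

def render(page, top, height):
    """Produce the only 'physical' manifestation: the currently visible slice."""
    visible = page["lines"][top:top + height]
    return "\n".join([page["title"], "-" * 20] + visible)

print(render(page, 0, 3))     # scroll position 0
print(render(page, 50, 3))    # scroll down: the rest was 'there' all along
```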

Flitting freely between the world of the abstract and the physical world is second nature for most people who are seriously into how computers work. But it's not just computing - it pervades most engineering (although perhaps less vividly). Take for example a car engine. This is clearly a physical thing - lots of bits of metal, plastic and ceramic. But the running of a car's engine (i.e. what it does and how it behaves when it runs) is clearly not the engine itself - it's a very dynamic system with its own set of characteristics, as anyone who has tried to coax a knackered 15 year old engine into life on a cold morning with careful balancing of the throttle and choke will know. This is a somewhat insubstantial, but very real system. The behaviour of a spinning gyroscope is clearly a different thing from the gyroscope itself. The music on a CD is clearly a different thing from the pits and grooves on the CD itself. The behaviour of a car skidding sideways around a particular corner is neither the car nor the road. Behaviour is real and important, and whilst it is enabled by certain physical items, it goes beyond them, but is still subject to scientific analysis. I could go on for ever with further examples, but I hope you get the point.

For me, the fact that the Chinese room argument exists is simply testimony to how reluctant a lot of people are to let go of the idea that for something to exist they have to be able to see it, and to the related tendency to equate an object with its behaviour. It shows that Searle totally missed the point of Turing's argument.


I know Turing died a long time ago, so I was mostly speculating as to what he would have said, although you said that Searle proposed this in the 1950s, so I wasn't sure if Turing might have been around at the time.

How would you characterise the difference between imitation and emulation then? My dictionary lists 'imitate' as a synonym for 'emulate'. If Searle is rejecting a behaviourist analysis, what exactly is he suggesting is put in its place? What does he think intelligence is if it's somehow something more than mere observable behaviour?


re: evolution/adaptation.

Woah! You just mischaracterised my argument! I did *not* claim that installing software means the machine learns anything. Let me restate my case: you said (if I understand you correctly) that machines cannot learn because they cannot evolve. I was arguing against that point of view by saying that they can evolve. I wasn't saying anything about learning. To put it another way, try this: if machines cannot learn, it has nothing to do with evolution - they are quite capable of changing their behaviour. (I don't think natural selection would count as 'learning' either by the way.)

Computers are mostly configured to do what they are told when buttons are pressed. Anyone who's done any low level systems development will be all too aware of how easy it is to make a computer totally fail to respond to any instructions issued by the user though. And it's also very clear that a computer can be instructed to both ignore user input, and also to start modifying its own behaviour, at which point it is pretty much in control of its own destiny until someone switches it off. In fact programs do this all the time (due to bugs), and the usual result is that the program crashes (with the second most popular result being that it doesn't affect the program's behaviour in any discernible way). Self-modifications that are interesting and open-ended tend not to happen by accident (although research projects have shown numerous examples where this kind of self-modifying and self-perpetuating behaviour can be contrived), just as the vast majority of genetic mutations in simple lifeforms are fatal. Certainly the popular view of following a pre-set sequence of instructions is deeply misleading - the quantity of hard-coded program in any computer is typically minuscule - the vast majority is up for self-modification. (And computers wouldn't work if it wasn't.)
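
Here's a trivial, made-up sketch of the sort of thing I mean - a program that rewires its own dispatch table while it's running, so that the behaviour originally 'installed' is no longer what gets executed:

```python
# Illustrative only: a program that modifies its own behaviour while running.
# After the rewrite, the code originally 'installed' is no longer what runs.

def polite(text):
    return "You said: " + text

def rude(text):
    return "WHAT?"

handlers = {"reply": polite}          # the program's current behaviour

def run(command, text=""):
    if command == "rewire":           # the program alters its own dispatch table
        handlers["reply"] = rude
        return "(behaviour modified)"
    return handlers["reply"](text)

print(run("reply", "hello"))          # -> You said: hello
print(run("rewire"))                  # the program changes itself
print(run("reply", "hello"))          # -> WHAT?
```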

Anyway, I'm going down a rathole here, because I'm talking about evolution in the context of natural selection. I think the key discriminator between the sub-intelligent behaviour current technology is able to extract from machines and intelligent behaviour as we recognise it in people has nothing to do with natural selection. The key seems to be the ability to direct one's own thought processes. I would agree that nobody has worked out how to get a computer to do this yet. I remain unsure as to whether anyone will, but I suspect that a lot of it has to do with the vast amount of context people have (usually known as common sense) that enables them to recognise old patterns in new guises. I'm always sceptical about the genuine capacity for 'new insight' - I've long been a believer that there is nothing new under the sun, and have yet to be disabused - I think most 'new' ideas are old ideas in a new context. smiley - smiley


Nice article...

Post 6

Lear (the Unready)

Ok, first things first. Searle's argument was published in 1980. If you read the first sentence more closely you'll see I was saying that the Turing Test was devised in the 1950s, not Searle's response to it.

>simulation vs emulation :-

I accept the point that the 'virtual world' of a computer (can I use the term 'cyberspace', just for the sake of shorthand?) has a reality about it - I find this as fascinating as anyone else, and am interested in looking at the influence 'virtual worlds' (eg online 'communities' such as h2g2) are having on our notion of what is and what isn't 'real'. Don't get me wrong, I'm not a hard-headed old luddite here.

But what is the basis for assuming that simply because a computer program - eg, Internet Explorer 5.5 - has what you rightly describe as a non-physical but nevertheless *real* existence, it therefore can be said to be in some way comparable to the human mind? This is a 'leap of faith', in my opinion, moving from one idea to another without there necessarily being any substantial reason for doing so. I feel there is a kind of sophistry to this argument - it's as though you are trying to gloss over the distinction that I have been trying to draw all along here (and that is the basis of Searle's argument) - namely, that to simulate intelligence does *not* prove that intelligence has actually been achieved (ie, emulated).

In fact, what you're really doing here (though you will probably go apoplectic again when I say it! smiley - tongueout ) is thinking in terms of analogy and metaphor, rather than logical deduction. It is an analogy attempting to understand the largely unknown (the 'intelligence' or otherwise of a computer) by means of reference to something we already have experience of (human intelligence). The computer is *like* the mind, metaphorically speaking, in the sense that when you feed a software program into it, it will produce results which are substantially different. But this does not demonstrate that it actually *is* a mind.


>evolution/adaptation :-


I'm still not convinced by the idea that a computer can learn from experience in the same way that a human mind can. Fair enough, when you install new software its behaviour becomes modified. But I think this is still substantially different (not just a long way away from, but *substantially different*) from what a human does in order to survive and prosper. Actually, you got my argument the wrong way round - not 'machines cannot learn because they cannot evolve', but rather 'machines cannot evolve because they cannot learn'. I don't want to sound pedantic but it is a distinction worth noting. To me, a species that is capable of evolving is precisely the one that is capable of learning - so that it can apply its knowledge, the fruits of its life experience and the experience of its species, in its struggle to adapt to its changing environment and hopefully survive and prosper within it.

And the key point is that the human mind does this autonomously. (OK, yes - we're hard-wired with certain features, etc, and our autonomy is therefore limited, but we are capable all the same of making conscious decisions and taking actions according to our judgement of the best way to proceed in a situation). You argue that machines 'are quite capable of changing their behaviour', and my point here is simply that they are not. Their behaviour is *being changed* - that's the crucial difference. They are capable of performing tasks that they have been programmed to perform (as long as they have the capacity, naturally). To say they are capable of 'changing their behaviour' implies intentionality, and computers do not have intentionality. Humans, by contrast, do have such a facility...


End of rant. Thanks for reading. smiley - winkeye


Nice article...

Post 7

IanG

Ah, OK, misread the article - I thought Searle's argument was older than that - sorry! However since my initial assumption was that it postdated Turing's death I stand by my right to make up what I think Turing's response to it might have been. smiley - smiley

Well I think it is fair to say that a computer program is "in some way comparable to the human mind" so long as you don't think it's a direct analogy. I don't think computers are intelligent. My main point though is that I don't think it's necessary to assume that they are (or even can be) to reject Searle's argument. (And it's rejection of Searle's argument that I'm interested in here, rather than the issue of whether computers are or can be intelligent or not.) It is this distinction between the physical environment in which a system exists and the system itself that is important. I merely use computers (amongst several other things) to illustrate that this distinction is real, and that it is not in any sense mystical. Searle's argument effectively requires you to accept that the behaviour of a system *must* be the same thing as the system itself (because he requires that if you accept that the system is intelligent, you must also accept that the components of the system are also intelligent). I reject this philosophy since I think it is obvious that the behaviour of a system is clearly distinct from the physical environment in which it exists. The system is not intelligent; intelligence is an emergent property of the system. This does not alter the fact that I don't think that today's computers are intelligent. (The main reason I don't think that computers are intelligent is that I've never seen a computer behave intelligently, a behaviourist point of view if ever there were one. smiley - smiley)

I'd still argue with your use of the word 'simulate'. What is the difference between simulating intelligence and actually being intelligent? Obviously this is a hypothetical question, since nobody has successfully built a computer that simulates intelligence, but then I always thought Searle's Chinese room was supposed to be a thought experiment... To put it another way, suppose someone *were* to come up with a computer that could successfully imitate intelligence, what would be your grounds for concluding that it wasn't actually intelligent? (I believe this is the same question that both the Turing test and the Chinese room ask.)

I think you misunderstand my use of analogy. My invocation of current capabilities of computers is simply to demonstrate that a distinction between behaviour and physical environment is both valid and useful. That is all. It seems to me that Searle's argument is based on the denial of such a distinction.


Re: evolution/adaptation:

Let me be clear: I don't think that current computers learn from experience in the same way that the human mind does. I apologise for mischaracterising your argument, but if you say that "machines cannot evolve because they cannot learn", then I must point out that machines *can* evolve! So I still disagree with your point of view, even now that I understand it. smiley - winkeye OK, they do so in a radically different fashion from the way the human mind does, but nevertheless they do evolve.

And surely to say that our capability for evolution is superior to that of a computer, and then to admit that our autonomy is limited, is simply to take an anthropocentric point of view. Sure, we can learn to do things that computers have a really hard time doing, but then a computer's behaviour can evolve to do things that we have a hard time doing too. Why do you place a higher value on what humans can do? What is your objective measure for deciding that it is better?

(I'm not saying that computers are superior, I'm simply interested to know what reasons you have for thinking that they are not.)

And to reiterate, your claim that computers cannot modify their own behaviour, and can only have their behaviour modified, is, quite simply, wrong! I'm happy to provide examples of this, but first tell me where you're coming from technically - have you done any programming or electronics? I need to know what your current understanding of how computers work is in order to demonstrate this point.


Nice article...

Post 8

Lear (the Unready)


Sorry, I've been holed up for the last few days...

Hmmm... To answer your question, my background is in philosophy rather than computing / electronics / etc, and therefore (as you've probably already noticed smiley - smiley ) I'm approaching the subject of artificial intelligence more from the philosophical perspective of 'How could you prove that a machine is thinking?' rather than from a technical point of view. To my mind Searle's argument seems to demonstrate that the Turing Test is not sufficient to prove the 'intelligence' or otherwise of a machine one way or the other - this was the basic purpose of my argument, although I seem to have let myself get sidetracked a little since then...

I suppose my general understanding of computers is about where most people's is - I see them as number-crunching devices that can be programmed to do an extraordinary variety of things, but which are at bottom drawing only on very basic resources. Naturally, I would be interested to hear input on a technical level (in layperson's terms!) from someone with a background in the subject, that contradicts this point of view.


>'Why do you place a higher value on what humans can do? What is your objective measure for deciding that it is better?'

I wouldn't exactly say that I'm placing a higher value on the human range of abilities over the computer's - my point is that we work from a far wider range of resources, and that in the end what we mean by 'intelligence' is in effect a result of the interplay between these different methods of 'thinking'. As you say, there are things a computer can do far more efficiently than us, but as far as I'm aware these are all in the field of 'if this / therefore that' sequential logic. I'm not belittling the computer's ability to perform such tasks, simply pointing out that I think the real difference is in the sheer variety of things that go together to constitute human intelligence.

I would also add (I think I'm correct in saying this) that I don't think the computer, as yet, has managed to do anything that we can't, in principle, also do - they just have the hardware to be able to do it far more quickly in practice. Whereas the human mind can do many things that the computer *in principle* cannot do, because they don't have the hardware to begin with.

I'm a little concerned that you think I'm taking an anthropocentric view of things here. I'm not trying to hold on to some notion of the human animal as somehow intrinsically 'superior' to everything else on the planet. If a machine were to acquire the ability to develop in this 'organic' way, I would be as fascinated as anyone else to follow its progress. But once something reaches this stage it probably isn't really accurate to call it a 'machine' any longer - it would have become an independent life form...


Nice article...

Post 9

IanG

I think we have rather different takes on the Turing test. I don't view it as trying to provide an answer to the question 'How could you prove that a machine is thinking?' I think what it does is to ask the question 'How could you prove that a *person* is thinking?' with a side order of '...and surely the same criteria can be applied to a machine?'

So I think my question for you is this: What is it about the Turing test that makes it inadequate for determining whether a *person* is thinking, and how would you modify the test to address this failing? (Let's leave the machines out of this for a moment.)

Your understanding that computers are just machines with a very basic foundation is correct: ultimately they are bound by the laws of physics. But then so is the human brain, I believe. So I suppose there are two questions to answer: (1) am I wrong - can the human brain in fact tap powers beyond the mere laws of physics? (2) are there laws of physics which the human brain is able to tap, but which go unexploited by a computer, and which are somehow crucial to intelligence?

For what it's worth, my answers are (1) I don't think so (although obviously I can't prove it, any more than anyone can prove that the laws of physics are true), and (2) possibly - quantum computation is all about building computers that can exploit certain physical phenomena which current computers are designed to be immune to. So given my answer to (2), I believe that it is certainly possible that the operation of the human brain cannot be fully modeled or emulated by computers as we build them today. So it's *possible* that today's computers are inevitably going to be a can short of a sixpack when it comes to exhibiting intelligent behaviour. But it's equally possible that they're not - there's no reason to believe that the brain is using these physical phenomena - nobody knows one way or the other.

But whilst this is all very interesting to a computer scientist (and possibly other people - I hope so, but if not, sorry for boring you smiley - smiley) I think it's beside the (original) point. The real issue is how we might determine the answer to these unknowns - can we test for intelligence through experimentation? I'm not sure where you stand on this, but if I understand you, you don't believe that the Turing test is a valid test for intelligence. (Although I still don't understand how you think it is deficient.) So do you believe that an experimental test for intelligence is fundamentally impossible? Are you saying that Searle's Chinese Room demonstrates that this is fundamentally impossible?


On the differences between computers' best efforts, and the abilities of an average human, certainly I would agree that any human has a far better repertoire of ability than any computer. But again I think this is off at a tangent from the debate - the Chinese Room argument actively presupposes that these problems can be solved - the room passes the Turing test.


You also say that "the human mind can do many things that the computer *in principle* cannot do, because they don't have the hardware to begin with". I don't accept this as a self-evident truth. Can you come up with a concrete example of this? Playing a game of chess to human standard was for a long time held up as an example of such a thing. Now that we've worked out how to build computers that can beat most people (and even give grand masters a run for their money) this has now been dismissed as not real intelligence. This is not the first example, and doubtless it won't be the last - the history of AI is a tale of goalposts in perpetual motion. smiley - smiley As soon as someone works out how to do something previously thought to require intelligence, that activity is downgraded to non-intelligent status simply because we have worked out how to get a machine to do it...

So if intelligence is merely anything we can't get a machine to do, then it would seem that the only conclusions that can be drawn from either the Chinese Room or the Turing test are either (1) there's no such thing as intelligence, or (2) both of these thought experiments presuppose something that can never happen - we never will make a machine that is able to exhibit all of the outward signs of intelligence.

I get the feeling from what you're saying that (2) is where your instincts lie, but I also get the impression that you think that stronger conclusions than 'this thought experiment is meaningless because it is impossible' can be drawn, so I'm not sure it does in fact represent your view. (If you are prepared to admit that a machine might be intelligent you get another option: (3) a machine that exhibits all the outward signs of intelligence *is* intelligent. That would be my position, but you've already dismissed this as 'behaviourist'.)


Nice article...

Post 10

Martin Harper

couple things...

first, I think the argument is a load of rubbish smiley - smiley The number of rules required to actually perform this thought experiment would be extremely large, the probability of Searle making a mistake would be approximately one, the time taken per question would be roughly one Searle lifetime, and Searle would fail the Turing test on grounds of both ability and speed. That's assuming the instructions didn't collapse into a black hole before he gets the first question.
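
For fun, a back-of-envelope calculation (every figure here is plucked out of the air - only the orders of magnitude matter):

```python
# Loose back-of-envelope arithmetic; all figures are invented for illustration.
# Even toy assumptions give an absurdly large rule book for the room.

common_characters = 3000          # a modest literate vocabulary
question_length = 10              # characters per question, absurdly short
questions = common_characters ** question_length

lookups_per_second = 1            # a heroic pace for a man with filing cabinets
seconds_per_lifetime = 80 * 365 * 24 * 3600

print(f"distinct questions: about 10^{len(str(questions)) - 1}")
print(f"lifetimes to scan them once: about "
      f"10^{len(str(questions // (lookups_per_second * seconds_per_lifetime))) - 1}")
```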

second thing is the following paragraph...

"It is worth noting, for example, that Searle chose to express his argument by way of analogy. This is precisely the kind of intelligence - metaphorical intelligence, understanding something by way of comparison with other things - that the human mind uses habitually, but which a machine has no grasp of whatsoever. A machine would not even have any way of understanding the 'Chinese Room' argument - the analogy simply wouldn't compute!"

This reminds me very much of similar statements that machines would never be able to play chess well enough to beat even an amateur. If a machine has a representation of language, and a representation of situations, and mechanisms for seeing similarities between situations (i.e. pattern recognition), then there's no reason why it can't spot that an analogy is being used, nor why it can't do the necessary translation of ideas to work out what is actually being said. Personally, I would place the dependence on analogy as a human _weakness_ - we have to use analogies here because we can't deal with the ideas on their own terms.


Nice article...

Post 11

Martin Harper

Lear - "I would also add (I think I'm correct in saying this) that I don't think the computer, as yet, has managed to do anything that we can't, in principle, also do - they just have the hardware to be able to do it far more quickly in practice"

Right - that's ripe for disproof. smiley - smiley

Ok, the challenge is - beat the world's *second best* chess playing computer. Because you feel that computers are faster, I'll be generous, and allocate unlimited time for you to take your moves. But I will require that you use no external forms of note-taking - no pens or pencils. And the computer can have, say, one day per move. My assertion is that the computer will definitely beat you (personally), and will probably beat the best grandmaster of the day. Despite the removal of time constraints.

The problem becomes clearer when I go for the second challenge - memorise a large quantity of information (where you get it from is up to you - I only require that it be incompressible) and recite it back to me without error. I assert that a computer can memorise considerably more than you, with a vastly lower error rate.

The flaw in your reasoning is that problems have space constraints, as well as time constraints, and for many problems a computer can have more effective space than a human, and so solve larger problems. The other problem is that computers can be made arbitrarily accurate at the expense of speed or size, whereas humans cannot.

So, if you have an infinite number of humans (correct answer found by majority voting - most common is correct), with access to an infinite quantity of storage (in read/write paper format), and an infinite amount of time, then a computer can do nothing they can't.

Time, Space, Accuracy - the three reasons that humans would fail a Turing test set by computers...


Nice article...

Post 12

jqr

I find this thread extremely fascinating and I would like to thank all of you for having contributed to it. smiley - smiley

I was wondering if anyone had read the news about the computer that designed its own crawling robots, then emailed (as good a verb as any smiley - smiley) the instructions to a prototyping machine, which built them.

I think that MyRedDice is a little extreme in challenging any human to beat the second-best computer in chess - humans are tool-using animals, and this is (in my opinion) the same kind of challenge as "subdue a lion with your bare fists." Something you'd be interested in watching maybe, but only because the human participant would be operating under a handicap - no weapon available. Could a human with the second-best computer as an assistant beat the best chess-playing computer? I mean here if you take two computers, one of which beats the other 99% of the time, and both of which beat humans 99% of the time, and give one to the human to use as a helper (or give the slower one the human as a helper smiley - smiley).

This is a very provocative question for me, because it gets to the point of what contributions the human part of the system is making. That's just as much about the study of thought as the building of artificial intelligence.


Nice article...

Post 13

Martin Harper

Well, on that basis I'm as intelligent as Einstein - if you give me a spare Einstein lying around, he and I could at least draw with an Einstein that was alone, by the simple expedient of me saying nothing. (in some competition of intelligence...)

Similarly, human plus chess-playing computer will at least draw with a chess-playing computer on its own, by the simple expedient of the human doing whatever the computer suggests. And?
Heck, if you're going to allow the human to use tools, why not the computer? My first tool to equip my computer with will be the internet, I think, followed by a few spare super-computers. Computers are just as much tool-using as humans, possibly more so.

We say that cheetahs can run faster than humans, despite humans having invented Concorde, so it is that computers can play draughts (say) better than humans, despite humans having invented computers. In fact, to borrow your example, we say that lions are better at unarmed combat than humans, at least in the main. It's not a handicap, because both participants are under the same restriction. The other side, of course, is that humans are better at rifle combat than lions, and better at driving cars than cheetahs.

And, I hate to point this out, but speed *IS* part of intelligence. We don't call people who fail kindergarten "slow" for nothing...

Anyway, I've thought of another thing that computers are better at than humans. Surviving for a year, with the only allowed sustenance being directly ingested electricity. And at being computers. Lots of stuff, in fact.


Nice article...

Post 14

Lear (the Unready)

I don't think the chess example really stands, if we're trying to prove machine intelligence. When a computer 'plays' chess it is doing nothing more than running any number of millions of possible moves through its system every time it has a turn. Now, most of these 'possible' moves will be complete gibberish - not possible at all, in fact - but the computer doesn't know that. It has to run through all of these variations until it arrives at the move which is statistically the most likely to be the best.

The fact that the computer has to run through all of these variations, without being able to distinguish between sensible moves and stupid ones except on the basis of probability, seems to me to prove that it does not have independent intelligence - it simply doesn't *know* what it is doing.

Naturally, it has an impressive advantage over the human opponent, because the latter is unable to process anything like the same amount of information in the same time period. But the human chess player probably wouldn't even consider, say, 99% of these variations, because it wouldn't be worth his / her while - as I say, most of the machine's work is in ploughing through impossible or absurd moves that it doesn't *know* are impossible or absurd. It has to go through all of them. Fortunately for the computer, it has the power to do this in a split second.

Therefore, a powerful computer is always going to be likely to stand a chance against even a talented human opponent. But this doesn't prove that the computer understands what it is doing. It doesn't prove that the computer is 'good at chess'. It proves that the computer is good at number-crunching. We knew that already though, didn't we folks...


Nice article...

Post 15

Martin Harper

First off - that's not true - Deep Blue used a hardware chip that generated valid moves, which it then analysed. It certainly didn't analyse every single (illegal) move it could make. So there. In any case,
> "most of the machine's work is in ploughing through impossible or absurd moves that it doesn't *know* are impossible or absurd"

That statement is, frankly, garbage. Discarding absurd moves is done practically instantly - the real effort and time is spent choosing between the 5 or 6 plausible moves that are available - the 1000s of dumb moves are discarded in nanoseconds, and the illegal ones are likely never even generated.
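
To show the shape of the thing, here's a toy game-tree search (over Nim rather than chess, so it fits in a post, and nothing like Deep Blue's actual code): only legal moves are ever generated, and hopeless lines get pruned almost as soon as they appear, so the effort goes into the handful of moves that matter.

```python
# Minimal game-tree search over a trivial game (Nim, not chess) to show the
# shape of the thing: only legal moves are generated, and alpha-beta pruning
# throws away unpromising lines without examining them in depth.
# Rules assumed here: take 1-3 objects per turn; whoever takes the last wins.

def legal_moves(pile):
    """Only legal moves are ever produced; illegal ones never exist."""
    return [n for n in (1, 2, 3) if n <= pile]

def search(pile, alpha=-1, beta=1):
    """Value of the position for the player to move: +1 win, -1 loss."""
    if pile == 0:
        return -1                              # no moves left: player to move has lost
    best = -1
    for move in legal_moves(pile):             # a handful of candidates, not millions
        value = -search(pile - move, -beta, -alpha)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:                      # prune: this line cannot matter
            break
    return best

def best_move(pile):
    return max(legal_moves(pile), key=lambda m: -search(pile - m))

print(best_move(7))   # -> 3 (leave a multiple of 4 for the opponent)
```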

Second off - how do you know this isn't the way human players play chess? Even if you ask chess players, you won't have a clue how their subconscious works. I'd say that humans have the power to discard illegal or unproductive moves in a split second, further that they use intuition of statistics to judge best moves. Human intuition of statistics can be flawed, but it's adequate for the job. Heck, when I play, I often think "Hmm - that seems a risky line of play - but I think the odds are in my favour". Is that not statistics? Am I not human?

Third off - a powerful computer isn't going to "stand a chance" against a talented human - it's going to whip the guy's ass to within an inch of his life - at least by the standard definition of 'talented'. Any decently specced computer can beat 99.9% of the population of earth - beating the very best, who have spent their entire lives training - well, that takes a little more effort.

Fourth off - computers *ARE* good at chess. The definition of being good at chess is being able to beat lots of other good chess players. No more, no less. Complaining that they're not good at chess - they're good at number crunching - well that's really like saying that motorcars aren't good at travelling fast - they're good at burning petrol, and hence human sprinters are still the fastest thing on earth.

I wasn't trying to prove machine intelligence - I was merely reacting to the claim that there is nothing computers can do that humans can't given sufficient time, and it was a convenient example. Personally, I'd count chess-playing as *a type of* intelligence - but if you want to be speciest about it, be my guest. Perhaps when cows moo what they're really saying is "I don't think humans are intelligent - being able to stand on two legs just proves that they are good at number crunching". But we still milk them.


Nice article...

Post 16

Martin Harper

As a side note Lear - what *would* convince you that computers had some form of intelligence? I mean - your predecessors in years gone by would have likely pointed at chess - indeed they did - and claimed that no computer could ever compete.

So what challenge would you set? What task can only be completed by something which is "truly" intelligent? Where, given that AI has faster moving goalposts than anything else in history, would you draw the line?


Nice article...

Post 17

Lear (the Unready)

The goalposts are in the same place they've always been. Right from the start, my basic argument has been that no machine has yet developed the capacity to self-organise, which is what distinguishes sentient beings. So, just to have a quick go at this challenge of yours :-

1 - a machine that is capable of evolving in relation to its surrounding environment, and adapting this environment to suit its own purposes.

and / or

2 - a machine that can redefine 'intelligence' to suit its own terms - in other words, that can set its own goalposts.

Those are fair enough 'tasks' in my view.

PS - where on earth did you get the word 'speciest' from? And what makes you think you have a right to accuse me of a kind of (highly specialised) racism here? At bottom - as my comments above should indicate - I'm not arguing the difference between intelligent humans and dumb machines, but rather the difference between sentient (organic) entities and mechanical implements. That's the real difference. (I assume here that we agree that humans are little more than highly sophisticated animals, and that therefore there isn't any essential difference between humans and other animals).


Nice article...

Post 18

Martin Harper

I should probably apologise for the word speciest - more appropriate would be anthropic or some such. But I suggest that you're asking for rather more evidence of intelligence from machines than you are from, say, cats, dogs, or indeed humans - and hence there is a slight bias there, is there not? Goalposts moving was a comment on humanity in general - not any particular members of it. But back some time, even 'tool-using' was regarded as a rather sophisticated type of intelligence... then we found that lots of animals did that too, and... smiley - winkeye

ok - you've mentioned three things there - I'll deal with each in turn...

Self-organisation: according to this entry ( http://www.h2g2.com/A444421 ) on neural networks - it's been done. He doesn't go into details - quiz the author, or search the web if you want details. To be honest, self-organisation happens in lots of unintelligent entities - simply by the way natural forces happen to work. Crystals are a good example - they can self-organise into Penrose tilings. And the universe can self-organise into galaxies and solar systems. A primordial soup can self-organise into featherless bipeds with a fear of the dark...

Next off - your #1. What exactly do you mean by 'evolving'? Just changing? In which case, the former part has been done - robots have been made (Reading, about 10yrs ago) which will learn to steer themselves - their goals are set at the start to make progress, and avoid collisions - they learn from experience that given certain sensor inputs, the correct moves to take to avoid collision are X,Y,Z. Different arenas require different patterns of behaviour to be optimal - and changing the arena causes the behaviour of the robots to slowly adapt - though, as designed, they are a little conservative and risk-averse.
Incidentally, the arena was made to stop them running off - before the arena was there they temporarily lost one robot - it trundled past three fire doors before someone saw it and picked it up.
The bit about robots adapting their environment is rather harder to find good examples of - primarily because, in research, the environment means the researchers. And few people are dumb enough to put tools capable of modifying environments on machines which, by their nature, are inherently unpredictable.
I'm trying to think of a good example - one which happens in the real world, rather than virtually, because you seem like the sort of person who won't accept it's intelligent unless it's physical... hmm - until I think of one, I'll mention the 'bots' in Quake - their environment contains a bunch of things, including other bots, which they smack around, pick up, activate, etc, etc. They're not learning on the fly of course - and indeed I don't know very much about them - but they fulfil your criteria. Higher levels "cheat" - but the base levels are operating with the same knowledge of the environment as a good Quake player...
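
If it helps, here's a made-up sketch of the flavour of learning the Reading robots suggest (nothing like their actual code - the sensor readings, actions and 'arena' are all invented): preferences for each sensor/action pair get nudged by collisions, so the behaviour drifts when the arena changes.

```python
import random

# Invented illustration only: a trivial reward-driven controller of the kind
# the collision-avoiding robots suggest. Preferences for each (sensor, action)
# pair are nudged by experience, so behaviour adapts if the arena changes.

actions = ["forward", "turn_left", "turn_right"]
prefs = {(sensor, a): 0.0
         for sensor in ("clear", "wall_left", "wall_right")
         for a in actions}

def choose(sensor, explore=0.1):
    if random.random() < explore:                # occasionally try something new
        return random.choice(actions)
    return max(actions, key=lambda a: prefs[(sensor, a)])

def learn(sensor, action, collided):
    prefs[(sensor, action)] += -1.0 if collided else 0.1

def arena(sensor, action):
    # one imagined arena: going forward or turning left near a left-hand wall collides
    return sensor == "wall_left" and action != "turn_right"

for _ in range(500):
    sensor = random.choice(["clear", "wall_left", "wall_right"])
    action = choose(sensor)
    learn(sensor, action, collided=arena(sensor, action))

print(max(actions, key=lambda a: prefs[("wall_left", a)]))   # -> turn_right
```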

The second task is an interesting one - but I'm not entirely clear on what you mean, so to save myself effort, I'll ask for an example or five of humans, and indeed other animals, performing it. But it's a nice idea to define intelligence as "that thing which seeks to define intelligence" - though inaccurate in my view.


Nice article...

Post 19

Lear (the Unready)

Apology accepted, although I wouldn't use a term like 'anthropocentric' to describe myself either - as I said above, I'm more interested in the distinction sentient / non-sentient than human / machine.

>'you're asking for rather more evidence of intelligence from machines' etc...

I wouldn't put it quite like that. I find conscious intelligence in the human beings that I meet when I go about my business in the world, and therefore it's reasonable for me to assume that human beings in general have conscious intelligence (they might not all be rocket scientists, sure, but that isn't the point). I don't really need to ask every one of them to prove it, I can generalise (which is another thing that machines don't seem to be able to do, by the way). It's reasonable to ask for more proof of intelligence from a machine, because, to date, they still have a great deal more to prove in that area.

>self-organisation / evolution :-

Fair enough, as you say self-organisation is a feature of non-sentient life-forms as well, so maybe I didn't choose my words too carefully there. But what I was really getting at is the point I tried to make in number one about co-evolution. Wouldn't it be fair to say that a sentient being doesn't just organise itself in relation to its environment - it also acts upon its environment and organises that as well, according to its own needs? Whereas a non-sentient being can only do the former.

Which leads into your next point... I don't mean 'just changing'. Surely the change has to be purposeful - by which I mean, a change brought about by something in the surrounding environment, an attempt to adapt to it. And, to go back to my previous point, I would say that evolution should be understood as a two-way process - not just adapting to suit the environment, but also adapting the environment itself, to suit one's own purposes.

I didn't understand the stuff about 'bots' and 'quake'. What is it, a computer game? I can't really deal with that unless you tell me more about it. But I suspect it won't really do as an example of co-evolution, if all we're talking about is icons on a computer screen. As you say, of course I'm talking about adaptation in a physical environment - what else?

>intelligence = "that thing which seeks to define intelligence"...

Why do you say you think this is inaccurate? It sounds like a reasonable enough working definition of conscious intelligence to me - it's more or less what I was getting at. It seems reasonable to assume that any organism that becomes conscious of its own existence is going to start asking questions about itself, and about the world around it, soon enough, and it's going to start developing ways of reasoning, classifying, and so forth - in other words, defining intelligence on its own terms.

As for examples... Well, as I say, we're not all rocket scientists or existential philosophers, but I think even on the everyday level this is something we all do, constantly. We're doing it now, in this discussion, but we also do it when we're dealing with people, trying to decide who might be useful to us, who not. Which sorts of music we like to listen to. Which references we want to follow up in our research, which to leave. Everything, really. In other words, it's an ongoing process - part of evolution itself if you like.


Nice article...

Post 20

Martin Harper

first bit: Ok - fair enough - I can understand the logic in your stance - let's move on...

incidentally, computers can generalise - it's mentioned in that entry on artificial neural networks... generalisation is also an easy thing to directly program. Simply put, you start the machine with a concept map - a set of "is-a-type-of" definitions - and the means to recognise the objects in that map when they occur in the real world. {this corresponds to innate knowledge that humans are born with - darkness=fear, opposite sex=attractive}.

Then the machine can add to its concept map by learning from example - it might see an object consisting of a rectangle over two upright rectangles and be told it is a skillengot. Then a rectangle on the floor next to two upright rectangles, and be told it is not - this might go on for a while until the machine has learnt what a skillengot is. If it then sees a triangle on top of two upright rectangles, it will probably initially decide it is not a skillengot, because it has a triangle in it. If we say that this object *is* a skillengot, then it has the opportunity to make a generalising step - it would probably conclude that a skillengot is a stable polygon supported by two upright rectangles.
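In fact that learning-by-example step is easy enough to sketch. The features and the "keep only what all the positive examples share" rule below are my own crude simplification (for one thing it ignores the negative examples entirely), but it shows the generalising step happening:

# Crude sketch: learn "skillengot" from labelled examples by keeping only
# the features that every positive example has in common.

def learn_concept(examples):
    # examples: list of (set_of_features, is_a_skillengot) pairs
    hypothesis = None
    for features, positive in examples:
        if positive:
            hypothesis = set(features) if hypothesis is None else hypothesis & set(features)
    return hypothesis

examples = [
    ({"rectangle_on_top", "two_upright_rectangles", "stable"}, True),
    ({"rectangle_on_floor", "two_upright_rectangles", "stable"}, False),
    ({"triangle_on_top", "two_upright_rectangles", "stable"}, True),
]

print(learn_concept(examples))
# -> {'two_upright_rectangles', 'stable'} : a stable something supported
#    by two upright rectangles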

The hard part is not generalising, but knowing *when* to generalise - and to what degree. Essentially the concept here is an equivalent of Occam's Razor - there are (at least) three possibilities:

1) A skillengot is a polygon supported by two upright rectangles
2) A skillengot is a stable polygon supported by two upright rectangles
3) A skillengot is a rectangle or a triangle supported by two upright rectangles

Of these, the middle one is probably the best trade-off between the specific (more specific terms are more useful and more common) and the complicated (most single words represent concepts which are simple).
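That choice can itself be sketched crudely - score each candidate both for how much it lets in and for how many terms it needs, and pick the cheapest. The coverage counts and weights below are entirely made up; the point is only that the two pulls get balanced:

# Toy Occam-style trade-off: penalise a hypothesis both for covering too
# much (too general) and for needing too many terms (too complicated).
# The coverage numbers and weights are invented.

candidates = {
    "polygon on two upright rectangles":               {"coverage": 50, "terms": 3},
    "stable polygon on two upright rectangles":        {"coverage": 20, "terms": 4},
    "rectangle or triangle on two upright rectangles": {"coverage": 22, "terms": 5},
}

def cost(h, coverage_weight=1.0, term_weight=5.0):
    return coverage_weight * h["coverage"] + term_weight * h["terms"]

best = min(candidates, key=lambda name: cost(candidates[name]))
print(best)   # with these made-up numbers, the middle option wins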

So it can be done... let's carry on.

An interesting distinction - between things which only respond to their environment and things which act on their environment. But it's not a good one, in my opinion - after all, rivers act on their environment so as to smooth the way to the sea, and they aren't sentient in most people's view (unless you're a Gaia theoretician...). Similarly, somebody who is dreaming remains sentient, despite not currently being able to influence their environment. Indeed, even someone totally paralysed and dependent on a life-support machine might remain sentient - such arguments are often used either for or against pulling the plug.

The distinction I was drawing wasn't between random change and purposeful change - purposeful change is clearly more advanced than random change. I was thinking more about change through evolution (i.e. natural selection, Darwin et al) versus change through other methods. To me, these are all types of evolution, and which method is used to effect the change is irrelevant - what matters is the end effect of the change. Just as it doesn't matter how you managed to behave intelligently, only whether you did.

Quake (or Unreal, or Duke Nuke'em, or...) is indeed a computer game - part of the category known as FPS - First Person Shooters - I'll describe the relevant concepts if we need them - I think there's enough to be going on with so far...

I will attempt to deal, however, with your assertion that only adaptation in a physical environment counts - that adapting to a virtual environment is irrelevant. I (obviously) think this is wrong, and ask you to consider a few problems with that restriction.

Firstly - what I am doing now is clearly (to me) intelligent. But at this very moment, as I write, it isn't adapting a physical environment. Only when you read it does it become an act of intelligence, because only then does it affect the physical (you). This means that whether an activity is intelligent depends on things happening after a potentially huge time delay. What happens if you die before reading it - will that have rendered the whole activity unintelligent, simply because it never affected the physical world?

Secondly - consider some game which can be played better by intelligent players than by non-intelligent ones - I'd use chess, but you seem to be of the opinion that it is not such a game - perhaps 'Go' or some other game would make you happier; choose appropriately. Assume it is the kind of game which allows players sufficient time to learn and improve their playing skills during the course of the game. Assume I play it on a computer, which might be hooked up to another computer, a ten-year-old child, or the world champion.
Now, by your definition, whether my act of playing is intelligent depends on who I'm playing: if I'm playing a computer, I'm not adapting my physical environment, only a virtual one, so it is not intelligent. If I'm playing the child or the champion, it is adapting a physical environment, so it is intelligent.

Now, imagine that, when I'm playing the computer, unbeknownst to me the game is transferred not as a bunch of electrons but by the following procedure:
1) My moves are sent to a computer somewhere
2) The computer performs my moves upon a real-life board with real-life pieces. (it's been done)
3) Another computer reads the move made using vision recognition. (it's been done)
4) This computer sends the move to the computer I'm playing.

All of a sudden my play has become intelligent, as it is adapting a physical environment - despite my being unaware of that adaptation, and despite the adaptation being only a consequence of the transmission method used.

I say it is inaccurate because the concept of intelligence was invented some time after the dawn of society - you may have the dates - and to say that everyone before that time was unintelligent seems... unfortunate. Further, consider this thought experiment: we place a chip in the head of some human, such that he is sent into a comatose state whenever he tries to redefine intelligence. Then we let the guy loose to get on with his life - is he not still intelligent? In fact, continuing attempts to redefine intelligence would show a lack of intelligence - an inability to see the pattern and come to a conclusion.

The concept of defining goals based on higher goals is certainly something that computers use - as a basic example, path-finding machines often start with a large-scale goal (go to the shops) and work downwards from there to get sub-goals (cross the river, either go left or right around the park, ...). But I suspect you mean more than that...?
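For what it's worth, that kind of top-down decomposition is straightforward to write down. The goals and the decomposition table below are invented for the example, but the shape is the usual one - keep expanding a goal into sub-goals until you hit actions the machine knows how to perform:

# Toy sketch of goal decomposition: expand a large-scale goal into
# sub-goals, recursively, until only primitive actions remain.
# The goals and the table are invented for illustration.

DECOMPOSITION = {
    "go to the shops": ["leave the building", "cross the river", "go around the park"],
    "cross the river": ["walk to the bridge", "walk over the bridge"],
    "go around the park": ["go left around the park"],   # a real planner would pick left or right
}

def plan(goal):
    subgoals = DECOMPOSITION.get(goal)
    if subgoals is None:
        return [goal]                 # primitive action - nothing more to expand
    steps = []
    for sub in subgoals:
        steps.extend(plan(sub))
    return steps

print(plan("go to the shops"))
# -> ['leave the building', 'walk to the bridge', 'walk over the bridge',
#     'go left around the park']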

