The Turing Test
I propose to consider the question 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'.
The Turing Test was proposed by the mathematician and computer scientist Alan Turing as a way of gaining some insight into the question 'Can machines think?' The test involves a human judge holding two conversations - one with a human and one with a computer program - over a text-only channel. Both the human and the computer try to convince the judge that they are the human. If the judge cannot reliably tell which is which, then the computer is said to have passed the test, and can be considered to be thinking. Turing's original paper, 'Computing Machinery and Intelligence' (1950), which introduces the test and discusses various objections to it, is available on the Internet.
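The procedure described above can be sketched as a toy simulation: a judge questions two anonymous text channels and must name the one carrying the machine. Everything in the sketch below - the stand-in contestants, the guessing judge - is a hypothetical illustration of the protocol, not a real implementation of the test.

```python
import random

def turing_test(judge, human_reply, machine_reply, questions):
    """One round of the test: returns True if the judge spots the machine."""
    # Assign the contestants to anonymous channels A and B at random,
    # so the judge cannot identify them by position.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}
    # Each contestant answers every question over its text-only channel.
    transcript = {ch: [reply(q) for q in questions]
                  for ch, reply in channels.items()}
    # The judge reads both transcripts and names the suspected machine.
    guess = judge(transcript)
    actually_machine = "A" if channels["A"] is machine_reply else "B"
    return guess == actually_machine

# Hypothetical stand-ins: both give identical canned replies, so the
# judge has nothing to go on and is reduced to guessing.
human = lambda q: "Of course I can think about: " + q
machine = lambda q: "Of course I can think about: " + q
judge = lambda transcript: random.choice(["A", "B"])
```

When the machine's answers are indistinguishable from the human's, the judge is right only about half the time over many rounds - which is exactly what 'cannot reliably tell which is which' means.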
The test was inspired by a party game known as 'the imitation game' in which a man and a woman go into separate rooms and an interrogator tries to tell them apart by asking them questions and getting typewritten answers back. In this game, both the man and the woman try to convince the interrogator that they are the woman.
No computer has yet come close to passing the Turing Test, but this does not imply that no computer will ever be able to pass it. Medical science suggests that the human brain works like a (very complicated and massively parallel) computer itself. It is made of many interconnected cells known as neurons, which individually behave according to simple rules, but together give rise to intelligence and consciousness. In principle, there is no reason why this could not also happen in a man-made machine. (Some people believe that human intelligence comes from the soul, but Turing points out that even if this is correct, there is no reason why God could not give a soul to a computer).
Objections to the Turing Test
In some ways, the test is unfair to the computer. If the human had to pretend to be the computer, the judge need only ask them both a complicated sum, and the human would fail by taking too long to do it. Also, if the judge asked questions about the participants' personal lives, the human could simply tell the truth, but the computer would be forced to make things up or give itself away.
One common objection to the Turing Test is that just because the computer can fool people into believing that they are talking to a human doesn't necessarily mean that it is thinking. It might just be saying what it has been programmed to say without any consciousness of what it means. However, Turing points out that we use the 'viva voce' exam to determine whether a student really understands something or is merely parroting it, and we could do the same thing with the computer.
John Searle's Chinese Room argument elaborates on the idea of the computer's utterances making perfect sense even though it does not understand anything. Searle imagines that you put him in a room and send in questions in Chinese. He does not understand any Chinese, but he has a set of instructions for manipulating Chinese symbols to produce correct answers to the questions. You could also send in questions in English (his native language) and he would answer them. From the point of view of someone outside the room, his English and Chinese answers are equally good, but from Searle's point of view, he understands all the English and none of the Chinese.

The problem with this argument is that Searle is only part of the system which is answering the Chinese questions. The other part is the set of instructions for manipulating the symbols. This amounts to a program which passes the Turing Test - such a program would have to be very long and complicated, and would probably fill the whole room. The system's understanding of Chinese resides in these complicated instructions, not in the person who is simply following them.
Many people have written 'chatterbots' - ie, programs which attempt to carry on a conversation with a human. Most of these programs do not attempt to understand what is being said, but simply pick up on key words which trigger appropriate canned responses. For example, the early conversational program Eliza imitated the responses of a psychotherapist, and if you used the word 'mother' or 'father' it might respond with 'Tell me more about your family.' Two more modern chatterbots which you can talk to over the Web are ALICE and Jabberwacky.
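The keyword-and-canned-response technique can be sketched in a few lines. The rule table below is a hypothetical miniature in the style of Eliza, not Weizenbaum's actual script:

```python
import random

# Each keyword triggers one of several canned replies; the program has
# no understanding of what the user actually said.
RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "father": ["Tell me more about your family.",
               "Does your father influence you strongly?"],
    "dream":  ["What does that dream suggest to you?"],
}

# Fallbacks for when no keyword matches.
DEFAULT_REPLIES = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a canned response triggered by the first matching keyword."""
    words = utterance.lower().split()
    for keyword, replies in RULES.items():
        if keyword in words:
            return random.choice(replies)
    return random.choice(DEFAULT_REPLIES)
```

Because the default replies are vague prompts rather than answers, the program can keep a conversation going indefinitely without ever committing to anything - which is much of why Eliza fooled people.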
Eliza was sometimes mistaken for a human, and other chatterbots have been taken for human on IRC channels. This might be seen as a problem with the Turing Test; some people were completely taken in by programs that, to people who understood how they worked, were clearly not intelligent. However, conversation on IRC is often not very coherent, and some participants do not have English as their first language. Also, the people who were fooled had no idea that they might be dealing with a program, while in the Turing Test proper, the judge is told that one of the contestants is a human and the other is a computer, and asked to spot the computer.
If this is not a sufficient solution to the problem of people being easily tricked by unintelligent programs, we could require that the judge be a computer scientist. However, some critics suggest this is unreasonably difficult, since most human beings are incapable of holding a sustained conversation with a computer scientist. After a moment's thought, they usually add that most computer scientists seem incapable of distinguishing humans from computers anyway.
The Loebner Prize
The Loebner Prize is an annual competition between chatterbots. It is basically a Turing Test - a panel of judges converse with chatterbots and human confederates, without knowing which is which, then rate how human each one sounded. Prizes are awarded to the programs judged most human.