A Conversation for Artificial Intelligence

Is AI Possible?

Post 1

Joe Otten


To be more precise: Is it possible for a computer to behave intelligently?

An odd consequence of considering AI is that we seem to be doubting some of the more mysterious features of our human brains - sentience, free will, etc. Perhaps the likening of a computer (including neural networks) to brains is in fact a huge insult to the brain. Computers are just glorified toasters, and brains are different stuff altogether.

(Of course this is not to say that it would be impossible to make an artificial brain - it is reasonable to suppose that one could be made from fat, protein etc.)

Another question - supposing that it is possible to make a computer-brain appear to be intelligent, self-aware etc. Is this any indication that it is these things? It may seem the same to me, but is it the same to it? Is there an "it"?

In "Shadows of the Mind", Roger Penrose suggests four possibilities:

A - It is possible to create intelligence on a computer, and this is the same stuff as human intelligence.
B - It is possible to simulate intelligence on a computer; the lights are on, but nobody is at home.
C - It is impossible to simulate intelligence on a computer, but it is possible on some other physical device (e.g. a brain)
D - It is impossible to create or simulate intelligence on any physical device (unless you are God)

He gives some good arguments for position C, the only problem being that this demands some non-computable behaviour in physics, which requires some new physics.

Most AI discussion we hear seems to take position A for granted. Or it combines it with B, suggesting that we are all just simulations, and that the feeling that somebody is at home is an illusion. We seem to have deduced something about ourselves from an extrapolation of the behaviour of computers. I don't buy it.


Is AI Possible?

Post 2

R. Daneel Olivaw -- (User 201118) (Member FFFF, ARS, and DOS) ( -O- )

You have a good point there.

"Most AI discussion we hear seems to take position A for granted. Or it combines it with B, suggesting that we are all just simulations, and that the feeling that somebody is at home is an illusion. We seem to have deduced something about ourselves from an extrapolation of the behaviour of computers. I don't buy it."

However, there's no point considering practical AI now under possibilities C and D (we aren't God and don't yet understand how intelligent brains work), and we don't yet know how to prove A, B, C or D to be true. Therefore, all but the most theoretical discussions of AI must assume A and/or B, until we either have a way to work with intelligent brains and understand what we are really doing, or can prove or disprove A, B, C or D.


Is AI Possible?

Post 3

xyroth

you say ...

In "Shadows of the Mind", Roger Penrose suggests four possibilities:

A - It is possible to create intelligence on a computer, and this is the same stuff as human intelligence.
B - It is possible to simulate intelligence on a computer; the lights are on, but nobody is at home.
C - It is impossible to simulate intelligence on a computer, but it is possible on some other physical device (e.g. a brain)
D - It is impossible to create or simulate intelligence on any physical device (unless you are God)

... these are the classic possibilities.

taking them in reverse order:

D is basically vitalism, claiming that there is something unique instilled by some god or other, which we can't reproduce. a lot of the devout take this position, but then go on to use it to dismiss the entire idea of AI producing anything useful. given the existence of useful expert systems already, that position is untenable.

C involves some very dodgy assumptions that there is something about the hardware of the brain which makes it the only possible host for intelligence.

B is Searle's position with his Chinese room. it is often based upon the fear of the ultra-intelligent machine. it depends on there being something about the simulation which makes the answers invalid. in a lot of the cases where people try to use this, it has already been shown to be false. however, there may be a few cases where it applies.

A is our remaining option, and is built on a number of bits of science (for most practising researchers) or on faith (for most of the rest).

the sort of thing that supports A is research into the functioning of the visual cortex, where we have yet to find a function which we cannot encode in an integrated circuit.

other things include the universal Turing machine (especially when looking at multiple-instruction, multiple-data processing). a lot of the work on parallel processing shows that the main difference is the need to recode to take advantage of the available parallelism.
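
to make the universal machine point concrete, here is a minimal sketch in Python (the transition table below is an invented toy machine, not anything from the research mentioned): the interpreter treats the machine it runs as ordinary data, which is the sense in which one general-purpose computer can, in principle, reproduce what any other symbol-processing device does.

# Minimal Turing machine interpreter: the machine being run is just data
# (a transition table), so the same interpreter can run any such machine.
def run(table, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))              # sparse tape, blank symbol is "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy machine: walk right, flipping 0 <-> 1, and halt at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run(flipper, "10110"))                   # prints 01001_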

finally, a lot of people support A, because we will almost certainly get a lot of useful knowledge out of the attempt (even if it proves to be impossible).


Is AI Possible?

Post 4

Joe Otten


I remember reading Pinker's dismissal of Searle, and I must say I didn't buy it.

Pinker suggested that if, instead of keeping rules on cards and sitting in a box, the English-speaking man had all the rules in his head and could output the Chinese through his mouth in real time, we would no longer say that he didn't understand Chinese.

Wrong. Such a person, when told in Chinese "the building is on fire", could respond "please drag me out of the building", but would not know to get up and walk out.

The problem with the dismissal is that it assumes position A rather than proving it.


Yes, C involves some dodgy assumptions, or rather it has some unlikely-sounding consequences. I still find it attractive. Computers seem to have inherent limitations to which such things as parallelism, complexity, neural networks etc. make no conceptual difference whatever.


Your talk of expert systems and visual cortices leads me to suspect that you might be talking about something less profound than I am. I guess it makes sense to differentiate between the various brain features and focus on those it seems easier to emulate.

I found section 4 of this rebuttal of Penrose most interesting:

http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html


Your last comment sounds like "We need to adopt A to get funding".





Is AI Possible?

Post 5

R. Daneel Olivaw -- (User 201118) (Member FFFF, ARS, and DOS) ( -O- )

"Pinker suggested that if instead of keeping rules on cards and sitting in a box, the english-speaking man had all the rules in his head, and could output the chinese through his mouth in real time we would no longer say that he didn't understand chinese.

Wrong. Such a person when told in Chinese "the building is on fire" could respond "please drag me out of the building", but not know to get up and walk out."

Depends how complex the instructions are. The person could be told to do other things than just to output more Chinese in response. The instructions could say "If you see this character, exit the building." The person would walk out if told the building was on fire, but wouldn't know that that's what they'd been told.
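
A rough sketch of what such an instruction set amounts to, in Python (the symbols and rules here are invented placeholders, not real Chinese): the rule-follower matches incoming symbols against a table and produces the prescribed reply or action, without any representation of what the symbols mean.

# Chinese-room-style rule following: the operator looks up incoming symbols
# in a table and carries out the prescribed reply or action, with no access
# to what any of the symbols mean. The rules are invented for illustration.
rules = [
    # (symbol to look for,  reply to write back,     physical action to take)
    ("SYMBOL_FIRE",         "SYMBOL_PLEASE_HELP",    "exit the building"),
    ("SYMBOL_GREETING",     "SYMBOL_GREETING_BACK",  None),
]

def operate(incoming):
    for pattern, reply, action in rules:
        if pattern in incoming:
            return reply, action
    return "SYMBOL_DONT_UNDERSTAND", None

reply, action = operate("SYMBOL_FIRE")
print(reply)    # the "right" answer comes out...
print(action)   # ...and even the right behaviour, with no understanding anywhere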


Is AI Possible?

Post 6

xyroth

dealing with your last point first: I was not suggesting that we need to adopt position A to get funding. I was saying that if you are going to spend years of your life looking into something, it doesn't really make sense to hold any of the other positions.

ALL of the other positions start from the premise that there is something unique about the brain, but so far there is a lack of evidence to suggest that. in fact, all of the evidence so far points to our brains being no more unique than our bodies were when the same arguments were used against evolution.

the positions resolve to the following:

a, there is nothing special about the brain which makes it fundamentally incomprehensible.
b, what you come up with might be intelligent, but it wouldn't have a soul (vitalism again)
c, only brains can have a soul, so if it isn't a brain, it can't be intelligent.
d, only humans can have a soul, given by

throughout history, certain parts of humanity have been saying "look at humans, we're unique", often adding "because of god" or "so worship god". for all that time, they have put forward different parts of humans which we are just starting to understand, saying "this bit is unique to us", which later turns out not to be unique, so they move the goalposts.

you can't say "it's not a computer", without seriously restricting what a computer is. every part so far suggested as the home for a bit which makes us uniquely human has turned out to be wrong, and I don't see anything yet in the science to support there being such a bit.

suggesting that the logic involved in controlling something is somehow intelligent when it is in brains, but not when you abstract the rules from the data, seems somewhat contrived.


Is AI Possible?

Post 7

Joe Otten

"Depends how complex the instuctions are. The person could be told other things that just to output more chineese in responce. They could say "If you see this character, exit the building." THe person would walk out if told the building was on fire, but wouldn't know that that's what they'd been told."

Yes, the instructions could be as you suggest, although outputs other than speech are "outside the box" of the Chinese room. The idea that they needn't be - that it would be possible for the man to exhibit full understanding in Chinese that the building is on fire, but still not know to get up and leave - demonstrates a difference between understanding and the computationally based appearance of understanding.


Is AI Possible?

Post 8

Joe Otten


"ALL of the other positions start from the position that there is something unique about the brain, but so far, there is a lack of evidence to suggest that. in fact, all of the evidence so far points to our brains being no more unique than our bodies were when the same arguaments were used against evolution."

I think there is something unique about the brain. I know of nothing else that exhibits intelligence, sentience, creativity, etc.

I don't see how evolution is relevant here. Yes, our brains are what they are today because of evolution, but this doesn't tell us everything about them. You seem to be inferring something about intelligence from the failure of other arguments relating to unintelligent organs, but I can't join this up.

You seem to be arguing against the uniqueness of the brain on the grounds that some people believe it for religious reasons. A => B does not imply not A => not B.

"a, there is nothing special about the brain which makes it fundamentally incomprehensible.
b, what you come up with might be intelligent, but it wouldn't have a soul (vitalism again)
c, only brains can have a soul, so if it isn't a brain, it can't be intelligent.
d, only humans can have a soul, given by "

I don't think "soul" is a useful term unless you can give a definition of it for the purposes of A, so that we can agree that it is something that brains have, and computers, given A, could have.

I also don't think you do justice to the C position that an artificial brain is possible in principle, but would not be a computer (i.e. that it relies on something non-computable in physics, probably somewhere in quantum physics).

"suggesting that the logic involved in controling something is somehow intelligent when it is in brains, but not when you abstract the rules from the data seems somewhat contrived."

I agree. I don't suggest that the application of logic requires intelligence. The application of reason probably does.


Is AI Possible?

Post 9

xyroth

"You seem to be inferring something about intelligence from the failure of other arguments relating to unintelligent organs"

not really, only that there is a viewpoint out there that says "humans are unique, and can't be reproduced". this viewpoint has turned out to have a long history of moving the goalposts every time it is shown to be wrong.

"I think there is something unique about the brain. I know nothing else to exhibit intelligence, sentience, creativity, etc."

I think we are talking about two different things here, and not always keeping the terminology clear. most people would say it is the mind which has these properties, not the brain.

using computing as an example, you could be running Linux or Windows on Intel hardware, but under Linux you could relatively trivially change the underlying hardware to something completely different.

similarly, part of this debate splits the mind & brain into the equivalent (for the argument) of software and hardware.

D could be rephrased as saying there is something unique about both the hardware and the software components.
C could be saying that the mind software can be modified however you like, but if you run it on something other than brain hardware, it won't work.
B is suggesting that you could get the mind software to work on something other than brain hardware, but there is something about the hardware/software combination which makes the answers fundamentally untrustworthy.
A is then saying there is nothing inherently impossible about either the software or hardware components which makes the whole attempt completely worthless before you start.

using this paraphrasing, you can then address the previous problems and see where the depth charges are likely to come from.

We already know that there are quantum-level computational capabilities. we are already starting to get a handle on them, so in theory that complication doesn't make the hardware unbuildable. in fact, generally there is nothing we have found out about quantum computation which makes it fundamentally different from the normal type; it just changes which sorts of problems are easy to handle and which are hard.

then there is the parallelism question. again we know from research already carried out that there are only four ways to build symbol processing hardware. you can have one or more streams of instructions, and these can act on one or more streams of data.

all four types are currently being used in the most recent processors, and again it only affects the efficiency of what you are doing, and thus which types of problem are simple and which are difficult.
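
for illustration, here is a small Python sketch (using NumPy) of the point that the instruction-stream/data-stream categories differ in how the code is written and how fast it runs, not in what can be computed: the same answer comes out of a one-element-at-a-time loop and a data-parallel rewrite of it.

import numpy as np

data = np.arange(10_000, dtype=np.float64)

# One instruction stream working on one data element at a time.
def sum_of_squares_scalar(xs):
    total = 0.0
    for x in xs:
        total += x * x
    return total

# The same computation recoded so a single operation is applied across the
# whole data stream at once (NumPy dispatches to vectorised routines).
def sum_of_squares_vector(xs):
    return float(np.dot(xs, xs))

# Same answer either way; only the coding style and the efficiency differ.
assert np.isclose(sum_of_squares_scalar(data), sum_of_squares_vector(data))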

similarly you can't wriggle out of it using the hardware/software split. research already done has shown that this also is only a matter of efficiency. the more you do in hardware, the faster it is, but the harder it is to change.

as to the software aspect, any part of the brain's hardware that we have understood well enough, we have managed to build in silicon rather than organic cells. the problem so far is that we have only really had the tools for about 10-15 years, so we really don't know much about the detail.

the other main point raised is this question of yours about logic and reasoning. a lot of the difference in most people's eyes is to do with the black-and-white nature of Aristotelian logic compared with the "shades of grey" type approaches people use when reasoning. this is largely irrelevant, because you already have plenty of computer systems running programs which reason with the non-Aristotelian multi-valued (fuzzy) logics that have been worked on since the thirties.
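
a minimal sketch of the sort of multi-valued logic meant here, in Python (the membership values are made up for illustration): truth becomes a degree between 0 and 1, with min for "and", max for "or", and one-minus for "not", which is the common Zadeh-style choice of operators.

# Fuzzy-logic operators: truth is a degree in [0, 1] rather than a crisp
# True/False. The membership values below are invented for illustration.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm   = 0.75   # how strongly today counts as "warm"
cloudy = 0.5    # how strongly today counts as "cloudy"

# "warm and not cloudy" comes out as a degree, not a yes/no answer.
print(f_and(warm, f_not(cloudy)))   # prints 0.5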

you can't even appeal to neural nets as a way out, because they have been built so that you can switch them out of learning mode and analyse what they have learned, and it has been found that they have learned fuzzy rules.
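
as a toy illustration of that last point (not the actual rule-extraction work referred to above), here is a single logistic unit, trained in plain Python/NumPy on a crisp yes/no task, whose learned parameters can afterwards be read off as a graded, fuzzy-looking rule rather than a black-and-white one.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200)
y = (x > 0.5).astype(float)              # crisp training target: "is x large?"

w, b = 0.0, 0.0
for _ in range(5000):                    # plain gradient descent on log loss
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 1.0 * np.mean((p - y) * x)
    b -= 1.0 * np.mean(p - y)

# "Analysis mode": the learned parameters describe a soft rule, roughly
# "the further x is past the threshold, the more strongly it counts as large".
print(f"threshold ~ {-b / w:.2f}")
for test in (0.2, 0.5, 0.8):
    degree = 1 / (1 + np.exp(-(w * test + b)))
    print(f"x = {test}: membership {degree:.2f}")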

even if you go as far as to try to exclude chimps and dolphins from the discussion, you cannot really claim that they don't exhibit intelligence or creativity. that only leaves you sentience to play with, but at the moment all we can say about that is that I know I am sentient, you know you are sentient, but we have to take each other's word for it that the other one is sentient. as bigots throughout history have tried to claim that their chosen hate group is somehow less than human, and thus not sentient, I think we have to take each other's claims of sentience at face value.

this is why I and a lot of others working in the field tend to take position A. everything we know so far just points to there not being anything fundamentally unique about any individual component that makes up a brain or a mind (although every mind is unique for other reasons).


Is AI Possible?

Post 10

Joe Otten

"not really, only that there is a viewpoint out there that says "humans are unique, and can't be reproduced". this viewpoint has turned out to have a long history of moving the goalposts every time it is show to be wrong."

Fine. But this argument doesn't tell us where the goalposts belong.


""I think there is something unique about the brain. I know nothing else to exhibit intelligence, sentience, creativity, etc."

I think we are talking about two different things here, and not always keeping the terminology clear. most people would say it is the mind which has these properties, not the brain."

OK, but if the mind is not a feature of the brain, doesn't that imply position D (or C or B)?


I don't think the hardware/software distinction is useful. One could imagine a machine capable of rewiring itself, or one could consider a neural network which simulates this process.

When I contrast reason with logic, I am not talking about fuzziness; I am talking about things like the ability to do pure mathematics. Nor is it very relevant whether dolphins or monkeys have some of the features we are considering, given that they have similar hardware to us.


Let me put it another way. Computers process data. Insofar as the brain/mind is a data processing machine, a computer should in principle be able to emulate that. This may well include some or even all aspects of intelligent behaviour, but that remains to be proven.

On the other hand, a computer could not emulate my pancreas. It could simulate it, it could control a pancreas machine, but it cannot itself produce the necessary chemical processes to perform the function of the pancreas.

So what I am suggesting is that there are necessary physical processes involved in some or all of sentience, creativity, self-awareness, free will and intelligence, and that a simulation of those processes is insufficient. A computerised control of such processes may also be insufficient, as in the brain the control may run the other way round.

Of course I am not providing you with a proof here against position A. Also, unfortunately for my position, my reasons for believing that sentience exists are inadmissible under the rules of science - because I cannot show my consciousness to anyone else. I think it is right that science's rules are sufficiently tight to avoid contradictions, but that does mean that there may be physical phenomena that can't be repeated and independently observed in practice. But this is not to say that I am describing something mystical. I am describing something of interest to science, whose evidential statements science has good reason to reject.

Perhaps I should have put the question "Is computer sentience possible?" That is really what I meant.


Is AI Possible?

Post 11

Joe Otten


Actually, forgive me - that is not what I originally meant. I was asking - "is it possible to simulate (all of) intelligence computationally?" (A+B v. C+D) The question "Is it possible to emulate sentience computationally?" is indeed a different one. It looks like I got a little sidetracked.

However the two questions are closely related. The question "is it possible to emulate intelligence computationally" would be A v. B+C+D, and there are analogous statements to A, B, C and D regarding sentience.


Is AI Possible?

Post 12

xyroth

so where exactly are you drawing the line as to the mind/brain split?

I used the hardware/software split in my reply to show various things that we already know about how symbolic manipulation devices work, in particular that current knowledge shows that the hardware/software boundary is largely arbitrary, as are the differing types of hardware.

surely when you get to the point of arguing about the need for a pancreas to define sentience, you are reaching a bit.

I happen to know a bit about endocrinology, due to having a health problem which interacts with it, and the different chemicals output by various organs act like pushing and pulling the sliders on a big control panel (like a mixing desk).

I don't see how calling endocrinology into the argument affects the functioning of the brain significantly. if all the other fundamental differences I have mentioned turn out to be insignificant, then just adding a few variables to the equations will not be enough to make a difference.


Is AI Possible?

Post 13

Joe Otten


I agree about the arbitrariness of the hardware/software split. The point I was trying to illustrate with the pancreas is that there is a difference between simulation and emulation. For example, a flight simulator doesn't actually fly. A flight emulator (or rather an aircraft emulator) would fly.

To prefer position A over position B is to say that simulation constitutes emulation, which I think would be true if and only if we are talking about a purely data processing task.


Is AI Possible?

Post 14

xyroth

I think the flight simulator is a good example.

modern aircraft are designed and implemented in such a way that the pilot can get experience flying the thing before it has been built.

with modern simulation methods, you can get it to be so good that pilots can train their instincts to do the right thing even before they have a plane to fly.

the main split between A and B is that those doing practical work don't distinguish between the simulator and the plane for pilot training. as long as they both result in the pilot learning the right instincts, that is good enough. those in the simulation camp tend to concentrate on the "it'll never fly" position, and thus fail to see all the useful results you get from the act of producing the machine.


Is AI Possible?

Post 15

Joe Otten


I can see that that is a good practical position to hold. The danger is that there may be things that will happen while flying that you won't discover in building a simulator, because it only encapsulates what you already know.


Is AI Possible?

Post 16

xyroth

yes, you are right, which is why, after the plane has been shown to be safe, the test pilot is asked to fly it well outside the safety envelope, in the hope that if anything breaks it will happen while these exceptional individuals are flying it, so they can bring the plane back and analyse what went wrong.

they also now take generous amounts of telemetry data to compare with the models, so that they can spot where any danger lines are likely to be, well in advance of crossing them.

you are also right about there being emergent properties of systems which you only find out about when you put components together in new and unusual ways.

this is one of the fundamental problems with science and engineering: how do you figure out where your knowledge will stop working, and what do you do about it when you find these limits of applicability?



Is AI Possible?

Post 17

Logicengine

I think any good scientist would say yes, intelligence is possible. We know it's possible because it is being done all the time with all types of mammals. Unless there is something supernatural about the brain, it can be modelled. I don't think AI will even be a copy of the brain; it is a concept, and in its pure form it will be very different from a human brain.


Is AI Possible?

Post 18

shakychrisdeeming

On BBC Radio 4 news, sometime in Feb 06, I seem to remember hearing part of a story about the failure of a major AI project, which had failed after many years of programming because running the computer program itself was estimated to take thousands of years. Can anyone confirm this? I don't think it related to CYC, but it sounded similar - see below - and I can't find the news item.


CYC is a $25 million, 20-year project in Artificial Intelligence. It aims to beat the common-sense knowledge problem by employing workers whose full-time occupation consists of entering volumes of common-sense data into the ever-growing AI. Its database, at the time of writing, contains millions of hand-entered facts and is maintained in an entire room full of computers. In short, the project, headed by Douglas Lenat, is one of the most impressive AI undertakings ever.

