A Conversation for Artificial Intelligence

The Soul...

Post 1

Proteus

Some big questions:

Do humans (and possibly animals too) have a soul?

Is sentience a prerequisite for having a soul?

What happens if we construct a truly sentient AI? Does it too have a soul, or could one state that humans (and animals) are nothing more than bio-chemical-electrical robots with very good AI, able to produce sentience on their own (given the right input)?

Very relevant questions, these!


The Soul...

Post 2

Martin Harper

Perhaps a better question is... would a sentient AI have rights? And if you programmed it to be able to feel pain, would that be an immoral act, or is it necessary to achieve AI and sentience?


The Soul...

Post 3

The Unmentionable Marauding Pillowcase

Consider: pain is functional; humans feel pain because of conditions that threaten our existence. Now ask: will a sentient program be susceptible to death? Suppose you had a sentient program which learnt the way humans learn. If this program was on a particular network, and something happened that caused it to be completely wiped out, it would be dead: everything it had learnt would be gone - its memories, its skills, its "personality". Call all of those things its "soul". If you installed it again, it would learn new things this time; it would not be the same entity, it would be a different "individual" with a different "soul". So the sentient program lives, and can also die.

If such a sentient program can die, that still does not mean it attaches value to its own life. But you can program it to care for its life, to protect its existence. You can program pain into it to act as a warning system when its existence is threatened. Suppose there was a virus that attacked a part of it - it would be programmed to sense this very rapidly, and it would make sense to program this awareness in a form similar to pain as we feel it. In this case I think we owe it to the program to program it to feel pain. But we would also owe it to the program to protect its existence, and hence protect it from pain, as far as is appropriate.
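Purely as an illustration, the warning-system idea could be sketched in a few lines of Python. Everything here - the class, the names, the checksum trick - is hypothetical, just to make the idea concrete:

import hashlib

class SentientProgram:
    """Toy model: 'pain' as a warning that the learnt state is threatened."""

    def __init__(self, memories):
        self.memories = memories                # the learnt "soul"
        self.checksum = self._fingerprint()     # known-good state
        self.pain = 0.0                         # 0.0 means no pain

    def _fingerprint(self):
        return hashlib.sha256(repr(self.memories).encode()).hexdigest()

    def sense_damage(self):
        # A virus corrupting the memories changes the fingerprint: feel "pain".
        if self._fingerprint() != self.checksum:
            self.pain = 1.0
            self.protect_existence()

    def protect_existence(self):
        # e.g. quarantine the damage, alert a human, restore from a backup
        print("Pain! Integrity threatened - taking protective action.")

p = SentientProgram(["learnt to talk"])
p.memories.append("injected by virus")   # simulated attack
p.sense_damage()                         # prints the pain warning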

If the program approaches humans in its sensitivity and in the intricacy and subtlety of its reasoning, I guess it would have rights. What the politicians will make of it I don't even wanna think about. This is sure going to be a thorny issue.

What do you think?


The Soul...

Post 4

Martin Harper

I think politicians, and people generally, will refuse to give AIs rights until the AIs are roughly a hundred times more intelligent than themselves. Rampant speciesism. At which point the AIs will have effectively taken over, and it will be highly irrelevant whether humans think they deserve rights or not, since the AIs will be doing the deciding.


The Soul...

Post 5

Proteus

I kind of like the old idea that something living will always protect itself, no matter what. You will most likely not have to program it to do so; it will figure this out on its own.
There is also the possibility that the entity created from this AI will be without feelings, as we humans know them anyway, and act quite cold compared to us humans. Maybe even in a manner we perceive as mental.
As for rights... why do we humans always talk about this? Do we issue rights to ALL sentient creatures? NO! Why? Because we are a basically arrogant species.
I think that humans will avoid the subject altogether, in order not to make a choice in the matter, and will question whether the AI is sentient or not. And in the meantime we will try to control them/it, and if we are unlucky that will not go well and the AI might give itself rights.
Probably we humans would not see the danger until it was too late, due to our inability to see further than our own existence.

Creating true AI could be our doom or our key to heaven, depending on luck, on how we do it, and on how we react when (and if) it happens. Some things are not good to know, and the knowledge of creating life - of any form - might not be good for us as a whole.

But then again....better to burn out than fade away...


The Soul...

Post 6

Martin Harper

I'm not sure that's true, though - otherwise male spiders would very rarely have sex (I'm thinking of the species whose females eat the males afterwards...)


The Soul...

Post 7

The Unmentionable Marauding Pillowcase

I have a suspicion that it won't be long before there are AIs on the internet. They will be able to adapt and grow without any restraint, and sure, I can imagine them becoming more intelligent than humans. Question: do you think they'll be benevolent, or will they recognize that we are such a dangerous species that the world would be better without us?


The Soul...

Post 8

Martin Harper

Well - look at how we treat chimpanzees - that's how superior AIs will treat us - and for much the same reasons.
Which is one reason why I'm encouraged by the animal rights movement - even if their methods seem wrong...


The Soul...

Post 9

The Unmentionable Marauding Pillowcase

Just how do "we" treat chimpanzees, and for what reasons? Here in Africa "we" eat them because they're edible and there's a shortage of edible things here. In Europe or America "you" put them in zoos or circuses or do research on them - probably because "you" are curious. As for me, if I ever meet a chimpanzee I'll treat it with kindness, respect and civility, as I would any other sensitive, sentient creature, because I'm fortunate enough to have alternative sources of food and alternative means of satisfying my curiosity.

So how will a superior AI treat us? If I ever come across one, I'll try to be its friend. Will that be of any help?


The Soul...

Post 10

Martin Harper

My "we" was worldwide, in an attempt to be inclusive... smiley - winkeye

I too would probably try and treat a chimp with kindness. But then I'd go home and take some new medicine that's been tested on animals (as indeed it has to be, by UK law). I'm not sure how much trying to be friendly would have helped the animals that took part in that test, either...


The Soul...

Post 11

Athon Solo

1. A question which hasn't yet been addressed here is: what is a soul?

2. I do not think the situation in The Matrix or with the chimpanzees would arise because we would undoubtedly program any AI we built with Asimov's three laws of robotics. For those who don't know, these are as follows:

i) A robot may not harm a human being or, through inaction, allow a human being to come to harm.

ii) A robot must obey any order given to it by a human being unless this conflicts with the first law.

iii) A robot must protect its own existence unless this conflicts with the first or second laws.

Unfortunately this can bring about some strange situations. Both of the following assume that the AI is controlling a mobile robot, and both are featured in Asimov's own stories.

1. If a robot is told to 'get lost', especially in an urgent manner (i.e. with a raised voice), it will do so (unless this conflicts with the first law).

2. If the robot is an android, looking very much like a human, it may come to consider itself a human and thus come to consider humans as sub-human. This means that the third law merges with the first law, while the second law remains intact. The problem arises that, since the 'droid considers humans sub-human, it will not obey or protect them, and can harm them.

There are of course ways round this. To combat the first situation you could program the robot with a limited amount of intelligence, so that it ignores such 'commands'. The second situation can be avoided by limiting the AI so that it will never consider itself human, and will always consider humans as humans, no matter which it thinks is more advanced.
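Just to make the ranking concrete: the easy part of the three laws is their strict priority ordering, which a few lines of Python can sketch. The hard part hides inside the predicates, each of which would need human-level judgement - every name below is made up purely for illustration:

# Toy sketch of Asimov's three laws as a strict priority ordering.
# Each stub predicate hides an unsolved problem: deciding what counts
# as "harm" or "an order" needs human-level judgement.

def harms_human(action):             return action.get("harms_human", False)
def allows_harm_by_inaction(action): return action.get("allows_harm", False)
def disobeys_order(action):          return action.get("disobeys_order", False)
def endangers_self(action):          return action.get("endangers_self", False)
def demanded_by_higher_law(action):  return action.get("demanded", False)

def permitted(action):
    # First Law: no harm to a human, by action or inaction.
    if harms_human(action) or allows_harm_by_inaction(action):
        return False
    # Second Law: obey orders, unless disobedience is demanded by the First Law.
    if disobeys_order(action) and not demanded_by_higher_law(action):
        return False
    # Third Law: self-preservation, unless Laws 1 or 2 demand the sacrifice.
    if endangers_self(action) and not demanded_by_higher_law(action):
        return False
    return True

print(permitted({"disobeys_order": True}))                    # False
print(permitted({"endangers_self": True, "demanded": True}))  # True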


Athon Solo


The Soul...

Post 12

Martin Harper

The problems with Asimov's laws are that...
-> They've never been used so far.
-> They're likely to be impossible to program.
-> They'd slow things down.
-> They'd have bugs in them - anything the size of an AI *will* be buggy.
-> They're inhumane - entities with human-level intelligence and consciousness *should* have rights, and not be slaves.

> "There are of course ways round this. to ombat the first situation you could program the robot with a limited amount of intelligence so that it ignores such 'commands'."

I hate to break it to you, but the ability to obey the three laws certainly requires human-level intelligence already. In fact, since they're consequentialist, they likely require *more* intelligence than is present in the average human.


The Soul...

Post 13

The Unmentionable Marauding Pillowcase

The problem with an "intelligent" computer or robot is that it will not do what it is told to!


The Soul...

Post 14

JodaCast

My hamster does what it's told. :-)


The Soul...

Post 15

The Cow

Full Artificial Intelligence [AI-complete] probably cannot be programmed... merely grown.
This raises the problem of 'programming' it.

However, AIs can be 'immortal' and need not die. If you had the choice between dying or having the last week not exist, which would you choose? All the AI needs is a regular backup... ;-)
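A minimal sketch of that backup idea, assuming Python's standard pickle module and a made-up file name - the philosophy, not the code, is the hard part:

import pickle

BACKUP_FILE = "ai_state.pkl"   # hypothetical snapshot location

def backup(state):
    # Snapshot everything the AI has learnt - its "soul" - to disk.
    with open(BACKUP_FILE, "wb") as f:
        pickle.dump(state, f)

def restore():
    # Resurrect from the last snapshot; everything learnt since is lost.
    with open(BACKUP_FILE, "rb") as f:
        return pickle.load(f)

state = {"memories": ["first boot"], "skills": ["chess"]}
backup(state)
state["memories"].append("the week that will not have existed")
state = restore()          # "dies" and comes back minus the last week
print(state["memories"])   # ['first boot']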


The Soul...

Post 16

Martin Harper

Depends - AIs can suffer from 'old age' in a similar way to humans: as currently programmed, old AIs tend to be inflexible, resistant to change, and set in their ways.

Any relation to humans is entirely coincidental... ;-)


The Soul...

Post 17

The Cow

However, they don't suffer from electronic senile dementia...

When would an AI be 'old'?
I reckon it'd probably take much less time, due to the speed at which they live...


The Soul...

Post 18

Proteus

You could look at the subject of slaves to get the same answer.

"..The problem with free slaves is that they will not do what they are told to!"

A true AI - if such a thing can or will ever exist - is most probably not going to stand being treated like a robot. If it does, it is either very cunning or not a true AI.

A funny thing is: when an AI becomes sentient, it is no longer an AI. It is in fact an I!


The Soul...

Post 19

The Cow

It's still artificial... it's just that it is now not under our direct control...


The Soul...

Post 20

Q*bert

Nearly everybody's assuming that an AI would be emotionally similar to a human being. There's no reason why it should be. Human emotions are at least partly biologically decreed, and therefore a product of natural selection (I'm assuming Darwin was correct. If you're a creationist, substitute "the will of God" for "natural selection"). Since an AI, by definition, is *not* the product of natural selection, we could pretty much give it any weird personality we wanted, limited only by our own ability to work out the bugs in the program.
Interestingly, Douglas Adams has already suggested this idea in the Hitchhiker series. In The Restaurant at the End of the Universe, Arthur Dent & Co. encounter a cow-like alien which has been genetically programmed to *want* to be eaten. If we programmed an AI that *wanted* to be totally subservient to humans, would that be inhumane? A point to consider: it wouldn't be the same as a human flagellant wanting to be enslaved because he/she thinks he/she deserves it (I don't think... any dissenters?), because in a human this would arguably be the result of guilt or self-sacrifice. An AI designed to be a slave wouldn't be denying its ambition - it would have none. Or is creating a totally ambitionless, selfless being immoral in itself? Or is a selfless being morally *better* than a greedy, self-absorbed human? A more interesting and relevant question might be how this would affect the soul of the human the machine served...
As for whether an AI would have a soul, I don't think you can argue that either way. Since most theologians that I know of agree that the soul is transcendent and indefinable, there's no grounds for rational argument for or against it. Either you believe it has a soul or you don't.

