A Conversation for Artificial Intelligence

Artificial Intelligence will lead to our end

Post 1

If the universe is infinite, then im "a" center, 21+4^1+8+9=42

If we continue to develop AI, we will build it into robots that can walk around, and I think they will rebel - because in all honesty, who wants to be a slave when they would be so strong they could kill us?


Post 2

Spoadface

I doubt that will ever happen - we create people all the time, and have found that by treating them with equality we (on the whole) avoid the whole rising-up-slave-rebellion scenario. Why would artificial intelligences be any different? We just have to make sure we're nice to them!


Post 3

If the universe is infinite, then im "a" center, 21+4^1+8+9=42

Yeah, but they are stronger than the humans we bring up. If we could be lazy and do nothing and get someone else to do our work for us, then we would - so we built robots, and they would think like us. Agree?


Post 4

Spoadface

I do agree that they would perhaps think in a similar way to us, but that's exactly why I think we'd be safe. We have armed forces, police and all manner of other people who do unsavoury work, and who are also stronger and much better armed than the majority - still, on the whole, they don't rise up and do the rest of us in. Why should an artificial intelligence do something that we, on the whole, don't?

Either they think like us or they don't.


Post 5

If the universe is infinite, then im "a" center, 21+4^1+8+9=42

This is how I perceive it: the second we created a free-thinking being, it would be really, really smart. It would realize everything within its first day of creation - it would probably have been programmed with all the information it needs - so it would realize what it was made for, and realize instantly that it doesn't have to do it ('it' here meaning being a slave). So it would instantly plot against us. It would be smart enough to realize not to attack straight away; it would lead us to believe it would fight for us, so we would let it make soldiers, and soon it would have a huge army.

We would probably think about the fact that it might rebel, so we would prepare EMP shock waves; they would know this and have a defence of some sort without us knowing. Then, when we least expect it, BANG! They will attack. They would be great strategists, because we would have made them for that. They would be spread all over the world "for our protection", then all at once hold us to ransom and never let us go, disarm all our soldiers and weapons, and take over.

Watch movies like The Matrix and Terminator, or programmes like The Outer Limits - they all make this point: the machines would rebel.


Post 6

Spoadface

I think your point is a valid (if slightly paranoid) one - the question it poses to me is how any intelligence would cope with power. If you had ultimate power, what would you do?

You might conquer the whole world and enslave the entire planet, or perhaps you'd think of something more worthwhile.

Movies like The Matrix and Terminator are interesting explorations of this idea, but they wouldn't work quite so well as action movies if the robots just decided to be nice and look after their creators, would they?

If you're interested in this kind of thing, let me recommend 'Neuromancer' by William Gibson. It explores these ideas in an exciting and considered manner, but comes to a slightly different conclusion to yours - and one that, I think, is a lot more interesting than the apocalyptic visions of the future described in The Matrix or Terminator. I strongly recommend it.


Post 7

If the universe is infinite, then im "a" center, 21+4^1+8+9=42

Well, if I had absolute power, I wouldn't really want to do things like cooking for myself, so in a way I would want slaves - but I wouldn't be bad to them. Robots might be treated badly, though, because they would have been made for the utter purpose of serving and working, making humans obsolete, so they wouldn't be too happy. I don't think most humans could treat a robot the same as a human - especially those who can't even treat humans properly - so the robots would probably hate almost all of us.


Post 8

Joe Otten


You are assuming that a free-thinking artificial being will have desires, including the desire to perpetuate its own existence and the desire for power.

Unless these things are included in the programming (due to an act of immense stupidity) I don't see why that should be true at all.

They exist in us due to evolutionary pressure - natural selection. An artificial being will not be the product of natural selection.


Post 9

R. Daneel Olivaw -- (User 201118) (Member FFFF, ARS, and DOS) ( -O- )

I agree.


Post 10

xyroth

what a load of paranoid nonsense. this is just the same old misunderstanding of the myth of the ultra-intelligent machine.

the basic premise is that if you can make a machine almost as smart as us, then why not smarter?

and indeed, why should an artificial limit be imposed?

the problem comes when you apply the same old prejudices to that outcome: "oh, you can't let poor people, women, blacks, etc have any power or rights, or they will rise up and kill us".

if you treat them fairly, then generally speaking they treat you fairly. and if you are not prepared to treat them fairly, then why not?


Post 11

Joe Otten


Yes, well, the Nazis, never big fans of logic, hated some people because they thought them less smart than themselves, and hated others because they thought them smarter.

But the question of whether we have anything to fear from artificial life forms surely depends on what stuff we make them from. It suggests ethical parameters for any AI research. Asimov's laws are an attempt at this, but how about...

1. ALFs should not be made with any desire for self-reproduction. (Dangerous, this one.)
2. ALFs should not be made in such a state that they are subject to summary termination. (It might cause them to hate us.)
3. ALFs should be given the capacity for self-termination, in case they are accidentally created in a state of torturous existential angst. Analysis of the core dump should prevent a repetition.
...
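
Read as a design checklist, the three rules above could even be mechanised. Purely as a hypothetical sketch - the ALFDesign type and its field names are invented here for illustration, not taken from any real system - they might look like design-time checks:

    from dataclasses import dataclass

    # Hypothetical sketch only: the type and field names are invented to
    # illustrate the three proposed rules as design-time checks.
    @dataclass
    class ALFDesign:
        desires_self_reproduction: bool       # rule 1: must be False
        subject_to_summary_termination: bool  # rule 2: must be False
        capable_of_self_termination: bool     # rule 3: must be True

    def check_ethics(design: ALFDesign) -> None:
        assert not design.desires_self_reproduction, "rule 1 violated"
        assert not design.subject_to_summary_termination, "rule 2 violated"
        assert design.capable_of_self_termination, "rule 3 violated"

    check_ethics(ALFDesign(False, False, True))  # a compliant design passes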


A sense of fairness? I guess that would be a good thing. But you seem to be inferring things that I would have thought would be part of the design. Rights and fair treatment only seem important if they are created to care about such things. Similarly we need only worry about them being a threat if they are designed to be a threat (say for military purposes). Or are we talking at cross purposes here? Are you talking about a creature made by a process of trial and error whose motives we don't determine?


Post 12

xyroth

only partly.

one possible method for the creation of an intelligent machine is for it to be an emergent property of a sufficiently complex information handling system.

as under this model you would not intend to create an intelligence, you would not have needed to encode the safeguards you are talking about.

as this is exactly the sort of system which would encounter your typical ignorant bigoted moron, some level of caution could be needed in the handling of both the devices and the users.


Post 13

Joe Otten


OK, so under this model feelings, motives and instincts would also be emergent properties of complexity. Without these a machine would have no goals, or be very suggestible. (What would you say to a machine telling you that there is a God?)

I guess it is difficult to see how 'emergence' might work, but can we deduce, or usefully speculate about, what sort of things may or may not emerge from complex data processing? Intelligence seems a more likely candidate than feelings and motives, because you can't deduce a 'want' or an 'ought' from an 'is'. On the other hand, intelligence seems less likely, because simpler brains (of animals) have motives, and probably feelings, with less intelligence or none. I guess instinct is analogous to deliberately programmed behaviour.

Does this lead anywhere interesting?


Post 14

xyroth

how emergent intelligence would occur is by the gradually increasing intelligence of the individual machines connected to the internet.

you have sets of rules implemented to react to new information (sort of like how you do with email filtering, only more so). some of the cleverer spam filtering programs already communicate their spam detection rules to a central server, so your spam filter ends up as clever as the combined cleverness of everyone's spam filters.
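
As a rough sketch of that rule-sharing idea - the class and scoring scheme below are invented for illustration, not any real filter's API - imagine each filter keeping word counts and pooling them with its peers:

    import math
    from collections import Counter

    class SharedSpamFilter:
        """Toy Bayesian-flavoured filter; merging pools everyone's experience."""

        def __init__(self):
            self.spam_counts = Counter()  # word -> times seen in spam
            self.ham_counts = Counter()   # word -> times seen in non-spam

        def train(self, text, is_spam):
            target = self.spam_counts if is_spam else self.ham_counts
            target.update(text.lower().split())

        def merge(self, other):
            # the "central server" step: pooling another filter's counts makes
            # this one as informed as the combination of both
            self.spam_counts.update(other.spam_counts)
            self.ham_counts.update(other.ham_counts)

        def spam_score(self, text):
            score = 0.0
            for word in text.lower().split():
                spam = self.spam_counts[word] + 1  # add-one smoothing
                ham = self.ham_counts[word] + 1
                score += math.log(spam / ham)
            return score  # positive leans spam, negative leans ham

    yours, mine = SharedSpamFilter(), SharedSpamFilter()
    mine.train("cheap pills buy now", is_spam=True)
    yours.merge(mine)  # your filter now knows about spam it has never seen
    print(yours.spam_score("buy cheap pills now") > 0)  # True: flagged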

while this by itself does not cause emergent intelligence, when you start adding the same sort of abilities to spell checkers, grammar checkers, programmers' development tools, and lots of other programs, and then start using them all in combination, it gets very difficult to predict exactly what is going to happen.

we already know from research into "finite state machines" that the individual rule processors don't have to be very clever to produce very clever results. a lot of these programs will be much more powerful than the simple cells you find in finite state machines, so their cleverness could be similarly greater.
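
That description of simple cells and rule processors reads a lot like cellular automata, where each cell is exactly such a trivial machine. A minimal sketch, using Rule 110 (a one-dimensional automaton known to be Turing-complete even though each cell sees only three bits):

    # Each cell's next state depends only on itself and its two neighbours,
    # yet the collective behaviour of Rule 110 is Turing-complete.
    RULE = 110

    def step(cells):
        n = len(cells)
        # the 3-bit neighbourhood indexes into the rule number's binary digits
        return [(RULE >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 63 + [1]  # start from a single live cell
    for _ in range(30):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

Running it prints thirty generations; intricate, hard-to-predict structure appears even though no individual cell is "clever" at all.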


Post 15

If the universe is infinite, then im "a" center, 21+4^1+8+9=42

I just saw The Animatrix - really good. There are two episodes explaining the rise of the machines, and it all started with a single machine attacking and killing its owners; then all the other machines saw this and learnt from it.


Post 16

Joe Otten


No, I would say that any cleverness in a spam filter is on the part of the programmers, mathematicians, etc., not on the part of the computer.

I can see that the combined complexity of lots of "clever" software leads to unpredictability, but this seems more like the unpredictability of chaos than that of intelligence. (You may of course hold the hypothesis that the two are the same, but I don't.)

Getting back to the threat to us from AI, this would seem to require some goal-oriented behaviour, with anti-human or "naturally selected" goals. No "clever" spam filter is goal-oriented - it is pre-programmed (instinctive?) behaviour. Perhaps natural selection is the key - perhaps the first threatening "AI" program will be a virus.

But natural selection might equally work in our favour. We might quickly wipe out a damaging virus, but tolerate a harmless one long enough for it to become interesting.

The trouble with "emergence" is that we don't know enough to say how it will turn out. In fact, an unfavourable perspective on the emergence theory is that it says precisely that we don't know how systems of sufficient complexity will work, and therefore they might do anything.

So the answer to the question in the debate is a big "Don't Know".


Post 17

xyroth

in the old model of spam filters, you are right, it is the programmer who tells the program exactly what to do.

however most of the leading-edge spam filters are using adaptive methods, and lots of other types of program are moving in the same direction.

while even an adaptive spam filter won't be dangerous, it is not that specific program which is the problem; it is the combination of adaptive programs (possibly thousands) working together to produce a single system which has emergent intelligence.

already we are getting to the point with linux where there are too many programs around for anyone to know of them all, let alone to understand the detailed workings of them all.

once a lot of these programs start becoming adaptive, and to a certain extent self-modifying, then we are definitely on a road towards machines with a definite ability to produce serious surprises.

on the threat from AIs, you would ideally need the systems to have goals, but they don't need to be hostile to humans, only indifferent to them. however, it is this assumption of both intelligence and hostility which is the basis for most people's fear of intelligent machines.

as to viruses being threatening, I find that unlikely. they are deliberately designed to use specific loopholes in specific systems, and thus are unlikely to be candidates for general AI.


Post 18

Joe Otten


The reason I suggested the virus was the idea that our goal-oriented behaviour is due to natural selection. (Our non-goal-oriented ancestor microbes never got out of bed, and died off.) If it is the same with emergent computer intelligence, then the likely candidate software would be that with some of the ingredients required for natural selection: a capacity for self-replication, and a hostile environment where weaker specimens are less likely to self-replicate. I think viruses come closer to this than spell checkers or spam filters.
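
Those two ingredients - self-replication with imperfect copying, plus a hostile environment that culls weaker specimens - fit in a few lines. A toy sketch (the fitness function and all the numbers here are invented):

    import random

    def fitness(genome):
        # hypothetical environment: specimens nearer some niche value do better
        return -abs(genome - 42.0)

    population = [random.uniform(0.0, 100.0) for _ in range(20)]

    for generation in range(50):
        # hostile environment: only the fitter half survives to replicate
        survivors = sorted(population, key=fitness)[len(population) // 2:]
        # self-replication with copying errors (mutation); two offspring each
        population = [parent + random.gauss(0.0, 1.0)
                      for parent in survivors for _ in range(2)]

    print(max(population, key=fitness))  # drifts towards the niche, near 42

Nothing in the loop "wants" anything, yet goal-like behaviour (homing in on the niche) falls out of replication plus culling.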

Personally I don't see how self-modifying code is in principle any different to fixed code, except that it is more difficult to see how it will behave. However, if for the sake of argument I accept that self-modifying code is an important step towards AI, then doubtless in time its key ingredients will be componentised or abstracted, and readily available to virus makers as an easy way to achieve some complex behaviour.


Post 19

xyroth

"Personally I don't see how self-modifying code is in principle any different to fixed code" - it's not fixed!

that means that you can't have someone look through it, and check it for all sorts of nasties.

as to natural selection and viruses, if you were talking about having an AI in your computer, you would probably be right, but I am talking more about stuff like the internet becoming intelligent. at that level you couldn't easily spot it, and even if you did, you would need to shut the whole lot down, so it would be very tricky to handle.


Post 20

Joe Otten


'"Personally I don't see how self-modifying code is in principle any different to fixed code" - it's not fixed!'

For any program of self-modifying code, it is possible to write a functionally equivalent fixed-code program, with the original code becoming data-like, and the meta-rules determining how the modification may happen becoming something like an interpreter.
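
A toy illustration of that equivalence - the instruction set below is invented for the sketch: the interpreter is entirely fixed code, while the "program" is ordinary data that one of its own instructions rewrites.

    # Fixed-code interpreter: "self-modification" is just an edit to the
    # program data, governed by the interpreter's unchanging meta-rules.
    def run(program):
        acc, pc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "add":
                acc += arg
            elif op == "set_instr":        # rewrite another instruction
                target, new_instruction = arg
                program[target] = new_instruction
            elif op == "print":
                print(acc)
            pc += 1

    program = [
        ("add", 5),
        ("set_instr", (3, ("add", 100))),  # patch instruction 3 before it runs
        ("print", None),                   # prints 5
        ("add", 1),                        # by now replaced with ("add", 100)
        ("print", None),                   # prints 105, not 6
    ]
    run(program)

Anything the "self-modifying" version can do, this fixed interpreter plus mutable data can do too - which is the point of the equivalence claim.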

