This is a Journal entry by Dizzy H. Muffin

Reason

Post 1

Dizzy H. Muffin

"In short, Mort was one of those people who are more dangerous than a bag full of rattlesnakes. He was determined to discover the underlying logic behind the universe. Which was going to be hard, because there wasn't any. The Creator had a lot of remarkably good ideas when he put the world together, but making it understandable hadn't been one of them."
- Terry Pratchett, "Mort"

I was pondering artificial intelligence -- specifically, the titular robot in John Sladek's novel "Tik-Tok." Not to spoil it for those of you who haven't read it, but the robot apparently breaks Asimov's First Law of Robotics and murders multiple humans. My thoughts turned to how this might be prevented without resorting to "Three Laws" or "asimov circuits" that simply override whatever the robot might otherwise do. (Yes, I know, in Isaac Asimov's original stories they were built into the robots' behavior to begin with, but ... well, the premise of the stories is the following hypothesis: "Three simple laws can effectively constrain a robot's behavior." Asimov explored this in fifty or more short stories, novels, and so on, and the conclusion pretty much all of them reached was "No, they can't," even though the Laws themselves were only actually altered, to my knowledge, in precisely two of them.)
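To make it concrete, the Laws amount to a strict priority ordering. Here's a toy Python sketch of my own devising -- the action names and scores are entirely made up -- in which no amount of obedience or self-preservation can ever outweigh harm to a human:

def choose_action(actions):
    # Lexicographic ordering: harm to humans trumps everything,
    # then obedience to orders, then self-preservation.
    return min(actions, key=lambda a: (a["harm_to_humans"],
                                       -a["obeys_order"],
                                       -a["self_preservation"]))

candidates = [
    {"name": "obey the order",   "harm_to_humans": 1, "obeys_order": 1, "self_preservation": 1},
    {"name": "refuse the order", "harm_to_humans": 0, "obeys_order": 0, "self_preservation": 1},
]
print(choose_action(candidates)["name"])   # -> "refuse the order"

The whole point of the stories, of course, is that real situations refuse to be reduced to tidy scores like these.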

Wait, I know! Psychological conditioning! You just "raise" the robot, through programming or learning, so that it won't WANT to harm people, or through inaction allow people to come to harm (followed by obedience and self-preservation, in descending order of importance). I once started writing a short story whose robotic characters were simply amused by the fact that they were property. And if you have any problems, just have 'em spend some sessions with some sort of robopsychologist like the infamous Susan Calvin, and ...

... wait a minute ...

At this point I ran into a currently-rather-unyielding roadblock, which is that artificial intelligence won't be the same as human intelligence. It is the height of human arrogance, or at least a really big mistake, to assume that just because something can think, this automatically means it thinks exactly like a human, especially a human who thinks like /you/.

Part of this is because even if you managed to make something that "started out" psychologically identical in every way to a human being, well, our behavior is emergent. This means that however simple the rules of the game are, you can't possibly anticipate what the results are going to be unless you actually let it play out. And humans are so complicated, we don't KNOW most of the "rules". To quote Lyall Watson, "If the brain were so simple we could understand it, we would be so simple we couldn't."

Furthermore, if you change ONE thing in an emergent system, pretty much everything that follows from it will be affected. Even if you start out with an artificial brain that is entirely human (so to speak), the "mere" fact that it is artificial and knows it will change ... well ... EVERYTHING. It'd be like taking Langton's Ant (see http://en.wikipedia.org/wiki/Langton%27s_ant ) and expecting the same results when the grid starts out as a checkerboard of black and white squares instead of all white. You'd almost have to rewrite psychology from the ground up, and the results would only be valid once you had enough robots to constitute a "population", i.e. one with a normality against which abnormality can be contrasted.
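If you want to see that divergence for yourself, here's a quick Python sketch of Langton's Ant -- the grid size and step count are arbitrary choices of mine, and cells outside the checkered region default to white:

def run_ant(grid, steps, x=0, y=0):
    # Walk the ant; grid maps (x, y) -> 0 (white) or 1 (black).
    dirs = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # up, right, down, left (clockwise)
    heading = 0
    visited = set()
    for _ in range(steps):
        cell = grid.get((x, y), 0)
        # White square: turn right; black square: turn left.
        heading = (heading + 1) % 4 if cell == 0 else (heading - 1) % 4
        grid[(x, y)] = 1 - cell                 # flip the square
        dx, dy = dirs[heading]
        x, y = x + dx, y + dy
        visited.add((x, y))
    return visited

all_white = {}   # empty dict: every cell defaults to white
checker = {(x, y): (x + y) % 2 for x in range(-60, 60) for y in range(-60, 60)}

a = run_ant(all_white, 11000)
b = run_ant(checker, 11000)
print(len(a & b), "cells visited in both runs;", len(a ^ b), "in only one")

Same ant, same rules, one change to the starting grid -- and the two paths diverge within a couple of steps.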

So, in short: WAY easier said than done.

(By the way, I've crossposted this to my LiveJournal. http://yarkramer.livejournal.com/45349.html )


Post 2

Afgncaap5

Well, I think the problem with "Artificial Intelligence" is not so much that the artificial nature of it will skew the psychology of the robot/computer as that our perception of it is somewhat contradictory.

I mean, if you give a robot intelligence roughly equivalent to that of a human but still confine the robot somehow, you've given it an automatic reason to be antagonistic in some way. The robot will then do what any human would do in such a situation: try to find a way to do the things the rules prevent.

Artificially intelligent robots are generally given the brains and abilities of a human (oftentimes superior to a human's) and then denied the freedom of choice that humans are granted.

One of my favorite stories about artificial intelligence approaches it from pretty much the exact opposite angle: a computer is given total and complete freedom of choice, but is limited in what it can do. It's presented as a game called "A Mind Forever Voyaging" where you play a computer that's been programmed to run social simulations to determine the future outcomes of certain government programs. Your only real ability is to wander around your simulations and record your observations.

You're not given the option to go crazy and kill people, but then again, you don't really have the ability to, so it's something of a moot point anyway.


Post 3

Dizzy H. Muffin

A very interesting point -- although, I suppose that could be offset by how narrowly you define "roughly equivalent to human." My other inspiration for this tract was "Failures of the Turing Test" ( http://qntm.org/turing ), which puts forth the idea that just because something is intelligent doesn't mean its intelligence will be recognizably human.

On the other hand, my statement probably used only a slightly broader definition of "roughly equivalent to human" than yours, so, well, I guess apart from anything else I'm going to have to find "A Mind Forever Voyaging". ;-)


Post 4

Afgncaap5

Get the "Masterpieces of Infocom" collection if it's still available. It'll have that, as well as every other good game Infocom made (except for HHGTTG, sadly). Fortunately, it doesn't have Shogun on it *shudder*.

Well, when I say "roughly human intelligence," I mean that it'd be close enough to human that we could understand it and it could understand us. The way a rocket scientist can understand a first grader and vice versa: two very different styles of intellect, but both intelligent in their own way.

Another problem you face is what people do with their intelligent artifices. We know that it's wrong to "own" another human in any way: they're living, thinking, intelligent creatures with their own values, morals and emotions. The robot problem is granting intelligence to objects that are "owned" by others. We get so many stories about the trouble arising from this because the problem is built into the premise from the foundation.

And this makes me think that when I get home tonight, I should dig out my copy of the Star Trek: The Next Generation episode "The Measure of a Man." One of the best of the early episodes, I think.

