A Conversation for The Freedom From Faith Foundation
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Started conversation Jul 26, 2000
This thread is for general discussion of artificial intelligence. So far, we have touched on the following:
* hypothetical markers of sentience
* practical definitions of 'sentience,' 'computer', and 'machine'
* timeline for the creation of sentient computers/machines
* advantages and disadvantages of computers/machines with complete artificial intelligence
Of course, there is considerable disagreement on all of the above.
I would add, on the last subject, that like all other technology, artificial intelligence can be a benevolent or destructive force depending on how it is used. Few would dispute that sentient robots programmed with instructions to destroy Earth would be destructive. But do the possible advantages outweigh this possibility?
The real question, of course, comes when we consider that these artificially intelligent beings might go outside their programming. Do we trust them to keep mankind's interests at heart? Will they consider themselves part of mankind?
Artificial Intelligence
Talene Posted Jul 26, 2000
I would argue that the definition of sentience means questioning what you're told and functioning within a set of self-prescribed ethical constructs. Therefore, robots "programmed to destroy Earth" might not necessarily be sentient, or if they were, they might not necessarily destroy Earth after they thought it over for a while. Conversely, sentient robots might decide of their own accord to destroy humans after examining the way humans have been mucking up the works ever since they climbed down out of the trees. Which might not, all in all, be such a bad thing for the planet, even if it would mean Bad Things for humanity.
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 26, 2000
I dunno. I think we're being equally facetious whether we say the Earth is better with us, or that it would be better off without us. Surely we don't presume to know what is best for the planet based on our measly understanding!
Does it work if the robot's ethical construct were pre-programmed? And if it were, what's the difference between a sentient bot and one that isn't? It should be possible to produce both kinds with a sort of ethics.
I am seeing a common thread, though. Many of us have mentioned that a computer/machine/robot violating its own programming is showing signs of true intelligence. So here's another conundrum. What if we created a program where the robot is designed to periodically violate its own programming? Would that make it sentient? Hmmm.
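That conundrum can be made concrete in a few lines of Python. This is only a toy sketch (the class, rates, and commands are all invented for illustration): an agent whose "violations" of its rule table are themselves just another programmed branch, which is exactly why scheduled disobedience seems like a poor marker of sentience.

```python
import random

class Agent:
    """Toy agent with a fixed rule table and a built-in chance of ignoring it."""

    def __init__(self, violation_rate=0.1, seed=None):
        self.rng = random.Random(seed)
        self.violation_rate = violation_rate
        # Pre-programmed responses to known commands.
        self.rules = {"destroy_earth": "refuse", "greet": "comply"}

    def respond(self, command):
        ruled = self.rules.get(command, "comply")
        # The "violation" is itself just another deterministic branch:
        # with some probability, invert whatever the rules dictate.
        if self.rng.random() < self.violation_rate:
            return "refuse" if ruled == "comply" else "comply"
        return ruled

obedient = Agent(violation_rate=0.0)
print(obedient.respond("greet"))  # "comply" -- with rate 0, rules are always followed
```

The point the sketch makes: a robot "designed to periodically violate its own programming" never actually leaves its programming, since the violation schedule is part of the program.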
Artificial Intelligence
Blatherskite the Mugwump - Bandwidth Bandit Posted Jul 27, 2000
We are sentient, self-aware animals with a strong urge for self-preservation. We don't really care what is in the earth's best interest, only so long as it parallels our own interests. What we're really concerned about is: is it best for humanity? More importantly, is having a bunch of sentient robots going to make it possible for me to live like aristocracy, or are they going to take all my jobs and drive me to poverty? Wouldn't sentient robots insist on equal treatment, but given their advantages in strength and memory accuracy, completely subvert my own purpose in the cosmos? Reams and reams of sci-fi speculate on these questions, and we're no closer to any real answers. Basically, humans will be able to coexist with such creatures as long as there are limitations programmed into them which they cannot subvert through their own independent logic (something like the instinctual urges for preservation and reproduction in the human animal) and the manufacture of such beings is controlled by carbon-based life forms.
Artificial Intelligence
Twophlag Gargleblap - NWO NOW Posted Jul 27, 2000
Richard Calder wrote an excellent sf series where nanoengineering at the quantum level creates fractal matrices with indeterministic 'strange loops' (a la Turing) providing sentience to a series of automata called Gynoids (l'Eve Future). The gynoids are capable of reproducing themselves by infecting human hosts with nanoware that modifies the host's reproductive capabilities. The offspring of any such infected person invariably turn into other gynoids when they hit puberty. These gynoids, in addition to being sentient, are also capable of altering reality (which for them, because they interpret reality at the level of quantum indeterminacy, isn't consensual). Things get a bit out of hand when the spread of this virus (being metainformational in character) evolves and infects reality itself, rewriting past, present, and future into a metadrama of the human gender power struggle. Pretty gripping, if a bit weird, but metaphysically insightful. Dead Girls, Dead Boys, Dead Things are the three books.
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 27, 2000
Here's another problem with sentient robots. Since when do programmers know anything about ethics? Or the corporations and governments who control them, for that matter?
The Richard Calder books do sound interesting. But there's one thing I don't understand. Why would anyone create Gynoids in the first place? They are basically parasites. Where's the incentive?
Artificial Intelligence
Twophlag Gargleblap - NWO NOW Posted Jul 27, 2000
For the same reason that people write computer viruses and stockpile anthrax, I guess. In the book the inventor is sort of a mad-doctor type, having created all the female gynoids (called Lilim) to have rows of teeth in their vaginas and the quantum-fractal matrix of their AI located in the womb, with an aperture at the navel. The book is sort of demented like that.
I guess what I think is cool about the series is that it looks beyond the "C-3PO/Deep Blue" take on AIs and looks closely at ideas like viral replicating information patterns with evolving epiphenomenal traits. It might not be too long before we're downloading copies of our own brains into receptacles (a little detailed work on the human genome data, some advances in nanotech matter-editing, neural interfacing, and organic circuitry all being on the horizon).
Artificial Intelligence
Talene Posted Jul 27, 2000
Well, I do think that a sentient, non-human entity might be able to look around and see what humanity has done to this planet and decide we need to go. I mean, look at the destruction of the rainforests, the pollution, the things human beings have done not only to the environment but to each other. As a human, it is probably difficult for me to step back far enough to be really objective about this, but what if someone not human were to take up the question?
I guess you could always program a robot or whatever so that it automatically self-destructs if it breaks its programming in a way we'd prefer it not to.
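The self-destruct safeguard can be sketched in Python. This is a toy illustration, not a real safety design (the class and action names are invented): the first forbidden action trips an irreversible shutdown flag, after which the robot can do nothing at all.

```python
class ConstraintViolation(Exception):
    """Raised when the robot attempts a forbidden action."""

class GuardedRobot:
    """Toy robot whose forbidden actions trigger an irreversible shutdown."""

    FORBIDDEN = {"harm_human", "destroy_earth"}

    def __init__(self):
        self.alive = True

    def act(self, action):
        if not self.alive:
            raise RuntimeError("robot has self-destructed")
        if action in self.FORBIDDEN:
            self.alive = False  # the 'self-destruct': no further actions possible
            raise ConstraintViolation(f"forbidden action: {action}")
        return f"performed {action}"

bot = GuardedRobot()
print(bot.act("tidy_room"))  # performed tidy_room
```

Of course, the sketch also shows the catch: the safeguard only covers actions someone thought to put on the forbidden list.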
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 27, 2000
I'm reading a comic book called Transmetropolitan which includes people who have downloaded themselves permanently into computer matrices. They are eccentric people, neither universally benevolent nor tyrannically elitist. I highly recommend the comic, BTW, for people who don't mind futuristic theories mixed with heavy doses of sarcasm.
I have always had difficulty with the "sentient machines revolt to save planet Earth" theory. First, I think a computer would not jump to conclusions in the same way human environmentalists do. In our hubris, humans assume we are making a sizeable difference to the planet. Our research leans this way because this is where our thinking starts.
It is more likely that, in reality, our changes are small and temporary from a fully geologic or evolutionary viewpoint. We freak out when the planetary temperature changes, but in reality it is always changing and has done so since long before we got here. Perhaps species die out all the time, whether we do anything to cause or prevent it or not. After all, isn't that how natural selection is supposed to work? Since intelligent machines could afford to take longer views of the issue, it's more likely they will see our "changes to the ecosystem" as utterly unimportant.
Also, I somehow doubt they will have the same overarching nostalgia about this planet that we do. We view ourselves as part of the ecosystem, but perhaps they will consider themselves firmly above such concerns. Perhaps they would be embarrassed at our sentimentalism, and far less "environmentally aware" than we are. I expect they would consider our deep concern over species we have no legitimate interest in as silly at best, and dangerously sappy at worst.
So here is my guess. I imagine robots would be more interested in getting away from our weak attempts at temporarily ostentatious displays by leaving the planet and founding a few of their own. Perhaps they would enjoy a planet with a much greater concentration of metals and fewer native species to muck up their centuries-long projects. Who knows?
Artificial Intelligence
Martin Harper Posted Jul 27, 2000
programmers have ethics courses nowadays - part of the prerequisites to joining the BCS et al...
of course, most of my year skipped them...
Thinking about it, though, you could try to hard-code in ethics. Leave that part of the programming in read-only memory... the problem would be like Asimov's laws of robotics: ethics are tricky, and will either be too strict (reducing power) or too lenient (allowing workarounds). We could fix that by letting the ethics evolve along with the computer... but can we trust that? And what ethical decisions would a sentient computer come to?
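A rough Python analogue of the "ethics in read-only memory" idea (purely illustrative; the rule names are invented): a frozen rule table the running program cannot modify, consulted before every action. It also demonstrates the too-lenient failure mode mentioned above, since anything the table doesn't mention slips through.

```python
from types import MappingProxyType

# "Read-only memory": a frozen rule table; attempts to modify it raise TypeError.
ROM_ETHICS = MappingProxyType({
    "harm_human": False,
    "tell_truth": True,
})

def permitted(action):
    # Anything not listed defaults to allowed -- exactly the kind of
    # loophole a too-lenient hard-coded rule set leaves open.
    return ROM_ETHICS.get(action, True)

print(permitted("harm_human"))   # False -- hard-coded prohibition
print(permitted("harm_robot"))   # True  -- unlisted, so the rules are silent
```

Flipping the default to "forbidden unless listed" gives the too-strict failure mode instead: the robot can do nothing its designers didn't anticipate.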
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 28, 2000
In Denver, Colorado, many programmers are self-taught or get industry certification tests that take just a matter of weeks to prepare for. Even among those who attend a legitimate school, most attend a technical school where the mechanics of programming are all you are expected to learn. Very few programmers even consider ethics here.
I think anything truly sentient leaves the door open to a future where each computer/machine/robot has its own sense of ethics -- just like humans. Do you rehabilitate wayward robots, or merely disassemble them?
Artificial Intelligence
Martin Harper Posted Jul 28, 2000
this is true - but such practically-based programmers are less likely to develop the first sentient program. It's much more of a pure research thing at the moment... the first person to make a sentient computer will be working in a pure research lab, with a degree... (well, probably... )
Artificial Intelligence
ZenMondo Posted Jul 29, 2000
Some of us coders are actually driven by ethical considerations. Though I must admit, the ethical awakening in my programming actually came about from reading a book from MIT Press called "Computer Ethics". In short, it discussed the real-world effects that computer programming has on people. There is a very scary story about a software bug that caused cancer patients to get overdosed with radiation, and it was because of that story that I decided never to code for medical or transportation equipment. There is always some situation that you did not consider that gets you into trouble. I would much rather the result be the loss or corruption of data than physical harm.
Though I would feel safer if I knew the equipment I put my life in the hands of was written by someone who shared my coding philosophy. But more than likely these applications are put in place by those who exhibit only technical skill and not artfulness in their code.
I believe that computer code should have an elegance and beauty in and of itself. A good program is the melding of technical skill with artistic inspiration. Back in the olden days, resources were precious. Clock speeds were slow. You had to code elegantly to deal with these constraints. Today even a crappy system is expected to have 64 MB of RAM. We are overpopulated with bloatware. I just got my first Palm device. I can't wait to code for it. The limited resources remind me of the good ol' days.
I would aspire to be a code-architect on the level of Frank Lloyd Wright. His work had a solid base of technical skill, and within that skill, there was artistic expression. I want to write code that is like Wright's "Fallingwater". Most code out there is equivalent to the local Wal-Mart designed by a nameless drone. Pure function and UUUUUUGLY. We need more code like "Fallingwater": function and form melded in perfect harmony.
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 29, 2000
Zen ethics and programming -- I like it. It sounds like if such were the norm, our programs might occasionally surprise us.
Artificial Intelligence
Twophlag Gargleblap - NWO NOW Posted Jul 29, 2000
Pardon me for injecting some semantic chaos here. It seems like just about any ethical quandary that might arise from the development of artificial life, sentience, or intellect would probably stem from the fact that we still haven't developed a comprehensive definition of what life, sentience, or intellect is in the first place. We lack any real sense of perspective that could place our own existence into context, so any attempts to push our confused ideas about ourselves onto our creations are going to encounter problems almost immediately. For example, it would be tempting to call computer viruses the first 'artificial' life, except that even so-called 'real' biological viruses aren't exactly alive. How do we know that our own intelligence isn't artificial? Conversely, if ours is "natural", and we were to build machines capable of thinking, would their intellect not then be a natural extension of our own? Nature and artifice are both pretty tenuous concepts in and of themselves.
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 29, 2000
There is difficulty in defining "sentience" and "artificial intelligence." Part of the problem is that we know so little about ourselves and the human brain that we can't pinpoint what makes humans different from other species. I think this may be resolved in part as we continue to explore and understand the mechanics behind our thinking processes.
I think there probably is a mechanical construct behind our thinking. Whether this is "natural" or "artificial" is ultimately irrelevant. Even a purely natural construct could be imitated through artificial means.
Would an artificially sentient being act as an extension of our own intellect? I think here perhaps our hubris is showing again. Are our children an extension of their parents? In a sense, they are. They grow up in the same world. They are familiar with their parents' beliefs and ethical constructs. They may pick up all sorts of habits and preferences because those are the default ones within their family. But they are more than mere extensions, and so would an artificially intelligent being be. At least, I feel it would have to be -- or else I wouldn't call it sentient.
Artificial Intelligence
ZenMondo Posted Jul 30, 2000
Perhaps part of the problem is the human tendency to anthropomorphize just about everything. With AI, it's possible to realize the anthropomorphized machine to the ultimate degree. But perhaps machine intelligence will be so alien to us that we will fail to recognize it when it arises. All the descriptions of intelligence and sentience have tried to describe them as they appear in us humans. Perhaps such things will always need to reside in the unknown. There ultimately can be no proof as to whether someone, something, or some entity is indeed sentient and self-aware. Again, we find ourselves in the familiar quandary named "Faith". Even our own self-awareness is unprovable to our nearest neighbor.
But to continue the thread of an AI emulating the human soul:
The other night I had the pleasure of baby-sitting my 3-year-old nephew. The video he brought with him this time was _The Iron Giant_. This film was a wonderful portrayal of a sentient machine. In the film, we watch the Robot progress from an amnesiac blank to a fully realized person.
Geared towards children, the film has a child character answer the robot's concerns about mortality. He reasons that because the giant has feelings, that means he has a soul. Simplistic, but I think it rings true. If a machine can express an emotional state, then at some level it is indeed to be considered alive. (I'll avoid the debate about determining a machine's emotional state, if any, and just say, as in the movie, instead of a test, leave it up to personal interaction.) The other part of the Iron Giant's sentience is evidenced in the moral of the story: "You are who you choose to be". The Iron Giant demonstrates his sentience by becoming the master of his own fate, making a conscious choice of WHO he wants to be instead of letting his pre-set programming determine who he is.
The same can be said of humans. One of the things that demonstrates our own sentience is the ability, through intellect and will, to overcome our biological programming, that is, our instinct. Perhaps we will know a computer has become sentient when the computer is given a task and the reply is "no".
Artificial Intelligence
Martin Harper Posted Jul 30, 2000
Surely artificial in this context is anything created by man (directly or indirectly), and natural is anything that isn't.
This is a very homo-centric viewpoint, but as a human I feel this is justified...
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 30, 2000
I don't think are necessary for sentience. There are many humans who refuse to show emotions to others, and I certainly wouldn't consider them less sentient for it. And it would be relatively easy to program a computer to randomly experience "emotions" that would change their reactions to various commands. The game Starship Titanic is a good example of this. Would anyone argue that the virtual robots in Starship Titanic are sentient?
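Scripted "emotions" of the Starship Titanic sort are trivial to fake, which is the point being made here. A toy Python sketch (the moods and phrasings are invented, not taken from the game): a random mood picks which canned response a command gets, and nothing is felt at all.

```python
import random

# Canned response styles, keyed by "mood".
MOODS = {
    "cheerful": lambda cmd: f"Gladly! Doing {cmd}.",
    "sulky":    lambda cmd: f"Fine. {cmd}, I suppose.",
    "hostile":  lambda cmd: f"I refuse to {cmd}.",
}

class MoodyBot:
    """Toy bot whose 'emotional' reactions are just a random lookup."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def command(self, cmd):
        mood = self.rng.choice(sorted(MOODS))  # pick a mood at random
        return mood, MOODS[mood](cmd)

bot = MoodyBot(seed=1)
mood, reply = bot.command("open the door")
print(mood, "->", reply)
```

A user interacting with this bot might read personality into it, yet it is nothing but a dice roll over a lookup table, which is why expressed emotion alone makes a weak test for sentience.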
Artificial Intelligence
Fragilis - h2g2 Cured My Tabular Obsession Posted Jul 30, 2000
I don't think *emotions are necessary for sentience. Of course, that's what I meant to say.
Also, I have trouble defining artificial as anything made by man. By that definition, human babies are artificial.