A Conversation for Artificial Intelligence
Bass Ackwards
Researcher 164530 Started conversation Dec 24, 2000
You're going at this AI thing bass ackwards. All'o'ya.
I call this the Bradbury AI System (patent pending), because Bradbury is a rather ethereal writer and this is a rather ethereal system. The "patent pending" was added because that's what the patent is currently doing.
What I'm doing is programming into this thing how to learn and generalize from previous knowledge (for instance, if it has a photosensor and bright lights hurt, and it rams itself into the bright light, knocking it over, it would tend to think running forward at full speed was a Good Idea). You give it a memory, a set of conditions where it must act (like, say, its wheels arbitrarily stopping when they're supposed to be moving), and a set of conditions where it should stop and wait (like connecting with the battery recharge station). This gives you something that works quite well in static conditions, but with every new variable the results become more and more random (because its initial responses come from a Random Number Generator).
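The loop described above could be sketched roughly like this. The condition names, state keys, and response list are my own assumptions for illustration, not anything from the post:

```python
import random

KNOWN_RESPONSES = ["forward", "reverse", "turn_left", "turn_right"]

def adverse(state):
    # Assumed example of a "must act" condition: wheels commanded
    # to move but not actually moving.
    return state.get("wheels_stalled", False)

def should_wait(state):
    # Assumed example of a "stop and wait" condition: connected
    # to the battery recharge station.
    return state.get("docked", False)

def choose_response(memory):
    """Pick the highest-labelled remembered response; if memory is
    empty, fall back to the RNG as the post describes."""
    if memory:
        return max(memory, key=memory.get)
    return random.choice(KNOWN_RESPONSES)
```

With an empty memory the behaviour is pure RNG, which matches the "initial responses are random" claim above.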
Now, my major crux is the Generalization subroutine. Each response known to the system is given a label of 1, assuming it works in removing the circumstances that caused the decision-making program to kick in in the first place. Should it work twice in removing adverse conditions, it's given a label of 2; three times, a label of 3. Should a given "3" response fail for three consecutive states of adverse conditions, it is downgraded to "2" status; should a "2" response fail three consecutive times, it is downgraded to "1". If a "1" response fails three times, it's erased from memory and isn't considered again unless the system RNG brings it up again.
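As I read the promotion/demotion rules, they amount to a small bookkeeping routine like the following sketch. The names are mine, "worked" means the response removed the adverse condition, and I've assumed a success resets the consecutive-failure counter (the post doesn't say):

```python
def update_label(memory, failures, response, worked):
    """memory maps response -> label (1 to 3); failures maps
    response -> consecutive failure count."""
    if worked:
        # Success: promote one step, capped at label 3 (assumption).
        failures[response] = 0
        memory[response] = min(3, memory.get(response, 0) + 1)
        return
    failures[response] = failures.get(response, 0) + 1
    if failures[response] >= 3:
        # Three consecutive failures: demote one step.
        failures[response] = 0
        memory[response] = memory.get(response, 1) - 1
        if memory[response] < 1:
            # Erased; only the RNG can bring it back.
            del memory[response]
```

A response that works once gets label 1, and nine straight failures from label 3 walk it back down and out of memory.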
I think the system rather works, based on my initial experiments. Any thoughts? Any nitpicks in the theory?
Bass Ackwards
xyroth Posted Mar 28, 2001
One slight hole in the theory: the system you are talking about is a variant of a well-known system that uses the weighting of results at one level to feed into the decisions at the next. The problem is that this type of system only works well if you have all of the right variables in a usable form, so that you can draw a straight line between the different groups you are trying to pigeonhole. If the groups are fuzzy, or overlap, or in some other way can't be cleanly separated using the variables you have defined, then for those groups the whole system doesn't work, and can't work.
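The classic textbook illustration of this point is XOR: a single linear threshold unit trained with the perceptron rule can learn AND, which is linearly separable, but no straight line separates the XOR groups, so the same learner fails however long it trains. This is a generic illustration of the separability issue, not a model of the Bradbury system itself:

```python
def train_threshold_unit(samples, epochs=50, lr=1.0):
    # Classic perceptron rule: nudge the weights toward each
    # misclassified sample; converges only if a separating line exists.
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

Training on AND yields a function that gets all four cases right; training on XOR always leaves at least one case wrong, no matter the epoch count.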
Bass Ackwards
quadrilateral Posted Jun 21, 2001
There is another problem as well. Since you are using a random probability generator and programming that erases past mistakes, chances are it will start a vicious cycle of negative actions. This is easily remedied with a log of previous "thoughts" that it could access, but then you would have to enable it to rewrite its own programming.
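A log like the one described above could be as simple as this sketch (the class and method names are mine; the idea is only to let the system notice it keeps retrying a failed state/response pair):

```python
from collections import deque

class ThoughtLog:
    """Rolling window of (state, response, worked) triples."""
    def __init__(self, size=10):
        self.entries = deque(maxlen=size)

    def record(self, state, response, worked):
        self.entries.append((state, response, worked))

    def in_vicious_cycle(self, state, response):
        # The same pair failing three times within the window
        # suggests the system is looping on a negative action.
        failures = sum(1 for s, r, w in self.entries
                       if s == state and r == response and not w)
        return failures >= 3
```

The decision-maker could consult `in_vicious_cycle` before reusing a response, which might sidestep the need for full self-rewriting.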