A Conversation for The Turing Test
My take on AI...
Judd Started conversation Oct 16, 2000
Bear in mind that the following comments have no basis in research or anything posh like that...just what I think.
Turing's AI concept is flawed because it assumes an AI constructed by a human. In my opinion, we have yet to reach the stage of being able to produce a 'program' or 'AI' that we cannot understand. OK, we might not be able to understand how a neural net works, but the people who designed it will.
Think of it like this: ask a programmer 'How exactly does your brain work?' and the answer will be (should be!) 'I don't know'. Ask him how his code works and the answer should be (I hope!) 'Well, ...'
Producing a sentient AI will not happen through simulation or emulation, but through evolution. If we can produce a system that can design and produce its own, improved, next generation, and so on, we might get to sentient AI.
It's the only way a machine intelligence can achieve the complexity necessary to become sentient ... how else can you design something you don't understand? The nature of design is the specification of a system, and we can't specify systems beyond our understanding.
Of course, we're still left with the problem of producing a system that can replicate and improve itself, but I think that is the most promising avenue in the search for sentient AI.
My take on AI...
Martin Harper Posted Oct 27, 2000
Hi Judd.
Actually, on a basic level, we know HOW neural nets work, but we don't know exactly WHY they work. They were created by experimentation, and it turned out that they were a reasonable way of doing computer learning. An approximate theory is starting to emerge now, but it's still inaccurate, and nets remain very much a black art - practitioners have an intuitive grasp of how many neurons and layers are required for certain tasks, but are unable to justify their intuition.
The way that a trained neural net works is similarly difficult or impossible to understand. An example is a neural net that was trained to do signal processing - distinguishing between square and sine waves of differing frequencies. It was implemented in field-programmable hardware. When it had been trained to do its job, the trainers looked at the finished product and saw an awful lot of connections which didn't appear to do anything. So they removed them.
It stopped working.
It started working again when they put them back. As far as I know, we still don't know exactly how the thing works, but it seems to rely on these unconnected parts to increase capacitance and hence the delay through differing parts of the circuit. We think...
> "Ask him how his code works and the answer should be (I hope!) 'Well, ..."
Hate to dash your hopes... but no one programmer understands Windows 2000. It's a joint effort by a huge number of programmers, some of whom have since changed jobs, retired, gone senile, or died, with some code dating back a decade and more. I think this is your key mistake - most large programs are the result of thousands of man-years, not the work of a single programmer, so naturally they can be beyond the understanding of a single person with a limited memory. And hence more intelligent than a single person.
That said, I agree that evolution is a promising avenue at the moment, but I think that learning systems, and even directly programmed systems, are still possibilities, and each has differing areas where it is superior.
Hope there wasn't too much technobabble there...
My take on AI...
Richard Posted Jul 20, 2001
I'm not quite sure what you mean about nobody knowing why neural nets work. No neural net, regardless of how many layers, internal feedback, etc., is any more than a large algebraic function. Any neural net boils down to a complex equation, given that each neuron is based on a very simple algebraic expression containing only addition and multiplication.
If you remove neurons, you remove part of the equation, and hence you throw away the network's ability to handle certain circumstances for which it has been trained.
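To make the "large algebraic function" point concrete, here is a minimal sketch (my own toy illustration, with made-up weights, not anyone's actual network) of a 2-input, 2-hidden, 1-output net. Each neuron is just multiplications and additions fed through an activation function, so the whole net unrolls into one nested expression:

```python
import math

def sigmoid(x):
    # Standard logistic activation; squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tiny_net(x1, x2):
    # Each neuron is a weighted sum (only multiplication and addition)
    # passed through an activation. The weights here are arbitrary.
    h1 = sigmoid(0.5 * x1 - 0.3 * x2 + 0.1)
    h2 = sigmoid(-0.2 * x1 + 0.8 * x2 + 0.0)
    # The output neuron combines the hidden neurons the same way, so the
    # whole net is one nested algebraic expression in x1 and x2.
    return sigmoid(1.5 * h1 - 1.0 * h2 + 0.2)

y = tiny_net(1.0, 0.0)
```

Deleting a neuron (say, h2) removes its terms from that expression, which is exactly why pruning a trained net changes its behaviour on the cases it was trained for.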
Real intelligence would be a self-modifying neural net.
Anyway, I was under the impression that the scientific definition of an intelligent entity is one that is self-aware.
Cheers.
My take on AI...
furtim - Zaphodista Sympathiser Posted Aug 12, 2001
The trouble, then, is finding a way to prove self-awareness. A program could easily be written to assert its own self-awareness without actually having to BE self-aware. For example, just replace "Hello, World!" with "I think, therefore I am." I think you may get my proverbial "drift" at this point.
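The point above barely needs code, but that's the joke - asserting self-awareness takes one line. A sketch (the function name is my own invention):

```python
def claim_self_awareness():
    # The program has no model of itself whatsoever;
    # it merely emits the words.
    return "I think, therefore I am."

print(claim_self_awareness())
```

The output is indistinguishable from a sincere assertion of self-awareness, which is exactly why such assertions can't serve as proof.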
My take on AI...
Martin Harper Posted Aug 12, 2001
I missed Ricky's earlier comment, so I'll respond to that...
The first point is that just because we know that something is a large algebraic function doesn't mean that we know what it is or how it works. People want to know the answers to questions like: "Why does THIS algebraic function (neural net) map scrawled lines to numerical digits?", "How accurate is it?", "Is there a smaller function (net) which will do the same thing?"
Similarly, nobody really knows why back-propagation of errors works as a method of training nets. There are some intuitive ideas, but why should it converge at all? And how can convergence be sped up? What is the best set of examples to give? How do self-categorising networks work, exactly?
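For readers who haven't seen it, gradient-descent training (of which back-propagation is the multi-layer generalisation) looks like this. A minimal sketch, using a single sigmoid neuron learning the OR function - a toy task of my own choosing, not from the posts above. Note that the final assertion checks an empirical observation, not a theoretical guarantee, which is exactly the gap being discussed:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: learn the OR function with one neuron.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
rate = 1.0

def loss():
    # Sum of squared errors over the training set.
    return sum((sigmoid(w1 * x1 + w2 * x2 + b) - t) ** 2
               for (x1, x2), t in data)

before = loss()
for _ in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error through the sigmoid:
        # dE/dw = 2 * (y - t) * y * (1 - y) * input
        g = 2 * (y - t) * y * (1 - y)
        w1 -= rate * g * x1
        w2 -= rate * g * x2
        b -= rate * g
after = loss()

# Empirically the error shrinks - but, as the post says, there is no
# general guarantee of convergence; this is observation, not proof.
assert after < before
```

The loop works in practice for this linearly separable toy problem, but nothing in the code explains *why* the update rule should converge in general, or how fast, which is the open question raised above.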
My take on AI...
Researcher 186130 Posted Oct 17, 2001
They have designed something that learns. It is learning right now and has the intelligence of an 18-month-old baby. That's impressive, since it started only with learning algorithms. Its name is HAL and you can read about it at [URL removed by moderator]. It's some pretty crazy stuff.