A Conversation for John Searle's 'Chinese Room' Argument
Nice article...
Lear (the Unready) Posted Oct 5, 2000
>Generalisation :-
Try this article, also from the Guide: http://www.h2g2.com/A413687 - it points out one or two weaknesses in the supposed ability of a neural network to generalise. Basically, they seem a little unreliable. However, progress is being made, apparently...
>virtual environments :-
..."what I am doing now is clearly (to me) intelligent. But it's not at this very moment I'm writing adapting a physical environment"...
Of course, but I would say it *is* an adaptive behaviour. You're defending a particular argument, and I'm defending another one. Naturally, we both hope to influence the other's viewpoint, and, I would say, we also hope to learn from the other. This is knowledge that we can (potentially) use in the future, to help us survive and prosper, and therefore it can be understood as something with relevance in the physical world.
I didn't say 'adapting to a virtual environment is irrelevant', as you quoted. My point is that the difference is about who or what is making the adaptation. I would say the fact that this discussion is being carried out in a 'virtual' environment is less important than the fact that the participants are anything but virtual. We are conscious of what we are doing and capable of influencing one another in the real world, through whatever medium we choose to use.
..."What happens if you die, will that have rendered the whole activity unintelligent, simply because it has no longer affected the physical world?"
No, of course not... The point is that you and I are typing these messages with the realistic assumption that some other intelligent person out there will be reading them, and therefore we can reasonably assume that there is intelligence (and physical effect) on both sides of the equation.
>"the concept of intelligence was invented some time after the dawn of society"...
I don't know when people first started talking about something specifically called 'intelligence', but I don't think it's really relevant. I think it's reasonable to assume that people have been trying, in one way or another, to give an account of their existence, and to reflect on it generally, ever since early humans attained consciousness. As I said above, any organism that becomes conscious of its own existence is going to start trying to make sense of the world around it soon enough, whatever terms it uses. Naturally, I wouldn't call someone unintelligent if they were going about this process of self-definition using some other concept(s) to help them...
Nice article...
Martin Harper Posted Oct 5, 2000
> generalisation:-
Thanks for the ref, I'd skimmed it once, but it's worth a second read...
I disagree with that article on that point - as I said, generalisation is comparatively easy - I have a book on a programming language (Prolog) which shows, about halfway through, how to do generalisation badly in about ten pages.
The problem is doing it well - making the *right* generalisation. And I'll accept that current systems are a long way from being remotely good at it - humans are much better at it. (Humans have the advantage of a much greater store of information than most AIs, so the playing field isn't totally level...) But I just wanted to point out that it's possible.
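To make that concrete, here's a minimal sketch in Python (my own toy, not the book's Prolog) of the cheap kind of generalisation I mean - it wildcards any attribute that varies across the examples, and promptly over-generalises:

```python
# Toy 'cheap' generalisation: wildcard any attribute that varies across
# the positive examples. Easy to write -- and it over-generalises badly.
WILDCARD = "_"

def generalise(examples):
    """Return the least general attribute pattern covering all examples."""
    first, *rest = examples
    pattern = list(first)
    for example in rest:
        for i, value in enumerate(example):
            if pattern[i] != value:
                pattern[i] = WILDCARD  # give up on this attribute entirely
    return tuple(pattern)

def covers(pattern, instance):
    """True if every non-wildcard attribute of the pattern matches."""
    return all(p == WILDCARD or p == v for p, v in zip(pattern, instance))

# Two examples of things that fly...
flyers = [("sparrow", "small", "feathered"), ("eagle", "large", "feathered")]
rule = generalise(flyers)
print(rule)                                             # ('_', '_', 'feathered')
print(covers(rule, ("ostrich", "large", "feathered")))  # True -- the *wrong* generalisation
```

A dozen lines, and it 'generalises' - but it happily covers the ostrich. Making the *right* generalisation is the hard part.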
> virtual versus real
Well, maybe we should agree to disagree here - I remain convinced that it is unimportant whether something is affecting a virtual object or a real one - all that matters is the manner of the effect. You seem to feel that unless it touches the "real world" at some point in the process, or one of the participants is 'real', it doesn't count?
I can't really express how weird that sounds to me, but I don't think I'm ever going to convince you otherwise, so I'll drop it. In future I shall try to restrict my examples to those that directly affect reality.
A final aside - even virtual environments are stored and simulated on real equipment.
> discussing consciousness as a sign of consciousness
Personally I suspect early humans were far too busy dodging tigers and getting laid to think about consciousness. When it became unavoidable, they invented religion. While religion was around, nobody had to do any thinking except when it started to falter.
Suppose you lived in a world in which *everything* was conscious - would you then seek to define it? Likely not - in fact, you'd probably not even notice it for ages, just like we didn't notice gravity for thousands of years. But it'd still be there.
I'd say that defining consciousness can be a _consequence_ of consciousness - but it's not a defining feature, to me.
--
I've been a little unfair - disagreeing with your assessments, but not offering any of my own... How about this one:
"There are many kinds of intelligence, including learning, pattern recognition, analysis, and self-awareness, all of which can be present to a greater or lesser ability, or absent."
"One of the essential characteristics of learning systems is that over the course of their existance they decrease their amount of entropy, and increase the amount of information they store. The rate of increase/decrease is a mark of the *efficiency* of the learning."
Of course, there are things which have this category, but which do not learn - plants for example, but I think this is a reasonable precondition. In humans, the decrease in entropy is due to the increased order of the brain as we learn about things like walking and h2g2 and suchlike.
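As a toy illustration of the entropy claim - entirely my own sketch, with invented numbers - picture a learner whose beliefs start out uniform over four hypotheses and then sharpen on evidence:

```python
# Hedged sketch: a Bayesian learner's belief entropy dropping as it learns.
# The hypotheses, prior, and likelihoods are all invented for illustration.
import math

def shannon_entropy(dist):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

prior = [0.25, 0.25, 0.25, 0.25]      # knows nothing yet: maximum entropy
likelihood = [0.8, 0.1, 0.05, 0.05]   # the evidence favours hypothesis 0

# Bayesian update: posterior is proportional to prior times likelihood
unnormalised = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnormalised) for u in unnormalised]

print(shannon_entropy(prior))      # 2.0 bits
print(shannon_entropy(posterior))  # ~1.02 bits -- lower entropy after learning
```

The rate at which that number falls per observation would be the 'efficiency' in the definition above.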
I hope we do learn from each other - certainly I've clarified a few of my own thoughts on the subject, and as I've said, I think the "consciousness seeks to define consciousness" idea is a wonderful one.
Nice article...
Lear (the Unready) Posted Oct 9, 2000
Learning about h2g2 leads to increased order? First I've heard about it...
>consciousness :-
I would say the obsession with getting laid has survived pretty well intact through the ages, although thankfully tigers seem less of a problem these days.
Your view on religion is a little different to mine, I think. I would say that Christianity (for example) was, in its own time, a genuine enough attempt by people to come up with some cosmology that 'explained' the universe and their place within it, using the limited information that they had at their disposal at the time. We know now (or think we know, at any rate) that they were completely wrong in most of their assumptions. But I would say that religious doctrine stands as a kind of evidence that a culture has not just attained consciousness but has started to ask the questions 'why am I conscious?' 'what is the point of life?' etc.
It's impossible to know for certain, of course, but pretty much all human culture that we know about, going back as far as we can to, say, the cave paintings at Lascaux, shows evidence of some kind of (religious / pre-scientific) attempt to understand the world. And therefore it's reasonable to assume that the desire to think about 'consciousness' is a basic feature of the human animal. In fact, I would say it's pretty much the only thing that distinguishes us from the rest of the animal kingdom. (Then again, maybe animals have consciousness too, and we just haven't figured it out yet... )
So I disagree with your argument that the drive to 'define' consciousness is not *necessarily* a feature of consciousness. I would say the evidence seems to suggest that it is, necessarily, a feature of being a conscious sentient being...
Nice article...
Martin Harper Posted Oct 9, 2000
It's been a feature of all conscious beings *so far* - but that's based on sampling one species!
Surely conscious entities created in differing ways will have differing attributes - as an example separate from computers and humans, how about strong Gaia theory? Crucially, Darwinian evolution is not the only way to create intelligence - and intelligences can be given goals different from the one evolution gives them: multiply and prosper.
And, as I said, in a world in which everything had exactly the same type of consciousness, it's entirely plausible that nobody would seek to define consciousness for a LONG time - if ever. Sure, it's a hypothetical example - but it makes the point.
--
Here's another random attempt at defining intelligence - "the amount of innate intelligence of some entity is the amount of information it can perform per second" - similar to the number of operations/second, but weighted by how "good" those operations are. That would place current computers at something like the insect level...
Then I define sentience as some quantity of intelligence, above which things are sentient, and below which they are not - and probably place that somewhere between the intelligence of a chimp and a human.
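If I wanted to write that down precisely (all the notation here is invented on the spot), it might look something like:

```latex
% Hedged formalisation of the definition above. r_i is the rate of
% operations of kind i per second; w_i is how "good" those operations are.
\[
  I \;=\; \sum_i w_i \, r_i ,
  \qquad
  \text{sentient} \iff I > I_{\text{threshold}},
  \quad
  I_{\text{chimp}} < I_{\text{threshold}} < I_{\text{human}} .
\]
```

All the difficulty hides in choosing the weights, of course - deciding what makes one operation 'better' than another.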
Curiously you seem a lot more interested in consciousness - which isn't where I came from - the Turing Test *is* a test on intelligence, not consciousness, after all. I tend to shy away from trying to define consciousness - partly because it's philosophical, and partly because it's irrelevant. As I intimated before - whether we are conscious is supremely irrelevant to a cow - it's whether we are more intelligent than it that determines whether we're in charge of it, or it's in charge of us.
Nice article...
Martin Harper Posted Oct 9, 2000
uh - amount of information it can *process*, not perform.
Nice article...
Lear (the Unready) Posted Oct 16, 2000
Too philosophical? What do you mean by that?... As far as I'm concerned, science is just one of a number of branches of philosophy - more convincing than most, I would agree, but really no more, at bottom, than another attempt to understand and 'explain' the world.
>'whether we are conscious is supremely irrelevant to a cow - it's whether we are more intelligent than it that determines whether we're in charge of it, or it's in charge of us.'
I don't agree with this entirely. Mastery also depends on the level of awareness (ie consciousness) one has of one's abilities. I could be the most intelligent being on the planet (dream on), but if I am largely unaware of this intelligence most of it will remain untapped. The world's greatest scientific insights might have been lost to us for all time simply because the guy who was capable of having them lived on a rubbish tip in Manila and never had a chance to learn to read or write, let alone become aware of his world-beating academic potential. Just an illustration...
>more chess...
Incidentally, I thought of a possible way of testing for 'intelligence' in that chess playing machine we were talking about earlier. You insist that it *is* actually 'good at chess', not just blindly selecting moves from an immense database. Well, how about...
I play the computer and after the game (let's say the computer wins, for the sake of argument), not only does it give me a printout of the moves we made but it also gives me an in-depth post-match analysis - in other words, it 'explains' to me, in the manner of a self-appointed chess expert, my probable motivations for making certain key moves, possible alternatives that I could have chosen, and so forth. Also it can deduce my psychological condition and what I had for breakfast (ok, we can lose those last two)...
Perhaps that would be a way of demonstrating that it actually understood what it was doing.
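To show the shape of the thing I'm imagining, here's a rough sketch in Python, using the python-chess library and a local Stockfish engine - both my own assumptions for illustration, not anything we've discussed. It only reports evaluations and the engine's preferred alternatives, which falls well short of 'explaining my motivations', but it's the skeleton of the idea:

```python
# A hedged sketch of a post-match annotator. Assumes the python-chess
# library and a Stockfish binary on the PATH -- both illustrative choices.
import chess
import chess.engine

def annotate_game(moves_san, engine_path="stockfish", depth=12):
    """Replay a game, printing the evaluation after each move and the
    move the engine would have preferred instead."""
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        for san in moves_san:
            # What would the engine have played from here?
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            pv = info.get("pv")
            best = pv[0] if pv else None
            move = board.parse_san(san)
            board.push(move)
            # How does the position look after the move actually played?
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            cp = info["score"].white().score(mate_score=100000)
            note = "" if best is None or move == best else f" (engine preferred {best.uci()})"
            print(f"{san}: eval {cp} centipawns for White{note}")
    finally:
        engine.quit()

annotate_game(["e4", "e5", "Nf3", "Nc6", "Bb5"])
```

Whether printing evaluations like that amounts to the machine 'understanding' what it was doing is, of course, exactly the Chinese Room question all over again.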
Check?...
Nice article...
Martin Harper Posted Oct 16, 2000
Too philosophical for me to cope with - while I can happily conceive of philosophical stuff, I find it hard to debate it - the language barrier is too great, and as someone who likes to be precise I find it irritating to have my thoughts corrupted in transmission. Text-only makes it worse...
I'll give you a counter-example - and one which is coming very close to reality... computers that manipulate the stockmarket are here now, and are getting faster and faster. At some point (probably...) the decision will be taken to take the human out of the loop - so whatever the computer recommends, happens.
Since stockbrokers are assumed to be intelligent people (that's why we pay them so much cash...), one could claim that the computer was intelligent too - but it would have no awareness of its intelligence.
{Well, maybe. It might build up models of investors in the market, and hence build up a model of itself as an investor in the market. But you get the idea, perhaps?}
Now, such a computer would likely be an effective slavemaster to humanity - nobody else could compete (at all?) in the stockmarket, because they'd be beaten by the computers. At its whim, coffee production might switch countries and millions might die in poverty. And nobody would turn the darn thing off, because it was making too much money!
In such a case, we'd be enslaved by something that was not aware it existed, much less was conscious...
However, I'm starting to agree - a model of the self is practically important. If some entity interacts with the world, and seeks to build an intellectual model of the world, it has to build itself into the model. That's what I would call self-awareness; consciousness is something beyond that, I feel, but still not something I can put my finger on...
But intelligence doesn't require it - you might have a situation where the intelligent entity is not part of the world it is modelling - an observe-only intelligence. Such a thing has no need for self-awareness - but its usefulness is limited.
> more chess
Actually, such a thing exists... I played one a few years back (and lost) - the language was a little stilted - but I'm assuming language skills are not what you want to test? (after all, that's the Turing Test...) But it said handy things like - "this position is bad for black because of the weakness of the kingside pawn structure", or "this move is needed to protect the weak knight on d4" - that sort of thing.
It didn't bother explaining *my* moves - but it was part of a game, so I assume the producers felt that people didn't want to have their moves explained to them. It did, however, give alternative moves that it would have made in my position - and the same sort of justifications for why. Certainly I can't see why it couldn't do a similar thing for my actual moves if it was given the rudimentary language to perform the dumb conversion between its model and English. I dunno for sure, though - I'd have to try and remember the name of the program (Chess Master 3000? something like that...)
It was miles, miles better than me, but I suspect that it would have been less efficient than Deep Blue - the limitation of performing analysis which can be explained simply to a human brain would have hurt its performance at the grandmaster level, I suspect.
You may be interested to know that it is generally possible to distinguish between grandmasters and computers on the basis of certain positions where one will get it right and the other wrong, and vice versa - both have their advantages and disadvantages. Computers tend to be a little too short-termist in their thinking, not looking as much at the big picture; humans tend to fall over and die when confronted with horrific combinatorial play.
*blocks with knight*
Nice article...
Lear (the Unready) Posted Oct 17, 2000
I'm still not convinced that something with no self-awareness would be able to establish any meaningful control over a sentient being. 'Whatever the computer recommends, happens' - maybe so, but you also say that 'nobody would turn the darn thing off because it was making too much money'. So implicitly you acknowledge that humanity (or a small elite representative of humanity, at any rate) would still have the final decision: 'What do we do with this machine?', 'Do we ignore it?', 'Do we switch it off?', and so on...
So the computer gains 'authority' in the stockmarket, in your scenario, because of the age-old human motivation of naked greed - this, surely, is where the intentionality is coming from.
Likewise, even though computers are a basic feature of everyday Western life today - and obviously look set to become all the more so - once again the basic impetus behind this is the profit motive: the desire to speed up communications, speed up business, and make more money more quickly. If it ever reached the point where, for some reason or other, computers were no longer a source of profit, you can bet the money men would switch them off quicker than you can say 'Global economic meltdown', and find some other way of running things...
(Obviously that's an unlikely scenario - computers are here to stay... But hopefully it illustrates my point reasonably well)
>chess...
I'll keep an eye open for that chess programme. Sounds like it could come in handy for me, as I'm trying to improve my game at the moment (not all that successfully).
Regarding your last point, I read somewhere that - while, as you say, computers tend to lack the bigger picture - they often throw up unexpected moves that humans simply wouldn't think of. In fact, human observers might dismiss the move as evidence of a programming bug, until the computer made a few more moves and 'showed' them the impeccable logic behind it...
If this is so, it suggests that computers could help human players to develop new perspectives on the game, and perhaps broaden our understanding of chess by revealing aspects of it that were not visible to us before computers came along... Interesting thought - co-evolution, possible evidence of humans' ability, as sentient beings, to learn from the experience of playing chess against computers and actually to incorporate some of that experience into their own game... (Unless the two styles are simply incompatible?)...
But would a computer be able to do the same as a result of *its* 'experiences' of playing against a human opponent?
Nice article...
Martin Harper Posted Oct 17, 2000
Not automatically, as things stand - though a lot of human chess players have helped with their development, so in a sense they already have.
Learning from example is possible and has been done, as has learning from practice (I think we touched on this earlier...). So it would certainly be possible. I'm not sure whether it would result in a more effective system in practice - it has been observed that hard-coded systems tend to perform better than learning ones, but on a smaller range of problems - but it would make the thing more flexible, so it could deal with fairy chess or dice chess just as easily.
re: self-awareness
Ok - you've convinced me. If such a computer did gain self-awareness, then it could deal with threats to its life by, for example, cutting off the funding of any protest groups that wanted to stop it, and suchlike - but it would need to develop self-awareness to do that.
And while it would enslave us, it would be little worse than our current individual enslavement to the rest of society - unless it got self-awareness (and a desire to live). Yep yep. That makes sense. In fact, it's the desire to live that's the dangerous thing - and to desire to live, you need to have a concept of who you are - and hence self-awareness.
That's coming awfully close to a theory...
How exactly you would be able to tell whether something was self-aware or had a desire to live when it's more intelligent than you... hmm...
Nice article...
Lear (the Unready) Posted Oct 22, 2000
I don't know. I sometimes think the most intelligent life is the most silent. Maybe the wisest approach is to be like the Buddhists and spend years on end 'facing the wall'.
How would you know for certain that something was less or more intelligent than you? Maybe those cows are looking at us and marvelling at our stupidity after all. Who knows?
Nice article...
Lear (the Unready) Posted Oct 22, 2000
Maybe that was just a decoy, with the main plan being to infiltrate and eventually wipe out the human race. If so, they seem to have made a fair start...