A Conversation for Writing Right with Dmitri: Thinking About Thinking
Built to Last ...
Willem Posted May 9, 2017
Hi Chris! Well, the way I envisage it, an artificial intelligence, to qualify as a mind, needs first of all a specific focus point of awareness, where all sensory input would go. If we're speaking of an artificial mind based on computer chips, then a lot of different kinds of input can be sent to this 'centre of awareness'. The artificial mind can be given a virtual body as well as a virtual world ... we're not at all far from being able to do that. I personally do think that an artificial intelligence would need a body or body equivalent. Such a mind can easily be linked up with real-world stuff like robotic bits that can perform various tasks, cameras and microphones, so it can do things and make observations of the real world. As such I'm sure it could get a solid sense of itself. If it was close to a human mind in sophistication, then it could manage quite a complex assortment of peripherals. (Personally I would not give it too much outside capability, for the sake of safety ... it can have many 'virtual' abilities in addition to a virtual body and world.) It would also have relationships, primarily with its own creators, and might very well be speaking to lots of people - I am sure it can get a sense of community from that. As for history, it is going to *be* history! It might very well be much impressed with its own significance. Of course it could be given a proper sense of humility!
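To make the picture above a little more concrete, here is a minimal, purely illustrative sketch of such a 'centre of awareness' to which every sensory channel reports. All class and channel names are hypothetical, invented only for this example; nothing here corresponds to any real AI system.

```python
# A minimal, purely illustrative sketch (all names hypothetical) of the
# architecture described above: one 'centre of awareness' to which every
# sensory channel - virtual or real-world peripheral - reports.

class AwarenessCentre:
    """Hypothetical focal point that gathers and integrates all sensory input."""

    def __init__(self):
        self.channels = {}        # channel name -> callable returning the latest reading
        self.current_state = {}   # the most recently integrated 'moment'

    def attach(self, name, sensor):
        """Register a peripheral: a camera, a microphone, or a virtual-body sense."""
        self.channels[name] = sensor

    def integrate(self):
        """Pull one reading from every channel into a single unified state."""
        self.current_state = {name: read() for name, read in self.channels.items()}
        return self.current_state


# Example wiring: a virtual-body sense plus a stubbed real-world camera.
centre = AwarenessCentre()
centre.attach("virtual_body", lambda: {"posture": "sitting", "contact": "virtual floor"})
centre.attach("camera", lambda: {"scene": "desk, window, daylight"})
print(centre.integrate())
```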
Built to Last ...
Chris Morris Posted May 9, 2017
Willem, that's an interesting vision but, surely, it takes us back to the beginning of this conversation; if it's that straightforward, why has so little progress been made in the last 60 years, and why do humans have such a problem with being self-aware?
http://www.wired.com/2014/10/future-of-artificial-intelligence/ gives a different view
Built to Last ...
Elektragheorgheni -Please read 'The Post' Posted May 10, 2017
Willem, where would it get this 'humility'? It is rare in the human species, unless you are lucky enough to enter a Total Perspective Vortex as imagined by Douglas Adams. It would be a hard algorithm to come by. Too few examples in history. Humans are most likely to bite off more than they can chew - just look at this place.
Built to Last ...
Willem Posted May 12, 2017
Hi Chris! That article is not about artificial intelligence as I see it - it is about artificial cleverness, which they themselves admit towards the end. They even say there that such clever programs would want to *avoid* being aware - awareness being 'messy'. That may indeed be so with programs intended to be *useful*, which is what that article is about. Indeed such 'intelligent' programs can be very useful and supply humans with valuable 'expert' information ... but they're not minds as I use the term. To be a mind means to be aware ... not necessarily 'self-aware', just aware. And in spite of what that article suggests, this is *not* likely to be an accidental 'bug' of a clever program. We will need to understand much, much better what awareness is, and on what kind of physical substrate or phenomenon it is based. And that is where my own interest in AI lies - in what I see as real AI, where there's actually a real mind, so it's perhaps even wrong to call it artificial. But I'm interested to know if a mind could be based on something different from the kind of biological cellular machinery we possess. Thus: can mind exist in more than one medium? Can it exist in an electronic medium, not just in biological cells? I'm philosophically interested in *that* question ... and also I feel there's immense value for us in pursuing that question, because in the process we will learn what our own minds are. And other minds also ... we'll have a better understanding of mind all round, and may predict what other kinds of minds might exist. So far, clever programs don't really teach us anything about that.
Also ... I do think that artificial minds can be produced (and I just want to make clear that I'm saying this in the optimistic spirit that, before we've really tried doing something, we shouldn't conclude that it is impossible). But I am very sure that we are very far from being able to do that in practice ... never mind that putting it in a sentence or two makes it seem very straightforward. The fact is that we're stumbling on the very first step, namely having a subjectively aware kind of 'focus' to which sensory input can be sent and which can then endeavor to make sense of it with a view towards arriving at an understanding. All the intelligent programs and things we have thus far don't yet show the slightest sign of being awareness-based ... we simply have algorithms that can search through huge databases, and we have self-modifying programs that can thus 'learn', and we have parallel processing going on to break a task up into bits that can be simultaneously performed by many different processing units ... but all of this is still just computing, same as the very first electronic calculating machines did. We're not at the level of mind yet.
But like I said, I'm here interested in the issue of mind-ness ... not so much cleverness, and indeed, there's more money to be made from cleverness ... but mind-ness is for me where the really important insights are needed.
Built to Last ...
Willem Posted May 12, 2017
Hi Elektra! Well, we can't create humans the way we'd like them to be, we're stuck with what we get. Actually I don't think humans per se are that arrogant, but those humans who are in power mainly are. Maybe we can put a perspective vortex into a guiding conceptual framework for an AI. Why not? Seriously ... I am sure that in getting to the point of making an artificial mind, we will be able to define 'parameters of personality' including such factors as humility and many others. Going by my previous posting ... in doing so, we're going to learn *a lot* about what makes our own minds tick.
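As a rough illustration of what 'parameters of personality' might look like in practice, here is a hedged sketch. The trait names, values and clamping rule are all invented for the example and are not taken from any existing system.

```python
# A hedged, invented sketch of 'parameters of personality' for an artificial mind.
# Trait names and ranges are hypothetical; the point is only that such factors
# could be made explicit and tunable by the mind's creators.

from dataclasses import dataclass

@dataclass
class PersonalityParameters:
    humility: float = 0.8   # 0.0 = wholly self-impressed, 1.0 = Total Perspective Vortex survivor
    curiosity: float = 0.9
    caution: float = 0.7    # limits 'outside capability', echoing the safety concern above

    def clamp(self):
        """Keep every trait within [0.0, 1.0] so no single disposition runs away."""
        for trait in ("humility", "curiosity", "caution"):
            setattr(self, trait, min(1.0, max(0.0, getattr(self, trait))))


params = PersonalityParameters(humility=1.2)   # over-specified humility...
params.clamp()                                 # ...brought back into range
print(params)
```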
Built to Last ...
Chris Morris Posted May 12, 2017
Willem, thanks for the comprehensive reply. The article is, indeed, about artificial cleverness rather than artificial intelligence; that's actually the point I was making. What you seem to be suggesting is that we are simply machines with added information, that it doesn't matter what form the information takes. What I've been saying in all of these posts is that mind is more than just information, it's grown out of the physical world through evolution and trying to translate it into some different form makes for something completely different or perhaps nothing at all.
Built to Last ...
Psiomniac Posted May 14, 2017
Chris,
Sorry it has taken a while to respond to your #75, I'll give that a go now. I'm going to try to deal with your points in turn, and then if I can improvise anything as a conclusion, I will!
Your objection to Musk's answer seems twofold: if I have this right, broadly you reject his reliance on computational metaphors (input/output) as symptomatic of his reductionist/materialist view of consciousness. The other objection seems political, in that his statement about us all being cyborgs ignores the situation of a large proportion of the world's population. The second point I think is true but given the context, understandable in my view. Your first objection is at the heart of the thread and relates to the question Willem just raised about whether we could make self-aware AI one day.
This is philosophically interesting because the question of whether consciousness can be instantiated via a non-biological substrate depends on whether consciousness can be regarded as a kind of information processing or not (cf. Dennett vs Searle and the Chinese Room Argument). I think I understand your stance to an extent, but saying you don't think AI could think of something like the NHS seems to beg the question.
The point you make about Musk's view being based on a kind of Cartesian Dualism seems unwarranted to me; for example, I can find nothing incompatible between Musk's answers and Dennett's position, and of course Dennett explicitly rejects CD. I agree with what I interpret as your assessment that foundationalist projects have failed, but unless we cling to some transcendental arguments we are all in the same boat there, aren't we? You might adhere to the most up-to-date embodied cognition paradigm, but if you want a rigorous derivation of your worldview in contradistinction to more information processing paradigms, how will you do that?
Your solution seems to be to accept the impossibility of this, which is consistent, but I wonder whether you might fall into an Argument From Adverse Consequences as a result? In other words, as you spelled out, you want it to be true that there is an unresolvable contradiction in trying to describe the nature of awareness as we describe other phenomena. You want Chalmers' Hard Problem to turn out to be insoluble. I don't know whether it is or not, but I'm betting neither do you.
Your points about moral relativism are interesting, although I wonder whether people make the elision between indexical relativism and nihilism too easily. There is an analogous mistake often made with Kuhn's paradigm shift and the notion of incommensurability. It does not follow that we have no good reasons to prefer one scientific model over another. Similarly, it does not follow that since moral systems are situated in cultures and practices, there is no basis for preferring ours over North Korea's. MacIntyre makes the point that incommensurability problems occur only when you try to abstract morality from its particular cultural and social standpoint. But if you reject foundationalism, why would you do that? "What rendered Newtonian physics rationally superior to its Galilean and Aristotelian predecessors and to its Cartesian rivals was that it was able to transcend their limitations by solving problems in areas in which those predecessors and rivals could by their own standards of scientific progress make no progress." (MacIntyre, 1981, p. 268)
In my view, the answer to moral relativism is twofold. 1) Realise that given moral codes are complex, contingent and context-dependent ways in which cultures attempt to express and promote their values. 2) Abandon attempts to find a rational foundation for values and pick a side. You could pick the side of empowerment and general human flourishing, or you could pick suppression of the many by the few. In this I agree with Hume, reason is the slave of the passions, you have to care about people and their wellbeing to choose the first option, but nobody has convincingly derived an 'ought' from an 'is' as far as I am aware (c.f. Pat Churchland). There is no rational basis for compassion (appeals to evolutionary theories of altruism make the same error of violating is/ought).
The last part of your argument was interesting, anomalous monism is a hoot isn't it? It seems to me though, that Cartesian Dualism still holds sway in your view, you even make the same kind of move, substituting the claustrum for the pineal gland. But if Libet's experiments tell us anything (see also Bear, 2016; Matsuhashi, 2008; Wegner, 2004), it is that we have reasons to suppose that the 'Cartesian Theatre' is a user-illusion.
You want free will to be the Real Thing, but in my view Cartesian Dualist views face the interaction problem and quantum swerve theories can't solve the problem of substituting randomness for freedom. Which leaves us with compatibilist arguments, which won't preserve the Real Thing. I agree with Dennett: it is better to consider varieties of free will worth wanting, and how these might be compatible with determinism, rather than offering an Argument From Adverse Consequences. We give up Free Will with capitals and settle for free will (or maybe free won't).
Bear, A., & Bloom, P. (2016). A Simple Task Uncovers a Postdictive Illusion of Choice. Psychological Science, 27(6), 914-922.
MacIntyre, A. (1981). After virtue. London: Duckworth.
Matsuhashi, M., & Hallett, M. (2008). The timing of the conscious intention to move. European Journal of Neuroscience, 28(11), 2344-2351.
Wegner, D. M. (2004). Précis of the illusion of conscious will. Behavioral and Brain Sciences, 27(05), 649-659.
Built to Last ...
Dmitri Gheorgheni, Post Editor Posted May 14, 2017
'Reality is that which, when you stop believing in it, doesn't go away.' --Philip K Dick, 'How to Build a Universe That Doesn't Fall Apart Two Days Later', 1978.
Built to Last ...
Psiomniac Posted May 14, 2017
There's also a phrase attributed to Dallas Willard: "Reality is what you bump into when you are wrong."
Built to Last ...
Chris Morris Posted May 14, 2017
Psi, thanks for taking the time to make sense of post 75 - probably more sense than I did when I wrote it - apart from a couple of minor points where I've not made myself clear.
Paragraph 1:
Your assessment of my intentions is broadly correct here, although I don’t distinguish strongly between the political and the philosophical; the idea of people being nothing but autonomous individuals inevitably has political consequences. I would be interested to know why you find his lack of recognition of political realities understandable in the context of that debate; I generally think it’s a good thing that anyone in a position of power such as his should have his views challenged at all times. I’ll let Willem reiterate his views on the difference between aware and self-aware.
Paragraph 2:
My mentioning the NHS was a way of illustrating what I would regard as the mistaken idea that AGI could be developed that has an individual identity (that would necessarily entail political awareness) rather than what Willem and I have agreed to call artificial cleverness.
Paragraph 3:
I may have misinterpreted his words as it is not clear what he means by output but, as he mentions a direct neural interface, I have assumed he regards output as something separate from the person that can be transferred directly to a machine. This, to me, suggests some form of dualism of an autonomous individual cut off from an alien "Outside." As for clinging to some transcendental arguments, I think we are all in the same boat if by that you mean struggling to make sense of our reality. You and I (and everyone else) each have a world in which causal processes, for instance hydrogen being converted into helium through nuclear fusion caused by the force of gravity and giving off heat and light, are completely predictable and always have been, and your world and mine have lots of other things in common, presumably, including a common language and an interest in philosophy. We don't need a transcendental world to explain these things; that was a useful metaphor to help us imagine being outside ourselves, in the same way that the unconscious was a useful metaphor for Freud. I love the phrase "embodied cognition paradigm" - that's exactly what I think I am - and, yes, I would like a rigorous derivation of that to differentiate it from reductionist views; it's what I'm trying to do in this conversation, but it may well be impossibly difficult. Having spent about 50 years thinking about it, I will probably die still trying to explain it and having enjoyed every minute of it.
Paragraph 4:
Absolutely correct!
Paragraph 5:
Relativism is a fascinating debate but it only has a slight relevance to this conversation so I won’t spend too much space on it. Suffice it to say I would entirely agree with the views you express in Paragraph 6.
Paragraph 7:
Anomalous Monism looks very interesting, when I get time I’ll have a proper read of Davidson. The Claustrum and Bereitschaftspotential were included as possible proofs that my views are wrong; I’ve tried at all times to adhere to the spirit of Willem’s idea for this conversation which is one of honest investigation. Until something better comes along I’m happy to go along with Dennett and, talking of Dennett, thank you Dmitri for that deepity!
Built to Last ...
Psiomniac Posted May 15, 2017
Chris,
I'll try to clarify the points you raise on each paragraph.
Paragraph 1:
I didn't interpret Musk's position as being that we are nothing but autonomous individuals, quite the opposite. He seems to be talking about ways to alleviate bottlenecks in communication, that is connecting people via technology to further decrease the island status of individuals. His point about cyborgs is that modern humans in the developed world (the 'we' in the context in which he was speaking) already have technological prosthetics.
Paragraph 2:
This still begs the question, since if it is possible to develop self-aware AI then it might well be able to think of something like the NHS. I'm skeptical that having an individual identity entails political awareness though, given some people I know!
Paragraph 3:
I'm puzzled as to why you think this entails dualism. Musk mentions 'meatsticks' which is us typing this to each other. (Unless you are using voice recognition!) So imagine our speech centres, which formulate the language we will use to express ideas, were wired up to an interface that would allow our ideas to be interchanged much faster than this. Why would that suggest dualism any more than what we are doing now? You are reading my output.
As far as transcendental arguments go, I am using the term in the Kantian transcendental idealist sense, but yes we are in the same boat, bailing like billyo! We cannot have a rational foundation for believing in our shared reality, we just have to get on with it.
On embodied cognition, there are lots of psychologists working on it. Here is a reasonable definition:
"The embodied paradigm, originally developed in the fields of
theoretical biology (Varela, Thompson, & Rosch, 1991) and lin-
guistics (Lakoff & Johnson, 1999), holds that human cognition,
even in its highest level inferential processes, is rooted in senso-
rimotor processes that link the agent to the world in which she is
embedded." (Sciavio, Menin & Matyja, 2014).
Paragraph 7:
I think it is good that you present possible objections to your view. The Bereitschaftspotential might undermine your idea of Free Will, but I am not sure why the Claustrum would, unless you are saying that it seems to offer a mechanistic account of awareness. But in that case, it would only do so if you were already under the sway of the Cartesian Theatre idea. If we reject that, then all of the workings of the brain and body could still account for awareness without it all having to 'come together' somewhere.
Built to Last ...
Dmitri Gheorgheni, Post Editor Posted May 15, 2017
I have read and thought, and decided that Bereitschaftspotential is merely a demonstration of the mind's ability to act backwards in time. I have a private term for this kind of action, which I call 'back-vectoring', since a decision made at Time B determines the position of a state of affairs at Time A. It's kind of like the way bad writers design a tv series: once the character is established, he suddenly develops a backstory explaining how he learned Chinese, acquired agoraphobia, etc, etc. Previously, I'd only had anecdotal evidence for this from such unreliable sources as Aleister Crowley, who claims to have done this to a watch thief by 'magick'. The Bereitschaftspotential experiments might actually be empirical proof that back-vectoring is not only possible, but an everyday occurrence.
(The fact that I watched the film 'Deja Vu' last night had nothing to do with this theory. Nothing at all...)
Ignore this muttering from the lunatic fringe, and continue with this highly interesting and rigorous discussion.
Built to Last ...
Psiomniac Posted May 16, 2017
Dmitri,
I'm getting the impression you aren't being entirely serious...
Built to Last ...
Dmitri Gheorgheni, Post Editor Posted May 16, 2017
I may sound flippant, but actually, I'm not sure that's not a viable theory. I suspect we may miss possible explanations because we are locked into our view of temporal linearity.
But I doubted anyone else would take it seriously, and I didn't want to interrupt the flow.
Built to Last ...
Psiomniac Posted May 16, 2017
Dmitri,
In that case, let's talk about it seriously for a minute. Backwards causation was at one point suggested by the physicist Paul Davies to solve the puzzle of the strong anthropic principle, while varieties of backwards projection (or retrospective construction) have been offered to solve the problems posed by Libet's experiments and the Bereitschaftspotential. So you're not alone in considering these things.
However, I don't think you are justified in invoking the Bereitschaftspotential as evidence for back-vectoring. There are many cognitive processes happening with different latencies, which are then bound into a subjective present lasting 2-3 seconds. Thus there is no guarantee that the timeline of subjective experience matches the order in which these processes occur.
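To illustrate that binding point with a toy calculation (the event names and millisecond figures below are invented for the example, not Libet's actual data): when processes with different latencies are bound into one subjective present, the order in which they are experienced need not match the order in which they occurred.

```python
# Toy illustration with invented numbers: the order in which cognitive events
# finish processing (and so become available to awareness) can differ from
# the order in which they actually occurred.

events = [
    # (label, onset time in ms, processing latency in ms)
    ("readiness potential", 0, 350),
    ("conscious intention", 200, 80),
    ("finger movement", 400, 20),
]

by_occurrence = sorted(events, key=lambda e: e[1])          # physical order
by_completion = sorted(events, key=lambda e: e[1] + e[2])   # order available to awareness

print([label for label, _, _ in by_occurrence])
# ['readiness potential', 'conscious intention', 'finger movement']
print([label for label, _, _ in by_completion])
# ['conscious intention', 'readiness potential', 'finger movement']
```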
In summary, there are reasonable models which explain these phenomena much more parsimoniously than considering something like back-vectoring. That doesn't mean it isn't true of course, we might live in an epiontic universe not bound by the asymmetry of time's arrow. But the issue is: how would we know that?
Built to Last ...
Chris Morris Posted May 16, 2017
Psi, thanks for your points. I’ve read the transcript of Musk’s answer many times now and I have to disagree with your interpretation; the fact that he talks about restricted output tells me that he is seeing humans as isolated individuals who can only communicate with other individuals through a linear stream of information, presumably in a form accessible to AI (unless what he means is simply some mechanical form of amplification which seems unlikely). As for the cyborgs, even if you are correct and he is restricting his reference to us lucky few with sufficient money and infrastructure to have access to this technology, I still find it extremely alarming that it didn’t even occur to someone in his position to acknowledge the unlucky many who don’t.
I’ll concede the point that I really have no grounds for denying the possibility of AGI developing political ideas, similarly neither have I grounds for denying the existence of God but both of them seem equally unlikely to me. And, yes, we probably all know people who make us question the very existence of a human race but I think we can all agree that Margaret Thatcher’s idea that there is no such thing as society is as comical as her view that no Prime Minister should be without a Willy.
Paragraph 3: Yes, it is a very puzzling view and one that, I think, few people really get philosophically (a lot of people get it as a matter of faith but that’s too easy and not very useful). So here I am sat at my computer, listening to Julian Bream playing Albeniz on my headphones, still tasting the cheese I’ve just eaten and typing away with two of my meatsticks so slowly that it generally takes me several days to compose these replies. I’ve just explained in some detail all of the points I’m making here and the reasoning behind them to my wife which only took about ten minutes and now I’m sitting here trying to imagine that our conversation is information being transferred from one speech centre to another but that isn’t how I see it. What I see is a vast shared space in which there are words that can suggest a variety of ideas and we are pointing out some of the words and hoping, with a bit of empathy, we can share some of the meaning. Any sort of interface is really just an “other” that takes our words away from us and slows down the generation of ideas.
Now, the definition of embodied cognition you’ve found is so perfect for the views I want to express that I might devote a whole reply just on that but, if the sun’s shining tomorrow we’ll probably have a day at the seaside (being retired is wonderful!) so it might be a few days before I can do that. In the meantime, I will say that this definition precisely and literally extracts the urine from the idea of embodied cognition and because we occupy a shared space you will know the words I am not using and will understand both meanings of the phrase.
Built to Last ...
Chris Morris Posted May 16, 2017
Ah, I wanted to add something about Dmitri's time theory. I was actually reading an article about a physicist, whose name escapes me at the moment, in which he concluded that the real solution to the difficulties with membrane theory was that it required two dimensions of time. Your post popped up as I was reading it. You missed your vocation...
Built to Last ...
Dmitri Gheorgheni, Post Editor Posted May 16, 2017
We may never know which way - or how many ways - time goes, but I'm tickled to find out I'm not alone in my speculations.
Talk on, you experts.
Built to Last ...
Psiomniac Posted May 18, 2017
Chris, #97
I think sometimes the most interesting parts of the dialogue happen when we are both looking at ostensibly the same thing but can't agree. On Musk, I suppose he could have said something like 'We (in the developed economies with access to computers) are already cyborgs...' but off the cuff my intuition is that he was speaking in a context that made such a qualification superfluous. Perhaps it is unlikely we will change our opposed intuitions on that, so we should agree to disagree.
I am still puzzled as to why you think his point about output implies that humans are isolated individuals who can only communicate with others through a linear stream of information. Do you think you and I are isolated individuals? Presumably not, yet here we are communicating via typing! Rather, I think Musk was making the point that we could increase the bandwidth of communication with a 'digital tertiary layer', namely AI.
So AI thinking of an NHS equivalent seems unlikely to you. You draw a parallel with grounds for denying the existence of god, well it seems to me that you have pretty good grounds for rejecting the assertion that a specified god exists, which is precisely why you think said existence is unlikely. The question is, do you similarly have good grounds for supposing AI could never think of an NHS?
On paragraph 3, I think I have a reasonable grasp of Cartesian Dualism but I am still unsure as to why anything in your account of Julian Bream and cheese suggests Musk's view implies anything of the sort, or why any of that rich phenomenology of experience would be negated by having a faster interface than a computer keyboard? Could you clarify?
I look forward to your thoughts on embodied cognition. Yes a shared space, the background as some philosophers have called it, allows me to infer your meaning. AI might be able to do that also someday.