Re: How would a computer know if it were conscious?

From: David Nyman <david.nyman.domain.name.hidden>
Date: Sun, 24 Jun 2007 23:54:27 +0100

On 24/06/07, Brent Meeker <meekerdb.domain.name.hidden> wrote:

BM: I think I agree with your concern

DN: Ah...

BM: and I think the answer is that "conscious" implies "conscious of
something". For a computer or an animal to be conscious is really a
relation to an environment.

DN: Yes

BM: So for a computer to be conscious, as a human is, it must be able to
perceive and act in our environment.

DN: My point precisely.

BM: Or it could be running a program in which a conscious being is
simulated and that being would be conscious relative to a simulated
environment in the computer.

DN: I'm prepared to be agnostic on this. But as your 'or' rightly
indicates, if so, it would be conscious relative to the simulated
environment, *not* the human one.

BM: In the latter case there might be an infinite number of different
interpretations that could be consistently placed on the computer's
execution; or there might not. Maybe all those different interpretations
aren't really different. Maybe they are just translations into different
words. It seems to me to be jumping to a conclusion to claim they are
different in some significant way.

DN: Not sure... but I don't see how any of this changes the essential
implication, which is that however many interpretations you place on it, and
however many of these may evoke 'consciousness of something' (or, as I would
prefer to say, a personal or observer world), it would be the simulated
world, not the human one (as you rightly point out). From Bruno's
perspective (I think - and AFAICS, also that of TON) these two 'worlds'
would be different 'levels of substitution'. So, if I said 'yes' to the
doctor's proposal to upload me as an AI program, this might evoke some
observer world, but any such world would be 'orthogonal' to my and the
computer's shared 'level of origin'. Consequently, no new observer evoked in
this way could have any ability to interact with that level. As an aside,
it's an interesting take on the semantics of 'imaginary' - and you know
Occam's attitude to such entities.
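
To make the 'many interpretations' point concrete, here is a toy sketch
(Python, with an encoding I've made up purely for illustration - nothing
here is drawn from Bruno's or anyone else's formalism). One and the same
'physical' state sequence supports two equally consistent computational
readings:

    # One 'physical' trace: the successive contents of an 8-bit register.
    trace = [0b00000011, 0b00000101, 0b00001000]

    # Interpretation A: the states are unsigned integers, and the
    # machine has just computed 3 + 5 = 8.
    as_numbers = list(trace)                    # [3, 5, 8]
    assert as_numbers[0] + as_numbers[1] == as_numbers[2]

    # Interpretation B: the very same states are indices into an
    # externally chosen alphabet, and the machine has 'spelled' d-f-i.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    as_letters = [alphabet[b] for b in trace]   # ['d', 'f', 'i']

    # Nothing in the trace itself selects one reading over the other;
    # both mappings are consistent with every state transition.
    print(as_numbers, as_letters)

The semantics are ascribed from outside the trace, which is just the
'imputation' worry restated.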

Anyway, I'm prepared to be agnostic for the moment about such specifics of
simulated worlds, but the key conclusion seems to be that in no case could
such a 'world' participate at the same causal level as the human one, which
vitiates any sense of its 'interacting' with, or being 'conscious of', the
same 'environment'. AFAICS you have actually reached the same conclusion,
so I don't see in what sense you mean that it's the 'answer'. You seem to
be supporting my point. Do I misunderstand?

David


> OOPS! I accidentally hit the "send" button on the wrong copy.
>
> Here's what I intended to send below:
>
> David Nyman wrote:
> > On 23/06/07, *Russell Standish* <lists.domain.name.hidden> wrote:
> >
> > RS: Perhaps you are one of those rare souls with a foot in
> > each camp. That could be very productive!
> >
> > I hope so! Let's see...
> >
> > RS: This last post is perfectly lucid to me.
> >
> > Phew!! Well, that's a good start.
> >
> > RS: I hope I've answered it
> > adequately.
> >
> > Your answer is very interesting - not quite what I expected:
> >
> > RS: In some Platonic sense, all possible observers are already
> > out there, but by physically instantiating it in our world, we are in
> > effect opening up a communication channel between ourselves and the
> > new consciousness.
> >
> > I think I must be missing something profound in your intended meanings
> > of:
> >
> > 1) 'out there'
> > 2) 'physically instantiating'
> > 3) 'our world'
> >
> > My current 'picture' of it is as follows. The 'Platonic sense' I assume
> > equates to the 'bit-string plenitude' (which is differentiable from 'no
> > information' only by internal observers, like the Library of Babel - a
> > beautiful idea BTW). But I'm assuming a 'hierarchy' of recursive
> > computational emergence, from bits up through, say, strings, quarks,
> > atoms, molecules, etc. - in other words, what is perceived as
> > matter-energy by observers. I then assume that both 'physical objects'
> > and any correlated observers emerge from this matter-energy level, and
> > that this co-emergence accomplishes the 'physical instantiation'. IOW,
> > the observer is the 1-person view, and the physical behaviour the
> > 3-person view, of the same underlying complex emergent - they're
> > different descriptions of the same events.
> >
> > If this is so, then as you say, the opening of the 'communication
> > channel' would be a matter of establishing the means and modes of
> > interaction with any new consciousness, because the same seamless
> > underlying causal sequence unites observer-world and physical-world:
> > again, different descriptions, same events.
> >
> > If the above is accepted (but I'm beginning to suspect there's something
> > deeply wrong with it), then the 'stability' of the world of the observer
> > should equate to the 'stability' of the physical events to which it is
> > linked through *identity*. Now here's what puzzles me. ISTM that the
> > imputation of 'computation' to the physical computer is only through the
> > systematic correspondence of certain stable aspects of its (principally)
> > electronic behaviour to computational elements: numbers,
> > mathematical-logical operators, etc. The problem is in the terms
> > 'imputation' and 'correspondence': this is surely merely a *way of
> > speaking* about the physical events in the computer, an arbitrary
> > ascription, from an infinite possible set, of externally-established
> > semantics to the intrinsic physical syntactics.
> >
> > Consequently, ISTM that the emergence of observer-worlds has to be
> > correlated (somehow) - one-to-one, or isomorphically - with
> > corresponding 'physical' events: IOW these events, with their 'dual
> > description', constitute a single 'distinguished' *causal* sequence. By
> > contrast, *any* of the myriad 'computational worlds' that could be
> > ascribed to the same events must remain - to the computer, rather than
> > the programmer - only arbitrary or 'imaginary' ones. This is why I
> > described them as 'nested' - perhaps 'orthogonal' or 'imaginary' are
> > better: they may - 'platonically' - exist somewhere in the plenitude,
> > but remain causally disconnected from the physical world in which the
> > computer participates. The computer doesn't 'know' anything about them.
> > Consequently, how could they possess any 'communication channel' to the
> > computer's - and our - world 'out there'?
>
> I think I agree with your concern and I think the answer is that
> "conscious" implies "conscious of something". For a computer or an animal
> to be conscious is really a relation to an environment. So for a computer
> to be conscious, as a human is, it must be able to perceive and act in our
> environment. Or it could be running a program in which a conscious being is
> simulated and that being would be conscious relative to a simulated
> environment in the computer. In the latter case there might be an infinite
> number of different interpretations that could be consistently placed on the
> computer's execution; or there might not. Maybe all those different
> interpretations aren't really different. Maybe they are just translations
> into different words. It seems to me to be jumping to a conclusion to claim
> they are different in some significant way.
>
> The importance of the environment for consciousness is suggested by the
> sensory deprivation experiments of the late '60s. People who spent a long
> time in a sensory deprivation tank (an hour or more) observed that their
> minds would enter a loop and that they lost all sense of time.
>
> Brent Meeker
>
> >
> > Of course I'm not claiming by this that machines couldn't be conscious.
> > My claim is rather that if they are, it couldn't be solely in virtue of
> > any 'imaginary computational worlds' imputed to them, but rather because
> > they support some unique, distinguished process of *physical* emergence
> > that also corresponds to a unique observer-world: and of course, mutatis
> > mutandis, this must also apply to the 'mind-brain' relationship.
> >
> > If I'm wrong (as no doubt I am), ISTM I must have erred in some step or
> > other of my logic above. How do I debug it?
> >
> > David
>

Received on Sun Jun 24 2007 - 18:54:50 PDT
