Re: How would a computer know if it were conscious?

From: Brent Meeker <>
Date: Sun, 24 Jun 2007 13:32:51 -0700

OOPS! I accidentally hit the "send" button on the wrong copy.
Here's what I intended to send below:

David Nyman wrote:
> On 23/06/07, Russell Standish wrote:
> RS: Perhaps you are one of those rare souls with a foot in
> each camp. That could be very productive!
> I hope so! Let's see...
> RS: This last post is perfectly lucid to me.
> Phew!! Well, that's a good start.
> RS: I hope I've answered it
> adequately.
> Your answer is very interesting - not quite what I expected:
> RS: In some Platonic sense, all possible observers are already
> out there, but by physically instantiating it in our world, we are in
> effect opening up a communication channel between ourselves and the
> new consciousness.
> I think I must be missing something profound in your intended meanings of:
> 1) 'out there'
> 2) 'physically instantiating'
> 3) 'our world'
> My current 'picture' of it is as follows. The 'Platonic sense' I assume
> equates to the 'bit-string plenitude' (which is differentiable from 'no
> information' only by internal observers, like the Library of Babel - a
> beautiful idea BTW). But I'm assuming a 'hierarchy' of recursive
> computational emergence from bits up through, say, strings, quarks,
> atoms, molecules, etc - in other words what is perceived as
> matter-energy by observers. I then assume that both 'physical objects'
> and any correlated observers emerge from this matter-energy level, and
> that this co-emergence accomplishes the 'physical instantiation'. IOW,
> the observer is the 1-person view, and the physical behaviour the
> 3-person view, of the same underlying complex emergent - they're
> different descriptions of the same events.
> If this is so, then as you say, the opening of the 'communication
> channel' would be a matter of establishing the means and modes of
> interaction with any new consciousness, because the same seamless
> underlying causal sequence unites observer-world and physical-world:
> again, different descriptions, same events.
> If the above is accepted (but I'm beginning to suspect there's something
> deeply wrong with it), then the 'stability' of the world of the observer
> should equate to the 'stability' of the physical events to which it is
> linked through *identity*. Now here's what puzzles me. ISTM that the
> imputation of 'computation' to the physical computer is only through the
> systematic correspondence of certain stable aspects of its (principally)
> electronic behaviour to computational elements: numbers,
> mathematical-logical operators, etc. The problem is in the terms
> 'imputation' and 'correspondence': this is surely merely a *way of
> speaking* about the physical events in the computer, an arbitrary
> ascription, from an infinite possible set, of externally-established
> semantics to the intrinsic physical syntactics.
> Consequently, ISTM that the emergence of observer-worlds has to be
> correlated (somehow) - one-to-one, or isomorphically - with
> corresponding 'physical' events: IOW these events, with their 'dual
> description', constitute a single 'distinguished' *causal* sequence. By
> contrast, *any* of the myriad 'computational worlds' that could be
> ascribed to the same events must remain - to the computer, rather than
> the programmer - only arbitrary or 'imaginary' ones. This is why I
> described them as 'nested' - perhaps 'orthogonal' or 'imaginary' are
> better: they may - 'platonically' - exist somewhere in the plenitude,
> but causally disconnected from the physical world in which the computer
> participates. The computer doesn't 'know' anything about them.
> Consequently, how could they possess any 'communication channel' to the
> computer's - and our - world 'out there'?

I think I agree with your concern and I think the answer is that "conscious" implies "conscious of something". For a computer or an animal to be conscious is really a relation to an environment. So for a computer to be conscious, as a human is, it must be able to perceive and act in our environment. Or it could be running a program in which a conscious being is simulated and that being would be conscious relative to a simulated environment in the computer. In the latter case there might be an infinite number of different interpretations that could be consistently placed on the computer's execution; or there might not. Maybe all those different interpretations aren't really different. Maybe they are just translations into different words. It seems to me to be jumping to a conclusion to claim they are different in some significant way.
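A toy sketch of that underdetermination point (the gate, voltage levels, and interpretation maps are my own illustration, not anything from the thread): the very same physical device, with the very same input-output behaviour, reads as an AND gate under one labelling of its voltage levels and as an OR gate under the opposite labelling. Neither reading is written into the physics.

```python
# Model a "physical" gate as behaviour over two voltage levels.
LOW, HIGH = 0.0, 5.0

def gate(a, b):
    """The fixed physical behaviour: output HIGH only when both inputs are HIGH."""
    return HIGH if (a == HIGH and b == HIGH) else LOW

# Two equally consistent semantic ascriptions to the same voltages:
as_true_high = {LOW: False, HIGH: True}   # reading 1: HIGH means True
as_true_low  = {LOW: True,  HIGH: False}  # reading 2: LOW means True

def compute(interp, x, y):
    """Encode logical inputs as voltages under the chosen interpretation,
    run the unchanged physical gate, and decode the output the same way."""
    enc = {truth: volts for volts, truth in interp.items()}
    return interp[gate(enc[x], enc[y])]

# Under reading 1 the device computes AND; under reading 2, the identical
# device computes OR. Nothing in gate() itself picks between them.
```

The physical event sequence never changes; only the externally imposed mapping from voltages to truth values does, which is the sense in which the semantics is an ascription rather than an intrinsic property of the machine.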

The importance of the environment for consciousness is suggested by the sensory-deprivation experiments of the late '60s. People who spent a long time (an hour or more) in a sensory-deprivation tank reported that their minds would enter a loop and they would lose all sense of time.

Brent Meeker

> Of course I'm not claiming by this that machines couldn't be conscious.
> My claim is rather that if they are, it couldn't be solely in virtue of
> any 'imaginary computational worlds' imputed to them, but rather because
> they support some unique, distinguished process of *physical* emergence
> that also corresponds to a unique observer-world: and of course, mutatis
> mutandis, this must also apply to the 'mind-brain' relationship.
> If I'm wrong (as no doubt I am), ISTM I must have erred in some step or
> other of my logic above. How do I debug it?
> David

Received on Sun Jun 24 2007 - 16:33:02 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:14 PST