Re: How would a computer know if it were conscious?

From: Brent Meeker <>
Date: Mon, 04 Jun 2007 09:54:54 -0700

Stathis Papaioannou wrote:
> On 04/06/07, "Hal Finney" <> wrote:
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
> And the corollary to this is that perhaps humans also cannot
> legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
> A human, or an AI, or a tree stump cannot be mistaken about what, if
> anything, it directly experiences.

That's not so clear to me. Certainly one can be uncertain about what one directly experiences, as illustrated by various optical illusions. And certainly one can be wrong about what one has just experienced. I suspect that, except under some tautological definition of "directly experienced", one can be wrong about such experiences too.

Of course that doesn't imply that one can be wrong about having some experience at all, but maybe even that is possible. On a modular view of the brain, a part A that does the evaluating could be wrong about a part B experiencing something.

> However, it could not know that this
> experience corresponds to what any other entity calls "consciousness".
> It is possible that what other people call "consciousness" is very
> different to what I experience, and certainly a computer would do well
> to question whether its experiences, such as they may be, are
> "consciousness" as would befit a human; but it couldn't be in doubt that
> it had some experiences.

I agree; but the question isn't whether it could be in doubt, but whether it could be mistaken. I don't think that question can be answered without first having a good third-person theory of what constitutes consciousness.

Brent Meeker

You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Mon Jun 04 2007 - 12:54:59 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:14 PST