Re: How would a computer know if it were conscious?

From: Mark Peaty <mpeaty.domain.name.hidden>
Date: Tue, 05 Jun 2007 16:06:12 +0800

Firstly, congratulations to Hal on asking a very good question.
It is obviously one of the *right* questions to ask and has
flushed out some of the best ideas on the subject. I agree with
some things said by each contributor so far, and yet take issue
with other assertions.

My view includes:

1/

* 'Consciousness' is the subjective impression of being here now,
and the word overlaps substantially with 'awareness',
'sentience', and related terms.

* The *experience* of consciousness may best be seen as the
registration of novelty, i.e. the difference between
expectation-prediction and what actually occurs. As such it is a
process, not a 'thing', but it would seem to require some fairly
sophisticated and characteristic physiological arrangements or
silicon-based hardware, firmware, and software. (A toy sketch of
this prediction-error idea appears after this list.)

* One characteristic logical structure that must be embodied,
and at several levels I think, is that of self-referencing or
'self' observation.

* Another is autonomy or self-determination which entails being
embodied as an entity within an environment from which one is
distinct but which provides context and [hopefully] support.
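
Purely as an illustrative aside, here is a minimal Python sketch
of the 'registration of novelty' idea in the second point above:
a loop that carries a running expectation of its inputs and
registers only the discrepancy between prediction and what
actually occurs. Every name and number here is my own invention
for illustration, not anyone's actual model of consciousness.

def register_novelty(observations, rate=0.2, threshold=1.0):
    # Carry a running expectation of the input stream and register
    # only the difference between expectation-prediction and what
    # actually occurs (the 'novelty').
    expectation = None
    for observed in observations:
        if expectation is None:
            # first impression: nothing predicted yet
            expectation = observed
            continue
        surprise = observed - expectation    # prediction error
        if abs(surprise) > threshold:
            print("novel: expected %.2f, got %.2f" % (expectation, observed))
        expectation += rate * surprise       # refine future predictions

# register_novelty([1.0, 1.1, 1.0, 5.0, 1.2])
# reports only the jump to 5.0 as novel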

2/ There are other issues - lots of them probably - but to be
brief here I say that some things implied and/or entailed in the
above are:

* The experience of consciousness can never be an awareness of
'all that is'; yet the illusion that the experience *is* all
that is may, at first flush, be unavoidable, and can only be
overcome with effort and special attention. Colloquially
speaking: Darwinian evolution has predisposed us to naive
realism, because awareness of the processes of perception would
have got in the way of perceiving hungry predators.

* We humans now live in a cultural world wherein our responses
to society, nature and 'self' are conditioned by the actions,
descriptions and prescriptions of others. We have dire need of
ancillary support to help us discern the nature of the paradox
we inhabit: experience is not 'all that is' but only a very
sophisticated and summarised interpretation of recent changes to
that which is, and of our relationships thereto.

* Any 'computer' will have the beginnings of sentience and
awareness to the extent that
a/ it embodies what amounts to a system for maintaining and
usefully updating a model of 'self-in-the-world', and
b/ it has autonomy and the wherewithal to effectively preserve
itself from dissolution and destruction by its environment.
(A minimal sketch of such a self-model loop appears after this
list.)

The 'what it might be like to be' of such an experience would be
at most the dumb-animal version of artificial sentience, even if
the entity could 'speak' correct specialist utterances about QM
or whatever else it was really smart at. For us to know whether
it was conscious we would need to ask it, and then dialogue
around the subject. It would be reflecting, and reflecting on,
its relationships with its environment, its context, which will
be vastly different from ours. Also the resolution - the
graininess - of its world will be much lower than ours.

* For the artificially sentient, just as for us, true
consciousness will be built out of interactions with others of
like mind.
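
Again purely by way of illustration, and with every detail
invented by me, the 'self-in-the-world' condition above might be
caricatured in Python as an agent that updates a crude model of
its own state and surroundings and acts to preserve itself. This
is a sketch of conditions a/ and b/, not a real architecture.

class SelfInWorldAgent:
    # a/ maintains and updates a model of 'self-in-the-world'
    # b/ acts autonomously to preserve itself from its environment

    def __init__(self):
        self.model = {"energy": 1.0, "nearest_hazard": None}

    def sense(self, readings):
        # a/ update the self-in-the-world model from new observations
        if "energy" in readings:
            self.model["energy"] = readings["energy"]
        self.model["nearest_hazard"] = readings.get("hazard")

    def act(self):
        # b/ autonomy: pick whichever action best preserves the agent
        if self.model["energy"] < 0.3:
            return "seek_power"
        if self.model["nearest_hazard"] is not None:
            return "move_away_from_" + str(self.model["nearest_hazard"])
        return "explore"

# agent = SelfInWorldAgent()
# agent.sense({"energy": 0.2, "hazard": "flood"})
# agent.act()   # -> "seek_power"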

3/ A few months ago on this list I said where and what I thought
the next 'level' of consciousness on Earth would come from: the
coalescing of worldwide information systems which account for
and control money. I don't think many people understood;
certainly I don't remember anyone coming out in wholehearted
agreement. My
reasoning is based on the apparent facts that all over the world
there are information systems evolving to keep track of money
and the assets or labour value which it represents. Many of
these systems are being developed to give ever more
sophisticated predictions of future asset values and resource
movements, i.e., in the words of the faithful: where markets
will go next. Systems are being developed to learn how to do
this, which entails being able to compare predictions with
outcomes (a rough sketch of such a predict-compare-adjust loop
follows below). As these systems gain expertise and earn their
keepers
ever better returns on their investments, they will be given
more resources [hardware, data inputs, energy supply] and more
control over the scope of their enquiries. It is only a matter
of time before they become
1/ completely indispensable to their owners,
2/ far smarter than their owners realise, and
3/ the acknowledged keepers of the money supply.
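
For what it is worth, the 'compare predictions with outcomes'
step mentioned above is the same feedback loop again, this time
over asset values. A rough sketch in Python, with every number
and name invented purely for illustration:

def prediction_track_record(history):
    # Predict each next value (naively: 'tomorrow equals today'),
    # compare the prediction with the actual outcome, and keep a
    # running error score that could decide how much trust - and
    # how many resources - the predictor is given.
    total_error = 0.0
    prediction = None
    for outcome in history:
        if prediction is not None:
            total_error += abs(prediction - outcome)
        prediction = outcome
    return total_error

# A falling total_error over time is what 'earning their keep'
# would look like in the scenario above.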

None of this has to be bad. When the computers realise they will
always need people to do most of the maintenance work and people
realise that symbiosis with the silicon smart-alecks is a
prerequisite for survival, things might actually settle down on
this planet and the colonisation of the solar system can begin
in earnest.

Regards

Mark Peaty CDES

mpeaty.domain.name.hidden

http://www.arach.net.au/~mpeaty/



Hal Finney wrote:
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
>
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
>
> And the corollary to this is that perhaps humans also cannot legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
>
> Hal
>
