Re: Re: How would a computer know if it were conscious?

From: Mark Peaty <mpeaty.domain.name.hidden>
Date: Tue, 26 Jun 2007 02:36:10 +0800

David,
We have reached some understanding in the 'asifism' thread,
and I would summarise that, tilted towards the context of this
thread, more or less as follows.

Existence -
* The irreducible primitive is existence per se;
* that we can know about this implies differentiation in and of
that which exists;
* that we can recognise both invariance and changes and
participate in what goes on implies _connection_.

I am sure there must be a mathematical/logical formalism which
could render that with exquisite clarity, but I don't know how
to do it. Plain English is what I have to settle for [and aspire
to :-]

There are a couple of issues that won't go away though: our
experience is always paradoxical, and we will always have to
struggle to communicate about it.

Paradox or illusion -
I think people use the word 'illusion' about our subjective
experience of being here now because they don't want to see it
as paradoxical. However, AFAICS, the recursive self-referencing
entailed in being aware of being here now guarantees that what
we are aware of at any given moment, i.e. what we can attend to,
can never be the totality of what is going on in our brains. In
terms of mind, some of it - indeed probably the majority - is
unconscious. Normally we are not aware of this. [Duh, that is
what unconscious means, Mark!] But sometimes we can become aware
[acutely!] of having _just been_ operating unconsciously, and
this is salutary, once the sickening embarrassment subsides
anyway :-0
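
One crude way to make the 'can never be the totality' point
vivid is a simple counting argument. What follows is only a
back-of-envelope Python sketch with made-up figures, not a model
of the brain; the recursion point above adds that a complete
report would also have to contain itself.

# Toy counting argument (figures invented for illustration): if the
# brain's total state at a moment takes n bits to specify, but what
# can be attended to in that moment fits in w < n bits, then the
# aware 'report' can distinguish at most 2**w of the 2**n possible
# total states - it is necessarily a summary, never the whole.

n_state_bits = 40   # stand-in for 'everything going on in the brain now'
w_aware_bits = 16   # stand-in for 'what can be attended to right now'

total_states = 2 ** n_state_bits
reportable   = 2 ** w_aware_bits

# On these invented figures, each reportable summary has to lump
# together about 16.8 million distinct total states.
print(f"states lumped together per summary: {total_states // reportable:,}")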

For those of us who have become familiar with this issue it is
no hardship, but there are many who resist the idea. The least
mortifying example that is _easy to see in oneself_ is what
happens when we look for something and then find it: before we
find it the thing is 'not there' for us, except that we might
believe that it really is. Then we find it; the thing just pops
into view! As mundane as mould on cheese, but bloody marvellous
as soon as you start thinking about how it all works!

But I have to *challenge you to clarify* whether what I write
next really ties in completely with what you are thinking.
I'll try it in point form for brevity's sake.

Behaviour and consciousness -
* Consciousness is something we know personally, and through
discussion with others we come to believe that their experience
is very similar.
* Good scientific evidence and moderately sceptical common sense
tell us that this experience is _intimately and exclusively_
bound up with the activity of our brains. I.e. the experience -
the conscious awareness of the moment as well as the simultaneous
or preliminary non-conscious activity - is basically what the
brain does, give or take a whole range of hormonal controls of
the rest of the organism. This can be summarised as 'The mind is
what the brain does', at least insofar as 'consciousness' is
concerned, and the brain does it all in order to make the body's
muscles move in the right way.
* People's misunderstanding about how we are conscious seems to
centre around how mere meat could 'have' this experience.
* The answer is that the brain is structured so that behaviours
- potentially a million or more human behaviours of all sorts -
can be *stored* within the brain. This storage, using the word
in a wide sense, actually consists of changes to the fine
structures within the brain [synapses, dendrite location, tags
on DNA, etc.] which result in [relatively] discrete, repeatable
patterns of neuronal network activity that function as sequences
of muscle activation.
* For practical purposes behaviours usually involve muscles
moving body parts appropriately. [If muscles don't move, nobody
else can be sure if anything is going on]. However, within the
human brain, learning also entails the formation of neuronal
network activity patterns which become surrogates for or
alternatives to overtly visible behaviours. Likewise the
completely internal detection of such surrogate activities
becomes a kind of surrogate for perception of one's own overt
behaviours or for perception of external world activities which
would result from one's own actions.
* Useful and effective response and adaptation to the world
requires reviewing the appropriateness of one's overt behaviour
and being able to adjust or completely change one's behaviours
both at very short notice and over arbitrarily long periods,
depending on the duration of the effects of one's actions. This
entails responding to one's own behaviours over whatever time
scale is necessary.
* Behaviours, once learned, become habitual, i.e. they are
evoked by appropriate circumstances and proceed in the manner
learned unless varied by on-going review and adjustment. Where
the habitual behavioural response is completely appropriate, we
are barely conscious of the activity; we only pay attention to
novelties and challenges - be they in the distant environment,
our close surroundings, or internal to our own bodies and minds.
[A toy sketch of this evoke-or-attend idea follows just below.]
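
To be concrete about that last point, here is a purely
illustrative toy sketch in Python. Every name, trigger and
threshold is invented for illustration; it pretends to model
nothing about real neuronal networks, only the shape of the
idea: a familiar circumstance evokes a stored pattern, and
attention is engaged only when nothing stored fits.

# Toy sketch (all names invented): behaviours stored as retrievable
# patterns, evoked by circumstances, with 'attention' reserved for
# novelty, i.e. circumstances no stored pattern handles well.

from difflib import SequenceMatcher

# A tiny 'repertoire' of learned behaviours, keyed by the kind of
# circumstance that evokes them.
repertoire = {
    "door closed": "reach, grasp handle, turn, push",
    "cup on table": "reach, grasp cup, lift, drink",
    "phone ringing": "locate phone, pick up, answer",
}

def similarity(a: str, b: str) -> float:
    """Crude stand-in for pattern matching in a neuronal network."""
    return SequenceMatcher(None, a, b).ratio()

def respond(circumstance: str, threshold: float = 0.6) -> str:
    """Evoke the best-matching habit; flag novelty otherwise."""
    best, score = max(
        ((t, similarity(circumstance, t)) for t in repertoire),
        key=lambda pair: pair[1],
    )
    if score >= threshold:
        # Familiar circumstance: the habit runs, barely attended to.
        return f"habit: {repertoire[best]}"
    # Novel circumstance: attention - and, on this view, awareness -
    # is engaged for review and adjustment.
    return f"NOVELTY - attend, review, adjust: {circumstance!r}"

print(respond("door closed"))          # runs the stored pattern
print(respond("strange buzzing box"))  # nothing fits: attention engaged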

Who? -
* The idea that responding to one's own responses is the basis
of consciousness causes some to complain that this implies some
kind of infinite regress of observers. What actually happens is
that internal brain behaviours [discrete network activations]
occur as surrogates for all the relevant environmental features
of interest, including one's own body and the storyline we are
following. Where surrogates for environmental features are
linked in with surrogates for 'self' [body and storyline] and
with network activations that stand for relationships between
those features of environment and self, THAT, moment by moment,
is something which exists. So there is 'something it is LIKE to
be' it, and that is what it is. The registration of novelty and
the responses to it, reviewed in ceaseless recursive cycles,
gives us the basis of subjective time. [Another toy sketch of
this follows below.]
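
Here is an equally toy-like Python sketch of 'responding to
one's own responses' without a regress of observers. Again all
the names are invented; the only point it is meant to show is
that the 'self' surrogate is just one more finite entry, updated
each cycle and read on the next pass like any other feature.

# Toy sketch (all names invented): one recursive review cycle updates
# surrogates for the environment, for 'self' [body and storyline],
# and for the relation between them. No second observer is needed:
# the record of the previous response is simply read on the next pass.

def cycle(surrogates: dict, sensed: str) -> dict:
    """Register novelty against the current surrogates, then update them."""
    novel = sensed != surrogates.get("environment")
    return {
        "environment": sensed,                      # surrogate for the world
        "self": {                                   # surrogate for 'me'
            "last_response": "attend" if novel else "carry on",
            "storyline": surrogates["self"]["storyline"] + (1 if novel else 0),
        },
        "relation": "novel" if novel else "familiar",  # world<->self linkage
    }

state = {"environment": None,
         "self": {"last_response": None, "storyline": 0}}

for sensed in ["kitchen", "kitchen", "loud bang", "kitchen"]:
    state = cycle(state, sensed)
    print(state["relation"], state["self"])

# The succession of such cycles - novelty registered, response made,
# response itself available to the next cycle - is what the post above
# offers as the basis of subjective time.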

I have put this description in terms of 'behaviours' because I
am practising how to deal with the jibes and stonewalling of
someone who countenances only 'behavioural analysis'
descriptions. I am happier recognising that most internal
behaviours can be called 'representations' - it is much more
succinct.

Regards

Mark Peaty CDES

mpeaty.domain.name.hidden

http://www.arach.net.au/~mpeaty/





David Nyman wrote:
> On Jun 20, 3:35 am, Colin Hales <c.ha....domain.name.hidden> wrote:
>
>> Methinks you 'get it'. You are far more eloquent than I am, but we talk of
>> the same thing..
>
> Thank you Colin. 'Eloquence' or 'gibberish'? Hmm...but let us
> proceed...
>
>> where I identify <<<???>>> as a "necessary primitive" and comment that
>> 'computation' or 'information' or 'complexity' have only the vaguest of an
>> arm waving grip on any claim to such a specific role. Such is the 'magical
>> emergence' genre.
>
> Just so. My own 'meta-analysis' is also a (foolhardy?) attempt to
> identify the relevant 'necessity' as *logical*. The (awesome) power
> of this would be to render 'pure' 3-person accounts (i.e. so-called
> 'physical') radically causally incomplete. Some primitive like yours
> would be a *logically necessary* foundation of *any* coherent account
> of 'what-is'.
>
> Strawson, and Chalmers, as I've understood them, make the (IMO)
> fundamental mis-step of proposing a superadded 'fundamental property'
> to the 'physical' substrate (e.g. 'information'). This has the fatal
> effect of rendering such a 'property' *optional* - i.e. it appears
> that everything could proceed just as happily without it in the 3-
> person account, and hence 'consciousness' can (by some) still airily
> be dismissed as an 'illusion'. The first move here, I think, is to
> stop using the term 'consciousness' to denote any 'property'.
>
> My own meta-analysis attempts to pump the intuition that all
> processes, whether 0, 1, or 3-person, must from *logical necessity* be
> identified with 'participative encounters', which are unintelligible
> in the absence of *any* component: namely 'participation', 'sense',
> and 'action'. So, to 'exist' or 'behave', one must be:
>
> 1) a participant (i.e. the prerequisite for 'existence')
> 2) sensible (i.e. differentiating some 'other' in relationship)
> 3) active (i.e. the exchange of 'motivation' with the related 'other')
>
> and all manifestations of 'participative existence' must be 'fractal'
> to these characteristics in both directions (i.e. 'emergence' and
> 'supervention'). So, to negate these components one-by-one:
>
> 1) if not a participant, you don't get to play
> 2) if not sensible, you can't relate
> 3) if not active in relationship, you have no 'motivation'
>
> These logical or semantic characteristics are agnostic to the
> 'primitive base'. For example, if we are to assume AR as that base,
> then the 'realism' part must denote that we 'participate' in AR, that
> 'numbers' are 'mutually sensible', and that arithmetical relationship
> is 'motivational'. If I've understood Bruno, 'computationalism'
> generates 'somethings' at the 1-person plural level. My arguments
> against 'software uploading' then apply at the level of these
> 'emergent somethings', not to the axiomatic base. This is the nub of
> the 'level of substitution' dilemma in the 'yes doctor' puzzle.
>
> In 'somethingist' accounts, 'players' participate in sensory-
> motivational encounters between 'fundamental somethings' (e.g.
> conceived as vibrational emergents of a modulated continuum).
>
> The critical move in the above argument is that by making the relation
> between 0,1, and 3-person accounts and the primitives *self-relation*
> or identity, we jettison the logical possibility of 'de-composing'
> participative sensory-motivational relationship. 0,1, and 3-person
> are then just different povs on this:
>
> 0 - the participatory 'arena' itself
> 1 - the 'world' of a differentiated 'participant'
> 3 - a 'proxy', parasitising a 1-person world
>
> 'Zombies' and 'software' are revealed as being category 3: they
> 'parasitise' 1-person worlds, sometimes as 'proxies' for distal
> participants, sometimes 'stand-alone'. The imputation of 'soft
> behaviour' to a computer, for example, is just such a 'proxy', and has
> no relevance whatsoever to the 1-person pov of the distal
> 'participatory player'. Such a pov can emerge only fractally from its
> *participative* constitution.
>
>> A
>> principle of the kind X must exist or we wouldn't be having this
>> discussion. There is no way to characterise explanation through magical
>> emergence that enables empirical testing. Not even in principle. They are
>> impotent at all prediction. You adopt the position and the whole job is
>> done and is a matter of belief = NOT SCIENCE.
>
> Well, I'm happy on the above basis to make the empirical prediction:
>
> No 'computer' will ever spontaneously adopt a 1-person pov in virtue
> of any 'computation' imputed to it.
>
> You, of course, are working directly on this project. My breath is
> bated!
>
> For me, one of the most important consequences of the foregoing
> relates to our intuitions about ourselves. We hear from various
> directions that our 1-person worlds are 'epiphenomenal' or 'illusory'
> or simply that they don't 'exist'. But this can now be seen to be
> vacuous, deriving from a narrative fixation on the 'proxy', or
> 'parasite', rather than the participant. In fact, it is the tacit
> assumption of sense-action to the parasite (e.g. the 'external world')
> that is illusory, epiphenomenal and non-existent. Real players -
> participators - inherit the precursors of *all* their characteristics
> from the primitives on which they supervene: hence the fundamental
> 'spontaneity' (i.e. 'given-ness') of the primitives must also emerge,
> mutatis mutandis, at the 1-person level. This is crucial, because we
> can now stop gibbering about 'illusion' (which of course isn't to say
> that we can never be mistaken). Our personal worlds really are
> 'something like' - sensorily, motivationally - the explanatory
> primitives on which they supervene. After all, sans magic, how else
> could it be?
>
> Cheers
>
> David
>

Received on Mon Jun 25 2007 - 14:36:52 PDT
