Re: computationalism and supervenience

From: Lennart Nilsson <lennartn.domain.name.hidden>
Date: Mon, 11 Sep 2006 08:10:25 +0200

-----Original Message-----
From: everything-list.domain.name.hidden
[mailto:everything-list.domain.name.hidden] On Behalf Of Brent Meeker
Sent: 11 September 2006 05:35
To: everything-list.domain.name.hidden
Subject: Re: computationalism and supervenience


Stathis Papaioannou wrote:
> Brent Meeker writes:
>
>
>>>>I could make a robot that, having suitable thermocouples, would quickly withdraw
>>>>its hand from a fire; but not be conscious of it. Even if I provide the robot with
>>>>"feelings", i.e. judgements about good/bad/pain/pleasure, I'm not sure it would be
>>>>conscious. But if I provide it with "attention" and memory, so that it noted the
>>>>painful event as important and necessary to remember because of its strong
>>>>negative affect, then I think it would be conscious.
>>>
>>>
>>>It's interesting that people actually withdraw their hand from the fire *before*
>>>they experience the pain. The withdrawal is a reflex, presumably evolved in
>>>organisms with the most primitive central nervous systems, while the pain seems to
>>>be there as an afterthought to teach us a lesson so we won't do it again. Thus,
>>>from considerations of evolutionary utility, consciousness does indeed seem to be
>>>a side-effect of memory and learning.
>>
>>Even more curious, volitional action also occurs before one is aware of it. Are you
>>familiar with the experiments of Benjamin Libet and Grey Walter?
>
>
> These experiments showed that in apparently voluntarily initiated motion, motor
> cortex activity actually preceded the subject's awareness of his intention by a
> substantial fraction of a second. In other words, we act first, then "decide" to
> act. These studies did not examine pre-planned action (presumably that would be far
> more technically difficult), but it is easy to imagine the analogous situation
> whereby the action is unconsciously "planned" before we become aware of our
> decision. In other words, free will is just a feeling which occurs after the fact.
> This is consistent with the logical impossibility of something that is neither
> random nor determined, which is what I feel my free will to be.
>
>
>>>I also think that this is an argument against zombies. If it were possible for an
>>>organism to behave just like a conscious being, but actually be unconscious, then
>>>why would consciousness have evolved?
>>
>>An interesting point - but hard to give any answer before pinning down what we mean
>>by consciousness. For example Bruno, Julian Jaynes, and Daniel Dennett have
>>explanations; but they explain somewhat different consciousnesses, or at least
>>different aspects.
>
>
> Consciousness is the hardest thing to explain but the easiest thing to understand,
> if it's your own consciousness at issue. I think we can go a long way discussing it
> assuming that we do know what we are talking about even though we can't explain it.
> The question I ask is, why did people evolve with this consciousness thing,
> whatever it is? The answer must be, I think, that it is a necessary side-effect of
> the sort of neural complexity that underpins our behaviour. If it were not, and it
> were possible that beings could behave exactly like humans and not be conscious,
> then it would have been wasteful of nature to have provided us with consciousness.

This is not necessarily so. First, evolution is constrained by what goes before: its
engineering solutions often seem Rube Goldberg-like, e.g. the backward retina in
mammals. Second, there is selection against some evolved feature only to the extent
that it has a (net) cost. Jaynes's explanation of consciousness, for example,
conforms to these two criteria. I think that any species that evolves intelligence
comparable to ours will be conscious for reasons somewhat like those in Jaynes's
theory. They will be social, and this, combined with intelligence, will make language
a good evolutionary move. Once they have language, remembering what has happened in
symbolic terms, in order to communicate and plan, will be an easy and natural
development. Whether that leads to hearing your own narrative in your head, as Jaynes
supposes, is questionable; but it would be consistent with evolution: it takes
advantage of existing structures and functions to realize a useful new function.

> This does not necessarily mean that computers can be conscious: maybe if we had
> evolved with electronic circuits in our heads rather than neurons, consciousness
> would not have been a necessary side-effect.

But my point is that this may come down to what we would mean by a computer being
conscious. Bruno has an answer in terms of what the computer can prove. Jaynes (and
probably John McCarthy) would say a computer is conscious if it creates a narrative
of its experience which it can access as memory.

Brent Meeker
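
As a purely illustrative aside, that last criterion can be given a skeletal
functional reading in a few lines of Python: an agent that records episodes as a
symbolic narrative and can later query them as memory. This is a toy sketch of that
reading only; every name in it is invented for illustration, it is not anyone's
actual proposal, and it settles nothing about whether such a system would be
conscious.

# Toy sketch: a system that builds a narrative of its experience and can
# access it as memory. All names are invented for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    time: int          # when the episode happened (arbitrary ticks)
    description: str   # the episode, recorded in symbolic/linguistic form
    salience: float    # how strongly it was marked (e.g. pain = high)

@dataclass
class NarrativeAgent:
    events: List[Event] = field(default_factory=list)
    clock: int = 0

    def experience(self, description: str, salience: float = 0.1) -> None:
        # Record an episode into the running narrative.
        self.events.append(Event(self.clock, description, salience))
        self.clock += 1

    def recall(self, min_salience: float = 0.5) -> List[str]:
        # Access the narrative as memory: retrieve the salient episodes.
        return [e.description for e in self.events if e.salience >= min_salience]

agent = NarrativeAgent()
agent.experience("reached toward the fire")
agent.experience("withdrew hand from the fire; it hurt", salience=0.9)
print(agent.recall())  # -> ['withdrew hand from the fire; it hurt']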

Humphrey says it has to have an evolutionary past.
LN


