Re: UDA revisited

From: Colin Geoffrey Hales <c.hales.domain.name.hidden>
Date: Sat, 25 Nov 2006 08:54:20 +1100 (EST)

Hi Quentin,
>
> Hi Colin,
>
<<snip>>
>> ... I am more interested in proving scientists aren't/can't be
>> zombies... that it seems to also challenge computationalism in a
>> certain sense... this is a byproduct I can't help, not the central
>> issue. Colin
>
> I don't see how the idea of zombies could challenge computationalism...
> The zombie is an argument against dualism... put another way, it is the
> ability to construct a functionally identical being as a conscious one,
> yet the zombie is not conscious. Computationalism does not predict
> zombies, simply because computationalism is one way to explain
> consciousness.
>
> Quentin
>

Now that there is a definite role for consciousness (access to novelty),
the phrase 'functional equivalent' makes the original 'philosophical
zombie' an oxymoron: the first premise, 'functional equivalence', is
wrong. The zombie can't possibly be functionally identical without
consciousness, and having consciousness stops it being a zombie!

To move forward, the breed of zombie in the paper is not merely
'functionally identical'. That requirement is relaxed. Instead it is
physically identical in all respects except the brain. This choice is
justified empirically - the brain is known to be where consciousness
happens. Then there is an exploration of the difference between the human
and zombie brains that could account for why/how one is conscious and the
other is not. At that point (post hoc) one can assess functional
equivalence. The new zombie is born.

Now... if I can show even one behaviour that the human can perform and the
new zombie can't replicate, then I have got somewhere. The assessment
benchmark chosen is 'scientific behaviour'. This is the 'function' in
which equivalence is demanded. Of all human behaviours this one is unique
because it is directed at the world _external_ to the scientist. It also
produces something that is demanded to be externalised (a law of nature,
3rd-person corroboration). The last unique factor is that the scientist
creates something previously unknown by ALL. In these regards it is the
perfect benchmark behaviour for contrasting the zombie and the human.

So, I have my zombie scientist and my human scientist, and I ask them to
do science on exquisite novelty. What happens? The novelty is invisible to
the zombie, who has the internal life of a dreamless sleep. The reason it
is invisible is that there is no phenomenal consciousness. The zombie has
only sensory data to use to do science, and there is an infinite number of
ways that the same sensory data could arrive, from an infinity of external
natural-world situations. The sensory data is ambiguous - it's all the
same - action-potential pulse trains travelling from sensors to brain.

The zombie cannot possibly distinguish the novelty within the sensory data,
and has no awareness of the external world or even of its own boundary.
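
A minimal sketch of that ambiguity point, in Python (the toy sensor model,
names, and numbers are entirely my own illustration, not anything from the
paper): many distinct external states can map to one and the same afferent
signal, so the signal alone cannot single out the novel state.

    # Toy sensor: a lossy, many-to-one encoding of stimulus intensity
    # into a spike count. Hypothetical; any lossy map makes the point.
    def afferent_signal(stimulus_intensity: float) -> int:
        return round(stimulus_intensity * 10) % 5

    # Two very different external-world situations...
    world_a = 0.73   # say, light reflected off a familiar rock
    world_b = 1.23   # say, light from something exquisitely novel

    # ...arrive at the brain as identical pulse trains.
    print(afferent_signal(world_a))  # -> 2
    print(afferent_signal(world_b))  # -> 2

From the output alone, nothing distinguishes world_a from world_b; that is
the sense in which the data is 'all the same'.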

OK.

Now we have the situation where, in order for science to be done by a
human, there must be phenomenal consciousness. This is 'phenomena' -
actual natural-world 'STUFF' behaving in a certain way. If I were to do
science on a rock... that rock is a natural-world phenomenon. So is
consciousness. The fact that our present scientific modes of thinking make
it difficult to understand as a phenomenon is irrelevant. Its existence is
proven by the fact that science exists.

How does this bear on computationalism?

Well, if consciousness is a phenomenon like any other, as it must be, then
phenomena of the type applicable to consciousness (whatever the mysterious
hard-problem solution is) must be present in order for scientific
behaviour to happen. The phenomena in a computational artifact - one that
is manipulating symbols - are the phenomena of the artifact, not those
represented by any symbols being manipulated.
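
To make that symbol/substrate distinction concrete, here is a toy example
of my own (not Colin's): a program can manipulate symbols that represent
heating without any heat-like phenomenon occurring in the machine itself.

    # Symbols representing a temperature, being manipulated.
    simulated_temperature_c = 20.0
    for _ in range(100):
        simulated_temperature_c += 5.0   # 'heating', symbolically

    # The symbols now say 520 degrees C, but the physical phenomena in
    # the artifact are just charge shuffling in transistors; nothing in
    # the machine is hot in the way the symbols describe.
    print(simulated_temperature_c)  # -> 520.0

The only phenomena present are those of the computer; the heat exists
solely as represented.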

So the idea of a functional equivalent based on manipulation of symbols
alone is arguably/demonstrably wrong in one case only: scientific
behaviour. From an AGI engineering perspective it means pure computation
won't do it. So I am not going to use it. I am going to make chips that
create the right phenomena in which to (symbolically) ground all knowledge
acquisition. Merely hooking the AGI up to sensors will not do that.

From a "computationalism" perspective it means....

...Now perhaps you can tell me what you think it means. I have my own
practical implication; if you tell me yours, I might understand better. I
seem to have a messed-up idea of what computationalism actually means.

cheers,

Colin




