Colin Geoffrey Hales wrote:
> Hi Quentin,
> >
> > Hi Colin,
> >
> <<snip>>
> >> ... I am more interested in proving scientists aren't/can't be
> >> zombies....that it seems to also challenge computationalism in a certain
> >> sense... this is a byproduct I can't help, not the central issue. Colin
> >
> >
> > I don't see how the idea of zombies could challenge computationalism...
> > Zombie is an argument against dualism... put another way, it is the
> > ability to construct a being functionally identical to a conscious one,
> > yet the zombie is not conscious. Computationalism does not predict
> > zombies, simply because computationalism is one way to explain
> > consciousness.
> >
> > Quentin
> >
>
> Now that there is a definite role for consciousness (access to novelty),
> the requirement of a 'functional equivalent' makes the original
> 'philosophical zombie' an oxymoron...the first premise, 'functional
> equivalence', is wrong. The zombie can't possibly be functionally
> identical without consciousness, which stops it being a zombie!
You need to distinguish between having a function and being a function.
Locomotion is a function. Legs have the function of
locomotion. But wheels or wings or flippers could fulfil the same
function.
> To move forward, the 'breed' of the zombie in the paper is not merely
> 'functionally identical'. That requirement is relaxed. Instead it is
> physically identical in all respects except the brain. This choice is
> justified empirically - the brain is known to be where it happens. Then
> there is an exploration of the difference between the human and zombie
> brains that could account for why/how one is conscious and the other is
> not. At that point (post-hoc) one can assess functional equivalence. The
> new zombie is born.
>
> Now...If I can show even one behaviour that the human can do that the new
> zombie can't replicate then I have got somewhere. The assessment benchmark
> chosen is 'scientific behaviour'. This is the 'function' in which
> equivalence is demanded. Of all human behaviours this one is unique
> because it is directed at the world _external_ to the scientist.
Surely just about every action is directed towards the external world.
> It also
> produces something that is required to be externalised (a law of nature,
> 3rd person corroboration). The last unique factor is that the scientist
> creates something previously unknown to ALL. It is unique in this regard
> and the perfect benchmark behaviour with which to contrast the zombie and
> the human.
>
> So, I have my zombie scientist and my human scientist and I ask them to do
> science on exquisite novelty. What happens? The novelty is invisible to
> the zombie, who has the internal life of a dreamless sleep.
I think you are confusing lack of phenomenality with lack of
response to the environment. Simple sensors
can respond without (presumably) phenomenality.
So can humans with blindsight (but not very efficiently).
> The reason it
> is invisible is that there is no phenomenal consciousness. The zombie
> has only sensory data to use to do science. There are an infinite number
> of ways that same sensory data could arrive from an infinity of external
> natural world situations. The sensory data is ambiguous
That doesn't follow. The Zombie can produce different responses
on the basis of physical differences in its input, just as
a machine can.
> - it's all the
> same - action potential pulse trains traveling from sensors to brain.
No, it's not all the same. It's coded in a very complex way. It's
like saying the information in your computer is "all the same -- it's
all ones and zeros".
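For instance (a minimal sketch of my own, not anything from Colin's
paper -- the strings and variable names are purely illustrative), two
messages rendered as raw bits are both "just ones and zeros", yet they
remain perfectly distinguishable by their pattern:

    # Python: same alphabet of ones and zeros, different information
    a = "cat".encode("utf-8")
    b = "dog".encode("utf-8")
    bits_a = "".join(f"{byte:08b}" for byte in a)
    bits_b = "".join(f"{byte:08b}" for byte in b)
    print(bits_a)            # 011000110110000101110100
    print(bits_b)            # 011001000110111101100111
    print(bits_a == bits_b)  # False: the pattern carries the message

The same goes for pulse trains: uniform in medium, not in structure.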
> The zombie cannot possibly distinguish the novelty from the sensory data
> and has no awareness of the external world or even its own boundary.
Huh? It's perfectly possible to build a robot
that produces a special signal when it encounters input it has
not encountered before.
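Here is a minimal sketch of such a novelty-signalling machine (my own
illustration in Python; the class and method names are hypothetical).
It is pure symbol manipulation -- memory plus comparison -- with no
phenomenality anywhere:

    class NoveltyDetector:
        def __init__(self):
            self.seen = set()   # memory of previously encountered inputs

        def observe(self, sensory_input: bytes) -> bool:
            """Emit the 'novel!' signal on first encounter only."""
            is_novel = sensory_input not in self.seen
            self.seen.add(sensory_input)
            return is_novel

    robot = NoveltyDetector()
    print(robot.observe(b"\x01\x02\x03"))  # True  -- never seen before
    print(robot.observe(b"\x01\x02\x03"))  # False -- already in memory
    print(robot.observe(b"\x07\x07\x07"))  # True  -- a novel pattern

Of course this only detects novelty relative to its own input history,
not 'exquisite novelty' in your sense, but it shows that producing a
special response to unseen input requires no phenomenal consciousness.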
> OK.
>
> Now, we have the situation where, in order that science be done by a
> human, we must have phenomenal consciousness. This is 'phenomena' -
> actual natural world 'STUFF' behaving in a certain way. If I were to do
> science on a rock...that rock is a natural world phenomenon. So is
> consciousness. The fact that our present scientific modes of thinking
> make the understanding of it as a phenomenon difficult is irrelevant.
> The reality of its existence is proven because science exists.
>
> How does this reach computationalism?
>
> Well, if consciousness is a phenomenon like any other, as it must be,
> then phenomena of the type applicable to consciousness (whatever the
> mysterious hard-problem solution is) must be present in order that
> scientific behaviour can happen. The phenomena in a computational
> artifact - one that is manipulating symbols - are the phenomena of the
> artifact, not those represented by any symbols being manipulated.
>
> So the idea of a functional equivalent based on manipulation of symbols
> alone is arguably/demonstrably wrong in one case only: scientific
> behaviour. From an AGI engineering perspective it means pure computation
> won't do it. So I am not going to use it. I am going to make chips that
> create the right phenomena in which to (symbolically) ground all knowledge
> acquisition. Merely hooking the AGI up to sensors will not do that.
>
> From a "computationalism" perspective it means....
>
> ....Now perhaps you can tell me what you think it means. I have my own
> practical implication...if you tell me yours I might understand better. I
> seem to have a messed-up idea of what computationalism actually means.
>
> cheers,
>
> Colin