Re: Implementation

From: Gilles HENRI <Gilles.Henri.domain.name.hidden>
Date: Fri, 23 Jul 1999 16:15:27 +0200

Some remarks in this debate:

* the computationalist hypothesis may be questionable from the beginning,
since it tries to relate a subjective process (consciousness) to an
objective one (computation). The only objective-like feature is the ability
to react "like a human" (the Turing test), including counterfactuals.
 If you forbid any question implying a link between subjective and
objective notions, it follows immediately that zombies cannot exist.
Concerning Olympia, my suggestion is the following: as consciousness is a
subjective notion, there is no clear-cut boundary between conscious and
non-conscious systems. The amount of consciousness we attribute to a system
will depend on the number of counterfactuals it can properly handle (much
as someone drunk will not react properly to normal inputs). So the amount
of consciousness we recognize in Olympia depends on what she is able to
handle, with or without Karas.
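 To make the contrast concrete, here is a minimal sketch in Python (purely
illustrative; the device names and the scoring function are my own
invention, not part of the Olympia construction): a replay device answers
only the single rehearsed history, while a computing device also handles
counterfactual inputs it was never shown.

    # Illustrative sketch only: a toy picture of "how many counterfactuals
    # a system can properly handle". Names and scoring are hypothetical.

    def make_replay_device(recorded_run):
        """A device that only reproduces a fixed recorded input->output trace."""
        table = dict(recorded_run)      # the single history it was built from
        def respond(stimulus):
            return table.get(stimulus)  # anything off-script gets no answer
        return respond

    def make_computing_device(rule):
        """A device that actually computes its answer for any input."""
        def respond(stimulus):
            return rule(stimulus)
        return respond

    def counterfactual_score(device, probes):
        """Fraction of probes (rehearsed or not) the device handles at all."""
        handled = sum(1 for p in probes if device(p) is not None)
        return handled / len(probes)

    recorded = [("2+2?", "4")]
    replay = make_replay_device(recorded)
    # eval is used only because this is a toy arithmetic rule
    computer = make_computing_device(lambda q: str(eval(q.rstrip("?"))))

    probes = ["2+2?", "3+5?", "10-7?"]
    print(counterfactual_score(replay, probes))    # 1/3: only the rehearsed input
    print(counterfactual_score(computer, probes))  # 3/3: counterfactuals handled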
 The key point (and the flaw in the paradox) is that replay experiments are
NOT well suited to testing consciousness, unlike testing a piece of
software! If I say something to you and you answer, you will probably not
answer the same way if I ask the same question again (most probably you
will answer, a little annoyed, "I already answered your question!"). The
fact that a system replays the same output on the same input is rather a
proof of NON-thinking (indeed, that is exactly why we usually consider our
computers non-thinking). Consciousness implies the absence of a
preprogrammed output to compare against. So from the start you cannot
deduce that Olympia is conscious from the fact that she (it) reproduced a
precomputed output. You could consider that Olympia together with Karas is
conscious, but in fact it is Karas that is conscious, not Olympia. Olympia
is like a computer that I have programmed to answer selectively certain
advertising messages that I am tired of reading, with a preselected answer,
without my ever knowing that I received them. If I had read them, I would
of course have answered the same thing, but we would nevertheless not
consider my computer conscious. And of course my computer together with my
dead corpse is not able to answer anything other than the programmed mails.
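 The autoresponder analogy can be pictured with another small sketch
(again purely illustrative; the messages, the table and the function names
are invented): a canned lookup table replays preselected answers without
anyone reading the mail, and once the reader is removed, nothing outside
the table is answered at all.

    # Illustrative sketch of the advertising-autoresponder analogy.
    # The canned table stands for the precomputed outputs; the human
    # reader stands for Karas. All names and messages are hypothetical.

    CANNED_REPLIES = {
        "Buy our miracle product!": "Please remove me from your list.",
        "Limited-time offer inside!": "Not interested, thank you.",
    }

    def autoresponder(incoming, reader=None):
        """Reply from the canned table; fall back to the reader only if present."""
        if incoming in CANNED_REPLIES:
            # Replayed, preselected answer: no one actually reads the message.
            return CANNED_REPLIES[incoming]
        if reader is not None:
            # An off-script message: only a genuine reader can handle it.
            return reader(incoming)
        return None  # the autoresponder alone (or with a corpse) says nothing

    # With a live reader, off-script mail still gets an answer:
    print(autoresponder("Can we meet on Tuesday?",
                        reader=lambda m: "Yes, Tuesday works."))
    # Without one, only the preprogrammed mails are answered:
    print(autoresponder("Buy our miracle product!"))   # canned reply
    print(autoresponder("Can we meet on Tuesday?"))    # None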

This is closely linked to an argument I proposed previously to exclude the
duplication of genuinely conscious systems, since consciousness implies a
self-representation in space-time.
Gilles