Re: Implementation

From: Marchal <marchal.domain.name.hidden>
Date: Fri Jul 23 12:01:57 1999

Gilles Henry wrote:

>the computationalist hypothesis may be questionable from the beginning
>since it tries to relate a subjective process (consciousness) to an
>objective one (computation).

I agree.
Remember that comp entails that comp is forever undecidable. It
will always need some sort of act of faith (like most theories).
I guess you agree that common sense also relates a subjective process
(consciousness) to an objective one (a body). Of course it is
questionable too.

>The only objective-like feature is the ability
>to react "like a human" (Turing test) including counterfactuals.

I agree.

> If you forbid any question implying a link between subjective and
>objective notions, it follows immediately that zombies cannot exist.

By acting on a brain with an electric needle we can today provoke
fear or laughter in people. There are obvious (and objective,
statistically well-established; cf. Penfield's book) links between
subjective reports and the 'apparently objective brain'. If you forbid
any question implying a link between subjective and objective notions,
even a modest science like neurophysiology will not progress.
You know, by forbidding all questions there are no more problems at all.
Are you telling us that the 'consciousness problem' is a false problem?

>Concerning Olympia, my suggestion is the following: as consciousness is a
>subjective notion, there is no clear-cut distinction between conscious and
>non-conscious systems.

I hope you mean there is no OBJECTIVE clear-cut distinction between
conscious and non-conscious systems.
From the point of view of the conscious system there is an important
SUBJECTIVE clear-cut distinction between being conscious or not.
If you are conscious, it is ethically wrong for people to bury you,
for example.

>The amount of consciousness we recognize in a system
>will depend on the number of counterfactuals it can properly handle (much
>as someone drunk will not react properly to normal inputs). So the amount
>of consciousness we recognize in Olympia depends on what she is able to
>handle, with or without the Klaras.

Without the Klaras, Olympia cannot handle anything. With the Klaras,
Olympia can handle things like you and me (setting aside her bodily
handicap). And I agree with you: our attribution of consciousness
depends on her ability to manage counterfactuals.
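
To make that difference concrete, here is a toy sketch in Python (my
own illustration, with hypothetical names; nothing like it appears in
Maudlin's paper). A pure replay device only covers the one recorded
history, while a genuine program also answers the counterfactual
inputs:

    # Hypothetical illustration of the counterfactual difference.

    # "Olympia without the Klaras": a fixed replay of one recorded run.
    recorded_run = {"hello": "hi there"}

    def olympia_alone(question):
        # Handles only the recorded history; any other input fails.
        return recorded_run[question]  # KeyError off the recorded path

    # "Olympia with the Klaras": the answer is computed, so
    # counterfactual inputs are handled properly too.
    def olympia_with_klaras(question):
        if question == "hello":
            return "hi there"
        return "you said: " + question

    print(olympia_alone("hello"))               # on the recorded path
    print(olympia_with_klaras("how are you?"))  # counterfactual, handled
    # olympia_alone("how are you?")             # would raise KeyError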

> The key point (and the flaw in the paradox) is that replay experiments are
>NOT well suited to testing consciousness, unlike tests of software! If
>I say something to you, and you answer me, you will probably not answer the
>same way if I ask the same question again.

This is unfair! In our thought experiment we ask the same question after
resetting the machine. That is possible because we use comp (if only to
refute comp; I mean that comp is part of the argument, and remember that
Maudlin's goal is to refute comp).
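
A minimal sketch in Python of that point (purely illustrative, my
notation and not part of the original argument): under comp the machine
is a deterministic program, so resetting it to the same state and
asking the same question must reproduce the same answer.

    # Hypothetical sketch: reset + same input => same output,
    # for any deterministic machine.

    class Machine:
        def __init__(self, initial_state=0):
            self.initial_state = initial_state
            self.state = initial_state

        def reset(self):
            # Restore the exact initial state, as in the thought experiment.
            self.state = self.initial_state

        def answer(self, question):
            reply = "reply %d to %r" % (self.state, question)
            self.state += 1  # answering changes the machine's state
            return reply

    m = Machine()
    first = m.answer("Are you conscious?")
    m.reset()  # reset the machine before re-asking
    second = m.answer("Are you conscious?")
    assert first == second  # determinism: the replay is identical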

>The fact that a system replays the same output with the same input is
>rather a proof of NON-thinking (actually, it is exactly for this reason
>that we usually consider our computers as non-thinking).

... from a non-computationalist point of view.

>Consciousness implies the absence of
>preprogrammed output to be compared with. So from the beginning you cannot
>deduce that Olympia is conscious from the fact that she (it) reproduced a
>precomputed output.

... from a non-computationalist point of view. Or maybe I should include
here Mallah's physical computationalist point of view?

This could be interesting, but I am waiting for his physical criteria for
well-implemented computations. Clearly the correct handling of
counterfactuals is not sufficient, because it leads to Maudlin's trap.

>You could consider that Olympia with the Klaras is conscious, but in
>fact it is the Klaras that are conscious, not Olympia.

Are you not making Searle's error in the Chinese room, where he
attributed consciousness to the executor and not to the executed?
(cf. Mind's I).
Look at the (system) reply given by Hofstadter and Dennett.
But why do you suddenly attribute subjective consciousness to the
objective Klaras? And why to the Klaras, when we are speaking to Olympia?
Maybe you will convince my brain, but not me :-)

>Olympia is like a computer
>that I would have programmed to answer selectively some advertising
>messages that I am bored to read, with a preselected answer, and without me
>knowing that I ever received them. If I had read them, I would of
>course have answered the same thing, but nevertheless we would not consider
>that my computer is conscious. And of course my computer with my dead
>corpse is not able to answer anything other than the programmed mails.

Are you not making the 'Ada Lovelace' error: a computer cannot think
because it does only what we tell it to do? (See Turing's article in
Mind's I for a good reply.)


>That's completely linked to a previous argument that I proposed to exclude
>the duplication of really conscious systems, since consciousness implies a
>self-representation in space-time.

It is clearly linked to your non-computationalist prejudice, I think.

Bruno