Re: Implementation

From: Marchal <>
Date: Fri Jul 23 05:05:20 1999

Chris wrote:

>I've concluded that
>Maudlin's proof of the incompatibility between physical supervenience
>and a computational theory of consciousness is without merit.

Gosh ...

>Maudlin's main error is a subtle one, and the seeds for it can be
>found in this introduction to the concept of physical supervenience,
>on page 408:
> Computational structure supervenes on physical structure, so
> physically identical brains are also computationally identical.
>Indeed, he defines the _supervenience thesis_ thus:
> Two physical systems engaged in precisely the same physical
> activity through a time will support the same modes of
> consciousness (if any) through that time.
>He doesn't provide any evidence to support this conjecture, he
>assumes it as fairly obvious. In the case of human brains, it is
>fairly obvious, and probably true. But in the case of his final
>computational machine, Olympia, it is clearly false, as I will show.
>As a summary: the great lengths that Maudlin goes to in contriving
>Olympia are precisely those which invalidate the supervenience
>thesis, as he has defined it.

I'm not sure I understand you, because that would mean that
Maudlin's argumentation succeeds.

>Maudlin elaborates on his definition, as Hal pointed out in his post:
> If we introduce into the vicinity of the system an entirely inert
> object that has absolutely no causal or physical interaction with
> the system, then the same activity will still support the same
> mode of consciousness.
>But this is clearly incorrect, as a moment's reflection will verify.
>Computation supervenes on physical processes precisely to the extent
>that, to put it simply, the outputs depend on the inputs. As Maudlin
>(and everyone on this group) accepts, correct handling of some set
>of counterfactuals is essential to be able to call an implementation
>an instantiation of a computation (say _that_ three times fast!) So
>this definition of physical supervenience is where the error lies.

OK. You just don't believe in the physical supervenience thesis.
That is great!
But then you will be obliged to explain why you still believe that
consciousness supervenes on the brain's activity (don't you?).
In fact you will have to solve Mallah's implementation problem.
This becomes even clearer when you add:

>In fact, "objects that have absolutely no causal or physical
>interaction" could affect the ability of the mechanism to deal with
>counterfactuals, and so they would change the nature of the
>computational device.

All right. This is coherent with your suspicion of sup-phys.
Like Jacques M Mallah (and like anyone who agrees with both sup-phys
and comp), you make the "inactive physical pieces" play a role in
determining which computation is instantiated:

>To put it simply, as Jacques Mallah has pointed out many times, you
>must consider the entire physical system whenever you are talking
>about exactly what computation is instantiated. The parts of the
>system that don't happen to interact with other parts during a
>particular run are still part of the system, and thus still have an
>effect on which program is actually being run.
>I enjoyed Maudlin's discussion, on pages 413ff, of "the ploy of funny
>instantiation", and other arguments, including Searle's "Chinese
>Room". I agree with his assessments of these arguments as basically
>non-substantive. So it's ironic (to me, anyway) that I've reached
>the conclusion that his argument falls into exactly this same class.

Like Jacques M Mallah. See my preceding "re-implementation" post
(responding to Jacques M Mallah) for my feeling about that.

But do you realise, Chris, that, like Nathanael, you will make
Olympia a zombie! (I know your aversion to the concept.) Just remember
that Olympia talks and behaves just like us.

>In particular, he mentions, on p. 416, a trick that can be played
>when discussing a proposed computational system:
> Someone might suggest that no activity is needed. Let a rock
> sitting on a table be the machine. Now let Si be: sitting on
> the table from 12:00 to 12:01. Let Sj be: sitting on the table
> from 12:01 to 12:02. The machine will effect a transition
> between the two states without undergoing any physical change at
> all. I shall take such tricks to be inadmissible.
>But the trick he makes in defining Olympia is of exactly this
>variety! It doesn't go quite as far, but it is the same in that it
>encodes information about a _particular run of the device_ into the
>definition, or structure, of the device itself.

Any program which is able to remember its activity does something like
that. This is just memorising. Frankly, I don't see the difference.
The rock has no counterfactual abilities. Olympia does.
The only bizarre feature of Olympia is that the memories and the
counterfactuals are implemented in a way that leaves them inactive during a
particular run. If that affected consciousness, I would
prefer to abandon computationalism.
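The point can be put as a toy sketch (my own illustration, with invented names, not Maudlin's actual construction): a replay machine stores the trace of one reference run; on that very run its counterfactual machinery is never exercised, yet the machine still handles counterfactual inputs correctly.

```python
# Toy sketch (hypothetical names, not Maudlin's construction): a "live"
# machine computes its outputs; an Olympia-style replay machine carries
# the stored trace of one reference run plus counterfactual machinery
# that stays inactive as long as the inputs match that run.

def live_machine(inputs):
    """Compute each output the ordinary way."""
    return [x * 2 for x in inputs]

class ReplayMachine:
    def __init__(self, reference_inputs):
        # Memorise the trace of one particular ("reference") run.
        self.reference_inputs = list(reference_inputs)
        self.trace = live_machine(reference_inputs)

    def run(self, inputs):
        outputs = []
        for i, x in enumerate(inputs):
            if i < len(self.reference_inputs) and x == self.reference_inputs[i]:
                # Replay: the stored trace supplies the output; the
                # counterfactual machinery below is never exercised.
                outputs.append(self.trace[i])
            else:
                # Counterfactual machinery: part of the device, but
                # inactive during the reference run itself.
                outputs.append(x * 2)
        return outputs

olympia = ReplayMachine([1, 2, 3])
assert olympia.run([1, 2, 3]) == live_machine([1, 2, 3])  # reference run: pure replay
assert olympia.run([1, 9, 3]) == live_machine([1, 9, 3])  # counterfactual handled
```

On the reference run the two machines exhibit the same input/output activity, while the replay machine's counterfactual-handling code sits idle; whether that idle machinery matters is exactly what is in dispute.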

>It should be obvious how this trick is of the same sort as the rock
>trick above. In the original machine, the order of the troughs had a
>particular significance. He has then redefined the significance of
>the order of the troughs, ad hoc, to have a new significance which
>relates directly to information from the reference run of the device.

Here I would agree, for a purely formal reason, and I see it as a
pedagogical weakness of Maudlin's presentation of his argument.
But it is not too difficult to eliminate this difficulty.
And, at least for me, the fact that the counterfactuals are well managed is
enough. I don't believe in zombies!

>Then he makes "Maudlin's move". He contrives a mechanism such that
>if any of the troughs are not in the initial states that they were in
>at the beginning of the reference run, then an external device will
>intervene and cause the counterfactual to be correctly implemented.
>At this point, then, we must say that the computation is
>instantiated, and that Olympia is now conscious. We must admit, he
>argues, that she is conscious even though none of the counterfactuals
>ever actually occurs. Thus, in the previous example and in this,
>there existed the identical physical activity, but in that case, the
>mechanism was not conscious, and in this case, it is.
>The hole in this argument is rather glaring: as mentioned above,
>whenever considering the physical instantiation of a computation, the
>complete system must be taken into account, and translated as a whole
>into the computer program which is being instantiated. The presence
>or absence of parts which happen to play no role in a particular run
>of the program, nevertheless can, and obviously do, change the nature
>of the program itself. The error is in assuming that identical
>physical activity necessarily means that the same computation is

It is not an error. It is a refutation of sup-phys. It seems you
didn't believe in sup-phys from the start.
If you still believe that consciousness supervenes on the activity
of a "normal brain", you should explain why.
You should definitely work with Jacques M Mallah on his implementation
problem.
>Other examples brought up during discussion on this list have the
>same flaw. For example, Bruno's brain that breaks, and gets fed by
>cosmic rays during the down-time. When he applied Maudlin's move in
>this scenario, he once again assumed a device which had, already
>encoded into it, significant information about a reference run.

I totally agree with this. It is coherent with my abandonment of sup-phys.
Of ANY sup-phys! (Unlike you, it seems.)

>The same argument also applies when Maudlin discusses his "second
>block", which causes the gears to jam if ever the counterfactual is
>encountered. Again, this changes the overall structure of the
>device, and thus changes the program which is instantiated.

But any physical instantiation of a conditional instruction like

  IF M = O THEN RUN <this part of the device>
  ELSE do nothing

does precisely this.
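A minimal, hypothetical rendering of the conditional above (the names `m`, `o`, and `this_part` are my own placeholders): the "do nothing" branch is never executed on a run where M equals O, yet it is still part of the program the device instantiates.

```python
# Hypothetical minimal rendering of the quoted conditional instruction.
# The ELSE branch does nothing, but its mere presence belongs to the
# instantiated program, even on runs where it is never taken.

def device(m, o, this_part):
    if m == o:
        return this_part()  # RUN <this part of the device>
    return None             # ELSE do nothing

assert device(0, 0, lambda: "active") == "active"  # branch exercised
assert device(0, 1, lambda: "active") is None      # branch inactive, yet defined
```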

>My point is that it is meaningless to talk of whether any of these
>instantiations is "conscious". As many have pointed out recently,
>consciousness is a subjective phenomenon. We can study it from the
>outside, just like we can study a computer program, but the actual
>conscious entity experiencing the experiences will not be sensitive
>to whether the machine breaks.

I clearly agree with you if you include the normal brain in 'these
instantiations'. That was the point to be proved.
If we keep "comp" we must abandon sup-phys,
even for 'normal brain activity'. Is that your move? I am not sure.

>And one final note, which I think is the most powerful argument yet:
>to make this conjecture stand, you'd have to show that physical
>processes are incapable of instantiating a computation, ever. I
>don't think Maudlin attempted this. The reason is clear: if you
>agree that consciousness is computational, and you agree that
>physical processes can instantiate computations, then it follows that
>physical processes can instantiate consciousnesses. I don't know how
>Maudlin would address this. Would he say that conscious computations
>are of a high enough order of complexity that they fall apart? Just
>hand-waving about whether a particular contrived instantiation is
>conscious or not cannot lead you to any conclusions about the general
>case.

Maudlin abandons computationalism.
I abandon sup-phys and the whole idea that consciousness is emergent
or secondary with respect to physical laws.
And I show it is quite consistent that the physical laws emerge
from the possible (arithmetical) discourse of consistent machines
inferring their own (relative) consistency.
The role of an 'apparent brain' is not to produce consciousness.
The role of such a brain is only to make it possible for a (conscious)
computation to manifest itself relative to its most probable (measure 1)
computational neighborhood.
Is that too big a leap?

I am still not sure you abandon sup-phys. You cannot abandon it for
Olympia and not for the 'normal brain'. At least not without giving us a
"physical" definition of "correct" implementation (like JMM).
But the end of your post seems to me to go in the direction of a total
abandonment of sup-phys.

So, I ask you again: is Olympia a zombie?
(From your conversation with Steve Price, I am aware of the
high provocation here!) I am just trying to get a better
understanding of your post.
