Bruno, my position is very simple: I keep physical supervenience,
but I disagree with Maudlin's definition. See below.
Marchal wrote:
>
> Chris wrote:
>
> >I've concluded that
> >Maudlin's proof of the incompatibility between physical supervenience
> >and a computational theory of consciousness, is without merit.
>
> Gosh ...
>
> >Maudlin's main error is a subtle one, and the seeds for it can be
> >found in this introduction to the concept of physical supervenience,
> >on page 408:
> >
> > Computational structure supervenes on physical structure, so
> > physically identical brains are also computationally identical.
This conclusion is wrong. The second clause does not follow from
the first.
> >
> >Indeed, he defines the _supervenience thesis_ thus:
> >
> > Two physical systems engaged in precisely the same physical
> > activity through a time will support the same modes of
> > consciousness (if any) through that time.
Olympia makes it clear why this conclusion is wrong. Computation
supervenes on physical structures, but you must take the entire
physical structure into account, including parts that happen to
be inactive during a particular run. Those parts change the
counterfactuals, and thus change the program.
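To make that concrete, here is a toy sketch in Python (my own, not
Maudlin's): two transition tables that agree on every state actually
visited during the reference run, but differ on the unvisited,
counterfactual states. Same physical activity, different programs.

    # Two machines whose activity is identical on the reference run.
    table_a = {0: 1, 1: 2, 2: 0, 3: 3}   # full machine: handles state 3
    table_b = {0: 1, 1: 2, 2: 0}         # the state-3 machinery removed

    def run(table, state, steps):
        # Iterate the transition table; a missing state jams the device.
        for _ in range(steps):
            state = table[state]
        return state

    # The reference run never enters state 3, so the two devices'
    # activity is indistinguishable:
    assert run(table_a, 0, 6) == run(table_b, 0, 6)
    # But only table_a handles the counterfactual; run(table_b, 3, 1)
    # would raise KeyError. They are different programs.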
> >
> >He doesn't provide any evidence to support this conjecture; he
> >takes it as fairly obvious. In the case of human brains, it is
> >fairly obvious, and probably true. But in the case of his final
> >computational machine, Olympia, it is clearly false, as I will show.
> >As a summary: the great lengths that Maudlin goes to in contriving
> >Olympia are precisely those which invalidate the supervenience
> >thesis, as he has defined it.
>
> I'm not sure I understand you, because it would mean that
> Maudlin's argumentation succeeds.
Note the "as he has defined it".
> >Maudlin elaborates on his definition, as Hal pointed out in his post:
> >
> > If we introduce into the vicinity of the system an entirely inert
> > object that has absolutely no causal or physical interaction with
> > the system, then the same activity will still support the same
> > mode of consciousness.
> >
> >But this is clearly incorrect, as a moment's reflection will verify.
> >Computation supervenes on physical processes precisely to the extent
> >that, to put it simply, the outputs depend on the inputs. As Maudlin
> >(and everyone on this group) accepts, correct handling of some set
> >of counterfactuals is essential to be able to call an implementation
> >an instantiation of a computation (say _that_ three times fast!) So
> >this definition of physical supervenience is where the error lies.
>
> OK. You just don't believe in the physical supervenience thesis.
> That is great!
Note "this definition".
> But you will be obliged to explain why you still believe that
> consciousness supervenes on the brain's activity (don't you?).
> In fact you will have to solve Mallah's implementation problem.
> This is even clearer when you add:
>
> >In fact, "objects that have absolutely no causal or physical
> >interaction" could affect the ability of the mechanism to deal with
> >counterfactuals, and so they would change the nature of the
> >computational device.
>
> All right. This is coherent with your suspicion of sup-phys.
> Like Jacques M Mallah (and like anyone who agrees with both sup-phys
> and comp), you give the "inactive physical pieces" a role in
> consciousness.
>
> >To put it simply, as Jacques Mallah has pointed out many times, you
> >must consider the entire physical system whenever you are talking
> >about exactly what computation is instantiated. The parts of the
> >system that don't happen to interact with other parts during a
> >particular run are still part of the system, and thus still have an
> >effect on which program is actually being run.
> >
> >I enjoyed Maudlin's discussion, on pages 413ff, of "the ploy of funny
> >instantiation", and other arguments, including Searle's "Chinese
> >Room". I agree with his assessments of these arguments as basically
> >non-substantive. So it's ironic (to me, anyway) that I've reached
> >the conclusion that his argument falls into exactly this same class.
>
> Like Jacques M Mallah. See my preceding "re-implementation" post
> (responding to Jacques M Mallah) for my feeling about that.
Okay, I'll look at it.
>
> But do you realise, Chris, that, like Nathanael, you will make
> Olympia a zombie! (I know your aversion to the concept). Just remember
> that Olympia talks and behaves just like us.
I'll tell you exactly what I think of that. First let's be clear
about what we're talking about. Olympia, without the second set
of blocks, and _including all of the supporting Klaras_, is conscious,
because the physical system _as a whole_ is counterfactually correct.
Maudlin is not clear about whether he intends the Klaras to be part of
Olympia in that construction. (For the record: the Klaras are
responsible for taking over the computation if the inputs differ from
the reference run).
Now, if you add the second set of blocks, or if you don't include
the Klaras, then Olympia is just a replay device, not quite a
zombie. She's not a zombie because she _doesn't_ talk and behave
like us -- I wouldn't be able to ask her questions, because she
would instantly die! She does not implement anything other than
a replay of the reference run.
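To make the "instantly die" point concrete, here is a minimal sketch
(mine; the dialogue is invented) of Olympia without the Klaras:

    # A replay device: it can only reproduce the reference run.
    reference_run = {"How are you?": "Fine, thanks."}

    def olympia_replay(question):
        if question not in reference_run:
            raise RuntimeError("gears jam: no Klaras to take over")
        return reference_run[question]

    olympia_replay("How are you?")      # replays the recorded answer
    # olympia_replay("What is 2 + 2?")  # jams: she cannot be questioned

With the Klaras attached, the divergent inputs get handled and she
computes; without them, she is only a recording.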
>
> >In particular, he mentions, on p. 416, a trick that can be played
> >when discussing a proposed computational system:
> >
> > Someone might suggest that no activity is needed. Let a rock
> > sitting on a table be the machine. Now let Si be: sitting on
> > the table from 12:00 to 12:01. Let Sj be: sitting on the table
> > from 12:01 to 12:02. The machine will effect a transition
> > between the two states without undergoing any physical change at
> > all. I shall take such tricks to be inadmissible.
> >
> >But the trick he makes in defining Olympia is of exactly this
> >variety! It doesn't go quite as far, but it is the same in that it
> >encodes information about a _particular run of the device_ into the
> >definition, or structure, of the device itself.
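In code, the rock trick amounts to nothing more than this (a toy of my
own, following Maudlin's example):

    # The "machine states" are just labels for time intervals read off
    # one particular run; the rock itself does nothing at all.
    S_i = ("12:00", "12:01")   # sitting on the table from 12:00 to 12:01
    S_j = ("12:01", "12:02")   # sitting on the table from 12:01 to 12:02
    # All of the "computation" lives in this post-hoc labelling, which
    # encodes information about the particular run, not in the rock.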
>
> Any program which is able to remember its activity does something like
> that. This is just memorising. Frankly I don't see the difference.
> The rock has no counterfactual abilities. Olympia does.
Olympia does only when you add the supporting structure. The rock
does too, when you add the external computer.
> The only bizarre feature of Olympia is that the memories and the
> counterfactuals are implemented in a way that keeps them inactive
> during a particular run. If that affected consciousness, I would
> prefer to abandon computationalism.
>
> >It should be obvious how this trick is of the same sort as the rock
> >trick above. In the original machine, the order of the troughs had a
> >particular significance. He has then redefined the significance of
> >the order of the troughs, ad hoc, to have a new significance which
> >relates directly to information from the reference run of the device.
>
> Here I would agree for a purely formal reason, and I see it as a
> pedagogical weakness of Maudlin's presentation of his argument.
> But it is not too difficult to eliminate this difficulty.
I don't think I've read any thought experiments that eliminate it,
in discussions so far. I'll look them over again. How would you
propose to eliminate it?
> And, at least for me, the fact that the counterfactuals will be well
> managed is enough. I don't believe in zombies!
>
> snip ...
>
>
> >The same argument also applies when Maudlin discusses his "second
> >block", which causes the gears to jam if ever the counterfactual is
> >encountered. Again, this changes the overall structure of the
> >device, and thus changes the program which is instantiated.
>
> But any physical instantiation of a conditional instruction like
>
> IF M = O THEN RUN <this part of the device>
> ELSE do nothing
So, Bruno, it seems that you do agree that a physical structure
can run a program! Congratulations on seeing your error!
> does precisely this.
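Agreed on that much. Rendered as code (M and O are Bruno's names; the
rest is my own sketch):

    def step(M, O, run_this_part, state):
        # On the reference run M never equals O, so this branch is
        # physically present but never active.
        if M == O:
            return run_this_part(state)
        return state                      # ELSE do nothing

And the branch being inactive on a given run doesn't remove it from
the program.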
>
> >My point is that it is meaningless to talk of whether any of these
> >instantiations is "conscious". As many have pointed out recently,
> >consciousness is a subjective phenomenon. We can study it from the
> >outside, just like we can study a computer program, but the actual
> >conscious entity experiencing the experiences will not be sensitive
> >to whether the machine breaks.
>
> I clearly agree with you if you include the normal brain in 'these
> instantiations'. That was the point to be proved.
> If we keep "comp" we must abandon sup-phys.
> Even on 'normal brain activity'. Is that your move? I am not sure.
>
> >And one final note, which I think is the most powerful argument yet:
> >to make this conjecture stand, you'd have to show that physical
> >processes are incapable of instantiating a computation, ever. I
> >don't think Maudlin attempted this. The reason is clear: if you
> >agree that consciousness is computational, and you agree that
> >physical processes can instantiate computations, then it follows that
> >physical processes can instantiate consciousnesses. I don't know how
> >Maudlin would address this. Would he say that conscious computations
> >are of a high enough order of complexity that they fall apart? Just
> >hand-waving about whether a particular contrived instantiation is
> >conscious or not cannot lead you to any conclusions about the general
> >case.
>
> Maudlin abandons computationalism.
> I abandon sup-phys and the whole idea that consciousness is emergent
> or secondary with respect to physical laws.
I do not. But the truly beautiful, weird, and elegant thing is
that physical laws are, at least partly, a result of computational
indeterminism. But that doesn't preclude one from considering
Tegmark's mathematical models in order to gain an idea of the
measure associated with a conscious experience. It also doesn't
say that it's necessary, either. In fact, I'm leaning more and
more your way in my thinking about measure. I'm very anxious
to read your thesis.
> And I show it is quite consistent that the physical laws emerge
> from the possible (arithmetical) discourse of consistent machines
> inferring their own (relative) consistency.
> The role of an 'apparent brain' is not to produce consciousness.
> The role of such a brain is only to make it possible for a (conscious)
> computation to manifest itself relative to its most probable (measure 1)
> computational neighborhood.
> Is that too big a leap?
>
> I am still not sure you abandon sup-phys. You cannot abandon it for
> Olympia and not for the 'normal brain'. At least not without giving us a
> "physical" definition of "correct" implementation (like JMM).
> But the end of your post seems to me to be going in the direction of a
> total abandonment of sup-phys.
>
> So, I ask you again, is Olympia a zombie?
> (From your conversation with Steve Price, I am aware of the
> high provocation here!) I am just trying to get a better
> understanding of your post.
>
> Bruno
--
Chris Maloney
http://www.chrismaloney.com
"Knowledge is good"
-- Emil Faber
Received on Fri Jul 23 1999 - 05:04:48 PDT