Re: Implementation

From: Christopher Maloney <dude.domain.name.hidden>
Date: Thu, 22 Jul 1999 21:56:01 -0400

I've finally had time to read Maudlin's paper, and I've gradually
been catching up on your discussions on the Implementation thread,
and I'd like to add my opinions to the mix. I've concluded that
Maudlin's proof of the incompatibility between physical supervenience
and a computational theory of consciousness is without merit. I'll
try to show where I think he made the errors in his argument.
Hopefully soon I'll have time to apply this same analysis to some of
the recent posts.

First I want to thank all of the participants for a lively
discussion. I don't think it gets said often enough here: thank you
very much for taking the time to write such well-crafted posts. They
are a joy to read.

I found Maudlin's paper to be very well written, and powerfully
argued.

Maudlin's main error is a subtle one, and the seeds for it can be
found in this introduction to the concept of physical supervenience,
on page 408:

    Computational structure supervenes on physical structure, so
    physically identical brains are also computationally identical.

Indeed, he defines the _supervenience thesis_ thus:

    Two physical systems engaged in precisely the same physical
    activity through a time will support the same modes of
    consciousness (if any) through that time.

He doesn't provide any evidence to support this conjecture; he
assumes it as fairly obvious. In the case of human brains, it is
fairly obvious, and probably true. But in the case of his final
computational machine, Olympia, it is clearly false, as I will show.
As a summary: the great lengths that Maudlin goes to in contriving
Olympia are precisely those which invalidate the supervenience
thesis, as he has defined it.

Maudlin elaborates on his definition, as Hal pointed out in his post:

    If we introduce into the vicinity of the system an entirely inert
    object that has absolutely no causal or physical interaction with
    the system, then the same activity will still support the same
    mode of consciousness.

But this is clearly incorrect, as a moment's reflection will verify.
Computation supervenes on physical processes precisely to the extent
that, to put it simply, the outputs depend on the inputs. As Maudlin
(and everyone on this group) accepts, correct handling of some set
of counterfactuals is essential to be able to call an implementation
an instantiation of a computation (say _that_ three times fast!) So
this definition of physical supervenience is where the error lies.
In fact, "objects that have absolutely no causal or physical
interaction" could affect the ability of the mechanism to deal with
counterfactuals, and so they would change the nature of the
computational device.

To put it simply, as Jacques Mallah has pointed out many times, you
must consider the entire physical system whenever you are talking
about exactly what computation is instantiated. The parts of the
system that don't happen to interact with other parts during a
particular run are still part of the system, and thus still have an
effect on which program is actually being run.

I enjoyed Maudlin's discussion, on pages 413ff, of "the ploy of funny
instantiation", and other arguments, including Searle's "Chinese
Room". I agree with his assessments of these arguments as basically
non-substantive. So it's ironic (to me, anyway) that I've reached
the conclusion that his argument falls into exactly this same class.

In particular, he mentions, on p. 416, a trick that can be played
when discussing a proposed computational system:

    Someone might suggest that no activity is needed. Let a rock
    sitting on a table be the machine. Now let Si be: sitting on
    the table from 12:00 to 12:01. Let Sj be: sitting on the table
    from 12:01 to 12:02. The machine will effect a transition
    between the two states without undergoing any physical change at
    all. I shall take such tricks to be inadmissible.

But the trick he makes in defining Olympia is of exactly this
variety! It doesn't go quite as far, but it is the same in that it
encodes information about a _particular run of the device_ into the
definition, or structure, of the device itself.

For those who haven't read the article, here's a brief recap of the
definition of Olympia. Olympia is a standard Turing machine with a
program and a tape. In place of the tape, he uses troughs of water,
which can be either empty or full. Now, the basic machine has an
infinite line of such troughs, and the mechanism will jump back and
forth among them, filling some, draining others, and leaving others
alone, in an order determined by the program.
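
To make this concrete, here is a toy sketch in Python of a machine of
this sort (my own illustration, not Maudlin's construction): the tape
is a collection of troughs, True for full and False for empty, and the
machine records which troughs it visits and what it does to them.

    # A toy Turing machine over a line of troughs (True = full, False =
    # empty). The program maps (state, trough_is_full) to
    # (new_trough_value, head_move, next_state).
    def run(program, troughs, state, head=0, max_steps=1000):
        trace = []                   # each visit: (trough, before, after)
        while state != 'halt' and max_steps > 0:
            full = troughs.get(head, False)
            new_value, move, state = program[(state, full)]
            trace.append((head, full, new_value))
            troughs[head] = new_value
            head += move             # +1 or -1 along the line of troughs
            max_steps -= 1
        return troughs, trace

    # A toy program: fill troughs rightward until one is already full.
    program = {
        ('fill', False): (True, +1, 'fill'),
        ('fill', True):  (True, +1, 'halt'),
    }
    final, trace = run(program, {3: True}, 'fill')
    print(trace)  # [(0, False, True), (1, False, True),
                  #  (2, False, True), (3, True, True)]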

Now, in his definition of Olympia, he employs the following trick:
first, record exactly what the mechanism does on a particular run,
when presumably the conscious entity was experiencing something (say,
a toothache). I'll call this the "reference run" (not a term used by
Maudlin). Now, reorder all of the troughs such that they are in the
same order as they were visited during the reference run. If the
mechanism visited the same trough twice, then that trough will be
split into two troughs connected by a pipe.
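
A hedged sketch of this reordering (the function and names are mine):
given the trace of the reference run, lay out one trough per step, in
visit order, and join repeated visits to the same original trough with
a pipe so the copies always agree.

    # Linearize the troughs according to a reference run. Each step of
    # the run gets its own trough; copies of a re-visited trough are
    # joined by a pipe.
    def linearize(trace):
        last_copy = {}   # original trough index -> most recent copy id
        pipes = []       # (earlier copy, later copy) pairs
        layout = []      # new left-to-right order of original indices
        for orig, _before, _after in trace:
            copy_id = len(layout)
            layout.append(orig)
            if orig in last_copy:
                pipes.append((last_copy[orig], copy_id))
            last_copy[orig] = copy_id
        return layout, pipes

    # A run that visits troughs 0, 1, then 0 again:
    print(linearize([(0, False, True), (1, False, True), (0, True, False)]))
    # ([0, 1, 0], [(0, 2)])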

It should be obvious how this trick is of the same sort as the rock
trick above. In the original machine, the order of the troughs had a
particular significance. He then redefines the order of the troughs,
ad hoc, to give it a new significance, one which relates directly to
information from the reference run of the device.

He then goes further, to invent a mechanism which dumbly fills,
empties, or leaves each trough alone. Again, this mechanism is set
up according to the results of the reference run. In effect, he has
created a simple replay device, which reproduces the reference run.
He then states that at this point, a computationalist (and most of
us) would say that the machine is no longer implementing a
computation, and could not be said to be conscious, because it
doesn't handle counterfactuals at all.
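
The replay device, sketched the same way (again, my own illustration):
it never consults the troughs at all, so feeding it any counterfactual
input changes nothing about its behavior.

    # The armature as a dumb replay device: it ignores what the troughs
    # actually contain and just re-performs the reference run's actions.
    def replay(trace):
        troughs = {}
        for trough, _before, after in trace:
            troughs[trough] = after   # fill, drain, or leave -- blindly
        return troughs

    # Whatever the troughs held at the start, the outcome is the same:
    print(replay([(0, False, True), (1, False, True), (0, True, False)]))
    # {0: False, 1: True}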

Then he makes "Maudlin's move". He contrives a mechanism such that
if any of the troughs are not in the initial states that they were in
at the beginning of the reference run, then an external device will
intervene and cause the counterfactual to be correctly implemented.
At this point, then, we must say that the computation is
instantiated, and that Olympia is now conscious. We must admit, he
argues, that she is conscious even though none of the counterfactuals
ever actually occurs. Thus, the previous example and this one exhibit
identical physical activity, but in the former case the mechanism was
not conscious, while in this case it is.
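
Schematically (a sketch reusing the run() and replay() helpers above;
the single up-front check is my simplification of Maudlin's per-trough
machinery): a watchdog compares the starting trough levels against
those of the reference run, replays blindly on a match, and otherwise
lets the full machinery genuinely compute.

    # "Maudlin's move": on the reference input the dumb armature replays;
    # on any other input the extra machinery computes the counterfactual.
    def olympia(initial, reference_initial, trace, program, start_state):
        if initial == reference_initial:
            return replay(trace)      # blind replay, troughs never read
        return run(program, dict(initial), start_state)[0]

Note that on the reference run itself the second branch never fires,
so the physical activity is identical with or without it; yet removing
it plainly changes which program the device as a whole implements.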

The hole in this argument is rather glaring: as mentioned above,
whenever considering the physical instantiation of a computation, the
complete system must be taken into account, and translated as a whole
into the computer program which is being instantiated. The presence
or absence of parts which happen to play no role in a particular run
of the program, nevertheless can, and obviously do, change the nature
of the program itself. The error is in assuming that identical
physical activity necessarily means that the same computation is
instantiated.

Note also that his thought experiment can only be brought to this
point by contriving some mechanism which encodes the information
about a particular reference run. It really is not a large step from
his Olympia to the rock, where we can define the states of the rock
in such a way that it implements a reference run of a consciousness.
We could then build a whole computer around this rock, which would be
responsible for implementing the computation if the inputs happened
to be anything other than those encountered during the reference run.

Other examples brought up during discussion on this list have the
same flaw. For example, Bruno's brain that breaks and gets fed by
cosmic rays during the downtime. When he applied Maudlin's move in
this scenario, he once again assumed a device which already had,
encoded into it, significant information about a reference run.

The same argument also applies when Maudlin discusses his "second
block", which causes the gears to jam if ever the counterfactual is
encountered. Again, this changes the overall structure of the
device, and thus changes the program which is instantiated.

Toward the end of Maudlin's paper, he introduces a term that I like,
"dispositional":

    The more dispositional a property appears, the easier it is to
    contend that it supervenes not only on the activities but also on
    the dispositions of a system. And, since the dispositions of
    Olympia are changed by the presence or absence of the second
    block, Olympia may not threaten a computational account of these
    other properties.

"These other properties" are, for example, intelligence and
intentionality, rather than experience. In this sentence it seems
clear that he recognizes that by changing the "disposition" of the
system, he changes the program, so it's mystifying why he would argue
that it has no effect on conscious experience.

I also wanted to add a thought or two about the concept of "replays".
Olympia is contrived such that her default behavior will be a replay
of the reference run. Then, with the addition of the "second blocks"
(which cause her gears to jam if there are any differences between
her input and the reference run) she is turned explicitly into a
replay device. But note that there is a continuous spectrum of
possibilities between a program which can handle, say, any set of
inputs at all, and one which "breaks" if any but one precise input is
encountered. There could be programs which can handle some subset of
counterfactuals, but break on others.
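
As a toy illustration of that spectrum (my own, again reusing run()
from the sketch above): a guard can compute properly on some set of
inputs and jam the gears on the rest.

    # A machine that handles only a subset of counterfactuals: inputs in
    # `handled` are computed properly; anything else jams the gears.
    def partial_machine(initial, handled, program, start_state):
        if initial not in handled:
            raise RuntimeError("gears jammed")
        return run(program, dict(initial), start_state)[0]

Varying `handled` from every possible input down to the single
reference input sweeps continuously from a full computer to a pure
replay device.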

My point is that it is meaningless to talk of whether any of these
instantiations is "conscious". As many have pointed out recently,
consciousness is a subjective phenomenon. We can study it from the
outside, just like we can study a computer program, but the actual
conscious entity experiencing the experiences will not be sensitive
to whether the machine would break on inputs that never actually occur.

And one final note, which I think is the most powerful argument yet:
to make this conjecture stand, you'd have to show that physical
processes are incapable of instantiating a computation, ever. I
don't think Maudlin attempted this. The reason is clear: if you
agree that consciousness is computational, and you agree that
physical processes can instantiate computations, then it follows that
physical processes can instantiate consciousnesses. I don't know how
Maudlin would address this. Would he say that conscious computations
are of a high enough order of complexity that they fall apart? Just
hand-waving about whether a particular contrived instantiation is
conscious or not cannot lead you to any conclusions about the general
case.



-- 
Chris Maloney
http://www.chrismaloney.com
"Knowledge is good"
-- Emil Faber