Re: implementation

From: <hal.domain.name.hidden>
Date: Fri, 16 Jul 1999 11:56:00 -0700

Jacques M Mallah, <jqm1584.domain.name.hidden>, writes:
> On 14 xxx -1, Marchal wrote:
> > Suppose that from time t1 to time t2, every circuit of the UTM is broken.
> > Before t1 and *after* t2, for the sake of the argument, the circuits are
> > working (some demon fixes the UTM at t2).
> > Suppose also that between t1 and t2 a bunch of cosmic rays accidentally
> > compensates for the faults in the UTM's working.
> >
> > Do you think there is consciousness between t1 and t2? (No, I suppose.)
> > Indeed, I guess that you will say that between t1 and t2 the execution
> > was not well-implemented (although here "not well executed" seems better).
>
> OK; I would say the computation was not implemented.

Let me restate Marchal's example slightly, to avoid confusing talk of
"demons".

Suppose you have a Turing machine running a program which renders it
conscious. However, the machine has a weak point in its design, which
causes occasional but rare intermittent failures. During the failed
state, the machine executes randomly for a period of time, then the
failing part spontaneously corrects itself and the machine operates
correctly from then on.

Marchal then asks us to suppose that on some particular run of this
machine, the failure occurs, but during that time the random operation
of the machine "just happens" to match exactly what the machine was
supposed to be doing anyway.

I agree that during this time interval the machine cannot be considered
to handle counterfactuals correctly, hence the computation was not
implemented by that definition. Had the inputs to the machine been
different, since the machine was behaving randomly there is no reason
to expect that it would have "happened to" perform correctly with
those inputs.
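
(To make the counterfactual criterion concrete, here is a minimal
sketch in Python. The function names are mine and purely illustrative;
the point is only that "implemented" quantifies over inputs the
machine never actually received.)

    def implements(machine, program, possible_inputs):
        # Counterfactual test: the machine must agree with the program
        # on every input it *could* have received, not just the input
        # it actually got.
        return all(machine(x) == program(x) for x in possible_inputs)

    # A machine locked into one lucky random trajectory agrees with
    # the program only on the actual input, so the test fails for it.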

(I would comment that it is astronomically unlikely that such an
event could occur for a long enough period of time to be perceptible.
Our brains are extremely complex, with billions of neurons that fire
thousands of times a second. If the shortest period of time we can
perceive is maybe 1/20 of a second, that is still many billions of neural
firings. The chance that we would get that many firings occurring at
random but with exactly the same timing that would have occurred naturally
is astronomically small. Any thought experiment which relies on such
an event should be mistrusted, because it is so far from our experience
and common sense. In my opinion Maudlin's paradox is more relevant.)
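
(To put rough numbers on this, here is a back-of-the-envelope
calculation in Python using the figures above. The 1/2 per-firing
match probability is my own deliberately generous assumption; any
realistic figure would be far smaller.)

    import math

    neurons  = 10e9        # "billions of neurons" (assumed: 10 billion)
    rate_hz  = 1000.0      # "fire thousands of times a second"
    window_s = 1.0 / 20.0  # shortest perceivable interval
    events   = neurons * rate_hz * window_s   # ~5e11 firings

    p_match = 0.5          # assumed chance one random firing is "right"
    log10_p = events * math.log10(p_match)
    print("%.1e firings, P(all match) ~ 10^%.1e" % (events, log10_p))
    # -> 5.0e+11 firings, P(all match) ~ 10^-1.5e+11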

> > But remember that the UTM was well-implementing the brain.
> >
> > And now I make the *Maudlin's move*:
>
> And I'll annul your alliterative argument.
>
> > Add a physically inactive piece such that, in the case of a change
> > in the neighborhood of the UTM, that physically inactive piece becomes
> > active and acts as an automated demon, instantaneously fixing the UTM
> > in the accidentally correct state left by the lucky cosmic rays.
> > (That this is possible is shown in Maudlin's paper.)
>
> I take it that you mean that if the machine departs from its
> normal sequence, a switch will be triggered to activate a backup machine.
> I think Maudlin's example used a false implementation, and I'm not
> convinced that it could be done that way otherwise, but that's not the
> quickest route to debunk the argument so let's continue.

I'm not sure I understand exactly what Marchal is proposing here.
He wants an automated demon to fix the UTM as soon as there is "a change
in the neighborhood of the UTM". What is this change which triggers
the fix?

I don't believe he refers to the failure of the machine. If his demon
fixed the machine as soon as it broke, it would simply not be broken
and would behave properly during the interval in question, so would
handle counterfactuals correctly.

I think what he wants is that the demon watches the machine and if the
machine ever breaks AND behaves otherwise than its program and design
dictate, then it will be fixed. So if the machine
randomly happens to follow the correct state transitions even while
broken, then the demon will not be activated and so the machine will
not need to get fixed.

In the case of the example above, where the machine follows the correct
transitions "by accident" and then spontaneously fixes itself, the demon
is never active, and so the machine's actual activity is the same as
in the original state.

Assuming this is the intention, we can dispense with the demon and
replace it with a repair circuit. However, if we consider how this
circuit would
work, I want to note one thing. How can the repair circuit know when the
machine is behaving incorrectly, as defined above? Incorrect behavior is
defined as departure from what the machine is actually supposed to do.
But the only way to detect that is to calculate, either in advance or
while the program is running, what the machine is supposed to be doing.
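
One way to picture this is a monitor that steps a reference copy of
the program in lockstep with the machine it watches; only that shadow
run lets it recognize a departure. A minimal sketch in Python, with
hypothetical names throughout:

    def correct_step(state):
        # Stand-in for the machine's intended transition function.
        return (state + 1) % 100

    def run_with_watchdog(actual_step, state, n_steps):
        actual = reference = state
        for _ in range(n_steps):
            actual = actual_step(actual)         # possibly faulty machine
            reference = correct_step(reference)  # the parallel computation
            if actual != reference:              # departure detected:
                actual = reference               # repair the machine
        return actual

    # If the broken machine happens to follow the correct transitions,
    # the repair branch never fires -- but the shadow computation still
    # had to run for "incorrect behavior" to be well-defined at all.
    print(run_with_watchdog(correct_step, 0, 10))   # -> 10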

This means that we either have to be running a replay of a previously
instantiated computation, or we have to be running a second computation
in parallel with this one. In either case we are not dealing with a
unique calculation of which this UTM is the only instantiation. The
consciousness in question WILL BE or HAS BEEN
instantiated. If "I" am that consciousness, there is no question in
this thought experiment about whether I will ever have the experience of
thinking the thoughts in question. I do, I must. The only question is
whether my thoughts are being instantiated in this particular run of this
particular machine. I, as a thinker, could not tell the difference; it's
not like it would make the difference between my existence and my lack
of existence. It would only affect how many times I am instantiated,
and as I have argued before, it is questionable to me whether this has
any subjective or objective effects.

> > Now I tell you that there will be no change between t1 and t2 in the
> > neighborhood of the UTM, so that the inactive demon remains inactive.
> > Note that the whole setting is counterfactually correct and should,
> > I think, be considered well-implemented, because the correct
> > counterfactualness concerns the alleged Turing-emulability of the
> > correct implementation (by construction) of the brain by the UTM.
>
> OK, so you think the computation is implemented in this case.

I believe, from what Jacques says below, that he also believes that
the computation is implemented in this case, and that the addition of
the repair circuit (or demon, as Marchal has it) would make the
system conscious.


> > So your "correct implementation" whatever it is, as far as it is
> > Turing-emulable,
> > will fall in the Maudlin's trap, I think.
> >
> > MORAL: You cannot associate consciousness with the physical activity
> > supporting a computation. (i.e. NOT SUP-PHYS)
>
> I think the key word here is 'activity'.
> I don't go around using the term 'SUP-PHYS', but it seems that
> what you mean by it is not what I mean by physical computationalism.
> Whether or not a computation is implemented depends on the laws of
> physics and the initial conditions. Equivalently, it depends on the laws
> of physics and the history of the system over time.
> It is clear that in the two cases you described, the initial
> conditions are *different*, and the history of the system is different.
> In the first example there was no 'demon', while in the second example
> there is a 'demon' and it moves at the constant velocity of the system.
> That's why the argument is a straw man. Maybe (?) some people
> once said that only objects that move in certain ways affect whether a
> computation is implemented and Maudlin countered that, but I never said
> anything like that. For me a stationary object is still part of the
> system and it's perfectly OK if the computation depends on the position
> and properties of that object.

I think this is a logical answer: the system with a repair circuit
would have different properties from a system without one, even if the
repair circuit is never activated. Hence we can say that the system is
conscious with the repair circuit and unconscious without it, and not
contradict ourselves.

Maudlin also accepts that this answer is possible, but he does offer some
critiques.

To introduce them, let us sharpen the situation somewhat: we know that
the repair circuit is never activated. Let us suppose that it is a rather
complicated circuit involving a robot which halts the TM, takes it
apart, repairs the machine, puts it back together, and allows it to
continue to run.

Now Jacques has agreed, I think, that the existence of consciousness
between t1 and t2 depends on the existence of this robot. If the robot
exists and can function properly, then the TM is conscious. If the robot
does not exist or cannot function, then the TM is unconscious.

The problem is that we can make the presence of consciousness in this
machine depend on some far-removed events. For example, suppose that in
order to fix the machine the robot would have to order some new part.
Now suppose that, on the other side of the world, there is a
catastrophe which makes the new part unavailable. The robot would be
unable to complete its repairs, hence the machine becomes unconscious.

Again, remember that the robot is never actually activated during this
time interval. So intuitively it is hard to understand why an event
happening on the other side of the world would affect the consciousness
here and now of the machine. But that is the position into which we
appear to be forced if we take this path.

Here is what Maudlin says:

"The modern picture of brain function rests primarily on the notion
of neural activity. The essential structure of mentation seems to be
founded in patterns of neural firings. Because those firings can be
analyzed as carrying information, the brain has come to be considered
as an information processor. So let us suppose that some time in
the future the electro-encephalograph is so perfected that it is
capable of recording the firing of every single neuron in the brain.
Suppose that researchers take two different surveys of a brain which
match exactly: the very same neurons fire at exactly the same rate
and in exactly the same pattern through a given period. They infer
(as surely they should!) that the brain supported the same occurrent
conscious state through the two periods. But the computationalist now
must raise a doubt. Perhaps some synaptic connection has been severed
in the interim. Not a synaptic connection of any of the neurons which
actually fired during either period, or which was in any way involved
in the activity recorded by the encephalograph. Still, such a change in
connection will affect the counterfactuals true of the brain, and so can
affect the subjective state of awareness. Indeed, the computationalist
will have to maintain that perhaps the person in question was conscious
through the first episode but not conscious at all through the second.
I admit to a great deal of mystification about the connection between mind
and body, but I see no reason to endorse such possibilities that directly
contradict all that we do know about brain process and experience.

"Whether the reason is enshrined in the supervenience thesis or not, our
general picture of the relation between physical and mental reality firmly
grounds the intuition that Olympia's [Olympia is Maudlin's name for his
pseudo-conscious TM - Hal] experience cannot be changed by the presence
or absence of the second set of blocks [these are blocks which interrupt
the operation of an extra circuit, like the repair circuit above - Hal].
These intuitions are not sacrosanct, but the computationalist especially
abandons them at his own risk. For similar intuitions are often appealed
to in defending the appropriateness of computational analogies in the
first place. One first step in arguments aimed at inducing assent to
the possibility of computers that can think, or feel, or intend, is
to imagine that some sort of prosthetic human neuron made of silicon
has been invented. We are then to imagine slowly replacing some poor
sap's brain bit by bit until at last we have a silicon brain that, our
intuitions should inform us, can do all of the mental and intensional
work of the original.

"This is as yet a far cry from showing that anything has mental properties
in virtue of its computational structure, but it is supposed to break
down parochial species-chauvinistic views about there being any deep
connection between mentality and organic chemistry. But the thought
experiment rests on a tacit appeal to supervenience. How could it
matter, one asks, whether the electrical impulses are carried by neurons
or by doped silicon? The implication is that mentality supervenes
only on the pattern of electrical or electrochemical activity. If the
computationalist now must assert that the presence or absence of a piece
of metal hanging untouched and inert in the midst of silent, frozen
machinery can make the difference between being conscious and not, who
knows what enormous changes in psychical state may result from replacing
axons and dendrites with little copper wires? Should the computationalist
reject the extremely general intuitions at play in assessing Olympia's
case, no means of judging the plausibility or implausibility of any
theory of mind seems to remain."

Or, in my example, if a factory fire in India can affect whether a system
here and now is conscious, you are moving into a realm where intuitions
about consciousness become highly suspect.

Hal
Received on Fri Jul 16 1999 - 12:02:34 PDT
