Re: implementation

From: Jacques M Mallah <jqm1584.domain.name.hidden>
Date: Sun, 18 Jul 1999 19:02:36 -0400

On Fri, 16 Jul 1999 hal.domain.name.hidden wrote:
> Let me restate Marchal's example slightly, to avoid confusing talk of
> "demons".
>
> Suppose you have a Turing machine running a program which renders it
> conscious. However, the machine has a weak point in its design, which
> causes occasional but rare intermittent failures. During the failed
> state, the machine executes randomly for a period of time, then the
> failing part spontaneously corrects itself and the machine operates
> correctly from then on.
>
> Marchal then asks us to suppose that on some particular run of this
> machine, the failure occurs, but during that time the random operation
> of the machine "just happens" to match exactly what the machine was
> supposed to be doing anyway.
>
> I agree that during this time interval the machine cannot be considered
> to handle counterfactuals correctly, hence the computation was not
> implemented by that definition. Had the inputs to the machine been
> different, since the machine was behaving randomly there is no reason
> to expect that it would have "happened to" perform correctly with
> those inputs.

        Just to clarify: by 'randomly' you mean that the behavior does
not depend only on the inputs. I assume deterministic physics.
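
        To make Hal's setup concrete, here is a toy Python sketch of my
own (the update rule, the failure window, and the "noise record" are
purely illustrative assumptions, nothing Hal specified). A deterministic
machine's steps inside the failure window are copied from a fixed noise
record instead of being computed by its rule; on the actual run the
trace can coincide with the correct one, but with a different input the
same noise no longer tracks the rule, which is the sense in which
counterfactuals are not handled.

    # Toy model (illustrative assumptions): iterate x -> (3*x + 1) mod 17,
    # except that during a failure window the next state is copied from a
    # pre-recorded noise trace instead of being computed from the rule.

    def run(x0, steps, fail_window=(), noise=()):
        trace = [x0]
        x = x0
        for t in range(steps):
            if t in fail_window:
                x = noise[t]             # behavior no longer tracks the rule
            else:
                x = (3 * x + 1) % 17     # the "correct" transition
            trace.append(x)
        return trace

    # A noise record that "just happens" to match the correct transitions:
    correct = run(5, 6)
    lucky_noise = {2: correct[3], 3: correct[4]}
    actual = run(5, 6, fail_window={2, 3}, noise=lucky_noise)
    assert actual == correct     # identical physical activity on this run

    # Counterfactual test: a different initial state replays the *same*
    # noise, so the broken machine no longer follows the rule.
    assert run(7, 6, fail_window={2, 3}, noise=lucky_noise) != run(7, 6)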

[Marchal wrote]
> > > Add a physically inactive piece such that, in the case of a change in
> > > the neighborhood of the UTM, that physically inactive piece becomes
> > > active and acts as an automated demon, instantaneously fixing the UTM
> > > in the accidentally correct state left by the lucky cosmic rays.
> > > (That this is possible is shown in Maudlin's paper.)
> >
> > I take it that you mean that if the machine departs from its
> > normal sequence, a switch will be triggered to activate a backup machine.
> > I think Maudlin's example used a false implementation, and I'm not
> > convinced that it could be done that way otherwise, but that's not the
> > quickest route to debunk the argument so let's continue.
>
> I'm not sure I understand exactly what Marchal is proposing here.
> He wants an automated demon to fix the UTM as soon as there is "a change
> in the neighborhood of the UTM". What is this change which triggers
> the fix?
>
> I don't believe he refers to the failure of the machine. If his demon
> fixed the machine as soon as it broke, it would simply not be broken
> and would behave properly during the interval in question, so would
> handle counterfactuals correctly.
>
> I think what he wants is that the demon watches the machine and if the
> machine ever breaks AND behaves other than what it is supposed to by
> its program and design, then it will be fixed. So if the machine
> randomly happens to follow the correct state transitions even while
> broken, then the demon will not be activated and so the machine will
> not need to get fixed.
>
> In the case of the example above, where the machine follows the correct
> transitions "by accident" and then spontaneously fixes itself, the demon
> is never active, and so the machine's actual activity is the same as
> in the original state.
>
> Assuming this is the intention, we can dispense with the demon and replace
> him by some repair circuit. However if we consider how this circuit would
> work, I want to note one thing. How can the repair circuit know when the
> machine is behaving incorrectly, as defined above? Incorrect behavior is
> defined as departure from what the machine is actually supposed to do.
> But the only way to detect that is to calculate, either in advance or
> while the program is running, what the machine is supposed to be doing.

        I agree. In Maudlin's example the implementation was almost
'clock-style' (false) to begin with, so this could be swept under
the rug. That's why I said to Marchal that I didn't know if it (Maudlin's
move) could be done for a non-false implementation. (See my home page re:
false implementations.)
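
        A minimal sketch of Hal's point about the repair circuit (again
my own toy model; the step rule and the interface are assumed for
illustration): the only way the circuit can recognize a departure from
the program is to compute, in parallel, what the machine should do next,
i.e. to be a second run of the same step function.

    # Toy sketch: a repair circuit that detects a departure from the
    # program only by running the same step function itself (a shadow
    # computation), and patches the machine back to the expected state.

    def step(x):
        return (3 * x + 1) % 17          # same illustrative rule as above

    def run_with_repair(x0, steps, glitch=None):
        x = x0
        for t in range(steps):
            expected = step(x)                             # shadow computation
            actual = glitch[t] if glitch and t in glitch else step(x)
            if actual != expected:
                print("t=%d: repairing %d -> %d" % (t, actual, expected))
                actual = expected                          # instantaneous fix
            x = actual
        return x

    run_with_repair(5, 6)                  # the circuit never fires
    run_with_repair(5, 6, glitch={2: 0})   # fires once, at t=2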

> This means that we either have to be running a replay of a previously
> instantiated computation, or we have to be running a second computation
> in parallel with this one. In either case we are not dealing with a
> unique calculation where this UTM is the only case where the calculation
> is instantiated. The consciousness in question WILL BE or HAS BEEN
> instantiated. If "I" am that consciousness, there is no question in
> this thought experiment about whether I will ever have the experience of
> thinking the thoughts in question. I do, I must. The only question is
> whether my thoughts are being instantiated in this particular run of this
> particular machine. I, as a thinker, could not tell the difference; it's
> not like it would make the difference between my existence and my lack
> of existence. It would only affect how many times I am instantiated,
> and as I have argued before, it is questionable to me whether this has
> any subjective or objective effects.

        While it is obvious to me that measure is proportional to the
number of implementations, I must point out that this is only true of
independent implementations. In a case like this, where one
implementation depends on another, they probably constitute just a
single (independent) implementation and should be counted as such.
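
        Roughly the counting convention I have in mind, as a toy sketch
(my own bookkeeping for illustration, not a worked-out theory of
measure): implementations linked by a dependency, like the watched run
and the shadow run inside the repair circuit, are grouped together and
contribute one unit of measure.

    # Toy bookkeeping: group implementations that depend on one another
    # and count one unit of measure per independent group.

    def measure(implementations, depends_on):
        def root(x):
            while depends_on.get(x) is not None:
                x = depends_on[x]
            return x
        return len({root(i) for i in implementations})

    impls = ["UTM run", "shadow run inside the repair circuit",
             "unrelated run on another machine"]
    deps = {"shadow run inside the repair circuit": "UTM run"}
    print(measure(impls, deps))   # -> 2: the shadow run adds no measure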

> > > Now I tell you that there will be no change between t1 and t2 in the
> > > neighborhood of the UTM, so that the inactive demon remains inactive.
> > > Note that the whole setting is counterfactually correct and should, I
> > > think, be considered well-implemented, because the "correct
> > > counterfactualness" concerns the alleged Turing-emulability of the
> > > correct implementation (by construction) of the brain by the UTM.
> >
> > OK, so you think the computation is implemented in this case.
>
> I believe, from what Jacques says below, that he also believes that
> the computation is implemented in this case, and that the addition of
> the repair circuit (or demon, as Marchal has it) would make the
> system conscious.
>
> > > So your "correct implementation", whatever it is, as far as it is
> > > Turing-emulable, will fall into Maudlin's trap, I think.
> > >
> > > MORAL: You cannot associate consciousness with the physical activity
> > > supporting a computation. (i.e. NOT SUP-PHYS)
> >
> > I think the key word here is 'activity'.
> > I don't go around using the term 'SUP-PHYS', but it seems that
> > what you mean by it is not what I mean by physical computationalism.
> > Whether or not a computation is implemented depends on the laws of
> > physics and the initial conditions. Equivalently, it depends on the laws
> > of physics and the history of the system over time.
> > It is clear that in the two cases you described, the initial
> > conditions are *different*, and the history of the system is different.
> > In the first example there was no 'demon', while in the second example
> > there is a 'demon' and it moves at the constant velocity of the system.
> > That's why the argument is a straw man. Maybe (?) some people
> > once said that only objects that move in certain ways affect whether a
> > computation is implemented and Maudlin countered that, but I never said
> > anything like that. For me a stationary object is still part of the
> > system and it's perfectly OK if the computation depends on the position
> > and properties of that object.
>
> I think this is a logical answer, that the system with a repair circuit
> would have different properties than a system without one, even if the
> repair circuit is not activated. Hence we can say that the system is
> conscious with the repair circuit and unconscious without it, and not
> contradict ourselves.
>
> Maudlin also accepts that this answer is possible, but he does offer some
> critiques.
>
> To introduce them, let us sharpen the situation somewhat: we know that
> the repair circuit is never activated. Let us suppose that it is a rather
> complicated circuit involving a robot which halts the TM, takes it
> apart, repairs the machine, puts it back together, and allows it to
> continue to run.
>
> Now Jacques has agreed, I think, that the existence of consciousness
> between t1 and t2 depends on the existence of this robot. If the robot
> exists and can function properly, then the TM is conscious. If the robot
> does not exist or cannot function, then the TM is unconscious.
>
> The problem is that we can make the presence of consciousness in this
> machine depend on some far-removed events. For example, suppose that in
> order to fix the machine the robot would have to order some new part.
> Now suppose that, all the way around the world, there is a catastrophe
> which makes the new part unavailable. The robot would be unable to
> complete his repairs, hence the machine becomes unconscious.

        It is no different from the usual case of implementation. The
above argument is similar to the Chinese room argument. If I am
corresponding by paper mail with some guy in Tibet and we take turns
manipulating some symbols, a computation can be implemented.

> Again, remember that the robot is never actually activated during this
> time interval. So intuitively it is hard to understand why an event
> happening on the other side of the world would affect the consciousness
> here and now of the machine. But that is the position which we appear
> to be forced to if we take this path.

        And I might also be corresponding with some guy in Africa. I send
it to this guy only for certain choices of the symbols. Suppose he dies
and I don't know it. If so, the computation will no longer be implemented,
even if those symbols never occur in this run.
        All extremely standard stuff.
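
        For what it's worth, here is a toy sketch of that paper-mail
implementation (my own framing; the symbols, rules, and correspondents
are made up for illustration). The branch that routes certain symbols to
the guy in Africa is part of the counterfactual structure even on a run
where those symbols never come up; delete his handler and this
particular run is unchanged, but the implementation claim is weakened.

    # Toy paper-mail implementation: correspondents take turns applying
    # rules; certain symbols would be routed to a third correspondent
    # who, on this run, is never consulted.

    def me(symbol):
        out = symbol.upper()
        return ("africa", out) if out.startswith("Z") else ("tibet", out)

    def guy_in_tibet(symbol):
        return ("me", symbol + "x")

    def guy_in_africa(symbol):
        return ("me", "?")          # if he has died, this handler is gone

    HANDLERS = {"me": me, "tibet": guy_in_tibet, "africa": guy_in_africa}

    def run_by_mail(symbol, turns, handlers=HANDLERS):
        who = "me"
        for _ in range(turns):
            if who not in handlers:
                raise RuntimeError("branch no longer supported: " + who)
            who, symbol = handlers[who](symbol)
        return symbol

    print(run_by_mail("a", 4))           # Africa is never consulted
    # Drop the Africa handler: this run is unchanged, but the branch for
    # symbols starting with 'Z' no longer exists.
    no_africa = {k: v for k, v in HANDLERS.items() if k != "africa"}
    print(run_by_mail("a", 4, handlers=no_africa))    # same output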

> Here is what Maudlin says:
>
> "The modern picture of brain function rests primarily on the notion
> of neural activity. The essential structure of mentation seems to be
> founded in patterns of neural firings. Because those firings can be
> analyzed as carrying information, the brain has come to be considered
> as an information processor. So let us suppose that some time in
> the future the electro-encephalograph is so perfected that it is
> capable of recording the firing of every single neuron in the brain.
> Suppose that researchers take two different surveys of a brain which
> match exactly: the very same neurons fire at exactly the same rate
> and in exactly the same pattern through a given period. They infer
> (as surely they should!) that the brain supported the same occurrent
> conscious state through the two periods. But the computationalist now
> must raise a doubt. Perhaps some synaptic connection has been severed
> in the interim. Not a synaptic connection of any of the neurons which
> actually fired during either period, or which was in any way involved
> in the activity recorded by the encephalograph. Still, such a change in
> connection will affect the counterfactuals true of the brain, and so can
> affect the subjective state of awareness. Indeed, the computationalist
> will have to maintain that perhaps the person in question was conscious
> through the first episode but not conscious at all through the second.
> I admit to a great deal of mystification about the connection between mind
> and body, but I see no reason to endorse such possibilities that directly
> contradict all that we do know about brain process and experience.

        He's not making much sense if he's saying that we know that
consciousness is present in such a case! (BTW for it to be absent, as I
explained in a previous post, a large part of the brain would have to be
affected simultaneously.)

> "Whether the reason is enshrined in the supervenience thesis or not, our
> general picture of the relation between physical and mental reality firmly
> grounds the intuition that Olympia's [Olympia is Maudlin's name for his
> pseudo-conscious TM - Hal] experience cannot be changed by the presence
> or absence of the second set of blocks [these are blocks which interrupt
> the operation of an extra circuit, like the repair circuit above - Hal].
> These intuitions are not sacrosanct, but the computationalist especially
> abandons them at his own risk. For similar intuitions are often appealed
> to in defending the appropriateness of computational analogies in the
> first place. One first step in arguments aimed at inducing assent to
> the possibility of computers that can think, or feel, or intend, is
> to imagine that some sort of prosthetic human neuron made of silicon
> has been invented. We are then to imagine slowly replacing some poor
> sap's brain bit by bit until at last we have a silicon brain that, our
> intuitions should inform us, can do all of the mental and intensional
> work of the original.
>
> "This is as yet a far cry from showing that anything has mental properties
> in virtue of its computational structure, but it is supposed to break
> down parochial species-chauvinistic views about there being any deep
> connection between mentality and organic chemistry. But the thought
> experiment rests on a tacit appeal to supervenience. How could it
> matter, one asks, whether the electrical impulses are carried by neurons
> or by doped silicon? The implication is that mentality supervenes
> only on the pattern of electrical or electrochemical activity. If the
> computationalist now must assert that the presence or absence of a piece
> of metal hanging untouched and inert in the midst of silent, frozen
> machinery can make the difference between being conscious and not, who
> knows what enormous changes in psychical state may result from replacing
> axons and dendrites with little copper wires? Should the computationalist
> reject the extremely general intuitions at play in assessing Olympia's
> case, no means of judging the plausibility or implausibility of any
> theory of mind seems to remain."

        That's simply and totally a false statement on his part. The
computationalist has decided what seems important, such as the
counterfactual structure, and has no problem believing that, unless the
copper wires support a different counterfactual structure, they would
implement the same computation. There is no instability with respect to
tiny changes as he alleges.
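
        Here is roughly what I mean by "the same counterfactual
structure", as a toy sketch (my own formalization, for illustration
only): identify an implementation not with the states it happened to
pass through, but with the full map of what it would do in every state
for every input, and compare those maps across substrates.

    # Toy formalization: tabulate the full (state, input) -> next-state
    # map and compare it across substrates; the comparison ignores which
    # states a run actually visited.

    def transition_table(step, states, inputs):
        return {(s, i): step(s, i) for s in states for i in inputs}

    def neuron_step(s, i):           # stand-in for the biological dynamics
        return (s + 2 * i) % 4

    def copper_step(s, i):           # different hardware, same dependence
        return (2 * i + s) % 4

    STATES, INPUTS = range(4), range(2)
    print(transition_table(neuron_step, STATES, INPUTS) ==
          transition_table(copper_step, STATES, INPUTS))   # True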

> Or, in my example, if a factory fire in India can affect whether a system
> here and now is conscious, you are moving into a realm where intuitions
> about consciousness become highly suspect.

        'Here and now' is not relevant to the issue. Physics is local but
the effects of a change propagate. An implementation does not take place
at a point in spacetime, but depends on an extended structure.

From: Marchal <marchal.domain.name.hidden>
>Jacques M Mallah did indeed accept that consciousness will rely on the
>presence or absence of an inactive piece, but this will put arbitrariness
>into any notion of physical instantiation of a computation, in the very
>opposite direction of what Turing-mechanism, or computationalism, is.

        No. First of all I never said that it depends on an inactive
piece; I just said that inactivity does not disqualify that piece. As
pointed out above, the piece would likely not be inactive, since it would
have to perform the computation to verify the run.
        As for "arbitrariness", I don't know what you're smoking, but I
*don't* see that happening *AT ALL* in this example. You have said
nothing to even *try* to justify such a statement.

                         - - - - - - -
              Jacques Mallah (jqm1584.domain.name.hidden)
       Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
            My URL: http://pages.nyu.edu/~jqm1584/