Re: Olympia's Beautiful and Profound Mind

From: Bruno Marchal <marchal.domain.name.hidden>
Date: Fri, 13 May 2005 13:34:01 +0200

Thanks for that very nice summary. I will let people think about it. We
discussed this at length some time ago on the Everything-list; a keyword
for finding that discussion in the archive is "crackpot", which is what
Jacques Mallah named the argument.
It is good that we can come back to it, because we never concluded our
old discussion, and for the new people on the list, as for the FOR-list
people, it is quite an important step to figure out that the UDA is a
"proof", not just an "argument". Well, at least I think so. Also, thanks
to Maudlin's taking into account the necessity of counterfactuals in the
notion of computation, and thanks to another (more technical) paper by
Hardegree, it is possible to use the argument to motivate an equivalent
but technically different path toward an arithmetical quantum logic. I
propose we talk about Hardegree later, but I give the reference for
those who are impatient ;) (Also, compared to many papers on quantum
logic, this one is quite readable and perhaps constitutes a nice
introduction to quantum logic, especially, I would add, for
Many-Worlders.) Hardegree shows that the most standard implication
connective available in quantum logic is formally (at least) equivalent
to a Stalnaker-Lewis counterfactual. That is the David Lewis of "On the
Plurality of Worlds" and "Counterfactuals", two books which deserve some
room on the shelf of FOR-listers and Everythingers, imo.
Also, I didn't know it, but the late David Lewis wrote a paper on
Everett (communicated to me by Adrien Barton). Alas, I have not yet
found the time to read it.

  Hardegree, G. M. (1976). The Conditional in Quantum Logic. In Suppes,
P., editor, Logic and Probability in Quantum Mechanics, volume 78 of
Synthese Library, pages 55-72. D. Reidel Publishing Company,
Dordrecht-Holland.

Bruno

On 13 May 2005, at 09:50, Brian Scurfield wrote:

> Bruno recently urged me to read up on Tim Maudlin's movie-graph
> argument against the computational hypothesis. I did so. Here is my
> version of the argument.
> ............................
>
> According to the computational hypothesis, consciousness supervenes
> on brain activity, and the important level of organization in the
> brain is its computational structure. So the same consciousness can
> supervene on two different physical systems provided that they
> support the same computational structure. For example, we could
> replace every neuron in your brain with a functionally equivalent
> silicon chip and you would not notice the difference.
>
> Computational structure is an abstract concept. The machine table of
> a Turing Machine does not specify any physical requirements, and
> different physical implementations of the same machine may not be
> comparable in terms of the amount of physical activity each must
> engage in. We might enquire: what is the minimal amount of physical
> activity that can support a given computation, and, in particular,
> consciousness?
>
> Consider that we have a physical Turing Machine that instantiates the
> phenomenal state of a conscious observer. To do this, it starts with
> a prepared tape and runs through a sequence of state changes, writing
> symbols to the tape and moving the read-write head as it does so. It
> engages in a lot of physical activity. By assumption, the phenomenal
> state supervenes on this physical computational activity. Each time
> we run the machine we will get the same phenomenal state.
>
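> A minimal Python sketch of such a machine may help fix the picture.
> The machine table and prepared tape here are toy inventions, not
> Maudlin's actual machine; they are chosen only so that the run visits
> some tape cells more than once:
>
> # delta maps (state, symbol) -> (next state, symbol to write, head move)
> delta = {
>     ('S0', 0): ('S8', 1, +1),
>     ('S8', 1): ('S7', 0, +1),
>     ('S7', 0): ('S7', 1, -1),
>     ('S7', 1): ('S3', 1, +1),
>     ('S3', 1): ('HALT', 1, +1),
> }
>
> def run(delta, tape, state='S0', head=0):
>     """Run the machine, recording the (state, tape location) visited
>     at each step -- the machine's actual physical activity."""
>     trace = []
>     while state != 'HALT':
>         trace.append((state, head))
>         state, tape[head], move = delta[(state, tape[head])]
>         head += move
>     return trace
>
> trace = run(delta, {0: 0, 1: 1, 2: 0})
> # [('S0', 0), ('S8', 1), ('S7', 2), ('S7', 1), ('S7', 0), ('S3', 1)]
>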
> Let's try to minimise the amount of computational activity that the
> Turing Machine must engage in. We note that many possible pathways
> through the machine state table are not used in our particular
> computation because certain counterfactuals are not true. For
> example, on the first step the machine might actually go from S_0 to
> S_8 because the data location on the tape contained 0. Had the tape
> contained a 1, it might have gone to S_10, but this doesn't obtain
> because the 1 was not actually present.
>
> So let's unravel the actual computational path taken by the machine
> when it starts with the prepared tape. Here are the actual machine
> states and tape locations at each step:
>
> s_0 s_8 s_7 s_7 s_3 s_2 . . . s_1023
> t_0 t_1 t_2 t_1 t_2 t_3 . . . t_2032
>
> Re-label these as follows:
>
> s_[0] s_[1] s_[2] s_[3] s_[4] s_[5] . . . s_[N]
> t_[0] t_[1] t_[2] t_[3] t_[4] t_[5] . . . t_[N]
>
> Note that t_[1] and t_[3] are the same tape location, namely t_1.
> Similarly, t_[2] and t_[4] are both tape location t_2. These tape
> locations are "multiply-located".
>
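> Continuing the toy sketch above (again purely illustrative), the
> re-labelling and the detection of multiply-located cells could be
> written as:
>
> from collections import defaultdict
>
> # trace as returned by run() in the sketch above
> trace = [('S0', 0), ('S8', 1), ('S7', 2), ('S7', 1), ('S7', 0), ('S3', 1)]
>
> # Step i of the actual path defines s_[i] and t_[i].
> relabelled = [(f's_[{i}]', f't_[{i}]') for i in range(len(trace))]
>
> visits = defaultdict(list)        # physical cell -> steps that visit it
> for i, (_, cell) in enumerate(trace):
>     visits[cell].append(i)
>
> # Cells visited more than once are "multiply-located": the t_[i]
> # positions listed for each such cell must be wired together.
> multiply_located = {c: steps for c, steps in visits.items() if len(steps) > 1}
> # {0: [0, 4], 1: [1, 3, 5]}
>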
> The tape locations t_[0], t_[1], t_[2], ..., can be arranged in
> physical sequence provided that a mechanism is provided to link the
> multiply-located locations. Thus t_[1] and t_[3] might be joined by a
> circuit that turns both on when a 1 is written and both off when a 0
> is written. Now when the machine runs, it has to take account of the
> remapped tape locations when computing what state to go into next.
> Nevertheless, the net effect of all this is that it just runs from
> left to right.
>
> If the machine just runs from left to right, why bother computing the
> state changes? We could just arrange for each tape location to turn
> on (1 = on) or off (0 = off) when the read/write head arrives. For
> example, if t_[2] would have been turned on in the original
> computation, then there would be a local mechanism that turns that
> location on when the read/write head arrives (note that t_[4] would
> also turn on because it is linked to t_[2]). The state s_[i] is then
> defined to occur when the machine is at tape location t_[i] (this
> machine therefore undergoes as many state changes as the original
> machine). Now we have a machine that just moves from left to right
> triggering tape locations. To make it even simpler, the read/write
> head can be replaced by an armature that moves from left to right
> triggering tape locations. We have a very lazy machine! Its name is
> Olympia.
>
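> In the same illustrative Python vein (function and variable names are
> invented for the example), Olympia's entire activity amounts to no
> more than this; the recorded bits and the wiring between
> multiply-located cells come from the toy trace above:
>
> def replay_olympia(recorded_bits, links):
>     """recorded_bits[i] is the bit the original run wrote at step i;
>     links[i] lists the other positions wired to position i."""
>     locations = [None] * len(recorded_bits)
>     for i, bit in enumerate(recorded_bits):   # the armature's only work
>         locations[i] = bit
>         for j in links.get(i, []):            # linked cells switch together
>             locations[j] = bit
>     return locations
>
> replay_olympia([1, 0, 1, 1, 1, 1],
>                {0: [4], 4: [0], 1: [3, 5], 3: [1, 5], 5: [1, 3]})
>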
> What, then, is the physical activity on which the phenomenal state
> supervenes? It cannot be in the activity of the armature moving from
> left to right. That doesn't seem to have the required complexity. Is
> it in the turning on and off of the tape locations as the armature
> moves? Again, that does not seem to have the required degree of
> complexity.
>
> It might be objected that in stripping out the computational pathway
> that we did, we have neglected all the other pathways that could have
> been executed but never in fact were. But what difference do these
> pathways make? We could construct similar left-right machines for
> each of these pathways. These machines would be triggered when a
> counterfactual occurs at a tape location. The triggering mechanism is
> simple. If, say, t_[3] was originally on just prior to the arrival of
> the read/write head but is now in fact off, then we can freeze the
> original machine and arrange for another left-right machine to start
> from that tape location. This triggering and freezing can be done
> using a simple local mechanism at t_[3].
>
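> A rough sketch of that local mechanism, in the same illustrative
> style (the helper name and the dispatch convention are invented for
> the example):
>
> def check_and_trigger(i, actual_bit, expected_bit, spare_machines):
>     """Local check made just before the armature reaches position i:
>     if the cell's actual value differs from the value the original
>     computation expected, freeze Olympia and hand control to the
>     spare left-right machine prepared for that branch."""
>     if actual_bit != expected_bit:
>         return ('freeze', spare_machines[i])
>     return ('advance', None)                  # nothing differs: carry on
>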
> For brevity, I have just sketched how the counterfactuals might be
> implemented (see the original article for more detail). The point is
> that we have implemented all this extra machinery for supporting
> counterfactuals, but none of it is actually used during the original
> computation. It remains silent and inactive. Olympia runs just as
> well without it. Does connecting up all the counterfactual machinery
> make Olympia phenomenally aware? And does disconnecting the machinery
> make her not phenomenally aware even though exactly the same
> computation is taking place?
>
> From the above, it would seem the following are inconsistent with
> each other.
>
> 1. Your phenomenal state at a time is entirely determined by your
> brain activity at the time.
> 2. For any phenomenal state of consciousness there exists some
> program, some tape configuration, and some sequence of machine states
> that brings about that phenomenal state on any physical machine
> capable of running the program.
> 3. A physical system supports a phenomenal state if the system can be
> implemented as a Turing Machine performing some computation.
>
> Maudlin's conclusion is that phenomenal states cannot supervene on
> physical computational activity.
>
> This, of course, is where Bruno and co. step in.
> --------------------------------------
>
>
> Notes:
> 1. Bruno Marchal independently discovered the movie-graph argument in
> 1988.
> 2. Maudlin considered a machine that used water troughs in place of
> tape locations, but I really didn't want to inflict that kind of
> imagery on Bill!
>
>
> Reference.
>
> Maudlin, Tim (1989). Computation and Consciousness. Journal of
> Philosophy, pp. 407-432.
>
>
>
http://iridia.ulb.ac.be/~marchal/