Re: MGA 3

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Thu, 27 Nov 2008 11:38:03 -0800

Abram Demski wrote:
> Bruno,
>
> It seems to me that this runs head-on into the problem of the
> definition of time...
>
> Here is my argument; I am sure there will be disagreement with it.
>
> Supposing that Alice's consciousness is spread out over the movie
> billboards next to the train track, there is no longer a normal
> temporal relationship between mental moments. There must merely be a
> "time-like" relationship, which Alice experiences as time. But, then,
> we are saying that wherever a logical relationship exists that is
> time-like, there is subjective time for those inside the time-like
> relationship.
>
> Now, what might constitute a time-like relationship? I see several
> alternatives, but none seem satisfactory.
>
> At any given moment, all we can be directly aware of is that one
> moment. If we remember the past, that is because at the present moment
> our brain has those memories; we don't know if they "really" came from
> the past. What would it mean to put moments in a series? It changes
> nothing essential about the moment itself; we can remove the past,
> because it adds nothing.

You raise some good points. I think the crux of the problem comes from chopping
a process up into "moments" and assuming that these infinitesimal, frozen slices
preserve all that is necessary for time. It is essentially the same as assuming
there is a "substitution level" below which we can ignore causality and just talk
about states. It seems like an obvious idea, but it is contrary to quantum
mechanics and to unitary evolution under the Schrödinger equation, which was the
basis for the whole idea of a multiverse and "everything happens".
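
A toy illustration of that last point (a minimal Python sketch, assuming numpy;
the two-level system and rotation are invented for the example):

    import numpy as np

    # Toy unitary evolution: a rotation on a 2-level system. The "process"
    # is the dynamics U; the frozen slices are snapshots derived from it.
    theta = 0.3
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    psi = np.array([1.0, 0.0])        # initial state
    snapshots = [psi]
    for _ in range(5):                # five "frozen, infinitesimal slices"
        psi = U @ psi
        snapshots.append(psi)

    # The snapshot list records states but not U itself: any dynamics that
    # agrees with U on these six points produces the same record, so the
    # slices alone underdetermine the causal law connecting them.
    print(np.round(snapshots, 3))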


>
> The connection between moments doesn't seem like a physical
> connection; the notion is non-explanatory, since if there were such a
> physical connection we could remove it without altering the individual
> moments, and therefore without altering our memories or our subjective
> experience of time.

How do we know that? Memories and brain processes are distributed and parallel,
which means there are spacelike separated parts of the process - and neural
signals are orders of magnitude slower than light.
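
For rough orders of magnitude (the figures below are assumed, back-of-envelope
values, not measurements):

    # Back-of-envelope figures (assumed orders of magnitude):
    c = 3.0e8         # speed of light, m/s
    v_neural = 1.0e2  # fast myelinated axon conduction, m/s
    d = 0.1           # rough spatial extent of a brain, m

    t_signal = d / v_neural  # ~1e-3 s for a neural signal to cross the brain
    t_light = d / c          # ~3e-10 s for light to cross the same distance

    # Two neural events on opposite sides of the brain separated in time by
    # less than d/c (~0.3 ns) are spacelike separated; within a ~1 ms
    # processing window almost every pair of such events is.
    print(f"signal: {t_signal:.1e} s, light: {t_light:.1e} s")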

Brent

> Similarly, can it be a logical relationship? Is it
> the structure of a single moment that connects it to the next? How
> would this be? Perhaps we require that there is some function (a
> "physics") from one moment to the next? But this does not exactly
> allow for things like relativity, in which there is no single universal
> clock. Of course, relativity could be simulated, creating a universe
> that was run by a universal clock but whose internal facts did not
> depend on which universal clock, exactly, the simulation was run from.
> My problem is, I suppose, that any particular definition of "timelike
> relationship" seems too arbitrary. As another example, should any
> probabilistic elements be allowed into physics? In that case we no
> longer have a function but a relation, perhaps a relation of
> weighted transitions. But how would this relation make any difference
> from inside the universe?
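
A minimal sketch of the two alternatives contrasted here, in Python (the toy
state space, rules, and weights are all invented for illustration):

    import random

    def deterministic_step(state):
        # a function: exactly one successor per state
        return (state + 1) % 10

    def weighted_step(state):
        # a relation of weighted transitions: several possible successors
        return random.choices([(state + 1) % 10, (state + 2) % 10],
                              weights=[0.9, 0.1])[0]

    # A single run of either rule is just a sequence of states, and any
    # finite trajectory the weighted rule happens to produce could equally
    # have been produced by some deterministic rule, so the difference is
    # invisible "from inside" the realized history.
    trajectory = [0]
    for _ in range(8):
        trajectory.append(deterministic_step(trajectory[-1]))
    print(trajectory)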
>
> --Abram
>
> On Wed, Nov 26, 2008 at 4:09 AM, Bruno Marchal <marchal.domain.name.hidden> wrote:
>> MGA 3
>>
>> It is the last MGA!
>>
>> I realize MGA is complete, as I thought it was, but I was doubting this
>> recently. We don't need to refer to Maudlin, and MGA 4 is not necessary.
>> Maudlin 1989 is an argument independent of the 1988 Toulouse argument (which
>> I present here).
>> Note that Maudlin's very interesting "Olympization technique" can be used to
>> defeat a wrong form of MGA 3, that is, a wrong argument for the assertion
>> that the movie cannot be conscious (the argument that the movie lacks the
>> counterfactuals). Below is a hopefully correct (if not very simple) argument.
>> (I sometimes use Maudlin when people give this incorrect form of MGA 3,
>> and this is probably what made me think Maudlin had to be used at some
>> point.)
>>
>>
>>
>> MGA 1 shows that Lucky Alice is conscious, and MGA 2 shows that the
>> "luckiness" feature of the MGA 1 experiment was a red herring. We can
>> construct, from MEC+COMP, a home-made lucky-rays generator, and use it at
>> will. If we accept digital mechanism, in particular Dennett's principle
>> that neurons have no intelligence, still less prescience,
>> *together with* the supervenience principle, we have to accept that Alice's
>> conscious dream experience supervenes on the projection of the movie of her
>> brain activity.
>>
>> Let us now show that Alice's consciousness *cannot* supervene on that
>> *physical* movie projection.
>>
>>
>> I propose two (deductive) arguments.
>>
>> 1)
>>
>> Mechanism implies the following tautological functionalist principle: if,
>> for some range of activity, a system does what it is supposed to do both
>> before and after a change is made in its constitution, then the change
>> does not change what the system is supposed to do, for that range of
>> activity.
>> Examples:
>> - A car is supposed to break down, but only if the driver goes faster than 90
>> miles/h. Pepe Pepito NEVER drives faster than 80 miles/h. Then the car
>> does what it is supposed to do, with respect to the range of
>> activity defined by Pepe Pepito.
>> - Claude bought a 1000-processor computer. One day he realized
>> that he used only 990 processors for his type of activity, so he decided to
>> get rid of the 10 useless processors. And indeed the machine will satisfy
>> Claude forever.
>>
>> - Alice has (again) a math exam. Theoreticians have correctly predicted that
>> in these special circumstances she will never use neurons X, Y and Z. Now
>> Alice goes (again, again) to this exam in the same conditions, but with
>> neurons X, Y and Z removed. Again, not only will she behave as if she
>> passed her exam, but her consciousness, with both MEC *and* MAT, still
>> continues.
>> The idea is that if something is not useful for an active process to go on,
>> for some range of activity, then you can remove it, for that range of
>> activity.
>>
>> OK?
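
As an aside, the quoted principle has a direct software analogue; here is a
minimal sketch in Python (function names invented, threshold borrowed from the
car example above):

    def car_full(speed):
        # the only part of the behavior that differs is above 90 miles/h
        if speed > 90:
            raise RuntimeError("breakdown")
        return "drives fine"

    def car_trimmed(speed):
        # the same car with the never-exercised part removed
        return "drives fine"

    # Pepe Pepito's range of activity never exceeds 80 miles/h, so within
    # that range the trimmed car does exactly what the car is supposed to do.
    assert all(car_full(v) == car_trimmed(v) for v in range(81))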
>>
>> Now, consider the projection of the movie of the activity of Alice's brain,
>> "the movie graph".
>> Is it necessary that someone look at that movie? Certainly not. No more than
>> it is necessary that someone look at your reconstitution in Moscow for you
>> to be conscious in Moscow after a teleportation. All right? (With MEC
>> assumed, of course.)
>> Is it necessary to have a screen? Well, the range of activity here is just
>> one dynamical description of one computation. Suppose we make a hole in the
>> screen. What goes in and out of that hole is exactly the same, with the hole
>> and without the hole. For that unique activity, the hole in the screen is
>> functionally equivalent to the subgraph which the hole removed. Clearly we
>> can make the hole as large as the screen, so there is no need for a screen.
>> But the reasoning goes through if we make the hole in the film itself.
>> Reconsider the image on the screen: with a hole in the film itself, you get
>> a "hole" in the movie, but everything which enters and goes out of the hole
>> remains the same, for that (unique) range of activity. The "hole" trivially
>> has the same functionality as the subgraph whose special behavior was
>> described by the film. And this is true for any subpart, so we can remove
>> the entire film itself.
>>
>> Does Alice's dream supervene (in real time and space) on the projection of
>> the empty movie?
>>
>> Remarks.
>> 1° Of course, this argument can be summed up by saying that the movie lacks
>> causality between its parts, so that it cannot really be said that it
>> computes anything, at least physically. The movie is just an ordered record
>> of computational states. This is neither a physical computation, nor an
>> (immaterial) computation where the steps follow relatively to some
>> universal machine. It is just a description of a computation, already
>> existing in the Universal Deployment.
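
In computational terms, the quoted remark can be sketched as follows (a toy
illustration in Python, not Bruno's formalism; the transition rule is
invented): a computation applies its rule to whatever state arises, while a
replayed record never applies any rule and so supports no counterfactuals.

    def step(state):
        # a genuine state-transition rule (invented for illustration)
        return (state * 2) % 7

    def compute(initial, n):
        states = [initial]
        for _ in range(n):
            states.append(step(states[-1]))
        return states

    movie = compute(3, 5)   # an ordered record of computational states
    replay = list(movie)    # "projecting" it reproduces the states
                            # without ever applying the rule

    # The difference is counterfactual: the computation answers "what if
    # the initial state had been 4?", while the record contains no answer.
    print(compute(4, 5))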
>> 2° Note this: if we take into consideration the relative destiny of Alice,
>> and suppose that one day her brain breaks down completely, she has a better
>> chance of surviving "holes in the screen" than "holes in the film".
>> The film contains the relevant information to reconstitute Alice from her
>> brain description, recorded on this high-resolution film. Keeping comp, and
>> abandoning the physical supervenience thesis, means that we no longer
>> associate consciousness with the movie, NOR with the brain's special
>> activity in a computation, but with the computation itself directly. A brain,
>> and even a film, will "only" be a way to increase the probability
>> for a consciousness to manifest itself relatively to a "probable" universal
>> computational history.
>> Strictly speaking, running the movie diminishes Alice's chance of having her
>> conscious experience (life) continue, at least relatively to you, because of
>> the many scratches the projector makes on the film stock, which remove
>> relevant information for a safe reconstitution later (again relatively to
>> you).
>>
>>
>> 2)
>>
>> I now give what is perhaps a simpler argument.
>>
>> A projection of a movie is a relative phenomenon. On planet 247a, nearby
>> in the galaxy, they don't have screens. The film itself is as big as a
>> screen, and they make the film pass behind a stroboscope at the right
>> frequency in front of the public. But on planet 247b, movies are only for
>> travellers! They dress their films, as big as those on planet 247a, along
>> the train rails across their countries, with a lamp beside each frame, which
>> is nice because from the train, through its speed, you get the usual 24
>> frames per second. But we have already accepted that such a movie does not
>> need to be observed; the train can be empty of people. Well, the train does
>> not play any role, and what remains is the static film with a lamp behind
>> each frame. Are the lamps really necessary? Of course not, all right? So now
>> we are obliged to accept that the consciousness of Alice during the
>> projection of the movie supervenes on something completely inert in time and
>> space. This contradicts the *physical* supervenience thesis.
>>
>>
>> Exercises.
>>
>> a) Someone could propose an alternate argument that a movie does not compute
>> (and so consciousness does not supervene on it) by alluding to the lack of
>> causality in the movie: the movie does not handle the counterfactuals
>> existing implicitly in computations (physical or not). Use Maudlin's
>> Olympization technique to refute that argument.
>> b) Have fun using a non-dreaming Alice. Show that the movie (film or
>> screen) graph border is needed to get the accidental zombies (the puppets).
>>
>> And then the "important" exercise (the original goal):
>> c) Eliminate the hypothesis "there is a concrete deployment" in the seventh
>> step of the UDA. Use UDA(1...7) to properly define the computationalist
>> supervenience thesis. Hint: reread the remarks above.
>> Have a good day.
>>
>>
>> Bruno
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>

