There's a quote you might like, by Korzybski: "That which makes no
difference _is_ no difference."
--------------------------
- Did you ever hear of "The Seattle Seven"?
- Mmm.
- That was me... and six other guys.
2008/11/26 Bruno Marchal <marchal.domain.name.hidden>
> MGA 3
>
> This is the last MGA!
>
> I realize MGA is complete, as I thought it was, though I had been doubting
> this recently. We don't need to refer to Maudlin, and MGA 4 is not necessary.
> Maudlin 1989 is an argument independent of the 1988 Toulouse argument
> (which I present here). Note that Maudlin's very interesting "Olympization
> technique" can be used to defeat a wrong form of MGA 3, that is, a wrong
> argument for the assertion that the movie cannot be conscious (the
> argument that the movie lacks the counterfactuals). Below is a hopefully
> correct (if not very simple) argument. (I sometimes use Maudlin when people
> give this incorrect form of MGA 3, and this is probably what made me
> think Maudlin had to be used at some point.)
>
>
>
> MGA 1 shows that Lucky Alice is conscious, and MGA 2 shows that the
> "luckiness" feature of the MGA 1 experiment was a red herring. We can
> construct, from MEC+COMP, a home-made lucky-rays generator, and use it at
> will. If we accept digital mechanism, in particular Dennett's principle
> that neurons have no intelligence, still less prescience, and this
> *together with* the supervenience principle, we have to accept that Alice's
> conscious dream experience supervenes on the projection of the movie of her
> brain activity.
>
> Let us now show that Alice's consciousness *cannot* supervene on that
> *physical* movie projection.
>
>
>
> I propose two (deductive) arguments.
>
> 1)
>
> Mechanism implies the following tautological functionalist principle: if,
> for some range of activity, a system does what it is supposed to do, both
> before and after a change is made in its constitution, then the change
> does not change what the system is supposed to do, for that range of
> activity.
> Example:
> - A car is supposed to break down, but only if the driver goes faster than 90
> miles/h. Pepe Pepito NEVER drives faster than 80 miles/h. Then the car
> does what it is supposed to do, with respect to the range of
> activity defined by Pepe Pepito.
> - Claude bought a computer with 1000 processors. One day he realized
> that he used only 990 processors for his type of activity, so he decided to
> get rid of those 10 useless processors. And indeed the machine will satisfy
> Claude forever.
>
> - Alice has (again) a math exam. Theoreticians have correctly predicted that
> in this special circumstance she will never use neurons X, Y and Z. Now
> Alice goes (again, again) to this exam in the same conditions, but with
> neurons X, Y and Z removed. Again, not only will she behave as if she
> succeeded at her exam, but her consciousness, with both MEC *and* MAT, still
> continues.
> The idea is that if something is not useful for an active process to go
> on, for some range of activity, then you can remove it, for that range of
> activity.
>
> OK?
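>
> A minimal sketch of the principle, in Python. The car, the speeds and the
> function names are just the toy illustration above, not part of MEC; the
> point is only that a component never exercised on a given range of activity
> can be removed without changing behavior on that range.
>
>     def car_with_flaw(speed_mph):
>         # The car "breaks" only above 90 miles/h.
>         if speed_mph > 90:
>             raise RuntimeError("engine breaks down")
>         return "drives fine"
>
>     def car_with_flaw_removed(speed_mph):
>         # Same car, flawed part removed: identical below 90 miles/h.
>         return "drives fine"
>
>     # Pepe Pepito's range of activity: he never exceeds 80 miles/h.
>     pepes_range = range(0, 81)
>
>     assert all(car_with_flaw(v) == car_with_flaw_removed(v)
>                for v in pepes_range)
>     print("On Pepe Pepito's range of activity the two cars are the same.")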
>
> Now, consider the projection of the movie of the activity of Alice's brain,
> "the movie graph".
> Is it necessary that someone look at that movie? Certainly not. No more
> than it is necessary that someone look at your reconstitution in Moscow for
> you to be conscious in Moscow after a teleportation. All right? (With MEC
> assumed, of course.)
> Is it necessary to have a screen? Well, the range of activity here is just
> one dynamical description of one computation. Suppose we make a hole in the
> screen. What goes in and out of that hole is exactly the same, with the hole
> and without the hole. For that unique activity, the hole in the screen is
> functionally equivalent to the subgraph which the hole removed. Clearly we
> can make a hole as large as the screen, so there is no need for a screen.
> But this reasoning goes through if we make the hole in the film itself.
> Reconsider the image on the screen: with a hole in the film itself, you get
> a "hole" in the movie, but everything which enters and goes out of the hole
> remains the same, for that (unique) range of activity. The "hole" trivially
> has the same functionality as the subgraph whose
> special behavior was described by the film. And this is true for any
> subpart, so we can remove the entire film itself.
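>
> One way to picture the "hole" step, as a sketch only (the frame list is a
> hypothetical stand-in, not Alice's actual brain graph): in a replay, frame i
> is never an input to frame i+1, so blanking any frame leaves every other
> frame exactly as it was, and the same holds for all of them at once.
>
>     # A film is just an ordered record of frames, fixed in advance.
>     film = ["frame_%d" % i for i in range(10)]
>
>     def project(frames):
>         # Replaying the record: each frame is emitted exactly as recorded,
>         # nothing is computed from the frame before it.
>         return list(frames)
>
>     # Punch a hole in the film: blank out frame 4.
>     holed = list(film)
>     holed[4] = None
>
>     # Every other frame of the projection is exactly what it was before.
>     original = project(film)
>     with_hole = project(holed)
>     assert [f for i, f in enumerate(with_hole) if i != 4] == \
>            [f for i, f in enumerate(original) if i != 4]
>     # Since this holds for every frame, all of them can be blanked; for
>     # this one fixed run the fully holed film "does" what the film did.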
>
> Does Alice's dream supervene (in real time and space) on the projection of
> the empty movie?
>
> Remark.
> 1° Of course, this argument can be summed up by saying that the movie lacks
> causality between its parts, so that it cannot really be said that it
> computes anything, at least physically. The movie is just an ordered record
> of computational states. This is neither a physical computation, nor an
> (immaterial) computation where the steps follow relative to some
> universal machine. It is just a description of a computation, already
> existing in the Universal Deployment.
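>
> The difference between a computation and its record can be made concrete
> with a toy example (the little counter program is of course only an
> illustration, not Alice's brain): in the computation each state is produced
> from the previous one, so it answers counterfactual inputs; the record just
> lists the states and answers nothing.
>
>     def step(state):
>         # Each state is produced from the previous one: a computation.
>         return state + 1
>
>     def run(initial, n):
>         states, s = [initial], initial
>         for _ in range(n):
>             s = step(s)
>             states.append(s)
>         return states
>
>     trace = run(0, 5)       # the computation: [0, 1, 2, 3, 4, 5]
>     record = list(trace)    # the "film": an ordered copy of those states
>
>     # The computation handles counterfactuals: started elsewhere, it answers.
>     assert run(10, 5) == [10, 11, 12, 13, 14, 15]
>
>     # The record does not: its entries do not depend on one another, so
>     # altering one changes nothing else; no causality between its parts.
>     record[2] = 999
>     assert record[3] == 3   # the "next state" is untouched; nothing computed
>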
> 2° Note this: if we take into consideration the relative destiny of Alice,
> and suppose that one day her brain breaks down completely, she has a better
> chance of surviving "holes in the screen" than "holes in the film".
> The film contains the relevant information to reconstitute Alice from her
> brain description, contained on this high-resolution film. Keeping comp, and
> abandoning the physical supervenience thesis, means that we no longer
> associate consciousness with the movie, NOR with the brain's special
> activity in a computation, but with the computation itself directly. A brain,
> and even a film, will "only" be a way to increase the probability
> for a consciousness to manifest itself relative to a "probable" universal
> computational history.
> Strictly speaking, running the movie diminishes Alice's chance of having her
> conscious experience (life) continue, at least relative to you, because of
> the many scratches the projector makes on the film stock, which remove
> relevant information for a safe reconstitution later (again, relative to
> you).
>
>
> 2)
>
> I now give what is perhaps a simpler argument.
>
> A projection of a movie is a relative phenomenon. On planet 247a,
> nearby in the galaxy, they don't have screens. The film stock is as big as
> a screen, and they pass the film behind a stroboscope at the right
> frequency in front of the public. But on planet 247b, movies are only for
> travellers! They lay out their films, as big as those on planet 247a, across
> their countryside all along their train rails, with a lamp beside each frame,
> which is nice because from the train, thanks to its speed, you get the usual
> 24 frames per second. But we have already accepted that such a movie does not
> need to be observed; the train can be empty of people. Well, the train does
> not play any role, and what remains is the static film with a lamp behind each
> frame. Are the lamps really necessary? Of course not, all right? So now we are
> obliged to accept that the consciousness of Alice during the projection of
> the movie supervenes on something completely inert in time and space. This
> contradicts the *physical* supervenience thesis.
>
>
> Exercises.
>
> a) Someone could propose an alternate argument that a movie does not
> compute (and so that consciousness does not supervene on it) by alluding to
> the lack of causality in the movie: the movie does not handle the
> counterfactuals existing implicitly in computations (physical or not). Use
> Maudlin's Olympization technique to refute that argument.
>
> b) Have fun using a non-dreaming Alice. Show that the movie (film or
> screen) graph border is needed to get the accidental zombies (the puppets).
>
> And then the "important" exercise (the original goal).
>
> c) Eliminate the hypothesis "there is a concrete deployment" in the seventh
> step of the UDA. Use UDA(1...7) to define properly the computationalist
> supervenience thesis. Hint: reread the remarks above.
>
> Have a good day.
>
>
> Bruno
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>