Bruno Marchal wrote:
> Replies to Jason Resch and Brent Meeker:
>
>
> On 01 Nov 2008, at 12:26, Jason Resch wrote:
>
>
>> I've thought of an interesting modification to the original UDA
>> argument which would suggest that one's consciousness is at both
>> locations simultaneously.
>>
>> Since the UDA accepts digital mechanism as its first premise, it
>> is possible to instantiate a consciousness within a computer.
>> Therefore, instead of a physical teleportation from Brussels to
>> Washington and Moscow, we will have a digital transfer. This will
>> allow the experimenter to have complete control over the input each
>> mind receives and guarantee identical content of experience.
>>
>> A volunteer in Brussels has her brain frozen and scanned at the
>> necessary substitution level, and the results are loaded into a
>> computer with the appropriate simulation software that can
>> accurately model her brain's functions; therefore, from her
>> perspective, her consciousness continues onward from the time her
>> brain was frozen.
>>
>> To implement the teleportation, the simulation in the computer in
>> Brussels is paused, and a snapshot of the current state is sent
>> over the Internet to two computers, one in Washington and the other
>> in Moscow. Each of these computers has the same simulation software
>> and, upon receipt, resumes the simulation of the brain where it
>> left off in Brussels.
>>
>> The question is: if the sensory input is pre-fabricated and
>> identical in both computers, are there two minds, or simply two
>> implementations of the same mind? If you believe there are two
>> minds, consider the following additional steps.
>
>
>
> Only one mind, belonging to two relative histories (among an infinity).
>
>
>
>
>> Since it was established that the experimenter can "teleport" a
>> mind by pausing a simulation, sending its state over the network,
>> and resuming it elsewhere, what happens if the experimenter wants
>> to teleport the Washington mind to Moscow, and the Moscow mind to
>> Washington? Assume that both computers were preset to run the
>> simulation for X number of CPU instructions before pausing the
>> simulation and transferring the state, such that the states are
>> exactly the same when each is sent. Further assume that the hard
>> drive space on the computers is limited, so as they receive the
>> brain state, they overwrite their original save.
>>
>> During this procedure, the computers in Washington and Moscow each
>> receive the other's brain state; however, it is exactly the same as
>> the one they already had. Therefore the overwriting is a no-op.
>> After the transfer is complete, each computer resumes the
>> simulation. Now, is Moscow's mind on the Washington computer? If
>> so, how did a no-op (overwriting the file with the same bits)
>> accomplish the teleportation? If not, what makes the teleportation
>> fail?
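
(A minimal Python sketch may make the no-op puzzle concrete; the
hypothetical step() function below merely stands in for whatever
deterministic update the brain-simulation software would perform at the
chosen substitution level.)

import hashlib

def step(state: bytes) -> bytes:
    # Stand-in for one deterministic instruction of the brain simulation.
    return hashlib.sha256(state).digest()

def run(state: bytes, x: int) -> bytes:
    # Run the simulation for exactly x instructions, then pause.
    for _ in range(x):
        state = step(state)
    return state

brussels_snapshot = b"brain state scanned at the substitution level"

# The same snapshot is sent to Washington and Moscow, and each machine
# runs it for the same preset number of instructions before pausing.
washington = run(brussels_snapshot, x=1000)
moscow = run(brussels_snapshot, x=1000)

# The attempted Washington <-> Moscow teleportation: each machine
# overwrites its saved state with the state received from the other.
washington_after_swap = moscow      # bytes received from Moscow
moscow_after_swap = washington      # bytes received from Washington

# Because both runs were deterministic and started from the same
# snapshot, the overwrite is a byte-for-byte no-op.
assert washington_after_swap == washington
assert moscow_after_swap == moscow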
>>
>> What happens in the case where the Washington and Moscow computers
>> shut down for some period of time (5 minutes, for example) and then
>> ONLY the Moscow computer is turned back on? Did a "virtual"
>> teleportation occur between Washington and Moscow to allow the
>> consciousness that was in Washington to continue? If not, then
>> would a physical transfer of the data from Washington to Moscow have
>> saved its consciousness, and if so, what happened to the Moscow
>> consciousness?
>>
>> The above thought experiments led me to conclude that both computers
>> implement the same mind and are the same mind, despite having
>> different explanations.
>
> Right.
>
>
>> Turning off one of the computers in either Washington or Moscow,
>> therefore, does not end the consciousness.
>
>
> Yes.
>
>
>> Per the conclusions put forth in the UDA, the volunteer in Brussels
>> would say she has a 1/2 chance of ending up in the Washington
>> computer and 1/2 chance of ending up in the Moscow computer.
>> Therefore, if you told her "15 minutes after the teleportation the
>> computer in Washington will be shut off forever" she should expect a
>> 1/2 chance of dying. This seems to be a contradiction, as there is
>> a "virtual" teleportation from Washington to Moscow which saves the
>> consciousness in Washington from oblivion. So her chances of death
>> are 0, not 1/2, which is only explainable if we assume that her mind
>> is subjectively in both places after the first teleport from
>> Brussels, and so long as a simulation of her mind exists somewhere
>> she will never die.
>
>
> And an infinity of those simulations exists, a-spatially and a-
> temporally, in arithmetic (or in the "standard model of
> arithmetic"), which entails comp-immortality (this needs step 8!).
> Actually a mind is never really located somewhere. Location is a
> construct of the mind. A (relative) body is what makes it possible
> for a mind to manifest itself relatively to some history/
> computation-from-inside. The movie graph argument (the 8th step of
> the UDA) justifies the necessity of this, but just meditating on
> phantom limbs can help. The pain is not in the limb (given the
> absence of the limb), and the pain is not in the brain (the brain is
> not sensitive), yet the subject locates the pain in the limb.
> Similarly, we locate ourselves in space-time, but if you push the
> logic of comp to its ultimate conclusion you understand that,
> assuming comp, space-time is itself a phantom. Plato was on the
> right track (with respect to comp).
>
> (Math: Computer science makes it possible to derive the
> mathematical description of that phantom, making comp Popper-
> falsifiable. The phantom can be mathematically recovered from
> intensional variants of the self-referential (Gödel) provability
> modalities G and G*.)
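
For readers unfamiliar with the notation, G and G* are the standard
provability logics of Solovay and Boolos; a conventional axiomatization
(given only as background for the remark above) is:

\begin{align*}
\mathrm{G}\ (\text{also written GL}):\;& \text{K: } \Box(p \to q) \to (\Box p \to \Box q),
  \quad \text{L\"ob: } \Box(\Box p \to p) \to \Box p,\\
  & \text{rules: modus ponens and necessitation (from } p \text{ infer } \Box p).\\
\mathrm{G}^{*}:\;& \text{all theorems of G, plus reflection } \Box p \to p,\\
  & \text{rule: modus ponens only (necessitation is dropped).}
\end{align*}

G captures what a sound machine can prove about its own provability,
while G* captures what is true of it; the difference G* \ G is, roughly,
the gap Bruno mentions above between computer science and the
computer's own computer science.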
>
>
> ==========================
> Brent Meeker wrote
>
>> My guess is that eventually we'll be able to create AI/robots that
>> seem as intelligent and conscious as, for example, dogs seem. We'll
>> also be able to partially map brains so that we can say that when
>> these neurons do this the person is thinking thus and so. Once we
>> have this degree of understanding and control, questions about
>> "consciousness" will no longer seem relevant. They'll be like the
>> questions that philosophers asked about life before we understood
>> the molecular functions of living systems. They would ask: Where is
>> the life? Is a virus alive? How does life get passed from parent to
>> child? The questions won't get answered; they'll just be seen as
>> the wrong questions.
>
>
>
> You don't get the point. Mechanism is incompatible with naturalism.
> To solve the mind-body problem while keeping mechanism, the laws of
> physics have to be explained from computer science, even from the
> gap between computer science and the computer's computer science...
> Physics is the fixed point of universal machine self-observation.
That would be a very impressive result if you could prove it - and if you could
prove that there is no other empirically equivalent model. I've long been of
the opinion that space and time are constructs. I also think the integers and
arithmetic are constructs. But so far I understand your thesis to be that
physics consists of certain relations among experiences regarded as mental
events. This solves the mind-body problem by making the body a construct of the
mind. So far, so good. Further, you hold that these relations are Turing
computable and so exist in Platonia as a subset of all arithmetic. I like this
better than Tegmark's idea of our physics as a subset of all mathematics because
your idea is more specific and leads to questions that may be answerable. But I
still see some problems:

First, it doesn't eliminate the possibility that some other subset of Platonia,
e.g. geometry or topology, might also provide a representation of our physics.
In fact, given that our knowledge of physics is imprecise, it seems likely that
there are infinitely many subsets of Platonia that are models of our physics. Of
course you can argue that even a non-computable model of physics may be
approximated by a computable model to an adequate degree. But this just pushes
the question off to what is "adequate", and it does not warrant rejecting
materialism as explicated by Peter.

Second is the problem of finding the fixed point, or distinguishing the measure
on all the Turing computations that picks out our physics. I understand you
have some results, such as "no computation can know which computation it is",
which are interesting but do not pick out any particular physics. There's a
general problem here in that the current best theories of physics are based on
continuous variables. Many physicists think that an ultimate theory would be
discrete, but nobody knows how to make a discrete theory from which our
continuous theories would emerge.
> Let me know at which step (1?, ..., 8?) you have a problem. The only
> one not discussed thoroughly is the 8th one.
>
> To be sure, do you understand the nuances between the following
> theses:
>
> WEAK AI: some machines can behave as if they were conscious (but
> could as well be zombies)
> STRONG AI: some machines can be conscious
> COMP: I am a machine
>
> We have
>
> COMP => STRONG AI => WEAK AI
>
> WEAK AI does not imply STRONG AI, which does not imply COMP. (It is
> not because machines can be conscious that we are necessarily
> machines ourselves; of course, with Occam's razor, STRONG AI goes in
> the direction of COMP.)
>
> Do those nuances make sense? If not, then (1...8) does not, indeed,
> make sense. You just don't believe in consciousness and/or persons,
> as in the eliminative materialism of the neuro-philosophers (the
> Churchlands, almost Dennett in "Consciousness Explained").
As my lawyer friend says, "I'm not in the belief business."
>
> Or you make us very special infinite analog machines, but then you
> drop the digital mechanist thesis (even the naturalist one, which
> has been shown inconsistent by 1...8).
I think it might be that the universe is not computable. But I think it is very
likely that one's consciousness is computable, at least for a finite time period.
Brent
>
>
> Bruno Marchal
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>