Re: QTI & euthanasia

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Sun, 02 Nov 2008 23:32:57 -0800

Bruno Marchal wrote:
> Replies to Jason Resch and Brent Meeker:
>
>
> On 01 Nov 2008, at 12:26, Jason Resch wrote:
>
>> I've thought of an interesting modification to the original UDA
>> argument which would suggest that one's consciousness is at both
>> locations simultaneously.
>>
>> Since the UDA accepts digital mechanism as its first premise, it
>> is possible to instantiate a consciousness within a computer.
>> Therefore, instead of a physical teleportation from Brussels to
>> Washington and Moscow, we will have a digital transfer. This
>> will allow the experimenter to have complete control over the input
>> each mind receives and guarantee identical content of experience.
>>
>> A volunteer in Brussels has her brain frozen and scanned at the
>> necessary substitution level, and the results are loaded into a
>> computer with the appropriate simulation software that can
>> accurately model her brain's functions; therefore, from her
>> perspective, her consciousness continues onward from the time her
>> brain was frozen.
>>
>> To implement the teleportation, the simulation in the computer in
>> Brussels is paused, and a snapshot of the current state is sent over
>> the Internet to two computers, one in Washington and the other in
>> Moscow. Each of these computers has the same simulation software
>> and, upon receipt, resumes the simulation of the brain where it left
>> off in Brussels.
>>
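A minimal sketch of the pause/snapshot/resume "teleport" Jason
describes, in Python. Everything here is hypothetical: the brain state
is reduced to a picklable toy value and the dynamics to a placeholder.

    import pickle

    class BrainSim:
        """Toy stand-in for the brain-simulation software."""
        def __init__(self, state):
            self.state = state        # the scanned state, at substitution level
        def run(self, steps):
            for _ in range(steps):    # placeholder for the real dynamics
                self.state = hash((self.state, "tick"))

    brussels = BrainSim(state=12345)
    brussels.run(1000)                # some experience passes in Brussels

    snapshot = pickle.dumps(brussels.state)          # pause: serialize state
    # ... snapshot is sent over the Internet to both destinations ...
    washington = BrainSim(pickle.loads(snapshot))    # resume in Washington
    moscow = BrainSim(pickle.loads(snapshot))        # resume in Moscow
    assert washington.state == moscow.state          # bit-identical continuations
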
>> The question is: if the sensory input is pre-fabricated and
>> identical in both computers, are there two minds, or simply two
>> implementations of the same mind? If you believe there are two
>> minds, consider the following additional steps.
>>
>
> Only one mind, belonging to two relative histories (among an infinity).
>
>> Since it was established that the experimenter can "teleport" minds
>> by pausing a simulation, sending their content over the network, and
>> resuming it elsewhere, what happens if the experimenter wants
>> to teleport the Washington mind to Moscow, and the Moscow mind to
>> Washington? Assume that both computers were preset to run the
>> simulation for X CPU instructions before pausing the
>> simulation and transferring the state, such that the states are
>> exactly the same when each is sent. Further assume that the
>> hard drive space on the computers is limited, so as they receive the
>> brain state, they overwrite their original save.
>>
>> During this procedure, the computers in Washington and Moscow each
>> receive the other's brain state; however, it is exactly the same as
>> the one they already had. Therefore the overwriting is a no-op.
>> After the transfer is complete, each computer resumes the
>> simulation. Is Moscow's mind now on the Washington computer? If so,
>> how did a no-op (overwriting the file with the same bits) accomplish
>> the teleportation? If not, what makes the teleportation fail?
>>
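The crux of the swap, reduced to bytes: overwriting a saved state with
an identical copy changes nothing measurable. A toy sketch; the file
name and stand-in state are invented.

    import hashlib
    import pickle

    snapshot = pickle.dumps(12345)    # stand-in for the serialized brain state

    with open("washington.state", "wb") as f:
        f.write(snapshot)             # Washington's original save
    before = hashlib.sha256(open("washington.state", "rb").read()).hexdigest()

    with open("washington.state", "wb") as f:
        f.write(snapshot)             # "Moscow's" state arrives: the same bits
    after = hashlib.sha256(open("washington.state", "rb").read()).hexdigest()

    assert before == after            # nothing on disk records whether the
                                      # "swap teleportation" happened at all
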
>> What happens in the case where the Washington and Moscow computers
>> shut down for some period of time (5 minutes, for example) and then
>> ONLY the Moscow computer is turned back on? Did a "virtual"
>> teleportation occur between Washington and Moscow to allow the
>> consciousness that was in Washington to continue? If not, would
>> a physical transfer of the data from Washington to Moscow have
>> saved its consciousness, and if so, what happened to the Moscow
>> consciousness?
>>
>> The above thought experiments led me to conclude that both computers
>> implement the same mind and are the same mind, despite being
>> different implementations.
>>
>
> Right.
>
>> Turning off one of the computers in either Washington or Moscow,
>> therefore, does not end the consciousness.
>>
>
> Yes.
>
>> Per the conclusions put forth in the UDA, the volunteer in Brussels
>> would say she has a 1/2 chance of ending up in the Washington
>> computer and a 1/2 chance of ending up in the Moscow computer.
>> Therefore, if you told her "15 minutes after the teleportation the
>> computer in Washington will be shut off forever", she should expect
>> a 1/2 chance of dying. This seems to be a contradiction, as there is
>> a "virtual" teleportation from Washington to Moscow which saves the
>> consciousness in Washington from oblivion. So her chances of death
>> are 0, not 1/2, which is only explainable if we assume that her mind
>> is subjectively in both places after the first teleport from
>> Brussels, and that, so long as a simulation of her mind exists
>> somewhere, she will never die.
>>
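On Jason's reading the 1/2 answer fails because minds are individuated
by state, not by hardware. A toy way to put it; the machine names and
stand-in bytes are just labels.

    snapshot = b"...serialized brain state..."   # stand-in bytes
    instances = {"washington": snapshot, "moscow": snapshot}
    minds = set(instances.values())
    assert len(minds) == 1            # one mind, two implementations

    del instances["washington"]       # the Washington computer is shut off
    assert set(instances.values()) == minds      # the same one mind persists
    # On this accounting her chance of death is 0, not 1/2: no state was
    # lost when a redundant copy stopped running.
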
>
>
> And an infinity of those simulations exists, a-spatially and a-
> temporally, in arithmetic (or in the "standard model of
> arithmetic"), which entails comp-immortality (this needs step 8!).
> Actually a mind is never really located somewhere. Location is a
> construct of the mind. A (relative) body is what makes it possible
> for a mind to manifest itself relative to some history/computation-
> from-inside. The movie-graph argument (the 8th step of the UDA)
> justifies the necessity of this, but just meditating on phantom limbs
> can help. The pain is not in the limb (given the absence of the
> limb), and the pain is not in the brain (the brain is not sensitive),
> yet the subject locates the pain in the limb. Similarly, we locate
> ourselves in space-time, but if you push the logic of comp to its
> ultimate conclusion you understand that, assuming comp, space-time is
> itself a phantom. Plato was on the right track (with respect to comp).
>
> (Math: computer science makes it possible to derive the
> mathematical description of that phantom, making comp Popper-
> falsifiable. The phantom can be mathematically recovered from
> intensional variants of the self-referential (Gödel) provability
> modalities G and G*.)
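
An aside for readers who want the systems Bruno names: the standard
axiomatization from the provability-logic literature (Solovay 1976),
stated here in LaTeX; this presentation is not taken from Bruno's post.

    % G (also written GL): the provability logic of Peano arithmetic
    \mathbf{G}:\quad \text{all tautologies},\qquad
      \Box(p \to q) \to (\Box p \to \Box q),\qquad
      \Box(\Box p \to p) \to \Box p \ \ (\text{L\"ob's axiom})
    % rules: modus ponens, and necessitation (from p infer \Box p)

    % G*: the logic of the TRUE provability statements; it adds reflection
    % and is closed under modus ponens only (no necessitation)
    \mathbf{G}^{*} \;=\; \text{MP-closure of}\ \ \mathbf{G} \cup \{\Box p \to p\}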
>
>
> ==========================
> Brent Meeker wrote
>
>
>> My guess is that eventually we'll be able to create AI/robots that
>> seem as intelligent and conscious as, for example, dogs seem. We'll
>> also be able to partially map brains so that we can say that when
>> these neurons do this, the person is thinking thus and so. Once we
>> have this degree of understanding and control, questions about
>> "consciousness" will no longer seem relevant. They'll be like the
>> questions that philosophers asked about life before we understood
>> the molecular functions of living systems. They would ask: Where is
>> the life? Is a virus alive? How does life get passed from parent to
>> child? The questions won't get answered; they'll just be seen as
>> the wrong questions.
>>
>
> You don't get the point. Mechanism is incompatible with naturalism.
> To solve the mind-body problem while keeping mechanism, the laws of
> physics have to be explained from computer science, even from the gap
> between computer science and the computer's computer science ...
> Physics is the fixed point of universal-machine self-observation.
> Let me know at which step (1? ... 8?) you have a problem. The only
> one not discussed thoroughly is the 8th.
>
I have reservations about #6: Consciousness is a process, but it
depends on a context. In the argument over whether a stone is a
computer, even a universal computer, the error is in ignoring that the
computation in a computer has an interpretation which the programmer
provides. If he can provide this interpretation to the processes within
a stone, then indeed it would be a computer; but in general he can't. I
think consciousness is similar: it is a process, but it only has an
interpretation as a *conscious* process within a context of perception
and action within a world. That is why I think philosophical zombies
are impossible. But then, when you imagine reproducing someone's
consciousness in a computer and simulating all the input/output, i.e.
all the context, you have created a separate world in which there is a
consciousness in the context of *that* world. It doesn't follow that it
is a consciousness in this world. The identification of things that
happen in the computer as "he experiences this" depends on our
interpretation of the computer program. There is no inherent,
ding-an-sich consciousness.

Your step #6 can be saved by supposing that a robot is constructed so
that the duplicated consciousness lives in the context of our world, but
this does not support the extension to the UD in step #7. To identify
some program the UD is generating as reproducing someone's consciousness
requires an interpretation. But an interpretation is a mapping between
the program states and real-world states, so it presumes a real world.
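
To make "an interpretation is a mapping" concrete, here is a toy
version in Python; the trace and both mappings are invented for
illustration.

    # One program trace, two incompatible interpretations. The mapping from
    # program states to world states, not the trace itself, fixes what the
    # process is "about"; the mapping's right-hand side presumes a world.
    trace = [0, 1, 2, 3]              # the bare program/physical states

    as_thermostat = {0: "cold", 1: "cool", 2: "warm", 3: "hot"}
    as_chess = {0: "e4", 1: "e5", 2: "Nf3", 3: "Nc6"}

    print([as_thermostat[s] for s in trace])     # one world it could describe
    print([as_chess[s] for s in trace])          # a different world entirely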

I have several problems with step #8. What are consistent 1-histories?
Can they be characterized without reference to nomological consistency?
The reduction to Platonia seems almost like a reductio argument against
comp. Except that comp was the assumption that one physical process can
be replaced by another that instantiates the same physical relations. I
don't see how it follows from that that there need be no instantiation
at all, and that we can simply take the timeless existence in Platonia
to be equivalent.

You write: "...the appearance of physics must be recovered from some
point of views emerging from those propositions." But how does this
"emergence" work? Isn't it like saying that if I postulate an absolute
whole that includes all logically possible relations, then this must
include the appearance of physics, and all I need is the probability
measure that picks it out? It's like Michelangelo saying, "This block
of marble contains a statue of David. All I need is the measure that
assigns 0 to the part that's not David and 1 to the part that is
David."

> To be sure, do you understand the nuance between the following theses:
>
> WEAK AI: some machines can behave as if they were conscious (but
> could as well be zombies)
> STRONG AI: some machines can be conscious
> COMP: I am a machine
>
> We have
>
> COMP => STRONG AI => WEAK AI
>
> WEAK AI does not imply STRONG AI, which does not imply COMP. (It is
> not because machines can be conscious that we are necessarily
> machines ourselves; of course, with Occam's razor, STRONG AI goes in
> the direction of COMP.)
>
> Do those nuances make sense? If not, (1...8) does not, indeed, make
> sense; you just don't believe in consciousness and/or persons, as in
> the eliminative materialism of the neuro-philosophers (the
> Churchlands, almost Dennett in "Consciousness Explained").
>
I think they make some good arguments. I don't think that consciousness
is a thing or can exist apart from a much larger context.

Brent


> Or you make us very special infinite analog machines, but then you
> drop the digital mechanist thesis (even the naturalist one, which has
> been shown inconsistent by 1...8).
>
>
> Bruno Marchal
>
> http://iridia.ulb.ac.be/~marchal/
>