2009/2/25 Jack Mallah <jackmallah.domain.name.hidden>:
> 1) The fair trade
>
> This is the teleportation or Star Trek transporter thought experiment. A person is disintegrated, while a physically identical copy is created elsewhere.
>
> Even on Star Trek, not everyone was comfortable with doing this. The first question is: Is the original person killed, or is he merely moved from one place to another? The second question is: Should he be worried?
>
> The answer to the first question depends on the definition of personal identity. If it is a causal chain, then if the transporter is reliable, the causal chain will continue. However, if the copy was only created due to extreme luck and its memory (though coincidentally identical to that of the original) is not determined by that of the original, then the chain was ended and a new one started.
>
> The second question is more important.
>
> Since we are considering the situation before the experiment, we have to use Caring Measure here. The temptation is to skip such complications because there is no splitting and no change in measure, but skipping them here can lead to confusion in more complicated situations.
>
> The utility function I'll use is oversimplified for most people, being purely utilitarian rather than conservatively history-respecting (a view that might oppose 'teleportation'), but it will serve.
>
> So if our utility function is U = M Q, where M is the guy's measure (which is constant here) and Q is his quality of life factor (which we can assume to be constant), we see that it does not depend on whether or not the teleportation is done. (In practice, Q should be better afterwards, or there is no reason to do it.) Therefore it is OK to do it. It is a fair trade.
>
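To make the quoted calculation concrete, here is a minimal Python
sketch; the function name and the numbers are illustrative assumptions
of mine, not anything from the post:

    # Jack's U = M*Q for the fair trade. Values are illustrative.
    def utility(measure, quality):
        return measure * quality

    M, Q = 1.0, 1.0
    U_before = utility(M, Q)    # one copy exists before teleportation
    U_after = utility(M, Q)     # original destroyed, identical copy
                                # created elsewhere: M is unchanged
    assert U_before == U_after  # the trade is utility-neutral
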
> 2) The unfair trade
>
> Now we come to the situation where there are 2 ‘copies’ of a person in the evening, but one will be removed overnight, leaving just one from then on. I’ll call this a culling.
>
> I pointed out that in this situation, the person does not know which copy he is, so subjectively he has a 50% chance of dying overnight. That is true, using causal chains to define identity, but the objection was raised that ‘since one copy survives, the person survives’ based on the ‘teleportation’ idea that the existence elsewhere of a person with the same memory and functioning is equivalent to the original person surviving.
>
> So to be clear, we can combine a culling with teleportation as follows: both copies are destroyed overnight, but elsewhere a new copy is created that is identical to what the copies would have been like had they survived.
>
> Is it still true that the person has a subjective 50% chance to die overnight? If causal chains are the definition, then depending on the teleporter's reliability and on how the culling was carried out, the chance of dying might be more like 100%. But as we have seen, definitions of personal identity are not important. What matters is whether the person should be unhappy about this state of affairs; in other words, whether his utility function is decreased by conducting the culling.
>
> Using U = M Q, it obviously is decreased, since M is halved and Q is unchanged. So as far as I can see, the only point of contention that might remain is whether this is a reasonable utility function. That is what the next thought experiment will address.
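On that utility function, the arithmetic of the culling is simple
(again a minimal sketch, with illustrative values of my own):

    # The culling on U = M*Q: two copies in the evening, one in the
    # morning, quality of life Q unchanged. Values are illustrative.
    Q = 1.0
    U_evening = 2.0 * Q   # M = 2: two copies exist
    U_morning = 1.0 * Q   # M = 1: one copy removed overnight
    # U is halved, which is why the culling counts as a loss on this
    # utility function.
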
If you're not worried about the fair trade, then to be consistent you
shouldn't be worried about the unfair trade either. In the fair trade,
one version of you, A, disappears overnight, and a new version of you,
B, is created elsewhere in the morning. The unfair trade is the same,
except that there is an extra version of you, A', which also
disappears overnight. Now why should the *addition* of another version
make you nervous when you wouldn't have been nervous otherwise? Sure,
you don't know whether you are A or A', but the situation is
symmetrical: if you are A, the presence of A' should make no
difference to you, and if you are A', the presence of A should make no
difference to you. And if something makes no difference to you, it
shouldn't affect your utility function.
> 3) The Riker brothers
>
> Will T. Riker tried to teleport from his spaceship down to a planet, but due to a freak storm, there was a malfunction. Luckily he was reconstructed back on the ship, fully intact, and the ship left the area.
>
> Unknown to those on the ship, a copy of him also materialized on the planet. He survived, and years later, the two were reunited when Will’s new ship passed by. Now known as Tom, the copy that was on the planet did not join Star Fleet but went on to have many adventures of his own, often supporting rebel causes that Will would not. Will and Tom over their lifetimes played important but often conflicting roles in galactic events. They married different women and had children of their own.
>
> It seems obvious to say that the two brothers became very different people: similar in appearance, but as different as two brothers typically are. It should be obvious that killing one of them (say, Will) would not be morally OK just because Tom is still alive somewhere. Based on functionalism, Will is just as conscious as any other person; his measure (amount of consciousness) is not half that of a normal person.
>
> When did they become two people, rather than one? Did it happen as soon as the first bit of information they received was different? If that were so, then it would be morally imperative to measure many bits of information to differentiate your own MWI copies. But no one believes that.
>
> No, what matters is that Riker’s measure increased during the accident. As soon as there were two copies of him – even before they opened their eyes and saw different surroundings – they were two different (though initially similar) people, and each had his own value. Their later experiences just differentiated them further.
That Riker's measure increased is not the important thing here: it is
that the two Rikers differentiated. Killing one of them after they had
differentiated would be wrong, but killing one of them before they had
differentiated would be OK. To give another example, if whenever the
teleporter operates two copies are created, one on the surface of the
planet and one at the centre of the planet's sun, then it wouldn't
worry me to use it: the copy at the centre of the sun is destroyed
before the two copies can differentiate, so I would expect with
certainty to find myself on the planet. On the other hand, if the
teleporter is like the one in "The Prestige", where the copy left
behind lives long enough to differentiate before dying, I would be
very anxious about using it, since I would have a 1/2 chance of an
unpleasant death.
You might not agree with the above appraisal, just as someone may
refuse to use the teleporter, sticking with the belief that it would
mean certain death no matter what arguments are used to try to
persuade him otherwise. If so, this would seem to be the reason you
don't accept QS (quantum suicide), and we have reached an impasse.
> 4) The 'snatch'
>
> Suppose an alien decided to ‘clone’ (copy) a human, much as Riker was copied. The alien scans Earth and gets the information he needs, returns to his home planet many light years away, and produces the clone there.
>
> Does this in any way affect the original on Earth? Is half of his consciousness suddenly snatched away? It seems obvious to me that it is not. If it is not, then measure is what matters, not ‘first person probabilities’.
The cloned person should expect to suddenly find himself at the
distant planet with 50% probability. This is equivalent to
non-destructive teleportation, or to destructive teleportation to two
separate locations.
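A minimal sketch of the measure-weighted expectation I have in mind,
with illustrative names and measures for the two copies:

    # Subjective probability of each successor is its share of the
    # total measure. Names and values are illustrative.
    successors = {"original on Earth": 1.0, "clone on far planet": 1.0}
    total = sum(successors.values())
    for name, m in successors.items():
        print(name, m / total)   # 0.5 each, given equal measures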
> 5) The groggy mornings
>
> Bob always wakes up to his alarm clock at 7:00 every morning. However, he is always groggy at first, and some of his memory does not ‘come on line’ until 7:05.
>
> During this time, Bob does not remember how old he is. Since there is genuine subjective uncertainty on his part, he can use the Reflection Argument to guess his current age. The effective probability of his being a certain age is proportional to his measure during that year. Thus, we can talk about his expectation value for his age, the age which he is 90% ‘likely’ to be younger than, etc.
>
> If he is mortal, then his measure decreases with time, so that his expected age and so on fall into typical human parameters.
>
> If he were immortal, then his expected age would diverge, and the ‘chance’ that his age would be normal is 0%. Clearly that is not the case. Thus, having a long time tail in the measure distribution is not immortality.
If the probability function has an infinite tail but most of the area
under the curve is to the left of a finite age, then whenever he
forgets how old he is he should bet that he is younger than that age.
But if the tail is infinite, that still means he can expect to live
forever.
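Both halves of that can hold at once: a heavy-tailed measure
distribution can have a divergent expectation while still putting,
say, 90% of its area below a finite age. A minimal sketch with a
Pareto-style tail (the parameters are illustrative):

    # Pareto distribution with shape alpha <= 1: the mean diverges,
    # but every quantile is finite. Parameters are illustrative.
    alpha, x_min = 0.9, 20.0
    # CDF is F(x) = 1 - (x_min / x)**alpha; inverting it at p = 0.9:
    q90 = x_min * (1 - 0.9) ** (-1 / alpha)
    print(q90)   # about 258: bet on being younger than this, even
                 # though the expected age is infinite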
--
Stathis Papaioannou