# Re: another paradox and a solution

From: Wei Dai <weidai.domain.name.hidden>
Date: Wed, 25 Feb 1998 17:36:53 -0800

On Wed, Feb 25, 1998 at 05:59:22PM +0000, Nick Bostrom wrote:
> What is rational for you to do depends on what your goals are. If the
> only thing we cared about was the average standard of living in the
> world, then it might indeed be rational to kill off the poor.
> Similarly, if the only thing you cared about was the average
> prosperity of future continuations of your present persona, then it
> might be rational for you to kill off the poorest 99% of your future
> continuations. The lesson to draw is, I think, that these averages
> are not the only thing we care for; the number of people/personal
> continuations enjoying a given standard of living is also important.
> If we assume that that is part of our goals, then the Russian
> roulette option is no longer recommended.
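
Bostrom's average-versus-total point can be made concrete with toy numbers (the figures below are my own illustration, not from the thread): eliminating the poorest members raises the average standard of living while lowering the total.

```python
# Hypothetical population: four poor people and one rich one.
living_standards = [1, 1, 1, 1, 100]

average_before = sum(living_standards) / len(living_standards)  # 20.8
total_before = sum(living_standards)                            # 104

# "Kill off the poor": keep only those above the lowest standard.
survivors = [s for s in living_standards if s > 1]
average_after = sum(survivors) / len(survivors)  # 100.0
total_after = sum(survivors)                     # 100

# A goal of maximizing the average recommends the cull; a goal that also
# weighs how many people enjoy a given standard does not.
print(average_before, average_after)  # 20.8 100.0
print(total_before, total_after)      # 104 100
```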

I have no idea how decision theory can deal with global goals, i.e., goals
that refer to the outside view instead of the inside view. For the moment,
let's just talk about local goals, or equivalently assign utilities only
to actual perceptions. For the paradox, I'm assuming the experimenter gets
the same amount of satisfaction from spending each dollar, whether she
spends it before or after the experiment.

> But what does it mean that "I will perceive X"? Does it mean that
> there is at least one continuation (a copy perhaps) of my present
> self that will perceive X? Or does it mean that, if we
> randomly choose one future continuation from the set of all
> future continuations of myself, this random sample will
> perceive X? In the latter case, we might want to count the number of
> future continuations of myself that perceive A and divide this number
> by the total number of future continuations of myself. The resulting
> ratio would be the probability of "I will perceive X." in this
> sense.

I define it as the measure of my future continuations who perceive X
divided by the measure of my present self. Tegmark's definition is
equivalent to the second definition you give. I'm arguing that my
definition is more self-consistent and more compatible with decision
theory.
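
To make the contrast between the two definitions concrete, here is a minimal sketch (the branch structure and measure values are my invented assumptions, not from the thread). The measure-based definition divides the measure of continuations perceiving X by the measure of the present self, so it can fall below the counting ratio when low-measure continuations are lost; the counting definition simply divides heads by heads.

```python
# Toy model: three future continuations of a present self of measure 1.0.
# Their total measure (0.7) is less than the present measure, i.e. some
# measure is lost -- the situation at issue in the Russian roulette case.
continuations = [
    {"perceives_X": True,  "measure": 0.5},
    {"perceives_X": False, "measure": 0.1},
    {"perceives_X": False, "measure": 0.1},
]
present_measure = 1.0

# Wei Dai's definition: measure of continuations perceiving X
# divided by the measure of the present self.
p_measure = sum(c["measure"] for c in continuations
                if c["perceives_X"]) / present_measure

# Counting definition (Tegmark-style): number of continuations
# perceiving X divided by the total number of continuations.
p_count = sum(1 for c in continuations
              if c["perceives_X"]) / len(continuations)

print(p_measure)  # 0.5
print(p_count)    # 1/3, i.e. 0.333...
```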

> In ordinary contexts, one would say "There is a 34% probability that
> the world is such-and-such. And if the world is such-and-such, then
> that means there are 326 instances of experiences of type A (i.e. 326
> brains in that specific state)."
>
> On the AUH then there would be a 100% probability that there are 326
> instances of experiences of type A (say -- we set aside for the
> moment the problem that results from the fact that the AUH seems to
> imply that the number is infinite). What you can do, given the AUH,
> is to count what fraction of future continuations of yourself
> perceive A. This fraction could perhaps then be interpreted by you as
> your subjective probability that you will wake up next morning and
> perceive A.

This is the definition that I'm arguing against, because of the