Re: another paradox and a solution

From: Nick Bostrom <bostrom.domain.name.hidden>
Date: Fri, 27 Feb 1998 01:33:23 +0000

 Wei Dai wrote:

> On Wed, Feb 25, 1998 at 05:59:22PM +0000, Nick Bostrom wrote:
> > What is rational for you to do depends on what your goals are. If the
> > only thing we cared about was the average standard of living in the
> > world, then it might indeed be rational to kill off the poor.
> > Similarly, if the only thing you cared about was the average
> > prosperity of future continuations of your present persona, then it
> > might be rational for you to kill off the poorest 99% of your future
> > continuations. The lesson to draw is, I think, that these averages
> > are not the only thing we care for; the number of people/personal
> > continuations enjoying a given standard of living is also important.
> > If we assume that that is part of our goals, then the Russian
> > roulette option is no longer recommended.
>
> I have no idea how decision theory can deal with global goals, i.e., goals
> that refer to the outside view instead of the inside view.

It might be problematic on the AUH (a lot of things are), but apart
from that I don't see any problems in the present application. What
is the specific difficulty you see?

> For the moment,
> let's just talk about local goals, or equivalently assign utilities only
> to actual perceptions. For the paradox, I'm assuming the experimenter gets
> the same amount of satisfaction from spending each dollar, whether she
> spends it before or after the experiment.

There might be many instances of the experimenter after the
experiment. Whose satisfaction are you talking about? If you are
talking about average satisfaction among the actual instances of the
experimenter that will exist after the experiment, and all you care
about is this average (you don't care at all about how many instances
are enjoying this satisfaction), then you might indeed get the
implication that you should play Russian roulette in your thought
experiment. But what's so paradoxical about that? It wouldn't mean
that I would have any reason to play Russian roulette in that
situation, for I don't think I have the goals that your argument
presupposes. I care about how many branches I will continue to exist
on / how many copies of me there will be.
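
To make the difference between the two kinds of goals concrete, here
is a toy calculation (my own illustrative numbers, not anything taken
from your setup): suppose there would be 100 continuations, 99 poor
ones with welfare 1 and one rich one with welfare 100, and the
"roulette" eliminates the 99 poor ones.

    # Toy illustration (made-up numbers): average welfare among surviving
    # copies versus a total that also weights how many copies enjoy it.

    def average_utility(branch_utilities):
        # Average welfare among the copies that exist; ignores how many there are.
        return sum(branch_utilities) / len(branch_utilities)

    def total_utility(branch_utilities):
        # Sum over copies: more copies at a given welfare level count for more.
        return sum(branch_utilities)

    before = [1] * 99 + [100]   # 99 poor continuations and 1 rich one
    after = [100]               # only the rich continuation survives the roulette

    print(average_utility(before), average_utility(after))  # 1.99 vs 100.0: roulette wins
    print(total_utility(before), total_utility(after))      # 199  vs 100:   roulette loses

On the averaging goal the roulette looks attractive; on a goal that
also counts the number of continuations it does not.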

> > But what does it mean that "I will perceive X"? Does it mean that
> > there is at least one continuation (a copy perhaps) of my present
> > self that will perceive X? Or does it mean that, if we
> > randomly choose one future continuation from the set of all
> > future continuations of myself, this random sample will
> > perceive X? In the latter case, we might want to count the number of
> > future continuations of myself that perceive A and divide this number
> > by the total number of future continuations of myself. The resulting
> > ratio would be the probability of "I will perceive X." in this
> > sense.
>
> I define it as the measure of my future continuations who perceive X
> divided by the measure of my present self.

That's how you define the probability, yes, but how do you define the
proposition "I will perceive X"?

> Tegmark's definition is
> equivalent to the second definition you give. I'm arguing that my
> definition is more self-consistent and more compatible with decision
> theory.

But your argument, it seems, presupposes that I can't care about how
many instances of me there will exist. That seems wrong.

> > (This still leaves us with the problem of how to deal with the
> > infinities.)
>
> This is easy to solve. Instead of dealing with numbers of instances, we
> deal with measures. The measure of each string is equal to its universal a
> priori probability.

Well, I think that's problematic. How do we interpret these
"measures"? I.e., what does it mean to say that a certain instance of
experience has a certain measure m?
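
For concreteness, here is a toy sketch of how I read the formal side
of your proposal (my own made-up numbers; I also assume, just for the
sketch, that the present self's measure equals the sum of its
continuations' measures):

    # Toy sketch of the measure-based definition (hypothetical numbers).
    # Each continuation gets a measure standing in for its universal a priori
    # probability, here taken to be 2**-L for an assumed shortest-program length L.

    continuations = {
        "perceives X, branch A": 10,   # assumed program length in bits
        "perceives X, branch B": 12,
        "perceives Y, branch C": 11,
    }

    def measure(program_length_bits):
        # Crude stand-in for the universal prior m(s) ~ 2**-K(s).
        return 2.0 ** -program_length_bits

    # Simplifying assumption: the present self's measure is the sum of the
    # measures of its continuations.
    measure_of_present_self = sum(measure(l) for l in continuations.values())
    measure_of_X = sum(measure(l) for desc, l in continuations.items()
                       if desc.startswith("perceives X"))

    # The probability of "I will perceive X" on your definition, given these numbers:
    print(measure_of_X / measure_of_present_self)

But my question is about what such a number means, i.e. how the
measure of an instance of experience is to be interpreted, not about
how one would compute with it.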

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom.domain.name.hidden
http://www.hedweb.com/nickb