SIA and the presumptuous philosopher

From: Wei Dai <weidai.domain.name.hidden>
Date: Tue, 13 Aug 2002 19:48:00 -0700

When making decisions without knowing which observer moment you're at,
there are two equivalent ways to go about it if you can ignore
game-theoretic considerations (meaning there is no reason to expect a
non-Pareto-optimal outcome). You can think of yourself as being at all of
the candidate observer moments, or at just one of them. In my "framework
for multiverse decision theory" post, I used the latter approach because
it's more general - it works even when game-theoretic concerns are
present. However, when they are not, the first approach gives an
additional perspective on the problem.

Consider a thought experiment from Nick Bostrom's Ph.D. thesis (available
at http://www.anthropic-principle.com/phd/phdhtml.html), which is meant to
show the counterintuitiveness of the self-indication axiom (SIA):

It is the year 2100 and physicists have narrowed down the search for a
theory of everything to only two remaining plausible candidate theories,
T1 and T2 (using considerations from super-duper symmetry). According to
T1 the world is very, very big but finite, and there are a total of a
trillion trillion observers in the cosmos. According to T2, the world is
very, very, very big but finite, and there are a trillion trillion
trillion observers. The super-duper symmetry considerations seem to be
roughly indifferent between these two theories. The physicists are
planning on carrying out a simple experiment that will falsify one of the
theories. Enter the presumptuous philosopher: "Hey guys, it is completely
unnecessary for you to do the experiment, because I can already show to
you that T2 is about a trillion times more likely to be true than T1
(whereupon the philosopher runs the God’s Coin Toss thought experiment and
explains Model 3)!"

One suspects the Nobel Prize committee to be a bit hesitant about awarding
the presumptuous philosopher the big one for this contribution.
(end quote)

First, let me add some details to make the problem more precise. Suppose
that there are two possible multiverses, one of which is real. The first
contains one universe, W1, and the second contains one universe, W2. T1 is
true for W1 and T2 is true for W2. W2 consists of one trillion identical
copies of a space-time region whose history is identical to W1 until the
time of the experiment. You are either the observer in W1 or his one
trillion counterparts in W2, and you are in charge of deciding whether or
not to perform
the experiment. The cost of doing the experiment is $1 (per copy). The
cost of not doing the experiment and just assuming that T2 is true is $x
(for W1 only; think of this as money wasted trying to contact the other
copies). Both costs are your personal responsibility.

So here are the two ways of thinking about this problem:

1. You are either in W1 or are all of the one trillion copies in W2. Let
the probability of you being in W1 be p. The expected utility of doing the
experiment is U(experiment) = p*U(lose $1) + (1-p)*U(one trillion copies
each lose $1). U(do not experiment) = p*U(lose $x). Clearly SIA should not
be applied in this case, because the trillion multiplier is already taken
into account in "U(one trillion copies each lose $1)", and there is no
other relevant information to make use of, so p = 0.5 seems reasonable.

2. You are either in W1 or are one of the one trillion copies in W2. Let
the probability of you being in W1 be q. U(experiment) = U(lose $1). U(do
not experiment) = q*U(lose $x).
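
To make the two framings concrete, here is a minimal Python sketch (my
own illustration; the utility numbers, in particular the value assigned
to "one trillion copies each lose $1", are stand-in assumptions):

# Sketch of the two framings, with made-up utility numbers.
# U1  = utility of one copy losing $1
# U_T = utility of one trillion copies each losing $1 -- a value
#       judgement, anywhere from U1 to 10**12 * U1
# U_x = utility of losing $x, with x chosen for indifference

U1 = -1.0
U_T = -1e6           # assumption: somewhere between -1 and -1e12
U_x = U1 + U_T       # indifference condition derived below, given p = 0.5

# Framing 1: you are either in W1 or all trillion copies in W2.
p = 0.5
eu1_experiment = p * U1 + (1 - p) * U_T
eu1_skip = p * U_x

# Framing 2: you are either in W1 or one of the trillion copies.
q = U1 / U_x         # the q that makes framing 2 agree (derived below)
eu2_experiment = U1
eu2_skip = q * U_x

print(eu1_experiment, eu1_skip)   # equal by construction
print(eu2_experiment, eu2_skip)   # equal for this q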

Suppose p = 0.5; what would q have to be so that analyses 1 and 2 both
reach the same conclusion? Let's assume that x is chosen so that you are
indifferent between experiment and not experiment. That means:

U(experiment) = p*U(lose $1) + (1-p)*U(one trillion copies each lose $1)
              = U(do not experiment) = p*U(lose $x)

Since p = 1-p = 0.5, dividing through by p gives:

U(lose $1) + U(one trillion copies each lose $1) = U(lose $x)

and

U(experiment) = U(lose $1) = U(do not experiment) = q*U(lose $x)

so

q = U(lose $1) / U(lose $x)
  = U(lose $1) / (U(lose $1) + U(one trillion copies each lose $1))
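
As a quick sanity check of this derivation (sympy used purely as a
scratchpad; not part of the original argument):

# Symbolic check of the derivation of q.
import sympy as sp

U1, U_T, q = sp.symbols('U1 U_T q')  # U(lose $1), U(trillion lose $1), q

U_x = U1 + U_T                       # framing-1 indifference with p = 0.5
# Framing-2 indifference: U(lose $1) = q * U(lose $x)
print(sp.solve(sp.Eq(U1, q * U_x), q))   # [U1/(U1 + U_T)]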

So it turns out that SIA is true if and only if U(one trillion copies each
lose $1) = 10^12 * U(lose $1), and on the other hand q = 0.5 if and only
if U(lose $1) = U(one trillion copies each lose $1). I argue that neither
should be assumed in general. It's a subjective value judgement how much
worse it is for one trillion copies to lose $1 than for one copy to lose
$1. It could be any number between 1 and 10^12, depending on one's
personal philosophy about the value of identical copies. (Where does the
intuition that the presumptuous philosopher does not deserve the Nobel
Prize come from? It must be that most of us do not think it's anywhere
near 10^12 times worse.) But that means q, despite being named a
probability, is also a matter of value judgement. There can be no
principle of rational reasoning (such as the SIA) that uniquely determines
what q should be.
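
To see how q tracks the value judgement, write U(one trillion copies
each lose $1) = k * U(lose $1) for some k between 1 and 10^12; the
formula above then gives q = 1/(1+k). A small illustrative table:

# q as a function of the value judgement k, where
# U(one trillion copies each lose $1) = k * U(lose $1).
# Substituting into q = U1/(U1 + U_T) gives q = 1/(1 + k).
for k in [1, 10**3, 10**6, 10**12]:
    print(f"k = {k:>13}: q = {1 / (1 + k):.3e}")
# k = 1     : q = 0.5    (a trillion copies losing $1 is no worse than one)
# k = 10**12: q ~ 1e-12  (full additivity across copies: the SIA answer)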

Well, not quite. I cheated a bit above in analysis 2. U(experiment) is
actually q * U(lose $1 | T1) + (1-q) * U(lose $1 | T2), so I implicitly
assumed that U(lose $1 | T1) = U(lose $1 | T2). What if we hold
q fixed and look at what U(lose $1 | T2) would have to be to make you
indifferent between experiment and not experiment? Some algebraic
manipulation shows U(lose $1 | T2) = q/(1-q)*U(one trillion copies each
lose $1). So if we assume q = 0.5, then U(lose $1 | T2) = U(one trillion
copies each lose $1), and if we assume instead the SIA, so that q =
1/(10^12+1), then U(lose $1 | T2) = U(one trillion copies each lose $1) /
10^12.
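
The same scratchpad check works for this corrected version (again just
an illustration, with sympy standing in for the algebra):

# Check of the conditional-utility correction.
import sympy as sp

U1_T1, U1_T2, U_T, q = sp.symbols('U1_T1 U1_T2 U_T q')

U_x = U1_T1 + U_T                            # from framing-1 indifference
eu_experiment = q * U1_T1 + (1 - q) * U1_T2  # utilities conditional on theory
eu_skip = q * U_x
U1_T2_sol = sp.solve(sp.Eq(eu_experiment, eu_skip), U1_T2)[0]
print(sp.simplify(U1_T2_sol))                # equivalent to q*U_T/(1 - q)

# The two special cases discussed above:
print(U1_T2_sol.subs(q, sp.Rational(1, 2)))                         # U_T
print(sp.simplify(U1_T2_sol.subs(q, sp.Rational(1, 10**12 + 1))))   # U_T/10**12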

So now we have three choices: make q a matter of value judgement,
choose q = 0.5, or adopt the SIA. I think q = 0.5 can be ruled out first,
because U(lose $1 | T2) = U(one trillion copies each lose $1) is
completely unintuitive. The SIA is problematic when there are potentially
infinitely many observers. Suppose we replace "trillion copies" with
"infinite number of copies" in this thought experiment. Then the SIA
implies q = 0 and U(lose $1 | T2) = 0, which makes no sense. That
leaves q a matter of value judgement, which seems somewhat unsatisfactory
also. But perhaps it's the best solution we can get.
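
(Continuing the illustrative parametrisation from above, with N copies
in place of the trillion:)

# Under full additivity (the SIA), q = 1/(1 + N) and
# U(lose $1 | T2) = U_T / N, both vanishing as N grows:
for N in [10**12, 10**24, 10**48]:
    print(f"N = {N:.0e}: q = {1 / (1 + N):.1e}")
# As N -> infinity, q -> 0 and U(lose $1 | T2) -> 0: skipping the
# experiment becomes costless no matter how large $x is -- the absurdity
# noted above.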