Fwd: Re: PhD-thesis on Observational Selection Effects

From: Nick Bostrom <nick.domain.name.hidden>
Date: Thu, 19 Oct 2000 20:14:18 -0400

Jacques wrote:

>>From: "Nick Bostrom" <nick.bostrom.domain.name.hidden>
>>Jacques Mallah wrote some time back:
>>[see chapter 8 of http://www.anthropic-principle.com/phd/]
>>
>> >>>If he believes the MWI however, then he knows there is a branch
>> >>>where a deer turns up and one where it doesn't. He knows that for a
>> >>>usual day the effective probability of a deer turning up is about 1%.
>> >>>Since he has no reason to believe the laws of nature are specially
>> >>>configured to correlate the amplitude of the wavefunction's branches with
>> >>>his actions (especially since such correlation would be hard to reconcile
>> >>>with QM), he will still believe the effective probability of a deer
>> >>>turning up is just 1%.
>>
>> >>You may be confusing subjective and objective probability.
>>
>> > Nope. Why would you think I might do so?
>>
>>Adam might have reason to think that the objective chance of Deer is 1%,
>>and nonetheless have reason to assign a 90% (say) credence to Deer. Compare
>>the situation to the case of a coin which has just been tossed in what
>>you think is a fair manner; so you think there was a 50% chance of Heads.
>>But suppose you have caught a brief glimpse of the coin after it landed,
>>and it looked like Tails, although you aren't absolutely sure. So your
>>subjective credence in Tails may be 95%.
>
> Yeah no shit. Of course I know about Bayesian probabilities.
> You still haven't suggested any reason as to why you might have
> thought that I was confusing them.
> Remember that effective probability is the fraction of observers, in a
> given situation, that see that outcome. For QM this is equal to the sum
> of the squares of the amplitudes of those branches consistent with that
> outcome, as long as no observers are created or destroyed and the
> measurement is definitely made.
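
To make sure we are working with the same notion: as I read your definition, if a model predicts N observer-moments in the given situation, of whom m see the outcome x (my notation), then the effective probability is

    P_eff(x) = m / N,

and in the MWI case (writing a_i for the amplitudes of the branches) this equals the sum of |a_i|^2 over the branches consistent with x, provided no observers are created or destroyed and the measurement is definitely made.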

Even assuming that Adam is 100% certain about the truth of MWI (which, of
course, there is absolutely no need for me to suppose), and even assuming
he knows that for a typical day the effective probability of Deer is 1%, it
is still not in general the case that he should think that there is a 1%
probability of Deer that morning. The reason why I was suspecting that you
were confusing objective and subjective probabilities is that I thought you
were claiming that Adam should assign a credence of 1% to deer turning up.
If you admit that Adam should believe that the probability of Deer (given
that he forms the right intentions) is very great even though he knows that
the effective probability of Deer on a typical day is only 1%, then my
ground for suspecting this confusion vanishes.


>> > In both cases I have so far discussed, his Bayesian probability
>> >distribution for the objective probability is sharply peaked about a
>> >(stochastic xor effective) probability of 1%.
>> > A common way this might occur is if he has observed, over the
>> >course of 10 years (3652 days), that about 37 times a deer has turned up.
>> >If he assumes that there is a fixed probability p, and initially has a
>> >uniform Bayesian distribution for p on (0,1), then his final distribution
>> >will be sharply peaked about 1%.
>> > The point, here, is that in such a case he *can't* suddenly assume
>> >"today is different, so while on a usual day p=.01, I'll just have a
>> >uniform Bayesian prior for p_today.", and then apply the "Adam paradox".
>> >So, in both the non-MWI and the MWI case, p~=.01 is his prior probability
>> >before he considers his own situation regarding reproduction, but the
>> >effect of the latter is different. So far I think you agree with that.
>>
>>Yes, I think that's right so far. But prior probability is not the same
>>as objective probability. If there is a wounded deer in the
>>neighborhood, then the objective chance may be quite high (say, 78%).
>>But Adam doesn't know whether there is such a deer in the neighborhood,
>>so his subjective credence that the objective chance is that high, is low
>>(say 0.1%). So he shouldn't think that a deer will turn up; that has a
>>low prior probability. But when he forms the appropriate reproductive
>>intentions, then (assuming SSA with a universal reference class), he
>>obtains reason to think that the objective chance is actually quite high.
>>This also results in his subjective credence in Deer increasing.
>
> That's only without the MWI. (i.e. True only for stochastic
> probabilities, not for effective probabilities.)

No, it's true whether those objective probabilities are given by MWI or
some other physical theory. Nowhere in what I was saying was I presupposing
that the objective probabilities come from MWI.
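
Incidentally, the frequency calculation you sketched above does come out as you say: assuming a binomial model with a uniform prior on p, observing deer on 37 out of 3652 days yields a Beta(38, 3616) posterior for p, with mean 38/3654 (about 0.0104) and standard deviation of roughly 0.0017, so the posterior is indeed sharply peaked about 1%.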


>>(Note that I'm talking about an objective chance here which varies from
>>day to day, depending on what the deer in the region are up to. The more
>>deer nearby, the greater the objective physical chance that some will
>>pass by Adam's cave within a certain time interval.)
>
> In the MWI, btw, there would be little such variation since one must
> sum over the branches describing the various deer activities.

Over time, the objective probabilities might get smeared out into a general
deer-fog over the whole region. But at the beginning of the world, the
objective deer-probability will be concentrated where the initial
conditions say that the deer start out.


>> >>>And I do find the premise that he can be certain that no
>> >>>type of MWI can be true hard to swallow.
>> >>
>> >>He wouldn't have to be certain about that.
>>
>> > Well, he would certainly have to assign a ludicrously high
>> >probability (e.g. 50%) to the idea that the MWI might be false.
>
>>2. Adam might not know contemporary physics.
>
> Irrelevant.

No, that's very relevant, because if Adam does not know contemporary
physics then he has no reason to think that MWI is true, and then he would
have no reason not to assign such a "ludicrously high" probability as 50%
to the idea that the MWI might be false.


>> > As I see it, it is a priori possible that I could have been any
>> >observer. Thus all observers must be included, by definition.
>>
>>I find the "I could have been you" talk quite suspicious and murky. It's
>>not clear to me that this is the way to cast light on the situation.
>
> Let me try to enlighten you a little then. Think of it like this: I
> know I'm an observer-moment (or thought, if you like); that much I can
> assume a priori. Now I (or my brain, which is the computer actually
> carrying out the calculations, with observer-moments like me "along for
> the ride") want to compare two possible models of the universe, so I need
> to calculate the Bayesian probability that each model is true.
> (In my view, first I must get the prior for this from Occam's razor,
> or "exp(-complexity)"; pretend for this exercise that it results in just
> two models with significant prior probability and that each of these is
> roughly 50% likely a priori. Even if you don't like that, in any case
> assume I have two competing models of equal a priori probability. I can
> always just find the conditional Bayesian probability that model #1 is
> true given that either #1 or #2 is true; that's more or less what
> scientists traditionally do, since they neglect the simplest [AUH] models.)
> For that I need to know the Bayesian probability that, if a given
> model is true, the other information that I take as known would also be true.
>This other information takes the form "I see x". Since
>this is not used as a priori information, it will allow me to update my
>prior. But I need the Bayesian probabilities that the information would
>be true if each model were true.
> So what is the Bayesian probability that I would see x if model #1
> were true? In order to get it, I do not first assume that I see x, since
> then I would get 1 and that's not what I need. So the only information I
> assume is the a priori information: I am an observer-moment. That is the
> only other thing I know about myself, other than the observation that I
> see x. So if model #1 predicts the existence of N observer-moments, m of
> whom see x, I have no a priori reason to say that any of them is more
> likely to be me than the others. So the Bayesian probability that "I
> would see x" if model #1 were true is the effective probability, m / N.
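
If I follow the procedure, then: with two models of equal prior probability, where model #i predicts N_i observer-moments of whom m_i see x (my notation), you take

    P(I see x | model #i) = m_i / N_i,

so that

    P(model #1 | I see x) = (m_1/N_1) / (m_1/N_1 + m_2/N_2).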

Well, if we assume that it is a priori knowledge that I am an
observer-moment, then why should this a priori knowledge not be used in
evaluating hypotheses? For example, by saying: it would be more probable
that I should exist if many observer-moments came into existence? (This is
the Self-Indication Assumption, which as you know I reject. But I would be
interested in hearing your story of why it should be rejected.)
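
To make the contrast concrete: with the Self-Indication Assumption and equal priors, the bare datum "I exist" would already shift me to P(model #1) = N_1 / (N_1 + N_2); if model #1 contains twice as many observer-moments as model #2, I end up at 2/3 before even considering what I observe.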

Second, suppose I say to you: not only is "I exist (am an observer-moment)"
a priori, but "I exist and am currently thinking about something related to
anthropic reasoning" is also a priori, and for the same reason: you couldn't
possibly have found out otherwise. Then, by the reasoning you describe, the
reference class would not consist of all observer-moments but instead of all
observer-moments thinking about something related to anthropic reasoning.
What do you say about that? (Incidentally, I have recently been drawn to a
definition of the reference class which might look rather like that.)
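
In your counting terms: if model #1 predicts N observer-moments, of whom some number k are thinking about something related to anthropic reasoning, and m' of those k see x, then the probability that "I would see x" if model #1 were true would come out as m'/k rather than m/N.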



Dr. Nick Bostrom
Department of Philosophy
Yale University
Homepage: http://www.nickbostrom.com