
From: Jacques Mallah <jackmallah.domain.name.hidden>

Date: Mon, 16 Oct 2000 16:27:22 EDT

>From: "Nick Bostrom" <nick.bostrom.domain.name.hidden>
>Jacques Mallah wrote some time back:
>[see chapter 8 of http://www.anthropic-principle.com/phd/]
>

> >>>If he believes the MWI however, then he knows there is a branch
> >>>where a deer turns up and one where it doesn't. He knows that for a
> >>>usual day the effective probability of a deer turning up is about 1%.
> >>>Since he has no reason to believe the laws of nature are specially
> >>>configured to correlate the amplitude of the wavefunction's branches with
> >>>his actions (especially since such correlation would be hard to reconcile
> >>>with QM), he will still believe the effective probability of a deer turning
> >>>up is just 1%.
>

> >>You may be confusing subjective and objective probability.
>
> > Nope. Why would you think I might do so?
>

>Adam might have reason to think that the objective chance of Deer is 1%,
>and nonetheless have reason to assign a 90% (say) credence to Deer. Compare
>the situation to the case of a coin which has just been tossed in what you
>think is a fair manner; so you think there was a 50% chance of Heads. But
>suppose you have caught a brief glimpse of the coin after it landed, and it
>looked like Tails, although you aren't absolutely sure. So your subjective
>credence may be 95%.

Yeah no shit. Of course I know about Bayesian probabilities.

You still haven't suggested any reason as to why you might have thought

that I was confusing them.

Remember that effective probability is the fraction of observers, in a

given situation, that see that outcome. For QM this is equal to the sum of

the squares of the amplitudes of those branches consistent with that

outcome, as long as no observers are created or destroyed and the

measurement is definitely made.
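To make that definition concrete, here is a minimal sketch (the branch amplitudes are made-up numbers, chosen so the deer branches carry 1% of the total squared amplitude):

```python
# Illustrative only: effective probability of an outcome as the Born-rule
# weight of the branches consistent with that outcome.
branches = {
    "deer":    [0.08 + 0.06j],            # branches where a deer turns up
    "no_deer": [0.7 + 0.0j, 0.7 + 0.1j],  # branches where it doesn't
}

def weight(amps):
    # Sum of squared amplitudes (Born-rule weight) for a set of branches.
    return sum(abs(a) ** 2 for a in amps)

total = sum(weight(a) for a in branches.values())
effective_p_deer = weight(branches["deer"]) / total   # -> 0.01
```

With these amplitudes the total weight is 1, so the effective probability of Deer comes out to exactly 1%, matching the figure used in the discussion above.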

> > In both cases I have so far discussed, his Bayesian probability
> >distribution for the objective probability is sharply peaked about a
> >(stochastic xor effective) probability of 1%.
> > A common way this might occur is if he has observed, over the
> >course of 10 years (3652 days), that about 37 times a deer has turned up.
> >If he assumes that there is a fixed probability p, and initially has a
> >uniform Bayesian distribution for p on (0,1), then his final distribution
> >will be sharply peaked about 1%.
> > The point, here, is that in such a case he *can't* suddenly assume
> >"today is different, so while on a usual day p=.01, I'll just have a
> >uniform Bayesian prior for p_today.", and then apply the "Adam paradox".
> >So, in both the non-MWI and the MWI case, p~=.01 is his prior probability
> >before he considers his own situation regarding reproduction, but the
> >effect of the latter is different. So far I think you agree with that.
>

>Yes, I think that's right so far. But prior probability is not the same as
>objective probability. If there is a wounded deer in the
>neighborhood, then the objective chance may be quite high (say, 78%).
>But Adam doesn't know whether there is such a deer in the neighborhood, so
>his subjective credence that the objective chance is that high is low (say
>0.1%). So he shouldn't think that a deer will turn up; that has a low prior
>probability. But when he forms the appropriate reproductive intentions,
>then (assuming SSA with a universal reference class), he obtains reason to
>think that the objective chance is actually quite high. This also results
>in his subjective credence in Deer increasing.

That's only without the MWI. (i.e. True only for stochastic

probabilities, not for effective probabilities.)
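For reference, the deer-frequency update quoted above (37 deer days out of 3652, uniform prior on p) is the standard conjugate Beta calculation; a sketch using only the closed-form mode and mean:

```python
# A uniform prior on p plus k successes in n trials gives a
# Beta(1 + k, 1 + n - k) posterior; its mode is exactly k / n.
k, n = 37, 3652                       # deer days / total days observed
alpha, beta = 1 + k, 1 + (n - k)      # Beta posterior parameters
posterior_mode = (alpha - 1) / (alpha + beta - 2)   # = k / n ~ 0.0101
posterior_mean = alpha / (alpha + beta)             # ~ 0.0104
```

Both summaries land close to 1%, which is the sense in which the posterior is "sharply peaked about 1%".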

>(Note that I'm talking about an objective chance here which varies from day
>to day, depending on what the deer in the region are up to. The more deer
>nearby, the greater the objective physical chance that some will pass by
>Adam's cave within a certain time interval.)

In the MWI, btw, there would be little such variation, since one must sum
over the branches describing the various deer activities.

> >>If there are all these other fat branches in the world, then yes, I
>agree with that. However, Adam and Eve were there from the beginning,
>before the deer paths had begun to spread out much as a probability cloud
>over the terrain. Or at least we can suppose they were - that's the nice
>thing about thought experiments!
>

> > But it would be foolish of Adam to believe that. Take his name, for
> >example. (Other examples are easily found.) There are millions of other
> >names he could have had. The laws of physics (or initial conditions) would
> >have to be set up in a very contrived way in order for the effective
> >probability of just the one name for the first man, Adam, to be of order
> >one. Of course, in the biblical story things are very contrived because
> >some guy (god) contrived it, and because this original guy was himself
> >unique to start with. It's not a coincidence that such a situation is
> >implausible, and even if it were true, only a fool would believe it.
>

>Let's suppose then that Adam was a fool in that respect. It's irrelevant to
>the point of the gedanken. (Besides, I don't think Adam needs to have been
>that foolish, if we assume that he has had the right sorts of revelations
>etc., and no exposure to other religions; in such cases, a reasonable man
>might easily be led to believe in the Christian God.)

I'll have to disagree with you there, but that could get off topic.

Sure, if Adam's a fool, he could come up with foolish conclusions.

> >>And I do find the premise that he can be certain that no
> >>type of MWI can be true hard to swallow.
>
> >>He wouldn't have to be certain about that.
>
> > Well, he would certainly have to assign a ludicrously high
> >probability (e.g. 50%) to the idea that the MWI might be false.
>

>1. That is not ludicrous even today.

That's a matter of opinion. You know mine. The majority may rule, but

not because of collective intelligence.

>2. Adam might not know contemporary physics.

Irrelevant.

>3. Adam would not have to assign a 50% probability to MWI being false; as
>long as there's some finite chance, one would get a shift in his posterior
>probability after forming the intention, and that is the point of the
>gedanken.

There would be a very small shift, yes. That's the way it should be.

[I have previously stated that, in the hypothetical *absence* of the MWI,

the *lack of* such a shift would be counterintuitive and ludicrous in my

opinion. So, to recap, I think the shift is a feature and not a flaw, but

the MWI makes it go away.] But I don't think anyone's intuition would

protest against a very small shift.
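To make "very small" concrete (all numbers below are hypothetical, chosen just to illustrate the scaling): if forming the intention raises P(Deer) only in the non-MWI case, then a small credence q that the MWI is false dilutes the shift proportionally.

```python
# Illustrative numbers only, not claims about the actual credences.
q = 0.01          # credence that the MWI is false (assumed small)
p_ssa = 0.90      # SSA-shifted P(Deer) if the MWI is false (assumed)
p_eff = 0.01      # effective probability under the MWI (unshifted)

p_before = 0.01                          # before forming the intention
p_after = q * p_ssa + (1 - q) * p_eff    # mixture over the two hypotheses
shift = p_after - p_before               # ~0.0089: small when q is small
```

The shift scales roughly linearly with q, so a tiny residual credence in "no MWI" yields only a tiny overall shift.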

> > As I see it, it is a priori possible that I could have been any
> >observer. Thus all observers must be included, by definition.
>
>I find the "I could have been you" talk quite suspicious and murky. It's
>not clear to me that this is the way to cast light on the situation.

Let me try to enlighten you a little then. Think of it like this: I

know I'm an observer-moment (or thought, if you like); that much I can

assume a priori. Now I (or my brain, which is the computer actually

carrying out the calculations, with observer-moments like me "along for the

ride") want to compare two possible models of the universe, so I need to

calculate the Bayesian probability that each model is true.

(In my view, first I must get the prior for this from Occam's razor, or

"exp(-complexity)"; pretend for this exercise that it results in just two

models with significant prior probability and that each of these is roughly

50% likely a priori. Even if you don't like that, in any case assume I have

two competing models of equal a priori probability. I can always just find

the conditional Bayesian probability that model #1 is true given that either

#1 or #2 is true; that's more or less what scientists traditionally do,

since they neglect the simplest [AUH] models.)

For that I need to know the Bayesian probability that, if a given model

is true, the other information that I take as known would also be true.

This other information takes the form "I see x". Since

this is not used as a priori information, it will allow me to update my

prior. But I need the Bayesian probabilities that the information would be

true if each model were true.

So what is the Bayesian probability that I would see x if model #1 were

true? In order to get it, I do not first assume that I see x, since then I

would get 1 and that's not what I need. So the only information I assume is

the a priori information, I am an observer-moment. That is the only other

thing I know about myself, other than the observation that I see x. So if

model #1 predicts the existence of N observer-moments, m of whom see x, I

have no a priori reason to say that any of them is more likely to be me than

the others. So the Bayesian probability that "I would see x" if model #1

were true is the effective probability, m / N.
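The model-comparison argument above can be sketched numerically (the observer-moment counts are invented for the example): each model's likelihood for "I see x" is its effective probability m/N, and Bayes' theorem then weighs the models against each other.

```python
# Hypothetical counts: model i predicts N[i] observer-moments,
# m[i] of whom see x. Equal priors, per the setup in the text.
prior = {1: 0.5, 2: 0.5}
N = {1: 1000, 2: 1000}
m = {1: 10, 2: 500}

likelihood = {i: m[i] / N[i] for i in prior}   # effective probability m/N
evidence = sum(prior[i] * likelihood[i] for i in prior)
posterior = {i: prior[i] * likelihood[i] / evidence for i in prior}
# Observing x strongly favors model 2, where x is common among
# that model's observer-moments.
```

The update direction matches the argument: the model on which "I see x" has higher effective probability gains posterior weight.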

- - - - - - -

Jacques Mallah (jackmallah.domain.name.hidden)

Physicist / Many Worlder / Devil's Advocate

"I know what no one else knows" - 'Runaway Train', Soul Asylum

My URL: http://hammer.prohosting.com/~mathmind/

_________________________________________________________________________

Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

Share information about yourself, create your own public profile at

http://profiles.msn.com.

Received on Mon Oct 16 2000 - 13:32:30 PDT


This archive was generated by hypermail 2.3.0: Fri Feb 16 2018 - 13:20:07 PST