
From: Jacques Mallah <jackmallah.domain.name.hidden>

Date: Fri, 17 Nov 2000 17:50:12 EST

This is a rewrite of a post I tried to send at an earlier date, but it got lost.

By the way, at the end of October I attended the Planck symposium at the Univ. of Puget Sound (in Washington state). It was interesting, with a lot of history as well as QM, and I gave a talk on my approach to QM. The paper report will contain some of my latest ideas, though only a little bit about the AUH. It was also interesting to meet some people who are well known within the field, such as James Hartle and Roland Omnes (as well as people who are less known). I asked Hartle if he believes the MWI, and he still couldn't give a straight answer. (He would not say he doesn't, but seems not to care about the distinction between MWI and Copenhagen. He does reject hidden variables.) Omnes seemed open to the idea of computationalism but had his doubts.

From: "Nick Bostrom" <nick.bostrom.domain.name.hidden>:

> Even assuming that Adam is 100% certain about the truth of MWI [...] and
> even assuming he knows that for a typical day the effective probability of
> Deer is 1%, it is still not in general the case that he should think that
> there is a 1% probability of Deer that morning.

If by "probability" you mean effective probability, please say so. Also, it would help to distinguish the effective probability from the Bayesian probability, the difference being that the former is defined given an initial wavefunction of the universe (and so can be defined objectively), while the latter also includes his uncertainty about that wavefunction.

> The reason why I was suspecting that you were confusing objective and
> subjective probabilities is that I thought you were claiming that Adam
> should assign a credence of 1% to deer turning up. If you admit that Adam
> should believe that the probability of Deer (given his forming the right
> intentions) is very great even though he knows that the effective
> probability of Deer on a typical day is only 1%, then my ground for
> suspecting this confusion vanishes.

That's wrong. With the MWI, if on a typical day there is a 1% chance, then for any reasonably simple (and thus subjectively likely, assuming he uses Occam's razor) initial wavefunction of the universe, the effective probability on that day will still be 1%, and so his Bayesian probability of seeing the deer will be very close to 1% regardless of what intentions he forms.
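To make the arithmetic concrete (a sketch of my own, with made-up numbers, not anything from the original exchange): the Bayesian probability is just the prior-weighted average of the effective probabilities over the candidate wavefunctions, so if every reasonably simple candidate gives about 1%, the average stays about 1%, and Adam's intentions never enter the calculation.

```python
# Toy sketch: the Bayesian probability of Deer is the Occam-prior-weighted
# average of the effective probabilities given each candidate initial
# wavefunction of the universe. The candidates and numbers are hypothetical.
candidates = [
    {"prior": 0.7, "p_deer": 0.010},  # simplest candidate wavefunction
    {"prior": 0.2, "p_deer": 0.011},
    {"prior": 0.1, "p_deer": 0.009},
]

# Weighted average over candidates; note no term depends on intentions.
p_bayes = sum(c["prior"] * c["p_deer"] for c in candidates)
print(p_bayes)  # close to 0.01
```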

>> So, in both the non-MWI and the MWI case, p ~= .01 is his prior probability
>> before he considers his own situation regarding reproduction, but the
>> effect of the latter is different. So far I think you agree with that.
>
> Yes, I think that's right so far.

Then you should agree with what I've been saying.

>>> (Note that I'm talking about an objective chance here which varies from
>>> day to day, depending on what the deer in the region are up to. The more
>>> deer nearby, the greater the objective physical chance that some will
>>> pass by Adam's cave within a certain time interval.)
>>
>> In the MWI, btw, there would be little such variation, since one must
>> sum over the branches describing the various deer activities.
>
> Over time, the objective probabilities might get smeared out into a general
> deer-fog over the whole region. But at the beginning of the world, the
> objective deer-probability will be concentrated where the initial
> conditions say that the deer start out.

First, for the initial conditions to be so complicated as to even include deer is extremely unlikely.

Even supposing this incredible situation, though, the "fog", as you call it, would "smear" rather quickly. Deer thoughts, like weather, are chaotic, and thus the system will bifurcate often. In the time it would take Adam to establish the 1%-per-day rule, the "deer fog" would be completely smeared. If Adam (or, I should say, the many Adams in the many branches) knows this, his Bayesian probability of seeing the wounded deer would quickly approach the true effective probability of 1%, regardless of his intentions.

>>> 2. Adam might not know contemporary physics.
>>
>> Irrelevant.
>
> No, that's very relevant, because if Adam does not know contemporary
> physics then he has no reason to think that MWI is true, and then he would
> have no reason not to assign such a "ludicrously high" probability as 50% to
> the idea that the MWI might be false.

First, we were discussing the effect of the MWI, so we should assume he knows the MWI. And even without modern physics, he could derive the AUH (an MWI even if not QM) from Occam's razor.

>>>> As I see it, it is a priori possible that I could have been any
>>>> observer. Thus all observers must be included, by definition.
>>>
>>> I find the "I could have been you" talk quite suspicious and murky. It's
>>> not clear to me that this is the way to cast light on the situation.

>> Let me try to enlighten you a little then. Think of it like this: I
>> know I'm an observer-moment (or thought, if you like); that much I can
>> assume a priori. Now I (or my brain, which is the computer actually
>> carrying out the calculations, with observer-moments like me "along for
>> the ride") want to compare two possible models of the universe, so I need
>> to calculate the Bayesian probability that each model is true.
>>
>> (In my view, first I must get the prior for this from Occam's razor,
>> or "exp(-complexity)"; pretend for this exercise that it results in just
>> two models with significant prior probability and that each of these is
>> roughly 50% likely a priori. Even if you don't like that, in any case
>> assume I have two competing models of equal a priori probability. I can
>> always just find the conditional Bayesian probability that model #1 is
>> true given that either #1 or #2 is true; that's more or less what
>> scientists traditionally do, since they neglect the simplest [AUH]
>> models.)
>>
>> For that I need to know the Bayesian probability that, if a given
>> model is true, the other information that I take as known would also be
>> true. This other information takes the form "I see x". Since this is not
>> used as a priori information, it will allow me to update my prior. But I
>> need the Bayesian probabilities that the information would be true if
>> each model were true.
>>
>> So what is the Bayesian probability that I would see x if model #1
>> were true? In order to get it, I do not first assume that I see x, since
>> then I would get 1 and that's not what I need. So the only information I
>> assume is the a priori information that I am an observer-moment. That is
>> the only other thing I know about myself, other than the observation that
>> I see x. So if model #1 predicts the existence of N observer-moments, m of
>> whom see x, I have no a priori reason to say that any of them is more
>> likely to be me than the others. So the Bayesian probability that "I
>> would see x" if model #1 were true is the effective probability, m / N.

> Well, if we assume that it is a priori knowledge that I am an
> observer-moment, then why should this a priori knowledge not be used in
> evaluating hypotheses? For example by saying: it would be more probable
> that I should exist if many observer-moments came into existence? (This is
> the Self-Indication Assumption, which as you know I reject. But I would be
> interested in hearing your story of why it should be rejected.)

That's not how a priori information is used.

For example, if there are 10 balls in a jar, either A) 9 iron and 1 wood, or B) 1 iron and 9 wood, and I pick one randomly and see that it's iron, that's new information, and I would update my prior to reflect it, so I think A is more likely to be true.

If on the other hand I tell a robot to go pick one up with a magnet (assume that it will always pick up one ball) and it fetches me an iron ball, this does not tell me anything new. I know that if it failed the first time, it would keep trying, and that I would only see the result of picking an iron ball. (Iron-thropic principle.) This is a priori information.
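The two cases above can be written out as a small Bayes calculation (my own sketch of the jar example, with the numbers from the text): the random draw updates the 50/50 prior, while the magnet draw cannot, since it yields an iron ball under either hypothesis.

```python
def posterior_A(prior_A, p_iron_given_A, p_iron_given_B):
    """Posterior probability of jar A after observing an iron ball."""
    prior_B = 1.0 - prior_A
    evidence = prior_A * p_iron_given_A + prior_B * p_iron_given_B
    return prior_A * p_iron_given_A / evidence

# Case 1: I draw a ball at random and it is iron -- new information.
# P(iron | A) = 9/10, P(iron | B) = 1/10.
random_draw = posterior_A(0.5, 0.9, 0.1)  # A becomes much more likely

# Case 2: a robot with a magnet fetches a ball. It is always iron,
# whichever jar is real, so seeing iron teaches me nothing (iron-thropic).
magnet_draw = posterior_A(0.5, 1.0, 1.0)  # prior unchanged

print(random_draw, magnet_draw)  # 0.9 and 0.5
```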

> Second, suppose I say to you: Not only "I exist (am an observer-moment)"
> is a priori, but "I exist and I'm currently thinking about something
> related to anthropic reasoning" is also a priori. And for the same reason:
> you couldn't possibly have found out otherwise. Then by the reasoning you
> describe, the reference class would not consist of all observer-moments
> but instead of all observer-moments thinking about something related to
> anthropic reasoning. What do you say about that? (Incidentally, I am
> recently drawn to a definition of the reference class which might look
> rather like that.)

I don't like it - but can't really give a good objection. Usually I say that the observer must be intelligent enough to be able to use anthropic reasoning...

Still, it is a good question and not unreasonable. In practice, it makes no difference, since the fraction of observers using anthropic reasoning is probably not sensitive to the differences in physical models. Also, I'm not sure how one would define it precisely.

Just to be clear for those who may want an example: suppose I know that either there are 10^6 observers, only 10 of whom would think about anthropic reasoning, or else I am the only observer.

The a priori probability of either case (presumably from Occam's razor, although in this example Occam needs new blades) is 50%.

If "I am thinking about anthropic reasoning" is a priori info, the prior remains at 50%.

If not, then since it is additional ("new") info, the prior is updated and I know that most probably I am the only observer.
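As a sketch of that arithmetic (my own, using the m / N effective-probability rule quoted earlier): model #1 has N = 10^6 observers, 10 of whom think about anthropic reasoning; in model #2 I am the only observer. Treating "I am thinking about anthropic reasoning" as new information gives likelihoods 10/10^6 and 1, so the update strongly favors model #2; treating it as a priori (restricting the reference class to anthropic reasoners) leaves both likelihoods at 1 and the prior at 50%.

```python
prior = 0.5  # equal a priori probability for the two models

# Likelihoods of "I am thinking about anthropic reasoning" under each
# model, using the effective-probability rule m / N.
m1, n1 = 10, 10**6  # model #1: 10 of 10^6 observers think about it
m2, n2 = 1, 1       # model #2: I am the only observer, and I think about it

# Treated as new information: update the prior on model #1.
post1 = prior * (m1 / n1) / (prior * (m1 / n1) + prior * (m2 / n2))
print(post1)  # ~1e-5: most probably I am the only observer

# Treated as a priori information: both likelihoods are 1, prior unchanged.
post1_apriori = prior * 1 / (prior * 1 + prior * 1)
print(post1_apriori)  # 0.5
```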

- - - - - - -

Jacques Mallah (jackmallah.domain.name.hidden)

Physicist / Many Worlder / Devil's Advocate

"I know what no one else knows" - 'Runaway Train', Soul Asylum

My URL: http://hammer.prohosting.com/~mathmind/

_________________________________________________________________________

Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

Share information about yourself, create your own public profile at

http://profiles.msn.com.

Received on Fri Nov 17 2000 - 15:08:04 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST