>From: "Charles Goodwin" <cgoodwin.domain.name.hidden>
>I'll have another go at explaining my position (maybe I'll spot a flaw in
>it if I keep examining it long enough).
OK. Nice to see you're honestly thinking about it.
>Bayesian reasoning assumes (as far as I can see) that I should treat my
>present observer moment as typical.
Not quite. Rather, treat it as though it's "randomly" drawn from the
set of possible OMs. (I have already explained why this is useful although
there is really nothing random.)
Most of them are typical (if there is such a thing as typical; sometimes
there are just a lot of different categories, none of which is typical) and
some are not - that's automatically taken into account when you calculate the
posterior Bayesian probabilities.
>My objection to doing so is that this assumes the result you want to prove
Assuming, a priori, that it is typical would indeed be an illegitimate
move. You just need to understand that the Bayesian procedure does nothing
of the sort, as I said above. Sorry to repeat myself - I hope you already got
the point, but such is the nature of replying by email.
>because if my observer moment is typical and QTI is correct, then the
>likelihood of me experiencing a moment at which my age is less than
>infinity is infinitesimal.
>This either demonstrates that (1) my present observer moment is typical and
>QTI is wrong or (2) the present observer moment isn't
>typical and Bayesian reasoning is inappropriate ((2) doesn't imply that QTI
>is correct, of course, merely that it's compatible with
>observation).
Right. So the question is: Is Bayesian reasoning sound? This is a
general question that should be considered independently of the FIN. The
answer is surely yes, but I guess you need some more convincing of that, so
considering another example should help.
>*Assuming* that QTI is correct, then the chances of you and me interacting
>at a typical observer moment (for either of us) is
>negligible. QTI guarantees that almost all interactions between observers
>will occur at highly non-typical observer moments, because
>(scary thought) for 99.9999999999999999999....% of any given person's
>observer moments, the rest of the human race will be extinct.
>Hence Bayesian reasoning isn't appropriate because the fact that we're
>communicating with one another guarantees that at least one
>of us, and with overwhelming probability both of us, is experiencing highly
>atypical observer moments.
You seem to be confusing a priori probabilities with predictions. This
should become clearer in the example I'll use below.
BTW, even if the human race were extinct, you would surely have saved an
archive of the best literature, such as my posts to this list ...
>The "assumption of typicality" can't be made without first checking that
>you're not dealing with a special case. To take an obvious example, if I
>was to apply Bayesian reasoning to myself I would be forced to assume that
>I am almost certainly a peasant of indeterminate sex living in the third
>world.
Indeterminate sex? I would think you simply wouldn't know which sex. I
don't think most peasants are transsexuals or transvestites ... but let's not
go there ...
The "1st world" / "3rd world" case is an excellent example. Suppose
that 90% of people live in world #3, while 10% live in world #1. If you had
no other information, then indeed the best you could do is to guess that you
are 90% likely to be in #3.
That's your a priori Bayesian probability of being in #3, but you do
have additional information. In my case, for example (which I'll use because
I don't know where you are located), I can look out my window and see vast
stretches with nothing but fast food restaurants, churches, cattle, and
bales of hay. These things correlate very strongly with being in North
Dakota. Let's say that only 1% of world #3 is like that, while 50% of world
#1 is like that.
So now I will guess my location: (here "see" is short for "what I see")
p(me in world #1)
= p(see|#1) p_0(#1) / [p(see|#1) p_0(#1) + p(see|#3) p_0(#3)]
= (.50) (.1) / [ (.50) (.1) + (.01) (.9) ]
= .05 / (.05 + .009) = 0.847
So I now think I am about 85% likely to be in world #1. Using the
additional information gained by observation, I realized that my location is
probably not typical.
However, if I didn't have that information, then I would have been
correct to think I am 90% likely to be in world #3. After all, 90% of
people are in #3.
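(If it helps to see the arithmetic spelled out, here is a minimal sketch of
that same update in Python; the numbers are just the made-up ones from above,
and the variable names are mine.)

    # Made-up figures from the example above.
    prior_w1, prior_w3 = 0.1, 0.9   # a priori: 10% of people in world #1, 90% in world #3
    p_see_given_w1 = 0.50           # chance of a view like mine, given world #1
    p_see_given_w3 = 0.01           # chance of a view like mine, given world #3

    # Bayes' theorem: posterior is likelihood times prior, renormalized.
    numerator = p_see_given_w1 * prior_w1
    posterior_w1 = numerator / (numerator + p_see_given_w3 * prior_w3)
    print(posterior_w1)             # about 0.847; with no observation it would stay at 0.1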
The point is that saying "assume your location is typical" is often a
good qualitative caricature of the Bayesian procedure, especially when
there is little additional information to go on, but it is not at all what
Bayesian reasoning actually does.
In particular, there are no "special cases" in which you have to "first
check" before you can apply Bayesian reasoning. Bayes' theorem
automatically handles all cases and every bit of information you have. It's
just that you have to be careful not to jump the gun, not to incorporate
information into your prior that should enter later in the form of
observational evidence.
In the FIN case, the prior becomes how likely you would think it is for
the FIN to be true, a priori, as if you have not yet considered your age at
all. Suppose you pick 0.5, which I think is ridiculously high, but just for
the sake of argument.
Then you can look at the conditional probability of seeing your
observational evidence given each hypothesis. In this case, that's the
conditional probability of being younger than some natural reference point -
which (surprise!) you are. For the FIN that's almost zero, so the posterior
probability for the FIN to be true is almost zero regardless of the prior
you started with.
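(Again, just an illustrative sketch in Python, with a made-up "almost zero"
likelihood standing in for the FIN case; the exact tiny number doesn't
matter.)

    prior_fin = 0.5               # generous prior, just for the sake of argument
    p_young_given_fin = 1e-9      # under the FIN, almost no observer-moments are this young
    p_young_given_not_fin = 0.9   # without the FIN, being this young is unremarkable

    posterior_fin = (p_young_given_fin * prior_fin) / (
        p_young_given_fin * prior_fin + p_young_given_not_fin * (1 - prior_fin))
    print(posterior_fin)          # about 1e-9: nearly zero no matter what prior you start with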
Now, if I were really older than the natural reference points (such as
being too old to calculate), then Bayes' theorem would no longer argue
against the FIN. It's simply not true that Bayes is biased against the FIN.
It's just that the actual, observed evidence would be overwhelmingly unlikely
if the FIN were true.
>Or more likely a beetle... Or even more likely a microbe (assuming microbes
>have observer moments).
Now the "reference class" issue is actually an interesting question that
I'll probably address a bit in reply to another person, but actually I don't
think the solution should be mysterious.
The procedure is "likely to work" because most people using it will get
the right answer. So, creatures that can't use it don't count. You should
assume, a priori, that you are of a type of creature that is able to use
Bayesian reasoning. If everyone who can use it makes that assumption, then
it will be reliable for the people that can use it.
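(A toy illustration of that claim in Python, using the same made-up 90/10
split as before: everyone who can apply the rule makes the same a priori
guess, and the guess is right for about 90% of them.)

    import random

    random.seed(0)
    population = [random.random() < 0.9 for _ in range(100000)]  # True = lives in world #3

    # Every reasoner makes the same a priori guess: "I am probably in world #3."
    fraction_right = sum(population) / len(population)
    print(fraction_right)   # about 0.9 -- reliable for the people who can use the procedure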
- - - - - - -
Jacques Mallah (jackmallah.domain.name.hidden)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://hammer.prohosting.com/~mathmind/