----- Original Message -----
From: "Quentin Anciaux" <quentin.anciaux.domain.name.hidden>
To: <everything-list.domain.name.hidden>
Sent: Monday, June 20, 2005 11:37 PM
Subject: Measure, Doomsday argument
> Hi everyone,
>
> I have some questions about measure...
>
> As I understand the DA, it is based on conditional probabilities,
> used to somehow calculate the "chance" of doom soon versus doom late.
> An observer should reason as if he were a random observer from the
> "class" of observers.
>
> The conditional probabilities come from the fact that the observer
> finds that he is the sixty-billion-and-somethingth observer to be
> "born". Discovering this fact increases the probability of doom soon.
> The probability is increased because, if doom late is the case, the
> probability of finding myself in a universe where billions of
> billions of observers are present is greater, but I know that I'm
> the sixty-billion-and-somethingth observer.
This is a false argument; see here:
http://arxiv.org/abs/gr-qc/0009081
To calculate the conditional probability given the birth rank you
have, you must use Bayes' theorem. You then have to take into account
the a-priori probability for a given birth rank. If you could have
been anyone of all the people that will ever live, then you must
include this information in the a-priori probability, and as a result
the Doomsday Paradox is canceled.
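The cancellation is easy to check numerically. Here is a small sketch
of the Bayesian bookkeeping (my own illustration; the population
figures are made up and not taken from the paper):

```python
# Two hypotheses with made-up observer totals (illustrative only).
N_SOON, N_LATE = 200e9, 200e12  # total observers: "doom soon" / "doom late"
RANK = 60e9                     # our observed birth rank (~60 billion)

def likelihood(rank, n_total):
    """P(birth rank | hypothesis): uniform over all observers (SSA)."""
    return 1.0 / n_total if rank <= n_total else 0.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Naive Doomsday update: equal a-priori weight for the two hypotheses.
naive = normalize({
    "soon": 0.5 * likelihood(RANK, N_SOON),
    "late": 0.5 * likelihood(RANK, N_LATE),
})
# naive["soon"] is close to 1: the Doomsday shift toward doom soon.

# If instead "you could have been anyone of all the people that will
# ever live", a hypothesis's a-priori weight is proportional to its
# number of observers, and that factor cancels the 1/n_total likelihood:
weighted = normalize({
    "soon": 0.5 * N_SOON * likelihood(RANK, N_SOON),
    "late": 0.5 * N_LATE * likelihood(RANK, N_LATE),
})
# weighted["soon"] ≈ weighted["late"] ≈ 0.5: no shift at all.
```

Any choice of totals gives the same result: the observer-count factor
and the 1/n_total likelihood cancel hypothesis by hypothesis.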
>
> Now I come to the measure of observer moments:
> It has been said on this list, to justify that we are living in
> "this" reality and not in a Harry Potter-like world, that somehow
> "our" reality is simpler and has a higher measure than a White
> Rabbit universe. But if I correlate this assumption with the DA, I
> should also assume that it is more probable to be in a universe with
> billions of billions of observers instead of this one.
>
> How are these two cases different?
>
>
Olum also stumbles on this point in his article. I also agree with
Hall's earlier reply that (artificially) increasing the number of
universes will lead to a decrease in intrinsic measure. One way to see
this is as follows (this argument was also given by Hall a few years
ago, if I remember correctly):
According to the Self-Sampling Assumption you have to include an
''anthropic'' factor in the measure. The more observers there are, the
more likely the universe is, but you do have to multiply the number of
observers by the intrinsic measure. For any given universe U you can
consider a universe U(n) that runs U n times. The anthropic factor of
U(n) is then n times that of U. This means that the intrinsic measure
of U(n) should go to zero faster than 1/n, or else you wouldn't be
able to normalize probabilities for observers. U(n) contains
Log(n)/Log(2) bits more than U (you need to specify the number n). So,
assuming that the intrinsic measure depends only on program size, it
should decay faster than 2^(-program length).
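A toy calculation makes the normalization point concrete (my own
sketch; the base program length L is an arbitrary assumption, and the
log2(n) overhead for specifying n is the idealization used above):

```python
import math

L = 10  # bits needed to specify the base universe U (arbitrary choice)

def program_length(n):
    """U(n) takes roughly log2(n) extra bits to specify the repeat count n."""
    return L + math.log2(n)

def intrinsic(n, alpha):
    """Candidate intrinsic measure 2^(-alpha * program length)."""
    return 2.0 ** (-alpha * program_length(n))

def anthropic_weight(n, alpha):
    """Observer-weighted measure: n copies of U's observers times intrinsic."""
    return n * intrinsic(n, alpha)

# With alpha = 1 (measure exactly 2^-length), the factor n from the
# anthropic weighting cancels the 2^(-log2 n) = 1/n in the measure:
# every U(n) gets the same weight 2^-L, so the sum over n diverges and
# probabilities for observers cannot be normalized.
flat = [anthropic_weight(n, 1.0) for n in (1, 10, 1000)]

# Decaying faster than 2^(-length), e.g. alpha = 3, makes the weights
# fall off like 1/n^2, whose sum converges, so normalization works.
partial_sum = sum(anthropic_weight(n, 3.0) for n in range(1, 10_000))
```

The exact exponent is not the point; what matters is that any measure
decaying no faster than 2^(-program length) leaves the anthropic
weights of the U(n) family non-summable.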
Saibal
Received on Mon Jun 20 2005 - 19:42:57 PDT