Re: White Rabbit vs. Tegmark
Paddy Leahy writes:
> For the continuum you can restore order by specifying a measure which just
> *defines* what fraction of real numbers between 0 & 1 you consider to lie
> in any interval. For instance the obvious uniform measure is that there
> are the same number between 0.1 and 0.2 as between 0.8 and 0.9 etc.
> Why pick any other measure? Well, suppose y = x^2. Then y is also between
> 0 and 1. But if we pick a uniform measure for x, the measure on y is
> non-uniform (y is more likely to be less than 0.5). If you pick a uniform
> measure on y, then x = sqrt(y) also has a non-uniform measure (more likely
> to be > 0.5).
>
> A measure like this works for the continuum but not for the naturals
> because you can map the continuum onto a finite segment of the real line.
> In m6511 Russell Standish describes how a measure can be applied to the
> naturals which can't be converted into a probability. I must say, I'm not
> completely sure what that would be good for.
I think it still makes sense to take limits over the integers.
The fraction of integers less than n that are prime has a limit of 0
as n goes to infinity; the fraction that are even has a limit of 1/2,
and so on. (This limiting fraction is the natural density of a set of
integers.)
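Here is a small Python sketch of that limiting-fraction idea (the
function and parameter names are just mine, for illustration):

    def natural_density(predicate, n):
        # Fraction of the integers 1..n satisfying the predicate; the
        # limiting value as n grows is the density discussed above.
        return sum(1 for k in range(1, n + 1) if predicate(k)) / n

    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))

    print(natural_density(lambda k: k % 2 == 0, 10**6))  # -> 0.5 exactly
    print(natural_density(is_prime, 10**5))  # ~0.096, drifting toward 0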
When you put a probability measure on the whole real line, it has to be
non-uniform, with density falling asymptotically to zero as you go out to
infinity. This happens implicitly when you map the line onto (0,1), even
before you put a measure on that segment: the mapping squeezes the distant
parts of the line into ever smaller pieces of the interval. The same thing
can be done with the integers.
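As a concrete check, here is one possible choice of such a mapping (the
logistic function is just an illustrative assumption; any map of the line
onto (0,1) behaves the same way):

    import math

    def logistic(x):
        # Maps the whole real line onto (0,1).
        return 1.0 / (1.0 + math.exp(-x))

    # Length of the piece of (0,1) that the interval [x, x+1] maps onto.
    # It shrinks toward zero as x heads out to infinity, which is the
    # implicit non-uniform weighting described above.
    for x in (0, 2, 5, 10):
        print(x, logistic(x + 1) - logistic(x))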
The universal distribution assigns a probability to every integer such
that they all add up to (at most) 1. The probability of an integer is
dominated by the length of the shortest program on a given Universal
Turing Machine which outputs that integer. More precisely, it is the sum
of 1/2^l over all programs that output the integer in question, where l
is the length of each such program; the programs have to be prefix-free
(no halting program is a prefix of another) so that these weights do not
sum to more than 1. Generally this gives higher measure to smaller
numbers, although a few big numbers will have relatively high measure if
they have short programs (i.e. if they are "simple"). Of course this
measure is non-uniform and goes asymptotically to zero, as any
probability measure on the integers must.
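Here is a minimal sketch of that sum, using a toy stand-in for a real
prefix-free universal machine (the machine, the names, and the
program-length cutoff below are all just illustrative assumptions):

    from collections import defaultdict
    from itertools import product

    def run_toy(p):
        # Toy stand-in for a universal machine: the program '1'*k + '0'
        # outputs the integer k; everything else fails to halt (None).
        # The set of halting programs is prefix-free.
        if p.endswith('0') and set(p[:-1]) <= {'1'}:
            return len(p) - 1
        return None

    def universal_measure(run, max_len=16):
        # m(n) = sum of 2**-len(p) over programs p with run(p) == n,
        # truncated at max_len.  With a prefix-free machine the total
        # mass over all integers stays at or below 1.
        m = defaultdict(float)
        for length in range(1, max_len + 1):
            for bits in product('01', repeat=length):
                n = run(''.join(bits))
                if n is not None:
                    m[n] += 2.0 ** -length
        return m

    m = universal_measure(run_toy)
    print(m[0], m[1], m[5])  # 0.5, 0.25, ... smaller integers get more mass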
One problem with the UD is that the probability that an integer is even
is not 1/2, and the probability that it is prime is not zero. In general
the UD probabilities will not match the limiting fractions defined in the
earlier paragraph, and it's not clear which is the correct one to use.
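With the toy machine sketched above, for instance, the measure assigned
to the even integers works out to about 2/3 rather than 1/2 (a real
universal machine would give some other, machine-dependent value):

    even_mass = sum(p for n, p in m.items() if n % 2 == 0)
    total_mass = sum(m.values())
    print(even_mass / total_mass)  # ~0.667 under this toy machine, not 1/2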
Going back to Alistair's example, suppose we lived in a spatially infinite
universe, Tegmark's "level 1" multiverse. Of course our entire Hubble
bubble is replicated an infinite number of times, to any desired degree
of precision. Hence we have an infinite number of counterparts.
Do you see a problem in drawing probabilistic conclusions from this?
Would it make a difference if physics were ultimately discrete and all
spatial positions could be described as integers, versus ultimately
continuous, requiring real numbers to describe positions?
Note that in this case we can't really use the UD or a line-segment
measure because there is no natural starting point which distinguishes the
"origin" of space. We can't have a non-uniform measure in a homogeneous
space, unless we just pick an origin arbitrarily. So in this case the
probability-limit concept seems most appropriate.
Hal Finney
Received on Thu May 26 2005 - 12:59:35 PDT