Problems with the Universal Distribution

From: Hal Finney <hal.domain.name.hidden>
Date: Fri, 15 Jul 2005 23:29:01 -0700 (PDT)

I wrote a few days ago, in the UD+ASSA thread, about the use of the
Universal Distribution (UDist) in the context of a Schmidhuberian
approach to the multiverse. I think it is a very attractive ontology
which can go a long way toward accounting for what we experience, as
well as providing in-principle solutions to most of the thought
experiments we discuss.

Wei Dai implicitly posed a challenge to this with his thought
experiment about whether halting-problem oracles could exist. The
UDist gives such
entities probability zero, which suggests that unless we are absolutely
certain they can't exist, we should not adopt the UDist as the foundation
for the multiverse.

I want to describe a few other problems that I know about with the UDist,
so as to put the proposal to base the multiverse on it into perspective.

The first problem is that the UDist is uncomputable, which makes it
hard to use in practice because we can't actually compute the measure
of any object. The measure can, however, be approximated from below:
with enough computation the estimate only ever rises toward the true
value. But precisely because the measure is uncomputable, there can
be no computable bound on how far a given estimate still falls short,
which leaves open how feasible and practical the approximation is.
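
To illustrate the flavor of this, here is a minimal sketch in Python.
The standard scheme is to enumerate programs in order of length and
credit 2^-|p| to every program p that outputs the target. The
"machine" below is a made-up toy stand-in, since a real UTM would
take far more code, and a real construction would also restrict to
prefix-free programs so that the weights sum to at most 1:

    from itertools import product

    def toy_run(program, max_steps):
        # Toy stand-in for U(p): "run" the bit string by counting
        # its 1 bits, giving up past max_steps.  A real version
        # would dovetail the simulation of a universal machine.
        if len(program) > max_steps:
            return None          # treated as "has not halted yet"
        return sum(program)

    def approx_udist(target, max_len, max_steps):
        # Lower bound on m(target): credit 2^-|p| for every program
        # p of length <= max_len that outputs target within
        # max_steps steps.
        measure = 0.0
        for n in range(1, max_len + 1):
            for program in product((0, 1), repeat=n):
                if toy_run(program, max_steps) == target:
                    measure += 2.0 ** -n
        return measure

    # Raising max_len and max_steps only ever raises the estimate,
    # but nothing tells us how far below the true value we still are.
    print(approx_udist(target=3, max_len=8, max_steps=100))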

It occurs to me as I write that this problem may be worse than I have
been thinking, because at least when we add the ASSA or otherwise try
to tie measure to our perceptions, it means that we actually experience
the effects of an uncomputable formula. And that really means that
we are interacting with actual infinities, which I just spent a long
time yesterday explaining is impossible. This is a new idea to me,
so I need
to think about it more.

Another problem is that the UDist is not unique. Every Universal
Turing Machine (UTM) produces a different UDist. The one thing you
can say is that the various flavors of UDist agree with each other up
to a multiplicative constant that is independent of the object whose
measure is being calculated.
That's a good sign, but I am worried that it is not enough.
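
For reference, the precise form of that agreement is the invariance
theorem: for any two UTMs U and V there is a constant c >= 1,
depending on the two machines but not on the object x, such that

    m_V(x) <= c * m_U(x)    and    m_U(x) <= c * m_V(x)

for every x. The constant can be taken to be about 2^k, where k is
the length of a program by which one machine interprets the other.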

We want to use the UDist to calculate probabilities for our
experiences: the probability of a universe which looks like ours
versus some other universe. But if these probabilities are
non-unique, then we can't make meaningful comparisons between the
measures of particular information objects. One UTM may give the
first object the higher measure, while another UTM may give it to
the second.
How can we relate that to our experience? Subjectively, we experience
probabilities which are well defined. But if one UTM says that flipping
a coin is more likely to come up heads and another UTM says it is more
likely to come up tails (i.e. that the information patterns of one or
the other have higher measure), they can't both be right.
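
To make the worry concrete with invented numbers: suppose UTM A
assigns the heads pattern measure 2^-10 and the tails pattern 2^-12,
so A says heads is four times as likely as tails. If the invariance
constant between A and some other UTM B is 2^4, then B is free to
assign heads 2^-13 and tails 2^-11; every value is within the allowed
factor of 16 of its counterpart, yet B says tails is four times as
likely. The constant bounds the disagreement but does nothing to
stop the ordering from flipping.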

This suggests that there is in fact one particular UTM which creates the
multiverse, but so far as I know computability theory does not pick out
one UTM as obviously the best. I don't know a solution to this problem.
My hope is that someday it will become obvious that a particular UTM is
the "unit UTM" and that all others are variants on it. This may sound
like a relatively unlikely prospect but it is the best I have at present.

Another problem is that the MWI does not seem to fit too well into
this model. Basically, the universe described by the MWI is too big.
It's vastly bigger than the classical universe.

The way I think of the universe and the measure of observer moments,
following a proposal from Wei Dai, is that a universe's contribution
to the measure of an OM depends on two factors: the universe's
measure, and the fraction of the universe's resources involved in the
OM. This relates to the UDist, as I have explained recently, because
it represents arguably the highest-measure way to produce the
information pattern for conscious observers like ourselves. First
you create and run the universe, which requires a relatively short
program; then you locate and output the observer pattern from within
the output of the universe program, which also requires a relatively
short program. The result is that you get an entire observer-moment,
or a whole observer, from a short program.
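
As a back-of-the-envelope sketch of this two-stage accounting, in
Python (the bit counts are invented for illustration and are not
estimates of anything):

    # Contribution of one universe to an observer-moment's measure
    # in the two-stage scheme: run the universe, then locate the
    # observer.  Working in log2 keeps the tiny numbers readable.

    def om_log2_measure(universe_bits, locate_bits):
        # log2 of 2^-(|run-universe program| + |locate program|)
        return -(universe_bits + locate_bits)

    # Hypothetical figures: 1000 bits of physics plus initial
    # conditions, 500 bits to point at the observer in the output.
    print(om_log2_measure(universe_bits=1000, locate_bits=500))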

My guess is that no other program so short could produce a pattern as
complex as an observer-moment. Although all programs which output a
given OM contribute to its measure, this two-stage construction seems
likely to yield by far the shortest such program and hence the
dominant contribution to the measure. This is simply a consequence
of the UDist.

The problem arises when we try to apply this reasoning to the MWI.
The MWI universe is enormously bigger than a Newtonian-style
universe, or even than one with conventional QM and state reduction,
so long as the reduction outcomes are generated by a short program.
The MWI is exponentially bigger than a single-thread universe because
it splits at every quantum event.

This means that the amount of information needed to localize an
observer within an MWI universe is exponentially greater than in a
simple universe, which implies that the locating program described
above grows from small to enormous. The consequence is that the
contribution of an MWI universe to the measure of an observer it
contains appears to be essentially zero. The observer is just too
small a fraction of the MWI; or in other words, the MWI is so
profligate in its creation of universes that each individual branch
has essentially zero measure.

(This objection, too, was originally pointed out by Wei Dai.)
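
The arithmetic is brutal. In the terms of the sketch above, again
with invented figures, even a wildly conservative count of binary
branch events swamps everything else:

    # Rough arithmetic for the MWI objection.  All figures invented.
    universe_bits = 1000     # hypothetical physics + initial state
    locate_bits = 500        # hypothetical "find the observer" cost
    splits = 10 ** 6         # hypothetical binary branchings; low

    # log2 of the observer-moment contribution in each case.  In
    # the MWI, locating must also single out one branch, at roughly
    # one extra bit per binary quantum split.
    single_history = -(universe_bits + locate_bits)
    mwi = -(universe_bits + locate_bits + splits)
    print(single_history, mwi)    # prints: -1500 -1001500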

Well, there are several possible solutions to this, none of them
terribly attractive. One is the possibility that our measures within
the MWI are much higher than they seem, because somehow our existence
is much more inevitable than we would suppose. Rather than all the
quantum branches producing totally different worlds, somehow most of
them produce worlds with us in them, the specific people on this list,
Bruno and Russell and all the others. It may seem absurd or bizarre,
but I suppose it's not impossible. In that case we occupy many of the
quantum branches rather than an infinitesimal fraction of them, and our
measures in the MWI are high.

A more attractive solution is that the MWI is false, or more precisely,
that most of our measure comes from universes with true state reduction,
with essentially none coming from universes that don't reduce states,
where the MWI would be true. The biggest problem with this one for
me is that I don't know of any QM interpretations other than the MWI
that are truly coherent. Certainly Copenhagen isn't; it doesn't define
precisely when state reduction occurs. Maybe some of the other ones,
Cramer's or Bohm's, can be made to work. In this case, ironically,
a multiverse model could be taken to disprove the MWI.

The really weird part of this solution is that for it to work, for the
universe program to be small, quantum randomness can't be truly random.
Otherwise the program for the universe would have to be loaded up
with random bits, one for each quantum state reduction, and it would
be enormous. No, the only way this can work is for quantum
randomness to be generated by a pseudo-random number generator
(PRNG). We all live in von Neumann's "state of sin", his term for
anyone who considers arithmetical methods of producing random digits.
I guess this is the AUH version of original sin.
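
Here is a minimal sketch of what that would mean, with a classic
linear congruential generator standing in for whatever short program
the universe might actually use; the generator and its constants are
just a textbook example, not a physical claim:

    # A long stream of quantum "coin flips" generated from a short
    # seed.  The point is the description length: ~32 bits of seed
    # plus a few lines of code, not one random bit per event.

    def quantum_flips(seed, n):
        state = seed
        for _ in range(n):
            # LCG step with the Numerical Recipes constants.
            state = (1664525 * state + 1013904223) % 2**32
            yield state >> 31   # top bit as the measurement outcome

    flips = list(quantum_flips(seed=42, n=1000))
    print(sum(flips), "of", len(flips), "flips came up 1")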

Plus it means that we are pretty special after all. Each different seed
for the PRNG, each variant on a PRNG, would generate a different version
of this universe, each with its own set of observers. Probably many of
them would generate no observers at all. So somehow we, the particular
people living on earth, are the ones lucky enough to be generated from
a PRNG that had a particularly short description and seed. All the
seemingly random events which created each of our lives, back to the
race among the spermatozoa, were in a sense pre-ordained based on a
relatively small seed for a PRNG that goes back before the big bang.
There really was no other way things could happen: by the time human
civilization got going, the seed's small store of randomness would
long since have been used up, and from then on everything has been
predetermined. In effect
we live in a deterministic universe after all, just as much as the most
rigid Newtonian clockwork.

So these are the major problems that I know of with the concept of basing
measure for all objects on the UDist, which then leads to Schmidhuber's
multiverse. In exchange for these though we do get some interesting
predictions and explanations, which I have largely posted before, but
here are a few of them repeated:

1. The physical laws of our universe should be expressible as a relatively
simple computer program, and likewise with the initial conditions.

2. The universe should not be much bigger than it needs to be in order
to allow human beings to exist.

3. There should be no substantially simpler computer program that can
produce observers nearly as easily as our universe does.

4. There should not be vastly greater numbers of aliens in the universe
than humans.

5. There should not be vastly more human beings (or anything we would
consider observers) in the entire future of the universe than are
alive today.

6. There should not be vastly more conscious animals in the world than
humans.

7. If it ever becomes possible to miniaturize and/or greatly speed
up the human mind, we should be surprised to find ourselves as such a
person (unless the number of such minds is greatly increased to
compensate for these factors).

8. We will almost never find ourselves experiencing human
observer-moments that have much lower measure than typical ones (such
as being a one-million-year-old cave man).

I see these as very powerful predictions for such a simple model,
and my hope is that the problems with the UDist can be cleared up as
our understanding of the nature of computation continues to improve.

Hal Finney