Computing Randomness

From: <juergen.domain.name.hidden>
Date: Thu, 22 Mar 2001 09:40:27 +0100

Where does all the randomness come from?

Many physicists would be content with a statistical theory of everything
(TOE) based on simple probabilistic physical laws allowing for stochastic
predictions such as "We do not know where this particular electron will
be in the next nanosecond, but with probability 0.04 we will find it in a
certain volume V".

Any source of randomness, however, leaves us with an unsatisfactory
TOE, since randomness does not have a compact or simple explanation, by
definition. Where does the enormous information conveyed by a particular
history of random events come from? A TOE that cannot explain this
is incomplete.

The solution, obvious in hindsight, is an "ensemble TOE" which covers
all possible universe histories. The ensemble conveys less information
than most of its particular histories - one main motivation of this
mailing list.

What are the possible histories? Let us focus on well-defined ensembles
only, and ignore those that cannot be sufficiently specified to permit
reconstruction through a formally describable computer. In particular,
we may ignore uncountable ensembles such as continua, or other ensembles
including histories without finite descriptions.

Is there an optimally efficient way of computing all the "randomness" in
all the describable (possibly infinite) universe histories? Yes, there
is. There exists a machine-independent ensemble-generating algorithm
called FAST that computes any history essentially as quickly as this
history's fastest algorithm. Somewhat surprisingly, FAST is not slowed
down much by the simultaneous computation of all the other histories.
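
For concreteness, here is a rough Python sketch of the phase structure
behind such a dovetailing algorithm. It is only an illustration under
assumptions of my own: the toy interpreter below merely echoes its
program, standing in for a real step-bounded universal interpreter, and
the function names are mine, not part of the actual construction. The
point is the step allocation: in phase i, every program p of length at
most i receives 2^(i - length(p)) steps, so short (simple) programs get
exponentially more time than long ones.

from itertools import product

def all_programs(max_len):
    """Enumerate all bit strings of length 1..max_len as 'programs'."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def toy_run(program, steps):
    """Toy stand-in for a step-bounded universal interpreter: one
    output bit per step (here simply the program's own bits, repeated)."""
    return "".join(program[k % len(program)] for k in range(steps))

def fast_phases(num_phases):
    """FAST-style dovetailing: in phase i, give each program p with
    len(p) <= i a budget of 2**(i - len(p)) steps, recomputing its
    output prefix from scratch."""
    outputs = {}
    for i in range(1, num_phases + 1):
        for p in all_programs(i):
            budget = 2 ** (i - len(p))
            outputs[p] = toy_run(p, budget)
    return outputs

if __name__ == "__main__":
    result = fast_phases(5)
    for p in sorted(result, key=len)[:4]:
        print(p, "->", result[p])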

It turns out, however, that for any given history there is a speed
limit which greatly depends on the history's degree of randomness.
Highly random histories are extremely hard to compute, even by the optimal
algorithm FAST. Each new bit of a truly random history requires at
least twice the time needed to compute the entire previous history.
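
As a quick sanity check of that doubling argument, here is a minimal
calculation of the resulting lower bound (the unit time for the first
bit is my own normalization): if producing bit n costs at least twice
the time spent on the entire previous history, then the total time for
n truly random bits grows at least like 2^(n-1).

def random_history_time_bound(n_bits, t_first_bit=1):
    """Lower bound on the time needed for an incompressible history of
    n_bits bits, given that each new bit at least doubles the total
    time spent so far."""
    t = t_first_bit
    for _ in range(1, n_bits):
        t *= 2
    return t

for n in (1, 10, 20, 40):
    print(n, "bits -> time >=", random_history_time_bound(n))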

As history size grows, the speed at which highly random histories (and
most histories are indeed random) can be computed drops towards zero
very quickly, no matter which computer we use (side note: infinite
random histories would even require uncountable time, which does not
make any sense). On the other hand,
FAST keeps generating numerous nonrandom histories very quickly; the
fastest ones come out at a rate of a constant number of bits per fixed
time interval.
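
A back-of-envelope calculation makes the constant rate plausible, under
two simplifying assumptions of my own (prefix-free programs, so that the
Kraft sum over program lengths is at most 1, and one output bit per
executed step): by the end of phase i, a fixed program of length L has
received roughly 2^(i-L+1) steps, while all phases together cost at most
roughly 2^(i+1) steps, so the program's share of the total time settles
near the constant 2^(-L), which means a constant number of output bits
per fixed time interval.

def steps_for_program(L, last_phase):
    """Cumulative steps allotted to a length-L program by the end of
    `last_phase`: 2**(i - L) steps in each phase i >= L."""
    return sum(2 ** (i - L) for i in range(L, last_phase + 1))

def total_steps_bound(last_phase):
    """Upper bound on the total steps of phases 1..last_phase: phase i
    costs at most 2**i steps if the Kraft sum over prefix-free
    programs is at most 1."""
    return sum(2 ** i for i in range(1, last_phase + 1))

L = 5  # hypothetical program length
for phase in (10, 20, 30):
    share = steps_for_program(L, phase) / total_steps_bound(phase)
    print("phase", phase, ": share of total time ~", round(share, 6))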

Now consider an observer evolving in some universe history. He does not
know which one, but as history size increases it becomes less and less
likely that he is located in one of the slowly computable, highly random
universes: after sufficient time most long histories involving him will
be fast ones.

Some consequences are discussed in
http://www.idsia.ch/~juergen/toesv2/node39.html

Juergen Schmidhuber
Received on Thu Mar 22 2001 - 00:48:32 PST
