RE: Is the universe computable

From: Hal Finney <hal.domain.name.hidden>
Date: Sun, 18 Jan 2004 22:58:01 -0800

David Barrett-Lennard writes:
> Why is it assumed that multiple "runs" make any difference to the
> measure?

One reason I like this assumption is that it provides a natural reason
for simpler universes to have greater measure than more complex ones.

Imagine a Turing machine with an infinite program tape. But suppose the
actual program we are running is of finite size, say 100 bits. The
machine's head will move back and forth over the program tape but never
go beyond the first 100 bits.

Now consider all possible program tapes being run at the same time,
perhaps on an infinite ensemble of (virtual? abstract?) machines.
Of those, a fraction of 1 in 2^100 will start with that particular
100-bit sequence for the program in question. And since the TM never
goes beyond those 100 bits, all such tapes will run the same program.
Therefore, 1/2^100 of all the executions of all possible program tapes
will be of that program.

Now consider another program that is larger, 120 bits. By the same
reasoning, 1 in 2^120 of all possible program tapes will start with that
particular 120-bit sequence. And so 1/2^120 of all the executions will
be of that program.

Therefore runs of the first program will be 2^20 times more numerous
than runs of the second.
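
To make the counting concrete, here is a rough Python sketch (my own
illustration, not part of the argument above; the 8-bit prefix and the
rule measure(p) = 2^-len(p) are assumptions made just for the example).
It estimates the prefix fraction by sampling random tapes, then
computes the 2^20 ratio directly:

    import random

    def prefix_fraction(prefix, trials=100000):
        """Monte Carlo estimate of the fraction of random program tapes
        whose first len(prefix) bits match the given prefix."""
        hits = sum(
            1 for _ in range(trials)
            if all(random.randint(0, 1) == bit for bit in prefix)
        )
        return hits / trials

    # For a short "program" the estimate should come out near 2^-k.
    program = [1, 0, 1, 1, 0, 0, 1, 0]       # an arbitrary 8-bit prefix
    print(prefix_fraction(program), 2 ** -len(program))

    # Ratio of measures between a 100-bit and a 120-bit program,
    # taking measure(p) = 2^-len(p):
    print((2 ** -100) / (2 ** -120))         # 2^20, about a million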

If we use the assumption that each of these multiple executions or runs
contributes to the measure, we can therefore conclude that the measure
of the universe generated by the first program is 2^20 times greater
than the measure of the universe generated by the second. And more
generally, the measure of a universe is inversely related to the size
of the program which creates it. Therefore, QED, universes with simple
programs have a higher measure than universes with more complex programs.

This conclusion then allows us to further conclude that observers are
likely to evolve in lawful universes, that is, universes without "flying
rabbits", i.e. rare, magical exceptions to otherwise universal laws.
And we can conclude that the physical laws are likely to be stable or
at least predictable over time.

All of these are properties of the universe which are otherwise
difficult or impossible to explain. The fact that the multiverse
hypothesis can provide some grounds for explaining them is one of the
main sources of its attractiveness, at least for me.

However, all this is predicated on the assumption that multiple runs of
the same program all contribute to the measure. If that is not true,
then it would be harder to explain why simple programs are of higher
measure than more complex ones.

> If the computation is reversible we could run the simulation backwards -
> even though the initial state may seem contrived because it leads to a
> low entropy at the end of the computation. Given that the simulated
> beings don't know the difference (their subjective time runs in the
> direction of increasing entropy) the fact that the simulation is done in
> reverse is irrelevant to them.
>
> Would a simulation done in reverse contribute to the measure?

When I think of the abstract notion of a universal TM that runs all
possible programs at once, I don't necessarily picture an explicit time
element being present. I think of it more as a mapping:
TM + program ==> universe. The more programs which create a given
universe, the higher the measure of that universe.

However, I don't think I can escape from your question so easily.
We could alternatively imagine an actual, physical computer, sitting in
our universe somewhere, simulating another universe. And that should
contribute to that other universe's measure. In that case we should
have some rule that would answer questions about how much reversible
and reversed simulations contribute.

I would consider applying Wei Dai's heuristic, which I discussed the
other day. It says that the measure of an object is larger if the
object is easier to find in the universe that holds it. I gave some
rough justifications for this, such as the fact that a simple counting
program eventually outputs every million-bit number, but no one would
say that this means that the complexity of a given million-bit number
is as small as the size of that program.
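
To spell that out with a toy example (my own construction, not Wei's):
a trivial counting loop eventually emits any given number, but the
index at which a number appears in its output is the number itself, so
pointing at a particular million-bit number in that output still costs
about a million bits, however small the program is.

    def counting_program():
        """The simplest of programs: yields 0, 1, 2, ... forever."""
        n = 0
        while True:
            yield n
            n += 1

    # For a small target we can actually watch the search happen:
    small_target = 1000
    gen = counting_program()
    index = next(i for i, x in enumerate(gen) if x == small_target)
    print(index == small_target)     # True: the locating index IS the number

    # For a million-bit number the locator is about a million bits long,
    # even though counting_program() itself stays tiny:
    target = (1 << 999999) + 12345
    print(target.bit_length())       # 1000000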

In this context, the heuristic would say that the contribution of
a physical computer simulating another universe to the measure of
that simulated universe should be based on how easy it is to find the
computation occurring in our own universe. Computations which occur
multiple times would be easier to find, so by Wei's heuristic would
have higher measure. This is another path to justify the assumption
that multiple simulations should contribute more to measure.
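
As a sketch of what such a weighting might look like (the particular
formula here is my assumption, not something taken from Wei): charge
each simulated universe the number of bits needed to locate an
occurrence of its simulation within the host universe, and note that
extra copies make the locating cheaper.

    import math

    def locating_bits(n_occurrences, host_size):
        """Rough cost, in bits, of pointing at some occurrence of the
        computation within a host universe of host_size locations.
        With n copies, any one will do, so the cost drops by log2(n)."""
        return math.log2(host_size) - math.log2(n_occurrences)

    def contribution(n_occurrences, host_size):
        """Assumed weighting: easier to find => higher contribution."""
        return 2 ** -locating_bits(n_occurrences, host_size)

    # One run versus three runs of the same simulation in the same host:
    print(contribution(1, 2 ** 40))
    print(contribution(3, 2 ** 40))   # three times larger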

I'd say that a computation running backwards contributes as well,
since it makes the computation easier to locate. Now take a complex
case, where a
computation ran forwards for a while, then backwards, then forwards.
I'd say that this heuristic suggests that the portion of the simulated
universe which was repeated 3 times (forwards, backwards, forwards)
would have higher measure than the rest of it.
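
The same point as a tiny counting exercise (again only a toy model of
mine): replay part of an execution trace backwards and then forwards
again, and count how often each simulated moment occurs. Under the
occurrence-counting idea above, the replayed stretch picks up three
times the weight.

    from collections import Counter

    forward  = list(range(10))    # moments 0..9 of a simulated history
    backward = forward[8:3:-1]    # the run is then reversed back to moment 4
    again    = forward[4:9]       # and run forwards once more

    counts = Counter(forward + backward + again)
    print(counts[6])    # inside the replayed stretch: occurs 3 times
    print(counts[1])    # outside it: occurs once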

> Once we say that all possible computations exist in the Platonic sense,
> it seems to me that running them is irrelevant. Of course it is agreed
> that the existence hypothesis tells us nothing about their relative
> measure. Does anyone have some principles to go by?

See above.

> I presume a theory of measure along the lines described by Jesse would
> need to account for the measure of mappings between computations.
> Presumably a simple correspondence would have higher weighting than some
> complicated mapping between two computations.

Yes, this gets into the difficult issue of telling whether two
computations are running the same program, i.e. whether a mapping exists
between the two. I'm hoping to write more about this soon.

Hal
Received on Mon Jan 19 2004 - 02:00:00 PST
