Re: Global measure and "one structure, one vote"

From: Michiel de Jong <Michiel.de.Jong.domain.name.hidden>
Date: Thu, 15 Mar 2001 14:51:21 +0100 (MET)

Jesse Mazer writes:
> Obviously not *all* measures would
> work, since I could pick a measure that was 100% concentrated on a
> particular bitstring and 0% on all the others, and that'd yield predictions
> quite different from those based on the universal prior.

Yes, you've pointed out a big problem there. (It's connected to Russell
Standish's remark "The trouble with this argument, ..." on page 4 of
"Why Occam's Razor".)

> Juergen Schmidhuber's paper goes into more detail on the class of
> measures that the universal prior is a "good enough" approximation
> for, right?

Yes. This class of measures corresponds to the set of all universal
programming languages (i.e., all programming languages that are
computationally equivalent to a universal Turing machine).
(By the way, your particular example, the measure that is 100%
concentrated on a single bitstring, is not necessarily in this class;
it is only if the chosen bitstring is computable.)
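
To make that a bit more concrete, here is a toy sketch of my own (not
taken from the papers): even a tiny, deliberately non-universal
"machine", given as an explicit table from programs to outputs, induces
a length-based measure on bitstrings in the same spirit as the real
construction with a universal language. All names and numbers below
are invented for illustration.

  # Toy "machine": program (a bitstring) -> the output it produces.
  # Deliberately tiny and non-universal; purely illustrative.
  TOY_MACHINE = {
      "0":   "0000000000",   # a very regular string gets a short program
      "10":  "0101010101",
      "110": "0010011000",
      "111": "1101001101",
  }

  def toy_measure(prefix):
      """Total weight of the programs whose output starts with `prefix`;
      each program p contributes 2**(-len(p)) (length-based weighting)."""
      return sum(2.0 ** -len(p)
                 for p, out in TOY_MACHINE.items()
                 if out.startswith(prefix))

  print(toy_measure("0000"))   # 0.5    (only program "0" matches)
  print(toy_measure("0010"))   # 0.125  (only program "110" matches)
  print(toy_measure("1111"))   # 0.0    (no toy program produces this)

Note that the four toy programs form a prefix-free set, so their
weights 1/2 + 1/4 + 1/8 + 1/8 sum to 1; replace the table by a
universal language and, roughly speaking, you get the universal prior
instead.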

> Maybe I need to go read that...

Yes, it's very good stuff. Read "A Computer Scientist's View of Life,
the Universe, and Everything" first. After that, if you have a bit of
background in computer science, "Algorithmic Theories of
Everything" is also very good reading. In the latter paper, Schmidhuber
distinguishes two ways to derive a measure from a universal
programming language:
1) by program length (the standard way in algorithmic information
theory);
2) by program execution time (a very interesting alternative).
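
To show the difference in flavour between 1) and 2), here is another
toy sketch of my own (only the flavour, not Schmidhuber's exact
definitions): record how many steps each toy program from above needs.
Under the length-based weighting a slow program counts as much as a
fast one of the same length; under a runtime-discounted variant it
counts for much less.

  # Same toy programs, now with an (invented) running time in steps.
  TOY_RUNS = {
      "0":   ("0000000000", 10),     # short and fast
      "10":  ("0101010101", 10),
      "110": ("0010011000", 1000),   # short but very slow
      "111": ("1101001101", 10),
  }

  def length_weight(p):                # 1) standard AIT weighting
      return 2.0 ** -len(p)

  def time_discounted_weight(p, t):    # 2) runtime-discounted flavour
      return (2.0 ** -len(p)) / t      #    (illustration, not the exact prior)

  for p, (out, t) in TOY_RUNS.items():
      print(p, length_weight(p), time_discounted_weight(p, t))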

> using the universal prior might turn out to be a bit like
> "renormalization" in quantum field theory, i.e. a tool that's useful for
> making calculations but probably isn't going to be the basis of our final
> TOE.

You're right. Your question was about the "basis of our final TOE",
and I gave you an answer about a "tool". So, after some more thinking
and reading, I want to change my answer to option 1: there is no
global measure. As Russell Standish writes in "Why Occam's Razor":

"The conscious observer is responsible, under the Anthropic Principle,
for converting the potential into actual, for creating the observed
information from the zero information of the ensemble".

If I interpret him correctly, this means that we should not try to
make any specific predictions on the basis of a global measure,
because all computable universes are possible anyway.

I think this is also what Hal Ruhl means:
> My particular approach is to produce an Everything that contains no
> information at all either absolute or relative.

I think that trying to establish a single superior global measure
already violates the zero-information principle, because it excludes
other possible measures. The probability of being in a certain universe
is determined only by the SAS making observations: its universe is
determined by the outcomes of its observations, not by some a priori
global measure.

Then, the logical next question would be:
What is the prior probability over the different outcomes of such
observations?

It looks like the problem of choosing a measure has been shifted from
the choice of universe to the repeated choice of observation outcomes.
But I think the important difference is that an observation has only
two possible outcomes, "yes" or "no", whereas the choice of universe
has infinitely many. It seems much more defensible to use the
"one-structure-one-vote" principle for observation outcomes than for a
global measure on universes.
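
To make the point concrete, here is a trivial sketch (again my own
illustration, not something from the papers): giving each of the two
outcomes of every observation one "vote", i.e. weight 1/2, already
fixes a measure on observation histories with no further choices to
make: every history of n yes/no outcomes gets weight 2^-n.

  from itertools import product

  def history_weight(history):
      """history: a string of 'y'/'n' observation outcomes,
      each outcome weighted 1/2 ("one vote" per possible outcome)."""
      return 0.5 ** len(history)

  # The weights of all histories of a given length sum to 1:
  n = 4
  total = sum(history_weight("".join(h)) for h in product("yn", repeat=n))
  print(total)   # 1.0

For a measure on universes there is no such canonical uniform choice,
precisely because there are infinitely many universes to choose from.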

I think this must have been Wheeler's motivation for adopting the
it-from-bit view. See http://suif.stanford.edu/~jeffop/WWW/wheeler.txt
for a good summary of it.


Cheers,
Michiel de Jong.
http://www.cwi.nl/~mbj
Received on Thu Mar 15 2001 - 06:07:54 PST
