Re: Why no white talking rabbits?

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Sun, 11 Jan 2004 01:18:51 -0500

Hal Finney wrote:
>
>Jesse Mazer writes:
> > Hal Finney wrote:
> > >However, I prefer a model in which what we consider equally likely is
> > >not patterns of matter, but the laws of physics and initial conditions
> > >which generate a given universe. In this model, universes with simple
> > >laws are far more likely than universes with complex ones.
> >
> > Why? If you consider each possible distinct Turing machine program
> > to be equally likely, then as I said before, for any finite
> > complexity bound there will be only a finite number of programs with
> > less complexity than that, and an infinite number with greater
> > complexity, so if each program had equal measure we should expect the
> > laws of nature to always be more complex than any possible finite
> > rule we can think of. If you believe in putting a measure on
> > "universes" in the first place (instead of a measure on first-person
> > experiences, which I prefer), then for your idea to work the measure
> > would need to be biased towards smaller programs/rules, like the
> > "universal prior" or the "speed prior" that have been discussed on
> > this list by Juergen Schmidhuber and Russell Standish (I think you
> > were around for these discussions, but if not see
> > http://www.idsia.ch/~juergen/computeruniverse.html and
> > http://parallel.hpc.unsw.edu.au/rks/docs/occam/occam.html for more
> > details)
>
>No doubt I am reiterating our earlier discussion, but I can't easily find
>it right now. I claim that the universal measure is equivalent to the
>measure I described, where all programs are equally likely.
>
>Feed a UTM an infinite-length random bit string as its program tape.
>It will execute only a prefix of that bit string. Let L be the length
>of that prefix. The remainder of the bits are irrelevant, as the UTM
>never gets to them. Therefore all infinite-length bit strings which
>start with that L-bit prefix represent the same (L-bit) program and will
>produce precisely the same UTM behavior.
>
>Therefore a UTM running a program chosen at random will execute a
>program of length L bits with probability 1/2^L. Executing a random
>bit string on a UTM automatically leads to the universal distribution.
>Simpler programs are inherently more likely, QED.

I don't follow this argument (though I'm not very well-versed in
computability theory)--why would a UTM operating on an infinite-length
program tape only ever read a finite prefix of it? If the UTM doesn't
halt, couldn't it eventually get to every single bit?

>
> > If the "everything that can exist does exist" idea is true, then every
> > possible universe is in a sense both an "outer universe" (an independent
> > Platonic object) and an "inner universe" (a simulation in some other
> > logically possible universe).
>
>This is true. In fact, this may mean that it is meaningless to ask
>whether we are an inner or outer universe. We are both. However it
>might make sense to ask what percentage of our measure is inner vs outer,
>and as you point out to consider whether second-order simulations could
>add significantly to the measure of a universe.

What do you mean by "add significantly to the measure of a universe", if
you're saying that all programs have equal measure?

>
> > If you want a measure on universes, it's possible that universes
> > which have lots of simulated copies running in high-measure universes
> > will themselves tend to have higher measure, perhaps you could
> > bootstrap the global measure this way...but this would require an
> > answer to the question I keep mentioning from the Chalmers paper,
> > namely deciding what it means for one simulation to "contain"
> > another. Without an answer to this, we can't really say that a
> > computer running a simulation of a universe with particular laws and
> > initial conditions is contributing more to the measure of that
> > possible universe than the random motions of molecules in a rock are
> > contributing to its measure, since both can be seen as isomorphic to
> > the events of that universe with the right mapping.
>
>We have had some discussion of the implementation problem on this list,
>around June or July, 1999, with the thread title "implementations".
>
>I would say the problem is even worse, in a way, in that we not only
>can't tell when one universe simulates another; we also can't be certain
>(in the same way) whether a given program produces a given universe.
>So on its face, this inability undercuts the entire Schmidhuberian
>proposal of identifying universes with programs.
>
>However I believe we have discussed on this list an elegant way to
>solve both of these problems, so that we can in fact tell whether a
>program creates a universe, and whether a second universe simulates the
>first universe. Basically you look at the Kolmogorov complexity of a
>mapping between the computational system in question and some canonical
>representation of the universe. I don't have time to write more now
>but I might be able to discuss this in more detail later.
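
If I'm following this, the test would be something like: a physical
system S genuinely implements universe U when the complexity of the
mapping from S to U is small compared to the complexity of U itself. As
a very crude sketch of the flavor of the idea--Kolmogorov complexity is
uncomputable, so I'm substituting an ordinary compressor, and the
function names here are my own inventions:

    import zlib

    def crude_complexity(data: bytes) -> int:
        # Compressed length as a rough, computable stand-in for
        # Kolmogorov complexity (which is uncomputable).
        return len(zlib.compress(data, 9))

    def mapping_cost(system_history: bytes, canonical_universe: bytes) -> int:
        # Rough analogue of K(universe | system): the extra description
        # length needed to recover the canonical representation of the
        # universe from a record of the system's states.
        return (crude_complexity(system_history + canonical_universe)
                - crude_complexity(system_history))

On a criterion like this, a computer genuinely simulating the universe
should admit a cheap mapping to the canonical representation, while the
rock's molecular motions should only admit mappings roughly as complex
as the universe's history itself.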

Thanks for the pointer to the "implementations" thread, I found it in the
archives here:

http://www.escribe.com/science/theory/index.html?by=OneThread&t=implementations

Are you the same person as "hal" who posted the second message on that
thread? That post suggested some problems with the idea of looking at the
Kolmogorov complexity of the mapping:

>However I think there is a worse problem. That is that K. complexity is
>not uniquely defined. K. complexity is defined only with regard to some
>specific universal Turing machine (UTM). Two different UTMs will give a
>different answer for what the K. complexity is of a string or program.
>In fact, given any particular string, you can construct a UTM which gives
>it an arbitrarily large or small K. complexity as measured by that UTM.
>
>I think this objection is probably fatal to Jacques' idea. We need
>the SMA [simplest mapping algorithm] to be uniquely defined. But this
>cannot be true if there exist UTMs
>which disagree about which mapping algorithm is simplest. Within the
>mathematical framework of K. complexity, all UTMs are equally valid.
>So there is no objective preference of one over the other, hence there
>can be no objective meaning to the SMA.
>
>In order to fix this, we have to identify a particular UTM which we will
>use to measure the K. complexity. There has to be some particular Turing
>machine which is preferred by the universe. You could choose one and
>produce an "objective theory of consciousness", but this would be yet
>another ad hoc rule which would make the theory less persuasive.
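
(For what it's worth, the standard invariance theorem only bounds the
disagreement between two reference machines U and V by a
machine-dependent constant,

    K_U(x) <= K_V(x) + c_{U,V}

and since that constant can be made arbitrarily large by a perverse
enough choice of machine, the objection does seem to have real force.)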

Reading the thread, a different idea occurred to me--what if, instead of
just looking at abstract properties of different possible mappings, you
looked at how often the mappings themselves were "implemented", i.e. you
looked at the measure of the programs which actually do the mapping? If
you have some program A whose output, when fed as input to a mapping
program M, reproduces the output of a program B, then perhaps the degree
to which A contributes to B's measure would be given by something like
(measure of A)*(measure of M). There is potentially something a little
circular about deriving B's measure from the measures of all the
programs which map other programs to it, since B itself could be a
"mapping program" in another context. But I'm attracted to circular
definitions of measure, since my hope is that the true universal measure
will turn out to be a unique self-consistent solution to some set of
constraints, a bit like solving a set of simultaneous equations (this
would resolve the "arbitrariness problem" of introducing a global
measure that I discussed at
http://www.escribe.com/science/theory/m2606.html ).
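
To make the circularity concrete, here is a toy numerical sketch of the
kind of fixed point I have in mind. The three "programs", their base
weights, and the table of mapping-measures are all invented for
illustration:

    import numpy as np

    # Hypothetical base weights for three programs (think 2^-L), and
    # map_w[a][b] = combined measure of the mapping programs M that
    # carry a's output onto b's output. All numbers are made up.
    base = np.array([0.50, 0.30, 0.20])
    map_w = np.array([[0.00, 0.10, 0.05],
                      [0.20, 0.00, 0.10],
                      [0.05, 0.15, 0.00]])

    mu = base.copy()
    for _ in range(1000):
        # measure(B) = base(B) + sum over A of mu(A) * measure(M: A->B),
        # renormalized so the weights stay a probability distribution.
        new_mu = base + map_w.T @ mu
        new_mu /= new_mu.sum()
        if np.allclose(new_mu, mu, atol=1e-12):
            break
        mu = new_mu

    print("self-consistent measure:", mu)

The iteration quickly settles on a self-consistent distribution, which
is the sort of behavior I'd hope the real construction would have;
whether anything like it holds for the full space of programs is
exactly the open question.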

Jesse Mazer
