Jesse Mazer wrote:
>
> Russell Standish wrote:
>
> >Thanks for the rap! I sent my previous post before coming across this
> >one. Yes your summary is correct. However section 4 goes on to show
> >what sort of universe we expect to see ourselves in (i.e. the Multiverse
> >we find ourselves in - it's the best explanation I've come across yet
> >for Quantum Mechanics), based on some pretty simple, and one would
> >hope uncontroversial assumptions about what it means to be conscious.
> >
> >To go further on the measure problem would require attaching a
> >particular property of our observed universe to the anthropic
> >principle - e.g. why we find ourselves in 4D Minkowski
> >spacetime. Tegmark has some speculations on this matter, but they
> >don't go far enough.
>
> Doesn't your scheme assume something like "one turing machine, one vote"
> though? On the universal prior page you say:
>
> "the natural measure induced on the ensemble of bitstrings is the uniform
> one, i.e. no bitstring is favoured over any other."
No - it's "one description, one vote". Nonuniform measures arise out of
equivalencing descriptions through interpretation. The universal prior
is what you get when you use a universal Turing machine as your
interpreting device.
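
As a toy sketch of how an interpreting device turns uniformly weighted
descriptions into a nonuniform measure (the collapsing rule here is purely
hypothetical - the real universal prior uses a genuine universal Turing
machine as the interpreter):

```python
from collections import defaultdict
from itertools import product

def toy_interpreter(program: str) -> str:
    # Hypothetical stand-in for the interpreting device: collapse
    # runs of repeated bits.  Many descriptions map to the same
    # output - that many-to-one mapping is the equivalencing.
    out = []
    for bit in program:
        if not out or out[-1] != bit:
            out.append(bit)
    return "".join(out)

def induced_prior(max_len: int = 10) -> dict:
    # Weight every description p uniformly by length, 2**-len(p),
    # then sum the weights of all descriptions interpreted as the
    # same output (truncated at max_len; the real sum is infinite).
    m = defaultdict(float)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            m[toy_interpreter(p)] += 2.0 ** -n
    return dict(m)

m = induced_prior()
# Simple outputs like "0" are reached by many descriptions and so
# acquire far more measure than outputs reachable only by themselves.
```
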
My key point was that the obvious interpreting device is the observer
erself. I fudge the details a bit in "Occam" by claiming that
observers should be capable of universal computation (which appears to
be true of homo sapiens), in which case the universal prior is what
one should observe. But really, any equivalencing mechanism will do,
even ones that generate your pathological measure distributions below.
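
For instance (a toy illustration of my own, not from "Occam"), pushing the
uniform measure on fixed-length descriptions through two different
equivalencing maps:

```python
from collections import Counter
from itertools import product

def induced_measure(equivalence, n: int = 8) -> dict:
    # Start from the uniform measure on length-n bitstrings and push
    # it through an equivalencing map: each class receives the
    # fraction of descriptions that land in it.
    counts = Counter(equivalence("".join(bits))
                     for bits in product("01", repeat=n))
    return {cls: c / 2 ** n for cls, c in counts.items()}

# A mild equivalencer: identify descriptions by their first bit.
mild = induced_measure(lambda s: s[0])

# A pathological equivalencer: identify every description with a
# single class, yielding a measure 100% concentrated on one outcome.
point_mass = induced_measure(lambda s: "X")
```
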
In a later paper "On Complexity and Emergence", I show that an
observer's concept of complexity is inextricably related to this
equivalencing process. One distinct difference between how homo
sapiens does things and how Turing machines do things is that random
strings (or at least patternless strings) have almost zero complexity
to humans (they are all meaningless strings, and equivalent), whereas
to a Turing machine, they are all distinct and have maximum complexity.
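
A rough illustration of that difference, using compressed length as a crude
stand-in for Turing-machine complexity and a hypothetical "lump all
patternless strings together" rule for the human side:

```python
import random
import zlib

def tm_complexity(s: bytes) -> int:
    # Turing-machine-style proxy: length of the zlib-compressed
    # string.  Every distinct random string scores near-maximally.
    return len(zlib.compress(s))

def human_complexity(s: bytes) -> int:
    # Toy observer-as-interpreter: strings that show no pattern are
    # all equivalenced into one "meaningless noise" class, so any
    # individual one carries almost no complexity.  (Hypothetical
    # criterion: "patternless" = fails to compress at all.)
    c = tm_complexity(s)
    return 1 if c >= len(s) else c

random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(1000))
pattern = b"01" * 500

# The two measures roughly agree on the patterned string but diverge
# wildly on the random one.
```
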
>
> On the other hand, Michiel de Jong said:
>
> "although there is no global measure (as in option 1), Solomonoff's
> universal prior allows us to make predictions _as_if_ there were one,
> because it approximates any candidate measure within O(1)."
>
> When he says it "approximates any candidate measure", does this mean there
> is some general class of measures for which the universal prior is a "good
> enough" approximation in some sense? Obviously not *all* measures would
> work, since I could pick a measure that was 100% concentrated on a
> particular bitstring and 0% on all the others, and that'd yield predictions
> quite different from those based on the universal prior. Juergen
> Schmidhuber's paper goes into more detail on the class of measures that the
> universal prior is a "good enough" approximation for, right? Maybe I need
> to go read that...
>
> Anyway, it may be that for most "plausible" measures the universal prior is
> a good approximation, in which case using it is perfectly justified. But it
> still seems that for a complete TOE we must address the measure problem in a
> more direct way, in order to rule out weird measures like the one I
> mentioned...using the universal prior might turn out to be a bit like
> "renormalization" in quantum field theory, i.e. a tool that's useful for
> making calculations but probably isn't going to be the basis of our final
> TOE.
>
>
----------------------------------------------------------------------------
Dr. Russell Standish Director
High Performance Computing Support Unit, Phone 9385 6967
UNSW SYDNEY 2052 Fax 9385 6965
Australia R.Standish.domain.name.hidden
Room 2075, Red Centre
http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Received on Sun Mar 18 2001 - 14:39:17 PST