White Rabbits and Algorithmic TOEs

From: Russell Standish <R.Standish.domain.name.hidden>
Date: Thu, 18 Jan 2001 15:30:21 +1100 (EST)

Last night I finally got around to reading Bruno's CCQ paper and
Juergen's Algo TOE paper. Thanks for the citation, guys.

Bruno, it's a bit unfair that you lump me in with Schmidhuber in your
criticism in footnote 9. (I agree your criticism of Schmidhuber's
approach is fair, BTW.) It indicates that you missed the point of
section 3 of my Occam paper - are you using the second (latest)
version of this paper? I substantially revised sections 3 & 4.

I do not assume we are inhabiting a well-defined universe amongst all
possible ones - quite the opposite, in fact. The UTM used for defining
the Universal Prior is that of the observer erself. In fact, one
doesn't need a UTM at all - all that is needed is an observer that
equivalences descriptions. The measure of a description is simply the
ratio of equivalence class sizes. (See my new Complexity and Emergence
paper for my point on effective complexity.) In section 3, the
uncountable random universes are equivalenced to their
indistinguishable law-like universe. White Rabbit universes also have
their own cloud of random variations, but the whole cloud must have
lower measure.
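
To make that concrete, here is a toy sketch (my own illustration, in
Python - not anything from the paper): take descriptions to be
bitstrings of length N, and let a hypothetical observer resolve only
the first k bits, equivalencing every description that shares that
prefix. A law-like universe needs only a short prefix, so it carries a
huge cloud of random variants and hence a large measure; a White
Rabbit universe needs a longer prefix (the laws plus the exception),
so its cloud, and with it its measure, is smaller.

# Toy illustration only: descriptions are bitstrings of length N; a
# hypothetical observer equivalences all descriptions sharing the same
# k-bit prefix.  The measure of a class is its size over the total.
from collections import Counter
from itertools import product

N = 12          # length of each description
K_LAWLIKE = 3   # bits needed to specify a simple law-like universe
K_RABBIT = 8    # bits for the same laws plus a White Rabbit exception

descriptions = [''.join(bits) for bits in product('01', repeat=N)]

def class_measure(prefix_len):
    # Group descriptions by their first prefix_len bits; every class
    # has 2**(N - prefix_len) members, so its measure is 2**(-prefix_len).
    classes = Counter(d[:prefix_len] for d in descriptions)
    return classes.most_common(1)[0][1] / len(descriptions)

print(class_measure(K_LAWLIKE))   # 0.125      (= 2**-3)
print(class_measure(K_RABBIT))    # 0.00390625 (= 2**-8)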

I don't deliberately ignore the distinction between 1st and 3rd person
viewpoints - in "Occam's razor" I'm only dealing with 1st person
viewpoints, and I'm setting aside the question of whether there even
is a 3rd person viewpoint.

Incidentally, in your "Hunting the White Rabbit (I)", I like your use of
the Feynman path metaphor. The destructive interference of long
Feynman paths in some sense corresponds to the way that random strings
are equivalenced by an observer.

Juergen's paper makes many interesting contributions on different
types of measures - definitely a step forward. I would criticise it on
two grounds: a) the assumption that the universe is running on a
particular machine, à la my Occam's razor critique and Bruno's
critique, and b) the assumption of COMP - i.e. that uncomputable
objects such as random numbers are unnecessary for consciousness. This
assumption leads to specific predictions, such as beta decay having
hidden pseudo-random correlations, and (forgive me if I'm wrong) the
prediction that quantum computing will fail.
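
(To be concrete about (a): the usual Solomonoff-Levin prior relative
to a fixed prefix machine U is M_U(x) = \sum_{p : U(p) = x*} 2^{-|p|},
i.e. a sum over programs p whose output begins with x, so any measure
obtained this way is defined only relative to that particular choice
of U.)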

Note that in my Occam paper, I explicitly assume that randomness is
needed for consciousness, and that the mechanism for producing it
(with respectable measure) is to embed the observer in a
Multiverse. This is why I end up with conventional QM. But then, I
have always been critical of COMP.

Could Juergen's work lead to falsifiability of COMP?


----------------------------------------------------------------------------
Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 R.Standish.domain.name.hidden
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------