----- Original Message -----
From: Juergen Schmidhuber <juergen.domain.name.hidden>
> ... For instance, Tegmark's statement
> "... all mathematical structures are a priori given equal statistical
> weight" [2] differs from the natural complexity-based weighting used
> in [1], which builds on the "optimal" UTM-based "universal prior" or
> Solomonoff-Levin distribution U.
> The universal prior U assigns to a bitstring s the probability of
> successively guessing all bits of a self-delimiting program that computes
> s (individual bits are guessed whenever the UTM's input scanning head
> shifts right). So we need to sum over all programs computing the same
> universe. U is "optimal" or "universal" in the sense that it dominates
> all other discrete enumerable semimeasures: under U the probability
> of any computable object is at least as large (save for a constant
> that does not depend on the object) as under any alternative discrete
> enumerable semimeasure. The particular choice of UTM does not matter
> due to the invariance theorem.
I'm afraid I find the quoted passage unclear, as I do the corresponding part
of your paper. If what you are saying is, in effect, that 'program halts' or
'closed loops' (both enabling self-delimitation) would be encountered with
equal probability at each position along the instruction string, then I can
understand why shorter functional strings would be expected to predominate.
(But my argument still stands that the UTM is a very specific, sequence-based
way of mapping from one n-tuple (an ordered list) to another (an m-tuple, with
m >> n), and so could not be considered to provide a reliable universal
measure in this way.)
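If that is the intended mechanism, then a rough example of the weighting, on my reading: a particular 20-bit self-delimiting program is guessed with probability 2^(-20), while a 30-bit one is guessed with probability 2^(-30), so the shorter program contributes about a thousand (2^10) times as much to the sum for its output string. That is the only sense in which I can see shorter programs predominating.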
If there is a further or alternative secure conceptual foundation for the
universal measure proposal (that is, another explanation for why shorter
programs predominate), it should be possible to convey the essentials without
too much recourse to specialist terminology!
Thanks
Alastair
Received on Sat Oct 23 1999 - 08:29:13 PDT