Re: Predictions & duplications

From: <juergen.domain.name.hidden>
Date: Thu, 11 Oct 2001 13:39:55 +0200

> > > From R.Standish.domain.name.hidden :
> > > juergen.domain.name.hidden wrote:
> > > >
> > > > So you NEED something additional to explain the ongoing regularity.
> > > > You need something like the Speed Prior, which greatly favors regular
> > > > futures over others.
> > >
> > > I take issue with this statement. In Occam's Razor I show how any
> > > observer will expect to see regularities even with the uniform prior
> > > (comes about because all observers have resource problems,
> > > incidentally). The speed prior is not necessary for Occam's Razor. It is
> > > obviously consistent with it though.
> >
> > First of all: there is _no_ uniform prior on infinitely many things.
> > Try to build a uniform prior on the integers. (Tegmark also wrote that
> > "... all mathematical structures are a priori given equal statistical
> > weight," but of course this does not make much sense because there is
> > _no_ way of assigning equal nonvanishing probability to all - infinitely
> > many - mathematical structures.)
>
> I don't know why you insist on the prior being a PDF. It is not
> necessary. With the uniform prior, all finite sets have vanishing
> probability. However, all finite descriptions correspond to infinite
> sets, and these infinite sets have non-zero probability.

Huh? A PDF? You mean a probability density function? On a continuous set?
No! I am talking about probability distributions on describable objects.
On things you can program.
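
To spell out the integer example (nothing new, just the standard argument):
if each of infinitely many objects x_1, x_2, x_3, ... received the same
probability p, then

    \sum_{i=1}^{\infty} p = \infty \quad \text{if } p > 0, \qquad
    \sum_{i=1}^{\infty} p = 0 \quad \text{if } p = 0,

so in neither case do the probabilities sum to 1. Hence there is no uniform
probability distribution on a countably infinite set of describable objects.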

Anyway, you write "...observer will expect to see regularities even with
the uniform prior", but that clearly cannot be true.

> > There is at best a uniform measure on _beginnings_ of strings. Then
> > strings of equal size have equal measure.
> >
> > But then regular futures (represented as strings) are just as likely
> > as irregular ones. Therefore I cannot understand the comment: "(comes
> > about because all observers have resource problems, incidentally)."
>
> Since you've obviously barked up the wrong tree here, it's a little
> hard to know where to start. Once you understand that each observer
> must equivalence an infinite number of descriptions due to the
> boundedness of its resources, it becomes fairly obvious that the
> smaller, simpler descriptions correspond to larger equivalence classes
> (hence higher probability).

Maybe you should write down formally what you mean? Which resource bounds?
On which machine? What exactly do you mean by "simple"? Are you just
referring to the traditional Solomonoff-Levin measure and the associated
old Occam's razor theorems, or do you mean something else?
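
For reference: by the traditional Solomonoff-Levin measure I mean, roughly,
the universal semimeasure

    M(x) = \sum_{p \,:\, U(p) = x\ldots} 2^{-\ell(p)},

the sum over all (minimal) programs p whose output on a universal monotone
machine U starts with x, each weighted by 2^{-\ell(p)}, where \ell(p) is the
length of p in bits. Note that runtime plays no role whatsoever in M.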

> > Of course, alternative priors lead to alternative variants of Occam's
> > razor. That has been known for a long time - formal versions of Occam's
> > razor go at least back to Solomonoff, 1964. The big question really
> > is: which prior is plausible? The most general priors we can discuss are
> > those computable in the limit, in the algorithmic TOE paper. They do not
> > allow for computable optimal prediction though. But the more restrictive
> > Speed Prior does, and seems plausible from any programmer's point of view.
> >
> > > The interesting thing is of course whether it is possible to
> > > experimentally distinguish between the speed prior and the uniform
> > > prior, and it is not at all clear to me that it is possible to
> > > distinguish between these cases.
> >
> > I suggest looking at experimental data that seems to have Gaussian
> > randomness in it, such as interference patterns in slit experiments.
> > The Speed Prior suggests the data cannot be really random, but that a
> > fast pseudorandom generator PRG is responsible, e.g., by dividing some
> > seed by 7 and taking some of the resulting digits as the new seed, or
> > whatever. So it's verifiable - we just have to discover the PRG method.
> >
>
> I can't remember which incompleteness result it is, but it is
> impossible to prove the randomness of any sequence. In order to
> falsify your theory one would need to prove a sequence to be
> random. However, of course if all known sequences are provably
> pseudo-random (i.e., compressible), then this would constitute pretty
> good evidence. However, this is a tall order, as there is no algorithm
> for generating the compression behind an arbitrary sequence.

You are talking falsifiability. I am talking verifiability. Sure, you
cannot prove randomness. But that's not the point of any inductive
science. The point is to find regularities if there are any. Occam's
razor encourages us to search for regularity, even when we do not know
whether there is any. Maybe some PhD student tomorrow will discover a
simple PRG of the kind I mentioned, and get famous.
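
Just to make the flavor concrete: below is a deliberately crude toy in
Python, in the spirit of the divide-by-7 recipe quoted above. It is only
meant to illustrate how a short, fast deterministic rule can emit digit
streams; the particular constants and digit positions are arbitrary, and of
course no claim is made that nature uses this specific rule.

    # Toy pseudorandom digit generator: repeatedly "divide the seed by 7"
    # (in fixed-point integer arithmetic), emit one of the resulting
    # digits, and reuse some of the other digits as the new seed.
    def toy_prg(seed, n):
        digits = []
        for _ in range(n):
            q = str((seed * 10**12) // 7)  # seed/7, keeping 12 extra digits
            digits.append(int(q[-1]))      # emit the last digit
            seed = int(q[2:8]) + 1         # some middle digits form the new seed
        return digits

    print(toy_prg(314159, 30))

The point of the example is only that such a rule is short _and_ fast;
under a runtime-aware prior like the Speed Prior, data produced this way is
far more probable than genuinely random data.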

It is important to see that Popper's popular, frequently cited, and
overrated concept of falsifiability does not really help much to explain
what an inductive science such as physics is all about. For example,
physicists accept Everett's ideas although most of his postulated parallel
universes will remain inaccessible forever and are therefore _not_
falsifiable. Clearly, what is convincing about the multiverse theory is
its simplicity, not its falsifiability. This is in line with Solomonoff's
theory of inductive inference and Occam's razor, which is not just a
wishy-washy philosophical framework like Popper's.

Similarly, today's string physicists accept theories for their simplicity,
not their falsifiability, just as nobody is able to test whether gravity
is the same on Sirius, yet assuming it is makes things simpler.

Again: the essential question is: which prior is plausible? Which
represents the correct notion of simplicity? Solomonoff's traditional
prior, which does not care for temporal complexity at all? Even more
general priors computable in the limit, such as those discussed in
the algorithmic TOE paper? Or the Speed Prior, which embodies a more
restricted concept of simplicity that differs from Kolmogorov complexity
because it takes runtime into account, in an optimal fashion?
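
Very roughly, and ignoring constant factors: while the traditional prior
weights each program p computing a continuation of x by 2^{-\ell(p)} alone,
the Speed Prior additionally discounts by runtime, something like

    S(x) \;\approx\; \sum_{p \,:\, U(p) = x\ldots} \frac{2^{-\ell(p)}}{t_p(x)},

where t_p(x) is the number of steps p needs to produce x. The precise
definition is in terms of the phase-wise algorithm FAST rather than this
sum, but the runtime discount is the essential difference from
Kolmogorov-style complexity.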

Juergen Schmidhuber

http://www.idsia.ch/~juergen/
http://www.idsia.ch/~juergen/everything/html.html
http://www.idsia.ch/~juergen/toesv2/
