
From: <juergen.domain.name.hidden>

Date: Thu, 11 Oct 2001 09:56:19 +0200

> From R.Standish.domain.name.hidden :
> juergen.domain.name.hidden wrote:
> >
> > So you NEED something additional to explain the ongoing regularity.
> > You need something like the Speed Prior, which greatly favors regular
> > futures over others.
>
> I take issue with this statement. In Occam's Razor I show how any
> observer will expect to see regularities even with the uniform prior
> (comes about because all observers have resource problems,
> incidently). The speed prior is not necessary for Occam's Razor. It is
> obviously consistent with it though.

First of all: there is _no_ uniform prior on infinitely many things.
Try to build a uniform prior on the integers. (Tegmark also wrote that
"... all mathematical structures are a priori given equal statistical
weight," but of course this does not make much sense, because there is
_no_ way of assigning equal nonvanishing probability to all - infinitely
many - mathematical structures.)
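To spell out the normalization argument behind this claim (a minimal sketch): suppose every integer received the same probability p. Then

```latex
% No uniform distribution on the integers:
% suppose P(n) = p for every n = 1, 2, 3, ...
\sum_{n=1}^{\infty} P(n) \;=\; \sum_{n=1}^{\infty} p \;=\;
\begin{cases}
\infty & \text{if } p > 0,\\[2pt]
0      & \text{if } p = 0,
\end{cases}
```

so the total probability can never equal 1, whatever p is.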

There is at best a uniform measure on _beginnings_ of strings. Then
strings of equal size have equal measure.

But then regular futures (represented as strings) are just as likely
as irregular ones. Therefore I cannot understand the comment: "(comes
about because all observers have resource problems, incidently)."
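The equal-measure point above can be made concrete with a minimal sketch (assuming binary strings; the function name is mine, purely illustrative):

```python
# Under the uniform measure on binary string prefixes, every prefix of
# length n receives measure 2^-n, regardless of how regular or
# irregular the prefix looks.
def uniform_prefix_measure(prefix: str) -> float:
    """Measure of the set of infinite binary sequences extending `prefix`."""
    assert set(prefix) <= {"0", "1"}, "binary prefixes only"
    return 2.0 ** -len(prefix)

regular = "00000000"    # a maximally regular beginning
irregular = "01101001"  # a random-looking beginning of the same length

# Both receive exactly the same measure, 2^-8:
assert uniform_prefix_measure(regular) == uniform_prefix_measure(irregular)
```

So under this measure alone, nothing favors a regular future over an irregular one of the same length.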

Of course, alternative priors lead to alternative variants of Occam's
razor. That has been known for a long time - formal versions of Occam's
razor go back at least to Solomonoff, 1964. The big question really
is: which prior is plausible? The most general priors we can discuss are
those computable in the limit, as in the algorithmic TOE paper. They do
not allow for computable optimal prediction, though. But the more
restrictive Speed Prior does, and it seems plausible from any
programmer's point of view.

> The interesting thing is of course whether it is possible to
> experimentally distinguish between the speed prior and the uniform
> prior, and it is not at all clear to me that it is possible to
> distinguish between these cases.

I suggest looking at experimental data that seems to contain Gaussian
randomness, such as interference patterns in double-slit experiments.
The Speed Prior suggests the data cannot be truly random; instead a
fast pseudorandom generator (PRG) is responsible, e.g., one that
divides some seed by 7 and takes some of the resulting digits as the
new seed, or whatever. So it's verifiable - we just have to discover
the PRG method.
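To illustrate what such a fast generator might look like, here is a toy sketch in the spirit of the divide-by-7 idea (the rule, constants, and function name are purely hypothetical; nothing here claims to be the actual mechanism):

```python
# Hypothetical toy PRG: divide the seed by 7, keep some of the
# resulting decimal digits as the new seed, emit one digit per step.
# Purely illustrative - it only shows that such a rule is fast,
# deterministic, and hence in principle discoverable.
def toy_prg(seed: int, n: int) -> list:
    """Return n pseudorandom decimal digits from a divide-by-7 rule."""
    digits = []
    for _ in range(n):
        q = (seed * 10**8) // 7   # first decimal digits of seed/7, scaled up
        seed = q % 10**8 or 1     # keep 8 of the resulting digits as new seed
        digits.append(seed % 10)  # emit one digit per step
    return digits

# Deterministic: the same seed always reproduces the same digit stream,
# which is exactly what would make the hypothesis testable.
assert toy_prg(12345678, 5) == toy_prg(12345678, 5)
```

A real test would compare statistics of such candidate generators against the observed data; the point is only that a fast deterministic source can mimic apparent randomness.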

Juergen Schmidhuber

http://www.idsia.ch/~juergen/

http://www.idsia.ch/~juergen/everything/html.html

http://www.idsia.ch/~juergen/toesv2/

Received on Thu Oct 11 2001 - 00:56:29 PDT


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST