
From: Wei Dai <weidai.domain.name.hidden>

Date: Mon, 18 Dec 2000 16:01:20 -0800

On Mon, Dec 18, 2000 at 03:39:05PM +0100, juergen.domain.name.hidden wrote:

> Not quite true - S just predicts the pseudorandom number generator is
> also fast. Even the most dominant priors mu^G and mu^E (more dominant
> than the enumerable priors studied in the classic work on Occam's
> razor by Solomonoff and Levin and Gacs) cannot help assigning vanishing
> probability to truly random universes - see Theorems 5.3-5.6. But mu^G
> and mu^E do assign nonvanishing probability to certain pseudorandom
> universes whose pseudorandomness is hard or even impossible to detect
> because the algorithm for computing the pseudorandomness consumes
> excessive time. Compare Example 2.1 and Section 7. The speed prior S,
> however, must necessarily assign low probability to such hard-to-compute
> universes. Roughly speaking, with S the collective probability of all
> universes incomputable in phase i of the most efficient way of computing
> all universes is discounted by 2^-i (Def 6.3). In particular, zero
> probability is assigned to universes whose pseudorandomness consumes
> more than countably many resources - from a computational point of view
> uncountable resources do not make sense, hence infinite random strings
> do not make sense either.

You haven't addressed the issue of why it makes sense that S assigns about
2^(-2n) to random strings instead of 2^-n. If a prior assigns about
2^-n to random strings, then it does not predict that a pseudorandom
number generator is driving the apparent randomness that we observe. This
is because although each individual non-pseudorandom universe has very
small probability, there are many more of them, and these two factors
cancel out. To see this, consider a program prefix X that implements the
laws of our universe and reads the rest of the input tape to obtain any
bits needed to drive random occurrences, and compare it with a program
prefix Y that implements the laws of our universe but treats the rest of
the tape as a pseudorandom number generator program to obtain these
"random" bits. It's reasonable to assume that X is a little shorter than
Y. Under the speed prior S, the set of programs that begin with X
contributes less to the measure than the set of programs that begin with
Y. But the reverse is true under any of the other priors mentioned in your
paper.
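The cancellation argument can be sketched numerically. This is only an
illustration, not anything from the paper: the prefix lengths LX and LY are
made-up values, and the prior is the plain 2^-length weighting over program
prefixes.

```python
# Illustrative sketch of the X-vs-Y argument under a 2^-length prior.
# LX and LY are hypothetical lengths, with X (reads raw random bits)
# assumed slightly shorter than Y (contains a pseudorandom generator).

from fractions import Fraction

LX, LY = 10, 14  # made-up lengths; the only assumption is LX < LY

def contribution_uniform(prefix_len, n_random_bits):
    # Each of the 2^n continuations has probability 2^-(prefix_len + n),
    # so summed over all continuations the 2^n multiplicity cancels 2^-n.
    return Fraction(2**n_random_bits, 2**(prefix_len + n_random_bits))

# X's total contribution is 2^-LX no matter how many random bits the
# universe consumes...
for n in (0, 8, 32):
    assert contribution_uniform(LX, n) == Fraction(1, 2**LX)

# ...and since LX < LY, the truly random branch X dominates the
# pseudorandom branch Y under such a prior.
print(contribution_uniform(LX, 32) > Fraction(1, 2**LY))  # True
```

Under a prior that instead charges 2^-2n for n random bits, the multiplicity
no longer cancels, which is exactly the asymmetry at issue.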

Let me reemphasize that each new bit of information should cost a factor
of 2, not 4. The latter leads to the counterintuitive conclusion that any
apparent randomness in the universe is actually the output of a
pseudorandom number generator, which implies, for example, that it's
theoretically possible to compute the entire history of the universe from
the information obtained from a single tea leaf.
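The factor-of-2-versus-4 point can also be made concrete. The following is a
toy calculation of my own, not a computation from the paper: it just compares
the total mass of all 2^n random continuations when each bit costs a factor
of 2 versus a factor of 4.

```python
# Toy comparison: total probability mass of all 2^n random n-bit
# continuations when each new bit costs a factor of `cost_per_bit`.

from fractions import Fraction

def total_random_mass(n, cost_per_bit):
    # 2^n equally weighted continuations, each of mass cost_per_bit^-n
    return Fraction(2**n, cost_per_bit**n)

# Cost 2 per bit: the mass stays constant at 1 for every n, so genuine
# randomness is not penalized relative to a pseudorandom explanation.
assert all(total_random_mass(n, 2) == 1 for n in range(1, 20))

# Cost 4 per bit (i.e. 2^-2n for an n-bit string): the mass shrinks as
# 2^-n, so the prior increasingly favors the shorter pseudorandom program.
print([str(total_random_mass(n, 4)) for n in (1, 2, 3)])  # ['1/2', '1/4', '1/8']
```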

Received on Mon Dec 18 2000 - 16:19:45 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST