
From: Wei Dai <weidai.domain.name.hidden>

Date: Tue, 19 Dec 2000 13:33:01 -0800

On Tue, Dec 19, 2000 at 03:18:14PM +0100, juergen.domain.name.hidden wrote:

> The speed prior is based on the most efficient way of computing
> all individually computable universes (note that the infinite random
> universes you mentioned are not computable at all, neither individually
> nor collectively by the ensemble approach) plus an additional natural
> resource-oriented postulate: the cumulative prior probability of all
> $x$ incomputable within time $t$ by this optimal algorithm should be
> $1/t$. In particular, anything that consumes uncountable resources
> even when we use the fastest way of computing everything has measure
> zero. Note that the pure description size-based priors ignore time to
> the extent that their computation does require uncountable resources,
> a notion rejected by the speed prior S.

Please explain in what sense FAST is the most efficient way to compute
everything. I couldn't find a formal definition of this idea. What makes
FAST the most natural choice for defining S? How about an alternative to
FAST, which does the same thing except that in phase i it executes 2^i
instructions of all program prefixes satisfying l(p) <= 2^i? Why not use
this algorithm instead of FAST?
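To make the contrast concrete, here is a minimal sketch (my own illustration, not from the thread; the function names are hypothetical). It assumes FAST grants each program prefix p with l(p) <= i a budget of 2^(i - l(p)) instructions in phase i, as in Schmidhuber's definition, while the proposed alternative grants 2^i instructions to every prefix with l(p) <= 2^i:

```python
# Per-phase instruction budgets, counted per binary program prefix.
# Assumption: FAST phase i runs prefix p for 2**(i - l(p)) steps if l(p) <= i;
# the alternative runs every p with l(p) <= 2**i for 2**i steps.

def fast_budget(prefix_len, phase):
    """Steps FAST grants a prefix of the given length in the given phase."""
    if prefix_len <= phase:
        return 2 ** (phase - prefix_len)
    return 0

def faster_budget(prefix_len, phase):
    """Steps the proposed alternative grants the same prefix."""
    if prefix_len <= 2 ** phase:
        return 2 ** phase
    return 0

def sequential_cost(budget, phase):
    """Total instructions executed serially in one phase, summing the
    budget over all 2**l binary prefixes of each admitted length l."""
    return sum(2 ** l * budget(l, phase) for l in range(1, 2 ** phase + 1))
```

On a serial machine the difference is stark: FAST's phase i costs on the order of i * 2^i instructions in total, while the alternative's phase i already touches roughly 2^(2^i) prefixes, which is why it only becomes attractive under the parallel model described below.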

A related issue is what effect changing the model of computation has on S.
For example, changing from a classical TM to a quantum TM makes some
strings much faster to compute, so S would seem to depend very much on the
details of the model of computation. Suppose the "great programmer" had
access to a computer that has an instruction to duplicate itself and run
two programs in parallel (and each child computer can also duplicate
itself), and suppose there's also an instruction to concatenate the
outputs of the child computers together. Then we can compute everything
much faster than FAST, by duplicating the computer whenever a new bit is
read from the program tape. In fact, with this computer the above
alternative to FAST (which we can call FASTER) only uses 2^i time units in
phase i.
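The time accounting for that model can be sketched as follows (again my own illustration, under the stated assumptions): if the machine forks on every bit read from the program tape, all admitted prefixes advance concurrently, so the wall-clock time of FASTER's phase i collapses from an astronomical serial cost to just 2^i:

```python
# Assumption: a machine that duplicates itself on each program bit read,
# so all prefixes of length <= 2**phase run in parallel.

def faster_parallel_time(phase):
    """Wall-clock time of FASTER's phase i on the duplicating machine:
    every admitted prefix runs concurrently for 2**phase steps."""
    return 2 ** phase

def faster_serial_time(phase):
    """The same phase on a serial machine: the number of binary prefixes
    of length 1..2**phase, times 2**phase steps each."""
    n_prefixes = 2 ** (2 ** phase + 1) - 2
    return n_prefixes * 2 ** phase
```

Even at phase 3 the serial cost is already 2^3 * (2^9 - 2) = 4080 steps against 8 parallel time units, and the gap grows doubly exponentially from there.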

A speed prior based on FASTER would not lead to the conclusion that any
apparent randomness is pseudorandom, and if the above model of computation
is used, it would also satisfy Postulate 6.1.

Received on Tue Dec 19 2000 - 13:42:37 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST