On 1 Feb 2005 Hal Finney wrote:
>Here is how I approach it, based on Schmidhuber. Suppose we pick a model
>of computation based on a particular Universal Turing Machine (UTM).
>Imagine this model being given all possible input tapes. There are an
>uncountably infinite number of such tapes, but on any given tape only
>a finite length will actually be used (i.e. the probability of using
>an infinite number of bits of the tape is zero). This means that any
>program which runs is of only finite size, yet occurs on an infinite
>number of tapes. The fraction of the tapes which hold a given program
>is proportional to 1 over 2^(program length), if they are binary tapes.
>This is considered the measure of the given program.
>
>An equivalent way to think of it is to imagine the UTM being fed with
>a tape created by coin flips. Now the probability that it will run a
>given program is its measure, and again it will be proportional to 1
>over 2^(program length).
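
To make the coin-flip version concrete, here is a minimal Monte Carlo
sketch in Python (my own illustration, not anything from Hal's post;
the 5-bit "program" and the prefix-only check stand in for a real
prefix-reading UTM):

    # Estimate the measure of one hypothetical 5-bit "program" by feeding
    # random coin-flip tapes to an imagined prefix-reading machine. The
    # machine itself is abstracted away: we only test whether the tape
    # begins with the program's bits, since only those bits are ever read.
    import random

    program = [1, 0, 1, 1, 0]   # an arbitrary 5-bit program (illustrative)
    trials = 1_000_000
    hits = 0
    for _ in range(trials):
        # Only the first len(program) flips matter; the rest of the
        # (conceptually infinite) tape is never consumed.
        prefix = [random.randint(0, 1) for _ in range(len(program))]
        if prefix == program:
            hits += 1

    print(hits / trials)        # converges on 2**-5 = 0.03125

The estimate settles near 1 over 2^(program length), which is the
measure described above.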
It sounds like this program-to-generate-all-programs includes a mechanism
whereby there is a Pr=0.5 that it will halt (and go on to the next program)
as it generates each bit. This will do the job, and it will favour shorter
programs, but why rely on this particular program? And if you consider not a
program-to-generate-all-programs, but simply the set of all possible
programs, how do you arrive at this 1/2^n measure?
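
For what it's worth, here is one reading of that halting mechanism as a
Python sketch (again my own illustration; the order of the halt and bit
coin flips is an assumption, since the description leaves it open):

    import random
    from collections import Counter

    def generate_program():
        bits = []
        while random.random() < 0.5:   # Pr = 0.5: emit another bit; else halt
            bits.append(random.randint(0, 1))
        return tuple(bits)

    counts = Counter(generate_program() for _ in range(1_000_000))
    for n in range(4):
        mass = sum(c for p, c in counts.items() if len(p) == n)
        # Total mass at length n is about 2**-(n+1): it halves per extra
        # bit. A *specific* n-bit program therefore appears with
        # probability (1/2) * 4**-n under this reading, since its bit
        # values must also match -- shorter programs are still favoured,
        # but the weighting differs from a flat 1/2^n over programs.
        print(n, mass / 1_000_000)

So the exponential preference for shorter programs does fall out of the
mechanism, but the exact weights depend on which generating program one
picks, which is the point of the question above.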
--Stathis Papaioannou