Hal Finney wrote:
>I'm not sure that the number of high-decimal calculations will inherently
>be much greater than the number of low-decimal calculations. Here is
>my reasoning.
>
>Your overall idea seems correct, that you could imagine a universe
>simulator which is performing a real-number calculation to finite
>precision. This is of course what our computers do all the time. And
>further, the size of the program is not very sensitive to the precision
>of the real numbers. Crudely, a Fortran program using single precision
>reals is about the same size as the same program using double precision.
>
>In the case of a TM emulating a real-number based universe, the precision
>it uses could be thought of as a parameter to the program, something which
>is entered once and is then used throughout the program as a sort of loop
>counter to tell how far to extend each calculation. Since this value is
>only entered once, its size is only counted once, and so it does not
>contribute very much to the overall size of the program.
>
>However there is of course a limit to the size of the parameter, since
>the overall program itself has a finite size. We can't take a great
>deal more space to specify the precision parameter than the size of the
>program itself, without reducing the measure significantly.
I am not sure there should be a limit to the size of the parameter.
With dynamical data structures (like LISP's lists) you can write programs
which are able to handle reals with a finite but arbitrary number of
decimals. You can also write programs working with arbitrary computable
reals, etc. This indeed will not change significantly the length of the
programs involved; a small sketch below illustrates the point.
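For instance (a minimal Python sketch of my own, not anyone's actual
program): the precision is entered once, as a single argument, so the text
of the program does not grow when the precision does.

    import math

    def sqrt2_digits(precision):
        # First `precision` decimal digits of sqrt(2), as a list of ints.
        # Whether precision is 10 or 10000, the program text is the same size.
        scaled = math.isqrt(2 * 10 ** (2 * precision))  # floor(sqrt(2) * 10**precision)
        return [int(d) for d in str(scaled)[1:]]        # drop the leading "1"

    print(sqrt2_digits(10))   # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]

Nothing here is specific to sqrt(2); the same kind of parameterisation works
for any computable real, which is the sense in which arbitrary precision does
not lengthen the program significantly.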
But this misses Thisell's point, which bears on the *relative* probability
of a computational continuation (cf. RSSA), not on the universal prior (ASSA).
Some time ago I argued with Wei Dai that both ASSA and RSSA are useful in
the search for a measure.
So I think Thisell is right when he said <<the number of computations in the
Schmidhuber plenitude using an insanely high number of decimals is a lot
higher than the ones that use a specific measurable but life-permitting
precision>>.
He is wrong when he suggests this is a trivial matter!
Let us look at what happens precisely with the UD (the Universal Dovetailer):
Suppose you have a program with real parameters, working dynamically with
arbitrary precision. Suppose that to simulate the phenomenon X correctly,
it needs only finite precision (the first 10 decimals, let us say).
For example, let _alpha be one of those real parameters.
Suppose _alpha is equal to 0,0004659086 81230099674437119967372202...
But for simulating the phenomenon X correctly, the following approximation
of _alpha, 0,0004659087, is enough.
Now the UD will, in particular, dovetail on all the continuations between
0,0004659086 0000000000000000000000000000000000000000000000000000
0,0004659086 9999999999999999999999999999999999999999999999999999,
(such dovetailing is itself produced by a very small program).
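To make that "very small program" concrete, here is a toy Python sketch of
such a dovetailing on the continuations of the prefix (my own illustration,
not the UD itself; a real UD would also interleave the execution steps of
each continuation with the generation of new ones):

    from itertools import product

    PREFIX = "0.0004659086"   # the first 10 decimals of _alpha

    def phenomenon_X(alpha):
        # Stand-in for "emulating the phenomenon X": by hypothesis X depends
        # only on the first 10 decimals, i.e. on the shared prefix.
        return alpha.startswith(PREFIX)

    def dovetail(stages):
        # At stage k, generate every continuation of the prefix by k more
        # digits; each one is a version of _alpha on which the simulation of
        # X is run, and each one emulates X correctly.
        correct = 0
        for k in range(1, stages + 1):
            for extra in product("0123456789", repeat=k):
                alpha = PREFIX + "".join(extra)
                if phenomenon_X(alpha):   # true for every continuation
                    correct += 1
        return correct

    print(dovetail(3))   # 1110: all 10 + 100 + 1000 continuations are correct

In the limit over all stages, every infinite continuation of the fixed prefix
is approximated arbitrarily closely.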
This means that in the running of the UD, a very big number of correct
emulations (as far as the phenomenon X is concerned) will be done. In the
limit, the UD will emulate a continuum (2^ALEPH_0) of correct versions of
the phenomenon, since the infinite continuations of the fixed 10-decimal
prefix correspond to all infinite digit sequences.
If you remember that the time of occurrence of the emulation cannot be
taken into account *from the possible first person points of view*, you
realise that the measure must take that continuum of continuations into
account (°). Note that this remark is orthogonal to the universal prior
question, because a program handling reals with single precision is not
much shorter than a program handling reals with arbitrarily high precision
(here I guess you are right).
Happy New Year,
Bruno
(°) I agree with Jacques Mallah that the measure depends on the number of
implementations (except that those implementations don't need to be
*physical*; in particular, the term "physical" has no a priori meaning at
this stage).
Received on Mon Jan 03 2000 - 05:59:47 PST