Niclas Thisell, <niclas.domain.name.hidden>, writes:
> One of the initial gripes I had with the Turing approach to the
> plenitude (vs e.g. the grand ensemble) was that I figured it would favor
> a finite resolution lattice and finite resolution calculations. And I
> find it very hard to believe that we will ever actually detect an
> 'absolute' lattice. And it's even harder to believe that we will ever
> notice that nature uses a finite number of decimals. This issue has been
> discussed before and some of you found that a Turing machine won't
> properly handle true 'reals'.
>
> The answer is, of course, that the number of computations in the
> Schmidhuber plenitude using an insanely high number of decimals is a lot
> higher than the ones that use a specific measurable but life-permitting
> precision. The measure of a computation using 10^10^10 decimals is
> roughly the same as one using 10^10^10^10 decimals. And the computations
> themselves will most likely remain virtually identical throughout the
> history of the universe and the observer-moments will be identical. The
> same goes for grid spacing (and grid extent, for that matter). Therefore
> observer-moments in a universe using precision indistinguishable from
> 'reals' and a lattice indistinguishable from a continuum seem to be
> favoured.
I'm not sure that the number of high-decimal calculations will inherently
be much greater than the number of low-decimal calculations. Here is
my reasoning.
Your overall idea seems correct: you could imagine a universe
simulator which performs its real-number calculations to finite
precision. This is of course what our computers do all the time. And
further, the size of the program is not very sensitive to the precision
of the real numbers. Crudely, a Fortran program using single precision
reals is about the same size as the same program using double precision.
In the case of a TM emulating a real-number based universe, the precision
it uses could be thought of as a parameter to the program, something which
is entered once and is then used throughout the program as a sort of loop
counter to tell how far to extend each calculation. Since this value is
only entered once, its size is only counted once, and so it does not
contribute very much to the overall size of the program.
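
To make this concrete, here is a minimal sketch in Python (my own
illustration, not anything from an actual simulator): the precision is
set once, as a single parameter, and the rest of the "universe program"
is textually identical no matter how many digits are carried.

    from decimal import Decimal, getcontext

    def simulate(steps, precision_digits):
        getcontext().prec = precision_digits   # the single precision parameter
        x = Decimal("0.1")                     # toy initial condition
        r = Decimal("3.9")
        for _ in range(steps):                 # the same update rule, however
            x = r * x * (1 - x)                # many digits are carried
        return x

    # Only the parameter differs between these calls; the program text
    # does not grow with the precision.
    low  = simulate(1000, 50)
    high = simulate(1000, 500)
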
However, there is of course a limit to the size of the parameter, since
the overall program itself has a finite size. We can't devote a great
deal more space to specifying the precision parameter than the rest of
the program occupies without reducing the measure significantly.
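
Numerically (a sketch under the usual assumption that a program of
length l contributes measure proportional to 2^-l):

    # If specifying a larger precision parameter costs k extra bits of
    # program, the measure drops by a factor of 2**(-k).
    def relative_measure(extra_bits):
        return 2.0 ** -extra_bits

    print(relative_measure(10))    # ~1/1000 of the measure for 10 extra bits
    print(relative_measure(100))   # ~8e-31 for 100 extra bits
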
Now, the question is, given this much space to specify the bits of precision
for the calculation, what will be the probability distribution of the
actual precision values? Suppose we say that the precision is specified
by a program which will calculate and write the actual precision value,
in unary, onto a specific region of the TM tape. This value will then be
referred to in all of the real-number calculations to determine how much
precision to use.
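
A minimal sketch of that construction, with the tape represented (my
choice) as a Python list and the reserved region holding the precision
in unary:

    def read_precision(tape, start, end):
        """Count the unary marks in the reserved region."""
        return sum(1 for cell in tape[start:end] if cell == 1)

    def truncate(digits, precision):
        """Every real-number routine keeps only this many digits."""
        return digits[:precision]

    tape = [1, 1, 1, 1, 0, 0, 0, 0]               # four marks => precision 4
    prec = read_precision(tape, 0, 8)
    print(truncate([1, 0, 1, 1, 0, 1, 1], prec))  # -> [1, 0, 1, 1]
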
So, looking at all programs of a certain size which produce well-formed
numeric precision outputs, what is the probability distribution of the
output values? I think Niclas' point comes down to the question of
whether the majority of these values will be very large (in some sense).
Certainly there are programs which will produce very large values.
As Niclas writes, something like 10^10^10^10^...^10 is astronomically
large but specified by a very short string. We could imagine this
number being produced by having the precision-specifying region consist
of a short arithmetic interpreter which can handle numbers and operators
(like *, +, ^), and produce the numeric output from a short input string.
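
A toy version of such an interpreter (my own sketch; the operator set
and the right-associativity of ^ are arbitrary choices) shows how few
characters of input it takes to name an enormous number:

    def evaluate(expr):
        # Split on the lowest-precedence operator first: + before *, * before ^.
        for op, fn in (("+", lambda a, b: a + b),
                       ("*", lambda a, b: a * b)):
            if op in expr:
                left, right = expr.split(op, 1)
                return fn(evaluate(left), evaluate(right))
        if "^" in expr:                       # right-associative exponentiation
            base, rest = expr.split("^", 1)
            return int(base) ** evaluate(rest)
        return int(expr)

    print(evaluate("2^2^2^2"))   # 65536; "10^10^10^10" is just as short to
                                 # write, but far too large ever to compute out
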
However such programs will be in the minority; most programs will not
produce well-formed output, and of those that do, few will be busy-beaver
programs that produce vastly huge output, as I understand it. I don't
have a strong understanding of these issues, but my sense is that a
program chosen randomly from the set that produces well-formed output
will probably not produce a very large number. There will be a great
many programs that just flail around and produce a number like 1, or
2, as the output. Some will go forever and never stop (and hence not
produce well-formed output). Only a few will be the clever ones that
can unfold some tight internal representation into a super-large number
like Niclas' examples.
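
To get a feel for this, here is a toy enumeration in the same spirit
(entirely my own construction: an arbitrary four-symbol alphabet, a
small length cap, and Python's own evaluator standing in for "produces
well-formed output"):

    from itertools import product

    ALPHABET = "12+^"
    MAX_LEN = 5
    LARGE = 10**6

    def run(expr):
        """Return the numeric output, or None if the string is not well-formed."""
        try:
            value = eval(expr.replace("^", "**"), {"__builtins__": {}})
            return value if isinstance(value, int) else None
        except Exception:
            return None

    total = valid = large = 0
    for n in range(1, MAX_LEN + 1):
        for tokens in product(ALPHABET, repeat=n):
            total += 1
            value = run("".join(tokens))
            if value is None:
                continue            # most strings are not well-formed at all
            valid += 1
            if value > LARGE:
                large += 1          # the rare ^-towers that blow up

    print(total, valid, large)      # large outputs are a minority of the
                                    # well-formed ones
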
On this basis I would claim that the number of universe simulators that
calculate real numbers to small or moderate precision would be greater
than the number that calculate real numbers to very high precision.
Hence I would think that a Schmidhuber ensemble theory would predict that
our universe has only as much precision as it needs for life to evolve.
From what I understand, this does not seem to match very well with what
we observe, but our theories are still incomplete and it may turn out
that they can be expressed in some form consistent with this prediction.
Hal Finney
Received on Wed Dec 29 1999 - 10:26:43 PST