Hal Finney wrote, in part:
>...
> However, there is of course a limit to the size of the parameter, since
> the overall program itself has a finite size. We can't take a great deal
> more space to specify the precision parameter than the size of the
> program itself, without reducing the measure significantly.
I agree, sort of, but I think that we should try not to ignore programs
with significantly lower measure.
> Now, the question is, given this much space to specify the bits of
> precision for the calculation, what will be the probability distribution
> of the actual precision values? Suppose we say that the precision is
> specified by a program which will calculate and write the actual
> precision value, in unary, onto a specific region of the TM tape. This
> value will then be referred to in all of the real-number calculations to
> determine how much precision to use.
>
> So, looking at all programs of a certain size which produce well-formed
> numeric precision outputs, what is the probability distribution of the
> output values? I think Niclas' point comes down to the question of
> whether the majority of these values will be very large (in some sense).
Hmmm...
Let's say we actually tried to figure out the measure M(I) for every
positive integer using the Turing-machine approach. We could consider
all bitstrings of length N and have a computer analyse the Turing
machine given by a particular bitstring. Sometimes it's hard or
impossible to know whether the execution will ever come to an end, so
let's ignore all those hard cases and only count the ones that are
'easily' analysed. We could set
M(I) = (number of length-N bitstrings that would have written I in unary) / 2^N
and we would have
sum(M(I), I=1..inf) <= 1.
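Just to make the counting concrete, here is a rough Python sketch of
what I have in mind. The helper simulate_tm is hypothetical: it is
assumed to decode a length-N bitstring as a Turing machine, run it for
at most step_limit steps, and return the integer the machine writes in
unary, or None for the 'hard' cases that don't halt cleanly within the
limit. The point is only the counting and the division by 2^N.

    from itertools import product
    from collections import defaultdict

    def estimate_measure(N, step_limit, simulate_tm):
        # simulate_tm(bits, step_limit) is a hypothetical helper: it returns
        # the integer I that the machine encoded by 'bits' writes in unary,
        # or None for machines not 'easily' analysed within step_limit.
        counts = defaultdict(int)
        for bits in product('01', repeat=N):
            I = simulate_tm(''.join(bits), step_limit)
            if I is not None:
                counts[I] += 1
        # M(I) = (number of length-N bitstrings that wrote I in unary) / 2^N
        return {I: c / 2**N for I, c in counts.items()}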
As long as we don't let N->inf, there must be an integer A such that
(1) sum(M(I), I=1..A) > sum(M(I), I=A+1..inf),
i.e. more than half of the total measure lies at or below A. In the
context of decimal-counting, we should also find another integer B such
that
sum(M(I), I=L..B) > sum(M(I), I=B+1..inf),
where L is the lowest precision that permits life. This would indicate
that, assuming we could actually build a device that detects loss of
precision up to B digits, we would expect to find that the universe is
doing a sloppy job.
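Given such a table of M values (truncated at some finite maximum I, as
any actual computation would be), finding A is just a matter of walking
the cumulative sum until it passes the remaining tail; B is the same
thing starting from L. A minimal sketch, assuming M is a dictionary like
the one returned above:

    def median_index(M, start=1):
        # Smallest A >= start such that sum(M[I], I=start..A) exceeds the
        # remaining tail, i.e. more than half of the measure from 'start'
        # onwards lies at or below A.
        total = sum(v for I, v in M.items() if I >= start)
        running = 0.0
        for I in sorted(I for I in M if I >= start):
            running += M[I]
            if running > total - running:
                return I
        return None

    # A = median_index(M)      # inequality (1)
    # B = median_index(M, L)   # the decimal-counting variant, starting at L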
But technically, we must let N->inf. And we could, for instance, find
that, for large N, we have approximately
M(I) ~ C(N)/I,
where C(N) is a normalization constant. Ignoring this constant, the sum
to infinity is the harmonic series, which diverges. I think this means
that, when the normalization constant is also taken into account, there
is no integer A such that (1) holds.
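A quick way to see this numerically: pretend M(I) is exactly
proportional to 1/I up to some cutoff I_max (the cutoff standing in for
the finite N), and watch where the halfway point A lands as the cutoff
grows. Since sum(1/I, I=1..A) grows like ln(A), A comes out near
sqrt(I_max) and runs off to infinity with the cutoff:

    def halfway_point(I_max):
        # Smallest A with sum(1/I, I=1..A) > sum(1/I, I=A+1..I_max).
        weights = [1.0 / I for I in range(1, I_max + 1)]
        total = sum(weights)
        running = 0.0
        for I, w in enumerate(weights, start=1):
            running += w
            if running > total - running:
                return I

    for I_max in (10**3, 10**4, 10**5, 10**6):
        print(I_max, halfway_point(I_max))
    # A keeps growing (roughly like sqrt(I_max)), so in the N->inf limit
    # no fixed A satisfies (1).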
Of course, M(I) could just as well turn out to be perfectly summable,
and this argument would fail. (Btw: that means you have sort of
convinced me that my initial guess was a bit too bold.)
As a side note, and not particularly related to the argument, I suspect
that, if we are to use the Schmidhuber ensemble, mathematical models
that prevent observers from actually determining the precision are
favoured.
Anyway, I wonder if A turns out to be 42 :-).
Best regards,
Niclas Thisell
Received on Thu Dec 30 1999 - 10:07:38 PST