Hal Finney wrote:
> Marchal <marchal.domain.name.hidden> writes:
> > So I think Thisell is right when he said <<the number of
> > computations in the Schmidhuber plenitude using an insanely high
> > number of decimals is a lot higher than the ones that use a
> > specific measurable but life-permitting precision>>.
>
> This is true as stated, but I claim that the programs which implement
> the versions with insanely high decimal places are MUCH LARGER than
> those which implement smaller numbers of decimal places. Hence they
> have much lower measure and do not make a significant calculation.
Well, you can't prove your point that easily either. They may have lower
measure, but there are more of them. And they might add up.
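To see what is at stake, here is a toy sum with made-up numbers: suppose
pinning down k extra digits of precision costs roughly c*k extra program
bits, so each such program has measure about 2^-(L + c*k), while the
number of distinct programs at that precision grows like g^k. Whether
the high-precision versions add up then hinges on whether g reaches 2^c:

def total_measure(L, c, g, kmax):
    # Sum of g**k programs, each of measure 2**-(L + c*k), for k = 0..kmax.
    # The growth rates c and g are invented for illustration only.
    return sum(g ** k * 2.0 ** -(L + c * k) for k in range(kmax + 1))

for kmax in (50, 100, 200):
    print(kmax,
          total_measure(100, 4, 3, kmax),    # g < 2**c: partial sums settle down
          total_measure(100, 4, 20, kmax))   # g > 2**c: partial sums keep growing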
>
> The reason, as I said, is that specifying the number of decimal places
> takes space, and specifying a very large number takes more space than
> specifying a small number. This is a corollary to the well known fact
> that most strings are not compressible: most large numbers cannot be
> expressed by small programs.
Yes.
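The counting argument behind that fact is easy to make concrete, without
assuming any particular universal machine: there are 2^n strings of
length n, but fewer than 2^n descriptions shorter than n bits, so most
long strings have no program even slightly shorter than themselves.

def fraction_compressible_by(k, n):
    # Upper bound on the fraction of n-bit strings describable in <= n-k
    # bits: each description names at most one string, and there are only
    # 2**(n-k+1) - 1 descriptions of length <= n-k.
    return (2 ** (n - k + 1) - 1) / 2 ** n

for k in (1, 10, 20):
    print(k, fraction_compressible_by(k, 32))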
>
> > He is wrong when he suggests this is a trivial matter!
> >
> > Let us look on what happens precisely with the UD:
>
> The universal dovetailer creates all possible universes, using a very
> small program. By itself this does not tell you the relative measure
> of the various universes. So this line of argument does not seem to
> help in judging whether universes with high precision have lower
> measure than universes with low precision. Yes, there are more of the
> former, but they are of lower measure.
The universal dovetailer is kind of pointless for all sorts of reasons.
First of all, why would it be easier to imagine that a single
UD-computation is being performed than that all possible computations
are being performed (the UD being one of them)? Also, as noted, it
doesn't provide a measure for the individual computations.
So, how high is the measure of a computation? Schmidhuber's prior can be
rephrased as:

  O(p~) = the output of the shortest halting initial fraction p of the program p~
  M(s)  = (number of bit-strings p~ of length N with O(p~) = s) / 3^N,  N -> inf
Indeed, most programs of any significant length will probably not halt.
Then, we will have a lot of very simple outputs. In the case of unary
outputs, we will get a lot of zeroes and probably fewer ones and so
forth. The question is: how quickly does this measure decrease? (I
certainly agree that it does decrease, i.e. M(1) >> M(10^10^10), but
that is not enough to prove either point; see my other reply.)
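As a sanity check on the shape of that calculation (not on its actual
values), here is a deliberately trivial stand-in machine of my own
invention, not Schmidhuber's construction: a "program" is a bit-string,
its output is the unary run of leading 1s, and a prefix halts at its
first 0, so the all-1 string never halts. Tallying outputs over all
length-N strings gives a toy M(s) that falls off geometrically:

from itertools import product

def toy_M(N):
    # Tally the outputs of all 2**N bit-strings on the toy machine above
    # and normalise by 2**N (the non-halting all-1 string simply drops out).
    counts = {}
    for bits in product((0, 1), repeat=N):
        if 0 not in bits:
            continue                  # an all-1 prefix never halts
        s = bits.index(0)             # output = number of leading 1s
        counts[s] = counts.get(s, 0) + 1
    return {s: count / 2 ** N for s, count in sorted(counts.items())}

print(toy_M(12))   # the toy measure halves with each extra unit of output

On a genuinely universal machine nothing guarantees the fall-off is
anywhere near this fast, which is exactly the open question above.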
It's relatively easy to make both standpoints sound plausible using
simplified Turing machines etc., but I have failed to come up with a
really convincing argument.
I also sense that you have a bias against large programs. Apart from
these resolution, grid-extent and precision issues, one argument for
considering large programs is that it may not be enough to simply
consider a simulation of the universe, much as it is not enough to
consider a universal dovetailer the solution to all problems. When
doing a simulation of the universe, we pretty much leave it to the
observers to find themselves in, e.g., a wavefunction. I have no doubt
they will. But then again, I have no doubt we find ourselves where we
are without even doing the simulation in the first place. And the
question of the measure of a single Everettian world is ignored.
So, as has been suggested previously, perhaps we should require the
program to select a world-moment or an observer-moment. Needless to say,
the programs that select this exact world-moment are pretty dang large
(Though they may still be shorter/more numerous than the ones where a
dragon-rider called Niclas Thisell, having recollections of reading the
everything-list, is rescuing maidens in distress).
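As a back-of-envelope illustration, with the candidate counts invented:
selecting one specific world-moment out of K candidates generated by a
short "everything" program costs roughly log2(K) extra bits of index on
top of that program.

from math import log2

for K in (10**20, 10**80, 10**120):    # invented candidate counts
    print(K, round(log2(K)), "extra bits of index")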
Best regards,
Niclas
Received on Tue Jan 04 2000 - 04:10:41 PST