RE: Renormalization

From: Niclas Thisell <niclas.domain.name.hidden>
Date: Tue, 4 Jan 2000 17:24:21 +0100

Marchal wrote:
> Hal Finney wrote:
>
> >Marchal <marchal.domain.name.hidden> writes:
> >> So I think Thisell is right when he said <<the number of
> >> computations in the Schmidhuber plenitude using an insanely high
> >> number of decimals is a lot higher than the ones that use a
> >> specific measurable but life-permitting precision>>.
> >
> >This is true as stated, but I claim that the programs which implement
> >the versions with insanely high decimal places are MUCH LARGER than
> >those which implement smaller numbers of decimal places. Hence they
> >have much lower measure and do not make a significant calculation.
>
> I don't agree. As you said yourself:
>
> > <<Crudely, a Fortran program using single precision
> > reals is about the same size as the same program using
> > double precision.>>
>
> And I add that similarly a Fortran program dovetailing on reals with
> arbitrarily high precision is about the same size as the same program
> using single precision. With a UTM using dynamic data structures, you
> don't even need to specify the needed precision. So the program using
> arbitrarily high precision is the shorter program. You don't need a
> busy beaver for generating vastly huge outputs; the little counting
> algorithm does it as well, though more slowly, but that is not
> relevant for the measure.
>
> We could have a doubt in the following situation. You have a "little
> program" P, using a "real" variable X and a real parameter K.
> And the program works only if K is given to a googol decimal places.
> And let us suppose that those googol decimals are incompressible.
> This is implausible for several reasons, but let us accept it for
> the sake of the argument.
>
> Now a description of P(X,K) is very big, and a description of P(X,Y)
> is very small. But a description Pdu(X,Y) of P(X,Y) dovetailing on
> all reals with arbitrary precision is still very small. So, slowly
> but surely, Pdu(X,Y) will compute the needed P(X,K) and its relevant
> continuations, with higher and higher insanely huge numbers of
> decimals, producing in the limit a continuous set of relevant
> continuations.
>
> You would be right in the case where a program needs a real parameter
> with a *fixed* but huge number of decimals. That is indeed equivalent
> to a huge program. But if the number of decimals needed is
> *arbitrary*, the program can be as small as, if not smaller than, any
> program using decimals with a precision fixed in advance.
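
To make the quoted size argument concrete: in a language with dynamic
data structures the precision is just an ordinary parameter, so a
program's text is essentially the same whether it computes with ten
decimals or with a googol. A rough sketch in Python rather than Fortran
(the little map being iterated is invented purely for the example):

  from decimal import Decimal, getcontext

  def iterate(x0, steps, digits):
      # The working precision is just data; the program text does not
      # grow when 'digits' is made insanely large.
      getcontext().prec = digits
      x = Decimal(x0)
      for _ in range(steps):
          x = (x * x + 1) / 3          # some fixed, arbitrary little map
      return x

  print(iterate("0.5", 100, 10))       # 10 decimals
  print(iterate("0.5", 100, 10000))    # 10000 decimals, same program text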

There is no reason to believe that K (representing a fundamental
constant, I presume) is not compressible. And, presuming the theory
looks like a linear differential equation, we could iterate the state
without loss of precision using rational numbers or even integers. The
problem arises when we need to calculate with or multiply by an
irrational number, like sqrt(2). It can, of course, be approximated by
a power series expansion or the like, but the series needs a cutoff. I
think Hal refers to this number - not the actual fundamental constant.
Of course, it too can be compressed, but I'm fairly sure his point is
that values around 5 are still much more likely than values around
100^100. (I agree, but I don't agree that this necessarily implies
that 'low' numbers dominate.)
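
To make both halves of that concrete (a toy sketch in Python; the
coefficients and the little map are made up for the example): a linear
iteration with rational coefficients never loses precision, whereas
anything involving sqrt(2) forces a cutoff - here the number of Newton
steps rather than power-series terms, but the point is the same.

  from fractions import Fraction

  # A toy linear iteration: with rational coefficients and a rational
  # start, every step is exact - no precision is ever lost.
  def iterate_exact(x, steps):
      for _ in range(steps):
          x = Fraction(3, 2) * x + Fraction(1, 7)
      return x

  print(iterate_exact(Fraction(1, 3), 50))    # still an exact rational

  # sqrt(2) has no such exact representation; computing it forces a
  # cutoff, here the number of Newton iterations.
  def sqrt2(cutoff):
      x = Fraction(1)
      for _ in range(cutoff):
          x = (x + 2 / x) / 2                 # exact rationals throughout,
      return x                                # but only an approximation

  print(float(sqrt2(5)))    # the chosen cutoff is the extra information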

Of course, you could write a dovetailing program that iterates through
all cut-offs simultaneously. And I have no doubt that _you_ could write
a program that calculates the universe with seemingly infinite
precision. The question is whether this is automatically given by the
Schmidhuber plenitude (i.e. including the measure given by your
dovetailer).
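
For what it's worth, a minimal sketch of such a dovetailer, in Python
for brevity (the one-step "physics" is of course just a stand-in - here
a Newton step toward sqrt(2) at the given cutoff):

  from decimal import Decimal, getcontext

  def step(state, digits):
      # Stand-in for one step of the simulated physics at a given cutoff:
      # a Newton step toward sqrt(2), computed with 'digits' decimals.
      getcontext().prec = digits
      return (state + 2 / state) / 2

  def dovetail(stages):
      states = {}                            # cutoff -> current state
      for n in range(1, stages + 1):         # stage n
          for digits in range(1, n + 1):     # advance every cutoff <= n
              state = states.get(digits, Decimal(1))
              states[digits] = step(state, digits)
      return states

  # Run forever (or here, for 20 stages) and every cutoff gets advanced
  # infinitely often; no cutoff is privileged, yet the program stays tiny.
  print(dovetail(20)[15])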

>
> >The universal dovetailer creates all possible universes, using a very
> >small program. By itself this does not tell you the relative measure
> >of the various universes. So this line of argument does not seem to
> >help in judging whether universes with high precision have lower
> >measure than universes with low precision. Yes, there are more of
> >the former, but they are of lower measure.
>
> I don't agree. The UD multiplies the executions by dovetailing them
> (even in an admittedly dumb and ugly way) on the reals.
> I'm not sure I understand you when you say "there are more of the
> former, but they are of lower measure". The measure is defined by the
> number of (infinite) computations. (This is linked to the old
> ASSA/RSSA question of course.)

I can sort of accept this point of view as well - i.e. I'm not willing
to discard it (especially since there are problems with the ASSA as
well). But I do think that the interpretation of a state of a particular
universe introduces difficulties. For instance, we can interpret a rock
as implementing a universal Turing machine. So any universe can be
interpreted as equivalent to any other universe.
Also, the evolution process is not very clear. Relativity teaches us
that it is usually better to think of time as just another dimension
with almost the same properties as the spatial dimensions. And
relativistic QM does indeed not treat time very differently from the
other axes. These infinite computations, on the other hand, more or
less assert an absolute time.

Best regards,
Niclas