
From: Marchal <marchal.domain.name.hidden>

Date: Tue Jan 4 05:58:52 2000

Hal Finney wrote:

> Marchal <marchal.domain.name.hidden> writes:
>
>> So I think Thisell is right when he said <<the number of computations in the
>> Schmidhuber plenitude using an insanely high number of decimals is a lot
>> higher than the ones that use a specific measurable but life-permitting
>> precision>>.
>
> This is true as stated, but I claim that the programs which implement the
> versions with insanely high decimal places are MUCH LARGER than those which
> implement smaller numbers of decimal places. Hence they have much lower
> measure and do not make a significant calculation.

I don't agree. As you said yourself:

> <<Crudely, a Fortran program using single precision
> reals is about the same size as the same program using double precision.>>

And I add that, similarly, a Fortran program dovetailing on reals with
arbitrarily big precision is about the same size as the same program using
single precision. With a UTM using dynamic data structures, you don't even
need to specify the needed precision. So the program using arbitrarily
great precision is the shorter program. You don't need a busy beaver for
generating vastly huge outputs; the little counting algorithm does it as
well, though more slowly, but that is not relevant for the measure.
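A minimal sketch of that point (in Python rather than Fortran, and with a rational number standing in for the real, both my own illustrative choices): the program below emits approximations at every precision from 1 digit upward, yet its description is no longer than a version fixed to a single precision. Only its running time grows with the precision reached.

```python
from fractions import Fraction

def dovetail_precisions(num, den, max_digits):
    """Emit decimal approximations of the rational num/den at every
    precision 1..max_digits.  The program's size does not grow with
    the precision it reaches -- only its running time does."""
    x = Fraction(num, den)
    for digits in range(1, max_digits + 1):
        truncated = int(x * 10**digits)       # keep `digits` decimals
        yield digits, Fraction(truncated, 10**digits)

# The same short program covers 1 digit and (in principle) a googol
# digits alike; the bound max_digits is only here to make the loop finite.
approx = dict(dovetail_precisions(1, 3, 5))
```

Here `approx[1]` is 3/10 and `approx[5]` is 33333/100000: successive truncations of 1/3, all produced by one small fixed description.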

A doubt could arise in the following situation. You have a "little
program" P, using a "real" variable X and a real parameter K, and the
program works only if K is given with a googol decimals. Let us suppose
that those googol decimals are incompressible. This is implausible for
some reasons, but let us accept it for the sake of the argument.

Now a description of P(X,K) is very big, while a description of P(X,Y)
is very small. But a description Pdu(X,Y) of P(X,Y) dovetailing on all
reals with arbitrary precision is still very small. So, slowly but
surely, Pdu(X,Y) will compute the needed P(X,K) and its relevant
continuations, with higher and higher insanely huge numbers of decimals,
producing in the limit a continuous set of relevant continuations.

You would be right in the case where a program needs a real parameter
with a *fixed* but huge number of decimals. That is indeed equivalent to
a huge program. But if the number of decimals needed is *arbitrary*, the
program can be as small as, if not smaller than, any program using
decimals with a precision fixed in advance.
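The contrast can be sketched as follows (Python; the body of P is a hypothetical stand-in, since the email never specifies what P computes). Hard-coding K to a googol incompressible digits would make the description of P(X,K) huge, while Pdu below, which dovetails P over every finite-precision truncation of K, stays small no matter how many digits some stage eventually requires.

```python
from fractions import Fraction
from itertools import count, islice

def P(x, k):
    """Hypothetical 'little program' with real variable x and
    real parameter k -- a stand-in body for illustration."""
    return x + k

def Pdu(x, k_num, k_den):
    """Dovetail P over truncations of K = k_num/k_den at every
    precision 1, 2, 3, ...  The description of Pdu is small and
    independent of how many digits of K any stage uses."""
    k = Fraction(k_num, k_den)
    for digits in count(1):
        k_approx = Fraction(int(k * 10**digits), 10**digits)
        yield digits, P(x, k_approx)

# First three stages for x = 1 and K = 1/7 (an arbitrary sample K):
stages = dict(islice(Pdu(Fraction(1), 1, 7), 3))
```

Each stage reruns P with one more digit of K, so the stage needing a googol digits is eventually reached by the same small description, just very late in the enumeration.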

> The universal dovetailer creates all possible universes, using a very
> small program. By itself this does not tell you the relative measure
> of the various universes. So this line of argument does not seem to
> help in judging whether universes with high precision have lower measure
> than universes with low precision. Yes, there are more of the former,
> but they are of lower measure.

I don't agree. The UD multiplies the executions by dovetailing them
(even in an admittedly dumb and ugly way) on the reals.

I'm not sure I understand you when you say "there are more of the former,
but they are of lower measure". The measure is defined by the number of
(infinite) computations. (This is linked to the old ASSA/RSSA question,
of course.)

Bruno

Received on Tue Jan 04 2000 - 05:58:52 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:06 PST