Juergen wrote (on 12th Oct):
> . . . In most possible futures your computer will
> vanish within the next second. But it does not. This indicates that our
> future is _not_ sampled from a uniform prior.
I don't wish to comment directly on the computer-vanishing problem as it
applies to Juergen's scheme (my own problem with this laudable scheme is
that it appears to be vulnerable to the same 'turtle-itis' criticism as
all theistic religions: the (literal or abstract GP) 'turtle' responsible
for the world seems to need another turtle to support it, and so on, so
there is no full explanation). I would, however, like to say that certain
other proposed solutions don't suffer from this computer-vanishing problem
(also known as the WR/dragon problem), provided one thinks of
infinite-length bit strings / formal system descriptions via the limit
n -> infinity, where n is the relevant string/description length (see
appendix below). It seems to me that in thinking in simple infinity terms
one can lose essential information: for example, the integers cannot be
determined to be more numerous than the odd numbers, since both are of
the lowest order of infinity, whereas this presents no problem for limit
analysis.
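As a quick illustration of that last point (a toy sketch of my own, in
Python - not part of the argument itself): by cardinality alone the odd
numbers and the positive integers are equinumerous, but the limit of the
proportion of odds among 1..n recovers their definite density of 1/2.

    # Toy sketch: naive cardinality cannot rank the odds against the
    # integers, but a limit analysis gives the odds a definite density.

    def odd_density(n):
        """Proportion of odd numbers among 1..n."""
        return sum(1 for k in range(1, n + 1) if k % 2 == 1) / n

    for n in (10, 1000, 100000):
        print(n, odd_density(n))    # approaches 0.5 as n -> infinity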
Alastair Malcolm
APPENDIX
One might naively think that, as there are at least hundreds of possible
states (call them V1, V2...) in which some part of our computer clearly
vanishes, and only one (N) in which normality prevails, then even if one
considers bit-string or other types of formal description involving other
variations in our universe, or indeed other universes, one could still
'divide through' to find that we are most likely to be in a universe where
our computer vanishes in whole or in part.
However, we note that in any minimal description of our universe (the
following argument does not depend on the minimal description being the
*only* one), deviations from the actual physical laws causing V1, V2...
will involve additional ad-hoc rules/events, so we can say that in
complexity terms (whether as a bit string or formal system description) we
will have V1 = N + VE1, V2 = N + VE2 etc, where VE1, VE2 etc are the extra
segments of description required to cater for the part-vanishings
(strictly: Length(V1) = Length(N) + Length(VE1) etc, but hopefully this is
clear).
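To make the length bookkeeping concrete, here is a minimal sketch in
Python, assuming a toy model in which descriptions are plain bit strings;
the particular bits standing for N, VE1 and VE2 are placeholders of my
own (real segments would of course be vastly longer).

    # Toy model: descriptions as concatenated bit strings. N stands for a
    # minimal description of our universe; VE1, VE2 for the extra ad-hoc
    # segments needed to produce the part-vanishings V1, V2.

    N   = "0110100111"    # placeholder minimal description
    VE1 = "1101"          # placeholder ad-hoc rule giving V1
    VE2 = "100110"        # placeholder ad-hoc rule giving V2

    V1 = N + VE1
    V2 = N + VE2

    # Length(V1) = Length(N) + Length(VE1), and likewise for V2
    assert len(V1) == len(N) + len(VE1)
    assert len(V2) == len(N) + len(VE2)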
Moreover, if we are considering all possible descriptions, we also have to
allow for extra descriptions corresponding to entities beyond our
observability. For each V = N + VE we will have many DC = N + DCE, where DC
is a 'don't care' description - it is a universe (or set of universes)
indistinguishable by us from N (our own, non-computer-vanishing one), yet
objectively different.
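A toy count makes the imbalance visible (same assumed bit-string model as
above; the two short 'vanishing rule' segments are placeholders, and
realistic ones would be far longer, making the imbalance far greater).

    # Toy count: fix an extra description length m beyond N. Continuations
    # that pass through one of the specific ad-hoc VE segments are V-type;
    # every other continuation is a DC ('don't care') description.

    m = 20                                  # extra bits beyond N (assumed)
    vanishing_rules = ("1101", "100110")    # placeholder VE segments
    # (the segments are prefix-incompatible, so nothing is counted twice)

    total    = 2 ** m
    v_count  = sum(2 ** (m - len(ve)) for ve in vanishing_rules)
    dc_count = total - v_count

    print(dc_count / v_count)    # DCs outnumber Vs, about 11.8 to 1 here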
Now, the key point is that in any objective descriptive framework (whether
by bit string or formal system), one should take the limit as the
description length increases to infinity, not as our (humanly biased)
visible universe is progressively and unobservably added to (say, by other
universes). As we do this, we find we are far more likely to be in a DC
(= N + DCE) universe than a V (= N + VE) universe: computers don't
normally vanish, in whole or in part.
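In the same toy model (again only a sketch, with placeholder segments),
the V-fraction settles to a definite value as the description length
grows, even though the V and DC sets both become countably infinite -
exactly the situation where naive cardinality fails but limit analysis
does not.

    # Toy limit: the fraction of continuations of N that pass through an
    # ad-hoc VE segment is the sum of 2**-Length(VE) over the rules, and
    # is independent of the total description length.

    def v_fraction(extra_bits, rules=("1101", "100110")):
        """Fraction of length-(extra_bits) continuations of N of V type."""
        total = 2 ** extra_bits
        v = sum(2 ** (extra_bits - len(ve)) for ve in rules)
        return v / total

    for extra in (10, 20, 40, 80):
        print(extra, v_fraction(extra))    # constant 0.078125; DCs dominate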
More details at:
http://www.physica.freeserve.co.uk/p105.htm
linking to:
http://www.physica.freeserve.co.uk/p111.htm
http://www.physica.freeserve.co.uk/p112.htm