What exactly is a White Rabbit (Wabbit for short),
in the technical sense we want to use? I've stated
that it's an extra bit of complexity in the laws of
physics. But how can we distinguish that from the
apparent complexity that is merely due to being in one
of many worlds?
Well, one way is to be a physicist. Physicists
like to distinguish between general theories and
particular data sets. Again, how precise is that? A
borderline case seems to be the constants of nature,
which appear to us to be real numbers with infinite
Kolmogorov complexity. Are the general laws infinitely complex, then? No: as MWIers, we just say that the whole set of all possible "constants" exists, making the whole thing not very complex.
But we don't want to invoke the MWI too often, or
we will end up defining all possible observed
complexity out of existence. We want to be able to
distinguish wabbits, if only to show that the
statement "the AUH implies the statistical lack of
wabbits" means something.
I will consider a bitstring, b. If it has very high K-complexity, it is random; it must be the product of an MWI process, and therefore indicates low complexity in the "laws of physics" that produced it. If it has very low K-complexity, it is nonrandom, and must be the product of simple "laws of physics" in operation, where we don't need to invoke the MWI. If K(b) is medium, b might just be a random segment appended to a simple segment.
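(To make that three-way split concrete, here is a rough Python sketch. It is only illustrative: zlib compression stands in for the uncomputable K, and the thresholds are arbitrary choices of mine.)

    import random
    import zlib

    def k_proxy(bits):
        """Crude stand-in for K(b): length of the zlib-compressed string,
        after packing the 0/1 characters into real bytes so the 8-to-1
        character encoding doesn't hide randomness."""
        packed = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
        return len(zlib.compress(packed, 9))

    def classify(bits):
        """Three-way call on b, with arbitrary illustrative thresholds."""
        ratio = k_proxy(bits) / ((len(bits) + 7) // 8)
        if ratio > 0.9:
            return "random - plausibly an MWI product of simple laws"
        if ratio < 0.3:
            return "simple - simple laws, no need to invoke the MWI"
        return "medium - maybe a random segment appended to a simple one"

    simple_b = "01" * 512
    random_b = "".join(random.choice("01") for _ in range(1024))
    print(classify(simple_b))               # simple
    print(classify(random_b))               # random
    print(classify(simple_b + random_b))    # medium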
Clearly K-complexity is not good enough to find wabbits. We could turn to the Santa Fe Institute and see what they have to say about complexity, but what I want is an algorithm.
One way to get at what I want is to consider using induction on a string. Suppose we use induction, starting as usual with the ensemble of all programs having uniform Bayesian probability, to continue a finite string. We will obtain a probability distribution p(r) over continuations r of the string.
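Here is a toy sketch of that induction step. Everything about the "machine" is an assumption of mine, made only so the thing can run: a program is just a bit pattern repeated cyclically, weighted 2^-length as a computable stand-in for the full ensemble of programs.

    from itertools import product

    def toy_run(program, length):
        """Toy 'machine', assumed purely for illustration: the program
        is a bit pattern repeated cyclically out to the given length."""
        return (program * (length // len(program) + 1))[:length]

    def ensemble(max_len):
        """All toy programs up to max_len bits, weighted 2^-length
        (a computable stand-in for the ensemble of all programs)."""
        for L in range(1, max_len + 1):
            for bits in product("01", repeat=L):
                yield 2.0 ** -L, "".join(bits)

    def continuation_dist(b, k, max_len=12):
        """p(r): induced distribution over length-k continuations r of b,
        mixing every toy program whose output begins with b."""
        dist = {}
        for w, prog in ensemble(max_len):
            out = toy_run(prog, len(b) + k)
            if out.startswith(b):
                r = out[len(b):]
                dist[r] = dist.get(r, 0.0) + w
        total = sum(dist.values())
        if not total:
            return {}          # no toy program is consistent with b
        return {r: p / total for r, p in dist.items()}

    print(continuation_dist("0101", 3))   # mass concentrates on "010"-like r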
If b was random, the continuation r will probably also be random, but we now lose the information specifying b exactly. If b was simple, the continuation will probably also be simple. In both cases, consider
R = sum_r p(r) p0(r),
where p0(r) is the probability distribution of continuations for the empty string, i.e., the universal distribution.
Now R will be fairly large in either the random or
the simple case. But suppose b consists of 100 interwoven sequences, each consisting of the digits of some slightly complex number like pi or sqrt(3). In that case the
induced continuation is likely to continue those
patterns, and therefore will have small R. One can
thus use -log R as a kind of complexity.
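Continuing the toy sketch above (again, only a sketch under the same toy-machine assumption), R and -log R would look like:

    import math

    def neg_log_R(b, k=4, max_len=12):
        """-log R with R = sum_r p(r) p0(r), where p0 is the continuation
        distribution of the empty string, i.e. the toy universal
        distribution.  Reuses continuation_dist from the sketch above."""
        p = continuation_dist(b, k, max_len)
        p0 = continuation_dist("", k, max_len)
        R = sum(pr * p0.get(r, 0.0) for r, pr in p.items())
        return -math.log2(R) if R > 0 else float("inf")

    print(neg_log_R("01010101"))   # regular prefix: typical continuations
    print(neg_log_R("01101001"))   # less regular prefix: expect a larger value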
That seems OK if the string b is fairly uniform, but problems arise if it consists of an isolated region of complicated stuff surrounded by long stretches of random bits. The induced continuations would then likely be mostly random, losing the information about the special region. One could try averaging over all truncated segments of b, but the average would approach the value for random strings. Summing instead of averaging would yield something roughly proportional to the length of b, which is not what I want.
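For what it's worth, the two aggregation schemes in the toy setup (reusing neg_log_R from the previous sketch):

    def truncation_scores(b, k=3, max_len=12):
        """-log R for every truncation b[:n] of b."""
        return [neg_log_R(b[:n], k, max_len) for n in range(1, len(b) + 1)]

    scores = truncation_scores("01100110")
    print(sum(scores) / len(scores))   # average over all truncations
    print(sum(scores))                 # sum, which scales with len(b)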
One possibility is to consider the measure of
strings formed by interleaving such continuations.
Let r_n be the induced continuation r for the string
truncated at the nth bit. Let v be the interleaved string consisting of bits drawn from the r_n. Let
T = -log (sum_v p(v) p0(v)).
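"Bits drawn from the r_n" is underspecified, so here is one reading, assumed only for concreteness: take the modal continuation r_n of each truncation and interleave their bits round-robin. (Using just the modal r_n collapses p(v) to a point mass; the real T sums over the whole distribution of interleavings v.)

    def interleave_v(b, k=3, max_len=12):
        """Build v from the most probable continuation r_n of each
        truncation b[:n], taking one bit from each r_n in turn.
        Reuses continuation_dist from the sketch above."""
        rs = []
        for n in range(1, len(b) + 1):
            d = continuation_dist(b[:n], k, max_len)
            rs.append(max(d, key=d.get))        # modal r_n
        return "".join(r[i] for i in range(k) for r in rs)

    print(interleave_v("010110"))   # a candidate v to score against p0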
I'm just thinking out loud here. What do you
think?
=====
- - - - - - -
Jacques Mallah (jackmallah.domain.name.hidden)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://hammer.prohosting.com/~mathmind/