Jacques Mallah wrote:
>
> ...But we don't want to invoke the MWI too often, or
> we will end up defining all possible observed
> complexity out of existence.
I agree; we should try to simplify/explain as much as possible, though
this task may never be completed.
> We want to be able to
> distinguish wabbits, if only to show that the
> statement "the AUH implies the statistical lack of
> wabbits" means something.
> I will consider a bitstring, b. If it has very
> high K-complexity, it is random; it must be the
> product of an MWI process, and therefore indicates low
> complexity in the "laws of physics" that produced it.
> If it has very low K-complexity, it is nonrandom,
> and must be the product of simple "laws of physics" in
> operation, where we don't need to invoke the MWI. If
> K(b) is medium, b might just be a random segment
> appended to a simple segment.
> Clearly K-complexity is not good enough to find
> wabbits. We could try turning to the Santa Fe
> Institute and see what they have to say about
> complexity, but what I want is an algorithm.
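Just to make that classification concrete: K itself is uncomputable, so
the sketch below substitutes zlib's compressed length as a crude
computable upper bound on K(b). The cutoff thresholds are illustrative
assumptions, nothing more.

    import zlib

    def classify(b: bytes, lo: float = 0.3, hi: float = 0.95) -> str:
        """Classify raw bytes by compression ratio, a crude proxy for K(b)/|b|."""
        ratio = len(zlib.compress(b, 9)) / len(b)
        if ratio >= hi:
            return "random"   # near-incompressible: candidate MWI output
        if ratio <= lo:
            return "simple"   # highly compressible: simple "laws" suffice
        return "medium"       # e.g. a random segment appended to a simple one
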
> One way to try to get at what I want is to
> consider the use of induction on a string. Suppose we
> use induction, starting as usual with the ensemble of
> all programs having uniform Bayesian probability, to
> continue a finite string. We will obtain a
> probability distribution of continuations p(r) for the
> string.
> If b was random, the continuation r will probably
> also be random, but now we lose the
> information about the exact specification of b. If b
> was simple, the continuation will probably also be
> simple.
> In both cases, consider R = sum_r p(r) p0(r)
> where p0(r) is the probability distribution of
> continuations for the empty string, i.e., the universal
> distribution.
So, wouldn't p(r,b) = p0(b+r), with + denoting concatenation? Does this
correspond to your 'p(r)'? Then p(r,b) is larger when r and b are both
random or both simple.
If b is simple but r is random, that is probably your wabbit. The
remaining case (b random, r simple) is order out of chaos, which also
seems unlikely.
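To make R computable enough to play with, here is a toy sketch: the
ensemble of all programs is replaced by a tiny hypothetical mixture of
Bernoulli coins, so p0 below is only an assumed stand-in for the
universal distribution, and the continuation length m is an arbitrary
choice. It uses exactly the p(r|b) = p0(b+r)/p0(b) identity above.

    from itertools import product

    COINS = [0.1, 0.3, 0.5, 0.7, 0.9]        # hypothetical model class
    PRIOR = [1.0 / len(COINS)] * len(COINS)  # uniform prior over it

    def likelihood(s, q):
        """P(bitstring s | coin bias q)."""
        return q ** s.count("1") * (1 - q) ** s.count("0")

    def p0(s):
        """Toy stand-in for the universal distribution p0."""
        return sum(w * likelihood(s, q) for w, q in zip(PRIOR, COINS))

    def p(r, b):
        """p(r | b) = p0(b+r) / p0(b): induced distribution of continuations."""
        return p0(b + r) / p0(b)

    def R(b, m=6):
        """R = sum over length-m continuations r of p(r | b) * p0(r)."""
        return sum(p(r, b) * p0(r) for r in map("".join, product("01", repeat=m)))
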
> Now R will be fairly large in either the random or
> the simple case. But suppose b consists of 100
> sequences, interwoven, each digits of some slightly
> complex number like pi or sqrt(3). In that case the
> induced continuation is likely to continue those
> patterns, and therefore will have small R. One can
> thus use -log R as a kind of complexity.
> That seems OK if the string b is fairly uniform,
> but problems seem to happen if it consists of an
> isolated region with complicated stuff, surrounded by
> long stretches of random bits. It is likely that the
> induced continuations would be mostly random, losing
> the information about the special region. One could
> try averaging over all truncated segments of b, but
> the average would approach the value for random
> strings. Summing instead of averaging would result in
> something roughly proportional to the length of
> b, which is not what I want.
R = sum_r p0(b+r)*p0(r) or a similar function might work. The special
information about b would then be contained in p0(b+r).
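Sketched with the same toy p0 as above (still just an assumed stand-in
for the universal distribution), this variant keeps b's own prior
probability inside the sum:

    from itertools import product

    def R_variant(b, m=6):
        """R' = sum over length-m r of p0(b+r) * p0(r); reuses p0 above."""
        return sum(p0(b + r) * p0(r) for r in map("".join, product("01", repeat=m)))

    # -log R_variant(b) then serves as the proposed complexity-like score.
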
> One possibility is to consider the measure of
> strings formed by interleaving such continuations.
> Let r_n be the induced continuation r for the string
> truncated at the nth bit. Let v be the interleaved
> string consisting of bits from these continuations. Let
> T = - log (sum_v p(v) p0(v)).
> I'm just thinking out loud here. What do you
> think?
There is no trivial representation of p0(b) in terms of its possible
truncated segments, is there? You would have to capture all possible
relationships between the segments, etc., right?
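For what it's worth, here is a crude sketch of the interleaving, again
reusing the toy p0 from above. It takes only one greedy continuation
r_n per truncation and scores the single interleaved string v with
-log p0(v), which is only a stand-in for the full sum over all v, not
the sum itself.

    import math

    def best_continuation(b, m):
        """Greedy length-m continuation maximizing p0 bit by bit."""
        r = ""
        for _ in range(m):
            r += max("01", key=lambda c: p0(b + r + c))
        return r

    def T(b, m=4):
        """Stand-in for T = -log(sum_v p(v) p0(v)) via one representative v."""
        conts = [best_continuation(b[:n], m) for n in range(1, len(b) + 1)]
        v = "".join(c[i] for i in range(m) for c in conts)  # interleave bits
        return -math.log(p0(v))
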
>
>
> =====
> - - - - - - -
> Jacques Mallah (jackmallah.domain.name.hidden)
> Physicist / Many Worlder / Devil's Advocate
> "I know what no one else knows" - 'Runaway Train', Soul Asylum
> My URL: http://hammer.prohosting.com/~mathmind/
Received on Wed May 17 2000 - 20:54:04 PDT