Wei Dai's theory

From: Russell Standish <r.standish.domain.name.hidden>
Date: Mon, 6 Jun 2005 11:19:54 +1000

I remembered Wei Dai posting on this topic in the early days of this
list, and indeed some of his postings influenced my "Why Occam's
Razor" paper. However, I do not recall his suggestions as being as
detailed as what you describe here. Do you have a reference to where
this might be written up? I'm also intrigued by the possibility of
demonstrating that transhumanist observer moments would have
substantially less measure than human observer moments. Such a result
would be a transhumanist counter to the Doomsday argument, of course.

Cheers

On Fri, Jun 03, 2005 at 11:10:15AM -0700, "Hal Finney" wrote:

...
>
> Years ago Wei Dai on this list suggested a better approach. He proposed
> a formula for determining how much of a universe's measure contributes to
> an OM that it instantiates. It is very specific and also illustrates
> some problems in the rather loose discussion so far. For example,
> what does it really mean to instantiate an OM? How would we know if a
> universe is really instantiating a particular OM? Aren't there fuzzy
> cases where a universe is only "sort of" instantiating one? What about
> the longstanding problem that you can look at the atomic vibrations in
> a crystal, select a subset of them to pay attention to, and have that
> pattern match the pattern of any given OM? Does this mean that every
> crystal instantiates every OM? (Hans Moravec sometimes seems to say yes!)
>
> To apply Wei's method, first we need to get serious about what is an OM.
> We need a formal model and description of a particular OM. Consider, for
> example, someone's brain when he is having a particular experience. He is
> eating chocolate ice cream while listening to Beethoven's 5th symphony,
> on his 30th birthday. Imagine that we could scan his brain with advanced
> technology and record his neural activity. Imagine further that with the
> aid of an advanced brain model we are able to prune out the unnecessary
> information and distill this to the essence of the experience. We come
> up with a pattern that represents that observer moment. Any system which
> instantiates that pattern genuinely creates an experience of that observer
> moment. This pattern is something that can be specified, recorded and
> written down in some form. It probably involves a huge volume of data.
>
> So, now that we have a handle on what a particular OM is, we can more
> reasonably ask whether a universe instantiates it. It comes down to
> whether it produces and contains that particular pattern. But this may
> not be such an easy question. It could be that the "raw" output format of
> a universe program does not lend itself to seeing larger scale patterns.
> For example, in our own universe, the raw output would probably be at
> the level of the Planck scale, far, far smaller than an atomic nucleus.
> At that level, even a single brain neuron would be the size of a galaxy.
> And the time for enough neural firings to occur to make up a noticeable
> conscious experience would be like the entire age of the universe.
> It will take considerable interpretation of the raw output of our
> universe's program to detect the faint traces of an observer moment.
>
> And as noted above, an over-aggressive attempt to hunt out observer
> moments will find false positives, random patterns which, if we are
> selective enough, happen to match what we are looking for.
>
> Wei proposed to solve both of these problems by introducing an
> interpretation program. It would take as its input the output of the
> universe-creation program. It would then output the observer moment in
> whatever formal specification format we had decided on (the exact format
> will not be significant).
>
> So how would this program work, in the case of our universe? It would
> have encoded in it the location in space and time of the brain which
> was experiencing the OM. It would know the size of the brain and the
> spatial distribution of its neurons. And it would know the faint traces
> and changes at the Planck scale that would correspond to neural firings
> or pauses. Based on this information, which is encoded into the program,
> it would run and output the results. And that output would then match
> the formal encoding of the OM.
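>
> As a loose sketch of the shape such a program might take (the names and
> fields below are invented for illustration, not Wei's actual construction):
>
>     # Hypothetical skeleton of an interpretation program; the names and
>     # fields are made up for illustration.
>     from dataclasses import dataclass
>
>     @dataclass
>     class InterpreterSpec:
>         spacetime_location: tuple  # where and when the brain sits in the raw output
>         brain_geometry: bytes      # its size and the layout of its neurons
>         firing_signature: bytes    # how neural firings show up at the Planck scale
>
>     def interpret(raw_universe_output: bytes, spec: InterpreterSpec) -> bytes:
>         """Scan the raw output around spec.spacetime_location and emit the OM
>         in the agreed formal format (left unimplemented in this sketch)."""
>         raise NotImplementedError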
>
> Now, Wei applies the same kind of reasoning that we do for the measure
> of the Schmidhuber ensemble itself. He proposes that the size of the
> interpretation program should determine how much of the universe's measure
> contributes to the OM. If the interpretation program is relatively small,
> that is evidence that the universe is making a strong contribution to
> the OM. But if the interpretation program is huge, then we would say
> that little of the universe's measure should go into the OM.
>
> In the most extreme case, the interpretation program could just encode the
> OM within itself, ignore the universe state and output that data pattern.
> In effect that is what would have to be done in order to find an OM
> within a crystal as described above. You'd have to have the whole OM
> state in the program since the crystal doesn't actually have any real
> relationship to the OM. But that would be an enormous interpretation
> program, which would deliver only a trivial measure.
>
> For a universe like our own, the hope and expectation is that the
> interpretation program will be relatively small. Such a program takes
> the entire universe as input and outputs a particular OM. I did some
> back-of-the-envelope calculations and, perhaps surprisingly, I estimate
> that such a program could be less than 1000 bits in size.
> (This is assuming the universe is roughly as big as what is visible, and
> neglecting the MWI.) Compared to the information in an OM, which I can't
> even guess but will surely be at least gigabytes, this is insignificant.
> Therefore we do have strong grounds to say that the universe which
> appears real is in fact making a major contribution to our OMs.
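>
> For what it's worth, here is the rough shape of that back-of-the-envelope
> calculation, with round order-of-magnitude figures (none of the constants
> matter much; the point is that the total comes out tiny compared to the
> OM itself):
>
>     import math
>
>     # Round figures; only the orders of magnitude matter.
>     VOLUME_VISIBLE_UNIVERSE = 4e80    # cubic metres
>     VOLUME_BRAIN            = 1.4e-3  # cubic metres
>     AGE_OF_UNIVERSE         = 4e17    # seconds
>     EXPERIENCE_LENGTH       = 0.1     # seconds, a noticeable OM
>
>     space_bits = math.log2(VOLUME_VISIBLE_UNIVERSE / VOLUME_BRAIN)   # ~277
>     time_bits  = math.log2(AGE_OF_UNIVERSE / EXPERIENCE_LENGTH)      # ~62
>
>     # ~277 + ~62 = ~339 bits to point at one brain-sized, experience-length
>     # region of spacetime, leaving room to spare under 1000 bits for the
>     # decoding logic itself.
>     print(round(space_bits), round(time_bits), round(space_bits + time_bits))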
>
> To be specific, Wei's idea was to count the measure of a universe's
> contribution to an OM as 1/2^(n+m), where n is the size of the program
> that creates the universe, and m is the size of the interpretation
> program that reads the output of the first program, and outputs the OM
> specification from that. In effect, you can think of the two programs
> together as a single program which outputs the formal spec of the OM,
> and ask what are the shortest ways to do that. In this way you can
> actually calculate the measure of an OM directly without even looking at
> the intermediate step of calculating a universe. But I prefer thinking of
> the two step method as it gives us a handle on such concepts as whether
> we are living in the Matrix or as a brain in a vat.
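>
> A minimal sketch of that bookkeeping, with made-up program sizes standing
> in for n and m (working in log space to avoid underflow):
>
>     def log2_measure(n_universe_bits, m_interpreter_bits):
>         # log2 of the contribution 1/2^(n+m); closer to zero means more measure.
>         return -(n_universe_bits + m_interpreter_bits)
>
>     # A short physics program plus a ~1000-bit "find the brain" interpreter...
>     via_universe   = log2_measure(n_universe_bits=500, m_interpreter_bits=1000)
>
>     # ...versus an interpreter that ignores the universe and hard-codes the
>     # OM (the crystal case): m must then include the whole OM, gigabytes.
>     via_hardcoding = log2_measure(n_universe_bits=500, m_interpreter_bits=8e10)
>
>     print(via_universe > via_hardcoding)   # True: the genuine route dominates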
>
> Overall I think this is a very attractive formulation. It's quantitative,
> and it gives the intuitively right answer for many cases. The counting
> program (which simply enumerates every possible bit string, and so
> "outputs" every OM) contributes effectively no measure, because the only
> way we can find an OM there is by encoding the whole thing in the
> interpretation program.
> And as another example, if there are multiple OMs instantiated by a
> particular universe, that will allow the interpretation program to be
> smaller because less information is needed to localize an OM. It also
> implies that small universes will devote more of their measure to OMs
> that they instantiate than large ones, which basically makes sense.
>
> There are a few unintuitive consequences, though, such as that large
> instantiations of OMs will have more measure than small ones, and likewise
> slow ones will have more measure than fast ones. This is because in each
> case the interpretation program can be smaller when the OM is easier to
> find in the vastness of a universe, and the bigger and slower an OM is,
> the easier it is to find. I am inclined to tentatively accept these results.
> It does imply that the extreme future vision of some transhumanists,
> to upload themselves to super-fast, super-small computers, may greatly
> reduce their measure, which would mean that it would be like taking a
> large chance of dying.
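>
> In the same rough terms as above (placeholder figures again), the locating
> cost grows by about one bit for every factor-of-two shrink in the OM's size
> or duration, and every extra bit halves the measure:
>
>     import math
>
>     def locating_bits(universe_volume, om_volume, universe_duration, om_duration):
>         # Bits to single out one OM-sized, OM-length slot of spacetime.
>         return (math.log2(universe_volume / om_volume)
>                 + math.log2(universe_duration / om_duration))
>
>     # A biological brain versus a hypothetical upload a million times
>     # smaller, running a million times faster (purely illustrative).
>     biological = locating_bits(4e80, 1.4e-3, 4e17, 0.1)
>     uploaded   = locating_bits(4e80, 1.4e-9, 4e17, 1e-7)
>
>     print(round(uploaded - biological))   # ~40 extra bits, ~2^40 less measure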
>
> There is one big problem with the approach, though, which I have not yet
> solved. I wrote above that a very short program could localize a given OM
> within our universe. It only takes ~300 bits to locate a brain (i.e. a
> brain-sized piece of space)! However this neglects the MWI. If we take
> as our universe-model a world governed by the MWI, it is exponentially
> larger than what we see as the visible universe. Every decoherence-time,
> the universe splits. That's like picoseconds, or nanoseconds at best.
> The number of splittings since the universe was created is vast, and
> the size of the universe is like 2 (or more!) to that power.
>
> Providing the information to localize a particular OM within the vastness
> of a universe governed by the MWI appears to be truly intractable.
> Granted, we don't necessarily have to narrow it down to an exact branch,
> but unless there are tremendous amounts of de facto convergent evolution
> after splits, it seems to me that the fraction of quantum space-time
> occupied by a given OM is far smaller than the 1/2^1000 I would estimate
> in a non-MWI universe. It's more like 1/2^2^100. At that rate the
> interpretation program to find an OM would be much *bigger* than the one
> that just hard-codes the OM itself. In short, it would appear that an MWI
> universe cannot contribute significant measure to an OM, under this model.
> That's a serious problem.
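>
> The rough arithmetic behind that estimate (round numbers, and charging
> about one bit per splitting to specify a branch):
>
>     import math
>
>     AGE_OF_UNIVERSE  = 4e17    # seconds
>     DECOHERENCE_TIME = 1e-12   # seconds; picoseconds, as above
>
>     splittings  = AGE_OF_UNIVERSE / DECOHERENCE_TIME   # ~4e29 branchings
>     branch_bits = splittings                            # ~1 bit per splitting
>     om_bits     = 8e10                                  # "at least gigabytes"
>
>     print(math.log2(splittings))   # ~98, hence the 1/2^2^100 ballpark
>     print(branch_bits > om_bits)   # True: naming a branch costs vastly more
>                                    # than hard-coding the whole OM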
>
> So there are a couple of possible solutions to this problem. One is to
> reject the MWI on these grounds. That's not too attractive; this line of
> argument is awfully speculative for such a conclusion. Also, creating a
> program for a non-MWI universe requires a random number generator, which
> is an ugly kludge and implies that quantum randomness is algorithmic
> (pseudo-random) rather than truly random, a bizarre result. A more
> hopeful possibility is that
> there will turn out to be structure in the MWI phase space that will
> allow us to localize OMs much more easily than the brute-force method
> I assumed above. I have only the barest speculations about how that
> might work, to which I need to give more thought.
>
> But even with this problem, I think the overall formulation is the
> best I have seen in terms of grappling with the reality of a multiverse
> and addressing the issue of where we as observers fit into the greater
> structure. It provides a quantitative and approximable measure which
> allows us to calculate, in principle, how much of our reality is as it
> appears and how much is an illusion. It answers questions like whether
> copies contribute to measure. And it provides some interesting and
> surprising predictions about how various changes to the substrate
> of intelligence (uploading to computers, etc.) may change measure.
> In general I think Wei Dai's approach is the best foundation for
> understanding the place of observers within the multiverse.
>
> Hal Finney



