Re: Wei Dai's theory

From: Hal Finney <>
Date: Sun, 5 Jun 2005 20:01:21 -0700 (PDT)

Russell Standish writes:
> I remembered Wei Dai posting on this topic in the early days of this
> list, and indeed some of his postings influenced my "Why Occam's
> Razor" paper. However, I do not recall his suggestions as being as
> detailed as what you describe here. Do you have a reference to where
> this might be written up? I'm also intrigued by the possibility of
> demonstrating that transhumanist observer moments would have
> substantially less measure than human observer moments. Such a result
> would be a transhumanist counter to the Doomsday argument of course.

Well, I tend to be a lot more long-winded than Wei. He did not write the
idea up formally; it was just something he proposed in the context of
one of our discussions on the list. I don't know whether he even believes
it now.

He proposed it in the context of the thread on "consciousness based on
information or computation?" in January 1999, which I will take
the liberty of quoting here:

Wei Dai wrote:

: Let me be more specific and precise about my proposal. I propose that the
: measure of a conscious experience is related to the measure of the
: associated state information, and take this measure to be the universal a
: priori distribution.
:
: The universal a priori probability of a string is inversely related to the
: length of the shortest program that outputs that string (the distribution
: actually takes into account all programs, but the shortest ones contribute
: most to the distribution). Now take an AI running on some computer, and
: consider its state at some given time. The shortest program (P1) that
: produces this state as output probably consists of two parts. The first
: part of the program simulates the physical universe (which let's say is a
: newtonian universe) which contains the computer running the AI. The second
: part of the program extracts the AI's state from this simulation.
:
: Now if the *memory* elements containing the AI's state were doubled in
: size, that should allow the second part of the program to be shorter,
: since it would take less information to "find" the AI's state in the
: wavefunction simulation. The smaller program size implies a larger measure
: of the state.
:
: If the AI were simultaneously running on two computers, there would be two
: shortest programs that produce the state as output (they would be
: identical in the simulation part but slightly different in the extraction
: part), and these two programs together would make twice as much
: contribution to the universal a priori distribution as P1, and again the
: measure of the state would be increased.
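Wei's proposal reduces to simple arithmetic on program lengths, since each
program of length L bits contributes 2^-L to the universal a priori
probability of its output. Here is a toy sketch of that arithmetic; the
particular lengths (L_sim, L_extract) are hypothetical numbers chosen only
for illustration, not anything from Wei's post:

```python
from fractions import Fraction

def contribution(program_length_bits):
    # Each program of length L contributes 2^-L to the universal
    # a priori probability of the string it outputs.
    return Fraction(1, 2 ** program_length_bits)

# Hypothetical split of the shortest program P1 into its two parts:
L_sim = 1000      # part 1: simulate the (newtonian) universe
L_extract = 200   # part 2: locate and extract the AI's state
P1 = contribution(L_sim + L_extract)

# Doubling the memory elements holding the state lets the extraction
# part be shorter -- shaving even 1 bit exactly doubles the measure:
P1_bigger_memory = contribution(L_sim + (L_extract - 1))
assert P1_bigger_memory == 2 * P1

# Running the AI on two computers gives two distinct shortest programs
# (same simulation part, slightly different extraction parts), which
# together contribute twice as much as P1 alone:
P_two_copies = 2 * contribution(L_sim + L_extract)
assert P_two_copies == 2 * P1
```

The point of the sketch is just that "size" and "number of copies" both act
on the measure through program length, in the way the quoted paragraphs
describe.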

This came out of a discussion in which I claimed it was obvious that
the size of an implementation should not matter with regard to its
contribution to measure, and from this I concluded via a thought
experiment that the number of copies shouldn't matter either(!), which
causes some problems. Wei then challenged the assumption that "size
doesn't matter" and followed up with this detailed proposal, which I
eventually
came to like very much. The part about speed also mattering was my own
addition, but it is a pretty obvious corollary as speed is just a matter
of "size in time".

Hal Finney
Received on Sun Jun 05 2005 - 23:52:47 PDT
