Re: Anthropic Principle

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Tue, 06 Nov 2001 20:02:25 -0500

>From: Russell Standish <R.Standish.domain.name.hidden>
>To: everything-list.domain.name.hidden
>Subject: Anthropic Principle
>Date: Wed, 7 Nov 2001 09:39:13 +1100 (EST)
>
>My mind was ticking over a comment made by Wei Dai in his last
>post. I had a thought, that I'll share with this group, even though it
>might be completely crazy:
>
>What if there were an absolute measure that favours increased
>complexity of the mind (i.e. more complex minds are more likely), along
>the lines Wei Dai suggested in his last post. Also, it seems likely
>that at a certain level of complexity, minds become self-reflective
>(observe themselves). By my observer-dependent measure arguments,
>each self-reflective mind (regardless of complexity) will observe a
>universe containing a mind that is just complex enough for
>self-reflection.
>
>This would neatly explain why we're not ants (minds are too simple),
>and why we're not superhumans (we may be, but we'll only observe
>ourselves as minimally complex self-reflective minds).
>
>Crazy perhaps, but hopefully food for thought.

Aha! You've hit on something similar to my own pet TOE, which is that the
global measure on observer-moments can somehow be derived from a sort of
"theory of the anthropic principle", one that would have to blend in with
some type of formal theory of consciousness, a bit like what the
philosopher David Chalmers proposes. Here is a post I wrote on this in
September:

http://www.escribe.com/science/theory/m3143.html

A section from this post (the arithmetic is sketched in code below the quote):

>I'm not sure it's possible to take a third-person perspective on the
>self-sampling assumption. For one thing, the reasoning only works if I
>assume *my* observer-moment is randomly selected--I can't use anyone else's
>or I may get incorrect results, as if I reasoned from Adam and Eve's point
>of view in the doomsday argument. Then there is what Nick Bostrom calls the
>"problem of the reference class," and I think there is a very good case to
>be made that the problem can only be solved by making reference to some
>sort of objective measure of the "consciousness level" of a particular
>observer-moment. For example, suppose I find that I was created as one of
>two "batches" of humans, the first batch containing 950 members and the
>second containing only 50. One batch is all-male and the other is
>all-female, and I know that which batch is which sex was determined by a
>coinflip, so that a third-person observer would say there is a 50% chance
>that the large batch is the male batch. However, since I observe myself to
>be a male, I use the self-sampling assumption to reason that there is
>actually a 95% chance that the large batch was all-male, simply because I'm
>assuming that I'm as likely to be any human as any other, and 95% of the
>humans were members of the large batch.
>
>All right so far. But suppose I now find out that one of the two batches
>was genetically engineered to lack a brain, having no consciousness
>whatsoever? Or, if you prefer, suppose that one batch is not made of real
>humans at all, but just marble statues of them. Can I still assume I am as
>likely to be any individual as any other? I don't think so--I think some
>kind of "anthropic principle" must come into play here, guaranteeing that I
>will be a conscious individual rather than an unconscious one. So, even if
>the small batch was the all-male one, I would still be guaranteed to find
>myself a male, simply because all the "females" in this experiment are
>going to totally lack consciousness. It's just the same as how I can't
>assume I'm "randomly sampled" from the set of all computations being
>implemented in the universe, because the vast majority of computations,
>associated with things like the random collisions of gas molecules in a
>nebula, will not lead to any high-level conscious experiences...only the
>very rare ones associated with biological brains which have evolved on
>planets with just the right conditions will have these sorts of experiences
>(or so I'd assume, anyway). Again, the anthropic principle must be taken
>into account.
>
>I think this should be a matter of degree, rather than an all-or-nothing
>affair. I think other animals, at least other mammals and birds, almost
>certainly have some kind of high-level conscious experience, so there is
>"something it is like" to be them, but I don't think I should reason as if
>I was randomly sampled from the set of all these animals, either. Indeed, I
>don't think it's just a lucky break that I find myself to be a member of
>what is probably the most intelligent species that has ever existed on
>planet Earth, despite the fact that the number of animals who have ever
>lived probably vastly outweighs the number of Homo sapiens who have ever
>lived. I think some sort of graded anthropic principle is likely to be
>responsible here...the usual self-sampling assumption perhaps needs to be
>replaced by some kind of weighted self-sampling assumption, with the
>"weights" on an observer-moment having something to do with the complexity
>of the consciousness involved. Indeed, I think it would be particularly
>elegant if the whole global measure function turned out to be nothing but
>this sort of weighted self-sampling assumption, although the weights would
>probably have to be determined by more than *just* the level of
>consciousness (after all, an observer-moment experiencing a white rabbit
>could be just as complex as a 'normal' one). In other words, I think a TOE
>should incorporate a "theory of the anthropic principle" rather than just
>adding it onto a sort of global "physical" measure as in theories like Max
>Tegmark's. The anthropic principle/physical measure distinction seems to me
>to be another version of the old mind/body duality, and it would be nice if
>a TOE could erase this distinction.
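
To make the arithmetic in the quoted example explicit, here is a minimal
sketch in Python (the function name, and the idea of a per-individual
"consciousness weight", are just my illustrative choices). With both weights
equal to 1 it reproduces the ordinary self-sampling update; setting the
"female" weight to zero models the marble-statue variant, where finding
myself to be male no longer tells me anything about the batch sizes:

    # Weighted self-sampling update for the two-batch example.
    # Hypothesis H: the large batch is the male one (prior 1/2 from
    # the coinflip). Evidence: I find myself to be a conscious male.
    def posterior_large_is_male(n_large=950, n_small=50,
                                w_male=1.0, w_female=1.0, prior=0.5):
        # Under a weighted self-sampling assumption I am sampled in
        # proportion to consciousness weight, so the chance that a
        # sampled observer is male depends on which batch is male.
        p_male_if_h = (n_large * w_male /
                       (n_large * w_male + n_small * w_female))
        p_male_if_not_h = (n_small * w_male /
                           (n_small * w_male + n_large * w_female))
        # Bayes' theorem.
        return (p_male_if_h * prior /
                (p_male_if_h * prior + p_male_if_not_h * (1 - prior)))

    print(posterior_large_is_male())              # 0.95: the plain SSA answer
    print(posterior_large_is_male(w_female=0.0))  # 0.5: statue "females" mean
                                                  # I am guaranteed to be male,
                                                  # so my sex carries no news

A graded weight (say w_female=0.1 for a dimly conscious batch) interpolates
smoothly between those two answers, which is the sense in which I think the
anthropic principle should be a matter of degree rather than all-or-nothing.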

The other idea I have is that there might be an interrelationship between
the "global measure" and some kind of "relative measure" (which tells me
what my next experience is likely to be like, given the experience I am
having now). In other words, the probability of being a particular
observer-moment would not depend solely on that observer-moment's level of
consciousness (ant vs. human, for example) but also on how many other
observer-moments have memories of having had an experience similar to the
one that observer-moment is having. So in some sense each observer-moment's
"degree of reality" (its absolute measure) might depend on the degree to
which other observer-moments "recognize" it (the relative measure between
them and it), and their "votes" might themselves be weighted by their own
absolute measure (so if I'm 'recognized' by an observer with a higher
absolute measure it counts for more than being recognized by one with a
lower absolute measure). This is all very sketchy and needs to be fleshed
out, but my idea is that somehow this might give something like an infinite
set of simultaneous equations which would let you bootstrap a unique global
measure function into existence, as the only self-consistent solution of
those equations.
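
Very roughly, that self-consistency condition has the shape of an
eigenvector problem: the vector of absolute measures should be a fixed
point of the matrix of relative ("recognition") measures. Here is a toy
sketch in Python with an invented 3x3 recognition matrix, just to show the
shape of the bootstrap (for a strictly positive matrix, the Perron-Frobenius
theorem guarantees that the fixed point this kind of power iteration finds
is unique up to scale):

    # Toy bootstrap of absolute measure from relative measure.
    # recog[j][i] = how strongly observer-moment j "recognizes"
    # observer-moment i, i.e. the relative measure from j to i.
    # Self-consistency: measure[i] = sum_j measure[j] * recog[j][i],
    # with each recognition vote weighted by the voter's own measure.
    recog = [
        [0.5, 0.4, 0.1],   # purely invented numbers for
        [0.3, 0.5, 0.2],   # three observer-moments
        [0.1, 0.2, 0.7],
    ]

    measure = [1.0 / 3] * 3   # arbitrary initial guess
    for _ in range(100):      # power iteration toward the fixed point
        new = [sum(measure[j] * recog[j][i] for j in range(3))
               for i in range(3)]
        total = sum(new)
        measure = [m / total for m in new]   # renormalize to sum to 1

    print(measure)   # the unique self-consistent measure

In the real version the "matrix" would be infinite (one row per possible
observer-moment), so this toy is only meant to show why a unique
self-consistent measure might exist at all, not how to compute it.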


