Re: Conditional probability & continuity of consciousness (was: Re: FIN Again)

From: Jacques Mallah <jackmallah.domain.name.hidden>
Date: Thu, 06 Sep 2001 18:06:01 -0400

>From: "Jesse Mazer" <lasermazer.domain.name.hidden>
>>From: "Jacques Mallah" <jackmallah.domain.name.hidden>
>> "You" is just a matter of definition. As for the conditional
>>effective probability of an observation with characteristics A given that
>>it includes characteristics B, p(A|B), that is automatically defined as
>>p(A|B) = M(A and B) / M(B). There is no room to have a rival "relative
>>conditional probability". (E.g. A = "I think I'm in the USA at 12:00
>>today", B="I think I'm Bob".)
>
>Well, I hope you'd agree that which observer-moment I am right now is not a
>"matter of definition," but a matter of fact.

    Depends what you mean by that ...

>My opinion is that the global measure on all observer-moments is not
>telling us something like the "number of physical instantiations" of each
>one, but rather the probability of *being* one particular observer-moment
>vs. some other one.

    No, if taken at face value that really doesn't make any sense at all.
There is no randomness in the multiverse.
    On the other hand, it is proportional to the *effective*
"probability of being" one. In this case, "effective" refers to the role it
plays in Bayesian reasoning. The reason it plays that role is to maximize
the fraction of people who, using Bayesian reasoning, guess well. By
"people" here I mean what you would call "instantiations of OM's".

>I would be interested to hear what you think the measure means, though,
>since my version seems to require first-person facts which are separate
>from third-person facts (i.e., which observer-moment *I* am).

    The measure is just the number of observer-moments (where, to be clear,
different people count as different observers) that see that type of
observation. It is really a measure on the characteristics of OM's, rather
than on OM's, since each O-M is counted equally. # of O-Ms = # of observers
* # of moments.

>In any case, I'm pretty sure there's room in a TOE for a "conditional
>probability" which would not be directly deducible from the global
>probability distribution. Suppose I have a large population of individuals,
>and I survey them on various personal characteristics, like height, IQ,
>age, etc. Using the survey results I can create a global probability
>function which tells me, for example, what the likelihood is that a random
>individual is more than 5 feet tall. But if I then want to find out the
>conditional probability that a given individual over 5 feet tall weighs
>more than 150 pounds, there is no way to deduce this directly given only
>the global probability distribution.

    Sure there is, as you go on to say ...

>In this example it may be that p(A|B) = M(A and B) / M(B), but the point is
>that M(A and B) cannot be found simply by knowing M(A) and M(B).

    Of course it can't, unless you know that A and B are independent. Why
the heck would you even think of trying?
    The global measure is on the whole set of OM characteristics:
M(...,a,b,c,d,...). To find M(A), you have to set a = A and sum over all
possible values of b, c, d, etc.
    The global measure has all the information, so to actually use it you
have to ignore most of that stuff by summing over irrelevant details.
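    (Here's a toy Python sketch of that bookkeeping, with made-up
characteristics and made-up counts standing in for the measure - nothing
about the actual multiverse, just the arithmetic of summing over details and
forming p(A|B) = M(A and B) / M(B):)

        # Hypothetical global measure on tuples of OM characteristics
        # (identity, location); counts stand in for the measure.
        M = {
            ("Bob",   "USA"):    3,
            ("Bob",   "France"): 1,
            ("Alice", "USA"):    2,
            ("Alice", "France"): 4,
        }

        def marginal(predicate):
            """M(A): sum the measure over all OMs whose characteristics satisfy A."""
            return sum(m for chars, m in M.items() if predicate(chars))

        is_bob     = lambda c: c[0] == "Bob"
        in_usa     = lambda c: c[1] == "USA"
        bob_in_usa = lambda c: is_bob(c) and in_usa(c)

        # Conditional effective probability p(A|B) = M(A and B) / M(B)
        p_usa_given_bob = marginal(bob_in_usa) / marginal(is_bob)
        print(p_usa_given_bob)   # 3 / 4 = 0.75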

>And a TOE could conceivably work other ways too. Suppose we have a large
>number of interconnected bodies of water, each flowing into one another at
>a constant rate so that the total amount of water in any part stays
>constant over time. In that case you could have something like a "global
>measure" which would tell you the probability that a randomly selected
>water molecule will be found in a given body of water at a given time, but
>also a kind of conditional probability that a water molecule currently in
>river A will later be found in any one of the various other rivers that
>river A branches into. This would approximate the idea that my
>consciousness is in some sense "flowing" between different experiences,
>splitting and merging as it goes.
>Just as the path of a given molecule is determined by the geographical
>relationships between the various bodies of water, so the path of my
>conscious experience might be determined by some measure of the
>"continuity" between different observer-moments...even though an
>observer-moment corresponding to my brain 5 seconds from now and another
>one corresponding to your own brain at this very moment might have equal
>*global* measure, I would presumably be much more likely to flow into a
>future observer-moment which is more similar to my current one.

    The appeal of that kind of model is based on the illusion that we can
remember past experiences. We can't remember past experiences at all,
actually. We only experience "memory" because of the _current_ way our
brains are structured. It's possible to "remember" things that never
happened - not just a la "Total Recall", but even in simple cases like
swearing that you parked your car in one place when it's actually on the
other side of the lot. Eyewitness testimony is among the least reliable
forms of evidence.
    Well, with actual mind-like hidden variables to play the role of your
molecule, what you describe would be theoretically possible, yes. I have to
admit I have no idea what a "mind-like hidden variable" would mean or how
one might think that such a thing could be possible. But as any MWIer
should know, hidden variables are unsightly stubble. I recommend Occam's
Razor for a close shave.

>Most generally, we can imagine that a TOE defines both a global measure on
>individual observer-moments, but also a "conditional measure" on ordered
>pairs of observer-moments, or perhaps longer ordered chains. There would
>probably be some kind of mathematical relation between the two types of
>measure, but it wouldn't necessarily have to be of the form p(A|B) = M(A
>and B) / M(B) as you said. Do you see anything inherently contradictory
>about this idea?

    It seems to contradict the idea of an observer-moment as being an
experience. Experiences, by their very nature, are isolated things. I
can't see what it would mean for them to be linked.

>> It means - and I admit it does take a little thought here - _I want to
>>follow a guessing procedure that, in general, maximizes the fraction of
>>those people (who use that procedure) who get the right guess_. (Why
>>would I want a more error-prone method?) So I use Bayesian reasoning with
>>the best prior available, the uniform one on observer-moments, which
>>maximizes the fraction of observer-moments who guess right. No
>>soul-hopping in that reasoning, I assure you.
>
>I'm not sure it's possible to take a third-person perspective on the
>self-sampling assumption.

    Third person? Please don't use that kind of terminology. I guess you
mean an objective perspective. In fact, it's perfectly objective to note
that it is the best procedure available for people to use.

>For one thing, the reasoning only works if I assume *my* observer-moment is
>randomly selected

    Just a reminder: you don't literally assume it's randomly selected.
It's all deterministic. You do set the fraction of observers who see
something equal to the Bayesian probability that you would see it, but that
is only a neat little trick that lets you guess well.
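    (A little toy simulation, if it helps. The two "worlds" and the observer
counts are invented; the point is only that the rule "guess according to the
fraction of observers who see what you see" is the one that lets the most
observers guess right, with no randomness anywhere:)

        # Two deterministic "worlds" both exist; nobody is literally
        # randomly placed in one.  world -> how many observers see each colour.
        worlds = {
            "world1": {"red": 90, "blue": 10},
            "world2": {"red": 10, "blue": 90},
        }

        def fraction_correct(rule):
            """Fraction of all observers whose guess matches their actual world."""
            right = total = 0
            for world, counts in worlds.items():
                for colour, n in counts.items():
                    total += n
                    if rule(colour) == world:
                        right += n
            return right / total

        # The Bayesian/self-sampling rule: guess the world in which your
        # observation is seen by the larger number of observers.
        bayes_rule    = lambda colour: max(worlds, key=lambda w: worlds[w][colour])
        stubborn_rule = lambda colour: "world1"   # ignore the evidence

        print(fraction_correct(bayes_rule))      # 0.9
        print(fraction_correct(stubborn_rule))   # 0.5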

>--I can't use anyone else's or I may get incorrect results, as if I
>reasoned from Adam and Eve's point of view in the doomsday argument.

    I'm not sure what you are saying here, but certainly you have to use
what you see, rather than what you don't see.

>Then there is what Nick Bostrom calls the "problem of the reference class,"
>and I think there is a very good case to be made that the problem can only
>be solved by making reference to some sort of objective measure of the
>"consciousness level" of a particular observer-moment. For example, suppose
>I find that I was created as one of two "batches" of humans, the first
>batch containing 950 members and the second containing only 50. One batch
>is all-male and the other is all-female, and I know that which batch is
>which sex was determined by a coinflip, so that a third-person observer
>would say there is a 50% chance that the large batch is the male batch.
>However, since I observe myself to be a male, I use the self-sampling
>assumption to reason that there is actually a 95% chance that the large
>batch was all-male, simply because I'm assuming that I'm as likely to be
>any human as any other, and 95% of the humans were members of the large
>batch.

    OK so far, with the disclaimer I gave already regarding "likely to be"
language.

>All right so far. But suppose I now find out that one of the two batches
>was genetically engineered to lack a brain, having no consciousness
>whatsoever?
>... the "females" in this experiment are going to totally lack
>consciousness.

    Maybe you shouldn't have used that particular example ...
    It's really not a problem. The procedure works because most of the
people that use it will guess as well as possible. Therefore, it is not
designed to work for people that can't use it or that are statues. You must
assume a priori that you are capable of Bayesian reasoning.
    In other words, the reference class consists of those people capable of
Bayesian reasoning. (Or indeed, those who in fact use it.) Using this is
not arbitrary, and there is nothing mystical or magical about it. It's just
that this criterion ensures that the maximum fraction of people who do use
it (and therefore _can use it_) will guess well.
   In this case, only the humans could use it, so there should be no
correlation between the batch size you guess and the gender you are.
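    (Again just a toy count, using the numbers from your own example, to show
where the 95% comes from and why restricting the reference class to the
reasoners sends it back to 50%:)

        # Two equally common coin outcomes across the multiverse; in each,
        # one batch of 950 and one of 50 get a sex (numbers from the example).
        outcomes = [
            {"male": 950, "female": 50},   # coin says: big batch is male
            {"male": 50,  "female": 950},  # coin says: small batch is male
        ]

        # Among all males across both outcomes, what fraction live in an
        # outcome where the big batch is male?  That's p(big batch male | male).
        males_total = sum(o["male"] for o in outcomes)
        print(outcomes[0]["male"] / males_total)   # 950 / 1000 = 0.95

        # Now suppose the "females" have no brains.  The reference class is
        # only the people who can actually run the procedure - the males -
        # and every reasoner is male in *both* outcomes.  So "I am male"
        # no longer favours either outcome, and the answer is back to 0.5.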

>I think this should be a matter of degree, rather than an all-or-nothing
>affair.

    Only to the extent that some creatures might be more or less likely to
use the reasoning procedure.

>I think other animals, at least other mammals and birds, almost certainly
>have some kind of high-level conscious experience, so there is "something
>it is like" to be them, but I don't think I should reason as if I was
>randomly sampled from the set of all these animals, either.

    Right. It's as I said above. By no means do I imply that you must be
able to use Bayesian reasoning in order to have consciousness, just that you
need to use it in order to be in the correct reference class.

>Indeed, I don't think it's just a lucky break that I find myself to be a
>member of what is probably the most intelligent species that has ever
>existed on planet earth, despite the fact that the number of animals who
>have ever lived probably vastly outweighs the number of homo sapiens who
>have ever lived. I think some sort of graded anthropic principle is likely
>to be responsible here...the usual self-sampling assumption perhaps needs
>to be replaced by some kind of weighted self-sampling assumption, with the
>"weights" on an observer-moment having something to do with the complexity
>of the consciousness involved. Indeed, I think it would be particularly
>elegant if the whole global measure function turned out to be nothing but
>this sort of weighted self-sampling assumption, although the weights would
>probably have to be determined by more than *just* the level of
>consciousness (after all, an observer-moment experiencing a white rabbit
>could be just as complex as a 'normal' one).

    I think you're making the whole question a lot more mysterious and
complicated than you need to. Bayesian reasoning is a guessing procedure,
it works due to the way it benefits the majority of guessers, and that means
if you're not a Bayesian guesser you shouldn't be taken into account by the
optimum procedure.

>In other words, I think a TOE should incorporate a "theory of the anthropic
>principle" rather than just adding it onto a sort of global "physical"
>measure as in theories like Max Tegmark's. The anthropic principle/physical
>measure distinction seems to me to be another version of the old mind/body
>duality, and it would be nice if a TOE could erase this distinction.

    I hope the way I explained it makes sense to you.

>So I guess I should repeat the question, when you talk about a global
>measure on all observer-moments, what do you think this measure means? Does
>it just represent the "number of instantiations" in the multiverse, so that
>if the vast majority of observer-moments/computations turned out to be of a
>very simple nature, we might need to make additional recourse to the
>anthropic principle to explain why we find ourselves to be experiencing
>these particularly complex types of observer-moments?

    That might be one way of putting it, yes. But it's a rather convoluted way
of saying that, while most beings may be quite simple, the mere fact that we
are thinking about this stuff means that we are at least semi-intelligent.

>Or do you think, as I do, that the global measure tells us the actual
>probability that "my" current experience will be a given observer-moment,
>and in that case do you agree that the global measure function must already
>take into account something along the lines of the anthropic principle?

    No.
    I should mention that the measure to be used in Bayesian reasoning is
not exactly the same as the global measure on OM characteristics, since the
latter covers all sorts of OMs. The Bayesian measure is the conditional
measure given that we are using the reasoning, M(...|B).
    The actual non-conditional measure is still important for things like
utility functions, since for example you might want to increase the measure
of happy butterflies.

>Well, you're assuming that reality itself has no opinion on the definition
>of "me," that it all depends on my own choice of utility function.

    No. Those are two separate issues. The definition is just something
that allows people to communicate and to make precise their thinking. The
utility function just describes what you want. There is no objective
standard for either, but they are different questions.

>Likewise, perhaps someone might have a TOE in which there is no "objective"
>global measure on the set of all observer-moments, that each
>observer-moment would have his own measure defined by what he believes to
>be real.

    That makes no sense.

>But I think that in both cases the TOE should give me an answer which does
>not depend on my own preferences--as Philip K. Dick once said, "reality is
>that which, when you stop believing in it, doesn't go away." I don't think
>I could "save myself" from a hellish fate simply by redefining my utility
>function so that only those copies which go to heaven are defined as
>"me"--whether I find myself in heaven or hell seems to be something imposed
>on me by external reality, unless the flow of consciousness is a complete
>illusion.

    Well, it certainly is an illusion. But again, your utility function has
nothing to do with it. You really have no control over your utility
function: you want what you want.
    As for ending up in hell: by one definition, you end up both in heaven
and in hell. By another, you are just your current OM, so you end up in
neither.
    By another, you might correspond to a particular implementation of a
computation - a mapping from physical to formal states - and in that case I
think you might end up in hell. But that's not random; it all depends on
what the mapping is and what the physical situation is. This one is the
closest to what you seem to want, but remember: a mapping can easily enter a
region in which the formal states no longer apply or are no longer changing,
and then you'd be dead.
    But as for your utility function - it should give the same results
*regardless* of what definitions you use. Definitions are just a way of
talking, and have nothing to do with whether the real situation that exists
is one that you approve of.

>I talked about this question in an earlier post called "3 possible
>views of consciousness:"
>>1. Consciousness is not "real"--our decision to call a system "conscious"
>>or not is based only on subjective aesthetic criteria, like "cuteness"
>>(Daniel Dennett's example). The only facts about reality are third-person
>>facts, in this view.
>
>>2. Consciousness is real, but the feeling of continuity of consciousness
>>over time (the 'flow of related thoughts in time' above) is not. In this
>>view, only moments of experience exist, but nothing flows between these
>>moments.

>>3. Consciousness is real, and so is continuity of consciousness over
>>time. Proponents of this view may still believe that identity can split
>>or merge though (think of many-worlds, or replicator experiments).
>
>Presumably you'd choose either 1 or 2, although I'm not quite sure which.

    I'm not sure just how real consciousness is. I'm basically a
reductionist - that is, I think that the math contains all the facts about
consciousness. I'd like to think that, even if it's really an illusion,
then it's at least an interesting or important illusion.
    So I'm somewhere between #1 and #2. The only facts are those that are
objectively true, and that includes some sort of facts about what it's like
to be a particular OM.

>But do you think view #3 is "crazy," or is it just quantum immortality
>specifically that you find crazy?

    #3 isn't crazy the way the FIN is, but it's wrong and as I said above
it's based on the illusion of remembering past experiences.

>It would be possible to believe in #3 without believing in quantum
>immortality, of course...I do think that once you accept #3, as well as the
>"splitting" and "merging" of consciousness-streams, then are some
>thought-experiments which make quantum immortality very plausible. But
>they're only plausible if you already find view #3 to be plausible in the
>first place.

    No, even if you believe #3 there's no way the FIN could be plausible.

>> I've explained that in other posts, but as you see, the idea is indeed
>>mathematically incoherent - unless you just mean the conditional effective
>>probability which a measure distribution defines by definition. And
>>_that_ one, of course, leads to a finite expectation value for one's
>>observed age (that is, no immortality).
>
>The "expectation value" is a problem, but as I said it's possible to accept
>#3 without accepting quantum immortality. For example, there could be a
>"null observer-moment" (death) which any given observer-moment has a small
>probability of becoming at any given time, and this probability could
>become large given enough time. Another interesting possibility is that
>whatever "conditional probability" we choose will be nonzero for *any* pair
>of observer-moments, so that there is some tiny probability that my next
>observer-moment will be completely unlike my current one--in situations
>where the probability of my physical death is large (like observing myself
>falling off a cliff), perhaps the combined conditional probability of my
>next moment being fairly similar to my current one is smaller than the
>combined conditional probability of a dissimilar next moment, so that I may
>suddenly find myself waking up from a dream of falling as a totally
>different person in a different region of the multiverse. In this way my
>conscious experience could be infinite even if at any given time I find
>myself as a finite organism with a finite memory.

    In this case I can see no point to the whole construct. If you jump
randomly to other people and lose all of your old memories - what's the
point of being the original person vs. the other person?

>Finally, if conditional measure and global measure mutually determine each
>other in some way, there might be a way in which "young" observer-moments
>could have greater global measure than "old" ones. Imagine an experiment
>where I am duplicated, as in Bruno Marchal's example, once in Moscow and
>once in Washington. Then a year later, if a democrat wins the U.S.
>presidency, the Washington twin will be duplicated 1000 times; if a
>republican wins the presidency, the Moscow twin will be duplicated 1000
>times. Even though this subsequent duplication happens a year after the
>original Moscow/Washington split, I think it could have an effect on my
>original first-person probability of finding myself in Moscow vs.
>Washington (since after all this is done, 1000 out of 1001 descendants of
>the original person will remember having ended up in the city corresponding
>to the future president)...we might then have the strange effect that I
>would have a pretty good idea of who was going to win the presidency a year
>from now based on whether I found myself in Washington or Moscow, even
>though this prophecy would be useless to everyone else since they'd have no
>way of knowing which twin has a lower measure/probability of being
>experienced.

    Some people on this list think that way; I see no reason to believe in
such hidden variables.

>something like the anthropic principle may also be involved in the fact
>that I find myself at this particular point in human history, on the verge
>of a possible "technological singularity" which could make my indefinite
>survival seem entirely natural (see
>http://www.aleph.se/Trans/Global/Singularity/).

    I am a bit more pessimistic ...
    How about this scenario. In the future, humanity could be replaced by a
machine. That is, at first there may be many machines, but they will reduce
their numbers - not by dying, but by merging into one being, thus losing
measure.

>So, there are a variety of ways I can imagine solving the "expected age"
>problem without having to throw out quantum immortality.

    You haven't done so. Even if the present measure is proportional to the
number of future copies, the total measure would remain constant as a
function of time (as future guys have less measure per capita but greater
numbers), so the expected age would still diverge.
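    (A quick toy check of that, with a made-up cutoff T on the number of
years, just to show the expected age grows without bound when each year
carries the same total measure:)

        # Suppose each "year" of your life carries the same total measure
        # (fewer future copies per capita, but more of them).
        def expected_age(T):
            measure_per_year = 1.0                 # constant total measure each year
            total = measure_per_year * T
            return sum(t * measure_per_year for t in range(1, T + 1)) / total

        for T in (10, 100, 1000, 10**6):
            print(T, expected_age(T))   # grows like (T + 1) / 2, without bound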

>I expect you'll find them all somewhat crazy...that's why, for now, I'd
>rather focus on the more general question of whether you think it's
>completely impossible that the correct TOE would make continuity of
>consciousness into something "objective" rather than something defined only
>by our own choices about utility functions.

    It's an illusion. Why should it be real as well?
    You see a floating man; it's certain that there are smoke and mirrors on
the stage producing the illusion of that floating man - you saw that
equipment backstage before the show. You even know how the trick can be
done. Why do you put such stock into the idea that in addition to all that,
there is really a floating man there?

>I don't entirely rule out the possibility that continuity of consciousness
>is a kind of illusion, but I still don't see how you can have grounds to be
>sure that it is.

    Sure, eh? I'm 'sure'. But I'm a scientist, not a mathematician.

                         - - - - - - -
               Jacques Mallah (jackmallah.domain.name.hidden)
         Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
         My URL: http://hammer.prohosting.com/~mathmind/
