Wei Dai, <weidai.domain.name.hidden>, writes:
> The typical setup for a DA [doomsday argument]
> is two possible universes with some a priori
> probability for one of them being the real one. It doesn't seem to apply
> directly to a theory where all objects/universes exist.
I don't see how the number of universes makes any difference. The DA is
simple: considering yourself to be a random sample from humanity, the
fraction of all human lives, over the entire lifetime of the species,
that have occurred before yours can be assumed to have a uniform
probability distribution on the interval (0,1). This is true whether or
not there are other universes. So the median future lifetime of the human
species can be expected to be fairly short. (Due to the recent population
boom, most human lives have occurred in this century. So has yours.)
Of course this can be modified in the usual Bayesian way if you
have any additional information, such as good predictions, based on other
observations, of what will actually happen in world history.
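To make the arithmetic concrete (a minimal Python sketch; the figure
of roughly 6e10 births to date is a round number I am assuming, not
anything from the argument itself):

    # Doomsday Argument sketch. If f = r/N is uniform on (0,1), where r is
    # your birth rank and N the total number of humans ever born, then
    # P(N <= r/(1-p)) = P(f >= 1-p) = p.
    r = 6e10  # births to date -- an assumed round figure
    for p in (0.5, 0.9, 0.95):
        n_total = r / (1 - p)
        print(f"P(N <= {n_total:.1e}) = {p}; at most {n_total - r:.1e} births remain")

So the median estimate is N = 2r: about as many births remain as have
already occurred, which at anything like current birth rates is only a
few centuries.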
On Wed, 27 Jan 1999 hal.domain.name.hidden wrote:
> Wei Dai, <weidai.domain.name.hidden>, writes:
> > The way I see it, the premise should not be used to draw conclusions, but
> > rather serves as an explanation. Because you already know you are a human
> > being named Hal Finney, it no longer matters that you are a random sample
> > of all beings. However, the premise helps explain why you are Hal Finney
> > and not some bug-eyed alien, namely that Hal has a larger measure than the
> > alien (assuming that is actually true).
There is nothing about effective probabilities that argues against
the existence of aliens. What it does argue is that humans are unlikely
to be a very atypical species, if there are any major regularities among
intelligent species. For example, if there are A bipedal species and B
quadrupedal species among the intelligent species in the galaxy, and one
group is much larger, we would be more likely to find ourselves in the
larger group; so, given that we are bipedal, A > B is more likely than
B > A.
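To put numbers on it (the counts are hypothetical, purely for
illustration):

    # Self-sampling among intelligent species: the chance of finding
    # yourself in a group is proportional to its size.
    A, B = 900, 100   # hypothetical counts of bipedal / quadrupedal species
    print(f"P(we are bipedal) = {A / (A + B):.2f}")   # 0.90

Observing that we are bipedal is then evidence that the bipedal group
is the larger one.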
> > Here's an analogy: suppose you have just been dealt a hand in a card game.
> > Since you know what cards you have, the distribution from which they were
> > chosen no longer matters and won't help you play, but that distribution
> > helps explain why you got the cards you did.
Right.
> I do have a problem with the way explanations use probability. In your
> card game example, what happens when you are dealt an unlikely hand?
> What kind of explanation can the theory offer? If you are playing many
> times, you can say that the theory does give the frequency with which
> such hands will appear. But in life, it seems like we only play once.
So what? It tells you how likely that hand was. Luck explains
the rest.
> I look around the world and see that the vast majority of people are poor
> third-worlders struggling with difficult lives. I am fortunate enough to
> live in a wealthy country, have a good education, and tremendous riches
> by the standards of most people. What kind of explanation is there for
> this based on the assumption that I am a random selection from among
> all people? We can't run the universe again and let me be a different
> random selection next time. This is the same conceptual problem I keep
> encountering with this notion.
I don't see the problem. You are obviously not in the group of
maximum probability; that just means you were lucky. And it is not as if
the fraction of people in wealthy countries is even that small, compared
to the probabilities involved in many of the other issues considered;
that fraction is about 0.1, i.e. of order one.
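To quantify "lucky" (using the rough figure of 0.1 above):

    import math
    # Surprisal of landing in a group of fractional size p under random
    # sampling: -log2(p) bits.
    for p in (0.1, 0.01, 1e-9):
        print(f"p = {p:g}: {-math.log2(p):.1f} bits of surprise")

A probability-0.1 outcome is about 3.3 bits of surprise: unremarkable
luck, not an anomaly calling for a new theory.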
> It gets worse if you consider not just me as a random sample from among
> all observers, but if you consider me-now as a random sample from among
> all observer-instants. Now it seems that I have to adopt an atemporal
> perspective where my consciousness dwells here in Hal Finney, 1999, for
> an instant, then jumps back to a slave in ancient Rome, then dwells for
> a moment in our bug-eyed alien. I can't make sense out of this.
No, it doesn't jump at all. Each observer-moment simply exists,
with its own measure; nothing travels between them.
> This view is especially perplexing if my measure changes drastically
> over time. With some of our thought experiments, I could boost the
> measure of an instant of my consciousness by making copies of my brain
> state (say, a high-resolution X-ray). But the next instant, my measure
> drops again. Would I somehow expect to notice myself spending more
> time in that amplified instant? Suppose I spent half my days with a
> big brain and half my days with a small brain. Am I to be puzzled on
> those days I am in the small brain, faced with the mystery of why I am
> not experiencing the measure-enhanced big-brain days?
Making copies would not enhance your measure unless they are
functional copies, according to computationalism. Even if they are, it
would just mean you could expect an enhanced a priori chance of being on a
'big brain day'. 'Small brain' days would not be of zero measure, so
would still occur, and if you find yourself in one, fine. It won't feel
any different, just as it wouldn't feel any different if you had an exact
twin you didn't know about.
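As a sketch of the bookkeeping (the factor of two is an assumed copy
count for illustration, not anything established above):

    # Measure-weighted sampling of observer-moments.
    m_big, m_small = 2.0, 1.0   # assumed relative measure per day
    p_big = m_big / (m_big + m_small)
    print(f"P(random observer-moment is a big-brain day) = {p_big:.2f}")
    # Small-brain days keep measure 1/3 here -- nonzero, so they still occur.

The small-brain days are merely less probable a priori, not absent, and
nothing about them feels different from the inside.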
> This is related to the quantum immortality question. As time goes on,
> my measure becomes less. But it is hard to understand what I should
> expect to experience as a result of this.
No, it's easy. You should expect most of your experiences to be
in the region of time where you have significant measure.
- - - - - - -
Jacques Mallah (jqm1584.domain.name.hidden)
Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://pages.nyu.edu/~jqm1584/