Re: another anthropic reasoning

From: Wei Dai <weidai.domain.name.hidden>
Date: Wed, 21 Mar 2001 10:24:35 -0800

On Tue, Mar 20, 2001 at 06:14:58PM -0500, Jacques Mallah wrote:
> Effectively it is, since Bob has a Bayesian probability of affecting
> Alice and so on.

He doesn't know whether he is Alice or Bob, but he does know that his
payoff depends only on his own action. "Bob has a Bayesian probability
of affecting Alice" is true in the sense that Bob doesn't know whether
he is Alice or Bob, and so doesn't know whether his action will affect
Alice or Bob. But that doesn't matter if he cares about himself, whoever
he turns out to be, rather than about either Alice or Bob by name.
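To spell that out with a toy calculation (the payoff numbers below are
hypothetical placeholders, since the original example's payoffs aren't
restated here): when a selfish player's payoff depends only on his own
action, his credence about being Alice or Bob drops out of the expected
utility entirely.

    # Toy sketch in Python; payoffs are made up for illustration.
    p_alice = 0.5  # credence "I am Alice" -- any value works
    u = {"button 1": 2.0, "button 2": 1.0}  # my payoff for my own action

    def eu(action):
        # Whether I turn out to be Alice or Bob, the payoff term is
        # the same, so the indexical credence cancels: eu(a) == u[a].
        return p_alice * u[action] + (1 - p_alice) * u[action]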

> You are correct as far as him thinking he is more likely to be in round
> 2. However, you are wrong to think he will push button 1. It is much the
> same as with the Bob and Alice example:

You said that in the Bob and Alice example they would push button 1 if
they were selfish (which I'm assuming they are), and that the seeming
paradox is actually a result of game theory (hence the above
discussion). But in this example you're saying that the participant
would push button 2. How are the two cases the same?

> He thinks he is only 1/101 likely to be in round 1. However, he also
> knows that if he _is_ in round 1, the effect of his actions will be
> magnified 100-fold. Thus he will push button 2.
> You might see this better by thinking of measure as the # of copies of
> him in operation.
> If he is in round 1, there is 1 copy operating. The decision that copy
> makes will affect the fate of all 100 copies of him.
> If he is in round 2, all 100 copies are running. Thus any one copy of
> him will effectively only decide its own fate and not that of its 99
> brothers.

That actually illustrates my point, which is that the measure of oneself
is irrelevant to decision making. It's really the magnitude of the effect
of the decision that is relevant. You say that the participant should
think "I'm more likely to be in round 2, but if I were in round 1 my
decision would have a greater effect." I suggest that he instead think
"I'm in both round 1 and round 2, and I should give equal
consideration to the effects of my decision in both rounds."
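
To make the comparison concrete, here is a small Python sketch of both
rules (the payoff numbers are hypothetical placeholders, since this
excerpt doesn't specify the experiment's actual payoffs):

    # u1[a]: per-copy payoff when decision a is made in round 1
    # u2[a]: per-copy payoff when a copy makes decision a in round 2
    u1 = {"button 1": 3.0, "button 2": 5.0}
    u2 = {"button 1": 4.0, "button 2": 4.5}
    N = 100  # copies running in round 2

    def measure_weighted_eu(a):
        # Your rule: credence 1/(N+1) of being the round-1 copy, whose
        # decision is magnified N-fold; credence N/(N+1) of being a
        # round-2 copy, whose decision affects only itself.
        return (1.0 / (N + 1)) * N * u1[a] + (N / (N + 1)) * u2[a]

    def equal_consideration_eu(a):
        # My rule: I am in both rounds, so sum the total effect of the
        # decision in round 1 (one decision, N copies affected) and in
        # round 2 (N copies, each deciding its own fate).
        return N * u1[a] + N * u2[a]

    for a in ("button 1", "button 2"):
        print(a, measure_weighted_eu(a), equal_consideration_eu(a))

Both rules rank the actions by u1[a] + u2[a]: the 1/(N+1) measure term
and the N-fold magnification cancel, so the measure never had to enter.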

This way, we can make decisions without reference to the measure of
conscious thoughts (unless you choose to count it as part of your
utility function), and we do not need a theory of consciousness, which
would at least require solving the implementation problem (or, in my
own proposal where static bit strings can be conscious, the analogous
interpretation problem).