>From: Wei Dai <weidai.domain.name.hidden>
>To: Jacques Mallah <jackmallah.domain.name.hidden>
>On Tue, Mar 20, 2001 at 06:14:58PM -0500, Jacques Mallah wrote:
> > Effectively it is [a game], since Bob has a Bayesian probability of
>affecting Alice and so on.
>
>He doesn't know whether he is Alice or Bob, but he does know that his
>payoff only depends on his own action. "Bob has a Bayesian probability of
>affecting Alice" is true in the sense that Bob doesn't know whether he is
>Alice or Bob, so he doesn't know whether his action is going to affect
>Alice or Bob, but that doesn't matter if he cares about himself no matter
>who he is, rather than either Alice or Bob by name.
[Prepare for some parenthetical remarks.
(I assume you mean (s)he cares only about his/her implementation, body,
gender, or the like; that is, a utility function that depends on indexical
information. Fine, but tricky. If I care only about my implementation,
then I don't care about my "brothers". Things will depend on exactly how
the experiment works.
On the other hand, I don't think it's unreasonable for the utility
function not to depend on indexical information. For example, Bob might
like Alice and place equal utility on both Alice's money and his own, as
in the example I used.
In practice, I think people mainly place utility on those who will
remember the stuff they are currently experiencing. Thus if there were a
way to partially share memories, things could get interesting.)
(Note for James Higgo: the concept of "self" can be defined in various
ways, and I do not mean to imply that there is any objective reason for
him to use these ways. For example, I might decide, not knowing my gender,
that I still care only about people who have the same gender as me. Thus
Bob would not care about Alice. Silly, but possible. I guess the drugs
also make you forget which body parts go with what, but you can still
look, so the experiences are not identical. Just mentioning that in case
anyone was about to jump in on that point.)
(I would also say that any wise man - which I am not - will certainly
have a utility function that does _not_ depend on indexical information! We
are fools.)
OK, back to the question. Forget I said a thing.]
It's effectively a game. But there's no point in debating semantics.
In any case, just choose a utility function (which will also depend on
indexical information), analyse the situation based on it, and out comes
the correct answer that maximizes expected utility. So let's concentrate
on the case with just Bob below.
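To spell that procedure out, here is a minimal sketch in Python. The
indexical possibilities and the payoffs are made-up placeholders, not the
numbers from any of our actual examples:

    # Sketch: pick the action that maximizes expected utility, where the
    # uncertainty is indexical (the agent doesn't know who he is).
    # All possibilities and payoffs below are hypothetical placeholders.
    possibilities = {"I am Bob": 0.5, "I am Alice": 0.5}  # P(indexical fact)

    # payoff[action][possibility] = utility of the action, given who I am
    payoff = {
        "button 1": {"I am Bob": 10.0, "I am Alice": 10.0},
        "button 2": {"I am Bob": 12.0, "I am Alice": 12.0},
    }

    def expected_utility(action):
        return sum(p * payoff[action][who]
                   for who, p in possibilities.items())

    best = max(payoff, key=expected_utility)
    print(best, expected_utility(best))  # out comes the correct answer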
> > You are correct as far as him thinking he is more likely to be in
>round 2. However, you are wrong to think he will push button 1. It is
>much the same as with the Bob and Alice example:
>
>You said that in the Bob and Alice example, they would push button 1 if
>they were selfish, which I'm assuming they are, and you said that the
>seeming paradox is actually a result of game theory (hence the above
>discussion). But in this example you're saying that the participant would
>push button 2. How is that the same?
If you're saying that even if they are "selfish" they would push button
2, I won't argue. I was just using a different utility function for being
"selfish", one that did not depend on indexical info. Pushing 2 is better
anyway, so why complain?
> > He thinks he is only 1/101 likely to be in round 1. However, he
>also knows that if he _is_ in round 1, the effect of his actions will be
>magnified 100-fold. Thus he will push button 2.
> > You might see this better by thinking of measure as the # of copies
>of him in operation.
> > If he is in round 1, there is 1 copy operating. The decision that
>copy makes will affect the fate of all 100 copies of him.
> > If he is in round 2, all 100 copies are running. Thus any one copy
>of him will effectively only decide its own fate and not that of its 99
>brothers.
>
>That actually illustrates my point, which is that the measure of oneself is
>irrelevant to decision making. It's really the magnitude of the effect of
>the decision that is relevant. You say that the participant should think
>"I'm more likely to be in round 2, but if I were in round 1 my decision
>would have a greater effect."
First, it's nice to see that you accept my resolution of the "paradox".
But I have a hard time believing that your point was, in fact, the
above. You brought forth an attack on anthropic reasoning, calling it
paradoxical, and I parried it. Now you claim that you were only pointing
out that anthropic reasoning is just an innocent bystander? Of course it's
just a friendly training exercise, but you do seem to be pulling a switch
here.
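To make the arithmetic behind that resolution explicit, here it is in
Python; only the 1-copy vs. 100-copies structure comes from the example,
the rest is a sketch:

    # The copies argument in numbers.  One copy runs in round 1, and its
    # decision binds all 100 copies; in round 2 all 100 copies run, and
    # each decides only for itself.  Self-location follows measure.
    from fractions import Fraction

    p_r1 = Fraction(1, 101)    # P(this copy is the one in round 1)
    p_r2 = Fraction(100, 101)  # P(this copy is one of the 100 in round 2)

    affected_r1 = 100  # a round-1 decision affects all 100 copies
    affected_r2 = 1    # a round-2 decision affects only this copy

    weight_r1 = p_r1 * affected_r1  # = 100/101
    weight_r2 = p_r2 * affected_r2  # = 100/101
    print(weight_r1 == weight_r2)   # True: the rounds weigh equally

The 1/101 chance of being in round 1 is exactly cancelled by the 100-fold
magnification, so he does not discount round 1, and he pushes button 2.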
>I suggest that he instead think
>"I'm in both round 1 and round 2, and I should give equal
>consideration to the effects of my decision in both rounds."
I assume you mean he should think "I am, or was, in round 1, and I am,
or will be, in round 2". There is no need for him to think that, and it's
not true. Only one of the "brothers" was in round 1.
If he doesn't care even about his own "brothers", we are back to game
theory. And don't say it's not a game, because this time the guy who is
in round 1 definitely does affect his "brothers" as well as himself.
>This way, we can make decisions without reference to the measure of
>conscious thoughts (unless you choose to consider it as part of your
>utility function), and we do not need to have a theory of consciousness,
>which at least involves solving the implementation problem (or in my own
>proposal where static bit strings can be conscious, the analogous
>interpretation problem).
First, anyone whose utility function does not depend on measure is
definitely insane in my book.
One good utility function could have the form
U = [sum_(thoughts) f(thought) M(thought)] + V
where f is some function, M is the (unnormalized!!) measure, and V is to
take into account other stuff he may care about. V = 0 is perhaps the
wisest choice. This does not take indexical info into account, so you will
probably not like it.
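In code, with arbitrary placeholder thoughts, f-values and measures, it
looks like this:

    # Sketch of U = [sum_(thoughts) f(thought) M(thought)] + V.
    # Every number below is a made-up placeholder.
    thoughts = {
        # thought          : (f(thought), M(thought)), M unnormalized
        "pleasant thought":  (2.0, 100.0),  # high measure: many copies
        "round-1 decision":  (1.5, 1.0),    # low measure: one copy
    }
    V = 0.0  # other stuff he may care about; V = 0 is perhaps wisest

    U = sum(f * M for f, M in thoughts.values()) + V
    print(U)  # 201.5 with these placeholders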
Secondly, as we have seen, anthropic reasoning leads to the conclusion
that both rounds are equally important. So you are not suggesting that he
act any differently in practice; you are just suggesting that he lie to
himself about why he should act that way.
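In fact you can check the two prescriptions side by side. With any
hypothetical per-copy payoffs, your "equal consideration" rule and the
measure-weighted anthropic rule rank the actions identically, since one
score is just 100/101 times the other:

    # Wei's rule vs. the anthropic rule, on hypothetical payoffs.
    def score_equal(pay_r1, pay_r2):      # equal consideration to rounds
        return pay_r1 + pay_r2

    def score_anthropic(pay_r1, pay_r2):  # P(round) * copies affected
        return (1/101) * 100 * pay_r1 + (100/101) * 1 * pay_r2

    # score_anthropic == (100/101) * score_equal, so rankings agree.
    for act in [(3.0, 5.0), (5.0, 3.0), (0.0, 1.0)]:
        print(act, score_equal(*act), score_anthropic(*act))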
- - - - - - -
Jacques Mallah (jackmallah.domain.name.hidden)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://hammer.prohosting.com/~mathmind/