Re: another anthropic reasoning

From: Jacques Mallah <jackmallah.domain.name.hidden>
Date: Thu, 01 Mar 2001 03:14:03 -0500

>From: Wei Dai <weidai.domain.name.hidden>
>Consider the following thought experiment.
>
>Two volunteers who don't know each other, Alice and Bob, are given
>temporary amnesia and placed in identical virtual environments. They are
>then both presented with three buttons and told the following:
>
>If you push 1, you will lose $9
>If you push 2 and you are Alice, you will win $10
>If you push 2 and you are Bob, you will lose $10
>
>I'll assume that everyone agrees that both people will push button 2.

    Of course it depends on their utility functions, but we can assume they
will push 2.

>The paradox is what happens if we run Alice and Bob's minds on different
>substrates, so that Bob's mind has a much higher measure than Alice's. If
>they apply anthropic reasoning they'll both think they're much more likely
>to be Bob than Alice, and push button 1.

    No paradox, but this time the choice of utility function is more
important. While they have amnesia, their utility functions will surely be
different from what they usually are, since it would be a big giveaway if
(for example) the person knew that he didn't give a hoot about Bob.
    To an outside observer who cares about Bob and Alice equally, the best
outcome _would_ be if button #1 is pressed more often in this situation.
(Assuming the substrate would not later be switched.)
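    To see where the anthropic switch to button #1 actually kicks in,
here is a minimal sketch (Python; the payoffs are Wei's, but the measure
ratios are illustrative numbers of my own):

    # Expected dollar payoff of each button, given P(I am Bob) = p_bob.
    # Payoffs come from Wei's setup; the measure ratios below are made up.
    def expected_utility(button, p_bob):
        if button == 1:
            return -9.0                      # lose $9 no matter who you are
        return -10.0 * p_bob + 10.0 * (1.0 - p_bob)   # Bob -$10, Alice +$10

    for ratio in (1, 9, 19, 99):             # Bob's measure : Alice's measure
        p_bob = ratio / (ratio + 1)
        eu1 = expected_utility(1, p_bob)
        eu2 = expected_utility(2, p_bob)
        print(f"ratio {ratio:3d}:1  EU(1)={eu1:+.2f}  EU(2)={eu2:+.2f}")

    On these numbers button #1 only pulls ahead once Bob's measure
exceeds Alice's by more than 19 to 1; at exactly 19:1 the two buttons tie
at -$9, and below that button #2 is still the better bet.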

>If you don't think this is paradoxical, suppose we repeat the choice but
>with the payoffs for button 2 reversed, so that Bob wins $10 instead of
>Alice, and we also swap the two minds so that Alice is running on the
>substrate that generates more measure instead of Bob. They'll again both
>push button 1. But notice that at the end both people would have been
>better off if they pushed button 2 in both rounds.

    If they knew during the first round that this would happen, they
probably wouldn't press #1. Their reasoning would have been equivalent to
"I'm probably Bob, and after this round I'm probably going to die while
Alice will replace me."
    Then look at the expected utilities. First, assume that they place
equal utility on Bob's and Alice's money. Then they will press #2. The
expected utility of this is:

EU(press #2) = (-10)*(final measure of Bob) + (10)*(final measure of Alice)

    You might think that the current measures of Bob and Alice (during the
first round) should be a factor. Although the person in round 1 is more
likely to be Bob, it's also true that if he is Bob the effect of his action
will be more diluted (afterwards); if she is Alice, the effect will be
magnified. The final measure distribution is what counts.
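    A toy calculation may make this concrete (Python; the 99:1 measure
split is an assumption of mine for illustration). Weighting button #2's
round-1 payoffs by the current measures makes it look worse than the sure
-$9 of button #1, while weighting by the final, post-swap measures makes
it clearly better:

    # Toy numbers (mine, not from the post) showing why the *final*
    # measure distribution is what counts for the round-1 decision.
    current = {"Bob": 0.99, "Alice": 0.01}    # measures during round 1
    final   = {"Bob": 0.01, "Alice": 0.99}    # measures after the swap

    payoff_button2 = {"Bob": -10, "Alice": +10}   # round-1 payoffs

    def weighted_eu(measures):
        return sum(payoff_button2[who] * m for who, m in measures.items())

    print(weighted_eu(current))   # -9.8: worse than the sure -9 of button 1
    print(weighted_eu(final))     # +9.8: much better than button 1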

    True, they could be more "selfish". Effectively they are playing a
prisoner's dilemma type of game where first Bob is given a move, then Alice
is. In this case they might both push 1, but only if they don't expect to
interact in the future, and don't care about each other. (And also don't
expect to gain a bad reputation.)

    Still no paradox; this case is an example of game theory. Consider the
case where they always have the same measure and never lose memory. They
have the choice of 1) hurt yourself a little, or 2) hurt yourself a lot but
help the other person by an equal amount. They might both choose 1, but
that's no paradox.
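    For what it's worth, that symmetric game really is a textbook
prisoner's dilemma. A quick sketch (Python; my own framing of the dollar
amounts above, assuming purely selfish players):

    # Move 1: hurt yourself a little (-$9).
    # Move 2: hurt yourself a lot (-$10) but give the other player +$10.
    def payoff(me, other):
        mine = -9 if me == 1 else -10
        gift = 10 if other == 2 else 0
        return mine + gift

    for me in (1, 2):
        for other in (1, 2):
            print(f"I play {me}, they play {other}: I get {payoff(me, other):+d}")

    Playing 1 strictly dominates (+1 beats 0, and -9 beats -10), so
selfish players land on (-9, -9) even though both playing 2 would give
them 0 each. That's the standard dilemma, not a paradox.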

From: George Levy <GLevy.domain.name.hidden>
>Erasing their memory puts a big kibosh on the meaning of who they really
>are. I could argue that Bob is not really Bob and Alice is not really
>Alice. Their identity has been reduced to a *bare* "I" and they are
>actually identical!!!!

    No, he didn't say they were now identical. For example, if Bob was evil
and liked to smoke, while Alice was good and liked to yoyo, we can assume
they still have those traits. They just don't remember who they are or who
has their traits.
    Of course, hardware doesn't make the man. Nothing does, really:
"Bobness" is just a loose set of traits. This shows the harebrained nature
of "1st person" nonsense.
    Likewise, if you want to base your utility function on "what's good for
'you'", you will find that this means nothing. The closest you can come is
to place utility upon certain traits within the measure distribution that
are based on what you remember.
    Most naive forms of selfishness are not really possible utility
functions, but it is certainly easy to be _effectively_ selfish, e.g. by
valuing traits that are mostly associated with thoughts _highly_ similar
to your own.

>If you could measure your measure you would find the measurement always
>identical no matter where, when or who you are.

    If that were true there would be as many white rabbits as there are
crackpots.

                         - - - - - - -
               Jacques Mallah (jackmallah.domain.name.hidden)
         Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
         My URL: http://hammer.prohosting.com/~mathmind/