Re: another anthropic reasoning

From: Jacques Mallah <jackmallah.domain.name.hidden>
Date: Fri, 30 Mar 2001 02:43:12 -0500

    My apologies to Bruno for not replying yet on the 'merde' thread, but I
am busy and this one is shorter.

>From: Wei Dai <weidai.domain.name.hidden>
>On Fri, Mar 23, 2001 at 04:53:24PM -0500, Jacques Mallah wrote:
> > [Wei Dai wrote:]
> > >You'll have to define what "effectively decide" means and how to apply
> > >that concept generally. (Have you introduced it before? I think this is
> > >the first time I've seen it.)

    I should have mentioned this _explicitly_ last time, but you're linking
the 'effectively' to the wrong word. It goes to show the disadvantages of
conversing by email, since this little misunderstanding of phrasing seems to
have been inflated into a big issue :)
    I said he 'would effectively decide his own fate'. I meant he 'would
decide a fate that is effectively his own'. So, really, I never used the
term 'effectively decide' as a linked phrase, but rather 'effectively ...
his own'. Meaning just that the fate he decides carries the same measure he
does and is of the same type 'he' would evolve into. Hence ...

> > I thought the meaning to be obvious in this context. The simplest
> > interpretation of your little experiment is that whatever fraction of him
> > push a particular button is the same fraction of him that ends up with
> > the corresponding payoff. That's how I always interpreted it.
> > If the measure of him is the same for the 2nd round as for after
> > both rounds, then it's the same as if each copy gets to influence its own
> > payoff.

    (The above was clearly not enough to tell you what I meant, even though
I really _thought_ it would be, so I have now spelled it out explicitly
above. Of course, who knows what you'll read into the new wording ...)

>In my original message I never talked about multiple copies in round 2. I
>was assuming that there would be just one copy in each round, but the
>measures of the copies would be different. You were the one who brought up
>the idea of 1 copy in round 1, and 100 copies in round 2 (presumably the
>measures of the copies are the same) as an analogy. I tried to work with
>your analogy, but here it seems to break down.

    What I actually said was "You might see this better by thinking of
measure as the # of copies of him in operation." For me that's not just an
analogy, although the number can of course be infinite (with the way to take
the limit to be specified). That's basically my definition of "measure".
(Where I use different implementation-mappings to define the "copies".)
    Of course people often use a more generalized notion of "measure", which
I can play along with for hypothetical purposes.
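    Since the exact payoffs aren't restated in this message, here is a toy
sketch of that reading, with made-up dollar amounts (nothing hangs on the
particular numbers). Counting measure as the number of copies in operation,
the "whatever fraction pushes a button is the fraction that gets that
payoff" reading comes out the same as each copy settling its own fate:

payoff = {1: -4, 2: 5}            # hypothetical dollars for buttons 1 and 2
choices = [2] * 70 + [1] * 30     # 100 copies in operation; 30% push button 1
n = len(choices)

# Reading A: each copy simply receives the payoff of its own button press.
each_copy_own_fate = sum(payoff[c] for c in choices)

# Reading B: whatever fraction pushes a button is the fraction paid for it.
fraction_reading = sum((choices.count(b) / n) * n * payoff[b] for b in payoff)

print(each_copy_own_fate, fraction_reading)   # 230 230.0 -- the readings agree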

> > >Suppose in round 2 he gets the $-9 payoff if any of the copies decide
> > >to push button 1. Intuitively, each copy affects the fate of every other
> > >copy.
> >
> > Now you're changing the game. And it is a game, since as you said
> > yourself, each guy affects the others.
>
>Again, I was trying to follow your analogy. But really what I have in mind
>is that in round 2 there is just one copy, which just affects itself.

    If there is just one copy, we must ask why the measure would be greater.
At first glance it would seem that this argument of yours could actually
help me against those who want to decouple the notions of "measure" and
"# of implementations", since it would not make sense for the expected
utility of pushing button 1 to be greater, and on my definition it clearly
isn't.
    However, I doubt it is that easy. First, technically one should not
assume a priori that consciousness is the same as decision-making.
Therefore, there is no reason the measure distribution of consciousness
should really come into decision-making at all.
    For example, suppose the guy has a magic switch on top of his head,
which can be set to 0 or 1, and that his measure is equal to the value of
the switch. Assume he can't tell what position it's in. For round 1 the
switch will be set to 0, while for round 2 it's set to 1. (This is a slight
extension of your experiment.)
    Should he assume that he must be in round 2? If so, the zombie in round
1 will reason the same way and assume that too. So, overall, he would have
been better off if he had decided in advance to always push button 2,
instead of trying to decide during the experiment. That won't do.
    I think the solution is that one must define a "decision measure", and
this will always be equal to the number of copies. The zombie in round 1
gets equal weight because it is a computer making a decision. The computers
should act as though the Bayesian probability of being one type of computer
or the other is proportional to that decision measure.
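    A toy calculation shows the contrast. (The payoffs below are made up,
since the original amounts aren't restated here; all that matters is that
button 1 looks best from inside round 2 alone but is worse once the round-1
decision is counted too.)

payoffs = {
    1: {1: -20, 2: 0},   # round 1: button -> hypothetical payoff
    2: {1: +10, 2: 0},   # round 2: button -> hypothetical payoff
}

consciousness_measure = {1: 0, 2: 1}   # the magic switch: zombie in round 1
decision_measure      = {1: 1, 2: 1}   # one deciding computer in each round

def expected_payoff(button, weights):
    """Average a button's payoff over rounds, weighted by the given measure."""
    total = sum(weights.values())
    return sum(weights[r] * payoffs[r][button] for r in payoffs) / total

for name, w in [("consciousness measure", consciousness_measure),
                ("decision measure", decision_measure)]:
    values = {b: expected_payoff(b, w) for b in (1, 2)}
    best = max(values, key=values.get)
    print(name, values, "-> push button", best)

# Weighting by consciousness measure says "push button 1" (a conscious guy
# can't be in round 1), but both computers run the same rule, so the actual
# total is -20 + 10 = -10.  Weighting every deciding computer equally
# recovers the precommitment answer: always push button 2.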
    But that does not prove that they are equally conscious. Indeed, for the
latter purpose the guy should _believe_ he is in round 2, even though he
shouldn't _act_ as if he is. And that's because *he can't* act. _Bodies_
act, _not minds_. The zombie in round 1 will say to himself "I am
conscious and in round 2", but luckily his error will not be experienced by
a conscious being, and that's the point. A similar argument goes through if
he just has low (rather than zero) measure.
    Unless, that is, you believe in reductive computationalism, like me!
Then there can be no zombies. Do you think my above defense of the zombie /
near-zombie concept is plausible? I hope not, but it's what I would expect
to see from dualists or (with a related experiment) furstpursunasts, so it's
better to put it out there myself ...

> > Who isn't? :) I admit it's not a perfect model, though.
>
>Why have this model at all? What's the advantage?

    It's a good approximation in many situations.

> > Unlikely. First, there may be no maximum of f. For example, f
> > could be proportional to the depth of the thought (roughly, the age the
> > guy seems to be) as well as to a "happiness" factor.
>
>A genie comes to you and says he can grant you any wish, if you agree to
>first being tortured for a year. Would you accept? What if there is only
>10% probability that he will grant you the wishes? 1%? .1%? If there is no
>maximum of f for you, then you would agree no matter how small the
>probability is or how long the torture lasts, since you would just wish for
>a thought that has a sufficiently high utility that even after being
>multiplied by that probability is still larger than the disutility of being
>tortured.

    That's called a career in academic physics :(

>Do you think this is reasonable?

    It might be. You'd have a similar situation with _any_ unbounded
utility function. And a bounded one seems pretty bad.
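    To make the unboundedness point concrete (numbers made up, and a simple
"accept iff the expected gain is positive" rule assumed): if f has no
maximum, then for any probability p > 0 and any torture disutility D there
is a wish with utility above D / p, so the expected-utility calculation
always says accept.

def accept_deal(p, torture_disutility, wish_utility):
    """Accept iff the expected utility of the wish outweighs the torture."""
    return p * wish_utility - torture_disutility > 0

p, D = 0.001, 1e6                 # hypothetical: 0.1% chance, huge disutility
wish_utility = 2 * D / p          # an unbounded f lets you ask for this much
print(accept_deal(p, D, wish_utility))   # True -- and true for any p > 0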

> > Second, it is unlikely that his resources would be such that doing
> > so would maximize the utility. Even if they are, it doesn't seem so
> > strange that he would want to relive his happiest moment.
>
>But it is strange that he would value each re-simulation of his happiest
>moment as much as the original moment. For example, a person who has a
>utility function in the form you described would be indifferent between
>running a re-simulation of a past sexual experience and having a new sexual
>experience. Obviously genes that cause this kind of preferences are not
>going to be evolutionarily successful, and we shouldn't expect that most
>people have these kinds of utility functions or will have them in the
>future.

    Running simulations was not an option during the time in which man
evolved, so that is not an argument against such utility functions being
common now. Masturbation while thinking of the past is definitely not the
same thing.

                         - - - - - - -
               Jacques Mallah (jackmallah.domain.name.hidden)
         Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
         My URL: http://hammer.prohosting.com/~mathmind/