Re: another anthropic reasoning
On Fri, Mar 23, 2001 at 04:53:24PM -0500, Jacques Mallah wrote:
> >You'll have to define what "effectively decide" means and how to apply that
> >concept generally. (Have you introduced it before? I think this is the
> >first time I've seen it.)
>
> I thought the meaning was obvious in this context. The simplest
> interpretation of your little experiment is that whatever fraction of him
> pushes a particular button is the same fraction of him that ends up with
> the corresponding payoff. That's how I always interpreted it.
> If the measure of him is the same for the 2nd round as for after both
> rounds, then it's the same as if each copy gets to influence its own payoff.
In my original message I never talked about multiple copies in round 2. I
was assuming that there would be just one copy in each round, but that the
copy's measure would differ between the rounds. You were the one who
brought up the idea of 1 copy in round 1 and 100 copies in round 2
(presumably with every copy having the same measure) as an analogy. I
tried to work with your analogy, but here it seems to break down. What
would "effectively decide" mean if there were just one copy in each round
with different measures?
A more important question is, what does it mean in general? Can
you define it precisely enough that it can be applied in any situation?
> >Suppose in round 2 he gets the $-9 payoff if any of the copies decide to
> >push button 1. Intuitively, each copy affects the fate of every other copy.
>
> Now you're changing the game. And it is a game, since as you said
> yourself, each guy affects the others.
Again, I was trying to follow your analogy. But what I really have in mind
is that in round 2 there is just one copy, which affects only itself.
> Who isn't? :) I admit it's not a perfect model, though.
Why have this model at all? What's the advantage?
> Unlikely. First, there may be no maximum of f. For example, f could be
> proportional to the depth of the thought (roughly, the age the guy seems to
> be) as well as to a "happiness" factor.
A genie comes to you and says he can grant you any wish if you agree to
first being tortured for a year. Would you accept? What if there is only a
10% probability that he will grant you the wish? 1%? .1%? If there is no
maximum of f for you, then you would agree no matter how small the
probability is or how long the torture lasts, since you would just wish
for a thought whose utility is high enough that, even after being
multiplied by that probability, it is still larger than the disutility of
being tortured. Do you think this is reasonable?
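To spell out the arithmetic (the symbols D, p, and U are mine, not
anything you've defined): let D be the disutility of a year of torture and
p > 0 the probability that the genie keeps his word. If f has no maximum,
there is always some thought with utility U > D/p, and wishing for it
gives expected utility

    p*U - D > p*(D/p) - D = 0,

so accepting comes out positive for every p > 0 and every finite D, no
matter how small p is or how long the torture lasts.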
> Second, it is unlikely that his resources would be such that doing so
> would maximize the utility. Even if they are, it doesn't seem so strange
> that he would want to relive his happiest moment.
But it is strange that he would value each re-simulation of his happiest
moment as much as the original moment. For example, a person with a
utility function of the form you described would be indifferent between
running a re-simulation of a past sexual experience and having a new
sexual experience. Obviously genes that cause this kind of preference are
not going to be evolutionarily successful, so we shouldn't expect that
most people have these kinds of utility functions or will have them in the
future.
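To make the indifference claim explicit (the notation is mine, and I'm
assuming your utility is a measure-weighted sum over thoughts): if

    U = sum_i m_i * f(t_i)

and f depends only on the content of a thought, then re-running a past
thought t with measure m contributes the same m*f(t) to U that a new
thought with an equal f-value would. Nothing in U distinguishes a
re-simulation from an original.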
Received on Wed Mar 28 2001 - 16:32:08 PST