RE: More is Better (was RE: another puzzle)

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Sun, 26 Jun 2005 18:04:54 -0400

Lee Corbin wrote:

>
>Jesse writes
>
> > > First, I think that it's important to remove the qualifier "identical"
> > > here. Would two copies cease to be identical if one atom were out of
> > > place?
> >
> > I meant something more like "running the same program"
>
>Okay, that's fine.
>
> > > On another tack, you are the same person, etc., that you were
> > > five minutes ago where strict identicalness isn't even close.
> >
> > From a third-person POV, why am I the same person? If you don't believe
> > there's an objective truth about continuity of identity, isn't it just a
> > sort of aesthetic call?
>
>When we say that you are the same person you were a few
>minutes ago, of course, we are starting from common usage
>and going from there. Normal people value their lives,
>and don't want to die, say, next week. Even legally, people
>are regarded as having an identity that doesn't change much
>over time.
>
>Objectively, (i.e. 3rd person), there really *is* a fuzzy set
>of states that ought to be regarded as Jesse Mazur.

MazEr!

>Any intelligent investigator (or even a program that we cannot
>quite write yet) could examine each of the six billion people
>in the world and give a "Yes" or "No" answer to whether this
>is an instance of Jesse Mazur. Naturally in the case of duplicates
>(running on computers or running on biological hardware doesn't
>matter) it may be found that there is more than one Jesse running.
>
>It's *not* aesthetic whether, say, George Bush is you or not. He's
>definitely not! He doesn't have your memories, for the first thing.
>It's simply objectively true that some programs---or some clumps
>of biological matter---are Jesse Mazur and others are not. (Even
>though the boundary will not be exact, but fuzzy.)

I disagree--George Bush certainly has a lot of sensory memories (say, what
certain foods taste like) in common with me, and plenty of
life-event-memories which vaguely resemble mine. And I think if you scanned
the entire multiverse it would be possible to find a continuum of minds with
memories and lives intermediate between me and George Bush. There's not
going to be a rigorous, totally well-defined procedure you can use to
distinguish minds which belong to the set "Jesse-Mazer-kinda-guys" from
minds which don't belong.

>
> > > Second, suppose that someone loves you, and wants the best for you.
> > > The person who loves you...
> > > If she finds out that although dead on Earth, you've been copied into
> > > a body out near Pluto, (and have the same quality of life there), she's
> > > once again happy for you.
> >
> > That's a pretty unhuman kind of "love" though--if a person I know dies, I'm
> > sad because I'll never get to interact with them again,
>
>Then you don't know true love :-) (just kidding) because as
>the great novelists have explained, truly loving someone involves
>wanting what is best for *them*, not just that you'll get the
>pleasure of their company. Hence the examples where one lover
>dies to save the other.

Yeah, but that's because those guys only believed in a single universe! Do
you think anyone would buy a story where someone sacrificed their unique,
unbacked-up life to save copy #348 of 1000 copies running in perfect
lockstep? Do *you* think this would be a good thing to do, even though it
would mean the loss of unique information (all the person's memories,
thoughts, wisdom etc.) from the universe in order to prevent a "death" that
won't remove any unique information at all from the universe?

From a first-person POV, I do believe the concept of self-sacrificing
unselfish love still makes sense in a multiverse, it's just that it would
involve trying to maximize the other person's subjective probability of
experiencing happiness in the future.

>This, then, is the big question: how may I appeal to your intuition
>in such a way that you come to agree that benefit is strictly additive?
>
>Let me resort to another torture experiment. Suppose that I invite
>you into my house, take you down to the torture chamber, and allow
>you to look through a tiny peephole inside the entire steel-encased
>chamber. You see some Nazis torturing a little girl, and her screams
>are reproduced electronically so that you hear them.
>
>You are appalled. You beg me to dissolve the chamber and put an end
>to the atrocity. But then I say the following peculiar thing to you:
>"Ah, but you see, this is an *exact* molecular---down to the QM
>details---reenactment of an incident that happened in 1945. So you
>see, since it's identical, it doesn't matter whether the little girl
>suffers once or twice."

Well, of course *I* would want to dissolve the chamber, because I think that
dissolving this chamber will decrease the subjective first-person
probability of having that experience of being tortured by the Nazis. I'm
just saying it's not clear what difference dissolving it would make from the
POV of a "zombie" like yourself. ;)

>Now contrive translated versions of that to programs, where a program
>here is suffering exactly the same way that it's suffering on Mars.
>Still feel that since one is taking place anyway, it doesn't matter
>whether a second one is?
>
>The love of a mother who understood all the facts would not mislead
>her; she would make the correct decision: all other things being equal,
>(say that her daughter was to live happily in any case after 2007),
>she would judge that it is better for her daughter to suffer only
>one computation---here, say---than two, (say here and on Mars).
>Each time that the girl's suffering is independently and causally
>calculated is a terrible thing.

I don't see why it's terrible, if you reject the notion of first-person
probabilities. You've really only given an appeal to emotion rather than an
argument here, and I would say the emotions are mostly grounded in
first-person intuitions, even if you don't consciously think of them that
way. I suppose there's also the issue of having to *witness* a replay of
horrible suffering in the present rather than just leave it dead and buried
in the past, but in this case the argument for shutting down the chamber
would be the same regardless of whether it was an actual simulation of that
past torture chamber or just a very realistic holographic recording.

>
> > The real problem here is that when we talk about what's "better" from a
> > first-person POV, we just mean what *I* would prefer to experience happening
> > to me;
>
>Again, I think that the first-person point of view can lead to
>errors just as incorrect as those of Ptolemaic astronomy.

And I think it's the universal third-person view that leads to those errors,
since I think that the ultimate "objective" truth about reality takes the
form of a measure on all possible observer-moments, a notion that you'd
never think of if you tried to think purely in third-person terms and reject
the first-person perspective.
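
To make that a bit more concrete, here is a toy sketch (Python, with completely
made-up observer-moments and weights--just an illustration of the bookkeeping,
not a worked-out theory): read "first-person probability" as normalized measure
over an observer's possible successor observer-moments.

# Toy sketch only: hypothetical successor observer-moments for some observer,
# with invented measure weights.
successor_moments = {
    "wakes up happy tomorrow": 0.7,
    "wakes up unhappy tomorrow": 0.2,
    "has no successor at all": 0.1,
}

# First-person probability of each continuation = its share of the total measure.
total = sum(successor_moments.values())
for moment, weight in successor_moments.items():
    print("P(%s) = %.2f" % (moment, weight / total))

# If 1000 copies running in perfect lockstep count as *one* observer-moment,
# then adding or deleting lockstep copies leaves all of these numbers alone--
# which is, of course, exactly the point in dispute.

Nothing in that toy settles how duplicates should be counted; it just shows
where the "measure" talk is supposed to do its work.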

>
> > but if you want to only think in terms of a universal objective
> > third-person POV, then you must define "better" in terms of some universal
> > objective moral system, and there doesn't seem to be any "objective" way to
> > decide questions like whether multiple copies of the same happy A.I. are
> > better than single copies.
>
>You're right, and here's how I go about it. We must be able to decide
>(or have an AI decide) whether or not an entity (person or program)
>is being benefited by a particular execution.

But that's ignoring the main issue, because you haven't addressed the more
fundamental question of *why* we should think that if a single execution of a
simulation benefits the entity being simulated, then multiple executions of
the same simulation benefit it even more.

Jesse
Received on Sun Jun 26 2005 - 18:08:11 PDT
