More is Better (was RE: another puzzle)

From: Lee Corbin <lcorbin.domain.name.hidden>
Date: Sun, 26 Jun 2005 14:24:16 -0700

Jesse writes

> > First, I think that it's important to remove the qualifier "identical"
> > here. Would two copies cease to be identical if one atom were out of
> > place?
>
> I meant something more like "running the same program"

Okay, that's fine.

> > On another tack, you are the same person, etc., that you were
> > five minutes ago where strict identicalness isn't even close.
>
> From a third-person POV, why am I the same person? If you don't believe
> there's an objective truth about continuity of identity, isn't it just a
> sort of aesthetic call?

When we say that you are the same person you were a few
minutes ago, of course, we are starting from common usage
and going from there. Normal people value their lives,
and don't want to die, say, next week. Even legally, people
are regarded as having an identity that doesn't change much
over time.

Objectively (i.e., from the 3rd-person view), there really *is* a
fuzzy set of states that ought to be regarded as Jesse Mazur. Any
intelligent investigator (or even a program that we cannot quite
write yet) could examine each of the six billion people in the world
and give a "Yes" or "No" answer to whether each is an instance of
Jesse Mazur. Naturally, in the case of duplicates (whether they run
on computers or on biological hardware doesn't matter), it may turn
out that there is more than one Jesse running.
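
Purely as a picture of what such a not-yet-writable program's
*interface* might look like (the memory-fingerprint sets, the overlap
measure, and the 0.9 threshold below are all invented for
illustration, not a claim about how a real test would work), here is
a minimal sketch:

    # Hypothetical sketch only: a "Jesse detector" treated as a fuzzy
    # classifier whose score is collapsed into the investigator's
    # "Yes"/"No" answer.  The fingerprint sets and the 0.9 threshold
    # are stand-ins invented for illustration.

    def identity_score(candidate: set, reference: set) -> float:
        """Fraction of 'memory fingerprints' shared with the reference."""
        if not candidate and not reference:
            return 0.0
        return len(candidate & reference) / len(candidate | reference)

    def is_instance_of_jesse(candidate: set, reference: set,
                             threshold: float = 0.9) -> bool:
        """The fuzzy boundary: enough overlap counts as an instance."""
        return identity_score(candidate, reference) >= threshold

Real identity would of course involve vastly more than memory
overlap; the point is only that the verdict falls out of an objective
procedure, fuzzy boundary and all.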

It's *not* an aesthetic call whether, say, George Bush is you or not.
He's definitely not! He doesn't have your memories, for one thing.
It's simply objectively true that some programs---or some clumps
of biological matter---are Jesse Mazur and others are not (even
though the boundary will be fuzzy rather than exact).

> > Second, suppose that someone loves you, and wants the best for you.
> > The person who loves you...
> > If she finds out that although dead on Earth, you've been copied into
> > a body out near Pluto, (and have the same quality of life there), she's
> > once again happy for you.
>
> That's a pretty unhuman kind of "love" though--if a person I know dies, I'm
> sad because I'll never get to interact with them again,

Then you don't know true love :-) (just kidding) because as
the great novelists have explained, truly loving someone involves
wanting what is best for *them*, not just that you'll get the
pleasure of their company. Hence the examples where one lover
dies to save the other.

> Obviously my sadness is not because the death of the copy here
> means that there are only 10^10^29 - 1 copies of that person...

By the way, this figure 10^10^29 is a *distance*. According to
Tegmark, it is, very approximately, how far away, in meters, the
nearest exact copy of the you who is reading this lies. (And it
doesn't matter whether one measures in meters or lightyears.)
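
A quick sanity check on why the unit makes no difference: one
lightyear is about 9.46 x 10^15 meters, so converting merely shaves
16 off an exponent that is itself 10^29:

$$
\frac{10^{10^{29}}\ \text{m}}{9.46\times 10^{15}\ \text{m/ly}}
  \;\approx\; 10^{10^{29}-16}\ \text{ly}
  \;=\; 10^{\,10^{29}(1-1.6\times 10^{-28})}\ \text{ly}
  \;\approx\; 10^{10^{29}}\ \text{ly}.
$$

Subtracting 16 from 10^29 changes nothing anyone could notice.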

> > Well, lots of things can go better or worse for me without me
> > being informed of the difference. Someone might perpetrate a
> > scam on me, for example, that cheated me of some money I'd
> > otherwise get, and it is still bad for me even if I don't know
> > about it.
>
> OK, in that case there are distinct potential experiences you might have had
> that you now won't get to have. But in the case of a large number of copies
> running in lockstep, there are no distinct experiences the copies will have
> that a single copy wouldn't have.

I am speaking even of the case you bring up where the experiences
are *exactly* alike---although, as I say, for physical copies a
difference of a few atoms (or even many) doesn't matter much.

This, then, is the big question: how may I appeal to your intuition
in such a way that you come to agree that benefit is strictly additive?
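
Just so there is no ambiguity about what I mean by "strictly
additive" (the notation is only shorthand for the claim, nothing
more): if E_1, ..., E_n are n independent, causally complete runs of
the same experience E, and B is the benefit (or, with the sign
flipped, the harm) of a single run, then I am claiming

$$
B_{\text{total}} \;=\; \sum_{i=1}^{n} B(E_i) \;=\; n\,B(E),
$$

whereas the rival intuition says B_total = B(E) no matter how large
n gets.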

Let me resort to another torture experiment. Suppose that I invite
you into my house, take you down to the torture chamber, and let you
look through a tiny peephole into the entirely steel-encased chamber.
You see some Nazis torturing a little girl, and her screams are
reproduced electronically so that you can hear them.

You are appalled. You beg me to dissolve the chamber and put an end
to the atrocity. But then I say the following peculiar thing to you:
"Ah, but you see, this is an *exact* molecular---down to the QM
details---reenactment of an incident that happened in 1945. So you
see, since it's identical, it doesn't matter whether the little girl
suffers once or twice."

Now translate that scenario to programs: a program here is suffering
in exactly the same way that it is suffering on Mars. Do you still
feel that, since one run is taking place anyway, it doesn't matter
whether a second one is?

The love of a mother who understood all the facts would not mislead
her; she would make the correct decision: all other things being
equal (say that her daughter was to live happily in any case after
2007), she would judge that it is better for her daughter to suffer
through only one computation---here, say---than through two (say,
here and on Mars). Each time that the girl's suffering is
independently and causally computed is a terrible thing.

That last sentence sums up the entire objective viewpoint on these
matters, and I hope you fully understand what it is saying.

> The real problem here is that when we talk about what's "better" from a
> first-person POV, we just mean what *I* would prefer to experience happening
> to me;

Again, I think that the first-person point of view can lead to
errors as serious as those of Ptolemaic astronomy.

> but if you want to only think in terms of a universal objective
> third-person POV, then you must define "better" in terms of some universal
> objective moral system, and there doesn't seem to be any "objective" way to
> decide questions like whether multiple copies of the same happy A.I. are
> better than single copies.

You're right, and here's how I go about it. We must be able to decide
(or have an AI decide) whether or not an entity (person or program)
is being benefited by a particular execution. Now, there will be many
doubtful cases, perhaps, where it's hard to say. But then, there are
usually borderline cases in all important realistic questions.

In ninety-nine percent of cases, even we today, with our primitive
science and paucity of knowledge (compared, say, to the engines of
2080 A.D.), can determine with high probability whether or not
someone is gaining benefit from an experience. Several signature
tests help. One is simply to ask the person. This is one of the most
dependable, because only in a few pathological cases can an argument
be made that the person is wrong.

Another great test is to consult other human beings who have the
welfare of the subject uppermost in mind. For example, the people
who deeply love an individual and want only the best for him are
almost always good guides. Of course, difficult cases can be found,
and we must concede that no set of indicators we have in 2005 can
judge 2005 people with total accuracy. The vastly wiser AIs of
centuries from now will be able to point out the cases where we
went wrong, but they'll judge that we were right ninety-nine percent
of the time. For example, they'll probably confirm that Terri Schiavo
was not alive while the courts debated her case, but, of course, I
could be wrong.

But even though our instruments and methodologies today may still
be relatively poor, it's nonetheless in almost all cases objectively
true whether or not a person is receiving benefit from some experience.
And, to repeat, in almost all cases it suffices to consult the person's
own value system.

Lee
Received on Sun Jun 26 2005 - 17:27:01 PDT
