RE: More is Better (was RE: another puzzle)
Jesse writes
> > It's *not* aesthetic whether, say, George Bush is you or not. He's
> > definitely not! He doesn't have your memories, for one thing.
> > It's simply objectively true that some programs---or some clumps
> > of biological matter---are Jesse Mazer and others are not. (Even
> > though the boundary will not be exact, but fuzzy.)
>
> I disagree--George Bush certainly has a lot of sensory memories (say,
> what certain foods taste like) in common with me, and plenty of
> life-event-memories which vaguely resemble mine. And I think if
> you scanned the entire multiverse it would be possible to find a
> continuum of minds with memories and lives intermediate between
> me and George Bush.
Of course. And that's true of anything you care to name (outside
mathematics and perhaps some atomic physics, I suppose).
> There's not going to be a rigorous, totally well-defined procedure
> you can use to distinguish minds which belong to the set "Jesse-
> Mazer-kinda-guys" from minds which don't belong.
I never said that there was. The allure of mathematically
precise, absolutely decisive categories must be resisted
for most things. Don't throw out what is important: namely
that there are such things as *stars* which are different
from *planets*, even though (of course, like everything)
there is a continuum.
We have tests which today can pick out Jesse Mazer from all
other humans, the six billion or so that live on the planet.
Even before we knew about DNA, it was possible to determine
on Earth in the year 1860 who was and who was not
Abraham Lincoln.
> Well, of course *I* would want to dissolve the chamber,
> [where an exact re-enactment was taking place]
> because I think that dissolving this chamber will decrease
> the subjective first-person probability of having that
> experience of being tortured by the Nazis.
Me too.
> I'm just saying it's not clear what difference dissolving
> it would make from the POV of a "zombie" like yourself. ;)
Your meaning is unclear. But you may wish to just elide all this.
> > The love of a mother who understood all the facts would not mislead
> > her; she would make the correct decision: all other things being equal,
> > (say that her daughter was to live happily in any case after 2007),
> > she would judge that it is better for her daughter to suffer only
> > one computation---here, say---than two, (say here and on Mars).
> > Each time that the girl's suffering is independently and causally
> > calculated is a terrible thing.
>
> I don't see why it's terrible, if you reject the notion of first-person
> probabilities. You've really only given an appeal to emotion rather than
> an argument here, and I would say the emotions are mostly grounded in
> first-person intuitions, even if you don't consciously think of them that
> way.
It's true that all my beliefs are "analytically-continued" from
my intuitions. I think that everyone's are. I've tried to find
an entirely consistent objective description of my values. It
seems to me that I have an almost entirely consistent version
of the values that a lot of people share (but then, I'm biased).
It started like this: I know what it's like for me to have a
bad experience, and when I then look at the physics I understand
that there is a certain organism that is causally going through
a number of states, and that it results in a process I don't
like. Conveniently, my instincts also suggest that I shouldn't
like it when other people suffer too. Dropping the error-prone
first-person account, I then generalize from what is intrinsically
bad about people suffering to a wider view that includes programs
and processes. It wasn't rocket science, and many others have done
so just as I have.
> > > but if you want to only think in terms of a universal
> > > objective third-person POV, then you must define "better"
> > > in terms of some universal objective moral system, and
> > > there doesn't seem to be any "objective" way to
> > > decide questions like whether multiple copies of the
> > > same happy A.I. are better than single copies.
> >
> > You're right, and here's how I go about it. We must be able to decide
> > (or have an AI decide) whether or not an entity (person or program)
> > is being benefited by a particular execution.
>
> But that's ignoring the main issue, because you haven't addressed the more
> primary question of *why* we should think that if a single execution of a
> simulation benefits the entity being simulated, then multiple executions of
> the same simulation benefit it even more.
I have given some reasons, namely that it's a smooth extension of
our values from how we'd feel on seeing repeated
suffering. But it's understandable and correct for you to be
asking for more.
Let's suppose that we want an evaluation function, that is, a
way of pronouncing judgment on the issues of the day, on
philosophic choices, or on other things that call on our values.
A main purpose of philosophy, in my opinion, is prescriptive:
it should tell us how to choose in various situations.
Another thing I would want from such a function is that it be
able to take a process or a situation as input and tell me
whether I would approve of that activity, action, or processing.
For example, if given a slave suffering under his master's lashes,
I want the function to tell me that it's bad, but to do so in the
most consistent manner possible. (Now where I get my values is
another story, but it so happens that my values are very typical
of human beings on issues like this.)
The function should be *local*; the answer it gives (whether a
certain process is good or bad) should NOT depend on what happened
billions of light years ago in a far, far, far away galaxy causally
disjoint from here (i.e. outside the current situation's light cone),
nor on what will happen eons from now. Ideally, it should apply to any
spatially and temporally closed system, and also to many that are not.
So if there is a program running on hardware in a certain cubic
meter of space, and that program is suffering inordinately, then
what I want with regard to that cubic meter ought to be independent
of whether or not someone ran the same program twelve billion years
ago. Surely that other incident is irrelevant! Am I clear? Am I
being consistent? Does this locality requirement not add yet
another reason, beyond the one I gave before, to respect repeated
experience?
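
To make the locality requirement concrete, here is a minimal
sketch in Python (purely illustrative; the names Process,
is_suffering, and evaluate are my own inventions, not anyone's
actual implementation). The score it assigns depends only on the
states handed to it, so each execution is judged on its own, and
two identical bad runs count twice:

    # Hypothetical sketch of a "local" evaluation function: it judges
    # a process solely from the states it contains; nothing about the
    # rest of the universe's history is consulted.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Process:
        # a self-contained record of what happened inside one region
        # of space and time, e.g. one run of a program
        states: List[str]

    def is_suffering(state: str) -> bool:
        # placeholder test for whether a state is a bad one
        return "suffering" in state

    def evaluate(process: Process) -> float:
        # Local by construction: the score depends only on the states
        # of this process, not on whether an identical process ran
        # twelve billion years ago or will run again somewhere else.
        return -float(sum(1 for s in process.states if is_suffering(s)))

    # Each execution is scored on its own, so two identical runs of a
    # bad process are twice as bad as one (and, symmetrically, two runs
    # of a good process would count as twice as good).
    run_here = Process(states=["ok", "suffering", "ok"])
    run_on_mars = Process(states=["ok", "suffering", "ok"])
    total = evaluate(run_here) + evaluate(run_on_mars)   # -2.0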
Lee
Received on Sun Jun 26 2005 - 20:07:48 PDT