IMO, belief in the ASSA (Absolute Self-Sampling Assumption) is
tantamount to altruism. The ASSA implies choosing actions based on
their positive impact on the whole multiverse of observer-moments (OMs).
We have had some discussion here and on the extropy-chat (transhumanist)
mailing list about two possible flavors of altruism, sometimes called
averagist vs. totalist.
The averagist wants to maximize the average happiness of humanity.
He opposes measures that add more people at the expense of lowering
the average. This is a pretty common element among "green" political
movements.
The totalist wants to maximize the total happiness of humanity.
He believes that people are good and more people are better. This
philosophy is less common but is sometimes associated with libertarian
or radical right wing politics.
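To make the contrast concrete, here is a little Python sketch (my own
illustration, with made-up happiness numbers) of how the two objectives
can disagree about adding one more person:

# Toy comparison of the averagist and totalist objectives.
# All happiness numbers are invented for illustration.

def average_happiness(pop):
    return sum(pop) / len(pop)

def total_happiness(pop):
    return sum(pop)

current = [8.0, 6.0, 7.0]        # three existing people
grown   = current + [2.0]        # add one person, happy but below average

print(average_happiness(current), average_happiness(grown))  # 7.0 -> 5.75, averagist objects
print(total_happiness(current), total_happiness(grown))      # 21.0 -> 23.0, totalist approves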
These two ideas can be applied to observer-moments as well. But both
of these approaches have problems if taken to the extreme.
For the extreme averagist, roughly half the OMs are below average.
If they were eliminated, the average would rise. But again, roughly
half of the remaining OMs would be below the new, higher average, so
again half should be eliminated. In the end you are left with only
the single happiest OM. Eliminating almost all the intelligence in
the universe hardly seems altruistic.
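The collapse can be seen by iterating the cull; a quick sketch, again
with arbitrary happiness values:

# Extreme averagist: repeatedly eliminate every OM below the current
# average. With any spread of values this converges on the single
# happiest OM (or a tie at the maximum). Numbers are arbitrary.

oms = [1.0, 3.0, 4.0, 4.5, 7.0, 9.0]
while True:
    avg = sum(oms) / len(oms)
    survivors = [h for h in oms if h >= avg]
    if len(survivors) == len(oms):
        break
    oms = survivors
print(oms)    # [9.0]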
For the extreme totalist, the problem is that he will support adding
OMs as long as their quality of life is just barely above the level
that would lead to suicide. More OMs will generally decrease the
quality of life of others, due to competition for resources, so the
result is a massively overpopulated universe in which everyone leads
a terrible life. This again seems inconsistent with the goals of
altruism.
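Again in made-up numbers, the totalist's failure mode looks like this:

# Extreme totalist: a vast number of lives barely above the suicide
# threshold (taken as zero) outscores a small, very happy population
# on total happiness, while the average collapses. Numbers invented.

comfortable = [9.0] * 1_000        # small population, high happiness
crowded     = [1.0] * 100_000      # huge population, lives barely worth living

print(sum(comfortable), sum(crowded))                                    # 9000.0 vs 100000.0
print(sum(comfortable) / len(comfortable), sum(crowded) / len(crowded))  # 9.0 vs 1.0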
In practice it seems that some middle ground must be found. Adding
more OMs is good, up to a point. I don't know if anyone has a good,
objective measure that can be maximized for an effective approach to
altruism.
Let us consider these flavors of altruism in the case of Stathis' puzzle:
> You are one of 10 copies who are being tortured. The copies are all being
> run in lockstep with each other, as would occur if 10 identical computers
> were running 10 identical sentient programs. Assume that the torture is so
> bad that death is preferable, and so bad that escaping it with your life is
> only marginally preferable to escaping it by dying (e.g., given the option of
> a 50% chance of dying or a 49% chance of escaping the torture and living,
> you would take the 50%). The torture will continue for a year, but you are
> allowed one of 3 choices as to how things will proceed:
>
> (a) 9 of the 10 copies will be chosen at random and painlessly killed, while
> the remaining copy will continue to be tortured.
>
> (b) For one minute, the torture will cease and the number of copies will
> increase to 10^100. Once the minute is up, the number of copies will be
> reduced to 10 again and the torture will resume as before.
>
> (c) the torture will be stopped for 8 randomly chosen copies, and continue
> for the other 2.
>
> Which would you choose?
For the averagist, doing (a) will not change average happiness. Doing
(b) will improve it, but not that much. The echoes of the torture and
anticipation of future torture will make that one minute of respite
not particularly pleasant. Doing (c) would seem to be the best choice,
as 8 out of the 10 avoid a year of torture. (I'm not sure why Stathis
stipulated that escaping the torture alive would be only marginally
preferable to dying, given that the torture is so bad. That doesn't
seem right to me; the worse the torture is, the more they would want
to escape it.)
For the totalist, since death is preferable to the torture, each
person's life has a negative impact on total happiness. Hence (a)
would be an improvement as it removes these negatives from the universe.
Doing (b) is unclear: during that one minute, would the 10^100 copies
kill themselves if they could? If so, their existence is negative, and
doing (b) would make the universe much worse through the addition of
so many OMs with negative happiness. Doing (c) would seem better still,
assuming that the 8 out of 10 would eventually find their lives positive
during that year without torture.
So it appears that each kind of altruist would choose (c), although
they would differ about whether (a) is an improvement over the status
quo.
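Here is a rough back-of-the-envelope version of that comparison in
Python. The per-minute happiness numbers are entirely invented; the
only constraints taken from the puzzle are that a tortured minute
scores below death (zero), an untortured minute scores above it, and
(per the totalist's worry above) the respite minute is still worse
than death because of the echoes and anticipation:

# Score each option by counting observer-moments minute by minute.
# All happiness values are made up for illustration only.

YEAR = 365 * 24 * 60       # minutes in the year of torture
TORTURE = -10.0            # a tortured minute, worse than death
NORMAL  = +2.0             # an untortured minute, assuming life turns out positive
RESPITE = -8.0             # the one-minute pause, still grim (worse than death)

def score(om_minutes):
    # om_minutes: list of (number of OM-minutes, happiness per minute)
    total = sum(n * h for n, h in om_minutes)
    measure = sum(n for n, _ in om_minutes)
    return total, total / measure

options = {
    "status quo": [(10 * YEAR, TORTURE)],
    "(a)":        [(1 * YEAR, TORTURE)],
    "(b)":        [(10**100, RESPITE), (10 * YEAR, TORTURE)],
    "(c)":        [(2 * YEAR, TORTURE), (8 * YEAR, NORMAL)],
}
for name, oms in options.items():
    total, avg = score(oms)
    print(f"{name:10s} total = {total:10.3g}   average = {avg:6.3g}")

With these arbitrary numbers the totalist ranks (c) > (a) > status quo
> (b), and the averagist ranks (c) > (b) > (a) = status quo, which
matches the argument above: both pick (c), and they disagree only
about (a) and about how bad (b) is.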
(b) loses out only because that one minute will not be pleasant, due
to the echoes of the torture. If the person could have his memory wiped
for that one minute, and neither remember nor anticipate the torture,
that would make (b) the best choice for both kinds of altruist. Adding
10^100 pleasant observer-moments would increase both total and average
happiness, and would more than compensate for a year of suffering for
10 people. 10^100 is a really enormous number.
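To put a number on it (same invented scale as the sketch above, with
the wiped respite minute now worth, say, +5):

# Memory-wiped respite: 10^100 pleasant one-minute OMs against ten
# copy-years of torture (about 5 * 10^6 tortured minutes). Values invented.
YEAR = 365 * 24 * 60
total = 10**100 * 5.0 + 10 * YEAR * -10.0
average = total / (10**100 + 10 * YEAR)
print(total, average)   # roughly 5e+100 and +5: both total and average go strongly positive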
Hal Finney
Received on Mon Jun 13 2005 - 18:39:27 PDT