Re: being inside a universe

From: Hal Finney <hal.domain.name.hidden>
Date: Fri, 12 Jul 2002 10:20:16 -0700

I have begun reading the book Wei recommended, Foundations of Causal
Decision Theory, by James Joyce. I have a few comments and questions
to see if I should finish it. I'm at about page 100 of about 250 pages.

I didn't know what decision theory was before starting this book.
In fact it seemed trivial: pick the best outcome! How can you build a
theory around what is essentially the "maximum" function?

It seems that the point of decision theory is to justify this kind of
reasoning based on more elementary axioms. These axioms try to capture
notions of what we mean by rationality. In this respect the book is
as much philosophy as mathematics. It draws fine distinctions: for
example, when we make a decision to take an action, that is different
from actually taking the action (it comes before taking the action).
And preferring one alternative to another is different from simply
saying that we would choose the first alternative over the second;
it is a matter of dispositions and feelings, rather than behavior.

Another very surprising aspect relates to the word "causal"
in the title. Apparently the world of decision theorists has
undergone a schism based on some seemingly obscure issues relating
to Newcomb's Paradox. A sample page describing this paradox is at
http://members.aol.com/kiekeben/newcomb.html.

In the paradox, you can take one box or two. Deciding to take one
box gives you reason to believe that there is more money in the box.
This is the preference of "evidential" decision theory; by making this
decision, you gain evidence that the world is such that the outcome
will be beneficial. But the other argument is that the money is in
the boxes already, so taking both boxes will directly cause you to get
more money than taking one. This is the preference of "causal" decision
theory, where your actions are analyzed in terms of their direct causes.
This book is about causal decision theory in this sense.
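The split between the two theories can be made concrete with a little arithmetic. Here is a sketch using the standard illustrative payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a hypothetical 99% predictor accuracy; none of these numbers come from Joyce's book.

```python
# Newcomb's paradox: evidential vs. causal expected value.
# Payoffs and the 99% predictor accuracy are the usual
# illustrative numbers, not figures from Joyce's book.
ACCURACY = 0.99          # hypothetical predictor reliability
FULL, SMALL = 1_000_000, 1_000

def evidential_value(action):
    # Condition the opaque box's contents on your own choice:
    # choosing one box is evidence the predictor foresaw it.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * FULL + (SMALL if action == "two-box" else 0)

def causal_value(action, p_full):
    # The contents are fixed before you choose, so p_full is
    # whatever it is regardless of the action; only the $1000
    # depends on what you do.
    return p_full * FULL + (SMALL if action == "two-box" else 0)

# Evidential reasoning prefers one box...
assert evidential_value("one-box") > evidential_value("two-box")
# ...while causal reasoning prefers two boxes for ANY prior p_full,
# by simple dominance: you always gain the extra $1000.
for p in (0.0, 0.5, 1.0):
    assert causal_value("two-box", p) > causal_value("one-box", p)
```

The dominance loop is the whole causal argument in miniature: since the action cannot change p_full, two-boxing wins pointwise, even though the evidential calculation says one-boxers walk away richer.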

The author argues that causal decision theory is correct and specifically
that in the Newcomb paradox one should take both boxes, even though he
concedes that under the terms of the paradox you get less money that way.
I have always been a take-one-box guy but I will admit that he makes a
pretty good case for it being rational to take both boxes. In fact the
author claims that his side has essentially won among decision theorists,
with only a few holdouts still clinging to the other side. No doubt he
is somewhat biased in making this claim, but if even partially true it
is quite surprising and interesting.

Causal decision theory is apparently a little more difficult to set
up mathematically, because you have to distinguish outcomes that are
directly caused by your actions from those for which your actions
merely provide evidence. Much of the point of this book
is to show a sound axiomatic formulation for causal decision theory.
This requires bringing in notions of causality and setting up probability
distributions based on "caused" outcomes rather than just the outcomes
themselves.
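Roughly, and in my own symbols rather than Joyce's exact notation, the two theories differ only in which probability weights the utilities:

```latex
% Evidential value: weight each outcome O by the ordinary
% conditional probability of O given that you do A.
V(A) = \sum_{O} P(O \mid A)\, u(O)

% Causal utility: weight O instead by the probability that
% doing A would *bring about* O -- a subjunctive/causal
% probability, which need not equal P(O | A).
U(A) = \sum_{O} P(A \mathbin{\Box\!\!\rightarrow} O)\, u(O)
```

In Newcomb-type cases the two weightings come apart, because your action is correlated with the box contents without causing them; that is exactly the gap the axiomatic work has to bridge.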

I have looked ahead a little bit at this part of the book, and
unfortunately it appears that the discussion of causality is rather
abbreviated and mostly brings in some rather complex results from the
literature. The philosophical literature on causality is quite large
in itself and probably some familiarity with that would help to see how
it relates to decision theory.

Overall I am not sure how to relate the ideas in this book to the issues
we deal with. One connection is that the causal aspect sheds light on
some of the paradoxes we have discussed, such as the paradox of Adam
and Eve which Nick Bostrom covers in his thesis. In this situation
Adam wants a deer to come along so he can eat it, and to arrange this
he plans never to mate with Eve if that happens. If the deer doesn't
come along then he will mate with Eve and create the whole human race,
in which case it is highly unlikely that his particular observer-moments
would be chosen. So he reasons that from his perspective, since he is
in fact experiencing his observer-moments, it is likely that no large
human race will be produced and hence that the deer will come along.

There are various resolutions to this seeming paradox, but based on this
book I can see it as a Newcomb problem. Adam is giving himself evidence
that the deer will come, he is not causing the deer to come. So from
the perspective of causal decision theory, he is wrong to try this plan.
However this reasoning does require you to accept the causal prescription
in all Newcomb problems, which may be a harder pill to swallow than other
solutions to the Adam and Eve paradox.

More directly, the book provides various proposed axioms for rationality
and choice. I have tried to consider how they might relate to the
existence of multiple universes, but I don't see much connection.
It seems that one is still faced with the simple prescription of choosing
the outcome that maximizes the expected utility, with considerable
freedom for the utility functions.

One question is whether we can restrict or limit the decisions people make
in some of the paradoxes of copying. I am going to have some duplicates
made, and then they will have some experiences, and then some of those
duplicates get further duplicated, etc. Can we constrain the decisions
a rational person will make in these experiments? For example, if he
is given a chance to be duplicated to two people, and one will have
something good happen and one will have something bad, can we argue that
he should agree to this based on a 50% mixture of the utilities of the
two experiences? If we can make these kinds of statements, or even some
weaker ones, then I think this theory may be helpful in shedding light
on the issues we have dealt with.
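To make the proposed constraint concrete, here is a minimal sketch of the 50% mixture rule for the two-way duplication case. The utility numbers are made up purely for illustration, and the equal weighting is just the proposal under discussion, not something the book establishes.

```python
# Sketch of the proposed constraint on duplication decisions:
# treat being split into n successors as a lottery over their
# experiences, equally weighted unless stated otherwise.
def duplication_value(utilities, weights=None):
    # Expected utility over successor observer-moments.
    n = len(utilities)
    weights = weights if weights is not None else [1.0 / n] * n
    return sum(w * u for w, u in zip(weights, utilities))

# One successor has a good experience, one a bad one
# (illustrative numbers only).
u_good, u_bad = 10.0, -4.0
value = duplication_value([u_good, u_bad])   # 0.5*10 + 0.5*(-4)

# The rule: agree to the duplication iff the mixture beats
# the status-quo utility (taken here to be 0).
accept = value > 0.0
```

Even a weaker version of this rule, say that the value must lie somewhere between u_bad and u_good, would already constrain rational choice in the copying experiments.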

Hal Finney
Received on Fri Jul 12 2002 - 10:34:08 PDT
