Re: decision theory papers

From: Hal Finney <hal.domain.name.hidden>
Date: Mon, 22 Apr 2002 19:38:02 -0700

I thought the debate between Brent Meeker and Wei Dai was quite
interesting regarding self-predictability in a deterministic universe.
(Perhaps the same issue would come up in a nondeterministic universe,
where one sought to predict the probability distribution of one's future
actions.)

The issue seems related to the undecidability of the halting problem,
and Wei's arguments seem to parallel some of the reasoning in that proof.

One of the proofs of the undecidability of the halting problem works
as follows. Take a supposed halting-problem-solving program H and
embed it in a program P. P consists of running H on a copy of P itself;
if H returns "halts", P goes into an infinite loop, and if H returns
"doesn't halt", P halts.

This sets up a contradictory condition: if H says P halts, then P
doesn't halt, and vice versa. Since H is wrong about P either way,
this contradiction establishes that H cannot exist.
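
For concreteness, here is how that construction might be sketched in
Python. The oracle halts() is only a stub, of course; the whole point
of the proof is that no correct implementation of it can exist.

    def halts(program):
        """Hypothetical oracle H: return True iff program() would halt."""
        raise NotImplementedError  # no correct implementation can exist

    def P():
        if halts(P):         # run H on a copy of P itself
            while True:      # H said "halts", so loop forever instead
                pass
        else:
            return           # H said "doesn't halt", so halt immediately

    # Either way, P does the opposite of whatever halts(P) reports,
    # so any candidate H is wrong about P.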

Wei offered a similar argument in saying that no being could predict
all of its own future actions. If it could, it could run the prediction
algorithm to determine what it would do, then do the opposite, just
like program P above.
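
In code, the analogous agent might look like this; the names and the
two-option setup are just illustrative, not anything from Wei's post:

    def contrary_agent(predictor, options=("X", "not-X")):
        predicted = predictor()   # consult the predictor as a black box
        # Then take any action other than the predicted one.
        return next(o for o in options if o != predicted)

    # A predictor that always forecasts "X" is wrong about this agent:
    assert contrary_agent(lambda: "X") == "not-X"

For any predictor you hand it, the agent's actual action differs from
the prediction, so no predictor can be correct about it.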

Brent suggested that it might be impossible for the being to do the
opposite, that somehow it might be constrained always to do what it had
predicted that it would do. And I think that is as far as the debate
along these lines got.

Clearly in the case of computers, Brent's alternative would not work.
We have a model of how computers work, and we can always write P so
that it runs H as a "black box" and then does the opposite of what H
predicts. However, what about the case of intelligent beings?

It seems that based on our understanding of free will, beings ought to
have the same kind of freedom of action that is needed in this case.
They ought to be able to run any prediction algorithm, good or bad, as
a "black box", and then take whatever actions they choose as a result
of that prediction. The nature of free will is such that, given a
prediction that action X will be taken, the being can simply refuse
to take action X.

So Brent's alternative amounts to suggesting that beings in such
a universe would not have free will. Furthermore, they would be
aware of this fact once they advanced to the point where they could
make predictions about themselves. The fact that they did not have
the freedom to contradict their predictions would directly violate the
meaning of free will.

In fact it is worse than that: not only would people be unable to
contradict their own predictions about their future actions, they would
be unable to contradict similar predictions about them made by other
people. If being X tells being Y what Y will do, based on a completely
accurate theory, Y will be incapable of doing anything other than what
was predicted.

In fact, if these beings actually have this nature, then it would
seem that their absence of free will would be noticeable even without a
complete theory of the universe. If X tells Y what Y will do using
a complete theory, Y cannot contradict it. But if X tells Y what Y will
do using an incomplete theory, and it happens to be the same prediction
the complete theory would make, Y can't contradict that either, because
the only input Y received was the prediction from X. There is no way for
Y to know whether X's prediction was based on a complete theory or an
incomplete one. If we assume that Y is unable to contradict predictions
based on complete theories, then Y must also be unable to contradict
predictions based on incomplete ones, whenever they happen to match what
the complete theory would say.

In other words, if Y is about to make a binary choice (calling heads
or tails), and X tells Y which call Y will make, X has a 50% chance of
making the same prediction that a complete theory would. Hence at least
half the time Y would be forced to make the same call that X predicted,
and Y would be unable to consistently do the opposite of X's prediction.
Y's absence of free will would be manifest, and no such beings could be
under the illusion that they had free will.
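
To put a rough number on this, here is a toy simulation of the scenario.
The modeling choices (random binary calls, Y constrained exactly when
X's guess matches the complete theory) are my own assumptions, not
anything from the original thread:

    import random

    def trial():
        # What the complete theory says Y will call.
        true_call = random.choice(["heads", "tails"])
        # X's prediction, made from an incomplete theory (a blind guess).
        x_guess = random.choice(["heads", "tails"])
        if x_guess == true_call:
            call = true_call   # constrained: Y must match the prediction
        else:
            # Free: Y contradicts X's prediction.
            call = "tails" if x_guess == "heads" else "heads"
        return call == x_guess  # did Y end up matching X's prediction?

    n = 100_000
    matches = sum(trial() for _ in range(n))
    print(matches / n)   # roughly 0.5

Note that in this toy model Y in fact makes the complete theory's call
every time, which is just determinism doing its work; the visible
symptom is that Y cannot push its rate of contradicting X above 50%,
even though X is only guessing.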

These are all examples of how different the minds would have to be in a
universe which uses Brent's alternative. They would not have free will,
and in fact in many circumstances they would be unable to do other than
what someone predicted of them. This is sufficiently different from
the workings of minds as we understand them that I doubt whether it
is relevant to the issues involving decision theory that we may want
to pursue.

Hal Finney