Re: decision theory papers

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Thu, 18 Apr 2002 21:58:04 -0700

On 18-Apr-02, Wei Dai wrote:
> On Thu, Apr 18, 2002 at 05:39:39PM -0700, Brent Meeker wrote:
>> Keeping to the idea of a deterministic universe - wouldn't the
>> mathematical description of the universe include a description
>> of the brain of the subject? And if the universe is computable,
>> it follows that the behavior of the subject is computable. If
>> the person, or anyone else, runs the algorithm predicting the
>> subject's behavior - an operation that will itself occur in the
>> universe and hence is predicted - and *then the subject
>> doesn't do what is predicted*, there is indeed a contradiction.
>> But the conclusion is only that one of the assumptions is
>> wrong. I'm pointing to the assumption that the subject could
>> "then do the opposite of what it predicted" - *that* could be
>> wrong. Thus saving the other premises.

>> Obviously the contradiction originates from assuming a
>> deterministic universe in which someone can decide to do other
>> than what the deterministic algorithm of the universe says he
>> will do.

> Consider what the prediction algorithm would have to do. It
> basically has to simulate the entire history of the universe from
> the beginning until it reaches the point where the subject
> makes his decision.

Why the whole universe? Why would you suppose that it has to
simulate more than a very tiny part of the universe? But really
that's beside the point - your argument rests on the idea that
the algorithm is in the tiny part, which we assume includes the
subject's brain, and that therefore, in simulating this part, it
must simulate itself - thus requiring a vicious recursion. This
is different from supposing the algorithm reaches a conclusion
and then the subject does something contrary. It entails that
the algorithm never reaches a conclusion.

However, I don't see that the argument is conclusive. Suppose the
algorithm is run on hardware outside the tiny part that must be
included to predict the subject's decision. Then it doesn't have
to simulate itself, and it can reach a decision. If that is
possible, then it is also possible that it could be in the
subject's brain in a part (where "part" means logically distinct,
not necessarily spatially) that does not have to be simulated to
predict his behavior. For example, suppose the subject is going
to decide whether to have chocolate ice cream or vanilla ice
cream, and further suppose that his brain is structured such that
he always orders the one different from what he had last time.
Then the algorithm need only access his memory to see what he had
last time, and with a single if-then the decision is predicted.
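To make the idea concrete, here is a toy sketch (in Python; the flavor names and the memory lookup are of course illustrative inventions, not a claim about how any brain is organized):

```python
# Toy sketch of the alternation example: the predictor needs only the
# subject's remembered last order, not a simulation of his whole brain.
# "chocolate"/"vanilla" and the memory interface are made-up assumptions.

def predict_order(last_flavor):
    """Predict the next order given only the remembered last one."""
    # A single if-then suffices: the subject always switches flavors.
    if last_flavor == "chocolate":
        return "vanilla"
    return "chocolate"
```

The point is that the predictor here is logically disjoint from the decision procedure it predicts: it reads one bit of memory and never has to simulate itself.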

I don't know that all decision algorithms can be split off this
way - but on the other hand I don't see any contradiction in
supposing they could.

> Now what if the subject has a copy of the
> algorithm in his brain and tries to run the algorithm on
> himself? The algorithm would go into an infinite recursion
> trying to simulate itself simulating itself ... If you have
> only a finite amount of time and computational power with which
> to reach a decision, there is no way you can complete a run of
> the prediction algorithm within that time. So again you have to
> make the decision without being able to predict your choice
> from the mathematical description of the universe.
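The recursion Wei describes can be sketched in a few lines of Python (a deliberately trivial stand-in; a real predictor containing a copy of itself would behave the same way in principle):

```python
# A predictor that must simulate the copy of itself inside the subject's
# brain -- which must simulate itself, which must simulate itself, ...
# It can never bottom out and return a prediction.

def predict(depth=0):
    """Self-simulating predictor: recurses without ever returning."""
    return predict(depth + 1)

# With finite time and memory the run can never complete; Python signals
# the exhausted stack with a RecursionError.
try:
    predict()
    outcome = "prediction reached"
except RecursionError:
    outcome = "no prediction reached"
```

So with any finite resource bound the run halts only by failing, never by delivering the prediction.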

Of course we all know that there is no prediction algorithm, and
if there were one, you couldn't execute it. So I wonder how
important it is that there be this in-principle prohibition. Are
you going to make some further argument that depends on the
logical (not just practical) impossibility of a prediction
algorithm?

Brent Meeker
"If I had known then what I know now, I would have made the same
mistakes sooner."
   --- Robert Half

Received on Thu Apr 18 2002 - 22:01:14 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST