Re: decision theory papers
On 23-Apr-02, Wei Dai wrote:
> I think it's pretty obvious that you can't predict someone's
> decisions if you show him the prediction before he makes his
> final choice. So let's consider a different flavor of
> prediction. Suppose every time you make a choice, I can predict
> the decision, write it down before you do it, and then show it
> to you afterwards. Neither the infinite recursion argument nor
> the no-fixed-point argument works against this type of
> prediction. If this is actually possible, what would that imply
> for free will?
> If you are an AI, this would be fairly easy to do. I'll just
> make a copy of you, run your copy until it makes a decision,
> then use that as the "prediction". But in this case I am not
> able to predict the decision of the copy, unless I made another
> copy and ran that copy first.
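To make the copy-and-run idea concrete, here is a toy sketch in
Python. It assumes the "agent" is nothing but a deterministic
function of its stored state and the situation it faces; the
names and numbers are invented purely for illustration.

  def agent(state, situation):
      # Some fixed, deterministic decision procedure.
      score = sum(state) + len(situation)
      return "accept" if score % 2 == 0 else "reject"

  original_state = [3, 1, 4, 1, 5]
  situation = "job offer"

  # "Copy" the agent: duplicate its state, reuse its procedure.
  copied_state = list(original_state)

  # Run the copy first; write down its output as the prediction.
  prediction = agent(copied_state, situation)

  # The original then decides; it necessarily matches, but the
  # prediction was obtained only by running the same computation.
  # (Predicting the copy would require yet another copy.)
  decision = agent(original_state, situation)
  assert decision == prediction
  print("predicted:", prediction, "actual decision:", decision)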
> The point is that algorithms have minimal run-time
> complexities. There are many algorithms which have no faster
> equivalents. The only way to find out their results is to
> actually run them. If you came up with an algorithm that can
> predict someone's decisions with complete accuracy, it would
> probably have to duplicate that person's thought processes
> exactly, perhaps not on a microscopic level, but probably on a
> level that still results in the same conscious experiences. So
> now there is nothing to rule out that the prediction algorithm
> itself has free will. Given that the subject of the prediction
> and the prediction algorithm can't distinguish between
> themselves based on their subjective experiences, they can both
> identify with the prediction algorithm and consider themselves
> to have free will. So you can have free will even if someone is
> able to predict your actions.
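The "minimal run-time complexity" point can be made concrete with
a deliberately simple example. As far as I know there is no way
to fast-forward an iterated cryptographic hash, so the only way
to learn its output is to run every step. (A Python sketch; the
seed and iteration count are arbitrary.)

  import hashlib

  def iterate_hash(seed: bytes, n: int) -> bytes:
      # Apply SHA-256 n times. No shortcut or closed form is
      # known; "predicting" the result means doing the work.
      h = seed
      for _ in range(n):
          h = hashlib.sha256(h).digest()
      return h

  print(iterate_hash(b"initial state", 100_000).hex())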
I think "free will" is an incoherent concept and useless as a
basis for aruguments about how the world works. Most people
would say that the existence of a deterministic algorithm which
modelled and predicted one's decisions would contradict free
will. On the other hand, they would not accept a randomness in
the decision process as free will either. Both viewpoints
neglect the fact that a person is in almost continuous
interaction with their evironment and to regard them as isolated
computers is only an approximation.
I suppose that the brain's function is something close to
deterministic chaos. One's behavior is unpredictable, to some
degree, because the brain has a large amount of stored
information that interacts with the stream of new information
that has provoked the need for a decision. Almost all of this is
below the level of consciousness. Although the brain must be
almost completely deterministic, it is certainly possible that
quantum randomness could play a part.
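What I mean by "something close to deterministic chaos" can be
seen in the standard toy example, the logistic map: a completely
deterministic rule whose outcome is unpredictable in practice
unless the initial state is known to absurd precision. (A Python
sketch; the brain is of course nothing this simple, the analogy
is only determinism without predictability.)

  # Logistic map x -> r*x*(1-x): deterministic, yet two starting
  # points differing in the 9th decimal place soon diverge.
  r = 3.9

  def trajectory(x0, steps=100):
      x = x0
      for _ in range(steps):
          x = r * x * (1 - x)
      return x

  print(trajectory(0.500000000))
  print(trajectory(0.500000001))  # no resemblance after 100 steps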
> The more obvious fact that you can't predict your own actions
> really has less to do with free will, and more with the
> importance of the lack of logical omniscience in decision
> theory. Classical decision theory basically contradicts itself
> by assuming logical omniscience. You already know only one
> choice is logically possible at any given time in a
> deterministic universe,
I don't understand "logically possible". Decision theory at most
provides a quantification that identifies a certain choice as
logically optimal, and this optimality is only probabilistic.
But the optimality is relative to some value system of the
decider. The value system is not logically entailed by anything
in decision theory.
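Concretely, the quantification I have in mind is just expected
utility: the recommended choice falls out of probabilities
combined with a utility function, and that utility function, the
value system, has to come from outside the theory. A toy sketch
in Python, with invented numbers:

  # The recommendation depends entirely on the utility function
  # (the decider's value system), which decision theory itself
  # does not supply. All figures below are invented.
  outcomes = {
      "take the job": {"thrive": 0.6, "burn out": 0.4},
      "stay put":     {"thrive": 0.3, "burn out": 0.1, "stagnate": 0.6},
  }

  utility = {"thrive": 10.0, "burn out": -8.0, "stagnate": -1.0}

  def expected_utility(choice):
      return sum(p * utility[outcome]
                 for outcome, p in outcomes[choice].items())

  for c in outcomes:
      print(c, round(expected_utility(c), 2))
  print("recommended:", max(outcomes, key=expected_utility))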
> and with logical omniscience you know
> exactly which one is the possible one, so there are no more
> decisions to be made. But actually logical omniscience is
> itself logically impossible, because of problems with infinite
> recursion and lack of fixed points. That's why it's great to
> see a decision theory that does not assume logical omniscience.
> So please read that paper (referenced in the first post in this
> thread) if you haven't already.
Brent Meeker
"Every complex problem has a solution that is simple, direct,
plausible, and wrong."
-- HL Mencken
Received on Sat May 04 2002 - 13:34:21 PDT