Re: decision theory papers

From: Marcus Hutter <marcus.domain.name.hidden>
Date: Wed, 24 Apr 2002 16:51:18 +0200

H J Ruhl wrote:

> In any event in my view your argument makes many assumptions - i.e.
> requires substantial information, isolates sub systems, and seems to allow
> many sub states between states of interest all of which are counter to my
> approach.

IMO the assumption of a limited information exchange between an
intelligent being and its environment (a nearly isolated subsystem)
is unavoidable, maybe even the key to DEFINING (intelligent)
beings. Of course, the detail of complete isolation in the
intervals [t,t'] was just to illustrate the point.

Hal Finney wrote:

> So I don't think the argument against predictability based on infinite
> recursion is successful. There are other ways of making predictions which
> avoid infinite recursion. If we want to argue against predictability
> it should be on other grounds.

I am not talking about how to physically implement this infinite
recursion, e.g. by "brute force crunching a particle-level
simulation", and I do not argue against predictability in general.
But if you assume that a part of the brain can perfectly predict
the outcome of the whole brain, then this is a mathematical
recursion. The same holds if you take an external device
predicting the brain's behaviour and telling it the result
beforehand: then you have to predict brain + external device on
a third level, and so on. This is again a mathematical recursion.
Before discussing how this recursion could physically be realized,
we have to ask whether this recursion HAS a fixed point at all -
and this is already not always the case. The free will <->
computability paradox actually has nothing to do with
computability. You could equally well formulate it as
free will <-> the brain can be described by a mathematical function.
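
To make the missing fixed point concrete, here is a toy sketch in
Python (the "contrarian" agent is my own illustration, not a model
of an actual brain): an agent is told a prediction of its binary
choice and always does the opposite, so the prediction map has no
fixed point and no announced prediction can be correct.

  # Toy illustration (hypothetical agent): the agent is shown a
  # prediction of its own binary choice and then acts on it.

  def contrarian_agent(prediction: int) -> int:
      """Always do the opposite of the announced prediction."""
      return 1 - prediction

  # A prediction p can only be correct if it is a fixed point:
  # contrarian_agent(p) == p.  For this agent no such p exists.
  for p in (0, 1):
      print(p, contrarian_agent(p), contrarian_agent(p) == p)
  # prints: 0 1 False
  #         1 0 False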

Wei Dai wrote:

> I think it's pretty obvious that you can't predict someone's decisions if
> you show him the prediction before he makes his final choice.

For me it's pretty obvious too, but as the thread discussing this
paradox got longer and longer, I got the impression that it is at
least not obvious to all members of the list.

I liked the paper by David Deutsch, although his assumptions in
deriving decision/probability theory from QM could have been a bit
more explicit, mathematical, and clearly stated. Although quite
different, it reminded me of the derivation of probability theory
from Cox's axioms.
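
Roughly, and in my own paraphrase (not a quote from either paper):
Cox assumes plausibilities are real numbers and that the
plausibility of a conjunction depends only on the plausibilities of
its parts; consistency then forces, up to rescaling, the familiar
product and sum rules

  $p(A \wedge B \mid C) = p(A \mid B \wedge C)\, p(B \mid C)$
  $p(\neg A \mid C) = 1 - p(A \mid C)$

i.e. any such plausibility calculus is isomorphic to probability
theory.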

I scanned the article by Barton Lipman, but I'm not much interested
in "rational decisions based on logic", because I think there is
no need to refer to logic at all when making rational decisions.

In "A Theory of Universal Artificial Intelligence based on
Algorithmic Complexity" http://www.idsia.ch/~marcus/ai/pkcunai.htm
I developed a rational decision maker which makes optimal
decisions in any environment. The only assumption I make is that
the environment is sampled from a computable (but unknown!)
probability distribution (or in a deterministic world is
computable), which should fit nicely into the basic assumptions of
this list. Although logic plays a role in optimal resource bounded
decisions, it plays no role in the unrestricted model.
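
In compact form, and with notation simplified relative to the
report (see the papers for the precise definitions), in cycle k the
model takes the action maximizing the xi-expected reward up to a
horizon m:

  $\dot y_k = \arg\max_{y_k} \sum_{x_k} \cdots \max_{y_m} \sum_{x_m}
    \bigl(r(x_k)+\cdots+r(x_m)\bigr)\,
    \xi(\dot y \dot x_{<k}\, y x_{k:m})$

where $\xi$ is, roughly, a Solomonoff-style mixture over all
computable environments q, each weighted by $2^{-\ell(q)}$ with
$\ell(q)$ the length of the program q.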

I would be pleased to see this work discussed here.

There is also a shorter 12-page article based on this 62-page
report, available from
http://www.idsia.ch/~marcus/ai/paixi.htm
and a 2-page summary available from
http://www.idsia.ch/~marcus/ai/pdecision.htm
but they are possibly hard(er) to understand.

Best regards

Marcus