Re: my current position (was: AUDA)

From: Wei Dai <weidai.domain.name.hidden>
Date: Tue, 15 Jan 2002 12:10:05 -0800

On Tue, Jan 15, 2002 at 12:47:18PM +0100, Marchal wrote:
> This is because I include in the first person its possible
> feelings of compassion for possible others. (This is
> similar to what Brent Meeker said in his last post.)
> Compassion, although it bears on others, is a feeling, isn't it?

But what is the compassion about? In this case it's about events that
you'll never experience in the first person. If you want to reason about
your compassion and make rational decisions based on it, you have to do it
from the third-person point of view.

> I did (and that is why I try to understand your evolution).
> The very beginning of this discussion list, started by you and Hal
> Finney, was very attractive to me. I still don't understand your shift.

I'm not sure why it's hard to understand. I've concluded that it's not
always possible to make rational decisions based only on expected first
person experiences. Instead you must take into account the consequences of
your actions on the entire multiverse, even the parts of it that you don't
expect to observe. Then I went back and asked myself, if decision theory
is based on the third-person point of view, do we even need a concept of
expected first person experiences? I think the answer is no, which is
good, because the issues I brought up earlier about defining the
probability of first person experiences were never resolved, and I don't
think they are resolvable.
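
To make the contrast concrete (this is only my shorthand, not a formal
proposal): a purely third-person rule scores an action a as

    EU(a) = sum over worlds w of m(w) * U(outcome of a in w)

where m is some measure over the whole multiverse and U is my utility
function, which is free to care about worlds that no continuation of me
ever observes. A first-person rule would instead have to weight outcomes
by the probability that *I* experience them, and that is exactly the
probability I don't think can be defined.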

> Not so easy question indeed. But here the methodology I use forces me
> to define the measure by the AUDA logic Z1*. The "infinite ratios" will be
> the same thanks (hopefully) to the non-trivial constraints given by
> computational self-reference. Remember that our first person expectations
> rely on *all* our consistent (self)-extensions.

Are you saying that you believe the ratios will be the same in the
infinite limit, but you don't have a proof of that yet?

> All "relative universe/computation" exists and are on the same
> ontological footing, but we must bet on those which are more likely
> to be apparent for ourself, which are the one which provide
> relatively more numerous consistent extensions with respect to our
> current state. That's what can make our decisions purposefull, I think.
> In Schmidhuber term we need a "high multiplication" prior, which
> makes our neighborhood consistent expectation very slow to compute.

Think about it from the third person point of view. Why are you rationally
bound to treat each of your extensions equally? Why can't you care about
some of them more than others?
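
As a toy illustration (mine, with made-up numbers -- nothing here comes
from the AUDA formalism), here is one action scored under two different
weightings of three hypothetical extensions, in Python:

    # Score one action under two weightings of three hypothetical
    # consistent extensions of the current state.
    utilities = {"ext_a": 1.0, "ext_b": 0.0, "ext_c": 0.0}

    # Equal weighting: every consistent extension counts the same.
    equal = {e: 1.0 / len(utilities) for e in utilities}

    # Partial weighting: an agent that simply cares more about ext_b.
    # Nothing in decision theory by itself seems to rule this out.
    partial = {"ext_a": 0.1, "ext_b": 0.8, "ext_c": 0.1}

    for name, weights in [("equal", equal), ("partial", partial)]:
        score = sum(weights[e] * utilities[e] for e in utilities)
        print(name, score)   # equal: 0.333..., partial: 0.1

Unless something like your self-reference constraints actually forces
the equal weighting, the choice of weights looks like a fact about the
agent's preferences, not about the multiverse.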
