Re: What are the consequences of UD+ASSA?

From: Rolf Nelson <rolf.h.d.nelson.domain.name.hidden>
Date: Sat, 27 Oct 2007 17:18:23 -0000

> To put it more generally, thinking in terms of "how much you care about the
> consequences of your actions" *allows* you to have an overall preference
> about A and B that can be expressed as an expected utility:
>
> P(A) * U(A) + P(B) * U(B)
>
> since P(A) and P(B) can denote how much you care about universes A and B,
> but it doesn't *force* you to have a preference of this form. Standard
> decision theory does force you to.

True. So how would an alternative scheme work, formally? Perhaps
utility can be grounded in the "Measure" of "Qualia" (observer
moments). If you have a halting oracle, certain knowledge of a
Universal Prior, and infinite cognitive resources, you can choose your
action to maximize a utility function U(X), where X is the sequence
M(Q1), M(Q2), ... enumerating the measures of all possible Qualia.
In the typical case of everyday life decisions in 2007, M would often
reduce to an objective probability oP, and U(X) = U(M(Q1), M(Q2), ...)
may then have a positive affine (in other words, decision-theory-
order-preserving) transformation, for a typical 2007 human, into some
function U(how good life is expected to be for earthly observer O1,
how good life is expected to be for earthly observer O2, ...)
(pretending for now that you have no way of altering the "total
measure" taken up by a human being).
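
To make that concrete, here is a toy sketch in Python, treating the
uncomputable pieces as black boxes; all the function names below are
mine, not part of the scheme itself:

    def best_action(actions, qualia, measure, utility):
        """Pick the action whose induced measure vector maximizes U.

        measure(q, a) is a hypothetical stand-in for M(Q) given that
        action a is taken; evaluating it for real would need the
        halting oracle and the Universal Prior.
        """
        def value(a):
            # X = (M(Q1), M(Q2), ...), the measures of all possible
            # Qualia under action a.
            x = tuple(measure(q, a) for q in qualia)
            return utility(x)
        return max(actions, key=value)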

"How good life is expected to be for observer O1" in turn perhaps
reduces, in typical life, to oP(O1 experiences Q1) * (desirableness of
Q1) + oP(O1 experiences Q2) * (desirableness of Q2) + ...
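
In the same hypothetical notation (oP and desirability are assumed
black boxes, not anything defined above), that reduction is just a
weighted sum:

    def expected_goodness(observer, qualia, oP, desirability):
        # oP(O1 experiences Qi) * (desirability of Qi), summed over i.
        return sum(oP(observer, q) * desirability(q) for q in qualia)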

But no one actually has infinite cognitive resources, let alone a
halting oracle. So we probably still want a "logical probability" lP
to handle questions like "to what extent do I currently believe that
the Riemann Hypothesis is true?" Since you can't choose an action to
maximize U directly, you instead maximize the expected utility:
lP(X1) * U(X1) + lP(X2) * U(X2) + ..., where X1, X2, ... are the
candidate values of the measure sequence X.
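
Sketching that bounded step the same way, with lP(x, a) as a
hypothetical stand-in for the agent's logical credence that X = x if
action a is taken:

    def best_bounded_action(actions, hypotheses, lP, utility):
        def expected_utility(a):
            # lP(X1) * U(X1) + lP(X2) * U(X2) + ...
            return sum(lP(x, a) * utility(x) for x in hypotheses)
        return max(actions, key=expected_utility)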

What humans perceive as "subjective probability" would then be a
combination of the Measure-based "objective probability" and the
logic-based "logical probability".

Clear as mud, I'm sure. Plus the odds are that I got something wrong
in the details. But that's my take on it, anyway.

