Re: Decision theory

From: Jacques M Mallah <jqm1584.domain.name.hidden>
Date: Fri, 1 Jan 1999 21:41:38 -0500

On Fri, 1 Jan 1999, Wei Dai wrote:
> I [argue] that is not the problem. The problem is a physical theory making
> the same (global) predictions for every possible course of action. This is
> a problem because in classical decision theory, utility functions are
> defined over the global states of the universe.

        Yes, I can see what you mean, but I still think it would work.

> > Or to put it another way, even if the universe as a whole has no
> > free parameters, one is still free to consider a subsystem and has free
> > parameters available to specify what subsystem one chooses to look at.
>
> This might work for a universe that CAN be decomposed into subsystems, or
> in other words if the physical theory you're considering has some notion
> of locality.

        I think you're getting too abstract. How would the problem arise
in a practical situation? I have a decision to make, and I can calculate
the consequences of each possibility. It is true that I know that
overall, me-like beings choose option A 99% of the time, and option B 1%
of the time. But suppose my utility function is proportional to the
fraction of times A is chosen.
        Then maybe I should choose option A even though it won't alter the
overall statistics. This means I am taking the subsystem to be me and the
things I know to be causally affected by me, while knowing that there is
a 1% chance that I will try to choose option A but fail.
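        To make that concrete, here is a toy Python sketch of the
subsystem view. The 1% failure rate and the utility proportional to the
fraction choosing A are just the stipulations from above; nothing here is
a real calculation.

    # Toy model: I pick an intention, and physics gives me a small
    # chance of failing to carry it out.
    P_FAIL = 0.01  # stipulated chance an attempt to choose A yields B

    def p_choose_A(intend_A: bool) -> float:
        """Probability that this one being actually ends up choosing A."""
        return (1.0 - P_FAIL) if intend_A else P_FAIL

    # With utility proportional to the fraction choosing A, intending A
    # is better even though I may fail:
    for intend_A in (True, False):
        print(f"intend A = {intend_A}: P(A) = {p_choose_A(intend_A):.2f}")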
        The problem is: by choosing option A this time, am I thereby
forcing another me-like being to choose option B in order to conserve the
overall statistics? If I am, then it would seem that I indeed have no
basis for making the decision. This is the problem you raise.
        But I suspect that is not the case, because my ability to
forecast the fraction that choose A is inexact, and part of the
uncertainty comes from not knowing what I will choose this time. Indeed,
if there were only one me-like being in the whole theory, there would be
no way I could forecast what decision I will make until I make the
decision.
        Put another way, a true forecast of how many times me-like beings
choose A would have to take into account the decision I make in the case
at hand.
        So the decision I make in this case *does still alter the global
utility function*. This is possible because, remember, my mental
processes are not in competition with the laws of physics; they *are an
example of the laws of physics in action*.
        I can't say to myself 'there are 100 me-like beings, 99 of them
choose A, 1 chooses B, so I might as well choose B' because if me-like
beings did tend to think that way, at least half would have chosen B.
Achieving that 99% requires me to try to choose A.
        Of course, for all practical purposes I cannot calculate any such
thing as what fraction chooses A, which is another reason I can't say that
to myself. All I can say in most cases is that if I choose A now, it is
probably a good assumption that most me-like beings also choose A.
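        In fact the point that achieving the 99% requires me to try to
choose A can be shown in a toy simulation (my own construction; the
policy names and the numbers are just the illustrative ones from above):
the population statistic is a function of the policy the me-like beings
share, so I cannot hold the statistic fixed while changing the policy.

    import random

    random.seed(0)
    N = 100        # illustrative number of me-like beings
    P_FAIL = 0.01  # illustrative chance that an attempt at A misfires

    def fraction_choosing_A(policy: str) -> float:
        """Statistic produced when every me-like being runs this policy."""
        choices = []
        for _ in range(N):
            if policy == "try A":
                choices.append("B" if random.random() < P_FAIL else "A")
            else:  # "free-ride": trust the 99% statistic and just pick B
                choices.append("B")
        return choices.count("A") / N

    print("all try A:    ", fraction_choosing_A("try A"))      # ~0.99
    print("all free-ride:", fraction_choosing_A("free-ride"))  # 0.0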

        Getting back to your statement:

> The problem is a physical theory making
> the same (global) predictions for every possible course of action.

        The global predictions are partly determined by which course of
action I take, and can't be fully worked out without knowing my choice.
They are still determined by the laws of physics; that is not a
contradiction, because my choices are the laws in action, which is no
different from the usual case.
        That's the answer: make decisions in the usual way, since they do
make a difference. But why can't I calculate the global predictions a
priori, truthfully tell myself that everything is already determined, and
then falsely conclude, as a supposed direct consequence, that my
decisions don't matter, i.e. that the global utility function will not be
affected?
        I suppose what it amounts to is that even if the physical theory
has no free parameters, my knowledge of its predictions for me-like
beings (and thus of its overall predictions) is limited and must be
calculated with a model that does have free parameters, allowing me to
use a nontrivial estimated utility function for my decisions.
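        As a sketch of that last point (again just my own toy framing):
even if the underlying theory is fixed, my working model leaves my own
choice as a free parameter, and the estimated utility is a nontrivial
function of it.

    def estimated_utility(my_choice: str) -> float:
        """Estimated fraction choosing A, treating my choice as a free
        parameter (assuming most me-like beings mirror my choice)."""
        return 0.99 if my_choice == "A" else 0.01

    # The decision problem is nontrivial even if the universe isn't:
    best = max(("A", "B"), key=estimated_utility)
    print("best choice by estimated utility:", best)  # "A"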
        I think this may have some relation to Gödel's theorem, which I
think implies that a computer cannot fully model itself in a subprogram.
        Similarly, I cannot use a physical theory to calculate what
decisions I will make until I can input those decisions into the model.
(I might know the true theory, but couldn't solve for that quantity with
a computer.) If I could, it would be a paradox: if the theory predicts
that I do A, I could then try to defy it by doing B. The MWI might remove
that paradox by predicting only statistics, but I doubt it would
qualitatively change the fact that my predictive power is limited, and my
attempts to defy the prediction could ruin the predicted statistics. If
my calculation applies only to me-similar beings who make no such
attempts, or who don't know the predicted statistics, then it clearly
does not cover a sufficiently me-like case to give me truly me-like
statistics.
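        The defiance paradox itself is just the standard diagonal
argument, and can be written down in a few lines (a schematic sketch,
with nothing specific to any physical theory):

    from typing import Callable

    def defiant_agent(predict_me: Callable[[], str]) -> str:
        """Read the model's prediction of my decision, then defy it."""
        return "B" if predict_me() == "A" else "A"

    def model_prediction() -> str:
        # Whatever a self-accessible model outputs here...
        return "A"

    # ...the agent contradicts it, so no such model can be a complete
    # self-prediction:
    assert defiant_agent(model_prediction) != model_prediction()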
        It would be pretty cool if Gödel's theorem could thus be made to
serve as a basis of tradition and non-weirdness. (Not that I believe in a
theory with no free parameters anyway.)

                         - - - - - - -
              Jacques Mallah (jqm1584.domain.name.hidden)
       Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
            My URL: http://pages.nyu.edu/~jqm1584/