Re: no need for anthropic reasoning

From: Wei Dai <weidai.domain.name.hidden>
Date: Sat, 17 Feb 2001 01:28:27 -0800

On Fri, Feb 16, 2001 at 10:22:35PM -0500, Jacques Mallah wrote:
> Any reasonable goal will, like social welfare, involve a function of the
> (unnormalized) measure distribution of conscious thoughts. What else would
> social welfare mean? For example, it could be to maximize the number of
> thoughts with a "happiness" property greater than "life sucks".

My current position is that one can care about any property of the entire
structure of computation. Beyond that there are no reasonable or
unreasonable goals. One can have goals that do not distinguish between
conscious or unconscious computations, or goals that treat conscious
thoughts in emulated worlds differently from conscious thoughts in "real"
worlds (i.e., in the same level of emulation as the goal-holders). None of
these can be said to be unreasonable, in the sense that they are not
ill-defined or obviously self-defeating or contradictory.

In the end, evolution decides what kinds of goals are more popular within
the structure of computation, but I don't think those goals will only
involve functions on the measure distribution of conscious thoughts. For
example, caring about thoughts that arise in emulations as if they were
real (in the sense defined above) is not likely to be adaptive, but the
distinction between emulated thoughts and real thoughts can't be captured
in a function on the measure distribution of conscious thoughts.

> So you also bring in measure that way. By the way, this is a bad idea:
> if the simulations are too perfect, they will give rise to conscious
> thoughts of their own! So, you should be careful with it. The very act of
> using the oracle could create a peculiar multiverse, when you just want to
> know if you should buy one can of veggies or two.

The oracle was not meant to be a realistic example, only an illustration of
my proposed decision procedure. However, to answer your objection: the
oracle could be programmed to ignore conscious thoughts that arise out of
its internal computations (i.e., not account for them in its value
function), and this would be a value judgement that can't be challenged on
purely objective grounds.
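
To make that concrete, here is a minimal sketch in Python of such a value
function. The Thought record, its fields, and the filtering flag are
illustrative assumptions, not anything specified above; the sketch only
shows the idea of leaving the oracle's internally generated thoughts out of
the valuation.

    from dataclasses import dataclass

    @dataclass
    class Thought:
        happiness: float        # property the value function cares about
        measure: float          # unnormalized measure of this thought
        oracle_internal: bool   # True if it arose inside the oracle's own computation

    def value(thoughts):
        # Sum measure-weighted happiness, skipping thoughts that the oracle's
        # internal computations gave rise to (the value judgement described above).
        return sum(t.measure * t.happiness
                   for t in thoughts if not t.oracle_internal)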

> You need to know which type of thought has greater measure, "I saw
> heads, and ..." or "I saw tails, and ...". I call the measure of one,
> divided by the total measure, the *effective* probability, since it
> (roughly) plays the role of the probability for decision theory. But you
> have a point in a way ...
> Decision theory is not exactly the same as anthropic reasoning. In
> decision theory, you want to do something to maximize some utility function.
> By contrast, anthropic reasoning is used when you want to find out some
> information.

Anthropic reasoning can't exist apart from a decision theory; otherwise
there is no constraint on what reasoning process you can use. You might as
well believe anything if it has no effect on your actions.
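
To make the quoted notion of effective probability, and its role in a
decision procedure, concrete, here is a minimal sketch in Python. The
measures, actions, and payoffs are made-up numbers; only the arithmetic
(measure of one thought type divided by total measure, then used as the
probability weight in an expected-utility calculation) reflects the
definitions above.

    # Unnormalized measures of the two thought types from the example above.
    measures = {"I saw heads, and ...": 2.0,
                "I saw tails, and ...": 1.0}

    total = sum(measures.values())
    # Effective probability: measure of one thought type divided by total measure.
    effective_prob = {k: m / total for k, m in measures.items()}

    # Decision-theory step: choose the action with the highest expected utility,
    # weighting outcomes by their effective probabilities. Payoffs are hypothetical.
    utilities = {"bet on heads": {"I saw heads, and ...": 1.0, "I saw tails, and ...": -1.0},
                 "bet on tails": {"I saw heads, and ...": -1.0, "I saw tails, and ...": 1.0}}

    def expected_utility(action):
        return sum(effective_prob[o] * utilities[action][o] for o in effective_prob)

    best_action = max(utilities, key=expected_utility)
    print(effective_prob, best_action)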
Received on Sat Feb 17 2001 - 01:30:15 PST
