>From: Wei Dai <weidai.domain.name.hidden>
>On Fri, Feb 16, 2001 at 10:22:35PM -0500, Jacques Mallah wrote:
> > Any reasonable goal will, like social welfare, involve a function of
>the (unnormalized) measure distribution of conscious thoughts. What else
>would social welfare mean? For example, it could be to maximize the number
>of thoughts with a "happiness" property greater than "life sucks".
>
>My current position is that one can care about any property of the entire
>structure of computation. Beyond that there are no reasonable or
>unreasonable goals. One can have goals that do not distinguish between
>conscious and unconscious computations, or goals that treat conscious
>thoughts in emulated worlds differently from conscious thoughts in "real"
>worlds (i.e., in the same level of emulation as the goal-holders). None of
>these can be said to be unreasonable, in the sense that they are not
>ill-defined or obviously self-defeating or contradictory.
I disagree on two counts. First, I don't consider self-consistency to
be the only requirement to call something a reasonable goal. To be honest,
I consider a goal reasonable only if it is not too different from my own
goals. It is only this type of goal that I am interested in.
Second, there is no way of knowing whether you are in a so-called "real
world" or in a "virtual world". So if I don't care about "virtual" people,
I don't even know whether or not I care about myself. That doesn't seem
reasonable to me.
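    (To be concrete about what I mean by a function of the measure
distribution, here is a minimal sketch; the symbols M, h, and h_0 below are
just illustrative labels, not anything either of us has defined before.
Write M(t) for the measure of a conscious thought t and h(t) for its
"happiness". Then the example goal I gave above is to maximize

    U = \sum_t M(t) \, \Theta( h(t) - h_0 ),

where h_0 is the "life sucks" threshold and \Theta is the unit step
function. Any utility built out of M alone counts as this kind of goal; a
goal that needs more than M, such as one that treats emulated thoughts
differently from "real" ones, does not.)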
>In the end, evolution decides what kinds of goals are more popular within
>the structure of computation, but I don't think they will only involve
>functions on the measure distribution of conscious thoughts. For example,
>caring about thoughts that arise in emulations as if they are real (in the
>sense defined above) is not likely to be adaptive, but the distinction
>between emulated thoughts and real thoughts can't be captured in a function
>on the measure distribution of conscious thoughts.
"Evolution" is just the process that leads to the measure distribution.
(Conversely, those who don't believe in an absolute measure distribution
have no reason to expect that the Darwin who appears in their world turned
out to be correct.)
Also, I disagree that caring about others (regardless of who they are)
is not likely to be "popular". My speculation is that it's likely to occur
in intelligent species that divide into groups and then merge back into one
group peacefully.
> > So you also bring in measure that way. By the way, this is a bad
>idea: if the simulations are too perfect, they will give rise to conscious
>thoughts of their own! So, you should be careful with it. The very act of
>using the oracle could create a peculiar multiverse, when you just want to
>know if you should buy one can of veggies or two.
>
>The oracle was not meant to be a realistic example, just to illustrate my
>proposed decision procedure. However, to answer your objection, the oracle
>could be programmed to ignore conscious thoughts that arise out of its
>internal computations (i.e., not account for them in its value function)
>and this would be a value judgement that can't be challenged on purely
>objective grounds.
I've already pointed out a problem with that. Let me add that your
solution is also a rather boring answer to what could be an interesting
problem, for those who do care about "virtual" guys (and have the
computational resources).
> > Decision theory is not exactly the same as anthropic reasoning. In
>decision theory, you want to do something to maximize some utility
>function.
> > By contrast, anthropic reasoning is used when you want to find out
>some information.
>
>Anthropic reasoning can't exist apart from a decision theory; otherwise
>there is no constraint on what reasoning process you can use. You might as
>well believe anything if it has no effect on your actions.
I find that a very strange statement, especially coming from you.
First, I (and other people) value knowledge as an end in itself. Even
if I were unable to take other actions, I would seek knowledge. (You might
argue that it's still an action, but clearly it's the *outcome* of this
action that anthropic reasoning will affect, not the decision to take the
action.)
Further, I do not believe that even in practice my motivation for
studying the AUH (or much science) is really to make decisions about
what actions to take; it is pretty much just out of curiosity. One so
motivated could well say "you might as well do anything, if it has no effect
on your knowledge". (But you can't believe just anything, since you want to
avoid errors in your knowledge.)
Second, it is well known that you believe a static string of bits could
be conscious. Such a hypothetical observer would, by definition, be unable
to take any actions. (That includes thinking; he would just "have one
thought stuck in his head".)
- - - - - - -
Jacques Mallah (jackmallah.domain.name.hidden)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://hammer.prohosting.com/~mathmind/