Re: need for anthropic reasoning

From: Jacques Mallah <jackmallah.domain.name.hidden>
Date: Mon, 26 Feb 2001 23:06:00 -0500

>From: Wei Dai <weidai.domain.name.hidden>
>On Tue, Feb 20, 2001 at 04:52:10PM -0500, Jacques Mallah wrote:
> > I disagree on two counts. First, I don't consider self-consistency
> >to be the only requirement to call something a reasonable goal. To be
> >honest, I consider a goal reasonable only if it is not too different from
> >my own goals. It is only this type of goal that I am interested in.
>
>That's fine, but when most people say "reasonable" the *reason* is not
>just similarity to one's own beliefs.

    I disagree. This is an empirical question - what *do* most people mean
- so we would need to take some kind of survey to find out. And it would
have to be carefully done, because while they might deny it if simply
asked, I believe they will secretly mean exactly that. I am just more
honest about it.

> > Second, there is no way of knowing whether you are in a so called
>"real world" or in a "virtual world". So if I don't care about "virtual"
>people, I don't even know whether or not I care about myself. That doesn't
>seem reasonable to me.
>
>That's right, you don't know which world you are in. The proposal I made
>was to consider your actions to affect all worlds that you can be in. But
>you may not care about some of those worlds, in which case you just don't
>take the effects of your actions on them into account when making your
>decisions.

    But it remains the case that you don't know whether or not you care
about yourself, and that doesn't seem reasonable to me. Shall we go to your
"most people" standard on this one as well?

> > "Evolution" is just the process that leads to the measure
> >distribution. (Conversely, those who don't believe in an absolute
> >measure distribution have no reason to expect the Darwin who appears
> >in their world to have been correct.)
>
>I do believe in an absolute measure distribution, but my point is that
>evolution probably does not favor those whose utility functions are
>just functions on the measure distribution.

    And I think it does. To find out, we're going to need to simulate 1000
Earth-like planets at the subatomic level. Who's got a big computer?
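
    (For concreteness, I read a utility function "on the measure
distribution" as something of the form

        U = \sum_x M(x) u(x) ,

where M(x) is the measure of observer-moment x and u(x) is the utility
assigned to it. That notation is mine, one sketch of a reading, not
something Wei spelled out.)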

> > Also, I disagree that caring about others (regardless of who they
> >are) is not likely to be "popular". In my speculation, it's likely to
> >occur in intelligent species that divide into groups, and then merge back
> >into one group peacefully.
>
>Soon we may have AIs or uploaded human minds (i.e. human minds scanned and
>then simulated in computers). It seems to me that those who don't care
>about simulated thoughts would have an advantage in exploiting these beings
>more effectively. I'm not saying that is a good thing, of course.

    You're assuming that the AIs couldn't fight back. With technology
improving, they might be exploiting us soon.
    Do you think that, 150 years ago, white people who didn't care about
blacks had an evolutionary advantage?

>I also value knowledge as an end in itself, but the problem is: how do you
>know what is true knowledge? If you don't judge knowledge by how effective
>it is in directing your actions, what do you judge it by,

    Occam's razor. It's effective in directing actions, but that's really
just a side effect.
    (Also, there's no way to judge how effective something is in directing
actions, unless you already have some means of judging what the truth is!)
    Also, just because something is effective doesn't mean it's true! For
example, people usually ascribe human-like emotions to dogs. This leads to
an ability to predict dog behavior. Yet many experts believe that dog
owners are often fooling themselves, and the predictions work for other
reasons. Dog reasons.

>and how do you defend those criteria against others who would use different
>criteria?

    Luckily, it is effective, so those who depart too much tend to die off.
That still leaves a large group of semi-Occamites who use it only
selectively. These you just have to make fun of.
    It's a postulate, sure. All foundational beliefs are unprovable and
ultimately come down to the brain's intuition. "If a=b, and b=c, then a=c"
is an example. For those who don't believe it, I can offer no argument
(other than repeating the word "obvious" over and over). That doesn't mean
I'm going to stop believing it.
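
    (Spelled out, that postulate is just

        \forall a, b, c : (a = b \land b = c) \Rightarrow (a = c) ,

and any "proof" of it within a formal system only pushes the appeal to
intuition one level down, into the system's own primitive rules.)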

>Even if you study science only out of curiosity, you can still choose
>what to believe based on how theoretically effective it would be in making
>decisions.

    No, and you never could, for any reason. (You'd first need to know how
to judge effectiveness. I guess you could postulate some arbitrary method
for that, though. No joke; some people on this list think that quantum
suicide is useful, remember?) It's all based on Occam's razor (and other
postulates, like the a=b=c thing).
    In addition, you'd need a lot of data to judge the effectiveness of a
belief (even with Occam's razor). In most cases, no one has that much data
unless they can also just use Occam's razor directly. I'd like to see a
counterexample; I doubt one is possible.

                         - - - - - - -
               Jacques Mallah (jackmallah.domain.name.hidden)
         Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
         My URL: http://hammer.prohosting.com/~mathmind/
Received on Mon Feb 26 2001 - 20:23:33 PST
