Re: need for anthropic reasoning
On Tue, Feb 20, 2001 at 04:52:10PM -0500, Jacques Mallah wrote:
> I disagree on two counts. First, I don't consider self-consistency to
> be the only requirement to call something a reasonable goal. To be honest,
> I consider a goal reasonable only if it is not too different from my own
> goals. It is only this type of goal that I am interested in.
That's fine, but when most people call a goal "reasonable" the *reason* is
not just its similarity to their own goals.
> Second, there is no way of knowing whether you are in a so-called "real
> world" or in a "virtual world". So if I don't care about "virtual" people,
> I don't even know whether or not I care about myself. That doesn't seem
> reasonable to me.
That's right, you don't know which world you are in. The proposal I made
was to treat your actions as affecting all of the worlds you might be in.
But you may not care about some of those worlds, in which case you simply
don't take the effects of your actions on them into account when making
your decisions.
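
To make that concrete, here is a minimal sketch in Python of the kind of
decision procedure I have in mind. All of the names, measures, and payoffs
are made-up assumptions for illustration: an action is scored by summing,
over every world you might be in, the measure of that world times the
utility of the action's consequences there, and worlds you don't care
about are simply left out of the sum.

    # A minimal sketch of the proposed decision procedure; all names,
    # measures, and payoffs below are illustrative assumptions.

    def best_action(actions, worlds, measure, utility, cared_about):
        """Pick the action maximizing measure-weighted utility,
        counting only the worlds the agent cares about."""
        def score(action):
            return sum(measure[w] * utility(action, w)
                       for w in worlds
                       if w in cared_about)  # ignored worlds add nothing
        return max(actions, key=score)

    # Example: an agent unsure whether it is in a "real" or a
    # "virtual" world (hypothetical measure and payoffs).
    worlds = ["real", "virtual"]
    measure = {"real": 0.7, "virtual": 0.3}

    def utility(action, world):
        payoffs = {("help", "real"): 5,    ("help", "virtual"): 5,
                   ("exploit", "real"): 8, ("exploit", "virtual"): -20}
        return payoffs[(action, world)]

    # Caring about both worlds: help scores 5.0, exploit scores -0.4.
    print(best_action(["help", "exploit"], worlds, measure, utility,
                      cared_about={"real", "virtual"}))  # -> help
    # Caring only about the "real" world: exploit wins (5.6 > 3.5).
    print(best_action(["help", "exploit"], worlds, measure, utility,
                      cared_about={"real"}))             # -> exploit

Note how restricting the set of cared-about worlds can flip the decision,
which is all I mean by not taking some worlds into account.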
> "Evolution" is just the process that leads to the measure distribution.
> (Conversely, those who don't believe in an absolute measure distribution
> have no reason to expect Darwin to appear in their world to have been
> correct.)
I do believe in an absolute measure distribution, but my point is that
evolution probably does not favor those whose utility functions are
just functions of the measure distribution.
> Also, I disagree that caring about others (regardless of who they are)
> is not likely to be "popular". In my speculation, it's likely to occur in
> intelligent species that divide into groups, and then merge back into one
> group peacefully.
Soon we may have AIs or uploaded human minds (i.e., human minds scanned and
then simulated in computers). It seems to me that those who don't care
about simulated minds would have an advantage, since they could exploit
these beings more effectively. I'm not saying that would be a good thing,
of course.
> >Anthropic reasoning can't exist apart from a decision theory, otherwise
> >there is no constraint on what reasoning process you can use. You might as
> >well believe anything if it has no effect on your actions.
>
> I find that a very strange statement, especially coming from you.
> First, I (and other people) value knowledge as an end in itself. Even
> if I were unable to take other actions, I would seek knowledge. (You might
> argue that it's still an action, but clearly it's the *outcome* of this
> action that anthropic reasoning will affect, not the decision to take the
> action.)
I also value knowledge as an end in itself, but the problem is: how do you
know what counts as true knowledge? If you don't judge knowledge by how
effective it is in directing your actions, what do you judge it by, and how
do you defend those criteria against others who would use different ones?
> Further, I do not believe that even in practice my motivation for
> studying the AUH (or much science) is really so as to make decisions about
> what actions to take; it is pretty much just out of curiosity. One so
> motivated could well say "you might as well do anything, if it has no effect
> on your knowledge". (But you can't believe just anything, since you want to
> avoid errors in your knowledge.)
Even if you study science only out of curiosity, you can still choose
what to believe based on how effective it would theoretically be in making
decisions. But again, if you have a better idea I'd certainly be interested
in hearing it.
> Secondly, it is well known that you believe a static string of bits could
> be conscious. Such a hypothetical observer would, by definition, be unable
> to take any actions. (Including thinking, but he would "have one thought
> stuck in his head".)
I'm not confident enough to say that I *believe* a static string of bits
could be conscious, but that is still my position until a better idea
comes along. I'd say that consciousness and decision making may not have
anything to do with each other, and that consciousness is essentially
passive in nature. A non-conscious being can use my proposed decision
procedure just as well as a conscious being.
To be completely consistent with what I wrote above, I have to say that if
a theory of consciousness plays no role in decision theory (as mine does
not), then accepting it is really an arbitrary choice. I guess the only
reason to do so is psychological comfort.
Received on Thu Feb 22 2001 - 20:35:39 PST