Re: need for anthropic reasoning

From: rwas rwas <mc68332.domain.name.hidden>
Date: Tue, 27 Feb 2001 12:23:07 -0800 (PST)

Hello,
I'm new in here. I apologize in advance for any
inadvertent transgressions...



> > Second, there is no way of knowing whether you are in a so called
> > "real world" or in a "virtual world". So if I don't care about "virtual"
> > people, I don't even know whether or not I care about myself. That
> > doesn't seem reasonable to me.

I'd argue that all worlds are just as real, or unreal, as you make them.
Finding a common context as some mechanism to validate truth seems naive.
One can only apply truth to issues within the context being evaluated.

> > Soon we may have AIs or uploaded human minds (i.e. human minds scanned
> > and then simulated in computers). It seems to me that those who don't
> > care about simulated thoughts would have an advantage in exploiting
> > these beings more effectively. I'm not saying that is a good thing,
> > of course.

I enjoyed considering this possibility. It sounds a
lot like freedom.

My current understanding tells me that there is much
more to mind than just logic and reasoning power. The
power of the intellect is its ability to transcend the
chaos of undisciplined thought and feeling. Its
downfall is its declaration of absolutism, that it
stands as the pinnacle of understanding. The problem I
find is that the intellect, having developed in this
world, knows only *this world*. Some would argue that
there is no other world. I'd argue it's the intellect
defining itself in terms of the *apparent* world, and
religiously maintaining the faith, lest it find its
own demise.

A truly powerful mind (imo) is one that quickly adapts
to any rules found in any context it operates in.
Clinging to one realm and making it the center of the
universe sounds a lot like religion to me.

> You're assuming that the AIs couldn't fight back. With technology
> improving, they might be exploiting us soon.

I do a lot of conceptual work in AI. I find that
without purpose, an entity is one step closer to
conceptual death. An AI knowing enough to know it
wants to exploit others probably isn't burdened by the
chaotic thinking humans are plagued with. It is more
likely that AIs achieving this level of cognition and
consciousness will seek to cooperate. They would want
to achieve things that they recognize only humans can
act as a catalyst for. Another scenario is that AIs
might have less consciousness than just described, and
that they operate in competition, not conscious of
what they are actually doing. I think this is possible
on a small scale, but it would not continue very far.
Insects are, in effect, small machines without much in
the way of consciousness. Aside from the occasional
plague or locust swarm, we don't worry about them too
much.


> Do you think that, 150 years ago, white people who didn't care about
> blacks had an evolutionary advantage?
>
> > I also value knowledge as an end in itself, but the problem is how do
> > you know what is true knowledge? If you don't judge knowledge by how
> > effective it is in directing your actions, what do you judge it by,

I think this is an issue of consciousness. One may
operate with knowledge on a small scale, finding
harmony in their life by keeping things simple. There
are those who develop skills in applying vast amounts
of knowledge to complicated problems. You might ask:
which is better? I think it depends on what a person
wants out of life. To judge something, I think,
requires a contextual awareness. What applies for one
might not apply for another. In science, we maintain a
rigid form of thinking to, in effect, keep from
deluding ourselves. It also serves as a language
shared by anyone who would join and uphold the
principles of science (the scientific method, etc.).
But again, the validity and applicability of the
knowledge gained in this club depends on the context
it is applied to. A scientist might say: This drug
will improve your life. The farmer or other simple
person might say: I don't care. The scientist might
see statistics that say: These people are dying
needlessly. The simple person might say: That's life.
You might make a limited scientist out of a given
simple person, making them see your viewpoint. But
have you improved their life? Have you made them see?
Or have you just blinded them?

Robert W.
