Re: Fwd: Implementation/Relativity

From: Christopher Maloney <dude.domain.name.hidden>
Date: Thu, 29 Jul 1999 08:15:41 -0400

Hans Moravec wrote:
>
> Russell Standish <R.Standish.domain.name.hidden>:
> > I don't think we ever discussed the concept of attributing
> > consciousness to inanimate objects before Hans came along.
>
> But I think you DID agree to attribute consciousness to
> purely abstract entities, notably mathematically defined
> universes containing SASes.
>
> I merely pointed out that it is possible, even natural
> and common, to map such abstractions containing
> self-aware systems onto many things we commonly encounter.
>
> This violates some reflexive assumptions you carry, many
> instilled by a western education.
> Those assumptions badly need to be violated.
>
> They may have been good during our recent naive materialist
> phase of development, but that phase is ending.
> This list's discussion topic is one symptom of that end, as
> are looming questions about conscious machines.
>
> Other traditions have no problem seeing minds in
> inanimate objects, when such interpretation facilitates
> interaction. That acceptance has much to do with the
> Japanese comfortable acceptance of robots.
>
> Western stinginess in attributing minds, on the other
> hand, is becoming a Luddite-rousing impediment to progress.

I believe this is what's called "idolatry". To claim that this
is a new, exciting concept in science and/or philosophy is
preposterous. And the claim that western philosophy is flawed
because it is less idolatrous is utterly ludicrous.

Hans, you should try to back up some of your claims a little,
instead of declaring them with certainty, and deriding us for
not seeing the obvious. I don't like being called pedestrian.
And your idea of attributing consciousness to a teddy bear
is not as novel as you'd like to think. Of course I've
considered that novels are "windows" into other universes, and
I'd be surprised if most of the members of this list hadn't
as well.

But the problem, which makes such a point of view useless, was
pointed out by Hal. Fictional scenarios are lawless. In our
world, we invariably see that some sort of order and persistence
manifests itself. But in fiction I, or any author, am free to
make up whatever I want, almost without constraint.


Hans Moravec wrote:
>
> Christopher Maloney <dude.domain.name.hidden>:
> > I'm sure that there are ... universes in which a detective named
> > Sherlock Holmes actually exists, fitting all the right descriptions.
> > ... but they are not accessible by us.
>
> The Conan Doyle books are an access to such universes! Like a spycam
> peeking into them. Universes are, after all, abstractions, exactly as
> are fictional scenarios. Simulations, whether in computers or authors'
> and readers' imaginations, connect alternative worlds to us.

I don't believe that universes are mere abstractions. I believe
in the "all-universe hypothesis", but I also believe that it must
be possible to establish some kind of measure over the universes
that gives rise to the physical laws we witness. So, in some sense,
they are "real". If you disagree with this, kindly define what you
mean by "abstractions".

Also, I'm curious how you react to Descartes' "I think, therefore
I am". When you say that each of us is just an abstraction, what do
you mean? I would agree with Descartes that my own existence is the
only thing I can be really sure of. And dammit, I do seem to be
trapped inside some sort of universe that I never made, and that obeys
physical laws. How do you explain that?

 
> I think, deep down, you harbor the conventional illusion that our
> physical world is somehow more real than other possible worlds. You
> should try to liberate yourself from that notion.

Please refrain from psycho-analyzing me. I don't agree. I *do*
believe that there must be some sort of measure. I would put it
one of two ways:
  Sup-phys perspective: some mathematical structures have a smaller
  measure than others, and are thus less likely to be observed by
  me (I'm less likely to find myself in those).

  Comp perspective: Out of the entire ensemble of possible next
  inputs to my computational structure(s) (plural because as Bruno
  has pointed out, I cannot be sure of exactly which program I am)
  there is a measure of likelihood, which makes certain sets of
  inputs much more likely than others.

I would never say "more real". I often say things like "less
likely".
 
> As an exercise to make Sherlock's reality more apparent, imagine the
> following progression of alternate implementations:

Okay.
 
> The adventures of Sherlock Holmes described in a book (as you read it,
> a simulation of Sherlock's world is created in your head, but you
> (mistakenly) discount that as not real)

I never discounted the simulation of Sherlock's world in my head as
not real. But I assume you'd agree that the simulation is of
extremely low fidelity. Now, if you work with simulators, you must
know that the lower the fidelity of the simulation, the less useful
it is (in general).

 
> The same stories portrayed by Jeremy Brett and other actors. There is
> now a lot more specific detail, including visual. Sherlock Holmes
> really exists as an interpretation of the behavior of Jeremy Brett
> (there is another interpretation in which Brett is an actor portraying
> Holmes, but that is interesting mainly to acting students).

Also extremely low fidelity.

At this point, Hans, I'd like to say
that I am entirely with you in interpreting a simulation as reality
in many cases. For example, in a weather simulator, if it's of
high enough fidelity, I'd say that weather really is happening in
there. But within any simulation, there is an imposition of new
physical laws. Also, the structures therein have a certain specifiable
complexity and behavior.
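
To make "imposition of new physical laws" concrete, here is a
minimal sketch (my example, not yours) using Conway's Game of
Life as the simulated universe. Its entire physics is the update
rule below, whatever substrate carries out the steps:

  from collections import Counter

  def life_step(cells):
      """cells: set of (x, y) live coordinates; returns the next
      generation under Conway's rules."""
      neighbors = Counter(
          (x + dx, y + dy)
          for (x, y) in cells
          for dx in (-1, 0, 1)
          for dy in (-1, 0, 1)
          if (dx, dy) != (0, 0)
      )
      # Alive next step: exactly 3 neighbors, or 2 if already alive.
      return {c for c, n in neighbors.items()
              if n == 3 or (n == 2 and c in cells)}

  # A glider: a structure whose complexity and behavior are fully
  # specifiable within these laws.
  glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
  for _ in range(4):
      glider = life_step(glider)
  print(sorted(glider))  # the same shape, shifted one cell diagonally

A glider's behavior is completely determined by those laws, and
that is all I'm granting when I say that a high-fidelity
simulation is "real".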

The simulation of a fictional character inside someone's head is a far
cry, I'd say, from another human being. It's just not even remotely
similar. It obeys different laws of emotion - which are probably some
sort of bastardized subset of the actor's. It can't have any memories
that are not from the actor, etc.

I just don't believe in magic. As I said in my earlier post, I could
cut open a person's head and see all kinds of gunk in there that I
could use to explain that person's behavior. But how would I explain
the behavior of a fictional character?

 
> The same stories embedded into interactive video game. You can now
> not only watch Sherlock, but interact and he will respond to you.
> When game AI is sufficiently advanced, you can have long, insightful
> conversations with him.

Then, and only then, does he become an implementation of a conscious
structure. When the computational structure is complex enough to
pass a Turing test, then you have something.

 
> The same AI programs that control the virtual characters installed in
> full-size robot bodies. Not only can you talk with him, you can run
> with him across the moors, and when the story is over, take him out
> and introduce him to your friends, as real a part of the particular
> fantasy we call physical reality as you or I.

I, for one, find your emphasis on robots somewhat pedestrian. Why
should the AI need a body?

 
> (I thought the Star Trek TNG explorations of these kind of ideas
> via holodeck virtual reality was pretty good, and much better than
> some of the pedestrian sentiments I've seen on this list recently.)
>
> > If our tools were sophisticated enough, we could figure out what
> > that creature was experiencing at that moment, independent of his or
> > her report.
>
> NO! We may determine the full physical structure of an organism well
> enough to simulate it faithfully as a purely physical object.
>
> However, any experiences we impute to it will remain a subjective
> matter with different answers for different observers. Some observers
> will be content to say there are no experiences in any case, including
> when they simulate you or me.

And I would say that they are wrong. But that would be a matter of
definition among those super-beings, and there would be no
ambiguity. It's not magic. Sometimes it sounds as though you have
a weird dualist perspective.

Let me say it again, there would be no ambiguity. They would make
the definitions, and then be able to discern and answer questions
with (supposedly) perfect detail about what it is like to be me.
Once they define "subjective experience", they would be able to
tell what it was that I was subjectively experiencing at a given
moment, with greater accuracy than I'd be able to report it.

>
> Physical measurements don't objectively reveal experiences, because
> experiences are not physical properties. Pain, pleasure, belief and
> the other psychological components of consciousness are abstractions
> that can be mapped onto physical structures, but not in a unique way.

Then they become useless. I would suggest that there are (or could
be) definitions of those concepts that could apply, perhaps statistically,
across a broad range of complex organisms (or computer programs). The
ambiguity exists now because we can't grasp the entire structure. We
know so little about the workings of the brain.

 
> Some mappings are useful for particular purposes. Psychological
> mappings surely evolved so we could coordinate better in social
> groups. It helps me (read "me") act effectively to classify your
> state as hungry or sad or in pain, or liking or disliking me.

This is correct. But when we move into the realm of science, we
need to make our definitions precise.
 
> It also helps me plan my own activities to similarly classify my own
> state, and a richer classification is possible because I have
> privileged access to all sorts of internal variables like signals
> from my peripheral and visceral nerves and brain systems, hormone
> concentrations and so on.
>
> The resulting abstract psychological self-interpretation (our internal
> sense of consciousness) is so tightly integrated to our physical
> functioning that it may seem absolute and inevitable, but it's not.
>
> The standard self-interpretation is replaced by other interpretations,
> for instance when we sleep, become unconscious, are hypnotized or
> otherwise influenced by compelling suggestions or go into meditative
> trances (even as we perform tasks excellently). In many such cases
> seemingly absolute sensations like intense pain simply vanish, no
> longer part of our self-interpretation.
>
> Multiple-personality syndrome also seems to be an example of a body
> and brain interpreting itself differently at different times.

I fail to see the point of any of these examples. You started out
by saying "the resulting ... self-interpretation ... may seem absolute
and inevitable, but it's not." But you have only shown that the
self-interpretation is a function of physical and emotional state.
So it's another set of variables; what's the problem?
 
> Western philosophy of mind, influenced by soulist religious ideas, has
> hyperinflated the significance of our own mutable self-interpretations
> into absolute immutable bedrocks of existence. But that idea just
> doesn't work, it only befuddles its holders.

I don't know, I don't feel befuddled. Are you saying I'm wrong,
that you somehow have access to my mental state, and my self-
interpretation is questionable?


-- 
Chris Maloney
http://www.chrismaloney.com
"Donuts are so sweet and tasty."
-- Homer Simpson