Re: 3 possible views of "consciousness"

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Mon, 05 Feb 2001 01:36:48 -0500

hpm.domain.name.hidden wrote:

> > So then we get the same problem, that it seems sort of arbitrary to
> > say that a computer is a "good" implementation but a rock is not.
>
>I think a rock is a perfectly good implementation of any
>self-contained computation, say of a universe and its inhabitants.

It depends what you mean by "good implementation." The context of my
comment above was, *if* you believe there is a single true set of
psychophysical laws, are the laws likely to be defined in terms of
"computation" or not? If you don't believe in these sorts of laws, then the
question of what's a "good"
implementation is somewhat academic--it's really just a question of what's
the neatest way to define our terms. If, on the other hand, there's a
single set of psychophysical laws that reality uses to determine some kind
of global measure on the set of all possible experiences (making some types
of experiences more probable and others less probable), then we can ask
about what those laws might look like, based on our belief that they should
be "elegant" in some way (so we can rule out laws that say only blue-eyed
people are conscious, for example).

My question for you is, does your Platonic view of things rule out the idea
of a single global theory of consciousness? I'm not talking about different
Platonic worlds having their own unique definitions of consciousness...if
that was true we'd still have the problem of why I'm any less likely to
experience the laws of physics suddenly breaking down than I am to
experience the world continuing as normal. We need a truly global,
"objective" theory of consciousness to deal with this problem, I think (and
possibly a global measure on the space of all possible laws of physics,
although not necessarily).

This shouldn't be so hard to swallow, even for a strict Platonist--after
all, we expect the laws of arithmetic to be the same in all possible
universes, and I don't really see how something like information theory
could change either. If there's really a theory of consciousness out there,
my hope is that it would be similar to information theory, deducible from a
few basic first principles which perhaps could be justified on philosophical
grounds.

> > It's true that under the right mapping, the ticks of a clock can be seen
> > as doing any "computation" you please, including a simulation of an
> > intelligent A.I. But if you wanted to actually implement this mapping in
> > the physical world so you could interact with the A.I., you'd end up
> > reproducing the causal structure of the A.I. on the computer (or brain)
> > responsible for the mapping. That's my intuition anyway--since I don't
> > have a precise definition of "causal structure" I can't be sure.

[snip comments about whether a clock is a good implementation or not]

>But if the interaction is maintained, why fret about the internal
>implementation?

This position seems pretty close to behaviorism...I believe that there is
more to me than my outward actions. A lookup table that could only simulate
10 seconds of interaction with me would almost certainly not experience what
I experience over the course of 10 seconds, because there are plenty of things
going on in my mind that can't be expressed in a 10-second soundbite, but
are part of my conscious experience nevertheless.

>That should be a private matter between an entity and
>its maker (including its code optimizer, which might take a
>straightforward, Chalmers-blessed causal AI formulation and turn it
>into a humongous lookup table purely for the sake of efficiency.
>Chalmers might say that level 20 optimization (partial tables) leaves
>the system conscious, but level 21 (full table) turns it into a
>zombie. I say he's blowing smoke: there's no justification for such a
>distinction.).

Well, I'll tell you why *I* think a reasonable theory of consciousness
would distinguish between an A.I. and a lookup table with identical output,
and you can tell me what you think.

My argument against lookup tables is based on the fact that the only way to
build a lookup table is to actually run a regular simulation through all
possible inputs, and then record its outputs. In terms of the intuition I
outlined before, the simulation would have the right "causal structure",
while the lookup table is basically just a huge searchable archive of
recordings of the simulation in different circumstances. There's no more
reason to think a lookup table is conscious than there is to think a
videotape of me talking is conscious.
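
To make that concrete, here's a minimal sketch in Python (a made-up toy agent, not anyone's actual AI design): the simulation updates its internal state at every step, while the lookup table is built only by running fresh copies of that simulation over every possible input sequence in advance and archiving the outputs.

from itertools import product

class TinyAgent:
    """Toy stand-in for a detailed simulation: it has internal state,
    and the causal structure lives in how that state evolves."""
    def __init__(self):
        self.state = 0
    def respond(self, stimulus):
        self.state = (self.state * 31 + stimulus) % 1000
        return self.state % 7

def build_lookup_table(max_len, alphabet=range(4)):
    """Build the table by exhaustively running fresh simulations over every
    possible input history up to max_len, recording only the outputs."""
    table = {}
    for length in range(1, max_len + 1):
        for history in product(alphabet, repeat=length):
            agent = TinyAgent()
            for stimulus in history:
                output = agent.respond(stimulus)
            table[history] = output  # archive the final response only
    return table

# The table reproduces the agent's behavior for short exchanges...
table = build_lookup_table(max_len=3)
live = TinyAgent()
history = []
for s in (2, 1, 3):
    history.append(s)
    assert live.respond(s) == table[tuple(history)]
# ...but it is just a searchable archive of recordings of the simulation:
# building it already required running the simulation once per possible
# input sequence, and its size blows up exponentially with the length of
# the conversation.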

Once again, I'm making the assumption that there's *some* set of objective
psychophysical laws, and that these laws would tell you a lookup table
doesn't give rise to the same experiences as a detailed simulation. Another
way of saying this is that I think multiple physical instantiations of a
simulation add to the global probability of the associated conscious
experience, but viewing a recording multiple times does not. (The reason to
believe that multiple physical instantiations raise the probability is that
if certain types of physical systems are more common in the multiverse, that
will make certain types of experiences more probable, explaining why 'laws
of physics going crazy'-type experiences are unlikely.)
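
Just to illustrate what I mean by a global measure, here's a toy calculation (the numbers are entirely made up): if the measure simply weights each type of experience by how many physical instantiations of the corresponding computation exist across the multiverse, the common law-abiding histories swamp the rare ones, and merely replaying a recording wouldn't add to anyone's count.

# Toy measure over experience types, weighted by instantiation count.
# The counts below are invented purely for illustration.
instantiation_counts = {
    "world continuing as normal": 10**12,
    "laws of physics suddenly breaking down": 3,
}
total = sum(instantiation_counts.values())
for experience, count in instantiation_counts.items():
    print(f"{experience}: probability {count / total:.2e}")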

> > what they mean by the term
> > "literary goodness" is completely different from the quantity in the
> > axioms.
>
>Of course I assume the goodness axiom operates on the universes
>primitives in conjunction with the other axioms, producing its own
>unique theorems, which would affect experiments, evolution and brain
>operation.

Even if that were possible, if we were to read one of the "great" novels
from such a universe we might still think it was garbage...we might say that
the laws of such a universe were systematically biasing the aesthetic
judgements of the inhabitants of that universe.

Anyway, remember that my original comment was that it doesn't make sense to
have two universes that are *identical* in terms of both physical events and
conscious experiences, but which differ in terms of the "real" literary
goodness of various books. OTOH, it does seem to make sense to imagine a
universe physically like ours but in which all the inhabitants are
"zombies", even if that turns out to be impossible under whatever theory of
consciousness we eventually adopt. If this intuition is correct, it tells
us that consciousness is not the same sort of thing as literary goodness or
cuteness.

> > "mind fire" (one of my favorite parts of the book, incidentally)
>
>Thanks very much for the comment. Some of the reviewers thought
>I'd simply gone loony there.

Yeah, whenever you take A.I. seriously and extrapolate the consequences, you
tend to come up with loony-sounding scenarios...but the "mind fire" idea
really made a lot of sense to me when I started thinking about it (although
it's possible future posthuman intelligences could get more computational
power by travelling through a wormhole into a designer 'baby universe' or
something similar). I also enjoyed the whole discussion on "Exes" in
general, like the section about the differences in the societies of those on
the moving frontier of inhabited space and those in the crowded center.

Jesse Mazer

Received on Sun Feb 04 2001 - 22:43:15 PST
