Re: 3 possible views of "consciousness"

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Tue, 30 Jan 2001 20:01:44 -0500

> > ... if a given system implements all possible computations then no
> > folk-psychological explanation is ultimately better than any other.
>
>I suggest you should pick the one that provides you the most effective
>interaction, of the ones you're able to muster. That will be better for
>you, at least, if not better in an absolute cosmic sense.

But it seems like, in your view, planning for my own future would just be a
matter of "interpretation" too. There are plenty of Platonic worlds that
have histories identical to this one up until the present, and then suddenly
the laws of physics go crazy or something. Can I become Superman by using
an interpretation that says the law of gravity will be suspended for me and
me alone?

The key question here is whether reality has an objective global structure
beyond "all possibilities exist"--for example, a weight or probability
measure on the set of all possible worlds or experiences. I think that my
experiences are in some sense imposed on me by external reality--a good
theory of everything should give some explanation for why I observe myself
to be in a lawlike, consistent universe. It's possible that even if there
is an answer, it's beyond our abilities to comprehend...but since we really
have no idea, we might as well try to look for some candidate theories.

> > My view is that a theory of consciousness should involve a one-to-one
> > mapping between subjective experiences and *some* set of mathematical
> > structures, but I don't think the set of all "computations" is a good
> > candidate. I think we need something more along the lines of "isomorphic
> > causal structures," but there doesn't yet seem to be a good way to define
> > this intuition...Chalmers' paper at
> > <http://www.u.arizona.edu/~chalmers/papers/rock.html>
> > discusses his views on the subject.
>
>I see attempts (notably Chalmers') to classify mappings into valid and
>invalid implementations as arbitrary lines in the sand, good for
>starting arguments, but otherwise having no significance or benefit.

Well, certainly any mapping is OK for you to make if you want. But I think
Chalmers' idea is that there are some sort of psychophysical laws that
determine a correspondence between the physical and the mental, so that the
frequency of certain physical patterns determines the probability of
experiencing certain first-person events. So even though you're free to
choose any mapping that interests you, reality may have a kind of "preferred
mapping" that determines the probability I will experience one outcome or
another.

In the many-worlds interpretation all physically allowable patterns of
matter/energy probably exist somewhere, but different patterns may have
different measure--some may be more "frequent" or more "probable" than
others (although physicists haven't yet found a way to define global
probability in the MWI). Given some set of psychophysical laws, you could
use this to derive the fact that from a first-person point of view I am more
likely to observe certain possibilities than others.

But even if we grant the possibility of psychophysical laws, it seems that
there are strong arguments for the idea that instances of the same
computation lead to the same subjective experience--for example, imagine
gradually replacing each neuron with a chip that responds to the same inputs
with the same outputs. So then we get the same problem, that it seems sort
of arbitrary to say that a computer is a "good" implementation but a rock is
not.

But I think something like "isomorphic causal structure" would make more
sense, although it's hard to make this notion precise. It seems like a
computer simulation of my brain should have the same causal structure as my
brain, even if the computer running that simulation is from a vanished
civilization and no external observers have any idea what the shifting 1's
and 0's are supposed to "represent." Likewise, even if the "mind fire" you
talk about in "Robot" (one of my favorite parts of the book, incidentally)
manages to squeeze huge computational power into the tiniest particle
dynamics in a way that's totally incomprehensible to outside observers at
our level of intelligence, such observers could still see the causal
structure of these dynamics--they just wouldn't know what they "mean." In
contrast, I don't believe advanced minds are ever going to download
themselves into the ticks of a clock.
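
To make that a bit more concrete, here is a toy sketch of my own (purely
hypothetical Python, not anything from "Robot" or from Chalmers) of what I
mean by seeing causal structure without knowing meaning: an observer can
record which states of an opaque process follow which, even though nothing
tells them what, if anything, those states represent.

# Toy sketch (my own, hypothetical): observing the transition structure of
# an opaque process without knowing what its states "mean."
from collections import defaultdict

def opaque_dynamics(state):
    # Stand-in for particle dynamics we can watch but not interpret.
    return (state * 137 + 11) % 29

transitions = defaultdict(set)
state = 3
for _ in range(100):
    nxt = opaque_dynamics(state)
    transitions[state].add(nxt)   # record which state follows which
    state = nxt

# The who-causes-what graph is available to any observer, even though
# nothing here says what these states are "about."
for s in sorted(transitions):
    print(s, "->", sorted(transitions[s]))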

It's true that under the right mapping, the ticks of a clock can be seen as
doing any "computation" you please, including a simulation of an intelligent
A.I. But if you wanted to actually implement this mapping in the physical
world so you could interact with the A.I., you'd end up reproducing the
causal structure of the A.I. on the computer (or brain) responsible for the
mapping. That's my intuition anyway--since I don't have a precise
definition of "causal structure" I can't be sure.

> > It doesn't make much sense to imagine two universes which are
> > identical in terms of both physical events and mental experiences
> > (if they exist), but which differ in a single way: in one,
> > Shakespeare is "really" better, and in the other Danielle Steel is.
>
>Sure it does. Imagine universes created by inferences from axiom
>systems. Two universes could have generating axioms identical except
>for the one about writing quality just as you've stated it. A
>Platonic universe can have "literary goodness" as a basic property
>just as it can have number, energy, or intention.

Even if it were part of a Platonic universe's axioms to assign a measure
called "literary goodness" to various pieces of literature, I don't think
that would settle much--the values could be pretty arbitrary, and observers
in the universe would be free to say that what they mean by the term
"literary goodness" is completely different from the quantity in the axioms.
But universes with different psychophysical laws would make different
experiences more or less likely, and the observers' opinions wouldn't change
that...just as my opinions won't change the fact that if I jump off a
building I'm overwhelmingly likely to fall to the ground.

>I think, like the recent election, our disagreements are smaller
>than our uncertainties.

No doubt. But unwarranted speculations are what this list is all about,
right?

Jesse Mazer