Re: Implementation

From: <hal.domain.name.hidden>
Date: Mon, 26 Jul 1999 09:27:39 -0700

Russell Standish, <R.Standish.domain.name.hidden>, writes, quoting hal:
> > If we agree with this argument, we can have both supervenience and
> > computationalism, it seems to me. We agree that Maudlin's machine changes
> > the program which is instantiated, but we claim that the new program
> > is also conscious.
> >
> > Hal
>
> Nice try, but I think a brain in a resting state listening to music is
> so much more complex in its processing of "counterfactuals" than the
> Olympia example. There must be a dividing line somewhere between the
> two examples - where the nonconscious entity crosses a threshold to
> consciousness.

This is certainly possible, but the point is that my example shows
that the conclusion of Maudlin's paper is technically incorrect. It is
possible to have both physical supervenience (where consciousness depends
on physical activity) and computationalism (where consciousness depends
on instantiating one of a set of conscious programs).

And further, I showed that it is plausible that for at least some cases,
pruning the counterfactual tree would not eliminate consciousness.
By that I mean, if you consider all possible inputs to the program as
producing a tree of possibilities (like the familiar many-worlds tree),
then when we eliminate some counterfactuals it is like pruning the tree
(eliminating some branches). I argued that it seems unlikely that
pruning even some remote part of the tree would always eliminate
consciousness; hence consciousness can tolerate some amount of pruning.
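To make the pruning picture concrete, here is a rough sketch in Python
(purely illustrative; the names and structure are mine, not anything
from Maudlin's paper): the counterfactual structure is a tree of
possible input histories, and pruning simply removes branches the
machine is no longer equipped to handle.

# Illustrative sketch only: a toy "counterfactual tree" and a pruning
# operation.  The names here are my own, not Maudlin's.

class Node:
    """One point in the tree of possible input histories."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []  # branches = counterfactual continuations

def count_branches(node):
    """Number of leaf branches below this node."""
    if not node.children:
        return 1
    return sum(count_branches(c) for c in node.children)

def prune(node, should_remove):
    """Remove every subtree for which should_remove(child) is true."""
    node.children = [c for c in node.children if not should_remove(c)]
    for c in node.children:
        prune(c, should_remove)

# One "snip": removing a single remote branch from a tiny tree.
tree = Node("start", [Node("a", [Node("aa"), Node("ab")]), Node("b")])
print(count_branches(tree))              # 3 branches before pruning
prune(tree, lambda n: n.label == "ab")   # light pruning: one remote snip
print(count_branches(tree))              # 2 branches remain

The question at issue is then whether any such snip, light or heavy,
should make a difference to whether the program being run is conscious.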

As you say, it is possible that "light pruning" would leave consciousness
intact, while "heavy pruning" would eliminate it. But why would
you say that? What motivates your belief that heavy pruning of the
counterfactual tree eliminates consciousness?

Our reason for believing that rested, I think, on an error in
understanding Maudlin's argument. We agreed that the TM was conscious
in virtue of running its program. When the change in counterfactual
circumstances made it so that it was no longer running *that* program,
we fell into the mistake of believing that made it no longer conscious.
What we forgot was that there is more than one program that can make
a computer conscious. Showing that a superficial change makes it stop
running one specific program does not imply, as Maudlin suggests, that
the change also makes it stop being conscious.

So it seems to me we need a new argument for why the computer sans
counterfactuals should not be considered conscious.

You suggest that heavy but not light pruning eliminates consciousness,
but this faces the problem that Chalmers calls "fading qualia". It's one
thing to believe in zombies, beings which act conscious but are not,
as in a heavily pruned counterfactual tree. But it is harder to accept
a continuous range of cases, from light to heavy pruning, with
consciousness at one end and zombiehood (or at least unconsciousness)
at the other. What happens in the intermediate cases?

Either consciousness is lost gradually, with sensations becoming
less intense and fading away as we prune more deeply, or it is lost
suddenly, and we have a case where one additional "snip", one additional
counterfactual possibility eliminated, eliminates consciousness.

The first seems impossible because, by hypothesis, behavior is
unchanged, so people never comment on their loss of consciousness.
We have to wonder what it would be like to be "only
a little bit conscious" but to continue to behave normally. If it is
possible to be in that state, how would it differ subjectively from
our present state? And why would we be unable to comment on the effects?

The second seems unlikely, as one counterfactual possibility is much like
another, and with the tree so dense it is hard to imagine that pruning one
little possibility from the astronomically numerous branches could make
a difference. Coming up with a theory of which programs are conscious
is going to be difficult, but creating one which has this behavior is
going to be nearly impossible, especially since there is no hint in
the behavior of the computer as to when its consciousness "winks out".
Any theory of consciousness which assumes such phenomena had better have
a good reason for believing that pruning must eliminate consciousness
eventually, and I'd like to hear what such a reason might be.

Hal