Re: Implementation

From: Russell Standish <R.Standish.domain.name.hidden>
Date: Tue, 27 Jul 1999 12:49:57 +1000 (EST)

>
> Russell Standish, <R.Standish.domain.name.hidden>, writes, quoting hal:
> > > If we agree with this argument, we can have both supervenience and
> > > computationalism, it seems to me. We agree that Maudlin's machine changes
> > > the program which is instantiated, but we claim that the new program
> > > is also conscious.
> > >
> > > Hal
> >
> > Nice try, but I think a brain in a resting state listening to music is
> > so much more complex in its processing of "counterfactuals" than the
> > Olympia example. There must be a dividing line somewhere between the
> > two examples - where the nonconscious entity crosses a threshold to
> > consciousness.
>
> This is certainly possible, but the point is that my example shows
> that the conclusion of Maudlin's paper is technically incorrect. It is
> possible to have both physical supervenience (where consciousness depends
> on physical activity) and computationalism (where consciousness depends
> on instantiating one of a set of conscious programs).
>
> And further, I showed that it is plausible that for at least some cases,
> pruning the counterfactual tree would not eliminate consciousness.
> By that I mean, if you consider all possible inputs to the program as
> producing a tree of possibilities (like the familiar many-worlds tree),
> then when we eliminate some counterfactuals it is like pruning the tree
> (eliminating some branches). I argued that it seems unlikely that
> literally any pruning of any remote part of the tree would eliminate
> consciousness, hence it can tolerate some amount of pruning.
>
> As you say, it is possible that "light pruning" would leave consciousness
> intact, while "heavy pruning" would eliminate it. But why would
> you say that? What motivates your belief that heavy pruning of the
> counterfactual tree eliminates consciousness?
>
> Our reason for believing that was, I believe, based on an error in
> understanding Maudlin's argument. We agreed that the TM was conscious
> in virtue of running its program. When the change in counterfactual
> circumstances made it so that it was no longer running *that* program,
> we fell into the mistake of believing that made it no longer conscious.
> What we forgot was that there is more than one program that can make
> a computer conscious. Showing that a superficial change makes it stop
> running one specific program does not imply, as Maudlin suggests, that
> the change also makes it stop being conscious.
>
> So it seems to me we need a new argument for why the computer sans
> counterfactuals should not be considered conscious.
>
> You suggest that heavy but not light pruning eliminates consciousness,
> but this faces the problem that Chalmers calls "fading qualia". It's one
> thing to believe in zombies, beings which act conscious but are not,
> as in a heavily pruned counterfactual tree. But it's hard when we
> have a continuous range of cases, from light to heavy pruning, with
> consciousness at one end and zombiehood (or at least unconsciousness)
> at the other. What happens in the intermediate cases?
>
> Either consciousness is lost gradually, with sensations becoming
> less intense and fading away as we prune more deeply, or it is lost
> suddenly, and we have a case where one additional "snip", one additional
> counterfactual possibility eliminated, eliminates consciousness.
>
> The first seems impossible because people never comment on their loss
> of consciousness. We have to wonder what it would be like to be "only
> a little bit conscious" but to continue to behave normally. If it is
> possible to be in that state, how would it differ subjectively from
> our present state? And why would we be unable to comment on the effects?
>
> The second seems unlikely, as one counterfactual possibility is much like
> another, and with the tree so dense it is hard to imagine that pruning one
> little possibility from the astronomically numerous branches could make
> a difference. Coming up with a theory of which programs are conscious
> is going to be difficult, but creating one which has this behavior is
> going to be nearly impossible, especially since there is no hint in
> the behavior of the computer as to when its consciousness "winks out".
> Any theory of consciousness which assumes such phenomena had better have
> a good reason for believing that pruning must eliminate consciousness
> eventually, and I'd like to hear what such a reason might be.
>
> Hal
>

I do believe that light pruning will not affect consciousness, but that
heavy pruning will. The metaphor here is of a percolation threshold,
or "when does a network become fully connected". If you take a densely
connected network, then removing links does not change the property
that the network is fully connected, until you remove a critical link
("the straw that broke the camel's back"). One cannot say which link
is the critical link, indeed it will depend on the order in which you
remove links. However when you remove all links from the network, the
network is obviously unconnected.
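
To make the percolation metaphor concrete, here is a minimal Python
sketch (the node count, the use of a complete graph, and the names
is_connected and prune_until_disconnected are purely illustrative
assumptions on my part, not anything from Maudlin or the earlier
posts): start from a fully connected network, remove links one at a
time in random order, and count how many removals it takes before the
network first falls apart.

    # Minimal sketch: prune links from a fully connected network at random
    # and report when connectivity is first lost.  All numbers are
    # illustrative and carry no significance.
    import random
    from collections import deque

    def is_connected(nodes, edges):
        """Breadth-first search: is every node reachable from the first?"""
        adjacency = {n: set() for n in nodes}
        for a, b in edges:
            adjacency[a].add(b)
            adjacency[b].add(a)
        seen = {nodes[0]}
        queue = deque([nodes[0]])
        while queue:
            for neighbour in adjacency[queue.popleft()]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return len(seen) == len(nodes)

    def prune_until_disconnected(n=20, seed=None):
        """Remove edges of the complete graph K_n in random order; return
        the number of removals at which the network first falls apart."""
        rng = random.Random(seed)
        nodes = list(range(n))
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
        rng.shuffle(edges)
        removed = 0
        while edges and is_connected(nodes, edges):
            edges.pop()        # one more "snip"
            removed += 1
        return removed

    if __name__ == "__main__":
        for trial in range(3):
            print(prune_until_disconnected(n=20, seed=trial))

Run it a few times and the critical removal comes at a different point
in each run, which is exactly the sense in which one cannot say in
advance which link is the critical one.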

I believe a system needs sufficient options and indeterminism for free
will to operate before it can be considered conscious. I also believe
that this indeterminism must be supplied internally rather than
externally; however, I'm not 100% convinced of this, and could be
persuaded otherwise.

As you say, it is very difficult to measure the property of
consciousness. One can assume consciousness as a useful model of a
system, and propose a Turing test as a procedure to test how good this
model is. However, it is always possible to _artificially_ construct a
system that passes the Turing test without being conscious. The
probability of such a system arising naturally (by evolution, say),
however, must be minute.

                                                        Cheers

----------------------------------------------------------------------------
Dr. Russell Standish                     Director
High Performance Computing Support Unit,
University of NSW                        Phone 9385 6967
Sydney 2052                              Fax   9385 6965
Australia                                R.Standish.domain.name.hidden
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Received on Mon Jul 26 1999 - 19:59:36 PDT
