RE: computationalism and supervenience

From: Colin Geoffrey Hales <c.hales.domain.name.hidden>
Date: Sat, 16 Sep 2006 18:10:56 +1000 (EST)

>
> Colin Hales writes:
>
>> Please consider the plight of the zombie scientist with a huge set
>> of sensory feeds and a similar set of effectors. All carry similar
>> signal encoding and all, in themselves, bestow no experiential
>> qualities on the zombie.
>> Add a capacity to detect regularity in the sensory feeds.
>> Add a scientific goal-seeking behaviour.
>> Note that this zombie...
>> a) has the internal life of a dreamless sleep
>> b) has no concept or percept of body or periphery
>> c) has no concept that it is embedded in a universe.
>> I put it to you that science (the extraction of regularity) is the
>> science of zombie sensory fields, not the science of the natural
>> world outside the zombie scientist. No amount of creativity (except
>> maybe random choices) would ever lead to any abstraction of the
>> outside world that gave it the ability to handle novelty in the
>> natural world outside the zombie scientist. No matter how
>> sophisticated the sensory feeds and any guesswork as to a model
>> (abstraction) of the universe, the zombie would eventually find
>> novelty invisible because the sensory feeds fail to depict the
>> novelty, i.e. same sensory feeds for different behaviour of the
>> natural world. Technology built by a zombie scientist would
>> replicate zombie sensory feeds, not deliver an independently
>> operating novel chunk of hardware with a defined function (if the
>> idea of function even has meaning in this instance).
>> The purpose of consciousness is, IMO, to endow the cognitive agent
>> with at least a repeatable (not accurate!) simile of the universe
>> outside the cognitive agent so that novelty can be handled. Only
>> then can the zombie scientist detect arbitrary levels of novelty and
>> do open-ended science (or survive in the wild world of novel
>> environmental circumstance). In the absence of the functionality of
>> phenomenal consciousness, and with finite sensory feeds, you cannot
>> construct any world-model (abstraction) in the form of an innate
>> (a-priori) belief system that will deliver an endless ability to
>> discriminate novelty. In a very Gödelian way a limit would
>> eventually be reached where the abstracted model could not make any
>> prediction that can be detected. The zombie is, in a very real way,
>> faced with 'truths' that exist but can't be accessed/perceived. As
>> such its behaviour will be fundamentally fragile in the face of
>> novelty (just like all computer programs are).
>> -----------------------------------
>> Just to make the zombie a little more real... consider the
>> industrial control system computer. I have designed and installed
>> hundreds of them, and wired up tens (hundreds?) of thousands of
>> sensors and an unthinkable number of kilometers of cables. (NEVER
>> again!) In all cases I put it to you that the phenomenal content of
>> sensory connections may, at best, be characterised as whatever it is
>> like to have electrons crash through wires, for that is what is
>> actually going on. As far as the internal life of the CPU is
>> concerned... whatever it is like to be an electrically noisy hot
>> rock, regardless of the program... although the character of the
>> noise may alter with different programs!
>> I am a zombie expert! No, that didn't come out right... erm...
>> perhaps... "I think I might be a world expert in zombies"... yes,
>> that's better.
>> :-)
>> Colin Hales
>
> I've had another think about this after reading the paper you sent
> me. It seems that you are making two separate claims. The first is
> that a zombie would not be able to behave like a conscious being in
> every situation: specifically, when called upon to be scientifically
> creative. If this is correct it would be a corollary of the Turing
> test, i.e., if it behaves as if it is conscious in every situation,
> then it is conscious. However, you are being quite specific in
> describing what types of behaviour could only occur in the setting of
> phenomenal consciousness. Could you perhaps be even more specific and
> give an example of the simplest possible behaviour or scientific
> theory which an unconscious machine would be unable to mimic?
>
> The second claim is that a computer could only ever be a zombie, and
> therefore could never be scientifically creative. However, it is
> possible to agree with the first claim and reject this one. Perhaps
> if a computer were complex enough to truly mimic the behaviour of a
> conscious being, including being scientifically creative, then it
> would indeed be conscious. Perhaps our present computers are either
> unconscious because they are too primitive, or they are indeed
> conscious but at the very low end of a consciousness continuum, like
> single-celled organisms or organisms with relatively simple nervous
> systems like planaria.
>
> Stathis Papaioannou

COLIN:
Hi.... a bunch of points...

1) Re the paper... it is undergoing review and growing.
The point of the paper is to squash the solipsism argument, in
particular the specific flavour of it that deals with 'other minds', as
it has (albeit tacitly) defined science's attitude to what is and is
not scientific evidence. As such I am only concerned with scientific
behaviour. The mere existence of a capacity to handle exquisite novelty
demands the existence of the functionality of phenomenal consciousness
within the scientist. Novel technology exists, ergo science is
possible, ergo phenomenal consciousness exists. Phenomenal
consciousness is proven by the existence of novel technology. More than
one scientist has produced novel technology, ergo there is more than
one 'mind' (= collection of phenomenal fields), ergo other minds do
exist. Ergo solipsism is false. The problem is that along the way you
have also proved that there is an external 'reality', which is a bit of
a bonus. So all the philosophical arguments about 'existence' that have
wasted so much of our time are actually just that: a waste of time.
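
(Just to make the chain of 'ergo's explicit, here is a minimal sketch
of the argument as a bare implication chain in Lean. The proposition
names are my own shorthand, not the paper's, and of course it is the
premises, not the inference, that carry all the weight.)

-- Toy formalisation of the argument chain above (my labels, not the
-- paper's). Each hypothesis is a premise taken on trust here; the
-- theorem only shows that the conclusion follows if the premises hold.
theorem other_minds_exist
    (NovelTech Science PhenConsc OtherMinds : Prop)
    (h1 : NovelTech)                 -- novel technology exists
    (h2 : NovelTech → Science)       -- ergo open-ended science is possible
    (h3 : Science → PhenConsc)       -- ergo phenomenal consciousness exists
    (h4 : PhenConsc → OtherMinds)    -- more than one scientist, ergo other minds
    : OtherMinds :=
  h4 (h3 (h2 h1))

Solipsism being the denial of other minds, OtherMinds above is just its
negation spelled positively.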

2) The Turing test. I think the Turing test is a completely misguided
idea. It is based on the assumption that abstract (as-if) computation
can fully replicate (has access to all the same information as)
computation performed by the natural world. This assumption can be made
obvious as follows:
Q. What is it like to be a human? It is like being a mind. There is
information delivered into the mind by the action of brain material
which bestows on the human intrinsic knowledge about the natural world
outside the human, in the form of phenomenal consciousness. This
knowledge is not a model/abstraction, but a literal mapping of what's
there (no matter how mysterious its generation may seem). The zombie
does not have this. Nor does the Turing machine. A Turing machine is a
zombie. No matter what the program, it's always 'like a tape and tape
reader' to be a Turing machine. The knowledge provided by phenomenal
consciousness is not an abstraction (a programmed model); it is a
direct mapping.

3) RE:
> and give an example of the simplest possible behaviour or scientific
theory which an
> unconscious machine (UM) would be unable to mimic?

I think this is a meaningless quest. It depends on (a) the
sensory/actuation facilities and (b) the a-priori knowledge bestowed
upon the UM by its human progenitor.

No matter how good the a-priori abstraction given by the human, the UM
will do science on its sensory feeds until it can no longer distinguish
any effect, because the senses cannot discriminate it (if the UM has
any idea what this means anyway - remember it has no internal life, no
idea it is in any universe, no experience of its sensory feeds... it
has no idea there is anything around it, like a human... it's 'not
there'). So this poor UM will learn within the confines of an
ecological niche that it doesn't even know it is in, reach a point
where, no matter what it does, nothing novel can be detected through
its sensory feeds, and then it will stay that way for good. To an
outside observer it would look very weird. It would also fall victim to
any perceptual failure not consistent with its survival.
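
(To illustrate the 'same sensory feeds for different behaviour of the
natural world' point in the crudest possible way, here is a toy sketch,
entirely my own construction with made-up numbers, of a
finite-resolution sensory feed that renders two different world
behaviours indistinguishable to anything that only ever sees the feed.)

# Toy illustration (my construction, not from the paper): a fixed,
# finite-resolution sensor maps distinct world states onto the same
# encoded signal, so a regularity-detector that only sees the feed
# cannot register the novelty at all.

def sensor(world_state: float) -> int:
    """The sensory feed: the world state quantised to whole units."""
    return round(world_state)

def feed(world_trajectory):
    """Everything the unconscious machine ever receives."""
    return [sensor(s) for s in world_trajectory]

# Two genuinely different behaviours of the natural world (made-up values):
familiar_world = [0.9, 2.1, 2.9, 4.1]   # the regime the machine was built for
novel_world    = [1.1, 1.9, 3.1, 3.9]   # different dynamics, i.e. true novelty

print(feed(familiar_world))  # [1, 2, 3, 4]
print(feed(novel_world))     # [1, 2, 3, 4] -> identical feed; novelty invisible

The rounding is not the point, of course; the point is that any fixed,
finite encoding has blind spots of this kind, and the machine has no
independent access to the world with which to notice them.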

4) Re a fatal test for the Turing machine? Give it exquisite novelty by
asking it to do science on an unknown area of the natural world. Proper
science. It will fail because it does not know there is an outside
world. Get it to make/guide the creation of novel technology. This is a
human behaviour that a Turing machine will never be able to do, because
the humans have not done it yet either: put the Turing machine and a
human scientist together and get them to do science on true novelty.
The Turing machine cannot have any a-priori knowledge of the natural
world in question, because the humans who would give it to the machine
don't have it either!

This is the real test. Can a Turing machine do science? No way. There
is no 'mimicking' consciousness... as a statement it is an oxymoron.
--------------------------------
BTW I completely agree about the continuum of consciousness. I believe
it started with eukaryotes having 'proto-experiences'. In a generalised
model of cognition and consciousness, all critters have varying levels
of phenomenal consciousness and intellectual faculties for using it to
survive in an ecological niche... however... this is not the point of
my paper... the paper was to prove that phenomenal consciousness is
necessary for scientific behaviour...

Having reached that point in a proof, you can then look at other
behaviours (like tennis!) and other species (like bats and zombies).
The key aspect of the idea is that truly scientific behaviour is the
only one we can use as a real proof in respect of the existence of
consciousness, as it makes real demands of the external world and
relates them directly, in a structured way, to the internal life of the
scientist, in a manner that has nothing to do with the scientist itself
(it's about unknown/novel natural laws operating outside the
scientist).

Nobody has ever thought about this like this, have they?... I just had
this shivery feeling... that maybe I've tripped over something
useful... I have found it so weird lately talking to scientists,
standing there... the evidence for consciousness in front of me...
saying "there's no evidence..." :-) Is it any wonder scientists can't
see the evidence... they ARE the evidence!

regards,

Colin Hales




