RE: computationalism and supervenience

From: Colin Hales <C.Hales.domain.name.hidden>
Date: Tue, 12 Sep 2006 12:54:50 +1000

Stathis Papaioannou
<snip>
> Maybe this is a copout, but I just don't think it is even logically
> possible to explain what consciousness
> *is* unless you have it. It's like the problem of explaining vision to a
> blind man: he might be the world's
> greatest scientific expert on it but still have zero idea of what it is
> like to see - and that's even though
> he shares most of the rest of his cognitive structure with other humans,
> and can understand analogies
> using other sensations. Knowing what sort of program a conscious computer
> would have to run to be
> conscious, what the purpose of consciousness is, and so on, does not help
> me to understand what the
> computer would be experiencing, except by analogy with what I myself
> experience.
>
> Stathis Papaioannou
>

Please consider the plight of the zombie scientist with a huge set of
sensory feeds and a similar set of effectors. All carry similar signal
encoding and none, in themselves, bestow any experiential qualities on the
zombie.

Add a capacity to detect regularity in the sensory feeds.
Add a scientific goal-seeking behaviour.

Note that this zombie...
a) has the internal life of a dreamless sleep
b) has no concept or percept of body or periphery
c) has no concept that it is embedded in a universe.

I put it to you that such science (the extraction of regularity) is the
science of the zombie's sensory fields, not the science of the natural world
outside the zombie scientist. No amount of creativity (except perhaps random
choices) would ever lead to an abstraction of the outside world that gave the
zombie the ability to handle novelty in the natural world beyond it.

No matter how sophisticated the sensory feeds, and no matter how good any
guess at a model (abstraction) of the universe, the zombie would eventually
find novelty invisible, because the sensory feeds fail to depict the novelty:
i.e. the same sensory feeds result from different behaviour of the natural
world.
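
To make the point concrete, here is a rough toy sketch in Python (entirely my
own illustration; the "hidden mode", names and numbers are made up): two
different behaviours of the natural world produce the identical sensory feed,
so no regularity detector built purely on that feed can ever flag the second
behaviour as novel.

# Toy illustration only: a hidden aspect of the world that the zombie's
# sensor does not transduce. Two different world behaviours yield the
# same sensory reading, so the novelty never appears in the feed.

def world_step(hidden_mode):
    # Two different behaviours of the natural world outside the zombie...
    if hidden_mode == "familiar":
        return {"temperature": 20.0, "cause": "sun"}
    else:  # "novel"
        return {"temperature": 20.0, "cause": "volcano"}  # same reading, new cause

def zombie_sensor(world_state):
    # The feed carries only what the transducer transduces.
    return world_state["temperature"]

class RegularityDetector:
    """Extracts regularity from the feed: flags readings never seen before."""
    def __init__(self):
        self.seen = set()
    def is_novel(self, reading):
        novel = reading not in self.seen
        self.seen.add(reading)
        return novel

detector = RegularityDetector()
for mode in ["familiar", "familiar", "novel"]:
    reading = zombie_sensor(world_step(mode))
    print(mode, reading, "flagged as novel?", detector.is_novel(reading))
# The "novel" behaviour is never flagged: the feed is identical, so the
# novelty is invisible to any model built purely on the feed.

Swap in a richer sensor model and the same wall simply appears one level
further out, because the feed is always finite.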

Technology built by a zombie scientist would replicate zombie sensory feeds,
not deliver an independently operating novel chunk of hardware with a
defined function (if the idea of function even has meaning in this instance).

The purpose of consciousness is, IMO, to endow the cognitive agent with at
least a repeatable (not accurate!) simile of the universe outside the
cognitive agent so that novelty can be handled. Only then can the zombie
scientist detect arbitrary levels of novelty and do open-ended science (or
survive in the wild world of novel environmental circumstances).

In the absence of the functionality of phenomenal consciousness, and with
finite sensory feeds, you cannot construct any world-model (abstraction) in
the form of an innate (a priori) belief system that will deliver an endless
ability to discriminate novelty. In a very Gödelian way, a limit would
eventually be reached where the abstracted model could not make any
prediction that could be tested. The zombie is, in a very real way, faced
with 'truths' that exist but can't be accessed/perceived. As such its
behaviour will be fundamentally fragile in the face of novelty (just as all
computer programs are).
-----------------------------------
Just to make the zombie a little more real... consider the industrial
control system computer. I have designed and installed hundreds of them, and
wired up tens (hundreds?) of thousands of sensors and an unthinkable number
of kilometres of cable. (NEVER again!) In all cases I put it to you that the
phenomenal content of the sensory connections may, at best, be characterised
as whatever it is like to have electrons crash through wires, for that is
what is actually going on. As far as the internal life of the CPU is
concerned... whatever it is like to be an electrically noisy hot rock,
regardless of the program... although the character of the noise may alter
with different programs!

I am a zombie expert! No, that didn't come out right... erm...
perhaps... "I think I might be a world expert in zombies"... yes, that's
better.
:-)
Colin Hales

