Stathis Papaioannou wrote:
> Colin Hales writes:
>
>
>>Please consider the plight of the zombie scientist with a huge set of
>>sensory feeds and similar set of effectors. All carry similar signal
>>encoding and all, in themselves, bestow no experiential qualities on the
>>zombie.
>>
>>Add a capacity to detect regularity in the sensory feeds.
>>Add a scientific goal-seeking behaviour.
>>
>>Note that this zombie...
>>a) has the internal life of a dreamless sleep
>>b) has no concept or percept of body or periphery
>>c) has no concept that it is embedded in a universe.
>>
>>I put it to you that science (the extraction of regularity) is the science
>>of the zombie's sensory fields, not the science of the natural world outside
>>the zombie scientist. No amount of creativity (except maybe random choices)
>>would ever lead to an abstraction of the outside world that gave the zombie
>>the ability to handle novelty in the natural world beyond it.
>>
>>No matter how sophisticated the sensory feeds, and no matter what guesswork
>>goes into a model (abstraction) of the universe, the zombie would eventually
>>find novelty invisible because the sensory feeds fail to depict it, i.e. the
>>same sensory feeds result from different behaviour of the natural world.
>>
>>Technology built by a zombie scientist would replicate zombie sensory feeds,
>>not deliver an independently operating novel chunk of hardware with a
>>defined function (if the idea of function even has meaning in this instance).
>>
>>The purpose of consciousness is, IMO, to endow the cognitive agent with at
>>least a repeatable (not necessarily accurate!) simile of the universe outside
>>the cognitive agent so that novelty can be handled. Only then can the zombie
>>scientist detect arbitrary levels of novelty and do open-ended science (or
>>survive in the wild world of novel environmental circumstance).
>>
>>In the absence of the functionality of phenomenal consciousness, and with
>>finite sensory feeds, you cannot construct any world-model (abstraction) in
>>the form of an innate (a priori) belief system that will deliver an endless
>>ability to discriminate novelty. In a very Gödelian way, a limit would
>>eventually be reached where the abstracted model could make no prediction
>>that could be detected. The zombie is, in a very real way, faced with 'truths'
>>that exist but cannot be accessed or perceived. As such its behaviour will be
>>fundamentally fragile in the face of novelty (just as all computer programs
>>are).
>>-----------------------------------
>>Just to make the zombie a little more real... consider the industrial
>>control system computer. I have designed and installed hundreds of them,
>>wired up tens (hundreds?) of thousands of sensors and an unthinkable number
>>of kilometers of cable. (NEVER again!) In all cases I put it to you that the
>>phenomenal content of the sensory connections may, at best, be characterised
>>as whatever it is like to have electrons crash through wires, for that is
>>what is actually going on. As far as the internal life of the CPU is
>>concerned... whatever it is like to be an electrically noisy hot rock,
>>regardless of the program... although the character of the noise may alter
>>with different programs!
>>
>>I am a zombie expert! No that didn't come out right...erm....
>>perhaps... "I think I might be a world expert in zombies".... yes, that's
>>better.
>>:-)
>>Colin Hales
>
>
> I've had another think about this after reading the paper you sent me. It seems that
> you are making two separate claims. The first is that a zombie would not be able to
> behave like a conscious being in every situation: specifically, when called upon to be
> scientifically creative. If this is correct it would be a corollary of the Turing test, i.e.,
> if it behaves as if it is conscious in every situation, then it is conscious. However,
> you are being quite specific in describing which types of behaviour could only occur
> in the setting of phenomenal consciousness. Could you perhaps be even more specific
> and give an example of the simplest possible behaviour or scientific theory that an
> unconscious machine would be unable to mimic?
>
> The second claim is that a computer could only ever be a zombie, and therefore could
> never be scientifically creative. However, it is possible to agree with the first claim and
> reject this one. Perhaps if a computer were complex enough to truly mimic the behaviour
> of a conscious being, including being scientifically creative, then it would indeed be
> conscious. Perhaps our present computers are unconscious because they are too
> primitive, or perhaps they are indeed conscious, but at the very low end of a consciousness
> continuum, like single-celled organisms or organisms with relatively simple nervous
> systems, such as planaria.
>
> Stathis Papaioannou
I even know some spirit dualists who allow that spirit might attach to sufficiently
complex computers and hence make them conscious, or ensoul them.
Brent Meeker
Received on Fri Sep 15 2006 - 12:57:00 PDT