Stathis Papaioannou wrote:
>
> Brent meeker writes:
>
>>> I don't doubt that there is some substitution level that preserves 3rd person
>>> behaviour and 1st person experience, even if this turns out to mean copying
>>> a person to the same engineering tolerances as nature has specified for ordinary
>>> day-to-day life. The question is, is there some substitution level which preserves
>>> 3rd person behaviour but not 1st person experience? For example, suppose
>>> you carried around with you a device which monitored all your behaviour in great
>>> detail, created predictive models, compared its predictions with your actual
>>> behaviour, and continuously refined its models. Over time, this device might be
>>> able to mimic your behaviour closely enough that it could take over control of
>>> your body from your brain and no-one would be able to tell that the substitution
>>> had occurred. I don't think it would be unreasonable to wonder whether this copy
>>> experiences the same thing when it looks at the sky and declares it to be blue as
>>> you do before the substitution.
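A minimal sketch of the monitor-predict-refine loop just described, in Python. The frequency-table model, the situation/response encoding, and the 99.9% takeover threshold are illustrative assumptions only; nothing of the sort is specified in the thought experiment:

    from collections import Counter, defaultdict

    class TableModel:
        # Toy predictive model: remembers the most common response seen for
        # each situation. A real device would need something far richer.
        def __init__(self):
            self.counts = defaultdict(Counter)

        def predict(self, situation):
            seen = self.counts[situation]
            return seen.most_common(1)[0][0] if seen else None

        def update(self, situation, response):
            self.counts[situation][response] += 1

    class BehaviourMimic:
        # Monitors behaviour, predicts it, compares predictions with what the
        # person actually did, and refines its model: the loop described above.
        def __init__(self, model, threshold=0.999):
            self.model = model
            self.threshold = threshold   # made-up criterion for "close enough"
            self.hits = 0
            self.trials = 0

        def observe(self, situation, actual_response):
            predicted = self.model.predict(situation)
            self.trials += 1
            self.hits += int(predicted == actual_response)
            self.model.update(situation, actual_response)   # refine the model

        def ready_to_take_over(self):
            # Past this point its 3rd person behaviour is, by assumption,
            # indistinguishable from the original person's.
            return self.trials > 0 and self.hits / self.trials >= self.threshold

    mimic = BehaviourMimic(TableModel())
    mimic.observe("sees blue sky", "says 'the sky is blue'")

Whether the device, once it controls the body, experiences anything when it declares the sky blue is of course exactly the question being raised.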
>> That's a précis of Greg Egan's short story "The Jewel". I wouldn't call it unreasonable to wonder whether the copy experiences the same qualia, but I'd call it unreasonable to conclude on the stated evidence that it did not. In fact, I find it hard to think of what evidence would count against its having some kind of qualia.
>
> It would be a neat theory if any machine that processed environmental information
> in a manner analogous to an animal's had some level of conscious experience (and consistent
> with Colin's "no zombie scientists" hypothesis, although I don't think it is a conclusion he would
> agree with). It would explain consciousness as a corollary of this sort of information processing.
> However, I don't know how such a thing could ever be proved or disproved.
>
> Stathis Papaioannou
Things are seldom proved or disproved in science. Right now I'd say the evidence favors the no-zombie theory. The only evidence beyond observation of behavior that I can imagine is to map processes in the brain and determine how memories are stored and how the manipulation of symbolic and graphic representations is done. It might then be possible to understand how a computer/robot could achieve the same behavior with a different functional structure, analogous, say, to imperative vs. functional programs. But then we'd only be able to infer that the robot might be conscious in a different way. I don't see how we could infer that it was not conscious.
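To make the imperative-vs-functional analogy concrete, here is a minimal Python sketch: two routines with identical 3rd person behavior (the same outputs for the same inputs) but different internal organization. The task, summing squares, is an arbitrary illustration:

    from functools import reduce

    def sum_of_squares_imperative(xs):
        # Step-by-step mutation of an accumulator.
        total = 0
        for x in xs:
            total += x * x
        return total

    def sum_of_squares_functional(xs):
        # No mutable state: the same result expressed as a fold.
        return reduce(lambda acc, x: acc + x * x, xs, 0)

    # Externally indistinguishable, internally organized quite differently.
    assert sum_of_squares_imperative([1, 2, 3]) == sum_of_squares_functional([1, 2, 3]) == 14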
On a related point, it is often said here that consciousness is ineffable: what it is like to be someone cannot be communicated. But there's another side to this: it is exactly the content of consciousness that we can communicate. We can tell someone how we prove a theorem: we're conscious of those steps. But we can't tell someone how our brain came up with the proof (the Poincaré effect) or why it is persuasive.
Brent Meeker