Stathis Papaioannou wrote:
>
>
>
> Brent Meeker writes (quoting SP):
>
>
>>>>Consider a computer which is doing something (whether it is dreaming or
>>>>musing or just running is the point in question). If there is no
>>>>interaction between what it's running and the rest of the world I'd say
>>>>it's not conscious. It doesn't necessarily need an external observer
>>>>though. To invoke an external observer would require that we already
>>>>knew how to distinguish an observer from a non-observer. This just
>>>>pushes the problem away a step. One could as well claim that the walls
>>>>of the room which are struck by the photons from the screen constitute
>>>>an observer - under a suitable mapping of wall states. The computer
>>>>could, like a Mars rover, act directly on the rest of the world.
>>>
>>>
>>>The idea that we can only be conscious when interacting with the environment
>>>is certainly worth considering. After all, consciousness evolved in order to help
>>>the organism deal with its environment, and it may be wrong to just assume
>>>without further evidence that consciousness continues if all interaction with the
>>>environment ceases. Maybe even those activities which at first glance seem to
>>>involve consciousness in the absence of environmental interaction actually rely
>>>on a trickle of sensory input: for example, maybe dreaming is dependent on
>>>proprioceptive feedback from eye movements, which is why we only dream
>>>during REM sleep, and maybe general anaesthetics actually work by eliminating
>>>all sensory input rather than by a direct effect on the cortex. But even if all this
>>>is true, we could still imagine stimulating a brain which has all its sensory inputs
>>>removed so that the pattern of neural activity is exactly the same as it would
>>>have been had it arisen in the usual way. Would you say that the artificially
>>>stimulated brain is not conscious, even though everything up to and including
>>>the peripheral nerves is physically identical to and goes through the same
>>>physical processes as the normal brain?
>>
>>No. I already noted that we can't insist that interaction with the
>>environment is continuous. Maybe "potential interaction" would be
>>appropriate. But I note that even in your example you contemplate
>>"stimulating" the brain. I'm just trying to take what I consider an
>>operational definition and abstract it to the kind of
>>mathematical/philosophical definition that can be applied to questions
>>about rocks thinking.
>
>
> The brain-with-wires-attached cannot interact with the environment, because
> all its sense organs have been removed and the stimulation is just coming from
> a recording. Instead of the wires + recording we could say that there is a special
> group of neurons with spontaneous activity that stimulates the rest of the brain
> just as if it were receiving input from the environment. Such a brain would have
> no ability to interact with the environment, unless the effort were made to
> figure out its internal code and then manufacture sense organs for it - but I
> think that would be stretching the definition of "potential interaction". In any
> case, I don't see how "potential interaction" could make a difference.
Yet you had to refer to "stimulate...as if it were receiving input from the
environment" to create an example. If there were no potential interaction
there could be no "as if". So it seems to me that the potential interaction can be an
essential part of the definition. That's not to say that such a definition
is right - definitions aren't right or wrong - but it's a definition that
makes a useful distinction that comports with our common sense.
> If you had
> two brains sitting in the dark, identical in anatomy and electrical activity except
> that one has its optic nerves cut, would one brain be conscious and the other not?
Where did the brains come from? Since they had optic nerves, can we suppose
that they had the potential to see photons, and that they still have this
potential given replacement optic nerves? Not necessarily. Suppose one
came from a cat that was raised in complete darkness. We know
experimentally that such a cat can't see...even when there is light. The lack
of stimulus results in the brain not forming the structures necessary for
interpreting signals from the retina. Now suppose it were raised with no
stimulus whatever, even in utero. I conjecture that it would not "think" at
all - although there would be "computation", i.e. neurons firing in some
order. But it would no longer have the potential for interaction, even with
its own body.
But I think you do bring up a good point - the boundary between "brain" and
"environment" is clear enough for actual animals, but seems rather arbitrary
in the abstract.
Brent Meeker