Stathis Papaioannou wrote:
> 
> 
> Brent meeker writes:
> 
> 
>>>>>>I could make a robot that, having suitable thermocouples, would quickly withdraw its 
>>>>>>hand from a fire, but not be conscious of it.  Even if I provide the robot with 
>>>>>>"feelings", i.e. judgements about good/bad/pain/pleasure, I'm not sure it would be 
>>>>>>conscious.  But if I provide it with "attention" and memory, so that it noted the 
>>>>>>painful event as important and necessary to remember because of its strong negative 
>>>>>>affect, then I think it would be conscious.
>>>>>
>>>>>
>>>>>It's interesting that people actually withdraw their hand from the fire *before* they experience 
>>>>>the pain. The withdrawal is a reflex, presumably evolved in organisms with the most primitive 
>>>>>central nervous systems, while the pain seems to be there as an afterthought to teach us a 
>>>>>lesson so we won't do it again. Thus, from consideration of evolutionary utility, consciousness 
>>>>>does indeed seem to be a side-effect of memory and learning. 
>>>>
>>>>Even more curious, volitional action also occurs before one is aware of it. Are you 
>>>>familiar with the experiments of Benjamin Libet and Grey Walter?
>>>
>>>
>>>These experiments showed that in apparently voluntarily initiated motion, motor cortex activity 
>>>actually preceded the subject's awareness of his intention by a substantial fraction of a second. 
>>>In other words, we act first, then "decide" to act. These studies did not examine pre-planned 
>>>action (presumably that would be far more technically difficult), but it is easy to imagine the analogous 
>>>situation whereby the action is unconsciously "planned" before we become aware of our decision. In 
>>>other words, free will is just a feeling which occurs after the fact. This is consistent with the logical 
>>>impossibility of something that is neither random nor determined, which is what I feel my free will to be.
>>>
>>>
>>>
>>>>>I also think that this is an argument against zombies. If it were possible for an organism to 
>>>>>behave just like a conscious being, but actually be unconscious, then why would consciousness 
>>>>>have evolved? 
>>>>
>>>>An interesting point - but hard to give any answer before pinning down what we mean 
>>>>by consciousness.  For example, Bruno, Julian Jaynes, and Daniel Dennett have 
>>>>explanations; but they explain somewhat different consciousnesses, or at least 
>>>>different aspects.
>>>
>>>
>>>Consciousness is the hardest thing to explain but the easiest thing to understand, if it's your own 
>>>consciousness at issue. I think we can go a long way discussing it, assuming that we do know what 
>>>we are talking about even though we can't explain it. The question I ask is, why did people evolve 
>>>with this consciousness thing, whatever it is? The answer must be, I think, that it is a necessary 
>>>side-effect of the sort of neural complexity that underpins our behaviour. If it were not, and it 
>>>were possible that beings could behave exactly like humans and not be conscious, then it would 
>>>have been wasteful of nature to have provided us with consciousness. 
>>
>>This is not necessarily so.  First, evolution is constrained by what goes before. 
>>Its engineering solutions often seem Rube Goldberg-like, e.g. the backward retina in mammals. 
> 
> 
> Sure, but vision itself would not have evolved unnecessarily.
> 
> 
>>  Second, there is selection against some evolved feature only to the extent it has a 
>>(net) cost.  For example, Jaynes's explanation of consciousness conforms to these two 
>>criteria.  I think that any species that evolves intelligence comparable to ours will 
>>be conscious for reasons somewhat like those in Jaynes's theory.  They will be social, and this, 
>>combined with intelligence, will make language a good evolutionary move.  Once they 
>>have language, remembering what has happened in symbolic terms, in order to communicate 
>>and plan, will be an easy and natural development.  Whether that leads to hearing 
>>your own narrative in your head, as Jaynes supposes, is questionable; but it would be 
>>consistent with evolution: it takes advantage of existing structures and functions to 
>>realize a useful new function.
> 
> 
> Agreed. So consciousness is either there for a reason or it's a necessary side-effect of the sort 
> of brains we have and the way we have evolved. If the latter is the case, it's still theoretically 
> possible that we might have been unconscious had we evolved completely different kinds of 
> brains but similar behaviour - although I think that unlikely.
>  
> 
>>>This does not necessarily 
>>>mean that computers can be conscious: maybe if we had evolved with electronic circuits in our 
>>>heads rather than neurons, consciousness would not have been a necessary side-effect. 
>>
>>But my point is that this may come down to what we would mean by a computer being 
>>conscious.  Bruno has an answer in terms of what the computer can prove.  Jaynes (and 
>>probably John McCarthy) would say a computer is conscious if it creates a narrative 
>>of its experience which it can access as memory.
> 
> 
> Maybe this is a copout, but I just don't think it is even logically possible to explain what consciousness 
> *is* unless you have it. 
Not being *logically* possible means entailing a contradiction - I doubt that.  But 
anyway, you do have it, and you think I do because of the way we interact.  So if you 
interacted the same way with a computer, and you further found out that the computer 
was a neural network that had learned through interaction with people over a period 
of years, you'd probably infer that the computer was conscious - or at least you 
wouldn't be sure it wasn't.
> It's like the problem of explaining vision to a blind man: he might be the world's 
> greatest scientific expert on it but still have zero idea of what it is like to see - and that's even though 
> he shares most of the rest of his cognitive structure with other humans, and can understand analogies 
> using other sensations. Knowing what sort of program a computer would have to run to be 
> conscious, what the purpose of consciousness is, and so on, does not help me to understand what the 
> computer would be experiencing, except by analogy with what I myself experience. 
But that's true of everything.  Suppose we knew a lot more about brains, and we 
created an intelligent computer using a brain-like functional architecture, and it acted 
like a conscious human being; then I'd say we understood its consciousness better 
than we understand quantum field theory or global economics.
Brent Meeker
> 
> Stathis Papaioannou