On 02 Jan 2007, at 13:59, Stathis Papaioannou wrote:
>
>
> Mark Peaty writes:
>
>> SP: ' In the end, what is "right" is an irreducible personal belief,
>> which you can try to change by appeal to emotions or by example, but
>> not by appeal to logic or empirical facts. And in fact I feel much
>> safer that way: if someone honestly believed that he knew what was
>> "right" as surely as he knew 2+2=4, he would be a very dangerous
>> person. Religious fanatics are not dangerous because they want to do
>> evil, but because they want to do good. '
>> MP: I agree with this, saving only that, on a 'numbers' basis, there
>> are those whose personal evolution takes them beyond the dynamic of
>> 'good' or 'evil' into the domain of power for its own sake. This
>> entails complete loss of empathic ability and I think it could be
>> argued that such a person is 'legislating' himself out of the human
>> species.
>> MP: I think a key point with 'irreducible personal belief' is that
>> the persons in question need to acknowledge the beliefs as such and
>> take responsibility for them. I believe we have to point this out,
>> whenever we get the opportunity, because generally most people are
>> reluctant to engage in analysis of their own beliefs, in public
>> anyway. I think part of the reason for this is the cultural climate
>> [meme-scape?] in which Belief in a G/god/s or uncritical Faith are
>> still held to be perfectly respectable. This cultural climate is what
>> Richard Dawkins and Daniel Dennett have been criticising in recent
>> books and articles.
>> SP: 'I am not entirely convinced that comp is true'
>> MP: At the moment I am satisfied that 'comp' is NOT true, certainly
>> in any format that asserts that 'integers' are all that is needed.
>> 'Quantum' is one thing, but 'digital' is quite another :-) The main
>> problem [fact, I would prefer to say] is that existence is irreducible,
>> whereas numbers, or Number, are dependent upon something/s existing.
>
> I have fallen into sometimes using the term "comp" as short for
> "computationalism" as something picked up from Bruno. On the face of
> it, computationalism seems quite sensible: the best theory of
> consciousness and the most promising candidate for producing
> artificial intelligence/consciousness (if they are the same thing: see
> below). Assuming comp, Bruno goes through 8 steps in his Universal
> Dovetailer Argument (UDA), e.g. in this paper:
> http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm
>
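For concreteness, here is a toy sketch of the dovetailing idea behind the
UD (an illustration only, not the actual UD of the paper; the names
dovetail and programs are mine, and Python generators stand in for
programs in some fixed enumeration):

# Run every program in an infinite enumeration, a few steps at a time,
# so that no single non-halting program can block the others.
from itertools import count

def dovetail(programs):
    """Interleave the executions of programs(0), programs(1), ...

    At stage n, program n is started and every program started so far is
    advanced by one more step, so every step of every program is
    eventually reached.
    """
    running = []                            # generators started so far
    for n in count():
        running.append(iter(programs(n)))   # start program number n
        for i, prog in enumerate(running):
            try:
                step = next(prog)           # advance program i by one step
                yield (i, step)             # report (program index, output)
            except StopIteration:
                pass                        # program i halted; keep going

# Toy "enumeration of programs": program n counts upward from n forever.
def programs(n):
    k = n
    while True:
        yield k
        k += 1

ud = dovetail(programs)
for _ in range(10):
    print(next(ud))

The only point is that a single sequential process eventually executes
every step of every program, which is all the UDA needs from the UD.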
> All of the steps are relatively straightforward until step 8,
I am glad to hear that.
> which invokes an argument discovered by Bruno and Tim Maudlin
> demonstrating that there is a problem with the theory that the mental
> supervenes on the physical. It seems that to be consistent you have to
> allow either that any computation, including the supposedly conscious
> ones, supervenes on any physical activity,
I'm afraid this does not follow from Maudlin (1989) or from me (1988).
It is more closely related to the implementation problem of Putnam,
Chalmers and Mallah (in the list). Maudlin shows that if comp is true
and if physical supervenience is true, then consciousness supervenes on
no physical activity at all. From this absurdity he derives a problem
for comp. Taking comp as my main hypothesis, I derive from the same
absurdity the difficulty of maintaining the physical supervenience
thesis. But even with just quantum mechanics, the notion of physical
supervenience is not entirely clear.
> or that computations do not supervene on physical activity at all but
> are complete in themselves, consciousness included, by virtue of their
> status as Platonic objects. Bruno concludes that the latter is the
> case, but Maudlin appears to take both possibilities as obviously
> absurd and thus presents the paper as a problem with computationalism
> itself.
Well, if you read Maudlin carefully, he concludes that throwing out
comp does not solve his problem. He is aware that the problem is more
related to physical supervenience than to comp. What is strange about
Maudlin is that he wrote an excellent book on Bell's inequality, so he
seems aware that "matter" is not an easy concept either. I therefore
don't understand why he is so reluctant to abandon physical
supervenience, when that concept is not even clear once you understand
that "quantum matter" is not well defined.
>> MP: Why are we not zombies? The answer is in the fact of
>> self-referencing. In our case [as hominids] there are peculiarities
>> of construction and function arising from our evolutionary history,
>> but there is nothing in principle to deny self-awareness to a
>> silicon-electronic entity that embodies sufficient detail within a
>> model of self in the world. The existence of such a model would
>> constitute its mind, broadly speaking, and the updating of the model
>> of self in the world would be the experience of self awareness. What
>> it would be like TO BE the updating of such a model of self in the
>> world is something we will probably have to wait awhile to be told
>> :-)
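To make the "model of self in the world" a bit more concrete, here is a
toy sketch (my own illustration, with hypothetical names; not MP's
design): an agent whose world model contains an entry describing the
agent itself, rewritten on every update cycle.

from dataclasses import dataclass, field

@dataclass
class SelfModellingAgent:
    world_model: dict = field(default_factory=dict)

    def perceive(self, observation: dict) -> None:
        """Fold a new observation into the model of the world..."""
        self.world_model.update(observation)
        # ...including a model of the agent itself within that world.
        self.world_model["self"] = {
            "position": observation.get("my_position"),
            "last_update": observation.get("time"),
            "model_size": len(self.world_model),  # the model refers to itself
        }

agent = SelfModellingAgent()
agent.perceive({"time": 0, "my_position": (0, 0), "wall_ahead": True})
agent.perceive({"time": 1, "my_position": (0, 1), "wall_ahead": False})
print(agent.world_model["self"])

On MP's account each call to perceive() would be one "moment" of
self-awareness; whether such updating suffices for consciousness is of
course exactly what is in dispute.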
>
> It seems reasonable to theorise that if an entity could behave like a
> conscious being, it must be a conscious being.
It is the no-zombie theory. One question is: "could behave like" for
how long? Now this question makes sense only for those who take
physical supervenience for granted. But then with comp, accepting my
argument + Maudlin's (say), there is no problem at all: the
consciousness of an individual supervenes on an infinity of immaterial
computations, never on anything singularized by Matter or anything
else. Matter's role consists in "assembling" coherent dreams so that
consciousnesses can manifest themselves relative to other
consciousnesses. The "essence" of matter lies in the possibility of
inter-subjective constructions.
> However, the theory does not have the strength of logical necessity.
> It is quite possible that if nature had had electronic circuits to play
> with, beings displaying intelligent behaviour similar to our own might
> have evolved while lacking consciousness. This need not lead to the
> usual criticism: in that case, how can I be sure my fellow humans are
> conscious? My fellow humans not only behave like me, they have a
> biological brain like me. We would have to invoke magic to explain how
> God has breathed consciousness into one person but not another, but
> there is no such theoretical problem if the other person turns out to
> be a robot. My personal view is that if a computer simply learned to
> copy my behaviour by studying me closely, then if it were conscious it
> would probably be conscious in a different way. If the computer
> attempted to copy
> me by emulating my neuronal activity I would be more confident that it
> was conscious in the same way I am, although I would not be 100%
> certain. But if I were copied in a teleportation experiment to a
> similar tolerance level to what occurs in normal moment-to-moment living,
> it would be absurd to seriously contemplate that the copy was a zombie
> or differed significantly in his experiences from the original.
Yes, ok.
Bruno
http://iridia.ulb.ac.be/~marchal/