RE: The Meaning of Life

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Tue, 2 Jan 2007 23:59:52 +1100

Mark Peaty writes:

> SP: ' In the end, what is "right" is an irreducible personal belief, which you can try to change by appeal to emotions or by example, but not by appeal to logic or empirical facts. And in fact I feel much safer that way: if someone honestly believed that he knew what was "right" as surely as he knew 2+2=4, he would be a very dangerous person. Religious fanatics are not dangerous because they want to do evil, but because they want to do good. '
> MP: I agree with this, saving only that, on a 'numbers' basis, there are those whose personal evolution takes them beyond the dynamic of 'good' or 'evil' into the domain of power for its own sake. This entails complete loss of empathic ability and I think it could be argued that such a person is 'legislating' himself out of the human species.
> MP: I think a key point with 'irreducible personal belief' is that the persons in question need to acknowledge the beliefs as such and take responsibility for them. I believe we have to point this out, whenever we get the opportunity, because generally most people are reluctant to engage in analysis of their own beliefs, in public anyway. I think part of the reason for this is the cultural climate [meme-scape?] in which Belief in a G/god/s or uncritical Faith are still held to be perfectly respectable. This cultural climate is what Richard Dawkins and Daniel Dennett have been criticising in recent books and articles.
> SP: 'I am not entirely convinced that comp is true'
> MP: At the moment I am satisfied that 'comp' is NOT true, certainly in any form that asserts that 'integers' are all that is needed. 'Quantum' is one thing, but 'digital' is quite another :-) The main problem [fact I would prefer to say] is that existence is irreducible, whereas numbers or Number are dependent upon something/s existing.

I have fallen into the habit of using the term "comp" as shorthand for "computationalism", something I picked up from Bruno. On the face of it, computationalism seems quite sensible: it is the best theory of consciousness available and the most promising candidate for producing artificial intelligence/consciousness (if they are the same thing: see below). Assuming comp, Bruno goes through 8 steps in his Universal Dovetailer Argument (UDA), e.g. in this paper:

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm
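
(For anyone not familiar with the term: a universal dovetailer is, roughly, a program that generates every program and runs them all in an interleaved, step-at-a-time fashion, so that no single non-halting program can block the rest. The toy Python sketch below is only meant to convey that interleaving idea; the enumerated "programs" are stand-ins I have invented for illustration, not Bruno's actual construction.)

# A toy illustration of dovetailing: run "all" programs a few steps at a
# time, so that no single non-halting program prevents the others from
# getting their share of execution. The generator-based "programs" here
# are purely illustrative.

def program(n):
    """Toy stand-in for the n-th program: counts upward forever."""
    i = 0
    while True:
        yield (n, i)   # "state" of program n after i steps
        i += 1

def dovetail(max_rounds=5):
    """Interleave execution: in round k, start program k and give one
    further step to every program started so far."""
    running = []
    for k in range(max_rounds):
        running.append(program(k))      # start the k-th program
        for p in running:
            state = next(p)             # advance each started program one step
            print("executed one step of program %d, now at step %d" % state)

if __name__ == "__main__":
    dovetail()

The point is just that, given unlimited time, every program that is ever started receives arbitrarily many steps of execution.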

All of the steps are relatively straightforward until step 8, which invokes an argument, arrived at independently by Bruno and by Tim Maudlin, demonstrating that there is a problem with the theory that the mental supervenes on the physical. It seems that, to be consistent, you have to allow either that any computation, including the supposedly conscious ones, supervenes on any physical activity, or that computations do not supervene on physical activity at all but are complete in themselves, consciousness included, by virtue of their status as Platonic objects. Bruno concludes that the latter is the case, but Maudlin appears to take both possibilities as obviously absurd and so presents his paper as pointing to a problem with computationalism itself.

> MP: Why are we not zombies? The answer is in the fact of self-referencing. In our case [as hominids] there are peculiarities of construction and function that have arisen from our evolutionary history, but there is nothing in principle to deny self-awareness to a silicon-electronic entity that embodied sufficient detail within a model of self in the world. The existence of such a model would constitute its mind, broadly speaking, and the updating of the model of self in the world would be the experience of self-awareness. What it would be like TO BE the updating of such a model of self in the world is something we will probably have to wait awhile to be told :-)
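
To make the "model of self in the world" suggestion a little more concrete, here is a deliberately crude Python sketch of an agent whose world model contains an entry for itself and which updates that entry on every cycle. Nothing here is claimed to be conscious, and all of the names are invented for illustration.

# A crude sketch of MP's "model of self in the world": an agent keeps a
# world model that includes an entry representing itself, and updates
# that entry every cycle. All names are invented for illustration only.

class SelfModellingAgent:
    def __init__(self):
        # The world model includes a representation of the agent itself.
        self.world_model = {
            "environment": {},
            "self": {"position": 0, "last_action": None},
        }

    def sense(self, observation):
        # Fold new observations about the environment into the model.
        self.world_model["environment"].update(observation)

    def act(self):
        # Choose a trivial action and, crucially, update the model of
        # self to reflect the fact that the agent has acted.
        action = "step_right"
        self.world_model["self"]["position"] += 1
        self.world_model["self"]["last_action"] = action
        return action

    def update_cycle(self, observation):
        # One cycle of "updating the model of self in the world".
        self.sense(observation)
        return self.act()

if __name__ == "__main__":
    agent = SelfModellingAgent()
    for t in range(3):
        agent.update_cycle({"time": t})
    print(agent.world_model)

Whether updating such a self-model would feel like anything from the inside is, of course, exactly the question at issue.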

It seems reasonable to theorise that if an entity could behave like a conscious being, it must be a conscious being. However, the theory does not have the strength of logical necessity. It is quite possible that if nature had had electronic circuits to play with, beings displaying intelligent behaviour similar to our own might have evolved which nevertheless lacked consciousness. This need not lead to the usual criticism: in that case, how can I be sure my fellow humans are conscious? My fellow humans not only behave like me, they have a biological brain like mine. We would have to invoke magic to explain how God has breathed consciousness into one person but not another, but there is no such theoretical problem if the other person turns out to be a robot.

My personal view is that if a computer simply learned to copy my behaviour by studying me closely, then, if it were conscious at all, it would probably be conscious in a different way from me. If the computer instead attempted to copy me by emulating my neuronal activity, I would be more confident that it was conscious in the same way I am, although I would not be 100% certain. But if I were copied in a teleportation experiment to a tolerance similar to that of normal moment-to-moment living, it would be absurd to seriously contemplate that the copy was a zombie or differed significantly in his experiences from the original.

Stathis Papaioannou