Re: The Meaning of Life

From: Bruno Marchal <marchal.domain.name.hidden>
Date: Mon, 8 Jan 2007 15:13:12 +0100

On 8 Jan 2007, at 14:27, Stathis Papaioannou wrote:

>
>
> Bruno Marchal writes:
>
>> On 7 Jan 2007, at 19:21, Brent Meeker wrote:
>> > And does it even have to be very good? Suppose it made a sloppy
>> > copy of me that left out 90% of my memories - would it still be
>> > "me"? How much fidelity is required for Bruno's argument? I think
>> > not much.
>> The argument does not depend at all on the level of fidelity. Indeed
>> I make clear (as much as possible) that comp is equivalent to the
>> belief that there is a level of substitution of myself (3-person) such
>> that I (1-person) survive a functional substitution done at that
>> level. Then I show that no machine can know what her level of
>> substitution is (and thus has to bet or guess about it).
>> This is also the reason why comp is not jeopardized by the idea that
>> the environment is needed: just put the environment in the definition
>> of my "generalized brain".
>> Imagine someone who says that his brain is the entire galaxy,
>> described at the level of all interacting quantum strings. This can
>> be captured by giant (to say the least) but finite, rational complex
>> matrices. Of course the thought experiment with the "yes doctor" will
>> look very non-realist, but *in fine*, all that is needed (for the
>> reversal) is that the Universal Dovetailer gets through the state of
>> my generalized brain, and the UD will get it even if my "state" is
>> the state of the whole galaxy, or more.
>> If it happens that my state is the galaxy state AND that the galaxy
>> state cannot be captured in a finite (even giant) way(*), then we
>> are just out of the scope of the comp reasoning. This is possible
>> because comp may be wrong.
>
> This is right, and it is perhaps a consequence of comp that
> computationalists did not bargain on. If the functional equivalent of
> my brain has to interact with the environment in the same way that I
> do, then that puts a constraint on what sort of machine it can be, as
> well as necessitating, of course, that it be an actual physical
> machine. For example, if as part of asserting my status as a conscious
> being I decide to lift my hand in the air when I see a red ball, then
> my functional replacement must (at least) have photoreceptors which
> send a signal to a central processor which then sends a motor signal
> to its hand. If it fails the red ball test, then it isn't functionally
> equivalent to me.
> However, what if you put the red ball, the hand and the whole
> environment inside the central processor? You program in data which
> tells it it is seeing a red ball, it sends a signal to what it thinks is
> its hand, and it receives visual and proprioceptive data telling it it
> has successfully raised the hand. Given that this self-contained
> machine was derived from a known computer architecture with known
> sensors and effectors, we would know what it was thinking by
> eavesdropping on its internal processes. But if we didn't have this
> knowledge, is there any way, even in theory, that we could figure it
> out? The answer in general is "no": without benefit of environmental
> interaction, or an instruction manual, there is no way to assign
> meaning to the workings of a machine and there is no way to know
> anything about its consciousness.


Up to here I do agree.
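
To picture the self-contained case concretely, here is a toy sketch
(purely illustrative, with hypothetical names): the ball, the eye and
the hand are all just data inside one program, which is why
eavesdropping on its bit-level activity, without the architecture or an
instruction manual, cannot fix what its states are "about".

    # Toy sketch (hypothetical) of the self-contained machine: the red ball,
    # the "eye" and the "hand" are variables inside one process; no signal
    # ever leaves it, and no ball or hand exists anywhere outside it.
    def self_contained_agent(steps=3):
        world = {"red_ball_visible": True, "hand_raised": False}

        def photoreceptor():                  # "sensor": reads the internal world
            return world["red_ball_visible"]

        def motor(raise_hand):                # "effector": writes back to it
            world["hand_raised"] = raise_hand

        for _ in range(steps):
            sees_ball = photoreceptor()
            motor(sees_ball)                  # raise the hand iff a red ball is "seen"
            # "proprioception": read back the simulated hand position
            assert world["hand_raised"] == sees_ball

    self_contained_agent()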




> The corollary of this is that under the right interpretation a machine
> could have any meaning or any consciousness.


I don't think that this corollary follows, unless you are postulating a
"physical world" having some special property (and then the question
is: what are your axioms for that physical realm, and where do those
axioms come from?). Even in classical physics, where a point can move
along "all real numbers", I don't see any reason why such a move could
represent any computation. Of course I consider a computation to be
something non-physical, and essentially discrete, at the start. The
physical and continuous aspect comes from the fact that any computation
is "embedded" in an infinity of "parallel" computations (those
generated by the UD).
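
For concreteness only, here is a minimal sketch of the dovetailing idea
(in Python, names hypothetical; the real UD enumerates all the programs
of a universal machine, for which the toy enumeration below merely
stands in):

    # Minimal sketch of dovetailing (illustration only): "programs" are
    # modelled here as Python generators.
    from itertools import count

    def dovetail(programs):
        running = []
        for n in count():
            running.append(next(programs))  # stage n: start program number n,
            for p in running:               # then run one more step of programs
                try:                        # 0..n, so every step of every program
                    next(p)                 # is eventually executed, even though
                except StopIteration:       # many programs never halt
                    pass                    # (a halted program is simply skipped)

    # Toy enumeration standing in for "all programs": program k counts from k.
    def all_programs():
        for k in count():
            yield (i for i in count(k))

    # dovetail(all_programs())   # runs forever, by design
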
Brent says that the "evil problem" is a problem only for those who
postulate an omniscient and omnipotent good god. I believe that somehow
a large part of the "mind/body" problem comes from our (instinctive)
assumption of a basic (primitive) physical reality.



> You can't avoid the above problem without making changes to (standard)
> computationalism. You can drop computationalism altogether and say
> that the brain + environment is not Turing emulable. Or, as Bruno has
> suggested, you can keep computationalism and drop the physical
> supervenience criterion.

I am OK with this ('course).

Bruno


http://iridia.ulb.ac.be/~marchal/

