Bruno Marchal writes:
> >> Le 07-janv.-07, à 19:21, Brent Meeker a écrit :
> >> > And does it even have to be very good? Suppose it made a sloppy
> >> > copy of me that left out 90% of my memories - would it still be
> >> > "me"? How much fidelity is required for Bruno's argument? I think
> >> > not much.
> >> The argument does not depend at all on the level of fidelity. Indeed
> >> I make clear (as much as possible) that comp is equivalent to the
> >> belief there is a level of substitution of myself (3-person) such
> >> that I (1-person) survive a functional substitution done at that
> >> level. Then I show that no machine can know what her level of
> >> substitution is (and thus has to bet or guess about it).
> >> This is also the reason why comp is not jeopardized by the idea that
> >> the environment is needed: just put the environment in the definition
> >> of my "generalized brain".
> >> Imagine someone who says that his brain is the entire galaxy,
> >> described at the level of all interacting quantum strings. This can
> >> be captured by giant (to say the least) but finite, rational complex
> >> matrices. Of course the thought experiment with the "yes doctor" will
> >> look very non-realist, but *in fine*, all that is needed (for the
> >> reversal) is that the Universal Dovetailer gets through the state of
> >> my generalized brain, and the UD will get it even if my "state" is
> >> the state of the whole galaxy, or more.
> >> If it happens that my state is the galaxy state AND that the galaxy
> >> state cannot be captured in a finite (even giant) way(*), then we
> >> are just out of the scope of the comp-reasoning. This is possible
> >> because comp may be wrong.
> >
> > This is right, and it is perhaps a consequence of comp that
> > computationalists did not bargain on. If the functional equivalent of
> > my brain has to interact with the environment in the same way that I
> > do, then that puts a constraint on what sort of machine it can be, as well
> > as necessitating of course that it be an actual physical machine. For
> > example, if as part of asserting my status as a conscious being I
> > decide to lift my hand in the air when I see a red ball, then my
> > functional replacement must (at least) have photoreceptors which send
> > a signal to a central processor which then sends a motor signal to its
> > hand. If it fails the red ball test, then it isn't functionally
> > equivalent to me.
> > However, what if you put the red ball, the hand and the whole
> > environment inside the central processor? You program in data which
> > tells it that it is seeing a red ball, it sends a signal to what it thinks is
> > its hand, and it receives visual and proprioceptive data telling it it
> > has successfully raised the hand. Given that this self-contained
> > machine was derived from a known computer architecture with known
> > sensors and effectors, we would know what it was thinking by
> > eavesdropping on its internal processes. But if we didn't have this
> > knowledge, is there any way, even in theory, that we could figure it
> > out? The answer in general is "no": without benefit of environmental
> > interaction, or an instruction manual, there is no way to assign
> > meaning to the workings of a machine and there is no way to know
> > anything about its consciousness.
>
>
> Up to here I do agree.
>
>
>
>
> > The corollary of this is that under the right interpretation a machine
> > could have any meaning or any consciousness.
>
>
> I don't think that this corollary follows. Unless you are postulating a
> "physical world" having some special property (and then the question
> is: what are your axioms for that physical realm, and where do those
> axioms come from). Even in classical physics, where a point can move
> along "all real numbers", I don't see any reason such move can
> represent any computation. Of course I consider a computation as being
> something non-physical, and essentially discrete, at the start. The
> physical and continuous aspect comes from the fact that any computation
> is "embedded" into an infinity of "parallel" computations (those
> generated by the UD).
> Brent says that the "evil problem" is a problem only for those who
> postulate an omniscient and omnipotent good god. I believe that somehow
> a large part of the "mind/body" problem comes from our (instinctive)
> assumption of a basic (primitive) physical reality.
Yes, but you're assuming here that which I (you?) set out to prove. In your UDA
you do not explicitly start out with "there is no physical world" but arrive at this
as a conclusion. Consider the following steps:
1. There appears to be a physical world
2. Some of the substructures in this world appear to be conscious, namely brains
3. The third person behaviour of these brains can be copied by an appropriate digital
computer
4. The first person experience of these brains would also thereby be copied by
such a computer
5. The first person experience of the computer would remain unchanged if the environment,
and hence the third person behaviour, were made part of the program itself
6. But this would mean there is no way to attach meaning or consciousness to the
self-contained computer - it could be thinking of a red ball, blue ball, or no ball
This is an unexpected result, which can be resolved in several ways:
7. (3) is incorrect, and we need not worry about the rest
8. (4) is incorrect, and we need not worry about the rest
9. (5) is incorrect, and we need not worry about the rest
[(7) or (8) would mean that there is something non-computational about the brain; (9)
would mean there is something non-computational about a computer interacting with its
environment, which seems to me even less plausible.]
But if (3) to (5) are all correct, that leaves (6) as correct, which implies that consciousness
is decoupled from physical activity. This would mean that in those cases where we do
associate consciousness with particular physical activity, such as in brains or computers,
it is not really the physical activity which is "causing" the consciousness. I think of it as
analogous to a computer doing arithmetic: it is not "causing" the arithmetic, but is harnessing
a mathematical truth to some physical task.
If consciousness is decoupled from physical activity, this means that our conscious experience
would be unchanged if the apparent physical activity of our brain were not really there. This
would make all the evidence of our senses on which we base the existence of a physical world
illusory: we would have these same experiences if the physical world suddenly disappeared or
had never existed in the first place. We can keep (1) and (2) because I qualified them with the word
"appears", but the necessity of a separate physical reality accounting for this appearance goes.
> > You can't avoid the above problem without making changes to (standard)
> > computationalism. You can drop computationalism altogether and say
> > that the brain + environment is not Turing emulable. Or, as Bruno has
> > suggested, you can keep computationalism and drop the physical
> > supervenience criterion.
>
> I am OK with this ('course).
>
> Bruno
>
>
> http://iridia.ulb.ac.be/~marchal/
Stathis Papaioannou
Received on Mon Jan 08 2007 - 17:36:44 PST