Re: MGA 1

From: Bruno Marchal <marchal.domain.name.hidden>
Date: Thu, 20 Nov 2008 19:52:14 +0100

On 20 Nov 2008, at 08:23, Kory Heath wrote:

>
>
> On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
>> The last question (of MGA 1) is: was Alice, in this case, a zombie
>> during the exam?
>
> Of course, my personal answer would take into account the fact that I
> already have a problem with the materialist's idea of "matter". But I
> think we're supposed to be considering the question in the context of
> mechanism and materialism. So I'll ask, what should a mechanist-
> materialist say about the state of Alice's consciousness during the
> exam?
>
> Maybe I'm jumping ahead, but I think this thought experiment creates a
> dilemma for the mechanist-materialist (which I think is Bruno's
> point). In contrast to many of the other responses in this thread, I
> don't think the mechanist-materialist should believe that Alice is
> conscious in the case when every gate has stopped functioning (but
> cosmic rays are randomly causing them to flip in the exact same way
> that they would have flipped if they were functioning). Alice is in
> that case functionally identical to a random-number generator. It
> shouldn't matter at all whether these cosmic rays are striking the
> broken gates in her head, or if the gates in her head are completely
> inert and the rays are striking the neurons in (say) her arms and her
> spinal cord, still causing her body to behave exactly as it would
> have without the breakdown. I agree with Telmo Menezes that the
> mechanist-materialist shouldn't view Alice as conscious in the latter
> case. But I don't think it's any different than the former case.


I am afraid you already suspect too much the contradictory nature of
MEC+MAT.
Take the reasoning as a game. Try to keep both MEC and MAT; the game
consists in showing, as clearly as possible, what will go wrong.
The goal is to help the others to understand, or to find an error
(fatal or fixable: in both cases we learn).


>
>
> It sounds like many people are under the impression that mechanism-
> materialism, with its rejection of zombies, is committed to the view
> that Lucky Alice must be conscious, because she's behaviorally
> indistinguishable from the Alice with the correctly-functioning brain.
>
> But, in the sense that matters, Lucky Alice is *not* behaviorally
> indistinguishable from fully-functional Alice.

You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
original Alice, well, I mean the one in MGA 1, is functionally
identical at the right level of description (actually she already has
a digital brain). The physical instantiation of a computation is
completely realized. No neuron can "know" that the info (correct and
at the right places) does not come from the relevant neurons, but from
a lucky beam.
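
To make that point concrete, here is a minimal toy sketch (Python; the
circuit, the names, and the "lucky bits" are purely illustrative
assumptions of mine, not anything from MGA 1). It runs the same little
boolean circuit twice: once with every gate actually computing, and once
with every gate "broken" and merely fed, from outside, the very bits it
would have computed. The recorded wire states come out identical, so
nothing inside the circuit can tell the two runs apart.

def nand(a, b):
    # NAND of two bits (0 or 1).
    return 1 - (a & b)

# A tiny feed-forward circuit: each gate reads two earlier wires.
# Wires 0 and 1 are the external inputs; each gate appends one new wire.
CIRCUIT = [(0, 1), (0, 2), (1, 2), (3, 4)]

def run_functional(inputs):
    # Every gate really computes the NAND of its two input wires.
    wires = list(inputs)
    for a, b in CIRCUIT:
        wires.append(nand(wires[a], wires[b]))
    return wires

def run_lucky(inputs, lucky_bits):
    # Every gate is broken: its output is simply taken from lucky_bits,
    # an external source that happens to supply the right values.
    wires = list(inputs)
    for bit, _gate in zip(lucky_bits, CIRCUIT):
        wires.append(bit)  # no computation happens here
    return wires

if __name__ == "__main__":
    inputs = (1, 0)
    functional = run_functional(inputs)
    # The "lucky beam" supplies exactly the bits the gates would have computed.
    lucky = run_lucky(inputs, functional[2:])
    assert functional == lucky
    print(functional)  # prints [1, 0, 1, 0, 1, 1] for these inputs

The assertion passing is the whole (toy) point: at the level of the wire
states, the lucky run leaves exactly the same record as the functional
run, which is the sense in which no neuron can "know" where the correct
information came from.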



> For the mechanist-
> materialist, everything physical counts as "behavior". And there is a
> clear physical difference between the two Alices, which would be
> physically discoverable by a nearby scientist with the proper
> instruments.

But the physical difference does not play a role. If you invoke it,
how could you accept saying yes to a doctor, who introduces a bigger
difference?

>
>
> Let's imagine that, during the time that Alice's brain is broken but
> "luckily" acting as though it wasn't due to cosmic rays, someone
> throws a ball at Alice's head, and she ("luckily") ducks out of the
> way. The mechanist-materialist may be happy to agree that she did
> indeed "duck out of the way", since that's just a description of what
> her body did.

OK, for both the ALICE of Telmo's solution of MGA 1bis and the ALICE of MGA 1.


> But the mechanist-materialist can (and must) claim that
> Lucky Alice did not in fact respond to the ball at all.

Consciously or privately? Certainly not for ALICE MGA 1bis. But why
not for ALICE MGA 1? Please remember to try, naively or candidly
enough, to keep both MECH and MAT in mind. You are already reasoning
as if we were concluding some definitive things, but we are just
trying to build an argument. In the end, you will say: I knew it, but
the point is helping the others to "know" it too. Many here already
have the right intuition, I think. The point is to make that
intuition as communicable as possible.



> And that
> statement can be translated into pure physics-talk. The movements of
> Alice's body in this case are being caused by the cosmic rays. They
> are causally disconnected from the movements of the ball (except in
> the incidental way that the ball might be having some causal effect on
> the cosmic rays).


More on this after MGA 2. Hopefully tomorrow.



> When Alice's brain is working properly, her act of
> ducking *is* causally connected to the movement of the ball. And this
> kind of causal connection is an important part of what the mechanist-
> materialist means by "consciousness".

Careful: that kind of causality needs ... MAT.



>
>
> Dennett is able to - and in fact must - say that Alice is not
> conscious when all of her brain-gates are broken but very luckily
> being flipped by cosmic rays. When Dennett says that someone is
> conscious, he is referring precisely to these behavioral competences
> that can be described in physical terms.

You see.



> He means that this collection
> of physical stuff we call Alice really is responding to her immediate
> environment (like the ball), observing things, collecting data, etc.
> In that very objective sense, Lucky Alice is not responding to the
> ball at all. She's not conscious by Dennett's physicalist definition
> of consciousness. But she's also not a zombie, because she is behaving
> differently than fully-functional Alice. You just have to be able to
> have the proper instruments to know it.
>
> If you still think that Dennett would claim that Lucky Alice is a
> zombie, take a look at this quote from http://ase.tufts.edu/cogstud/papers/zombic.htm
> : "Just remember, by definition, a zombie behaves indistinguishably
> from a conscious being–in all possible tests, including not only
> answers to questions [as in the Turing test] but psychophysical tests,
> neurophysiological tests–all tests that any 'third-person' science can
> devise." Lucky Alice does *not* behave indistinguishably from a
> conscious being in all possible tests.

By definition, I would say, she does. Of course, this makes sense
with MECH only above the substitution level. But at that level, a
neurophysiologist looking at the details would see the neurons doing
their job. Only, he would also see some neurons breaking down and
then being fixed, not by an internal biological repair mechanism (as
occurs all the time in biological systems), but by a lucky beam. Yet
despite this, and thanks to this, the brain of Alice (MGA 1) does its
entire normal, usual work. If not, you introduce a kind of magic which,
if it existed, would prevent me from saying yes to any doctor.


> The proper third-person test
> examining her logic gates would show that she is not responding to her
> immediate environment at all. Dennett should claim that she's a non-
> conscious non-zombie.
>
> Nevertheless, I think Bruno's thought experiment causes a problem for
> the mechanist-materialist, as it is supposed to. If we believe that
> the fully-functional Alice is conscious and the random-gate-brain
> Alice is not conscious, what happens when we start turning Alice's
> functioning brain-gates one-at-a-time into random brain gates (and
> they luckily keep flipping the way they would have)? Alice's deep
> behavior changes - she gradually stops responding to her environment,
> although her outward behavior makes it look like she still does - but
> clearly there's nothing within Alice "noticing" the change. We
> certainly can't imagine (as Searle wants to) that Alice is internally
> feeling her consciousness slip away, but is powerless to cry out, etc.
>
> It's tempting to say that this argument simply shows us that Lucky
> Alice must be conscious after all, but that's just the other horn of
> the dilemma. The mechanist-materialist can only talk about
> consciousness in computational / physical terms. For Dennett, if you
> say that Alice is "aware", you must be able to translate this into
> mechanistic terms. And I can't see any mechanistic sense in which
> Lucky Alice can be said to be "aware" of anything.

Alice MGA 1 can be said to be aware in the original mechanist sense.
When she thought "Oh, the math problem is easy", she triggered the right
memories in her brain, with the correct physical activity, even if
only luckily in that case.



>
>
> I prefer to just say that Bruno's thought experiment shows that
> there's something wrong with mechanism-materialism, but it's not
> obvious (yet) what the solution is.


And things will be even more confusing after MGA 2, but that's the
goal. MEC + MAT should give a contradiction; we will extract
weirder and weirder propositions until the contradiction is
utterly clear. OK?

Bruno

http://iridia.ulb.ac.be/~marchal/



