Re: MGA 1

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Fri, 21 Nov 2008 10:19:21 -0800

Kory Heath wrote:
>
> On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote:
>> I am afraid you are already too suspicious of the contradictory
>> nature of MEC+MAT.
>> Take the reasoning as a game. Try to keep both MEC and MAT; the game
>> consists in showing as clearly as possible what will go wrong.
>
> I understand what you're saying, and I accept the rules of the game. I
> *am* trying to keep both MEC and MAT. But it seems as though we differ
> on how we understand MEC and MAT, because in my understanding,
> mechanist-materialists should say that Bruno's Lucky Alice is not
> conscious (for the same reason that Telmo's Lucky Alice is not
> conscious).
>
>> You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
>> original Alice, well I mean the one in MGA 1, is functionally
>> identical at the right level of description (actually she already
>> has a digital brain). The physical instantiation of a computation is
>> completely realized. No neuron can "know" that the info (correct and
>> at the right places) does not come from the relevant neurons, but from
>> a lucky beam.
>
> I agree that the neurons don't "know" or "care" where their inputs are
> coming from. They just get their inputs, perform their computations,
> and send their outputs. But when it comes to the functional, physical
> behavior of Alice's whole brain, the mechanist-materialist is
> certainly allowed (indeed, forced) to talk about where each neuron's
> input is coming from. That's a part of the computational picture.
>
> I see the point that you're making. Each neuron receives some input,
> performs some computation, and then produces some output. We're
> imagining that every neuron has been disconnected from its inputs, but
> that cosmic rays have luckily produced the exact same input that the
> previously connected neurons would have produced. You're arguing that
> since every neuron is performing the exact same computations that it
> would have performed anyway, the two situations are computationally
> identical.
>
> But I don't think that's correct. I think that plain old, garden
> variety mechanism-materialism has an easy way of saying that Lucky
> Alice's brain, viewed as a whole system, is not performing the same
> computations that fully-functioning Alice's brain is. None of the
> neurons in Lucky Alice's brain are even causally connected to each
> other. That's a pretty big computational difference!
>
> I am arguing, in essence, that for the mechanist-materialist,
> "causality" is an important aspect of computation and consciousness.
> Maybe your goal is to show that there's something deeply wrong with
> that idea, or with the idea of "causality" itself. But we're supposed
> to be starting from a foundation of MEC and MAT.
>
> Are you saying that the mechanist-materialist *does* say that Lucky
> Alice is conscious, or only that the mechanist-materialist *should*
> say it? Because if you're saying the latter, then I'm "playing the
> game" better than you are! I'm pretty sure that Dennett (and the other
> mechanist-materialists I've read) would say that Lucky Alice is not
> conscious, and for them, they have a perfectly straightforward way of
> explaining what they *mean* when they say that she's not conscious.
> They mean (among other things) that the actions of her neurons are not
> being affected at all by the paper lying in front of her on the table,
> or the ball flying at her head. For Dennett, it's practically a non-
> sequitur to say that she's conscious of a ball that's not affecting
> her brain.
>
>> But the physical difference does not play a role.
>
> It depends on what you mean by "play a role". You're right that the
> physical difference (very luckily) didn't change what the neurons did.
> It just so happens that the neurons did exactly what they were going
> to do anyway. But the *cause* of why the neurons did what they did is
> totally different. The action of each individual neuron was caused by
> cosmic rays rather than by neighboring neurons. You seem to be asking,
> "Why should this difference play any role in whether or not Alice was
> conscious?" But for the mechanist-materialist, the difference is
> primary. Those kinds of causal connections are a fundamental part of
> what they *mean* when they say that something is conscious.
>
>> If you invoke it,
>> how could you accept saying yes to a doctor, who introduces a bigger
>> difference?
>
> Do you mean the "teleportation doctor", who makes a copy of me,
> destroys me, and then reconstructs me somewhere else using the copied
> information? That case is not problematic in the way that Lucky Alice
> is, because there is an unbroken causal chain between the "new" me and
> the "old" me. What's problematic about Lucky Alice is the fact that
> her ducking out of the way of the ball (the movements of her eyes, the
> look of surprise, etc.) has nothing to do with the ball, and yet
> somehow she's still supposed to be conscious of the ball.
>
> A much closer analogy to Lucky Alice would be if the doctor
> accidentally destroys me without making the copy, turns on the
> receiving teleporter in desperation, and then the exact copy that
> would have appeared anyway steps out, because (luckily!) cosmic rays
> hit the receiver's mechanisms in just the right way. I actually find
> this thought experiment more persuasive than Lucky Alice (although I'm
> sure some will argue that they're identical). At the very least, the
> mechanist-materialist has to say that the resulting Lucky Kory is
> conscious. I think it's also clear that Lucky Kory's consciousness
> must be exactly what it would have been if the teleportation had
> worked correctly. This does in fact lead me to feel that maybe
> causality shouldn't have any bearing on consciousness after all.
>
> However, the materialist-mechanist still has some grounds to say that
> there's something interestingly different about Lucky Kory compared
> with Original Kory. It is a physical fact of the matter that Lucky Kory is
> not causally connected to Pre-Teleportation Kory. When someone asks
> Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
> says, "Because of something I learned when I was ten years old", Lucky
> Kory's statement is quite literally false. Lucky Kory ties his shoes
> that way because of some cosmic rays. I actually don't know what the
> standard mechanist-materialist way of viewing this situation is. But
> it does seem to suggest that maybe breaks in the causal chain
> shouldn't affect consciousness after all.
>
> And of course, we can turn the screws in the usual way. If we can do
> Lucky Teleportation once, we can do it once a day, and then once an
> hour, and then once a second, and so on, until eventually we just have
> nothing but random numbers, and if those random numbers happen to look
> like Kory, aren't they just as conscious as Lucky Kory was? But this
> doesn't convince me (yet) that Lucky Alice should be viewed as
> conscious after all. It just convinces me (again) that there's
> something weird about the mechanistic-materialist view of
> consciousness. Or about the materialist's view of "causality".
>
>>> But the mechanist-materialist can (and must) claim that
>>> Lucky Alice did not in fact respond to the ball at all.
>> Consciously or privately?
>
> Physically! By the definition of the thought experiment, it is a
> physical fact that no neuron in Alice's head responded to the ball (in
> the indirect way that they normally would have if she were wired
> correctly). Whether or not she had a conscious experience of a ball is
> a different question.
>
>>> When Alice's brain is working properly, her act of
>>> ducking *is* causally connected to the movement of the ball. And this
>>> kind of causal connection is an important part of what the mechanist-
>>> materialist means by "consciousness".
>> Careful: that kind of causality needs ... MAT.
>
> Yes, of course. But we're *supposed* to be considering the question in
> the context of MAT.
>
>> But at that level, a
>> neurophysiologist looking at the details would see the neurons doing
>> their job. Only, he will also see some neurons breaking down, and
>> then being fixed, not by an internal biological repair mechanism
>> (as occurs all the time in biological systems), but by a lucky beam;
>> yet despite this, and thanks to this, the brain of Alice (MGA 1) does
>> the entire normal, usual work.
>
> What do you mean by "fixed"? If the cosmic rays "fix" the neurons so
> that they are able to respond to the input of their neighboring
> neurons as they're supposed to, then I've misunderstood the thought
> experiment. But if you mean that the cosmic rays "fix" the neurons by
> (very luckily) sending them the same inputs that they would have
> received from their neighboring neurons, then I don't agree that the
> neurophysiologist looking at the details would conclude that the
> neurons are doing their job, or that the brain of Alice MGA 1 is doing
> its entire normal usual work. He would conclude that the brain is not
> physically reacting to the pencil or the paper or the ball at all. For
> a mechanist, how can a person be aware of a ball if not a single
> neuron in her head is physically reacting to that ball?
>
>>> The mechanist-materialist can only talk about
>>> consciousness in computational / physical terms. For Dennett, if you
>>> say that Alice is "aware", you must be able to translate this into
>>> mechanistic terms. And I can't see any mechanistic sense in which
>>> Lucky Alice can be said to be "aware" of anything.
>> Alice MGA 1 can be said to be aware in the original mechanist sense.
>> When she thought "Oh the math problem is easy", she triggered the right
>> memories in her brain, with the correct physical activity, even if
>> just luckily in that case.
>
> Memory is notoriously confusing, so let's keep talking about the ball.
> What can a mechanist possibly mean by saying that Lucky Alice was
> aware of the ball? By the definition of the thought experiment (unless
> I've misunderstood it), every single neuron in Lucky Alice's brain is
> being triggered by cosmic rays rather than by neighboring neurons. Not
> a single action of any neuron (and therefore, not a single movement of
> her body) has anything to do with the movement of the ball. All we can
> say is that the neurons are (very improbably) being triggered in the
> exact same way that they *would* have been triggered if they were
> wired up correctly, and they were actually responding (indirectly) to
> the light on her retinas, etc.
>
> So what would it mean to say that, nevertheless, Lucky Alice is aware
> of the ball? The only sense I can make of this is that, since each
> individual neuron is doing exactly what it would have done anyway, the
> same "experience" (qualia, whatever) results (or supervenes, or
> whatever). But that's exactly the view of consciousness that Dennett
> (the archetypical mechanist-materialist) has spent a lifetime arguing
> against. For him, that would be a very magical view of consciousness.
> For him, the "experience" of being aware of the ball, "deciding" to
> duck, etc., is simply what it feels like to be a collection of neurons
> responding to that ball. When he says, "This collection of neurons is
> aware of that ball", he is saying, by definition, that that ball is
> having causal effects on those neurons. (And not just the causal
> effects that any physical object has on any nearby physical object.)


Just to make things more confusing ;-) We should keep in mind that in current
theories of physics the direction of time's arrow, and hence of "causality", is
a mere statistical phenomenon, and at a fundamental level all physical processes
are reversible - along with their causal order. In physics, causality just
means no action-at-a-distance.
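
As a toy illustration of that reversibility point - a sketch only, where the
map and the numbers are invented for the example, not taken from physics - a
fully invertible update rule can be run backward exactly, so nothing in the
microdynamics singles out a direction of cause and effect; only coarse
statistics do. In Python:

    # Toy reversible dynamics (a hypothetical model, for illustration only).
    def step(x, y):
        # An invertible map on integer pairs: (x, y) -> (y, x + y).
        return y, x + y

    def unstep(x, y):
        # Its exact inverse: recovers the previous state with no loss.
        return y - x, x

    state = (3, 5)
    s = state
    for _ in range(10):
        s = step(*s)       # run "forward in time"
    for _ in range(10):
        s = unstep(*s)     # run the same dynamics "backward"
    assert s == state      # the microdynamics are perfectly reversible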

It seems to me that the conundrums of these thought experiments about zombies
are muddled by invoking possibilities that are so improbable as to be
impossible. It's like considering playing craps with loaded dice that always
come up sixes and then asking, "Suppose cosmic rays always happened to come down
and strike the dice so that they came up randomly; would playing with them be a
fair game?" "Being a fair game" is an abstract concept, going beyond a
particular sequence of events, and so is "coming up randomly". So any answer to
this question has to fudge the difference between impossible and improbable.
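
To put a rough number on "so improbable as to be impossible" - just
back-of-the-envelope arithmetic, with the roll counts chosen arbitrarily - if
each cosmic-ray strike independently picks one of the six faces at random, the
chance of its reproducing one particular sequence of n rolls is 6^-n:

    # Probability that uniform random strikes match a given n-roll sequence.
    # Illustrative arithmetic only; the values of n are arbitrary.
    import math

    for n in (10, 100, 1000):
        log10_p = -n * math.log10(6)   # log10 of 6**-n
        print(f"n = {n:4d} rolls: P = 10^{log10_p:.1f}")
    # Prints:
    #   n =   10 rolls: P = 10^-7.8
    #   n =  100 rolls: P = 10^-77.8
    #   n = 1000 rolls: P = 10^-778.2
    # For comparison, there are only ~10^80 atoms in the visible universe.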

Questions about zombies seem to have the same character when you hypothesize
that their behavior is driven by cosmic rays or random number generators. You're
saying: suppose something happened that is so improbable as to be impossible; do
you now agree that it's possible or not? If I say it's impossible, you answer
that, ex hypothesi, it could happen. If I say it's possible, you can add to the
example to make it more and more improbable, e.g. Alice dodges a ball AND she
composes a concerto while playing tennis.
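
And that "add to the example" move compounds multiplicatively, which is why the
improbability runs away so quickly. A sketch, where every individual
probability below is a pure placeholder rather than an estimate of anything:

    # Joint probability of stacked independent "lucky" events.
    # All the numbers are assumed placeholders, not estimates.
    events = {
        "dodges the ball":      1e-30,
        "composes a concerto":  1e-40,
        "while playing tennis": 1e-20,
    }
    joint = 1.0
    for name, p in events.items():
        joint *= p
        print(f"  with '{name}': joint P ~ {joint:.0e}")
    # Ends at joint P ~ 1e-90: each added requirement multiplies the
    # improbability rather than merely adding to it.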

Brent


>
>> And things will even be more confusing after MGA 2, but that's the
>> goal. MEC + MAT should give a contradiction; we will extract some
>> weirder and weirder propositions until the contradiction is
>> utterly clear. OK?
>
> Of course I'm entirely on board with the spirit of your thought
> experiment. You think MEC and MAT imply that Lucky Alice is
> conscious, but I don't think it does. I'm not sure how important that
> difference is. It seems substantial. But I can also predict where
> you're going with your thought experiment, and it's the exact same
> place I go. So by all means, continue on to MGA 2, and we'll see what
> happens.
>
> -- Kory

