On 25 Nov 2008, at 15:49, Kory Heath wrote:
>
>
> On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
>> So you agree that MGA 1 does show that Lucky Alice is conscious
>> (logically).
>
> I think I have a less rigorous view of the argument than you do. You
> want the argument to have the rigor of a mathematical proof.
Yes. But it is applied mathematics, in a difficult domain (psychology/
theology and the foundations of physics).
A minimum of common sense and candor is asked for.
The proof is rigorous in the sense that it should give anyone the
feeling that it could be entirely formalized in some intensional
mathematics: S4 with quantifiers, or the modal variants of G and G*.
That is eventually the purpose of the interview of the Löbian machine
(using the Theaetetus epistemological definition). But this is
normally not needed for a "conscious English-speaking being with
enough common sense and some interest in the matter".
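(A reminder for readers without Boolos at hand -- the gloss is mine:
G is the provability logic given by the classical tautologies plus
    B(p -> q) -> (Bp -> Bq)      (distribution)
    B(Bp -> p) -> Bp             (Löb)
with modus ponens and necessitation (from p infer Bp). G* is
Solovay's extension of G by the reflection axiom Bp -> p, with modus
ponens as its only rule. G axiomatizes what the sound machine can
prove about its own provability; G* axiomatizes what is true about
it. The Theaetetus definition then puts knowledge at Bp & p.)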
> You say
> "Let's start with the mechanist-materialist assumption that Fully-
> Functional Alice is conscious. We can replace her neurons one-by-one
> with random neurons
They are random only in the sense in which ALL strings are random:
any string whatsoever can be the output of a random source. They are
not random in the Kolmogorov sense, for example. MGA 2 should make
this clear.
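Just to fix the notion (a standard definition, not part of the
argument): relative to a fixed universal machine U, the Kolmogorov
complexity of a string x is
    K(x) = min { |p| : U(p) = x },
and x is Kolmogorov-random when K(x) >= |x| - c for a small constant
c, i.e. when x is incompressible. The lucky cosmic-ray pattern
coincides, by construction, with what the functional neurons were
going to do, so for a long enough run it is compressible to the
(comparatively short) program emulating Alice's brain plus its
initial state: not Kolmogorov-random at all, however improbable its
occurrence.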
> that just happen to do what the fully-functional
> ones were going to do.
For that very reason it is not random. It is luck in MGA 1, and the
record of a computation in MGA 2.
> By definition none of her exterior or interior
> behavior changes.
I never use those terms in this context, except in comp jokes like
"the brain is in the brain". They are dangerous because interior/
exterior can refer both to the inside-the-skull/outside-the-skull
distinction and to the objective/subjective one.
I just use the fact that you say "yes" to a doctor "qua
computatio" (with or without MAT).
> Therefore, the resulting Lucky Alice must be exactly
> as conscious as Fully-Functional Alice."
>
> To me, this argument doesn't have the full rigor of a mathematical
> proof, because it's not entirely clear what the mechanist-materialists
> really mean when they say that Fully-Functional Alice is conscious,
Consciousness does not need to be defined more precisely than is
needed for saying "yes" to the doctor qua computatio, just as a
naturalist could say "yes" to an artificial heart.
Consciousness and (primitive) Matter don't need to be defined more
precisely than is needed to understand the physical supervenience
thesis, despite terms like "existence of a primitive physical
universe" or the very general term "supervenience" itself.
Perhaps you still have a problem with the definitions, or with the
hypotheses?
>
> and it's not clear whether or not they would agree that "none of her
> exterior or interior behavior changes (in any way that's relevant)".
> There *is* an objective physical difference between Fully-Functional
> Alice and Lucky Alice - it's precisely the (discoverable, physical)
> fact that her neurons are all being stimulated by cosmic rays rather
> than by each other.
There is an objective difference between very young Alice with her
"biological brain" and very young Alice the day after the digital
graft. But taking MEC and MAT together, you cannot use that
difference. If you want to use that difference, you have to make
changes to MEC and/or to MAT. The reasoning can always confuse you in
a way which pushes you to (re)consider MEC or MAT, and to interpret
them more vaguely so that such changes become possible. But then we
learn nothing "clear" from the reasoning. We do learn if we make the
same move, but precisely.
> I don't see why the mechanist-materialists are
> logically disallowed from incorporating that kind of physical
> difference into their notion of consciousness.
In our setting, it means that the neurons/logic gates have some form
of prescience.
>
>
> Of course, in practice, Lucky Alice presents a conundrum for such
> mechanist-materialists. But it's not obvious to me that the conundrum
> is unanswerable for them, because the whole notion of "consciousness"
> in this context seems so vague.
No, what could be vague is the idea of linking consciousness with
matter, but that is the point of the reasoning. If we keep comp, we
have to (re)define the general notion of matter.
> Bostrom's views about fractional
> "quantities" of experience are a case in point.
If that were true, why would you say "yes" to the doctor without
knowing the thickness of the artificial axons?
How can you be sure your consciousness will not diminish by half when
the doctor proposes the new, cheaper brain which uses thinner fibers,
or half the number of redundant safety fibers (thanks to progress in
safety software)?
I would no longer dare to say "yes" to the doctor if I could lose a
fraction of my consciousness and become a partial zombie.
> He clearly takes a
> mechanist-materialist view of consciousness,
Many believe in naturalism. At least his move shows that he is aware
of the difficulty of the mind-body problem. But he has to modify comp
deeply to make that move meaningful.
If anything physical/geometrical about the neurons is needed, let the
digital machine take that physical/geometrical feature into account.
That is, let the substitution level be refined. But once the level is
correctly chosen, comp forces us to abstract from the functioning of
the elementary boolean gates.
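A toy sketch of that abstraction (my illustration, in Python, with
made-up "thick" and "thin" gates; nothing in MGA depends on it): two
physically different realizations of the same boolean gate are
interchangeable qua computatio, because at the chosen level only the
input/output function matters.

    from itertools import product

    def nand_thick(a: bool, b: bool) -> bool:
        # "thick fiber" realization: computes NAND directly
        return not (a and b)

    def nand_thin(a: bool, b: bool) -> bool:
        # "thin fiber" realization: same function, via De Morgan
        return (not a) or (not b)

    # At the substitution level, only the truth table counts:
    assert all(nand_thick(a, b) == nand_thin(a, b)
               for a, b in product([False, True], repeat=2))

Comp commits you to saying "yes" to either realization; to
distinguish them, Bostrom would need something below the substitution
level to matter.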
> and he believes that a
> grid of randomly-flipping bits cannot be conscious,
(I am OK with that. I mean, this will remain true with both comp and
NON-MAT.)
> no matter what it
> does. He would argue that, during Fully-Functional Alice's slide into
> Lucky Alice, her subjective quality of consciousness doesn't change,
> but her "quantity" of consciousness gradually reduces until it becomes
> zero. That seems weird to me, but I don't see how to "logically prove"
> that it's wrong. All I have are messy philosophical arguments and
> thought experiments - what Dennett calls "intuition pumps".
Because I would have to say NO to the doctor who proposes a digital
neural net with "infinitesimally thin", or just very thin but solid,
fibers. I would become a zombie if Bostrom were right. Bostrom does
not use the digital MEC hypothesis.
>
>
> That being said, I'm happy to proceed as if our hypothetical
> mechanist-
> materialists have accepted the force of your argument as a logical
> proof. Yes, they claim, given the assumptions of our mechanism-
> materialism, if Fully-Functional Alice is conscious, Lucky Alice must
> *necessarily* also be conscious. If the laser-graph is conscious, then
> the movie of it must *necessarily* be conscious. What's the problem
> (they ask)? On to MGA 3.
Hmmm.... (asap). Still disentangling MGA 3 and MGA 4 ...
Bruno
http://iridia.ulb.ac.be/~marchal/