Hi Pete,
Le 28-mars-06, à 20:27, Pete Carlton a écrit :
> Hi -
>
>> The problem of consciousness is the problem of conscious thought. I
>> can
>> imagine there is no thinker, but I can't imagine there is no
>> consciousness, and that is the problem.
>
> I think it's not really a problem; it is only a problem if you are
> committed to a *certain* conception of consciousness (as irreducible,
> ineffable, intrinsic, etc).
I agree. Now, I am indeed committed to the personal, but I think
universal, fact that consciousness is ineffable in the sense that "I"
cannot give a purely third-person proof that "I" am conscious. Thought
experiments based on the comp hyp can indeed explain that, insofar as
someone is conscious, he/she cannot communicate that fact in any
objective way. But then *that* "meta-fact" can itself be explained in
some objective way.
> I of course am not going to convince you to think of consciousness in
> a Dennettian framework, but just be aware that there is a perfectly
> respectable and mainstream philosophical line that dismisses things
> like Chalmers's "hard problem" as an artifact of bad
> conceptualizing.
I would say there have been about 1500 years of attempts to dismiss the
first person. In general that dismissal comes from a category error.
From the fact that, indeed, the scientific discourse (third person,
objective) cannot make use of first-person knowledge, some people
conclude that no scientific discourse can be made about consciousness.
But you can always build a third-person theory about anything, provided
you give sufficiently clear definitions, postulates, reasonable
inference rules, consistent world-views (the logician's models), etc. A
third-person discourse on the notion of first-person discourse is quite
possible.
> What the implications of this view are for your
> scheme, I can't say yet. My hunch is that it would make things much
> easier to take "intrinsic consciousness" out of the picture and
> replace it with hugely complex behavioral dispositions, etc.
This is dismissing the hard problem. The real question is how to find a
relation between hugely complex behavioral dispositions and
consciousness.
Now, I think consciousness is a "logical child" of "consistency", or of
the "expectation of some possible truth or reality", and logic can
explain why consistent machines (or more general entities) are
bewildered by their true but "unpostulatable" (if I can say so)
self-consistency. This can justify many of the "meta-facts" described
above.
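The logical fact alluded to here is Gödel's second incompleteness theorem, which can be stated in one line (T stands for any consistent, recursively axiomatizable theory extending arithmetic; the "machine" reading identifies T with the machine's provability engine):

```latex
% A consistent machine cannot prove (postulate from within) its own consistency:
T \nvdash \mathrm{Con}(T),
\qquad \text{where } \mathrm{Con}(T) \;\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner 0=1 \urcorner)
```

So the machine's self-consistency is true (by hypothesis) yet unprovable by the machine itself, which is the precise sense of "true but unpostulatable" above.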
>
> Anyway, I had another question: are you trying to *identify* one
> person with one simple (Turing, Lobian, etc) machine?
> I think this
> is a mistake - what if the best way to output your behavior is using
> a collection of millions or billions of machines, some of which are
> broken, some of which mess up the other machines, some of which are
> not found in your brain but are instead in your environment, or in
> other people.. will you say that this can all be reduced to one
> machine? But then why identify *that* machine with one person?
Ah Ah ! This is an excellent remark, except that maybe it shows you
have not studied my work---don't be sorry, nobody is perfect :-)
But the main point is really that: if comp is true, then the mind-body
relation is not one-one. You can, for all practical purposes, attach a
consciousness to some appearance of a "digital machine" (not
necessarily a material one, though), but no digital machine can attach
its own consciousness to any particular machine token---only to an
abstract machine type having a continuum (2^aleph_0) of "token
incarnations" appearing in the universal deployment (as in
Everett-Deutsch QM).
Also, in comp as I present it, if some part of some
neighborhood-environment needs to be emulated for my consciousness to
proceed, then I put that part, by definition, in the "generalized
brain", which is the part of the universe that needs to be duplicated
for *me* to be duplicated. The thought experiments I use are simpler
when one assumes that the brain is the traditional biological one
inside the skull, but that supplementary assumption can be eliminated
once the universal dovetailer (or the universal wave function) is
invoked. Even if the whole Milky Way *is* my brain, once it is
Turing-emulable, you can understand that sooner or later the Universal
Dovetailer will simulate it (infinitely often)---and then remember
that the first person is not aware of any simulation delays.
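The dovetailing idea itself is a simple scheduling trick: at round k, start program k and advance every program started so far by one step, so every program eventually receives arbitrarily many steps even though none is ever run to completion first. A minimal sketch (the `programs` enumeration here is a toy stand-in for an actual enumeration of all Turing machines, purely for illustration):

```python
from itertools import count

def programs():
    """Toy stand-in for an enumeration of all programs:
    "program i" is just the machine counting upward from i."""
    for i in count():
        yield (n for n in count(i))

def dovetail(rounds):
    """Universal dovetailing, truncated to a finite number of rounds.
    Round k starts program k, then runs one step of every program
    started so far; a non-halting program never starves its neighbours."""
    running = []          # list of (program index, program state)
    enum = programs()
    trace = []            # executed (program index, step output) pairs
    for k in range(rounds):
        running.append((k, next(enum)))
        for idx, prog in running:
            trace.append((idx, next(prog)))
    return trace

steps = dovetail(3)
# → [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
```

The point of the interleaving is that the *order* and *delay* of the steps are irrelevant to what is computed, which is why the first person cannot notice any simulation delay.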
But still, bravo (if you haven't read my work), because the fact that
comp forces the mind-body relation to be NOT one-one is a key feature:
to an apparent (complex) body you can associate a mind (and this is
almost just a rule of politeness), but to a mind-state you can only
associate the infinity of possible computational "body-histories" going
through that state. And then the comp mind-body problem reduces
partially to the mathematical justification of the rarity of Harry
Potter magic or other Wonderland sorts of White Rabbits. And if too
many white rabbits are shown to remain, a case against comp is made,
but incompleteness ensures that such a task is not an obvious one.
Bruno
http://iridia.ulb.ac.be/~marchal/
You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Wed Mar 29 2006 - 05:11:26 PST