Re: More on qualia of consciousness and occam's razor
"Pete Carlton" <pmcarlton.domain.name.hidden> wrote:
> [snip]
> > Earlier there were posts about
> > whether SAS-like patterns in a cellular automaton would really be
> > conscious or not. It seems like this question is asking, "I can see
> > how the thing behaves, but what I want to know is, are the lights
> > 'turned on inside' or not?". But we already know that there are no
> > lights -- so what is the question really asking?
You take the "box/brain" analogy too literally. If I rephrase the question
as "I can see how the thing behaves, but what I want to know is: is there
consciousness there?", would you still say "But we already know
that there is no consciousness -- so what is the question really
asking?"?
Well, my remark adds little, in the sense that Eric Cavalcanti
apparently succeeds in pinpointing the contradiction in Pete's post
(through the use of Frank Jackson's "Mary" thought experiment).
Nice piece of dialog. Actually I do think that the box/brain analogy
is not so bad, once we agree to choose another "topology"
for the information space, but for this I need the modal theory of
knowledge S4 ...
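(For reference, and not specific to this thread: S4 is standardly axiomatized as classical propositional logic extended with the following axioms and rule, with the box read here as "knows".)

```latex
\begin{align*}
\text{K:} &\quad \Box(p \to q) \to (\Box p \to \Box q) \\
\text{T:} &\quad \Box p \to p
           && \text{(what is known is true)} \\
\text{4:} &\quad \Box p \to \Box\Box p
           && \text{(positive introspection)}
\end{align*}
% plus the necessitation rule: from a theorem p, infer \Box p.
```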
Still, the box/brain analogy *does* lead to wrong statements, and
indeed this occurs again in Eric Cavalcanti's and Stathis Papaioannou's
later replies in that thread. Look:
Stathis Papaioannou wrote (and Eric Cavalcanti endorsed it):
>Actually, you probably _could_ drive your brain into "seeing red" if you
>knew exactly what physical processes occur in the brain in response to a
>red stimulus, and if you had appropriate neural interface equipment. Such
>capabilities do not currently exist - not even close - but the idea is the
>basis of many SF stories (eg. William Gibson, Greg Egan). The same sort of
>thing frequently occurs naturally: the definition of a hallucination is a
>perception without the stimulus that would normally give rise to that
>perception. The point is, even if you knew in perfect detail what happens
>in your brain when you see red, you would not actually
>know/feel/experience the qualia unless you ran the software on your own
>hardware.
Of course I mainly agree with Stathis here, and with Eric's
assessment, but Stathis formulates it in just the way that
invites abuse of the box analogy. Indeed, the only way
to actually know/feel/experience the qualia is to "run" the right
software, and that software really *defines* the owner.
The choice of hardware makes no difference, and the owner
of the hardware makes no difference, because the owner
is really defined by the (running) software.
To be even more exact, there is eventually
no need even to run the software, because eventually the box
itself is a construction of the mind, and is defined by the
(possible) software/owner.
That also illustrates that you cannot see "blue" as someone else
sees "blue" by running [someone else's software] on
"your hardware": if you run [someone else's software]
on your hardware, you will just duplicate that someone else,
your hardware will become [someone else's hardware!],
and *you* will just disappear (locally).
Bruno
Received on Tue Feb 17 2004 - 06:31:06 PST