>
> Russell Standish wrote:
> >
> > Chris, this is a well thought out response, and it persuades me that
> > the difference between consciousness and nonconsciousness could be as
> > little as the "inert block of wood", precisely because it is a
> > physically different system. It actually reminds me of the quantum
> > two-slit experiment. An interference pattern is seen, or not seen,
> > according to whether a detector placed at one of the slits is switched
> > on or not.
> >
> > Maudlin's argument is important, but perhaps the conclusion is not so
> > "reductio ad absurdum" as initially thought.
> >
> > Cheers
>
> Thanks for the feedback! I like your analogy too; it makes
> sense.
>
>
> --
> Chris Maloney
> http://www.chrismaloney.com
>
> "Knowledge is good"
> -- Emil Faber
>
>
>
Thinking about this some more, I realise this is exactly what is going
on. Consider Olympia from an MWI point of view. For the vast majority
of worlds containing Olympia, Klara (I believe that is what the
correcting machinery is called) is active, handling the
counterfactuals. Only on one world line (of measure zero!) is Klara
inactive, and Olympia is simply replaying the previously recorded
data.
Now consider what happens when Klara is turned off, or prevented from
operating. Then, in all world lines, Olympia is simply a replay
device. From the MWI point of view, the simple inert piece of wood is
not so innocuous: it changes the system's dynamics completely.
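The replay-versus-correction setup can be sketched in a few lines of
Python. This is a toy illustration of my own, not Maudlin's formalism:
the names `olympia` and `genuine_machine`, the recorded sequences, and
the flag for the correcting machinery are all invented for the sketch.

```python
# Toy sketch: Olympia replays a pre-recorded output sequence, while the
# correcting machinery (Maudlin's "Klara") steps in whenever the actual
# input diverges from the recorded one, restoring counterfactual
# sensitivity. All names and data here are illustrative inventions.

RECORDED_INPUTS = [0, 1, 1, 0]
RECORDED_OUTPUTS = [1, 0, 0, 1]   # what the original machine produced

def genuine_machine(x):
    """Stand-in for the original computation: here, simple negation."""
    return 1 - x

def olympia(inputs, klara_active=True):
    outputs = []
    for t, x in enumerate(inputs):
        if x == RECORDED_INPUTS[t]:
            outputs.append(RECORDED_OUTPUTS[t])   # blind replay
        elif klara_active:
            outputs.append(genuine_machine(x))    # corrector handles the counterfactual
        else:
            outputs.append(RECORDED_OUTPUTS[t])   # replay regardless -> wrong answer
    return outputs

# On the recorded history, blind replay suffices:
print(olympia([0, 1, 1, 0], klara_active=False))  # [1, 0, 0, 1]
# On a counterfactual history, only the Klara-equipped machine is correct:
print(olympia([1, 1, 1, 0], klara_active=True))   # [0, 0, 0, 1]
print(olympia([1, 1, 1, 0], klara_active=False))  # [1, 0, 0, 1]  (wrong at t=0)
```

The point of the sketch is just that the two machines behave identically
on the recorded world line and differ only on the counterfactual ones.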
Now this has bearing on a supposition I have argued earlier - that
consciousness requires free will, and the only way to have free will is
via the MWI picture. In this context, a Turing machine can never be
conscious, because it follows a preprogrammed path, without free
will. Note this is not the same as saying comp is false, unless you
strictly define computers to be Turing machines. My suspicion is that
adding a genuine random number generator to the machine may be
sufficient to endow the architecture with free will; of course, the
question remains unresolved.
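The contrast drawn here - a preprogrammed path versus one with access to
genuine randomness - can be caricatured in code. This is a toy sketch
under my own assumptions: `os.urandom` merely stands in for a true
physical entropy source, and nothing in it settles the philosophical
question.

```python
import os

def deterministic_step(state):
    # Turing-machine-like update: the next state is fully fixed
    # by the current state, so every run is a replay of the same path.
    return (2 * state + 1) % 7

def rng_step(state):
    # The same update rule, but with one genuinely random bit mixed in;
    # os.urandom stands in here for a true physical entropy source.
    bit = os.urandom(1)[0] & 1
    return (2 * state + bit) % 7

def run(step, start, n):
    state, trace = start, []
    for _ in range(n):
        state = step(state)
        trace.append(state)
    return trace

# Two deterministic runs from the same start are always identical:
print(run(deterministic_step, 3, 5) == run(deterministic_step, 3, 5))  # True
# Two RNG-equipped runs from the same start will, in general, diverge.
print(run(rng_step, 3, 5))
```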
What does this all mean for your thesis, Bruno? Alas, I didn't follow
your argument - not because it was written in French, which I have no
problem with, but because I was not familiar with the modal logic
you employed, and haven't mustered enough enthusiasm to follow up the
references. Could it be implying that you have too restrictive
definitions of both comp and sup-phys?
Quote from Bruno follows:
> This seems rather magical to me. If only because, for a
> computationalist, the only role of the inert block (during the
> particular execution) is to explain why the machine WOULD have given a
> correct answer in case the inputs WOULD have been different.
> This means that you don't associate consciousness with a particular
> physical computation but with the entire set of possible computations.
> But that is exactly what I do ..., and what I mean by the abandonment
> of physical supervenience.
> A singular brain's activity becomes an invention of the mind.
Could it mean that you are defining sup-phys to be supervenience on
the one-track classical physics, rather than on the MWI-style quantum
physics?
Cheers
----------------------------------------------------------------------------
Dr. Russell Standish                    Director
High Performance Computing Support Unit,
University of NSW                       Phone 9385 6967
Sydney 2052                             Fax   9385 6965
Australia                               R.Standish.domain.name.hidden
Room 2075, Red Centre                   http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Received on Sun Jul 25 1999 - 19:08:57 PDT