I appreciate that there are genuine problems in the theory of computation as
applied to intelligent and/or conscious minds. However, we know that
intelligent and conscious minds do in fact exist, running on biological
hardware. The situation is a bit like seeing an aeroplane in the sky and
then trying to work out the physics of heavier-than-air flight: if your
proof shows that such flight is impossible, then there has to be something
wrong with your proof.
If it does turn out that the brain is not Turing emulable, what are the
implications? Could we still build a conscious machine with the
appropriate soldering and coding, or would we have to surrender to
dualism, an immaterial soul, Roger Penrose, or what?
--Stathis Papaioannou
>From: "Stephen Paul King" <stephenk1.domain.name.hidden>
>To: <Fabric-of-Reality.domain.name.hidden>
>CC: <everything-list.domain.name.hidden>
>Subject: Re: Olympia's Beautiful and Profound Mind
>Date: Sat, 14 May 2005 20:41:04 -0400
>
>Dear Lee,
>
> Let me use your post to continue our offline conversation here for the
>benefit of all.
>
> Is the idea of a computation well founded or non-well-founded? Usually
>TMs and other finite (or infinite!) state machines are assumed to have a
>well-founded set of states, such that there are no "circularities" nor
>infinite descending sequences in their specifications. See:
>
>http://www.answers.com/topic/non-well-founded-set-theory
>
>
> One of the interesting features that arises when we consider whether it
>is possible to faithfully represent 1st person experience of the world
>- "being in the world", as Sartre wrote - in terms of computationally
>generated simulations is that circularities arise almost everywhere.
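[The well-founded/non-well-founded distinction above can be sketched in code. This is a loose illustration only, not Aczel's anti-foundation axiom itself; Python lists stand in for "sets", and every name below is invented for the example.]

```python
# A minimal sketch of well-founded vs. non-well-founded structure,
# using Python lists as stand-in "sets" (an illustration, not AFA).

def is_well_founded(obj, seen=None):
    """Return False if following the membership relation revisits an
    object, i.e. the structure contains a cycle and is not well founded."""
    if seen is None:
        seen = set()
    if not isinstance(obj, list):
        return True            # atoms terminate every descending chain
    if id(obj) in seen:
        return False           # we came back to something we are inside of
    seen = seen | {id(obj)}
    return all(is_well_founded(member, seen) for member in obj)

finite = [1, [2, [3]]]         # every membership chain bottoms out
omega = []                     # the circular case: omega is a member of itself
omega.append(omega)

print(is_well_founded(finite))  # True
print(is_well_founded(omega))   # False
```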
>
> Jon Barwise, Peter Wegner and others have pointed out that the usual
>notions of computation fail to take this issue properly into account and
>have been operating in a state of denial about a crucial aspect of the
>notion of conscious awareness: how can an a priori specifiable computation
>contain an internal representational model of itself that depends on its
>choices and interactions with "others", when those others are not
>specified within the computation?
>
>http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/b/Barwise:Jon.html
>http://www.cs.brown.edu/people/pw/
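[Wegner's contrast between a closed algorithm and an interactive machine can be sketched as follows. The function names are illustrative inventions, not taken from any of the cited papers.]

```python
# A sketch of closed vs. interactive computation: the closed version has
# its whole input specified in advance, while the interactive version's
# behaviour depends on inputs the environment supplies only at run time.

def closed_sum(xs):
    """A classical computation: the input is fully specified up front."""
    return sum(xs)

def interactive_sum():
    """An interaction machine: each output depends on choices that the
    environment (the "others") makes after the machine has started."""
    total = 0
    while True:
        x = yield total        # emit current state, wait for the environment
        total += x

env = interactive_sum()
next(env)                      # start the machine; it reports 0
print(env.send(3))             # environment chooses 3 -> machine reports 3
print(env.send(4))             # environment chooses 4 -> machine reports 7
```

No finite transcript of `interactive_sum`'s code fixes its future outputs; those also depend on the unspecified stream of `send` values, which is the point at issue.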
>
> Another aspect of this is the problem of concurrency.
>
>http://www.cs.auckland.ac.nz/compsci340s2c/lectures/lecture10.pdf
>http://boole.stanford.edu/abstracts.html
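[The concurrency problem can be made concrete with a standard lost-update example: two processes that each increment a shared counter can end at different totals depending on how their read and write steps interleave. The scheduling model below is an invented toy, enumerating interleavings deterministically rather than racing real threads.]

```python
# Two "processes" A and B each do: read shared counter, then write back
# read-value + 1. We enumerate every interleaving of their steps.

from itertools import permutations

def run(schedule):
    """Execute a schedule of process names; a process's first appearance
    is its read step, its second appearance is its write step."""
    shared = 0
    local = {}
    done_read = set()
    for p in schedule:
        if p not in done_read:
            local[p] = shared      # read step
            done_read.add(p)
        else:
            shared = local[p] + 1  # write step
    return shared

# All interleavings of A.read, A.write, B.read, B.write (each process's
# read necessarily precedes its own write):
results = {run(order) for order in set(permutations("AABB"))}

print(sorted(results))             # [1, 2]: the lost update shows up as 1
```

A single sequential machine always yields 2 here; the extra outcome 1 exists only because the interleaving is not specified in either process's own program.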
>
> I am sure that I am being a foolish tyro in this post. ;-)
>
>Kindest regards,
>
>Stephen
>
>----- Original Message ----- From: "Lee Corbin" <lcorbin.domain.name.hidden>
>To: <Fabric-of-Reality.domain.name.hidden>
>Cc: <everything-list.domain.name.hidden>
>Sent: Saturday, May 14, 2005 2:00 AM
>Subject: RE: Olympia's Beautiful and Profound Mind
>
>
>>Hal writes
>>
>>>We had some discussion of Maudlin's paper on the everything-list in 1999.
>>>I summarized the paper at http://www.escribe.com/science/theory/m898.html
>>>.
>>>Subsequent discussion under the thread title "implementation" followed
>>>...
>>>I suggested a flaw in Maudlin's argument at
>>>http://www.escribe.com/science/theory/m1010.html with followup
>>>http://www.escribe.com/science/theory/m1015.html .
>>>
>>>In a nutshell, my point was that Maudlin fails to show that physical
>>>supervenience (that is, the principle that whether a system is
>>>conscious or not depends solely on the physical activity of the system)
>>>is inconsistent with computationalism.
>>
>>It seemed to me that he made a leap at the end.
>>
>>>(In fact, I argued that the new computation is very plausibly conscious,
>>>but that doesn't even matter, because it is sufficient to consider that
>>>it might be, in order to see that Maudlin's argument doesn't go through.
>>>To repair his argument it would be necessary to prove that the altered
>>>computation is unconscious.)
>>
>>I know that Hal participated in a discussion on Extropians in 2002 or 2003
>>concerning Giant Look-Up Tables. I'm surprised that either in the course
>>of those discussions he didn't mention Maudlin's argument, or that I have
>>forgotten it.
>>
>>Doesn't it all seem of a piece? We have, again, an entity that either
>>does not compute its subsequent states or (as Jesse Mazer points out)
>>does so in a way that looks suspiciously like the replay of a recording
>>of an actual prior calculation.
>>
>>The GLUT was a device that seemed to me to do the same thing, that is,
>>portray subsequent states without engaging in bona fide computations.
>>
>>Is all this really the same underlying issue, or not?
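[The Giant Look-Up Table Lee describes can be sketched directly: for any function over a finite domain, a table precomputed once reproduces the same input/output behaviour with no live computation at lookup time. The names and the toy function are invented for the illustration.]

```python
# Stage 1: run a genuine computation once over a finite domain.
def live(n):
    """Compute a sum of squares the hard way on every call."""
    return sum(i * i for i in range(n + 1))

DOMAIN = range(100)
glut = {n: live(n) for n in DOMAIN}   # the "giant" look-up table

# Stage 2: replay answers by table lookup alone; no arithmetic occurs.
def glut_version(n):
    return glut[n]

# Externally, the two are indistinguishable on the domain:
print(all(live(n) == glut_version(n) for n in DOMAIN))  # True
```

The puzzle is exactly the one raised above: if consciousness supervenes on physical activity, the table's inert replay and the live computation differ physically yet agree on every observable output.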
>>
>>Lee
>>
>>
>>
>>
>>
>
Received on Sun May 15 2005 - 08:42:13 PDT