RE: Maudlin's Demon (Argument)

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Fri, 6 Oct 2006 22:09:09 +1000

John,

I should have been more precise with the terms "copy" and "emulate".
What I was asking is whether a robot that experiences something while
it is shovelling coal (this of course assumes that a robot can have experiences)
would experience the same thing if it were fed inputs to all its sensors exactly
matching those it would receive while doing its job normally, so that it could
not tell the inputs were in fact a sham. It seems to me that if the answer is
"no", the robot would need some mysterious extra-computational knowledge of the
world, which I find very difficult to conceptualise if we are talking about a
standard digital computer. It is easier to conceptualise that such
non-computational effects may be at play in a biological brain, which would
then be an argument against computationalism.
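To make the point concrete: a standard digital computer is a deterministic
function of its input sequence, so identical inputs produce identical internal
states regardless of whether those inputs come from real sensors or a replay.
The sketch below is purely illustrative (the controller, its update rule, and
all names are hypothetical, not anything from this thread):

```python
# Illustrative sketch: a deterministic controller's internal state depends
# only on the input sequence it receives, not on where that sequence came
# from. The update rule below is arbitrary; any pure function would do.

def run_controller(inputs):
    """Fold a stream of sensor readings into an internal state.

    The controller has no access to the provenance of its inputs, only
    to their values, so it cannot distinguish real from sham readings.
    """
    state = 0
    history = []
    for reading in inputs:
        state = (state * 31 + reading) % 10_007  # arbitrary update rule
        history.append(state)
    return state, history

# "Real" sensor readings gathered while shovelling coal...
real_inputs = [3, 1, 4, 1, 5, 9, 2, 6]
# ...versus a sham replay of exactly the same values.
sham_inputs = list(real_inputs)

real_state, real_history = run_controller(real_inputs)
sham_state, sham_history = run_controller(sham_inputs)

# Identical inputs yield identical state trajectories:
assert real_state == sham_state
assert real_history == sham_history
```

If experience supervenes on the computation, the two runs should be
experientially identical; if it does not, something beyond the computation
must be doing the work, which is the dilemma posed above.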

Stathis Papaioannou

> Stathis:
> let me skip the quoted texts and ask a particular question.
> ----- Original Message -----
> From: "Stathis Papaioannou" <stathispapaioannou.domain.name.hidden>
> Sent: Wednesday, October 04, 2006 11:41 PM
> Subject: RE: Maudlin's Demon (Argument)
> You wrote:
> Do you believe it is possible to copy a particular consciousness by
> emulating it, along
> with sham inputs (i.e. in virtual reality), on a general purpose computer?
> Or do you believe
> a coal-shovelling robot could only have the coal-shovelling experience by
> actually shovelling
> coal?
>
> Stathis Papaioannou
> ---------------------------------
> My question is about 'copy' and 'emulate'.
>
> Are we considering 'copying' the model and its content (in which case the
> last sentence, about the coal-shovelling robot, applies), or do we include
> the unlimited interconnections of "experience", beyond the particular model
> we are talking about?
> If we go "all the way" and include all input from the unlimited totality
> that may 'format' or 'complete' the model-experience, then we re-create the
> 'real thing' and it is not a copy. If we restrict our copying to the aspect
> in question (model) then we copy only that aspect and should not draw
> conclusions on the total.
>
> Can we 'emulate' totality? I don't think so. Can we copy the total,
> unlimited wholeness? I don't think so.
> What I object to is the restriction of "thinking" within a model while
> drawing conclusions from it about what lies beyond it.
> Which looks to me like a category mistake.
>
> John Mikes

You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Fri Oct 06 2006 - 08:09:29 PDT
