Re: Searles' Fundamental Error

From: John M <jamikes.domain.name.hidden>
Date: Mon, 19 Feb 2007 08:50:47 -0500

Please see my comments after Jason's remark.
John
  ----- Original Message -----
  From: Jason
  To: Everything List
  Sent: Monday, February 19, 2007 3:42 AM
  Subject: Re: Searles' Fundamental Error

  On Feb 18, 5:46 pm, "Stathis Papaioannou" <stath....domain.name.hidden> wrote:
  > On 2/18/07, Mark Peaty <mpe....domain.name.hidden> wrote:
  >
  > > My main problem with Comp is that it needs several unprovable
  > > assumptions to be accepted.
  I believe that to say that some special substrate is needed for
  consciousness, be it chemical reactions or anything else, is to
  subscribe to an epiphenomenal view. For example, there should be no
  difference in behavior between a brain that operates chemically and
  one which has its chemical reactions simulated on a computer; yet if
  it is the chemicals themselves that are responsible for consciousness,
  that consciousness can have no effect on the brain, because the net
  result will be identical whether the brain is simulated or not. To me,
  epiphenomenalism is a logical contradiction: if consciousness had no
  effect on the brain, we wouldn't wonder about the mind-body problem,
  because the mystery of consciousness would have no way of
  communicating itself to the brain. Therefore, I don't see how anything
  external to the functioning of the brain could be responsible for
  consciousness.

  Jason
  -------------------------------
  JM:
  I think you are working within a limitation and drawing conclusions from this limited model beyond its bounds.
  Whatever we can 'simulate' comes from within our up-to-date knowledge base: our cognitive inventory. That is OK - it is the way humanity has developed over the eras of epistemic enrichment since its dawn. Topics are added and views change as we learn more.
  We have not (yet?) arrived at omniscience.

  So today's simulation is valid only to the extent of today's knowables. Nobody can include the as-yet unknown in a simulation. (See Stathis's remark: "You can't prove that a machine will be conscious in the same way you are.")

  If you insist on considering "the brain", that is OK with me (in my own views I go further, to a total interconnection), but even from the brain you can include in your simulation only what has been learnt about it to date.
  The computer cannot go beyond that either.
  The brain does.
  So our model-simulation is just that: a limited model.
  Are we ready for surprises?

  John M

Received on Mon Feb 19 2007 - 08:57:28 PST
