Re: Can mind be a computation if physics is fundamental?

From: Quentin Anciaux <allcolor.domain.name.hidden>
Date: Thu, 13 Aug 2009 10:31:01 +0200

2009/8/13 Colin Hales <c.hales.domain.name.hidden>:
>
>
> Quentin Anciaux wrote:
>
> 2009/8/12 Colin Hales <c.hales.domain.name.hidden>:
>
>
> My motivation to kill COMP is purely aimed at bringing a halt to the delusion
> of the AGI community that Turing-computing will ever create a mind. They are
> throwing away $millions based on a false belief. Their expectations need to
> be scientifically defined for a change. I have no particular interest in
> disturbing any belief systems here except insofar as they contribute to the
> delusion that COMP is true.
>
> 'nuff said. This is another minor battle in an ongoing campaign. :-)
>
> Colin
>
>
> You want so much for COMP to be false that you've forgotten along the way
> that your argument is flawed from the start... You start with "AI can't do
> science" to conclude that... tada... AI can't do science. It's absurd.
>
> Quentin
>
>
>
>
> It is a 'reductio ad absurdum' argument.
>
> My argument does not start with AI can't do science.
>
> It starts with the simple posit that if COMP is true then all differences
> between a COMP world (AC) and the natural world (NC) should be zero under
> all circumstances and the AC/NC distinction would be false. That is the
> natural result of the unconditional universality of COMP, yes?
>
> OK.
>
> This posit is not an assumption that AC cannot be a scientist.
>
> The rationale is that if I can find just one circumstance consistent
> with/sustaining that difference, then the posit of the universal truth
> of COMP is falsified. The AC/NC distinction is upheld.
>
> I looked and found one place where the difference is viable, a difference
> that only goes away if you project a human viewpoint into the 'artificial
> scientist' (i.e. valid only by additional assumptions)... that position is
> that the NC artificial scientist cannot ever debate COMP as an option. Not
> because it can't construct the statements of debate, but because it will
> never be able to detect a world in which COMP is false, because in that
> world the informal systems involved can fake all evidence and lead the COMP
> scientist by the nose anywhere they want. If the real world is a place where
> informal systems exist, those informal systems can subvert/fake all COMP
> statements, no matter what they are, and the COMP scientist will never know.
> It can be 100% right, think it's right, and yet not actually be connected to
> the reality of it. A world in which COMP is false can never verify that
> it is. Do not confuse this 'ability to be fooled' with an inability to
> formulate statements which deal with inconsistency.
>
> The place where we get an informal system is in the human brain,

Here is where you *assume* AI can't do science, by assuming a human is an
informal system (and therefore different from an AI). Where is the
demonstration of this?

Because the fact is, if COMP is true, consciousness is digitalisable
and can "run" on computer hardware; therefore consciousness is not
an informal system (it's program+data). As you did not demonstrate
that (and that's the whole point), you have not demonstrated anything.


> which can
> 'symbolically cohere and explore' any/all formal systems. I specifically
> chose the human brain of a scientist, the workings of which were used to
> generate the 'law of nature' running the artificial (COMP) scientist (who
> must also be convinced COMP is true in order to bother at all!). I can see
> how, as a human, I could 100% fake the apparent world that the COMP entity
> examines COMP-ly and it will never know. (The same way that a brilliant
> virtual reality could 100% fool a human and we'd never know. A virtual
> reality that fools us humans is not necessarily made of computation either.)
>
> I am not saying humans are magical. I am saying that humans do not operate
> formally like COMP.... and that 'formally handling inconsistency' is not the
> same thing as 'delivering inconsistency by being an informal system'. BTW I
> mean informal in the Gödelian sense... simultaneous inconsistency and
> incompleteness.
>
> This is a highly self-referential situation. Resist the temptation to assume
> that a COMP/NC scientist's construction of statements capturing inconsistency
> is equivalent to dealing with the intrinsic inconsistency of the human-brain
> kind. Also reject the notion that the brain is computing of the COMP
> (Turing) type. This is not the case.
>
> You might also be interested in:
> Bringsjord, S. 1999. The Zombie Attack on the Computational Conception of
> Mind. Philosophy and Phenomenological Research LIX: 41-69.
> He ends with: "In the end, then, the zombie attack proves lethal:
> computationalism is dead."
>
> It's a formal modal logic argument to the same end as mine... in the end,
> they are actually the same argument. It's just not obvious. I like mine
> better because it has the Gödelian approach. The informality issue has some
> elaboration here:
> Cabanero, L. L. and Small, C. G. 2009. Intentionality and Computationalism:
> A Diagonal Argument. Mind and Matter 7: 81-90.
> Also here:
> Fetzer, J. H. 2001. Computers and Cognition: Why Minds are Not Machines.
> Kluwer Academic Publishers.
>
> I am hoping that between these and a few others, the issue is sealed. I know
> it'll take a while for the true believers to come around. It's not such a
> big deal... except when $$$ + wasted time promulgate bad science and
> magical thinking in the form of a kind of 'fashion preference' based on
> presumptions that the natural world is obliged to operate according to
> human-constructed 'isms'.
>
> If I look at the natural world and it tells me COMP is true then I will use
> that stance scientifically.
> If I look at the natural world and it tells me COMP is false then I will use
> that stance scientifically.
>
> I have no desire for one or the other. I desire merely truth, as best I can
> assemble it, scientifically.
>
> I hope this sorts it out. I am done for now. Stuff to do. If anyone wants
> the cited papers, email me off-list.
>
> cheers
> colin
>
>
> >
>



-- 
All those moments will be lost in time, like tears in rain.