Brent Meeker wrote:
> Colin Hales wrote:
>
>> Brent Meeker wrote:
>>
>>> Colin Hales wrote:
>>>
>>>
>>>> Man, this is a tin of worms! I have just done a 30-page detailed
>>>> refutation of computationalism.
>>>> It's going through peer review at the moment.
>>>>
>>>> The basic problem that most people fall foul of is the conflation of
>>>> 'physics-as-computation' with the type of computation that is being
>>>> carried out in a Turing machine (a standard computer). In the paper I
>>>> drew an artificial distinction between them. I called the former NATURAL
>>>> COMPUTATION (NC) and the latter ARTIFICIAL COMPUTATION (AC). The idea is
>>>> that if COMP is true then there is no distinction between AC and NC. The
>>>> distinction should fail.
>>>>
>>>> I found one and only one situation/place where AC and NC part company.
>>>> Call this situation X.
>>>>
>>>> If COMP is false in this one place X, it is false as a general claim. I
>>>> also found two downstream (consequential) failures that ultimately get
>>>> their truth-basis from X, so they are a little weaker as formal
>>>> arguments against COMP.
>>>>
>>>> *FACT*: Humans make propositions that are fundamentally of an informal
>>>> nature. That is, the utterances of a human can be inconsistent and form
>>>> a fundamentally incomplete set (we don't 'know everything'). The
>>>> quintessential definition of a scientist is a 'correctable liar'. When a
>>>> hypothesis is uttered, it has a status indistinguishable from a lie.
>>>> Humans can participate in the universe in ways which can (apparently)
>>>> violate any law of nature. Humans must be able to 'violate' laws of
>>>> nature in the process of accessing new/novel formal systems to describe
>>>> the unknown natural world. Look at the world. It is not hard to see how
>>>> humans exemplify an informal system. All over the world are quite normal
>>>> (non-pathologically affected) humans with the same sensory systems and
>>>> mental capacities. Yet all manner of ignorance and fervently held
>>>> contradictory belief systems are ‘rationally’ adopted.
>>>> ===================
>>>> COMP fails when:
>>>> a) You assume COMP is true and build an artificial (AC/computer)
>>>> scientist <Sa>, and you expect <Sa> to be able to carry out authentic,
>>>> original science on the a-priori unknown... identically to humans. To do
>>>> this you use a human-originated formal model (law of nature) *ts*: your
>>>> computer computes *ts*, you EMBODY the computer in a suitable robotic
>>>> form, and then you expect it to do science like humans. If COMP is true,
>>>> then the human scientist and the robot scientist should be
>>>> indistinguishable.
>>>>
>>>> b) You then discover that it is fundamentally impossible for <Sa> to
>>>> debate/propose that COMP is a law of nature.
>>>>
>>>> c) Humans can debate/propose that COMP is a law of nature.
>>>>
>>>> BECAUSE: (b) <> (c), they are distinguishable; NC and AC are different.
>>>> THEREFORE: *ts* cannot be the 'law of nature' for a scientist.
>>>> THEREFORE: COMP is false in the special case of (b).
>>>> THEREFORE: COMP is false as a general claim.
>>>>
>>>> (b) is not a claim of truth or falsehood. It is a claim that the very
>>>> idea of <Sa> ever proposing COMP (= doubting that COMP is true) is
>>>> impossible. This is because it is a formal system trying, with a fixed,
>>>> formal set of rules (even one self-modifying according to yet more
>>>> rules), to construct statements that are the product of an informal
>>>> system (a human scientist). The very idea of this is a contradiction in
>>>> terms.
>>>>
>>>>
>>> I don't see it. I can write a simple computer program that constructs statements which
>>> are a subset of those produced by humans (or any other system). Bruno's UD produces *all*
>>> such statements. So where's the contradiction?
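For concreteness, a minimal sketch of the kind of trivial generator being described might look like the following Python (the alphabet, names, and output are illustrative assumptions, not anything from the thread or from Bruno's UD):

    from itertools import count, product

    # Illustrative symbol set (an assumption); any finite alphabet would do.
    ALPHABET = "abcdefghijklmnopqrstuvwxyz .,"

    def all_statements():
        # Enumerate every finite string over ALPHABET, shortest first.
        # Any statement a human could type with these symbols appears
        # eventually, which is the trivial sense in which a simple
        # program generates a superset of human utterances.
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)

    gen = all_statements()
    for _ in range(5):
        print(next(gen))  # the first few one-character "statements"
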
>>>
>>>
>>>
>> Yes, you can generate all such statements. But then what? So what?
>> Please re-read the scenario. This situation is very, very specific:
>>
>> 1) Embodied situated robot scientist <Sa> is doing science on the
>> 'natural world'.
>>
>> 2) As a COMP artificial scientist <Sa>, you are software. A formal
>> system *ts* computes you.
>>
>> 3) All you ever do is categorise patterns and cross-correlate patterns
>> in massive streams of numbers that arrive from your 'robot scientist
>> suit'.
>>
>> 4) <Sa> is a SCIENTIST. The entirety of the existence of <Sa> involves
>> dealing with streams of numbers that result from an encounter with
>> the radically unknown, for which <Sa> is trying to find a 'universal
>> abstraction' = 'a law of nature'.
>>
>> 5) There is no 'out there in an environment' for <Sa>. There is only an
>> abstraction (a category called) 'out there'. You cannot project any kind
>> of human 'experience' into <Sa>. REASON: If COMP is true, then
>> computation (abstract symbol manipulation of the formal *ts*) is all the
>> COMP <Sa> needs to be a scientist. <Sa> can only be imagined as operating
>> 'in the dark'. (I spent a whole section on ensuring this spurious
>> projection does not occur in the reader of my paper!)
>>
>> 6) *ts* has been assumed possible by assuming COMP is true.
>>
>> 7) The paper is a reductio ad absurdum proof that COMP is false.
>>
>> 8) The contradiction that I use is that the human and the COMP scientist
>> are different (when, if COMP is true, they should be the same). The
>> difference is that a human can postulate that COMP is true and be WRONG.
>> _The COMP-Sa cannot do this_... because it can never know when it is
>> wrong! Humans are an INFORMAL system. Informal systems can break rules.
>>
>> Broken rules do NOT come labeled as broken.
>> Faked authentic rules do not come labeled as forgeries.
>>
>> <Sa> cannot cope with either. The aberrant behaviour of <Sa> is not that
>> it can't, in principle, deal with it. _It's that there is no way for <Sa>
>> to know that it is a possibility_. If you try to 'fix it' by
>> pre-programming what all forgeries or broken rules look like... well, you
>> can see that is just plain never gonna work.
>>
>> Get it?
>>
>
> Nope. It's just an assertion that informal systems can do something formal systems can't,
> which, as lawyers say, is a fact not in evidence.
>
> Brent
>
>
Eh?
I wrote a whole para in my original post labelled FACT.
What planet do you live on?
On the planet I live on, it is not hard to see how humans exemplify an
informal system. All over the world are quite normal (non-pathologically
affected) humans with the same sensory systems and mental capacities.
Yet all manner of ignorance and fervently held contradictory belief
systems are ‘rationally’ adopted. That very same brain material, with a
bit of added evidential rigor, becomes a scientist.
Scientists are rationally WRONG in completely free, correctable ways
that a formal system cannot match. The formal system can be equally
wrong... but it CANNOT correct itself the way a human can. When you try to
get a formal (Turing) machine to behave as (specifically) a human
scientist does, _you fail_ for that reason.
1) You have a planet-load of evidence of an informal system (human
scientists).
2) You have the truth of COMP critically dependent on a formal system
being able to do what humans do.
3) It can't do one very specific thing... be WRONG in the way a human can
(in the specific fashion cited).
I didn't "assert", I "measured".
"fact not in evidence" be damned! Open your eyes.
Colin