Colin Hales wrote:
> Hello again Jesse,
> I am going to assume that by trashing computationalism, Marc Geddes
> has enough ammo to vitiate Eliezer's various predilections... so... to
> that end...
>
> Your various comments (see below) have a common thread of the form "I
> see no reason why you can't ...X". So let's focus on reasons why you
> can't ...X. These are numerous and visible in real, empirically
> verifiable physics. Let's look at it from a dynamics point of view. In
> saying "you can see no reason...", you mean that if you chose a
> computationalist abstraction level (you mentioned atoms), you would
> claim the resultant agent able to demonstrate scientific behaviour
> indistinguishable from a human's.
>
> I would claim that to be categorically false, and testably so. OK.
> Firstly, call the computationalist artificial scientist COMP_S, the
> human scientist HUMAN_S, and computationalism COMP. This saves a lot of
> writing! The test regime:
>
> HUMAN_S constructs laws of nature tn using the human faculty for
> observation (call it P) delivered by real atoms in the brain of HUMAN_S.
> If COMP_S and HUMAN_S are to be indistinguishable then the state
> dynamics (state vector space) of COMP_S must be as sophisticated and
> accessible as those of HUMAN_S and ALSO /convergent on the same outcomes
> as those of HUMAN_S/. Our test is that they both converge on a law of
> nature tn, say. Note: tn is an abstracted statement of an underlying
> generalisation in respect of the distal external natural world (such as
> tn = ta, a model of an atom). Yes? That is what we do... the portability
> of laws of nature tn proves that we have rendered such abstractions
> invariant to the belief dynamics of any particular scientist. Yes?
>
> HUMAN_S constructs a model of atoms, a 'law of nature' ta. Using that
> model ta we then implement a sophisticated computational version of
> HUMAN_S at the level of the model: atoms. We assemble an atomic-level
> model replica of HUMAN_S. We run the computation on a host COMP
> substrate. This becomes our COMP_S. We expect the two to be identical to
> the extent of delivering indistinguishable scientific behaviour. We
> embody COMP_S with IO as sophisticated as a human's and wire it up... If
> the computationalist position holds, by definition, the dynamics of
> COMP_S must be (a) complex enough and (b) have access to sufficient
> disambiguated information to construct tn indistinguishably from HUMAN_S.
>
> If computationalism is true then, given the same circumstance of
> original knowledge paucity (which can be tested), a demand for a
> scientific outcome should result in state-vector dynamics that adapt to
> deliver the same tn (also testable), which we demand shall be radically
> novel... If they are really equivalent this should happen. This is the
> basic position (I don't want to write it out again!)
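>
> As a minimal sketch of the test harness only (every name here is
> hypothetical, and the agent internals are of course the entire open
> question):
>
> def run_test(agent, novel_world, trials=100):
>     """Present an a-priori unknown environment; collect the laws of
>     nature the agent hypothesises from IO measurements alone."""
>     laws = []
>     for _ in range(trials):
>         observations = novel_world.emit()             # IO measurements only
>         laws.append(agent.hypothesise(observations))  # no oracle, no labels
>     return laws
>
> # The testable claim: starting from the same paucity of knowledge,
> # run_test(HUMAN_S, world) and run_test(COMP_S, world) must converge
> # on the same radically novel tn.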
> ================================================
>
> I would claim the state trajectory of COMP_S to be fatally impoverished
> by the model ta (abstracted atoms). That is, the state trajectory of
> COMP_S would fail to consistently converge on a new law of nature tn and
> would demonstrate instability (chaotic behaviour). Just as ungrounded
> power supply voltages drift about, a symbolically ungrounded COMP_S
> will epistemically drift about.
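>
> The analogy can be made concrete with a toy numerical sketch (assumed
> dynamics only, nothing more): a state updated with no corrective
> coupling to a referent random-walks without bound, while a grounded
> one stays pinned.
>
> import random
>
> def trajectory(grounded, steps=10000):
>     """Toy analogy: an ungrounded state drifts; a grounded one is
>     continually pulled back toward its referent in the world."""
>     state, referent = 0.0, 0.0
>     for _ in range(steps):
>         state += random.gauss(0, 0.1)           # intrinsic noise
>         if grounded:
>             state -= 0.05 * (state - referent)  # corrective coupling
>     return state
>
> print(trajectory(grounded=False), trajectory(grounded=True))
> # ungrounded: drifts like sqrt(steps); grounded: stays bounded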
>
> Indeed I would hold that would be the case no matter what the
> abstraction level: sub-atomic, sub-sub-atomic, sub-sub-sub-atomic, etc.
> The result would be identical. Remember: there's no such 'thing' as
> atoms... these are an abstraction of a particular level of the
> organisational hierarchy of nature. Also note: so-called ab-initio
> quantum mechanics of the entire HUMAN_S would also fail, because QM is
> likewise just an abstraction of reality, not reality. COMP would claim
> that the laws of nature describing atoms behave identically to atoms.
> The model ensemble of ta atoms should be capable of expressing all the
> emergent properties of an ensemble of real atoms. This already makes
> COMP a self-referential, question-begging outcome. HUMAN_S is our
> observer, made of real atoms. COMP assumes that P is delivered by
> computing ta when there is no such 'thing' as atoms! Atoms are an
> abstraction of a thing, not a thing. Furthermore, all the original
> atoms of HUMAN_S have been replaced with the atoms of the COMP_S substrate.
>
> What is NOT in law of nature ta is the relationship between the
> abstraction ta and all the other atoms in the distal world outside
> COMP_S (beyond the IO boundary). Assume you supplied all the data about
> all the atoms in the environment of the original human HUMAN_S used to
> construct and initialise COMP_S. You know all these relationships at the
> moment you measured all the atoms in HUMAN_S to get your model
> established. However, after initialisation, when you run COMP_S, all
> relationships of the model with the distal world (those intrinsic to the
> atoms which the model replaced) are GONE...
I can't tell from your exposition whether you are assuming that the external
world is modelled along with COMP_S, or whether COMP_S is provided with
sensory mechanisms so as to interact with the world.
> the instant the
> abstraction happens, from that moment on you know NOTHING about the
> current state of the distal environment... all you have is IO
> measurements.
And that's all a human scientist has too - IO measurements by his senses.
> You cannot claim that the model includes all those
> relationships because you are doing SCIENCE and you cannot a-priori know
> these...
>
> That is, the very thing you mention below - interaction between
> component parts - cannot be claimed to be 100% complete, because all the
> relationships with the distal natural world are GONE. The relationship
> of the original atoms with space and everything else has been replaced
> by a model inside a totally different substrate, where the relationships
> are abstracted and cannot even be guessed. They are GONE. You can't
> replace them because you are doing science and you don't know where the
> items are, nor do you know their nature - for you are doing science to
> find that out! The IO is only degenerately related to the distal world
> (many distal configurations yield the same IO) and there is no
> supervision.
>
> RE: No free lunch theorem (NFL)
> The scenario around which NFL is constructed is functionally
> indistinguishable from COMP. Your "machine learning" mission is to
> choose functions to match measurements, not to match laws of nature to
> observations. The former happens at the periphery (in data). The latter
> happens in the phenomenal consciousness of an appropriately endowed
> scientist who has a view of the origination of the measurements and can
> then contextualise them into a law of nature. There is no a-priori way
> of distinguishing a 'law of nature' describing the origination of
> measurements from the measurements themselves. It is impossible to
> construct such a thing from the peripheral measurements alone... QM
> degeneracy prohibits that. A function that predicts the behaviour of
> data is *not a law of nature*. The data in COMP_S arrives, just as it
> arrives in the NFL scenario, without context - no amount of IO
> cross-correlation restores access to the distal real world. Remember,
> the artificial scientist is required to learn tn in a fundamentally
> unsupervised way.
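>
> A toy illustration of that degeneracy (illustrative only, and far
> simpler than the QM case): two different 'laws' of origination can
> produce statistically identical peripheral measurements, so no
> function fitted to the data stream alone can tell them apart.
>
> import random
>
> # Two distinct "laws of nature" for the origin of the data...
> def law_A(n):   # a single uniform process on (-1, 1)
>     return [random.uniform(-1, 1) for _ in range(n)]
>
> def law_B(n):   # a random sign applied to a uniform magnitude
>     return [random.choice([-1, 1]) * random.uniform(0, 1)
>             for _ in range(n)]
>
> # ...whose measurements are identically distributed: any function
> # fitted to the samples predicts both equally well, yet says nothing
> # about which origination is the real one.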
>
> *Game over!*
Don't think so. I think all your argument shows is that some form of embodiment
is needed for intelligence. But that doesn't show that computationalism is
false, only that it's impractical - one has to know too much to duplicate human
intelligence because it includes duplicating so many relations. This may be
true - or it may not. But it doesn't show COMP_S is impossible in a logical or
nomological sense.
Brent
> ======================================
> Note that this does not mean I hold that an "artificial scientist" is
> impossible. Far from it: it is my mission in life (artificial general
> intelligence). I hold that it will not happen with ABSTRACT computation.
> I aim to build chips with all the REAL molecular electrodynamics in them,
> where everything important to cognition is conserved... It's what my
> science is about. I am not building a COMP_S. I am building an
> INORGANIC_S with all the physics in it. Such a creature does NOT do
> abstract computation (manipulate abstract symbols). Yes, there is
> manipulation of the abstraction called the 'action potential'... but
> this is < 50% of reality.
>
> So once again I reiterate: COMP is FALSE, but not in the way you think.
> It's false as a general claim because it can't simulate a scientist in
> the act of doing an original scientific act on the a-priori unknown.
> Indeed the use of the word 'simulate' here is a contradiction in terms.
> You can't simulate an original scientific act. If an artefact does the
> science then there is no simulating going on - the act is REAL SCIENCE.
> Conversely, if you get an artefact to simulate a scientific act then it
> cannot be original (you must have all the knowledge a-priori)... and
> therefore, when faced with radical novelty, the same agent will fail,
> because ultimately it relied on human supervision - humans defined what
> novelty looks like.
>
> This position is not a trivial or simplistic one. It clarifies a great
> deal in an unexpected way. Throughout this whole discourse the
> assumption has been that you can 'simulate' everything. This is almost
> true... 99.9999999% true... except for this one special
> circumstance: simulating an original scientific act, where
> simulation is merely meaningless/useless, not wrong!
>
> A final nuance.
>
> In claiming COMP to be false now, I make the claim based merely on the
> balance of probabilities - after proper critical argument in respect of
> design choices. Maybe one day, when we've built INORGANIC_S with the
> full electrodynamics of real brain material and acquired more knowledge
> of the possible roles of abstract computation in such an entity - maybe
> then we'll be in a better position to entertain 100% COMP_S. I doubt it,
> but I'm willing to entertain the possibility from a vantage point of
> HINDSIGHT, not assumption... What I claim is that right now the COMP
> assumption is a critically inferior choice for very practical,
> empirically testable reasons. As such COMP is to be eschewed as an AGI
> design choice. Yes, COMP can be used to model brain behaviour and the
> science will be useful... but "COGNITION is COMP" is an invalid stance.
> It was 50 years ago and it daily grows more erroneous.
>
> RE: Uploading? IMHO this will be possible with my chips, but impossible
> with purely COMP chips. It will depend on the existence of imaging
> systems with sub-molecular-level spatial resolution and on a
> time resolution of the order of nanoseconds, worst case. In the interim
> it may be better to replace your brain with my chips... slowly... and then
> the rest of the hardware - slowly... You'd end up 100% inorganic, but
> you would NOT be a COMP entity. This is more doable in the shorter term.
>
> So I can think of multiple reasons 'why you can't...X'... Thanks for
> forcing me to verbalise the argument... in yet another way...
>
> regards,
>
> Colin Hales
>
>
> ======================================================
> Jesse Mazer wrote:
>> Colin Hales wrote:
>>
>>
>>> Hi!
>>> Assumptions, assumptions, assumptions... take a look. You said:
>>>
>>> "Why would you say that? Computer simulations can certainly produce results you didn't already know about, just look at genetic algorithms."
>>>
>>> OK, here's the rub... "You didn't already know about...".
>>> Just exactly 'who' (the 'you') is 'knowing' in this statement?
>>> You automatically put an external observer outside my statement.
>>>
>>
>> Of course, I was talking about the humans running the program, which I assumed is what you meant by "you" in the statement "If you could compute a scientist you would already know everything!" If there is no fundamental barrier to simple computer programs like genetic algorithms coming up with results we didn't expect or know about in advance, I see no fundamental reason why you couldn't have vastly more complex computer programs simulating entire human brains, and these programs would act just like regular biological brains, coming up with ideas that neither external observers watching them nor they themselves (assuming they are conscious just like us) knew about in advance.
>>
>>
>>> My observer is the knower. There is no other knower: The scientist who gets to know is the person I am talking about! There's nobody else around who gets to decide what is known... you put that into my story where there is none.
>>>
>>
>> Like I said, when you wrote "If you could compute a scientist you would already know everything", I assumed the "you" referred to a person watching the program run, not to the program itself. But if you want to eliminate this and just have one conscious being, I see no reason why the program itself couldn't be conscious, and couldn't creatively invent new ideas it didn't know before they occurred to it, just like a biological human scientist can do.
>>
>>
>>> A genetic algorithm (that is, a specific kind of computationalist manipulation of abstract symbols) cannot be a scientist. Even the 'no free lunch' theorem proves that without me adding anything....
>>>
>>
>> No it doesn't. The no free lunch theorem only applies when you sum over all possible fitness landscapes, most of which would look completely random (i.e. nearby points on the landscape are no more likely to have nearby fitness values than are distant points--see the diagram of a random fitness landscape in section 5.3 of the article at http://www.talkreason.org/articles/choc_nfl.cfm#nflt ), whereas if you're dealing with the subclass of relatively smooth fitness landscapes that describe virtually all the sorts of problems we're interested in (where being close to an optimal solution is likely to be better than being far from it), then genetic algorithms can certainly do a lot better than most other types of algorithms.
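>>
>> A toy sketch of that distinction (illustrative only, with a simple hill climber standing in for a full genetic algorithm): on a smooth landscape, exploiting neighbourhood structure reliably beats blind sampling; replace the landscape with uncorrelated random values and the advantage vanishes, which is all NFL actually asserts.
>>
>> import random
>>
>> DIM, EVALS = 10, 2000
>> def smooth(x):  # one broad optimum: nearby points have nearby fitness
>>     return -sum((xi - 0.5) ** 2 for xi in x)
>>
>> def random_search(f):  # blind sampling of the whole space
>>     return max(f([random.random() for _ in range(DIM)]) for _ in range(EVALS))
>>
>> def hill_climb(f):  # exploit local structure with small steps
>>     x = [random.random() for _ in range(DIM)]
>>     best = f(x)
>>     for _ in range(EVALS):
>>         cand = [xi + random.gauss(0, 0.05) for xi in x]
>>         if f(cand) > best:
>>             x, best = cand, f(cand)
>>     return best
>>
>> print(random_search(smooth), hill_climb(smooth))  # hill climbing wins here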
>>
>> Anyway, I didn't say that a genetic algorithm can "be a scientist", just that if "you" are a human observer watching it run, it can come up with things that you didn't already know. I think a very detailed simulation of a human brain at the synaptic level, of the kind that is meant when people discuss "mind uploading" (see http://en.wikipedia.org/wiki/Mind_uploading ) should in principle be capable of displaying all the same abilities as the biological brain it's a simulation of, including scientific abilities. Anyone who believes in scientific reductionism--that the behavior of complex systems is ultimately due to the sum of interactions of all its parts, which interact in lawlike ways--should grant that this sort of thing must be possible *in principle*, whether or not we are ever actually able to achieve it as a technical matter.
>>
>>
>>> but just to seal the lid on it....I would defy any computationalist artefact based on abstract symbol manipulation to come up with a "law of nature" ...
>>>
>>
>> I take it you reject the idea that the brain is an "artefact" whose large-scale behavior ultimately boils down to the interaction of all its constituent atoms, which interact according to laws which can be approximated arbitrarily well by a computer simulation? (if space and time are really continuous the approximation can never be perfect, but it can be arbitrarily close)
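>>
>> A toy illustration of "arbitrarily close" (standard numerical integration, nothing brain-specific assumed): shrinking the time step of a simulated lawlike system shrinks its error against the exact continuous solution without limit.
>>
>> import math
>>
>> def simulate_decay(dt, t_end=1.0, k=1.0):
>>     """Euler-step the continuous law dx/dt = -k*x from x(0) = 1."""
>>     x = 1.0
>>     for _ in range(round(t_end / dt)):
>>         x += -k * x * dt
>>     return x
>>
>> exact = math.exp(-1.0)
>> for dt in (0.1, 0.01, 0.001):
>>     print(dt, abs(simulate_decay(dt) - exact))  # error shrinks with dt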
>>
>>