RE: computer pain

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Sat, 16 Dec 2006 20:44:09 +1100

Colin Hales writes:
> Stathis wrote:
> I can understand that, for example, a computer simulation of a storm is
> not a storm, because only a storm is a storm and will get you wet. But
> perhaps counterintuitively, a model of a brain can be closer to the real
> thing than a model of a storm. We don't normally see inside a person's
> head, we just observe his behaviour. There could be anything in there - a
> brain, a computer, the Wizard of Oz - and as long as it pulled the
> person's strings so that he behaved like any other person, up to and
> including doing scientific research, we would never know the difference.
>
> Now, we know that living brains can pull the strings to produce normal
> human behaviour (and consciousness in the process, but let's look at the
> external behaviour for now). We also know that brains follow the laws of
> physics: chemistry, Maxwell's equations, and so on. Maybe we don't
> *understand* electrical fields in the sense that it may feel like
> something to be an electrical field, or in some other as yet unspecified
> sense, but we understand them well enough to predict their physical effect
> on matter. Hence, although it would be an enormous task to gather the
> relevant information and crunch the numbers in real time, it should be
> possible to predict the electrical impulses that come out of the skull to
> travel down the spinal cord and cranial nerves and ultimately pull the
> strings that make a person behave like a person. If we can do that, it
> should be possible to place the machinery which does the predicting inside
> the skull interfaced with the periphery so as to take the brain's place,
> and no-one would know the difference because it would behave just like the
> original.
>
> At which step above have I made a mistake?
>
> Stathis Papaioannou
>
> -----------------------
> I'd say it's here...
>
> "and no-one would know the difference because it would behave just like
> the original"
>
> But for a subtle reason.
>
> The artefact has to be able to cope with exquisite novelty like we do.
> Models cannot do this because as a designer you have been forced to define
> a model that constrains all possible novelty to be that which fits your
> model for _learning_. Therein lies the fundamental flaw. Yes... at a given
> level of knowledge you can define how to learn new things within the
> knowledge framework. But when it comes to something exquisitely novel, all
> that will happen is that it'll be interpreted into the parameters of how
> you told it to learn things... this will impact in a way the artefact
> cannot handle. It will behave differently and probably poorly.
>
> It's the zombie thing all over again.
>
> It's not _knowledge_ that matters. It's _learning_ new knowledge. That's
> what functionalism fails to handle. Being grounded in a phenomenal
> representation of the world outside is the only way to handle arbitrary
> levels of novelty. No phenomenal representation? = You are "model-bound"
> and grounded, in effect, in the phenomenal representation of your
> model-builders, who are forced to predefine all novelty handling in an "I
> don't know that" functional module. Something you cannot do without
> knowing everything a priori! If you already know that, you are god, so why
> are you bothering?
>
> Say you bring an artefact X into existence. X may behave exactly like a
> human Y in all the problem domains you used to define your model. Then you
> expose both to novelty nobody has seen, including you.... and that is
> where the two will differ. The human Y will do better every time. You
> can't program qualia. You have to have them and you can't do without them
> in a 'general intelligence' context.

I understand your conclusion that a model of a brain won't be able to handle
novelty like a real brain, but I am trying to understand the nuts and bolts of
how the model is going to fail. For example, you can say that perpetual motion
machines are impossible because they disobey the first or second law of
thermodynamics, but you can also look at a particular design of such a machine
and point out where the moving parts are going to slow down due to friction.
So, you have the brain and the model of the brain, and you present them both
with the same novel situation, say an auditory stimulus. They both process the
stimulus and produce a response in the form of efferent impulses which move
the vocal cords and produce speech; but the brain says something clever while
the computer declares that it is lost for words. The obvious explanation is that
the computer model is not good enough, and maybe a better model would
perform better, but I think you would say that *no* model, no matter how good,
could match the brain.

Now, we agree that the brain contains matter which follows the laws of physics.
Before the novel stimulus is applied the brain is in configuration x. The stimulus
essentially adds energy to the brain in a very specific way, and as a result of this
the brain undergoes a very complex sequence of physical changes, ending up in
configuration y, in the process outputting energy in a very specific way which
causes the vocal cords to move. The important point is, in the transformations
x->y the various parts of the brain are just working like parts of an elaborate
Rube Goldberg mechanism. There can be no surprises, because that would be
magic: two positively charged entities suddenly start attracting each other, or
the hammer hits the pendulum and no momentum is transferred. If there is magic -
actually worse than that, unpredictable magic - then it won't be possible to model
the brain or the Rube Goldberg machine. But, barring magic, it should be possible
to predict the physical state transitions x->y and hence you will know what the
motor output to the vocal cords will be and what the vocal response to the novel
stimulus will be.
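
To make the point concrete, here is a toy sketch in Python (with an arbitrary
made-up update rule standing in for the physics, so nothing brain-specific is
being claimed): if the transition rule is lawful and known, a model started in
the same configuration reproduces the output for any stimulus, including one
neither system has encountered before.

# Toy sketch: an arbitrary deterministic rule stands in for "the laws of physics".
def physics(state, stimulus):
    """Stand-in for the lawful transition x -> y; returns (new_state, output)."""
    new_state = (31 * state + stimulus) % 997   # made-up deterministic rule
    output = new_state % 26                     # "efferent impulses" to the vocal cords
    return new_state, output

class BrainModel:
    """A model implementing the same transition rule as the system it copies."""
    def __init__(self, state):
        self.state = state
    def respond(self, stimulus):
        self.state, output = physics(self.state, stimulus)
        return output

brain_state = 42                     # configuration x
model = BrainModel(brain_state)      # model initialised to the same configuration

novel_stimulus = 123456789           # a stimulus neither has "seen" before
brain_state, brain_output = physics(brain_state, novel_stimulus)
model_output = model.respond(novel_stimulus)

assert brain_output == model_output  # no divergence, barring unpredictable magic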

Classical chaos and quantum uncertainty may make it difficult or impossible to
predict what a particular brain will do on a particular day, but they should not be
a theoretical impediment to modelling a generic brain which behaves in an
acceptably brain-like manner. Only unpredictable magical effects would prevent
that.

Stathis Papaioannou