RE: computer pain

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Sun, 17 Dec 2006 17:47:30 +1100

Colin Hales writes:
> > I understand your conclusion, that a model of a brain
> > won't be able to handle novelty like a real brain,
> > but I am trying to understand the nuts and
> > bolts of how the model is going to fail. For
> > example, you can say that perpetual motion
> > machines are impossible because they disobey
> > the first or second law of thermodynamics,
> > but you can also look at a particular design of such a
> > machine and point out where the moving parts are going
> > to slow down due to friction.
> >
> > So, you have the brain and the model of the brain,
> > and you present them both with the same novel situation,
> > say an auditory stimulus. They both process the
> > stimulus and produce a response in the form of efferent
> > impulses which move the vocal cords and produce speech;
> > but the brain says something clever while the computer
> > declares that it is lost for words. The obvious explanation
> > is that the computer model is not good enough, and maybe
> > a better model would perform better, but I think you would
> > say that *no* model, no matter how good, could match the brain.
> >
> > Now, we agree that the brain contains matter which
> > follows the laws of physics.
> > Before the novel stimulus is applied the brain
> > is in configuration x. The stimulus essentially adds
> > energy to the brain in a very specific way, and as a
> > result of this the brain undergoes a very complex sequence
> > of physical changes, ending up in
> > configuration y, in the process outputting energy
> > in a very specific way which causes the vocal cords to move.
> > The important point is, in the transformations
> > x->y the various parts of the brain are just working
> > like parts of an elaborate Rube Goldberg mechanism.
> > There can be no surprises, because that would be
> > magic: two positively charged entities suddenly
> > start attracting each other, or
> > the hammer hits the pendulum and no momentum
> > is transferred. If there is magic -
> > actually worse than that, unpredictable magic -
> > then it won't be possible to model
> > the brain or the Rube Goldberg machine. But, barring magic,
> > it should be possible to predict the physical state
> > transitions x->y and hence you will know
> > what the motor output to the vocal cords will be and
> > what the vocal response to the
> > novel stimulus will be.
> >
> > Classical chaos and quantum uncertainty may make it
> > difficult or impossible to
> > predict what a particular brain will do on a
> > particular day, but they should not be a theoretical
> > impediment to modelling a generic brain which behaves in an
> > acceptably brain-like manner. Only unpredictable magical
> > effects would prevent that.
> >
> > Stathis Papaioannou
>
> I get where you're coming from. The problem is, what I am going to say
> will, in your eyes, put the reason into the class of 'magic'. I am quite
> used to it, and don't find it magical at all....
>
> The problem is that the distal objects that are the subject about which
> the brain is informing itself are literally, physically involved in the
> process. You can't model them, because you don't know what they are. All
> you have is sensory measurements and they are local and
> ambiguous....that's why you are doing the 'qualia dance' with EM fields -
> to 'cohere' with the external world. This non-locality is the same
> non-locality observed in QM and makes gravity 'action at a distance'
> possible. ..... I've been thinking about this for so long I actually have
> the reverse problem now...I find 'locality' really weird! I find 'extent'
> really hard to fathom. The non-locality is also predicted as the solution
> to the 'unity' issue.
>
> The empirical testing to verify this non-locality is the real target of my
> eventual experimentation. My model and the real chips will behave
> differently, it is predicted, because of the involvement of the 'external
> world' that is not available to the model.
>
> I hope to be able to 'switch off' the qualia whilst holding everything else
> the same. The effects on subsequent learning will be indicative of the
> involvement of the qualia in learning. What the external world 'looks
> like' in the brain is 'virtual circuits' - average EM channels (regions of
> low potential that are like a temporary 'wire') down which chemistry can
> flow to alter synaptic weights and rearrange channel positions/rafting in
> the membrane and so on.
>
> So I guess my proclamations about models are all contingent on my own
> view of things...and I could be wrong. Only time will tell. I have good
> physical grounds to doubt that modelling can work and I have a way of
> testing it. So at least it can be resolved some day.

I'm not sure of the details of your experiments, but wouldn't the most direct
way to prove what you are saying be to isolate just that physical process
which cannot be modelled? For example, if it is EM fields, set up an appropriately
brain-like configuration of EM fields, introduce some environmental input, then
show that the response of the fields deviates from what Maxwell's equations
would predict.
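
For concreteness (and purely as an illustrative sketch of the comparison I have
in mind, not a description of your actual apparatus), the test could amount to
something like the following Python fragment. Everything in it - the 1-D grid,
the injected pulse standing in for the "environmental input", and the file of
measured field values - is an assumption of mine for illustration only:

    # Sketch: evolve a 1-D field with a standard finite-difference (FDTD)
    # integration of Maxwell's equations, then compare the prediction with
    # hypothetical laboratory measurements of the same field.
    import numpy as np

    N, STEPS, S = 200, 500, 0.5      # grid points, time steps, Courant number
    Ez = np.zeros(N)                 # electric field (normalised units)
    Hy = np.zeros(N)                 # magnetic field (normalised units)

    for t in range(STEPS):
        Hy[:-1] += S * (Ez[1:] - Ez[:-1])          # update H from curl of E
        Ez[1:]  += S * (Hy[1:] - Hy[:-1])          # update E from curl of H
        Ez[N // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # injected input pulse

    measured = np.load("measured_Ez.npy")          # hypothetical lab data
    deviation = np.max(np.abs(measured - Ez))      # deviation from Maxwell
    print("max |measured - predicted| field deviation:", deviation)

A deviation persistently larger than the measurement error bars, in a
brain-like configuration but not in a control configuration, would be the kind
of result that directly supports your claim.
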
Stathis Papaioannou