RE: computer pain

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Fri, 15 Dec 2006 22:15:37 +1100

Colin,

I can understand that, for example, a computer simulation of a storm is not a storm,
because only a storm is a storm and will get you wet. But perhaps counterintuitively,
a model of a brain can be closer to the real thing than a model of a storm. We don't
normally see inside a person's head, we just observe his behaviour. There could be
anything in there - a brain, a computer, the Wizard of Oz - and as long as it pulled
the person's strings so that he behaved like any other person, up to and including
doing scientific research, we would never know the difference.

Now, we know that living brains can pull the strings to produce normal human
behaviour (and consciousness in the process, but let's look at the external
behaviour for now). We also know that brains follow the laws of physics: chemistry,
Maxwell's equations, and so on. Maybe we don't *understand* electric fields in the
sense that it may feel like something to be an electric field, or in some other as
yet unspecified sense, but we understand them well enough to predict their physical
effect on matter.

Hence, although it would be an enormous task to gather the relevant information and
crunch the numbers in real time, it should be possible to predict the electrical
impulses that come out of the skull to travel down the spinal cord and cranial
nerves and ultimately pull the strings that make a person behave like a person. If
we can do that, it should be possible to place the machinery which does the
predicting inside the skull, interfaced with the periphery so as to take the brain's
place, and no-one would know the difference because it would behave just like the
original.
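
(As an illustration of the sort of number-crunching meant above, here is a minimal
sketch in Python of a single leaky integrate-and-fire neuron stepped forward in
time from its starting parameters. The model, the parameter names, and the values
are all illustrative assumptions of this sketch, not a claim about how a real brain
simulation would be built.)

# A minimal sketch: a leaky integrate-and-fire neuron integrated with Euler steps.
# All parameter values below are illustrative assumptions, not measured data.

def simulate_lif(i_input_nA, dt_ms=0.1, t_total_ms=100.0, tau_ms=10.0,
                 v_rest_mV=-65.0, v_thresh_mV=-50.0, v_reset_mV=-65.0,
                 r_mohm=10.0):
    """Return (times, voltages, spike_times) for a toy integrate-and-fire cell."""
    steps = int(t_total_ms / dt_ms)
    v = v_rest_mV
    times, voltages, spikes = [], [], []
    for k in range(steps):
        t = k * dt_ms
        # Membrane equation of the toy cell: dV/dt = (-(V - V_rest) + R*I) / tau
        dv = (-(v - v_rest_mV) + r_mohm * i_input_nA) / tau_ms
        v += dv * dt_ms
        if v >= v_thresh_mV:      # threshold crossed: record a spike and reset
            spikes.append(t)
            v = v_reset_mV
        times.append(t)
        voltages.append(v)
    return times, voltages, spikes

if __name__ == "__main__":
    _, _, spike_times = simulate_lif(i_input_nA=2.0)   # 2 nA constant input current
    print(len(spike_times), "spikes in 100 ms of simulated time")

(The point of the sketch is only that, given equations and starting parameters, the
future behaviour can be computed step by step; scaling that up to a whole brain is
the "enormous task" referred to above.)
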
At which step above have I made a mistake?

Stathis Papaioannou

Colin Hales writes:
> > So you are saying the special something which causes
> > consciousness and which functionalism has ignored
> > is the electric field around the neuron/astrocyte.
> > But electric fields were well understood even a
> > hundred years ago, weren't they? Why couldn't
> > a neuron be simulated by something like a SPICE model?
> > Even if there is some new
> > physics involved, once the equations are worked
> > out then either with pencil and
> > paper or with the aid of a computer you should
> > be able to model the neuron: given
> > starting parameters, work out what it is going to
> > do in future. Do you disagree that this would be possible?
> >
> > Stathis Papaioannou
>
> Yes. I disagree.
>
> The problem is in the statement:
>
> > But electric fields were well understood even a
> > hundred years ago, weren't they?
>
> NO! They are _not_ understood (explained); they are only described. The
> descriptions do not say what an electric field is. They do not predict an
> electric field. They do not say WHY Maxwell's equations are what they are.
> There is no real explanation! No true 'understanding'.
>
> Nothing - I repeat - NOTHING is explained by science at this stage. All
> there is is a whole bunch of mathematical models describing how things
> 'appear' (e.g. quantum mechanics). This is not 'what they are'. Making
> something wave its arms about like a model does not create "what they are".
> If there are properties innate to the 'stuff' involved in a situation X,
> then waving stuff around like the model of situation X does not implement
> those properties.
>
> This is a fundamental blockage in thinking. Everybody in physics and maths
> thinks that equations drive things. Bollocks. They merely describe.
>
> I've just spent a month writing about this very thing. It's making me very
> grumpy and frustrated that something 300 years old and really obvious
> still hasn't sunk in. The universe is NOT made of model/descriptions of
> its appearances!
>
> It's made of something that, in the right circumstances, delivers
> appearances (to a suitably equipped agent made of it), and it behaves the
> way it does within those appearances when you look with the appearance
> generator thus implemented (a brain). Models of the appearances are just
> models of appearances! They are very predictive but completely devoid of
> all causality. Making a machine run as per the models won't do it.
>
> It doesn't mean we can't achieve what we want in an artefact (pain) - it
> just means that functionalist dreaming isn't enough.
>
> I found this today:
> "The Explicit Animal" Raymond Tallis. He goes through the issues really
> well and trashes functionalism properly.
>
> My preoccupation with electric fields is that they have correlated
> perfectly with everything I have thrown at them for 5 years, and they
> predict everything. The trick is to understand the kind of universe that
> expresses something that looks like electric fields run by Maxwell's
> equations - NOT to run models according to Maxwell's equations.
>
> cheers,
>
> colin


Received on Fri Dec 15 2006 - 06:15:55 PST
