Re: Statistical Measure, does it matter?

From: Stathis Papaioannou <stathisp.domain.name.hidden>
Date: Sun, 1 Apr 2007 13:48:55 +1000

On 3/31/07, John Mikes <jamikes.domain.name.hidden> wrote:

The non-standard part of Bruno's comp, as I see it, is to accept that
> > computation can lead to thought but to reject the physical supervenience
> > theory, i.e. that computation requires certain physical processes to
> > take place in order to "happen". But that question aside, computationalism
> > depends on the idea that (a) it is *in principle* possible to reproduce all
> > the physical processes in our brain using a computer, to an arbitrary degree
> > of precision, and (b) such a reproduction, yielding by definition a
> > functionally identical brain, also yields a functionally identical mind -
> > i.e., as opposed to a zombie. Roger Penrose says that (a) is false; John
> > Searle and religious people say that even if (a) is true, (b) is false. I
> > tend to think that (a) and (b) are both true, but I am not completely sure.
> >
> Here we go:
> i.e. that computation requires certain physical processes to take place in
> order to "happen".
> What else can we 'imagine'? Ideation -- of whom? Mine? Yours? I would
> not recite "of the universe" because HOW do we have access to a conscious
> process of the totality with our limited mind?
>

I don't quite understand this question.

...(a) it is *in principle* possible to reproduce all the physical processes
> in our brain using a computer,...
> In principle EVERYTHING is possible. Look at the discussions on this list.
>
>

My "in principle" is somewhat narrower: I meant physically possible, given
the laws of our universe, without recourse to multiverses etc.

...(b) such a reproduction, yielding by definition a functionally identical
> brain,...
> If a 'model' is identical in all respects ("functionally") it is not a
> model, it is the THING itself. So we are in this case playing with words.
> NOTHING can be completely identical in this world, because everything is the
> product of ALL the actual circumstances co-functioning in the construction
> of the 'thing' (process). And ALL the circumstances never repeat
> themselves identically: that would be a merry-go-round world loop which we
> have so far not experienced. We can find similarity in ALL aspects we
> observe, but that does not include the complete totality. We like to call
> such similarity an 'identity'.
> So I do not argue against your finding a) and b) possible, but does it
> make sense?
>

If we could model a hurricane on a computer the simulation would not destroy
houses, but if the model were good enough it would tell us which houses a
real hurricane would destroy. Similarly, if we could model a brain, we would
be in a position to know how a person would behave in a given situation. We
could use the computer model to control the person's muscles and no-one
would realise he wasn't a "real" person, i.e. we would have at least a
zombie.
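The simulation-versus-prediction point can be made concrete with a toy model. Everything below is invented for illustration (the wind-field formula, the damage threshold, the house positions); the point is only that the program predicts which houses fail without any real house being touched:

```python
# A toy "hurricane" model: wind speed decays with distance from the eye,
# and a house is predicted destroyed if local wind exceeds a threshold.
# All numbers are invented placeholders, not meteorology.
import math

def wind_at(house, eye, peak=150.0, falloff=0.05):
    """Modelled wind speed (mph) at a house, given the storm eye position."""
    dx, dy = house[0] - eye[0], house[1] - eye[1]
    return peak * math.exp(-falloff * math.hypot(dx, dy))

def predict_destroyed(houses, eye, threshold=110.0):
    """Return the houses whose predicted wind exceeds the damage threshold."""
    return [h for h in houses if wind_at(h, eye) > threshold]

houses = [(1.0, 1.0), (5.0, 5.0), (40.0, 40.0)]
eye = (0.0, 0.0)
print(len(predict_destroyed(houses, eye)))  # -> 1 (only the nearest house)
```

The simulated storm destroys nothing; it only tells us what the real one would do, which is the sense in which a good enough brain model would tell us how the person would behave.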

2. Replaced? meaning one takes out that goo of neurons, proteins and other
> > > tissue-stuff with its blood supply and replace the cavity (no matter how
> > > much bigger or smaller) by a (watch it): *digital* computer, "appropriately
> > > configured" and electric flow in it. For the quale-details see the par #1.
> > >
> > > Each neuron is made up of macromolecules in a watery medium. The
> > macromolecules follow the laws of physics: there are equations which
> > describe how phospholipid bilayers form membranes, how proteins embedded in
> > these membranes change conformation when ligands bind to them, how these
> > conformation changes result in ion fluxes and changes in transmembrane
> > potential, and so on. So if you ignore for the moment the technical
> > difficulties involved in working all this out and implementing it, it should
> > be possible to program a computer to behave just like a biological brain,
> > receiving afferent impulses from sense organs and sending efferent impulses
> > to muscles, which would result in a being whose behaviour is
> > indistinguishable from that of an intact human. The only way around this
> > conclusion is if the behaviour of the brain depends on physical processes
> > which are not computable, like Penrose's postulated quantum gravity effects.
> > This is possible, but there isn't really any good evidence supporting it, as
> > far as I'm aware.
> >
>
> 3-29 inserts:
> ...The macromolecules follow the laws of physics:...
> NONONONONO! Certain experiences with macromolecules are described, in our
> incomplete views, by certain (statistical? probabilistic?) findings in the
> physical domain. Macro- or non-macromolecules, atoms, their parts, show
> behavior in our 'slanted', 'partial' observation, which has been matched to
> calculations drawn upon similarly era-restricted observational explanatory
> calculations (physics). I did not work with atoms or molecules when I made
> my macromolecules and their applications. I worked with masses that behaved.
> Then I put them into a reductionist analysis and tried to 'match' the
> numerical data to those in the books. I made 'bilayers', 'ligands'. Indeed
> I got responses which I described as performing as expected. And got the
> patents.
>

But you wouldn't have been granted the patents if your experiments were not
repeatable. You don't even need "science": a precise description of what
happens when you mix substance A with substance B under physical conditions
C will suffice. Similarly, for each part of the brain, a precise description
of what happens when, for example, a certain neurotransmitter is released
into a certain synapse, will allow you eventually to predict how the whole
brain will behave, amazingly difficult though that task would be.
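A minimal sketch in that spirit: a leaky integrate-and-fire neuron, a textbook simplification of the membrane dynamics described above. All constants here are illustrative stand-ins, not measured biophysical values:

```python
# Leaky integrate-and-fire neuron: the membrane potential V obeys
#   tau * dV/dt = -(V - V_rest) + R * I(t)
# and a spike is recorded when V crosses a threshold, after which V resets.
# Constants are illustrative placeholders, not measured values.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-70.0, v_reset=-75.0,
                 v_thresh=-55.0, r_m=10.0):
    """Integrate the membrane equation over an input current trace.

    Returns the list of time-step indices at which the model neuron spiked.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:      # threshold crossed: record a spike
            spikes.append(step)
            v = v_reset        # and reset the membrane potential
    return spikes

# A steady drive produces regular firing; no drive produces no spikes.
print(len(simulate_lif([2.0] * 1000)))  # several spikes over 100 ms
print(simulate_lif([0.0] * 1000))       # -> []
```

This is of course a caricature of a real neuron, but it illustrates the claim: once the input-output behaviour of each component is described precisely, the description itself is a program.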

...it should be possible to program a computer to behave just like a
> biological brain,...
> How does a 'biological brain' work? We - so far - have extracted some
> behavioral deductions into the model we have about the goo-in-the-skull and
> call it a "total" - even with those unknown parts which come from outside
> the matter-reactions we have access to - or even from so far undiscovered
> aspects/factors. And I would not call it "biological", which is a limited
> model of the functionality. One difficulty is the baggage in the words we
> are restricted to in our historically developed language.
>

With all the scrutiny the brain has received, you would expect some evidence
of behaviour in the brain that biochemistry cannot explain. There isn't, any
more than there is for any other organ in the body. Certainly there are many
things in biology that we can't yet explain, but the sheer complexity of the
biochemical processes makes this inevitable. I can't explain exactly how my
computer works, but that doesn't mean it must contain magical processes.

... it should be possible ...
> Amen. It should. I would be happy. (But, alas, it isn't)
> ... indistinguishable from that of an intact human....
> looks to me like the right sentence, just let it be known by what
> model-characteristics (qualia) do we distinguish? Or 'can' we distinguish? In
> what respect? First: do we KNOW all details of an 'intact human' full
> process? I think we know only a part of it, the ones which ARE accessible to
> our presently applied observational power and knowledge base. Just compare
> to the 'intact' human as described in 1000AD or 3000BC, etc. - not in
> 2300AD.
>

You build your model of the brain, then you test it to see if it behaves
like a real brain. If it doesn't, you go back and try to refine your model.
When you can't tell the difference between the model and the real brain you
have succeeded.
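That build/test/refine procedure is the ordinary modelling loop, and it can be sketched schematically. The "real" system, the candidate model, and the error measure below are all stand-ins for illustration:

```python
# The build/test/refine loop in schematic form: a candidate model with one
# free parameter is nudged until its outputs are indistinguishable from the
# reference system on the test inputs. All details are illustrative stand-ins.

def real_system(x):
    return 3.0 * x * x           # stand-in for the system being modelled

def make_model(k):
    return lambda x: k * x * x   # candidate model with one free parameter

def refine(k, inputs, tolerance=1e-6, rate=0.01):
    """Adjust k until model and system agree on all test inputs."""
    while True:
        errors = [real_system(x) - make_model(k)(x) for x in inputs]
        if all(abs(e) < tolerance for e in errors):
            return k             # indistinguishable: we have succeeded
        # Otherwise refine: nudge k in the direction that reduces the error.
        k += rate * sum(e * x * x for e, x in zip(errors, inputs))

fitted = refine(k=1.0, inputs=[0.5, 1.0, 1.5])
print(round(fitted, 3))  # -> 3.0
```

The loop only certifies behavioural indistinguishability on the tests you ran, which is exactly the standard the paragraph above proposes for a brain model.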

...if you ignore for the moment the technical difficulties involved in
> working all this out and implementing it,
> in other words it is impossible, but we can think about it. No, in my
> practical thinking only the possible (everything though it may be) is
> usable, not input which we presume is not feasible (possible?).
> The difficulties are not 'technical', they are ontological. Not fitting into
> circumstances we 'even suppose'.
>

Do you acknowledge that there is a difference between the physically
impossible and the merely practically impossible?

> 3. "you" - and who should that be? can we separate our living
> > > brain (I mean with all its functionality) from 'YOU', the self, the
> > > person, or call it the simulacrum of yourself? What's left? Is there "me"
> > > and "my brain"? As I like to call it: the brain is the 'tool' of my
> > > mind, mind is pretty unidentified, but - is close to my-self, some call it
> > > life, some consciousness, - those items we like to argue about because none
> > > of us knows what we are talking about (some DO THINK they know, but only
> > > something and for themselves).
> > >
> > >
> > I find it hard to define consciousness, but I know what it is, and so
> > does everyone who has it.
> >
>
> And everyone (not really) knows it personalized and differently. Scholars:
> slanted to their theoretical needs, others maybe to their emotions.
>
> 4. "feel" ----????---- who/what? the transistors?
> > > (Let me repeat: I am not talking about Transistor Stathis).
> > >
> > >
> > You could equally well ask, do the proteins/ phospholipids/ nucleic
> > acids etc. feel? Apparently, they do. If your brain stops working or is
> > seriously damaged, you stop feeling.
> >
>
> I am not speaking about how 'parts' feel, rather how the complexity acts.
> The brain is a tool cooperating in our mental factor, so if the tool is
> damaged certain activities of it may be missing, or perform inadequately.
>

That's right; but are you suggesting that in addition to the physical parts,
there are other parts? That is, that even though a brain is perfectly intact
physically and seems to be functioning normally to an outside observer, it
might yet not be conscious because it lacks some non-physical component,
such as a soul? There is no evidence for this.

> *-SP:
> > > Bruno goes on to show that this entails there is no separate physical
> > > reality by means of the UDA, but we can still talk about computationalism -
> > > the predominant theory in cognitive science - without discussing the UDA.
> > > And in any case, the ideas Brent and I have been discussing are still
> > > relevant if computationalism is wrong and (again a separate matter) there is
> > > only one universe.
> > > Stathis Papaioannou-*
> > >
> > > <JM>
> > > Yes, "we today" KNOW about only 1 universe. But we believe in a
> > > physical reality what we 'feel', 'live it' and hold as our 'truth' as well.
> > > Even those 'more advanced' minds saying they don't believe in it, cry out
> > > (OMIGOD!) when "Dr. Johnson's stone" hurts their toe in the shoe.
> > >
> > > I like to draw comparisons between "what we know today" and what we
> > > knew 1000, 3000, or 5000 years ago and ask: what will we 'know' just 500
> > > years ahead in the future by a continuing epistemic enrichment? (If humanity
> > > survives that long).
> > > Please, readers, just list the answers alphabetically.
> > >
> > >
> > I don't know the answer. Maybe next year there will be some discovery
> > which will have us all laughing at the idea that computers can be conscious,
> > but at present we can only go on the information available to us, and try to
> > keep an open mind.
> >
>
> Stathis, I may cross my fingers, but would not hold my breath. IMO we
> 'know' a little part; the unknown may be the essential and overwhelming part.
> Good luck to humanity in becoming smarter before going extinct.
>

Stathis Papaioannou

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list-unsubscribe.domain.name.hidden
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---
Received on Sat Mar 31 2007 - 23:49:06 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:13 PST