Re: Statistical Measure, does it matter?

From: John Mikes <jamikes.domain.name.hidden>
Date: Fri, 30 Mar 2007 17:45:37 -0400

Dear Stathis, sorry for the delay; I had to 'save' most of this response and
finish it later. Of course that will show in the inadequacy of the last
part: a second guess never quite matches the first.
*
I tried to direct that overgrown discussion back to Earth; you went up into
the clouds again.
Let me, please, interject in ITALICS into the copy of our post below.
John

On 3/29/07, Stathis Papaioannou <stathisp.domain.name.hidden> wrote:
>
>
>
> On 3/29/07, John M <jamikes.domain.name.hidden > wrote:
> >
> > Stathis:
> > let me keep only your reply-part and ask my question(s):
> >
> > ----- Original Message -----
> > *From:* Stathis Papaioannou <stathisp.domain.name.hidden>
> > *To:* everything-list.domain.name.hidden
> > *Sent:* Sunday, March 25, 2007 7:34 PM
> > *Subject:* Re: Statistical Measure, does it matter?
> >
> >
> >
> >
> > On 3/25/07, Mark Peaty <mpeaty.domain.name.hidden > wrote:
> > >
> > > SKIP - Sorry, Mark, this goes to Stathis, who wrote:
> > >
> > > *-SP:
> > >
> > Standard computationalism is just the theory that your brain could be
> > replaced with an appropriately configured digital computer and you would not
> > only act the same, you would also feel the same. - *
> >
> > <JM>
> > I am not implying that you accept it; I am just scribbling down my remarks on
> > the topic - maybe in accordance with your opinion.
> >
> > 1. Standard? Meaning our embryonic-level (first model) 0-1 binary
> > digital mechanism? Do we really believe that our "human complexity" is that
> > simplistic and ends at the inner surface of our skull? Even there (locally
> > restricted) we know only a bit of what our "thinking mind" is capable
> > of doing. Some of these features are reproduced in binary digital
> > churnings, and that is the standard: a robot of limited capabilities (though
> > maybe in certain aspects even exceeding the limits of our human activities).
> > I think 'comp', as Bruno uses the word and compares it to an L-machine, is
> > not such a 'standard': it may be "analogous" or, if digital, of
> > unlimited variance (infinitary, not only binary), and not even simulable
> > within today's epistemology.
> >
> > The non-standard part of Bruno's comp, as I see it, is to accept that
> computation can lead to thought but to reject the physical supervenience
> theory, i.e. that computation requires certain physical processes to take
> place in order to "happen". But that question aside, computationalism
> depends on the idea that (a) it is *in principle* possible to reproduce all
> the physical processes in our brain using a computer, to an arbitrary degree
> of precision, and (b) such a reproduction, yielding by definition a
> functionally identical brain, also yields a functionally identical mind -
> i.e., as opposed to a zombie. Roger Penrose says that (a) is false; John
> Searle and religious people say that even if (a) is true, (b) is false. I
> tend to think that (a) and (b) are both true, but I am not completely sure.
>
Here we go:
...i.e. that computation requires certain physical processes to take place in
order to "happen"...
What else can we 'imagine'? Ideation -- of whom? Mine? Yours? I would not
recite "of the universe", because HOW do we have access to a conscious
process of the totality with our limited mind?
*
...(a) it is *in principle* possible to reproduce all the physical processes
in our brain using a computer,...
In principle EVERYTHING is possible. Look at the discussions on this list.
...(b) such a reproduction, yielding by definition a functionally identical
brain,...
If a 'model' is identical in all respects ("functionally"), it is not a
model, it is the THING itself, so in this case we are playing with words.
NOTHING can be completely identical in this world, because everything is the
product of ALL the actual circumstances co-functioning in the construction
of the 'thing' (process). And ALL the circumstances never repeat themselves
identically: that would be a merry-go-round loop of a world, which we have
so far not experienced. We can find similarity in ALL the aspects we
observe, but that does not include the complete totality. We just like to
call such similarity an 'identity'.
So I do not argue against your finding (a) and (b) possible, but does it
make sense?
*******************

> 2. Replaced? meaning one takes out that goo of neurons, proteins and other
> > tissue-stuff with its blood supply and replaces the cavity (no matter how
> > big or small) by a (watch it): *digital* computer, "appropriately
> > configured", with electric flow in it. For the quale-details see par #1.
> >
> > Each neuron is made up of macromolecules in a watery medium. The
> macromolecules follow the laws of physics: there are equations which
> describe how phospholipid bilayers form membranes, how proteins embedded in
> these membranes change conformation when ligands bind to them, how these
> conformation changes result in ion fluxes and changes in transmembrane
> potential, and so on. So if you ignore for the moment the technical
> difficulties involved in working all this out and implementing it, it should
> be possible to program a computer to behave just like a biological brain,
> receiving afferent impulses from sense organs and sending efferent impulses
> to muscles, which would result in a being whose behaviour is
> indistinguishable from that of an intact human. The only way around this
> conclusion is if the behaviour of the brain depends on physical processes
> which are not computable, like Penrose's postulated quantum gravity effects.
> This is possible, but there isn't really any good evidence supporting it, as
> far as I'm aware.
>
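
As a rough illustration of the "equations which describe ... changes in
transmembrane potential" mentioned above, here is a minimal sketch in Python of
a leaky integrate-and-fire neuron. It is a deliberately crude stand-in for the
full phospholipid/ligand/ion-channel picture, and every parameter value below is
an illustrative assumption rather than measured biophysics; the point is only
that a membrane potential can be stepped forward on a digital computer.

# Minimal sketch (illustrative parameters only, not measured biophysics):
# a leaky integrate-and-fire neuron stepped forward with explicit Euler,
# standing in for the membrane-potential equations mentioned above.

def simulate_lif(input_current_nA, dt_ms=0.1, t_total_ms=100.0):
    """Simulate one leaky integrate-and-fire cell; return its spike times (ms)."""
    v_rest = -70.0       # assumed resting potential, mV
    v_reset = -75.0      # assumed post-spike reset potential, mV
    v_threshold = -54.0  # assumed firing threshold, mV
    tau_m = 10.0         # assumed membrane time constant, ms
    r_m = 10.0           # assumed membrane resistance, MOhm

    v = v_rest
    spike_times = []
    for step in range(int(round(t_total_ms / dt_ms))):
        t = step * dt_ms
        # dV/dt = (-(V - V_rest) + R_m * I) / tau_m, integrated with one Euler step
        v += (-(v - v_rest) + r_m * input_current_nA) / tau_m * dt_ms
        if v >= v_threshold:       # crude stand-in for an action potential
            spike_times.append(t)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    spikes = simulate_lif(input_current_nA=2.0)
    print(len(spikes), "spikes in 100 ms at a constant 2 nA input")

A real whole-brain simulation would replace this toy rule with full channel
kinetics (Hodgkin-Huxley and beyond), and that is exactly where Penrose's
objection about non-computable physics would bite, if it turned out to be
correct.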

3-29 inserts:
...The macromolecules follow the laws of physics:...
NONONONONO! Certain experiences with macromolecules are described, in our
incomplete views, by certain (statistical? probabilistic?) findings in the
physical domain. Macro- or non-macromolecules, atoms, their parts, show
behavior in our 'slanted', 'partial' observation, which has been matched to
calculations drawn upon similarly era-restricted observational explanatory
calculations (physics). I did not work with atoms or molecules when I made
my macromolecules and their applications. I worked with masses that behaved.
Then I put them into a reductionist analysis and tried to 'match' the
numerical data to those in the books. I made 'bilayers', 'ligands'. Indeed I
got responses which I described as performing as expected. And got the
patents.
*
...it should be possible to program a computer to behave just like a
biological brain,...
How does a 'biological brain' work? We have - so far - extracted some behavioral
deductions into the model we have of the goo-in-the-skull and call it a
"total" - even with those unknown parts which come from outside the
matter-reactions we have access to, or even from so far undiscovered
aspects/factors. And I would not call it "biological", which is a limited
model of the functionality. One difficulty is the baggage in the words we
are restricted to in our historically developed language.
... it should be possible ...
Amen. It should. I would be happy. (But, alas, it isn't.)
... indistinguishable from that of an intact human....
This looks to me like the right sentence; just let it be known by what
model-characteristics (qualia) we distinguish. Or 'can' we distinguish? In
what respect? First: do we KNOW all details of an 'intact human' in full
process? I think we know only a part of it, the ones which ARE accessible to
our presently applied observational power and knowledge base. Just compare
to the 'intact' human as described in 1000 AD or 3000 BC, etc. - not in
2300 AD.
...if you ignore for the moment the technical difficulties involved in
working all this out and implementing it,
In other words it is impossible, but we can think about it. No: in my
practical thinking only the possible ('everything' though it may be) is
usable, not an input which we presume is not feasible (possible?).
The difficulties are not 'technical', they are ontological - not fitting
into circumstances we can 'even suppose'.



3. "you" - and who should that be? can we separate our living
> > brain (I mean with all its functionality) from 'YOU', the self, the
> > person, or call it the simulacrum of yourself? What's left? Is there "me"
> > and "my brain"? As I like to call it: the brain is the 'tool' of my
> > mind, mind is pretty unidentified, but - is close to my-self, some call it
> > life, some consciousness, - those items we like to argue about because none
> > of us knows what we are talking about (some DO THINK they know, but only
> > something and for themselves).
> >
> >
> I find it hard to define consciousness, but I know what it is, and so does
> everyone who has it.
>

And everyone (well, not really everyone) knows it in a personalized and
different way. Scholars: slanted to their theoretical needs; others maybe to
their emotions.

 4. "feel" ----????---- who/what? the transistors?
> > (Let me repeat: I am not talking about Transistor Stathis).
> >
> >
> You could equally well ask, do the proteins/ phospholipids/ nucleic acids
> etc. feel? Apparently, they do. If your brain stops working or is
> seriously damaged, you stop feeling.
>

I am not speaking about how the 'parts' feel, but rather about how the
complexity acts. The brain is a tool cooperating in our mental factor, so if
the tool is damaged, certain of its activities may be missing, or may perform
inadequately.

*-SP:
> > Bruno goes on to show that this entails there is no separate physical
> > reality by means of the UDA, but we can still talk about computationalism -
> > the predominant theory in cognitive science - without discussing the UDA.
> > And in any case, the ideas Brent and I have been discussing are still
> > relevant if computationalism is wrong and (again a separate matter) there is
> > only one universe.
> > Stathis Papaioannou-*
> >
> > <JM>
> > Yes, "we today" KNOW about only 1 universe. But we believe in a physical
> > reality which we 'feel', 'live', and hold as our 'truth' as well. Even
> > those 'more advanced' minds saying they don't believe in it, cry out
> > (OMIGOD!) when "Dr. Johnson's stone" hurts their toe in the shoe.
> >
> > I like to draw comparisons between "what we know today" and what we knew
> > 1000, 3000, or 5000 years ago and ask: what will we 'know' just 500 years
> > ahead in the future by a continuing epistemic enrichment? (If humanity
> > survives that long).
> > Please, readers, just list the answers alphabetically.
> >
> >
> I don't know the answer. Maybe next year there will be some discovery
> which will have us all laughing at the idea that computers can be conscious,
> but at present we can only go on the information available to us, and try to
> keep an open mind.
>

Stathis, I may cross my fingers, but I would not hold my breath. IMO we 'know'
only a little part; the unknown may be the essential and overwhelming part.
Good luck to humanity in becoming smarter before going extinct.

Stathis Papaioannou
>
> John Mike
>
>

Received on Fri Mar 30 2007 - 17:46:03 PDT
