Page 2 of Yurtsever (relates to Schmidhuber II implies FTL communications)

From: Osher Doctorow <osher.domain.name.hidden>
Date: Thu, 5 Sep 2002 18:38:25 -0700

From: Osher Doctorow osher.domain.name.hidden, Thurs. Sept. 5, 2002 6:17PM

I have now read page 2 of Yurtsever, having previously read page 1, and I must
confess that his style does not quite have the clarity of my style - his is
more like the clarity of Sigmund Freud's style : > ) However, I am happy
to see that he recognized the role of Gödel's incompleteness theorem on his
page 1.

On page 2, Yurtsever puts the cart before the horse in a sense by telling us
what would happen if his theory turns out to be correct, but since he plans
to prove it on pages 3ff, he can be forgiven for that. I notice, in
connection with the last two paragraphs of his page 2, which run over into the
first two paragraphs of page 3, that he seems to agree with Sir Roger Penrose
and me (independently - I have never met Sir Roger) that brain activity
cannot be faithfully simulated on a digital computer. Sir Roger, by the
way, like me, rather dislikes computers (so I have been told) and does not (or
at least did not, when last I heard) even answer email on a computer. I am
slightly different in that I both write and answer email, but I rather
dislike digital computers, although I will defend to the death their right to
have their own opinions : > ). I have not yet decided about quantum
computers, analog computers, molecular computers, laser/light computers,
etc. My argument about brain activity is far simpler than Sir Roger's - I
derive it from mathematical fuzzy multivalued logics and their
probability-statistics and proximity function-geometry-topology analogs, a
derivation that does not make use of randomness-as-incompressibility or even
computational randomness at all.

Speaking of randomness, I pointed out that incompressibility randomness is
only one interpretation of randomness. To those of us who grew up and
spent at least half of our lives in the non-computer world (or at least, the
not heavily computerized world), the probability-statistics viewpoint and the
computer viewpoint are not quite the same thing. When somebody in one of my
statistics classes tells me that something is random, I tend to be slightly
put off. You see, in probability-statistics everything is random in a
sense. Even the so-called non-random world is random; it is only that the
probability of the random part is near or at zero - which, strangely enough,
does not mean impossible or the null set.

Let me clarify the latter. The probability of an impossible event, like
the probability of the null set, is zero. But an uncountable number of
things have probability zero. In n-dimensional Euclidean space, or even
spaces that are rather similar to it, any (n - k)-dimensional subset (k = 1, 2,
3, ..., n - 1) has probability zero, provided that a continuous random
variable has a distribution on that space or on a volume of space containing
the events in question. The proof is the same as the corresponding proof
for Lebesgue measure. Moreover, the same is true for time, not just space,
since an event that occurs at only one point in time has dimension 0 in
time, and so has dimension one less than the time dimension of 1, and so the
above result holds. So in 3-dimensional Euclidean-like space or 3+1
Euclidean-like spacetime, points, strings, planes, plane figures or their
approximations (laminae), curves, lines, line segments, curve segments, and
2-dimensional surfaces of 3-dimensional objects (e.g., the surface of the
human brain, the surface of a human being, which is usually skin, the surface
of an organ, the surface of the earth, etc.) all have probability 0
under the rather general assumption that a continuous random variable has a
distribution on space(-time), e.g., the Gaussian/normal distribution.
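
To put the claim in symbols - this is only a minimal sketch of the standard
measure-theoretic fact I am appealing to, stated for illustration and not
taken from Yurtsever's paper - suppose X is a random vector on R^n whose
distribution has a density f with respect to Lebesgue measure lambda_n (as
the Gaussian does), and let A be any subset of Hausdorff dimension at most
n - 1 (a point, curve, surface, etc.). Then A has Lebesgue measure zero and
hence probability zero, even though it is not empty:

    % sketch only: probability zero for lower-dimensional sets under an
    % absolutely continuous distribution on R^n (e.g., the Gaussian)
    \[
      \dim_H A \le n - 1
      \;\Longrightarrow\; \lambda_n(A) = 0
      \;\Longrightarrow\; P(X \in A) = \int_A f(x)\, d\lambda_n(x) = 0 ,
    \]
    \[
      \text{even though } A \neq \varnothing .
    \]

So probability zero marks rarity, not impossibility, which is exactly the
distinction the RARE EVENTS terminology below is meant to preserve.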

Events at or near probability zero, and likewise processes with those
characteristics, are RARE EVENTS/PROCESSES (RARE EVENTS for short).

Now that I have started elaborating, I will conclude with one other note of
caution. In what might look like an Old Testament prohibition, I should say
that *ALL IS NOT IN CONCATENATED STRINGS OF SYMBOLS.* In fact, it might be
more accurate to say that almost nothing is in strings, but that might be
misunderstood, so I restrain myself. In my theory, which I refer to as
Rare Event Theory (RET), I distinguish between SYNTAX and SEMANTICS. Of
course, computer people do that too, and computational linguists. But when
push comes to shove, they mostly regard information as SYNTAX. In order
not to confuse myself with computer people or computational linguists, I
distinguish between information, which is syntactic, and KNOWLEDGE, which is
semantic in the usual dictionary sense of MEANING - what symbols and words
and propositions and sentences MEAN. I am not at all sure that
incompressibility captures KNOWLEDGE so much as SYNTAX. However, we will
let that pass for now, except for the slight detail that Knowledge, Memory,
and Rare Events appear to coincide - although part of that claim is a well-motivated
and well-indicated conjecture. In any case, I will continue to the
substance of page 3 rather than interrupt myself or you further. For those
who are interested, I refer readers to my contributions to
superstringtheory.com, to my paper in B. N. Kursunoglu et al. (2000), to G. 't
Hooft's Holographic Principle, and to Statistical Learning Theory by Vapnik
(2000) for starters.

Osher Doctorow
Received on Thu Sep 05 2002 - 18:50:35 PDT
