Re: The universe consists of patterns of arrangement of 0's and 1's?

From: Tim May <tcmay.domain.name.hidden>
Date: Sat, 30 Nov 2002 11:45:23 -0800

On Friday, November 29, 2002, at 02:44 AM, Marchal Bruno wrote:

> Stephen Paul King <stephenk1.domain.name.hidden> wrote:
>
>> I agree completely with that aspect of Bruno's thesis. ;-) It is the
>> assumption that the 0's and 1's can exist without some substrate that
>> bothers me. If we insist on making such an assumption, how can we even
>> have a notion of distinguishability between a 0 and a 1?
>> To me, it's analogous to claiming that Moby Dick "exists" but there
>> does not exist any copy of it. If we are going to claim that "all
>> possible computations" exist, then why is it problematic to imagine
>> that "all possible implementations of computations" exist as well?
>
> But then you need to explain what an "implementation" is. Computer
> scientists have no problem with this. There are nice mathematical
> formulations of it. Tim would say that an implementation is basically
> a functor between categories.
> You seem to want a preeminent material level, but this is more a
> source of difficulty than an explanation. What is that level?

Bruno is right that I would emphasize the mathematics over the "COMP"
aspects. Computations are kinds of mathematics: mappings, iterations,
theorem proving, even topological operations of various kinds. Not all
mathematics is easily implemented on computers, but the principle is
clear.

I suppose I am partly a Platonist, in that I believe there's more to
mathematics than mere symbol manipulation (the Formalist school).
Computers are exciting because they give us another way to make real
(or reify) the abstractions of mathematics. I believe, for example,
that categories (e.g., HILB or VECT) are in some sense "real," that we
can send our minds and our computers as robot explorers into these
"scapes" of Platonia, into the ideosphere, into noespace, or whatever.
(Sorry for waxing poetic...)

There is a sense in which the Platonist point of view is consistent
with the Chaitin/Wolfram notion that mathematics will become largely
explorational. Arguably, this has been what mathematics has _always_
been, that the process of discovering truths is not about proving
theorems from postulates, at least not exclusively. Even geometry got
its start not from considering abstractions out of a pure ideosphere,
but from issues of measuring the earth (geo-metry), of building
pyramids, of dividing farmlands, of measuring grain storage, and so on.
Later mathematics was also guided at least partly by the practical,
whether the study of differential equations or elliptic functions. Of
all of the possibly-provable truths, laid out like stepping stones in a
vast marsh of as-yet-unproved and possibly-unprovable truths, which
stepping stones are followed, and are laid in the marsh as new proofs
are obtained, is often shaped by engineering and physics
considerations. This holds even in the purest areas of mathematics,
such as number theory. The Chaitin argument that computers will be
used increasingly
to explore this landscape is, I think, certainly correct. (Personally,
I am tremendously excited to think about what future versions of
Mathematica, for example, will look like when running on computers 100
or 1000 times faster than my current Mac and running with immersive VR
graphics systems. At Moore's Law rates of progress, I'll have this here
in my home within the next 10-15 years or so. This is my main interest,
more so than speculating on whether the universe runs on a computer or
not. But Everything issues touch on this...)
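
As a back-of-the-envelope check on that estimate: assuming the 18-24
month doubling time usually quoted for Moore's Law (the doubling time
and the baseline are my assumptions, not figures from the argument
above), the arithmetic works out to roughly 10-20 years for a 100x to
1000x speedup. A minimal sketch, in Python:

import math

# Years needed for a given speedup factor, assuming performance doubles
# every `doubling_months` months (an assumed Moore's-Law-style rate).
def years_to_speedup(factor, doubling_months):
    return math.log2(factor) * doubling_months / 12.0

for factor in (100, 1000):
    for months in (18, 24):
        print(f"{factor:>5}x at one doubling per {months} months: "
              f"~{years_to_speedup(factor, months):.1f} years")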

OK, so which is it, really, Platonism or Formalism? Paul Taylor makes a
good case in "Practical Foundations of Mathematics" that category
theory in general and topos theory in particular provide the
unification of these two points of view. Mathematical objects live in a
universe of categories, with certain rules for moving between
categories, and various such universes exist as toposes. We as humans
can manipulate these rules, learn how these objects behave, and thus
explore these spaces.
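
To make the "rules for moving between categories" remark (and Bruno's
"implementation is a functor" comment above) concrete, here is a toy
sketch of my own devising, not a construction from anyone's paper: a
tiny hand-built "specification" category, a mapping of its objects to
Python types and its arrows to Python functions, and a spot-check that
composition is preserved, which is the functor law.

# A toy "specification" category: objects A and B, identity arrows, and
# one non-identity arrow f : A -> B.  Morphisms are stored as
# name -> (source, target); the composition table is keyed by
# (applied_second, applied_first).
spec_morphisms = {"1_A": ("A", "A"), "1_B": ("B", "B"), "f": ("A", "B")}
spec_compose = {("1_A", "1_A"): "1_A", ("1_B", "1_B"): "1_B",
                ("f", "1_A"): "f", ("1_B", "f"): "f"}

# The "implementation" functor: objects become Python types, arrows
# become actual functions (all choices here are purely illustrative).
obj_map = {"A": int, "B": str}
mor_map = {"1_A": lambda x: x, "1_B": lambda s: s, "f": str}

def check_composition(compose, morphisms, mor_map, samples):
    # Spot-check the functor law F(g o f) = F(g) o F(f) on sample inputs.
    for (g, f), gf in compose.items():
        source = morphisms[f][0]
        for x in samples[source]:
            assert mor_map[g](mor_map[f](x)) == mor_map[gf](x)
    return True

print(check_composition(spec_compose, spec_morphisms, mor_map,
                        samples={"A": [0, 7, 42], "B": ["", "moby"]}))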

Now whether it makes sense (or "is really the case") to say that
Reality is some kind of computer program is not at all clear to me. Like
many others, I have problems with the notion that reality is a program
running on some kind of metacomputer. Perhaps computation is woven into
the fabric of spacetime at a deep enough level, and perhaps there are
alternative "state machine" rules which could be imagined in other
universes (or even in different parts of our universe, e.g., changes in
rules at very high energies, or near singularities, etc.).
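
For concreteness, "state machine rules" here could be as simple as a
one-dimensional cellular automaton of the kind Wolfram catalogs. A
minimal sketch (the rule number, width, and step count are arbitrary
choices of mine, not anything claimed above):

# Minimal 1-D cellular automaton.  Rule 110 is picked only because it is
# a well-known Wolfram rule; WIDTH and STEPS are arbitrary.
RULE, WIDTH, STEPS = 110, 64, 32

def step(cells, rule=RULE):
    # Each cell's next value is looked up from its 3-cell neighbourhood
    # (left, self, right), read as a 3-bit index into the rule's table.
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) |
                      (cells[i] << 1) |
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # single seed cell
for _ in range(STEPS):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)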

I'm not--at this time--much engaged by the "universe as a computer
program" idea. A useful hypothesis to have--the
Zuse/Fredkin/Lloyd/Schmidhuber/Wolfram/etc. thesis, in its various
forms--but a long, long way from being established as the most
believable hypothesis. To me, at least.

(I think Egan gives us a fairly plausible, fictional timeline for
figuring this stuff out: a workable TOE by the middle of this century,
i.e., within our lifetimes. That is, a theory which unifies relativity
and QM, and which presumably also brings in QED, QCD, etc. Perhaps
involving a mixture of string/brane theory, spin foams and loop
gravity, etc. Lee Smolin has some plausible speculations about how
these areas may come together over the next several decades. This TOE
is of course not expected to be truly a theory of everything, as we all
know: the phrase TOE is mostly about the unification of the two major
classes of theories noted above.

Then perhaps several centuries of very little progress, as the energies
needed to reach the Planck scale are enormous (e.g., compressing a mass
about equal to a cell to a size 20 orders of magnitude smaller than a
proton). Egan plausibly describes an accelerator the length of a chunk
of the solar system, using the most advanced "PASER" (the solid-state
lasing accelerators proposed recently), to accelerate particles to the
energies where discrepancies in models (computer programs??) might show
up. In one of his novels ("Diaspora") he has this happening a few
thousand years from now. This sounds about "right" to me. (I'll be
happy to give some of my reasons for "pessimism" on this timetable if
there's any real interest.)
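
For the numbers behind that "mass of a cell, 20 orders of magnitude
smaller than a proton" characterization, a quick calculation from the
standard constants (the proton charge radius is my assumed input, not a
figure from above):

import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

planck_mass   = math.sqrt(hbar * c / G)     # ~2.2e-8 kg (tens of micrograms)
planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
proton_radius = 0.84e-15                    # m, assumed value

print(f"Planck mass:   {planck_mass:.2e} kg")
print(f"Planck length: {planck_length:.2e} m")
print(f"proton radius / Planck length: {proton_radius / planck_length:.1e}")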

Of course, breakthroughs in mathematics may provide major new clues,
which is where I put my efforts.)

I take the "Everything" ideas in the broader sense, a la Egan's "all
topologies model," a la the "universes as toposes" (topoi) area of
study, etc. My focus is more on logic and the connections between
topology, algebra, and logic. It may be that we learn that at the
Planck scale (approx. 10^-35 m) the causal sets are best modeled as
computer-like iterations of the spin graphs. But this is a long way
from saying consciousness arises from the COMP hypothesis, so on this
topic I am silent. As Wittgenstein said, "Whereof one cannot speak, one
must remain silent." Bluntly, don't talk if you have nothing to say.

Which is why I have little to say about the COMP hypothesis. I'll be
excited if evidence mounts that there's something to it. If the COMP
hypothesis has engineering implications, e.g., affects the design of AI
systems, this will be cool.


>
>> Could we not recover 1-uncertainty from the Kochen-Specker
>> theorem of QM itself?
>
> Probably so.
>

This seems to be assuming the conclusion. Gleason's Theorem and
Kochen-Specker are about the properties of Hilbert spaces. But the
reason we use the Hilbert space formulation for quantum mechanics, as
opposed to just using classical state spaces, is that the Hilbert
space formulation (largely due to von Neumann) gave us the "correct"
noncommutation, uncertainty principle, Pauli exclusion principle, etc.,
things which were consistent with the observed properties of simple
atoms, slit experiments, etc. In other words, the
Planck/Einstein/Heisenberg/Schrodinger/Bohr/etc. results and successful
models (e.g., of the atom) gave us the Hilbert space formulation, which
Gleason, Bell, Kochen, Specker, etc. then proved theorems about.
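
To spell out the noncommutation point: once observables are operators
on a Hilbert space, the Robertson bound Delta(A)*Delta(B) >= |<[A,B]>|/2
follows, and [x,p] = i*hbar then gives the familiar position-momentum
uncertainty relation. A small numerical illustration (the spin-1/2
operators and the sample state are arbitrary choices of mine, and numpy
is assumed):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([1.0, 0.6 + 0.3j], dtype=complex)
psi /= np.linalg.norm(psi)            # arbitrary normalized state

def expval(op, state):
    return np.vdot(state, op @ state)

def spread(op, state):
    # Standard deviation of the observable `op` in `state`.
    return np.sqrt(expval(op @ op, state).real - expval(op, state).real ** 2)

comm = sx @ sy - sy @ sx              # equals 2i * sigma_z, i.e. nonzero
lhs = spread(sx, psi) * spread(sy, psi)
rhs = abs(expval(comm, psi)) / 2
print(f"Delta(sx)*Delta(sy) = {lhs:.4f} >= |<[sx,sy]>|/2 = {rhs:.4f}")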

I don't think it would be kosher to assume reality has aspects of the
category HILB and then use theorems about Hilbert spaces to prove
the Uncertainty Principle.

(My apologies if this was not what was intended by "recover
1-uncertainty.")

This is a good example, by the way, of how the physics applications of
Hilbert spaces incentivized mathematicians to study Hilbert spaces in
ways they probably would not have, had Hilbert spaces been just another
of many abstract spaces. Gleason had many interests in pure math, so he
probably would have proved his theorem regardless, but Bell, Kochen,
and Specker probably would not have, had QM issues not been of such
interest.




--Tim May
Received on Sat Nov 30 2002 - 14:50:41 PST
