Einstein, Podolsky and Rosen (1935) suggested that there are hidden
variables not captured by standard quantum mechanics, and postulated
that all interactions are local. In 1964 Bell introduced an inequality
whose violation was thought to provide evidence against _local_ hidden
variable theories. Tests of the violation are as yet inconclusive; see
the chain of papers starting perhaps with Aspect et al., Phys. Rev.
Lett. 49, 91 and 1804 (1982), Franson's counterargument in Phys. Rev.
D, p. 2529 (1985), and numerous more recent papers. Many physicists
now believe in nonlocality, but the issue is not settled yet.

Is Bell's inequality relevant to algorithmic TOEs? It rests on locality
assumptions - but ATOEs do not even insist on locality. There are
computable universes whose light speed exceeds ours, or in which there
is no light at all, in which there is global interaction among distant
particles, or in which there are no particles as we know them, and so
on. The point of ATOEs is: as long as the computability assumptions
hold, any complex universe is unlikely, no matter whether Bell's
inequality makes sense in it or not.

From this point of view, Bell's inequality and locality are not the
real issues. The issue is more general: determinism vs. nondeterminism.
There is NO experimental data proving the universe is truly
nondeterministic. If it were, where would the unexplained randomness
come from? As pointed out by Saibal Mitra, non-crackpots such as
't Hooft (physics Nobel 1999) endorse hidden variable theories and
determinism, e.g.:
http://xxx.lanl.gov/abs/hep-th/0105105
http://xxx.lanl.gov/abs/hep-th/0104219

Sure, there is a lot of data that at first glance suggests probabilistic
aspects of particle behavior, but many pseudorandom generators produce
data that match Gaussian or other well-known distributions. I think
nobody has ever bothered to systematically check existing data for
pseudorandomness. (Note that decorrelation does not imply independence.)
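
Both points are easy to demonstrate. Here is a small Python sketch
(the generator, seed, and sample sizes are my own arbitrary choices,
not anything drawn from physics data): a fully deterministic generator
produces data that fit a Gaussian, and two variables can have vanishing
correlation while one is completely determined by the other.

    import random, statistics

    rng = random.Random(42)     # fully deterministic: same seed, same "data"
    xs = [rng.gauss(0.0, 1.0) for _ in range(100000)]
    print(statistics.mean(xs), statistics.stdev(xs))  # close to 0 and 1

    # Zero correlation without independence: Y is a function of X, yet
    # their covariance vanishes because X is symmetric around zero.
    X = [rng.uniform(-1.0, 1.0) for _ in range(100000)]
    Y = [x * x for x in X]
    mx, my = statistics.mean(X), statistics.mean(Y)
    cov = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / len(X)
    print(cov)   # close to 0, although Y is fully determined by X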

Suppose the history of some universe is indeed generated by a
deterministic computer program. In principle this would make everything
predictable for anybody who knows the program. This does not at all
imply, however, that an observer evolving within the universe could
predict exactly what is happening light years away, for several reasons:
1. He may never get full access to the current hidden variables, i.e.,
the current state of the program, because of:
1a. some sort of prewired uncertainty principle that holds for local
observers (but not for outsiders such as the Great Programmer).
1b. a maximal universe-specific speed that prevents the observer from
collecting precise information about distant parts of his universe.
2. Even if the observer had full access to the state, he would need a
predictor built within his universe to make predictions. In general,
this machine will run much slower than the universe itself, unable to
predict it in time (but perhaps able to postdict parts of it); see the
toy sketch after this list.
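
To make reason 2 concrete, here is a toy Python sketch; the choice of
elementary cellular automaton rule 110 as the "universe" and the
slowdown factor C are my own illustrative assumptions. Even with the
full state and the rule in hand, a predictor that pays C universe ticks
of compute per simulated step falls ever further behind:

    RULE = 110   # assumed toy physics: elementary CA rule 110
    C = 5        # assumed cost: C universe ticks per predicted step

    def step(cells):
        # one synchronous update of the elementary CA (periodic boundary)
        n = len(cells)
        return [(RULE >> (4 * cells[(i - 1) % n]
                          + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    universe = [0] * 40
    universe[20] = 1
    predicted = list(universe)
    budget, T = 0, 50
    for t in range(T):
        universe = step(universe)   # the universe advances every tick
        budget += 1
        if budget == C:             # predictor affords one step per C ticks
            budget = 0
            predicted = step(predicted)
    print("universe steps:", T, "- predictor has only reached:", T // C)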

Another illustration of point 2: there are pseudorandom generators that
run extremely slowly but produce extremely convincing output, such as
the digits of the enumerable but effectively random number Omega. The
speed prior, however, suggests that our universe's pseudorandom
generator is much faster than the Omega generator.
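
For the flavor of such a slow generator, here is a hypothetical Python
sketch of the standard dovetailing scheme that enumerates lower bounds
on Omega. The prefix-free toy programs 1^k 0 and their pretend halting
rule are placeholders of my own; a real universal prefix machine would
yield an Omega that converges uncomputably slowly:

    def halts_within(k, budget):
        # Placeholder semantics for toy program 1^k 0: pretend it halts
        # after k*k steps when k is odd and diverges when k is even.
        # A real universal prefix machine would go here instead.
        return k % 2 == 1 and budget >= k * k

    omega_lower = 0.0
    halted = set()
    for budget in range(1, 200):   # dovetail: more programs, more steps
        for k in range(budget):
            if k not in halted and halts_within(k, budget):
                halted.add(k)
                omega_lower += 2.0 ** -(k + 1)  # program 1^k 0 has length k+1
                print("budget", budget, "=> Omega >=", omega_lower)

The mechanism is the point: the lower bound only ever improves, yet an
observer running the scheme can never tell how close it is.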

Juergen
http://www.idsia.ch/~juergen/
http://www.idsia.ch/~juergen/everything/html.html
http://www.idsia.ch/~juergen/toesv2/