On Wednesday, September 4, 2002, at 02:44 PM, Hal Finney wrote:
> Tim May wrote:
>
>> In weaker forms of the MWI, where it's the early state of the Big Bang
>> (for example) which are splitting off into N universes, De Witt and
>> others have speculated (as early as around 1970) that we may
>> _possibly_
>> see some evidence consistent with the EWG interpretation but NOT
>> consistent with other interpretations.
>
> I'm not familiar with the details of this. But I know that much of
> the impetus for increased acceptance of MWI models comes from the
> cosmologists.
It was in DeWitt's article, "Quantum mechanics and reality," Physics
Today, September 1970, reprinted in the collection "The Many-Worlds
Interpretation of Quantum Mechanics," edited by Bryce DeWitt and Neill
Graham, 1973.
"Moreover a decision between the two interpretations may ultimately be
made on grounds other than direct laboratory experimentation. For
example, in the very early moments of the universe, during the
cosmological "Big Bang," the universal wave function may have possessed
an overall coherence as yet unimpaired by condensation into
non-interfering branches. Such initial coherence may have testable
implications for cosmology." (p. 165 of the reprint volume).
(Glad to see my memory hasn't failed me. DeWitt's article made a big
splash when it first got wide notice in 1970. Around
that time, "Physics Today" was where we found many wild things. A
beautiful cover painting of a black hole, the first such graphic I'd
seen...perhaps it's scanned and on the Web someplace, as it was a
seminal image, from January 1970, if I remember correctly. And another
cover from around that era was of O'Neill's proposal for L-5 colonies
and powersats.)
>>
>> What's the problem here? I find it utterly plausible that we would be
>> in a universe where matter exists, where stars exist, where entropy
>> gradients exist, etc., and NOT in a universe where the physical
>> constants or structure of the universe makes life more difficult or
>> impossible (or where the densities and entropy gradients mean that
>> evolution of complex structures might take 100 billion years, or more,
>> instead of the billion or so years it apparently took).
>
> The problem is more formal, that if we abandon measurement as a special
> feature of the physics, there is no longer an axiom that says that
> probability is proportional to amplitude squared.
I'm not an expert on this. Jeffrey Bub, in "Interpreting the Quantum
World," 1997, cites several classes of resolutions of the "measurement
problem." He calls them the "For all practical purposes" (FAPP) model,
after Bell, the "change the linear dynamics" model, and the "modify the
orthodox Dirac-von Neumann interpretation principle." From what I can
tell, the Copenhagen interpretation is already a mixed state, so to
speak, of bits and pieces of Bohr's and Heisenberg's interpretations.
By the way, issues of observers and measurements are obviously fraught
with "Chinese boxes" types of problems. In the Schrodinger's Cat
pedantic example, if the "cat alive or cat dead" measurement is made at
the end of one hour by opening the sealed box, what if a video camera
had been also sealed inside the box, and had seen the cat breathe in
the cyanide gas at 10 minutes into the experiment? Does this imply the
"wave function collapsed" at the time of the measurement by the human
observers, at the one hour point, or at the time the video camera
unambiguously recorded the cat's death?
One could arrange a thought experiment involving literally a series of
boxes within boxes, each being opened at, say, one minute intervals
after the cyanide was released or not released. One set of observers
sees the cat either alive or dead at the end of the canonical one hour
period. But they are sealed inside a box. After one minute, their box
is opened, and the observers in the next-larger box then see the
"collapse of the wave function at the 61-minute point." After another
minute, their box is opened and a new set of observers sees "the
collapse of the wave function at the 62-minute point."
And so on. (I don't know if I'm just reinventing a thought experiment
someone developed many decades ago...it seems like a natural idea.)
Seen this way, the "collapse of the wave function" in the Schrodinger's
Cat thought experiment is seen as a problem of knowledge, not something
quasi-mystical about an instantaneous collapse of some psi-squared
function.
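Here's a toy sketch, in Python, of that "problem of knowledge" reading.
Everything in it (the trigger probability, the timings, the function
name) is invented for illustration; it is not a physics simulation, just
the bookkeeping of who knows what, when.

import random

# The cat's fate is fixed when the trigger fires (or doesn't), early in
# the hour; each nested observer merely *learns* that already-fixed
# outcome when his or her box is opened, one minute after the previous box.

def run_nested_boxes(num_observers=5, trigger_probability=0.5):
    cat_alive = random.random() > trigger_probability   # decided once, early on
    reports = []
    for i in range(num_observers):
        opened_at_minute = 60 + i                        # 60, 61, 62, ... minutes
        reports.append((opened_at_minute, "alive" if cat_alive else "dead"))
    return reports

for minute, state in run_nested_boxes():
    print("observer whose box opens at minute %d sees: cat %s" % (minute, state))

Every report is identical; only the time of knowing differs.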
(More interesting are the delayed choice experiments.)
My hunch is that the answer will lie in some of the work Chris Isham is
doing (I've cited this in some past messages). What is characteristic
of quantum events is that "honest observers will never disagree on what
happened" (to paraphrase what Lee Smolin cites in his book, "Three
Roads...")
To an outside observer, the humans watching the box, the outcome is
unknowable until they open the box. Once they open the box, they and
everyone else observing will agree unambiguously as to the outcome. And
they will agree with what the video camera recorded.
The measurement or observation process is just acting as a subobject
classifier, with one of the key properties of such topos-theoretic
structures being that once a subobject classification has been made,
there's no going back. Heyting --> Boolean, where there is never a
situation where some observers report "alive" and others report "dead."
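To make that concrete, here is a toy sketch (Python again, with invented
class and method names; an illustration of the idea only, not actual
topos theory). Before the classification a query returns "undecided";
once the classification has been made it is frozen, and every observer
who asks, then or later, gets the same Boolean answer.

import random

class ClassifiedOutcome:
    """Toy classifier: undecided until classified, then frozen for good."""
    def __init__(self):
        self._value = None                    # not yet classified

    def query(self):
        return self._value or "undecided"     # before classification, no fact of the matter

    def classify(self):
        if self._value is None:               # the classification happens exactly once
            self._value = random.choice(["alive", "dead"])
        return self._value

box = ClassifiedOutcome()
print(box.query())                            # "undecided"
first = box.classify()                        # the irreversible step
assert all(box.classify() == first for _ in range(100))
print("every later observer sees:", first)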
(Smolin makes this point about cosmology, but it also applies to
quantum cosmology, and perhaps to many other areas. John Baez has been
writing recently about "q-logic" being a superset of "logic." He may be
on to something, or he may just be giving a new name to some topos
theory notions...I need to spend more time thinking and learning.)
> The conventional formulation, as described for example at
> http://www.wikipedia.com/wiki/Mathematical_formulation_of_quantum_mechanics,
> has special axioms related to measurement. Systems evolve according to
> the Schrodinger equation except when they are being measured, when we
> get wave function collapse. MWI rejects the axioms related to
> observables
> and collapse. In the page above we can eliminate axiom 3 and probably
> axiom 2 as well. You are left with nothing but Hilbert space and the
> Schrodinger equation. It's a simpler set of axioms.
The Tegmark and Wheeler paper (in Sci Am, IIRC) talks about this issue.
I'm not very convinced that MWI is simpler in any meaningful way. But
I'll be thinking about it more.
Deutsch is pretty convincing in his argument that the double slit
experiment, a century old now, is compelling evidence that single
photons going through one or the other of two slits "must be"
interfering with the myriad of photons in all the nearby universes
where similar experiments are happening.
(Of course, the alternate interpretation is just the familiar
wave-particle duality, with photons sometimes exhibiting particle-like
properties (as in the photoelectric effect) and other times exhibiting
wave-like properties (double slits). I have no particular problem
believing this is "just as simple" as postulating nearby universes. I
have a thought experiment on this which I'll put into a separate post,
after I sleep on it tonight.)
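For what it's worth, the arithmetic both readings have to reproduce is
the same. A minimal numerical sketch (Python; the wavelength, slit
spacing, and screen distance are made-up illustrative values) showing
amplitude addition versus naive probability addition:

import numpy as np

wavelength = 500e-9            # 500 nm light (illustrative)
slit_separation = 50e-6        # 50 micrometres between the slits
screen_distance = 1.0          # 1 m to the screen
x = np.linspace(-5e-3, 5e-3, 1001)                 # positions on the screen

phase = 2 * np.pi * slit_separation * x / (screen_distance * wavelength)
amplitude = 1.0 + np.exp(1j * phase)               # add the two slit amplitudes

p_with_interference = np.abs(amplitude) ** 2       # fringes, even one photon at a time
p_without = np.full_like(x, 2.0)                   # "went through one slit or the other"

print("fringe max/min:", p_with_interference.max(), p_with_interference.min())
print("no-interference value:", p_without[0])

The numbers are interpretation-neutral: whether you read the second
amplitude as "the photon's counterparts in nearby universes" or as "the
wave aspect of this one photon" is exactly the question at issue.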
More convincing to me has always been the quantum computer situation.
If someone builds a QC which can factor a 1000-digit number, I'll be
convinced other nearby universes exist.
(Though even here there is a "wave-like" interpretation favored by
some, that "qubits" are handling the parallelism.)
>
>> ...
>> effects, etc.) are only known to, say, 20 decimal places (if that, of
>> course). Because the "actual" positions, masses, sphericities, static
>> charges, etc. are perhaps defined by 40-digit or even 200-digit
numbers, the Laplacian dream of a sufficiently powerful mind being
>> able
>> to know the future is dashed.
>
> This is true in practice, but I think there is still a significant
> difference between deterministic systems like classical physics or the
> MWI, and inherently non-deterministic systems like the conventional
> Copenhagen interpretation of QM. In the latter you have the difficult
> philosophical problem of explaining where all the information comes
> from.
I'll ponder this.
> This is what led Schmidhuber to suggest that quantum randomness is
> generated by a pseudo random number generator; otherwise you can't
> simulate the universe on a computer because computers can't create
> true random bits. The Copenhagen interpretation is fundamentally
> non-computable.
Maybe too much is made of randomness vs. pseudorandomness.
Here's an example where even in a universe with "no local randomness"
(no spontaneous decay, nothing that is not locally deterministic),
unpredictable bits can be generated. The example is impractical, slow,
etc., but it
makes the point.
Imagine powerful telescopes aimed at two distant and very, very fuzzy
galaxies approximately at opposite ends of the observable universe. If
Galaxy A produces a supernova before Galaxy B does, call it a "1."
Otherwise, a "0."
This Galactic Random Number Generator, or GRNG, cannot be predicted or
influenced (outcome altered) by any being subject to the laws of
relativity. The two galaxies are "spacelike to the max," being
separated by perhaps 15 billion light years, yadda yadda. We on Earth
can see the supernovae if and when they occur, but neither galaxy can
see the other.
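A toy simulation of the GRNG, just to fix the procedure (Python; the
exponential waiting times are a stand-in I made up for the sketch, since
in the scenario above the supernova timings are deterministic but
unknowable to any observer who cannot see both galaxies, which is the
whole point):

import random

def grng_bit(mean_wait_years=1e7):
    # Stand-in waiting times for the two supernovae; random draws here only
    # because no observer inside the thought experiment could compute them.
    t_a = random.expovariate(1.0 / mean_wait_years)   # Galaxy A
    t_b = random.expovariate(1.0 / mean_wait_years)   # Galaxy B
    return 1 if t_a < t_b else 0

print("".join(str(grng_bit()) for _ in range(32)))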
I can imagine many variants of this example, all cases where finite
beings or computers in the universe are limited in what they can know
by physics and cosmology constraints.
Now I am not saying that this has anything to do with quantum
randomness, just making the point that we know of no omniscient,
omnipresent, omnipotent beings which could see all galaxies and know
whether a "1" or a "0" would be recorded in this experiment.
Are folks here familiar with the "holographic" picture of information
flow near event horizons?
--Tim May
"Aren't cats Libertarian? They just want to be left alone.
I think our dog is a Democrat, as he is always looking for a handout"
--Unknown Usenet Poster