## Tuesday, March 10, 2009 ... //

### Decoherence is a settled subject

After some sensible and creative texts, Marco Frasca decided to defend a wrong system of ideas and joined the anti-quantum zealots. ;-) A lot has been written about these things. So let me begin with some natural sources:

#### Original sources

Wojciech Zurek: Decoherence and the transition from quantum to classical (1991), 800+ cits, version updated in 2003, full text

Wojciech Zurek: Decoherence, einselection, and the quantum origins of the classical (2001), 550+ cits, full text

Robert Griffiths: Consistent histories and the interpretation of quantum mechanics (1984), 350+ cits, full text

Murray Gell-Mann and James Hartle (1993): Classical equations for quantum systems, 450+ cits, abstract

#### Largely reviews

Roland Omnes: Consistent interpretations of quantum mechanics (1992), 200+ cits, abstract

Luboš Motl: Entanglement and interpretations of quantum mechanics (2005), a course, full text

#### Some general conclusions & slogans

1. Quantum mechanics is valid everywhere, for small and large systems, for intelligent and unintelligent objects; and all quantities that were thought to be "real" classical observables in the ignorant era of classical physics become linear operators on the Hilbert space with their eigenstates, eigenvalues, and their probabilities that can be predicted from the amplitudes (but never deterministically); there is no segregation of contextual and real observables

2. Classical physics is always just an approximation, and can be derived to be a good one under certain circumstances

3. Only probabilities may be predicted by quantum mechanics (i.e. in the real world) and classical determinism only occurs when the probabilities become negligibly small everywhere except for a small vicinity of the "correct" classical history

4. The boundary between the quantum and classical realms occurs when the interference effects get suppressed; this loss of coherence (the loss of information about the relative phases of complex amplitudes) is the only effect that universally occurs to produce a limit that is well described by classical physics in our quantum world

5. The suppression of quantum interference is called decoherence, and is caused by the interactions with the environment (i.e. degrees of freedom that we can't and don't want to keep track of); in this process, the off-diagonal elements of the density matrix plummet and the diagonal entries may be interpreted as classical probabilities; in this regime when the interference is gone, Bell's inequalities (and other manifestations of the classical intuition) become approximately valid

6. Quantum mechanics fully determines where this boundary occurs, and the required inequalities depend on the physical system, its Hamiltonian, the density of the environment, the strength and speed of interactions, and many other things: the emergence of the classical limit is a dynamical question and there is no "universal" answer to questions such as "how many atoms or how much time one needs for classical physics to emerge"; everyone should calculate or review at least five order-of-magnitude estimates of the "critical" quantities where the classical limit becomes valid, in order to see the huge diversity of these scales in different contexts; these calculated boundaries are obviously correct on theoretical grounds and in many cases, the quantum-classical transition can actually be observed (at the predicted place)

7. In a classical regime, the preferred basis vectors of the Hilbert space are those that can imprint themselves into the environment (in Zurek's jargon, these states pass the einselection which makes them immune against decoherence); bizarrely non-local Schrödinger's cat superpositions are not in this category, and one can show, e.g. in the consistent history framework, that they don't allow us to formulate consistent histories (for which the probabilities add as expected from logic); it is fully understood what's wrong with Schrödinger cat superpositions and the derivation of the preferred states depends on the Hamiltonian

8. There is no room for a physical collapse or, on the contrary, for an ad hoc privileged role of conscious observers; the wave functions only predict the probabilities but they can be calculated for any set of consistent histories, regardless of whether the systems look conscious, unconscious, macroscopic, or microscopic; the only "collapse" that occurs is the rapid diagonalization of the density matrix in the preferred basis by the interactions with the environment; however, the "unrealized" diagonal entries of the matrix (probabilities of outcomes that won't come true) are never "physically" set to zero because their interpretation always remains probabilistic, even when the classical approximation becomes acceptably accurate

9. There cannot be any deterministic description that would allow one to know the outcomes non-probabilistically, such as "pilot waves" or "hidden variables", not even in principle, and questions attempting to know "more" than what quantum mechanics predicts are unphysical; the Conway-Kochen free will theorem is one way to prove that the microscopic outcomes can't be determined deterministically

10. From all practical points of view, Niels Bohr and his friends in the Copenhagen school were right on the money and decoherence may be interpreted as a justification, derivation, or a proof of their assumption that the classical intuition is fine for (mostly) large objects and quantum mechanics is crucial for (mostly) microscopic objects; they didn't know the modern derivation of decoherence but they understood its qualitative implications

11. Decoherence is a process with an inherent arrow of time that makes it analogous to friction, heat dissipation, and other thermodynamic processes with an arrow of time; the effects are related, the arrows inevitably agree with one another, and decoherence is as real as the other processes (that increase the entropy); the time-reversal asymmetry of decoherence is inevitable because the environment can't be assumed to be non-locally entangled with the system in the far past, but it can be shown to be correlated in the future because of the evolution (and one can't ever assume anything about the future or impose "final" boundary conditions by the very definition of the future which is yet to be seen)

12. On the other hand, consistent histories are just a particular convenient framework to formulate physical questions in a certain way; the only completely invariant consequence of this formalism is the Copenhagen school's postulate that physics can only calculate the probabilities, they follow the laws of quantum mechanics, and when decoherence is taken into account, to find both the quantum/classical boundary as well as the embedding of the classical limit within the full quantum theory, some questions about quantum systems follow the laws of classical probability theory (and may be legitimately asked) while others don't (and can't be asked)
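The mechanism in points 4-6 can be made quantitative even in a toy model. Below is a minimal sketch (Python with NumPy; the exponential suppression of the coherences is put in by hand rather than derived from any particular system-environment Hamiltonian, so the rate `gamma` is purely illustrative) showing how the off-diagonal elements of a density matrix decay while the diagonal entries, the would-be classical probabilities, stay put:

```python
import numpy as np

# Pure superposition state (|0> + |1>)/sqrt(2) and its density matrix.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho0 = np.outer(psi, psi.conj())

def dephase(rho, gamma, t):
    """Toy dephasing channel: interactions with an unmonitored environment
    multiply the off-diagonal (coherence) terms by exp(-gamma*t) while
    leaving the diagonal (probability) entries untouched."""
    damp = np.exp(-gamma * t)
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

for t in [0.0, 1.0, 5.0, 20.0]:
    rho = dephase(rho0, gamma=1.0, t=t)
    print(t, np.round(rho.real, 6))
# The diagonal stays (0.5, 0.5) -- these survive as classical probabilities --
# while the off-diagonal interference terms are driven to zero.
```

In a genuine calculation the damping factor emerges from tracing out the environmental degrees of freedom, and the rate encodes exactly the system-dependence emphasized in point 6.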

#### snail feedback (6):

I have a question (not directly related to decoherence) about your general view of QM.

In all your posts touching the subject of QM you insist on the "proven" fundamentally probabilistic nature of our universe and on the failure of various non-orthodox interpretations to explain the experimental data.

My question is: how do you explain the experimental results (correctly predicted by QM) for a simple EPR experiment in which both Alice and Bob measure the spin along the same axis?

We know for a fact that every such measurement will result in a perfect anticorrelation (or correlation if using PDC). Logically, I see only two possible explanations:

1. A non-local interaction of some sort (wave function might be a real, physical field, or some other non-local force might be at work)

2. A strict deterministic evolution of the entire system (including the so-called free choices of Alice and Bob) - hypothesis named superdeterminism by Bell.

As far as I could understand, you reject both explanations and say that one must accept the purely probabilistic nature of QM. However, I don't see how this appeal to probabilities explains the experimental data. Can you explain your position on this matter, or do you take the "shut up and calculate" approach?

Thank you,

Andrei Bocan

Dear Andrei, indeed, both "explanations" that you can see are wrong.

And indeed, one must accept the probabilistic interpretation of QM. It's interesting that you admit that you know this possible answer, but you still don't include it among the explanations that "you can see".

Do you become blinded right before you enumerate the answers that "you can see"? ;-)

I have no idea what you could possibly mean by the statement that the probabilistic framework of quantum mechanics doesn't explain the experimental data. It explains all of them. The probabilistic distributions always match the predictions of QM, and so does the very random character of all individual events. What we see is what quantum mechanics predicts.

An alternative theory would claim that the outcome of individual measurements is not random - but huge classes of such conceivably alternative theories have been proven impossible, i.e. incompatible with the data, once their logical consequences (such as Bell's inequalities) are derived. So it is not only that these theories are uneconomical; they are literally excluded today.
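The exclusion argument can be made concrete with the standard CHSH form of Bell's inequality. The sketch below (Python; the angles are the usual optimal choice) compares the quantum singlet prediction with a brute-force scan over all local deterministic assignments, which can never exceed the classical bound of 2:

```python
import itertools
import numpy as np

# Singlet-state correlation for spin measurements along angles a and b:
# E(a, b) = -cos(a - b)  (the standard quantum prediction).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4
S_quantum = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S_quantum)    # -> -2.828..., i.e. |S| = 2*sqrt(2) > 2

# Any local deterministic model assigns fixed outcomes +-1 to each setting;
# brute-forcing all such assignments shows |S| can never exceed 2.
S_classical = max(
    abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
    for A1, A2, B1, B2 in itertools.product([-1, 1], repeat=4)
)
print(S_classical)  # -> 2
```

The measured value agrees with the quantum 2*sqrt(2), which is exactly why the whole class of local deterministic models is excluded rather than merely disfavored.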

I think that I am explaining my position about all these matters all the time. Why do you think that I am not? I think that it is because you may be a blinded zealot who simply doesn't want to "see" certain things.

After all those detailed and brutally indisputable explanations, "shut up and calculate" sounds far too optimistic because you don't seem to want to calculate anything. It may be more appropriate to say "shut up and f*ck off my blog". Thank you! ;-)

"I have no idea what you could possibly mean by the statement that the probabilistic framework of quantum mechanics doesn't explain the experimental data. It explains all of them."

There is a distinction between predicting something and explaining. I accept the fact that QM correctly predicts anything that can be tested.

"The probabilistic distributions always match the predictions of QM, and so does the very random character of all individual events. What we see is what quantum mechanics predicts."

Indeed, the predictions of QM are confirmed. But you didn't answer the question of how you explain those predictions. Why does Alice's measurement change the probabilities at Bob's place (which can be far away) if the two regions cannot influence each other?

Dear Andrei, when it comes to common language, "explain" is something different because it can also mean what the teachers are supposed to do, while "predict" is what the analysts are doing.

But when we talk about the abilities of theories, "predict/retrodict" and "explain" are the very same thing, namely to be able to determine the character of some objects, events (past or future), or their outcomes, from a more limited, economic set of data and algorithms than the collection of all the data. The more efficient (and accurate and general) the explanatory theory = predictive framework is, the more satisfying it is.

That's what quantum mechanics is doing. With the right Hamiltonian and/or initial conditions, it is surely predicting and explaining all phenomena in the microscopic world.

Andrei: "But you didn't answer the question of how you explain those predictions."

LM: I have answered the question. The answer is that quantum mechanics explains itself, at least to everyone who is ready, willing, and able to listen.

Andrei: "Why does Alice's measurement change the probabilities at Bob's place (which can be far away) if the two regions cannot influence each other?"

LM: Alice's measurement doesn't change any probabilities, and surely doesn't produce any physical influence at a distance.

What actually happens is that two people observe spins and the only thing that can be determined in the real world, even in principle, is the probability of different outcomes. The outcomes of A,B are correlated in general, and QM explains and predicts the right formulae for the probabilities of any configuration of results at point A, and any results at point B.

If these probabilities are written in a table (rows for A outcomes, columns for B outcomes), it may happen that the result at A is already known. In that case and at this moment, only one row of the table remains relevant. But there has been no influence going from A to B. One only realized that a big part of the table is no longer useful. No superluminal signals.
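The table argument is simple enough to spell out numerically. Here is a sketch (Python; the numbers are the standard singlet probabilities for equal measurement axes, and the variable names are mine) showing that conditioning on Alice's row changes the conditional probabilities for Bob's outcome while his locally measurable statistics stay untouched:

```python
import numpy as np

# Joint probability table for a singlet pair measured along the same axis:
# rows = Alice's outcome (up, down), columns = Bob's outcome (up, down).
P = np.array([[0.0, 0.5],
              [0.5, 0.0]])

# Bob's local statistics are the column sums. This is all Bob can ever
# measure on his own, and it does not depend on anything Alice does.
bob_marginal = P.sum(axis=0)
print(bob_marginal)   # [0.5 0.5]

# If Alice's outcome is known to be "up", only the first row stays relevant;
# renormalizing it gives the conditional probabilities for Bob's outcome.
row = P[0] / P[0].sum()
print(row)            # [0. 1.]  -- perfect anticorrelation

# Discarding a row is an update of knowledge, not a physical influence:
# Bob's unconditioned marginal is the same whether or not Alice measured.
```

The anticorrelation shows up only when the two records are brought together and compared, which requires an ordinary subluminal channel.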

Quite on the contrary, quantum field theory is easily shown to be local, in the sense that physical signals can only propagate by speeds slower than light, just like relativity requires, and quantities at spacelike separation anti/commute with each other (except for tiny corrections in the presence of black hole horizons).
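The commuting of spacelike-separated quantities has a simple finite-dimensional analogue: observables acting on different tensor factors of a two-party Hilbert space commute, which is one way to see why nothing Alice does can alter Bob's local statistics. A minimal sketch (Python with NumPy; the choice of Pauli matrices is just for illustration):

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Alice's observable acts on the first qubit, Bob's on the second.
A = np.kron(sz, I2)   # sigma_z tensor 1
B = np.kron(I2, sx)   # 1 tensor sigma_x

commutator = A @ B - B @ A
print(np.allclose(commutator, 0))   # True -- they commute
```

In quantum field theory the analogous statement is microcausality: field operators at spacelike separation commute (or anticommute for fermions), which guarantees the absence of superluminal signaling at the operator level.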

There is absolutely no contradiction and no remote action here, it's been explained millions of times, but people like you are simply unwilling to listen, and you prefer to repeat the stupid crap about superluminal signals that don't exist.

LM:
What actually happens is that two people observe spins and the only thing that can be determined in the real world, even in principle, is the probability of different outcomes. The outcomes of A,B are correlated in general, and QM explains and predicts the right formulae for the probabilities of any configuration of results at point A, and any results at point B.

If these probabilities are written in a table (rows for A outcomes, columns for B outcomes), it may happen that the result at A is already known. In that case and at this moment, only one row of the table remains relevant. But there has been no influence going from A to B. One only realized that a big part of the table is no longer useful. No superluminal signals.

Andrei:

OK, I accept your description, I agree that this is what happens. You say that no more explanation is needed because "QM explains itself". OK, let's see how exactly this explanation goes. The wave function, which completely describes the entangled pair, is said to collapse upon measurement, producing the experimental outcomes in a purely probabilistic manner. I have absolutely no problem with this explanation. But such an explanation is logically incompatible with your rejection of the wave function as a real field and of the non-local collapse as a real, physical phenomenon. In other words, you cannot say that one row in the table is redundant because a non-existing wave function underwent a non-physical collapse. Therefore, in order to maintain logical consistency, you should provide an explanation of the experimental results based on something other than the bare QM formalism. Do you agree with this?

I would like to add that the reason I ask you these questions is that I respect you and your knowledge of the subject. It is not my intention to attack you or to support non-local theories (I also reject them). From your posts I understood quite well what you reject, but a clear statement of what your own interpretation is is, I think, still missing.

LM:
Quite on the contrary, quantum field theory is easily shown to be local, in the sense that physical signals can only propagate by speeds slower than light, just like relativity requires, and quantities at spacelike separation anti/commute with each other (except for tiny corrections in the presence of black hole horizons).

Andrei:
I know that it has been mathematically proven that any strong type of non-locality is ruled out by QM (no faster than light mass/energy transfer, no instantaneous messages).