Thursday, April 07, 2016

Backreaction mostly right on quantum mechanics

Sabine Hossenfelder may have appreciated a serious enough endorsement of her comments on quantum mechanics and Bill Nye. And that may be a reason why she's gonna get some more for her

10 Essentials of Quantum Mechanics
I won't grade her because that's not what Gentlemen do to Ladies, but if I were grading her, she would get a B today – a much better grade than the D of Mark Alford and Steve Hsu, who believe that the EPR correlations are due to some kind of nonlocality – or at least that it's (pedagogically etc.) useful to invent new awkward phrases whose goal is to keep this fundamental misconception alive.

But let's return to the text at Backreaction (and Starts With a Bang). Let me discuss those ten assertions one by one (Alford's and Hsu's mistake is addressed in item #4).

1. Everything is quantum.
Evaluation: Exactly.

This is a seemingly trivial point for a practicing physicist but an important one in these discussions and I think it's probably right that it appeared at the very top. The point is that you can't understand quantum mechanics as some "extreme rules" that only hold in some limited situations while most of the world follows "different laws". In principle, the whole Universe follows the quantum mechanical laws. However, "large enough" or "macroscopic" or otherwise special systems also approximately obey laws of another type, those of classical physics, because in those regimes classical physics and quantum mechanics become indistinguishable. That's why it was hard to notice the quantum effects and people had to wait until 1925. But quantum mechanics was born and it applies everywhere.

2. Quantization doesn’t necessarily imply discreteness
Evaluation: Exactly.

Linguistically, the terms "quantum theory" (1900) and "quantum mechanics" (1925) were chosen because some quantities that were continuous in classical physics – well, especially energy but also the angular momentum etc. – turned out to be discrete. Electromagnetic waves came in quanta (later named photons) and those quanta were also emitted or absorbed when atoms jumped from one energy level to another. Using the quantum mechanical jargon, some observables may or must have a discrete spectrum in quantum mechanics. \(E=N\hbar \omega\) for a group of photons of frequency \(\omega\). \(E=-E_0/n^2\) for the bound state of the hydrogen atom, and so on. This discreteness was a new aspect, which is why it gave "quantum mechanics" its name.

But other quantities may still have a continuous spectrum – or a mixed spectrum with a continuous and a discrete portion (the hydrogen atom's energy is continuous – all positive eigenvalues are OK – in situations when the atom is ionized). Even for continuous spectra (e.g. those of \(x\) and \(p\) in infinite space), quantum mechanics works and has implications which are generally new.
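To make the discrete-vs-continuous distinction tangible, here is a minimal numerical sketch of my own (the well depth, width, and grid size are arbitrary choices, not anything from Hossenfelder's text): diagonalizing a 1D finite-well Hamiltonian on a grid yields a few isolated negative (bound-state) eigenvalues, while the positive part of the spectrum approaches a continuum as the box grows – a mixed spectrum, just like the hydrogen atom's.

```python
import numpy as np

# Minimal sketch (hbar = m = 1; depth and width chosen arbitrarily):
# diagonalize H = p^2/2 + V(x) for a finite square well on a grid.
N, L = 1000, 100.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < 1.0, -5.0, 0.0)   # the finite well

# Kinetic term via the standard three-point finite difference
H = (np.diag(1.0/dx**2 + V)
     + np.diag(-0.5/dx**2 * np.ones(N-1), 1)
     + np.diag(-0.5/dx**2 * np.ones(N-1), -1))
E = np.linalg.eigvalsh(H)

print("Discrete bound states:", E[E < 0])    # a few isolated levels
print("Lowest E>0 levels:", E[E >= 0][:5])   # spacing -> 0 as L grows
```

The negative eigenvalues stay put as you enlarge the box, while the positive ones crowd together toward the continuum.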

It's excellent that Hossenfelder now acknowledges that for the same reason, the task of understanding "quantum gravity" doesn't mean that observables have to have discrete spectra. Kashyap asked Hossenfelder:
Then, what is the motivation for trying to discretize space-time? Does it help in any other situation?
Sorry but the only motivation to discretize spacetime is the stupidity of the people who do so. Some physically unrealistic models of quantum gravity may display discretized geometry (noncommutative geometry is basically equivalent to the space divided into cells as a phase space; quantum foam has quantized areas, and so on) but the 3+1 spacetime dimensions we know have to admit a fundamental description that makes them continuous, otherwise Lorentz invariance would be broken. In the following comment, Hossenfelder replied to Kashyap:
Yes, exactly. Discretization is a good way to tame infinities. As long as you sum up a finite amount of finite things, you will get a finite answer.
Sorry but this is naive bullšit (which I generously won't count toward the grading of Hossenfelder's "main point"). Discretization of spacetime doesn't help one at all with the actual problem manifesting itself as the infinities, namely with the non-renormalizability. It's just a "cutoff" but all the infinities just redress themselves as the cutoff-dependence of all the terms, and it's the infinite collection of unknown values of these constants that is the actual problem. See e.g. the 2005 paper by Nicolai, Peeters, and Zamaklar (which is by the way one of the most cited papers about loop quantum gravity).

3. Entanglement not the same as superposition
Evaluation: ambiguous, point is foggy.

There are particular claims she wrote that are just wrong, e.g. the statement that "superpositions are not fundamental". I am sorry but the existence or reality or admissibility of superposition states is absolutely fundamental in quantum mechanics – it's the universal "superposition postulate" that says that for any allowed states \(\ket\psi\) and \(\ket\chi\), the states \(a\ket\psi + b\ket\chi\) are also allowed for \(a,b\in \CC\). In other words, the fundamental space of pure states is a linear complex Hilbert space. And a key axiom of linear spaces is that with every pair of vectors, they also contain the sum – the superposition. So this axiom is undoubtedly fundamental in quantum mechanics.

The linear character of the space of states – and of the operators – is a "simplifying rule" repeatedly emphasized e.g. in Dirac's textbook – which guarantees that quantum mechanical theories are at least as constrained as their classical limits.

Entanglement is unambiguous, she writes. It's true in some sense – if we decide about a way to divide the system into subsystems, a choice that the notion of entanglement depends upon. And she's right that entanglement may be viewed "just as a correlation". On the other hand, the claim in the title that "entanglement isn't superposition" is murky and, from some viewpoint, totally wrong. An entangled state is "also" exactly the same thing as a superposition of two or more tensor products that can't be written as a single tensor product of states from the two subsystems. The entangled state is the same thing as\[
\ket\Psi = \sum_i c_i \ket{\alpha_i} \otimes \ket{\beta_i}
\] where the letters \(\alpha,\beta\) are reserved for the two subsystems, respectively, and the tensor product sign \(\otimes\) may be omitted. If at least two of the coefficients \(c_i\) have to be nonzero to write \(\ket\Psi\) in this form, then the subsystems \(\alpha,\beta\) are entangled. The "entangled character of the state" is exactly the same thing as "the need to use a superposition to describe" the state. I am not sure whether she realizes that but her text seems to be "softly in conflict" with this possible definition of entanglement.
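A minimal sketch of this definition (my own illustration, not from her text): reshape a two-subsystem state vector into a matrix and take its singular values; the number of nonzero singular values is the minimal number of terms \(c_i\) needed (the Schmidt rank), so the state is entangled exactly when that number exceeds one.

```python
import numpy as np

def schmidt_rank(psi, dim_a, dim_b, tol=1e-12):
    """Number of nonzero Schmidt coefficients c_i of a pure state psi
    on a dim_a x dim_b system; rank > 1 means alpha,beta are entangled."""
    c = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    return int(np.sum(c > tol))

product = np.kron([1, 0], [1, 0]).astype(complex)          # |0>|0>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

print(schmidt_rank(product, 2, 2))  # 1 -> a single tensor product suffices
print(schmidt_rank(bell, 2, 2))     # 2 -> a superposition is needed: entangled
```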

4. There is no spooky action at a distance
Evaluation: Exactly.

As Steve Hsu and his friend (not to mention Bill Nye and others) should finally notice, the correlations seen in the "entanglement" experiment are not due to any "action at a distance" that would take place at the moment of the measurement(s). It's a correlation implied by the laws of quantum mechanics applied to the initial state that had already existed (=was the most precise way for an observer to describe his state of knowledge) since the moment when the two subsystems (e.g. two EPR photons from a decaying positronium) were in contact. If there were an "action at a distance", it would be spooky, indeed, because it would really conflict with Einstein's relativity, but no such action exists in Nature and, if one embraces the framework of quantum mechanics, no such action at a distance is needed.

An action at a distance would be needed in a classical model that would try to mimic the predictions of quantum mechanics. But such a model would need many other things and no viable classical model of this kind really exists – and it's just fundamentally wrong to assume that to understand physics, one should search for a classical model. The fact that a classical model would have to send signals faster than light really means that all such models are basically in conflict with special relativity, which is a huge problem for this class of models given the flawless evidence in favor of relativity.
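One can check the absence of any usable action at a distance directly in the formalism. A minimal sketch of my own (the measurement angles are arbitrary): for a singlet pair, the probability of Alice's outcome is computed by summing over Bob's outcomes, and it comes out 50/50 no matter which angle Bob chooses – the correlation is there, but nothing is transmitted.

```python
import numpy as np

def proj(theta, outcome):
    """Projector onto the +/- eigenstate of the spin measured along
    angle theta in the x-z plane; outcome is +1 or -1."""
    v = np.array([np.cos(theta/2), np.sin(theta/2)]) if outcome > 0 \
        else np.array([-np.sin(theta/2), np.cos(theta/2)])
    return np.outer(v, v)

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
rho = np.outer(singlet, singlet)

a = 0.3                                  # Alice's setting (arbitrary)
for b in [0.0, 0.7, 2.1]:                # Bob's setting varies
    p_alice_up = sum(np.trace(np.kron(proj(a, +1), proj(b, o)) @ rho).real
                     for o in (+1, -1))
    print(f"b={b}: P(Alice=+1) = {p_alice_up:.3f}")   # always 0.500
```

Bob's projectors sum to the identity, so his choice of angle drops out of Alice's marginal – that is the no-signaling property built into the quantum rules.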

5. It’s an active research area
Evaluation: It depends.

There is surely some interesting, serious, exciting contemporary research that is close to the "foundations of quantum mechanics", like the recent "entanglement minirevolution in quantum gravity" or some serious work in quantum computation. At the same time, all the basic conceptual questions about the general rules underlying quantum mechanics have been understood since the 1920s and nothing whatsoever has changed about them.

6. Einstein didn’t deny it
Evaluation: Partially correct, partially wrong.

This point is about history, and Hossenfelder's account is only partially correct. The reality is that at the beginning, Einstein did deny quantum mechanics in the strongest possible sense. It's referred to as the first stage of the post-revolution Einstein-Bohr debates. Einstein believed that the uncertainty principle was sharply and demonstrably wrong and he looked for experiments, gedanken and real ones, that would demonstrate it. These proposals were shot down by Bohr.

After a long sequence of failures, Einstein softened his rhetoric and started to say that quantum mechanics described all these experiments correctly but it wasn't the deepest or most complete possible theory to do so. That was enough for him to lose almost all contact with the best active theoretical physics research. Because he was interested primarily in the fundamental laws and he believed that the quantum laws weren't really fundamental (although he made no progress in constructing a hypothetical alternative theory that could also explain the phenomena normally explained by quantum mechanics), he was simply not interested in the research of these quantum laws.

Whether Einstein was a "denier" in this later stage depends on "what he was supposed to deny". He didn't deny that quantum mechanics was basically right to describe all these experiments. But he did deny that it was a more accurate and fundamental framework for physics than the classical or "realist" framework. Given the fact that at least today, we can more or less rigorously demonstrate that this opinion of Einstein's was flawed, we should probably say that he was a denier of an important aspect of the quantum mechanical framework underlying physics.

7. It’s all about uncertainty
Evaluation: Exactly.

The refusal of \(x,p\) or other pairs (almost all pairs) of observables to commute with each other, \(xp-px\neq 0\), is really the source of all the differences between the quantum mechanical framework and its classical predecessor. As Feynman clearly stated at the end of Chapter III/20 of his lectures, after he emphasized that the average values don't describe what's really going on:
Quantum mechanics has the essential difference that \(px\) is not equal to \(xp\). They differ by a little bit—by the small number \(i\hbar\). But the whole wondrous complications of interference, waves, and all, result from the little fact that \(xp-px\) is not quite zero.
The nonzero commutators are really the essence. The whole novelty and art of quantum mechanics lies in one's ability to make sense out of the formalism where observables refuse to commute. Once you know how to do it, you may derive all the interference, discrete spectra, entanglement, tunneling, and other things. And, as Hossenfelder correctly says, the uncertainty principle is fundamental – it's not something one could correctly dismiss as a technical flaw of the measurement apparatuses. It's an unavoidable truth independent of all the apparatuses. Just as special relativity says that two observers in relative motion usually can't agree on the simultaneity of two events (it's not just due to their sloppy measurement of time), quantum mechanics says that pairs of complementary observables can't have well-defined values at the same moment.
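You may see the "little number \(i\hbar\)" concretely in a minimal numerical sketch of my own (grid parameters arbitrary; \(\hbar=1\)): build finite matrices for \(x\) and \(p\) and check that \(xp-px\) acts as \(i\hbar\) on smooth states.

```python
import numpy as np

# Minimal sketch (hbar = 1): x is diagonal on a grid, p = -i d/dx via
# central finite differences; their commutator acts as i*hbar on smooth states.
N, L = 2000, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

X = np.diag(x)
P = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2j * dx)

C = X @ P - P @ X            # the commutator [x, p]
psi = np.exp(-x**2)          # a smooth, localized test state
print(np.allclose(C @ psi, 1j * psi, atol=1e-3))   # True: [x,p] psi = i*psi
```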

8. Quantum effects are not necessarily small...
Evaluation: Exactly.

In general, quantum mechanics makes predictions that are fundamentally different from the predictions of classical physics. For big systems, we may like to say that the quantum effects are "small" because \(\hbar\) is much smaller than the angular momentum of objects we see with our naked eyes or than the action calculated for the evolution of macroscopic systems around us. But the qualitative or conceptual difference of quantum mechanics is always there. And that's why quantum effects may be manifested at macroscopic distances. The entanglement surely continues to exist even at huge separations.

And the ex-dean of my Alma Mater, Prof Sedlák, loved Hossenfelder's example as an explanation of why he did physics of low temperatures: quantum effects manifest themselves in macroscopic pieces of material. The other thing he told me immediately afterwards was that my classmate Ms M.Z. had nice legs. But you gave her a C, I responded. But that wasn't for the legs, he defended the grade. ;-)

It's all nice: you may derive a classical field by "reinterpreting" the Cooper pairs' shared wave function as one. But there's also something you should be aware of: If you have many Cooper pairs in the same state, in a Bose-Einstein condensate, their "shared" wave function really becomes something that is approximately a classical field. If you have many (\(N\)) of these bosons in the same state, the fraction of them that has a certain property becomes rather accurately predictable – the relative error goes down like \(1/\sqrt{N}\) by the basic rules of statistics. In the \(N\to \infty\) limit, all these numbers become basically deterministic classical degrees of freedom without uncertainties and may be derived from new classical fields.
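A minimal simulation of that scaling (my own sketch; the probability and ensemble sizes are arbitrary): measure the same yes/no property on \(N\) bosons prepared in the same state and watch the relative fluctuation of the observed fraction fall like \(1/\sqrt{N}\).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3          # probability of the yes/no property in the shared state
trials = 5000    # repetitions used to estimate the fluctuation

for N in [100, 10_000, 1_000_000]:
    fractions = rng.binomial(N, p, size=trials) / N
    rel_err = fractions.std() / p
    expected = np.sqrt((1 - p) / (p * N))   # = sqrt(p(1-p)/N)/p ~ 1/sqrt(N)
    print(f"N={N:>9}: measured {rel_err:.5f}, expected {expected:.5f}")
```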

So it's also correct to say that a macroscopic piece of a superconductor is described by another classical field theory – but it's a classical field theory whose structure is very similar to the structure of wave functions in quantum mechanics and that may be almost directly "derived" from these wave functions. So the field theory describing a piece of a superconductor is a different classical theory than the one you could have started with. This is a "lab" or "mundane" example of the fact that a quantum mechanical theory may have many different classical limits.

9. ...but they dominate the small scales
Evaluation: Exactly.

At short enough distances, the importance of quantum mechanical effects is basically unavoidable. The deeper into matter you try to penetrate, and the closer your interest gets to the fundamental laws, the more essential it is for you to learn quantum mechanics and take it very seriously as the "fundamental kind of a description".

10. Schrödinger’s cat is dead. Or alive. But not both.
Evaluation: Exactly.

The wave functions and their superpositions are often presented as a "clearly bizarre" prediction of quantum mechanics, one really "incompatible with our experience": that when we check whether a cat is dead or alive by an observation, we will get some obscure mixture. We don't. When we look at a cat, it is almost always either clearly dead or clearly alive. And quantum mechanics predicts exactly that. It doesn't predict any "foggy, mixed, undetermined" results of the observations. If quantum mechanics were predicting something totally different than what we observe, it would be a falsified theory and physicists would abandon it. But quantum mechanics predicts what's observed in all experiments, which is why physicists haven't abandoned it.

The wave function \(0.6\ket{\rm alive}+0.8i\ket{\rm dead}\) describes a "dead and alive" superposition and you should notice that the terms are connected by the operator "plus". It means that the expression is a sum and the formalism of quantum mechanics says that it's interpreted analogously to the logical sum. And, make no mistake about it, the logical sum refers to the operator "OR". So the cat is either alive "OR" dead, with the given probability amplitudes. If you needed the dead and alive cat to exist at the same time, you would have to use the logical operator "AND" which is also called the logical product. Note that the probability of two independent events "A AND B" is \(P(AB)=P(A)P(B)\), also a product. In terms of the wave functions, you would need some product of wave functions, namely the tensor product. So the state\[
\ket{\rm dead} \otimes \ket{\rm alive}
\] describes a state of two cats (a non-entangled state because one tensor product is enough and there are no superpositions). One of these cats is dead and the other one is alive. That's the state in which the dead and alive cat(s) exist simultaneously. But in the superposition states, they just don't, because the addition of wave functions means "OR", not "AND" – the addition of wave functions is sort of similar to the "addition of the square roots of probabilities", and the addition of probabilities clearly refers to "OR", too.

(Whenever the mixed bilinear terms in the probability amplitudes – those responsible for the quantum interference – drop out, the additive treatment of the probability amplitudes becomes equivalent to the additive treatment of the probabilities. Again, the addition of probabilities or amplitudes means "OR", not "AND". Also, I must emphasize that my usage of the words "OR" and "AND" doesn't mean that you should restore your belief in classical physics. It's still true that the properties of objects can't be described by "sharply true" or "sharply false" statements before you make the relevant measurement. When I say that "the superposition refers to OR", it means that the wave function is nothing else than the most complete package of information usable to predict the following observations, and the predictions resulting from the superposition say that either one OR another outcome takes place – where "one" and "another" would refer to the interpretation of the two individual terms in the superposition.)
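A minimal sketch of how these amplitudes turn into "OR" outcomes (my own illustration, reusing the amplitudes above): squaring the absolute values of the amplitudes gives the probabilities, and every simulated observation returns a single definite answer.

```python
import numpy as np

rng = np.random.default_rng(42)

amps = {"alive": 0.6, "dead": 0.8j}              # the superposition above
probs = {k: abs(c)**2 for k, c in amps.items()}  # Born rule: |c|^2
print(probs)                                     # {'alive': 0.36, 'dead': 0.64}

# Each observation yields exactly one outcome -- alive OR dead, never both.
outcomes = rng.choice(list(probs), size=10, p=list(probs.values()))
print(outcomes)
```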

When the observable is observed, e.g. when you look whether the cat is alive or dead, you unavoidably change the state of the physical system (the cat). The wave function collapses into an eigenstate of the observable you have just measured. This collapse isn't a change that should be explained by some additional "mechanism". It directly follows from the quantum mechanical postulate that \(c_i\) are the probability amplitudes and \(|c_i|^2\) are probabilities. Whenever an observer learns new information, his subjective probabilities abruptly change. In classical physics, the formula dictating this change is the Bayes formula. We don't ask what's "inside" this formula because it clearly follows from basic logic and probability calculus. The collapse of the wave function is exactly analogous to Bayes' formula but it's written using the objects allowed in quantum mechanics and in agreement with the new features of quantum mechanics that may be ultimately derived from the nonzero commutators, as we said (and the implications include all the interference and other effects).
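To make the analogy concrete, here is a minimal sketch of my own (the hypotheses, likelihoods, and basis are arbitrary choices): the classical Bayesian update and the quantum collapse side by side – both condition the available information on the observed outcome and renormalize.

```python
import numpy as np

# Classical: Bayes' rule, P(H|E) = P(E|H) P(H) / P(E).
prior = np.array([0.5, 0.5])          # two hypotheses
likelihood = np.array([0.9, 0.2])     # P(observed evidence | hypothesis)
posterior = prior * likelihood
posterior /= posterior.sum()          # renormalize
print(posterior)                      # [0.818..., 0.181...]

# Quantum: collapse = project onto the measured eigenstate's subspace,
# then renormalize -- the same "condition and renormalize" step.
psi = np.array([0.6, 0.8j])                   # amplitudes in some basis
P0 = np.array([[1, 0], [0, 0]])               # projector: outcome "0" observed
collapsed = P0 @ psi
collapsed /= np.linalg.norm(collapsed)        # renormalize
print(collapsed)                              # [1.+0.j, 0.+0.j]
```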


Hossenfelder got 7 "totally correct" ratings, 3 "ambiguous" ones, and no "completely wrong" ones. This is better than what 90% of the people calling themselves "researchers in the quantum foundations" could ever achieve. ;-)

