## Saturday, April 09, 2016 ... /////

### Mark Alford vs locality in quantum field theory

It's a Steve Hsu day today. I've mentioned his debates about genetic modification with P.Z. Myers. But let me mention something where Steve is on the wrong side. Five days ago, he promoted a completely wrong "pedagogic" paper by his friend Mark Alford,

Ghostly action at a distance: a non-technical explanation of the Bell inequality (arXiv, June 2015, zero citations)
The paper claims that locality is violated in the EPR experiments. (Alford has emitted lots of extra fog about hidden variables etc. in the discussion thread at Hsu's blog but I don't want to flood this blog post by all the mist. I want to focus on the locality.)

Alford uses the term "strong locality" or "local causality" – and you suspect that it could be one of the ill-advised conflations of "locality" and "realism". But if you look at his paper, you will see that the "strong locality" or "local causality" are defined to be nothing else than the ordinary locality or relativistic causality – it's enough to know the data in the past light cone of the region R to calculate probabilities in the region R (see the 2nd paragraph of the paper).

He says that this principle fails in quantum field theory, but already in the first five or so lectures of a quantum field theory course, students learn that (and why) this principle is obeyed in any relativistic quantum field theory. What's going on? Why is this guy writing such completely wrong things about locality?

Here, I won't pretend that I am a lonely warrior for the truth in an ocean of stupidity and dishonesty, something you know from hundreds of other situations. I am a warrior but there are many others. Even Backreaction emphasized that no superluminal influence occurs in the EPR experiments (principle #4) and she wrote similar things when she was correcting Bill Nye's nonsense.

Cristi Stoica, a guy calling himself a quantum foundations person, agreed. There can't be any faster-than-light signalling.

Of course, there have been much more important people who have stressed this important point – so often confused in the debates about the "interpretations of quantum mechanics". (Much of the confusion was spread by John Bell himself. The theorem he proved was sold as a mixed package: a rudimentary result about a class of wrong theories of physics, with lots of incorrect would-be philosophical claims added to the mix.) In his Caltech lectures, Feynman exploited the example of a decaying positronium to show why quantum mechanics predicts the high correlations in many measurements of polarizations – and doesn't need to violate the relativistic restrictions on the speed of influences.

In the 1990s, Murray Gell-Mann wrote The Quark and the Jaguar. Chapter 12 was dedicated to "Quantum Mechanics and Flapdoodle". Gell-Mann argued that the #1 wrong idea underlying all the nonsense in low-brow popular books and journal articles (they must have been exploding already in the early 1990s if not earlier) was the idea that the EPR-style experiments manage to send some information faster than light, in a voodoo style. It's completely incorrect and, as Gell-Mann argued, this misinformation was the reason why people began to advocate telekinesis and other paranormal phenomena as consequences of quantum mechanics.

Well, Alford's paper is nothing else than the 3,854th copy of this flawed popular idea that some "faster than light influences" are the right explanation of the EPR correlations. And to make sure that he's a textbook example of the paranormal apologists criticized by Gell-Mann, Alford invented one new phrase. The "spooky action at a distance" was no longer spooky enough so in his title, Alford called it a "ghostly action". To make things even worse, the word "telepathy" or "telepathic" appears 8 times in the paper.

Well, as you know, I am one of the ghostbusters. Hahaha. Sorry, there are no actions at a distance and there are no ghosts. His paper is completely wrong.

The first paragraph of his paper promotes various other wrong papers and misinterpretations of Bell's theorem. His own (not so) original stream of incorrect statements begins with the second paragraph of the paper:
Strong locality, also known as “local causality”, states that the probability of an event depends only on things in the event’s past light cone. Once those have been taken into account the event’s probability is not affected by additional information about things that happened outside its past light cone.
He defined the ordinary locality we discuss in quantum field theory rather carefully. I may use this definition basically without any disclaimers (I would probably replace "on things" by "on measurements"). The only problem is that Alford states that this principle fails in our cutting-edge quantum mechanical theories. The reality is that it is perfectly obeyed! It has to be because it follows from the special theory of relativity.

Off-topic: David Cameron has made some profit from his daddy's Panama fund. But it was only enough to buy a Škoda Octavia, Škoda's best-selling model, as a furious Tory MP calculated. If Cameron is forced to show his tax return, the MP will introduce a bill to ban curtains at homes so that everyone may verify what kind of coitus you prefer. The Nasty Russian Girl was added to the package. She's cute, explains the new features of the 2016 Octavia as well as the history of the 1959-1971 Octavia, and Cameron should be forgiven this profit, I believe, unless some self-evident serious crime is uncovered. BTW exactly 75 years ago, on April 9th, 1941, trolleybuses were introduced to Pilsen (trams were installed by the Czech Edison, Mr Křižík, in 1899, 8 years after Prague; the lines are almost the same as today and after 1914, most of the drivers were female; buses appeared in 1922). The war was terrible and the occupation was shockingly humiliating but the relaxed feeling that "everything works" is an impression from the video that one can't avoid, can one?

It's remarkable that a man like Mark Alford who has over 8,000 citations from 112 papers, including 4 papers above 500 citations and 30 other papers above 50 citations, is capable of writing something so fundamentally wrong, something that even ordinary students know to be wrong after several lectures of quantum field theory. But it's true.

If you look at Alford's 34 most cited papers, you will see one troubling sign: with one exception (and it's just a review), all of them have at least two, and usually more, authors – in many important cases, one of them is Frank Wilczek. You may become suspicious that "something would go wrong" if you allowed him to do research as an "independently thinking entity". And the paper arguing against locality would be an example showing that your suspicion was justified.

By now, I believe that most people are aware of the basic explanation why faster-than-light influences aren't allowed in relativity. Suppose the spacetime point A influenced the point B where $\Delta t = t_B-t_A\gt 0$ and $\Delta x = x_B-x_A$ are such that the connecting line has $|\Delta x |/ \Delta t = v \gt c$; then the points A and B are spacelike-separated. For two spacelike-separated points, you may switch to a different inertial system – perform a Lorentz transformation – so that in the new coordinates, you will have $t'_B-t'_A\lt 0$. So for this observer, the effect precedes the cause. Such acausal influences (backwards in time) clearly lead to contradictions.
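If you want to see this sign flip with your own eyes, here is a minimal numerical sketch (units with $c=1$; the separations and the boost velocity below are made-up illustrative numbers):

```python
# Two events A, B with spacelike separation: |dx| > |dt| (units with c = 1).
dt, dx = 1.0, 2.0   # t_B - t_A and x_B - x_A in the original frame

def boosted_dt(dt, dx, v):
    """Time separation t'_B - t'_A seen by an observer moving at velocity v."""
    gamma = 1.0 / (1.0 - v**2) ** 0.5
    return gamma * (dt - v * dx)

# In the original frame, B happens after A...
assert boosted_dt(dt, dx, 0.0) > 0
# ...but for a fast enough observer (v > dt/dx), B happens BEFORE A:
assert boosted_dt(dt, dx, 0.8) < 0   # the order flipped
print(boosted_dt(dt, dx, 0.8))       # negative
```

For a timelike pair ($|\Delta t| \gt |\Delta x|$), no boost with $|v|\lt 1$ can flip the sign, which is why influences inside the light cone cause no paradox.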

So in classical relativistic mechanics, objects may move at most by the speed of light, $v \leq c$. All massive objects must actually have $v\lt c$. In classical field theory, one may show that the group velocity of the waves – or the speed by which the wave front moves – never exceeds the speed of light in the vacuum $c$, either. In classical physics, the meaning of the restriction on the speed is straightforward.
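The statement about the wave front may be checked with a toy simulation. Here is a sketch of the 1D wave equation with a leapfrog scheme at Courant number one (the grid size and number of steps are arbitrary choices); at this Courant number the scheme propagates signals exactly one cell per step, so a localized kick never leaks outside its light cone:

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with c = 1, leapfrog with dx = dt,
# so a disturbance moves at most one cell per time step.
N, steps = 201, 50
u_prev = np.zeros(N)
u = np.zeros(N)
u[N // 2] = 1.0          # initial localized "kick" at the center
u_prev[:] = u            # zero initial velocity

for _ in range(steps):
    u_next = np.zeros(N)
    # CFL = 1 leapfrog update: u^{n+1}_i = u^n_{i+1} + u^n_{i-1} - u^{n-1}_i
    u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]
    u_prev, u = u, u_next

# The field is identically zero outside the "light cone" |i - N//2| > steps.
center = N // 2
outside = np.concatenate([u[: center - steps], u[center + steps + 1 :]])
print(np.max(np.abs(outside)))   # 0.0, exactly
```

Inside the cone the solution is messy, but outside it, nothing has happened at all – the discrete analogue of the statement that the front never exceeds $c$.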

Well, it's straightforward in quantum mechanics, too, but the formalism describing locality in quantum field theory requires the same "advanced physics tools" that quantum field theory requires in general. As students of quantum field theory courses hear during the first two lectures, if you want to reconcile quantum mechanics with relativity, you're forced to switch to quantum fields, or second-quantize, or do something that is basically equivalent. Quantum field theory is the "minimal" kind of a theory that is compatible both with the principles of special relativity and the postulates of quantum mechanics (string theory is the only known consistent extension or deformation – and it's a matter of advanced subtleties whether string theory is a "totally different" framework than quantum field theory).

So quantum field theory is the simplest framework respecting the laws of quantum mechanics in which we may discuss what it means for physics to be local. Related questions are covered sometime in the 4th or 5th lecture of a quantum field theory course. Instead of letting you pick your favorite quantum field theory textbook, let's look at Peskin-Schroeder. Not because I am confident that it's still the best textbook on the market, but because it became the new standard – a modern-era replacement of Bjorken-Drell – and it's been around for decades, so a whole generation of particle physicists who are around was milk-fed by Peskin-Schroeder. In this sense, Peskin-Schroeder may still be considered the "least controversial" or "most mainstream" textbook of QFT (no one has ever screamed that they had covered causality incorrectly) and that's what is desirable in a text trying to solve or reduce controversies.

Well, the relativistic causality – which is the same thing we called locality here – is already discussed on pages 27-31, using the first example of the free Klein-Gordon (scalar) field and its propagator. You will hopefully agree that it's a rather early topic in the textbook because it has something like 850 pages.

Well, if you want to calculate the field $\Phi(x)$ at some spacetime point $x$ out of the values $\Phi(y)$ of the field in the past, you may use the Green's function $G(x,y)$ or the propagator as the "kernel". The Green's function only depends on $x-y$ due to the translation invariance. So this function $G(x-y)$ may be Fourier-transformed and this 4D Fourier transform is something like the usual momentum-space propagator
$$ G(p) = \frac{1}{p^2 - m^2} $$
where $p$ is the four-momentum dual to the coordinate $x-y$. When you calculate the Fourier transform of $G(p)$, you will see that the integral has two singularities as a function of e.g. $p_0$ – because the denominator becomes zero twice. You must deal with these singularities in some way. The most natural choices are that your left-to-right integration contour along the real axis should either go "above" them or "below" them in the complex plane.

If you go "above" both of them, you obtain the "retarded" propagator
$$ G_\text{ret}(x,y) = i \langle 0| \left[ \Phi(x), \Phi(y) \right] |0\rangle \Theta(x^0 - y^0). $$
This is cool because it vanishes for spacelike-separated $x,y$ but it also vanishes if $x$ precedes $y$. Similarly, if you go "beneath" both poles, you obtain the advanced propagator where the time ordering of $x,y$ has to be the opposite one. The vanishing of the retarded or advanced propagator outside the appropriate light cone is the reason why the free field $\Phi(x)$ may be fully calculated from its values in the past light cone of $x$.

Note that the quantum fields obey the "same" Heisenberg equations of motion as the classical equations (it is enough to add hats if you need them) so this locality is just copied from the locality of classical field theory. Interactions don't really break the locality, either. At least when the interaction terms are non-derivative ones, they only add "point-wise" changes of the quantum fields but they don't "enhance" the abilities of the quantum fields to propagate quickly.
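The Heisenberg-picture statement may be checked mode by mode, because a free field is just a collection of harmonic oscillators. A single-mode sketch (truncated oscillator matrices with $\hbar=m=\omega=1$; the truncation size and the time are arbitrary choices) verifying that $\langle 0|[q(t),q(0)]|0\rangle = -i\sin t$, a pure number, just like the c-number field commutator in the free theory:

```python
import numpy as np

# One mode of a free field = a harmonic oscillator (hbar = m = omega = 1).
# We use truncated N x N matrices; the [0, 0] element of the commutator
# is unaffected by the truncation.
N = 40
n = np.arange(N)
q = np.zeros((N, N))
q[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2.0)   # <n|q|n+1>
q = q + q.T                                           # q = (a + a^dag)/sqrt(2)

t = 0.7
phase = np.exp(1j * (n + 0.5) * t)                    # e^{iHt}, H = diag(n + 1/2)
q_t = (phase[:, None] * q) * np.conj(phase[None, :])  # q(t) = e^{iHt} q e^{-iHt}

comm = q_t @ q - q @ q_t
print(comm[0, 0])   # ~ -1j * sin(0.7)
```

The evolution used here is nothing else than the classical equation $\ddot q = -q$ with hats added, which is the point of the paragraph above.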

However, the right propagator to use in quantum field theory is the Feynman propagator where your contour goes "below" the left singularity but "above" the right one. This choice may be made unambiguous by writing the momentum-space propagator more carefully as
$$ G_F(p) = \frac{1}{p^2 - m^2+i\varepsilon} $$
and it's this choice that guarantees that the quantum field "converges" to "mostly the vacuum" at $t\to \pm\infty$, which is why the Feynman propagator gives more correct results than you could get with the advanced/retarded propagators (I could give you a better lecture on these things but this stuff isn't the focus here; what's more important is that it's rather standard stuff that everyone learns in the first semester of good quantum field theory courses).

Funnily enough, however, the position representation of the Feynman propagator does not vanish at spacelike separation:
\begin{align} G_F(x-y) & = -i \langle 0|T(\Phi(x) \Phi(y))|0 \rangle \\ & = -i \left \langle 0| \left [\Theta(x^0 - y^0) \Phi(x)\Phi(y) +\right.\right.\\ &+\left.\left. \Theta(y^0 - x^0) \Phi(y)\Phi(x) \right] |0 \right \rangle. \end{align}
The position-representation version of the Feynman propagator has the "time ordering" in it (this time ordering, and therefore the superiority of the Feynman propagator for the calculations, can also be derived through Dyson's derivation where the evolution operator $S$ is the time-ordered exponential of the integrated Hamiltonian $iH$) but it cannot be reduced to commutators. So it's a provocative result. As Peskin-Schroeder and other teachers of quantum field theory immediately point out, you could be worried that this nonvanishing value of $G_F(x-y)$ for spacelike-separated but nearby points $x,y$ (the value of the propagator drops quickly if the spacelike interval $x-y$ becomes long) could mean some violation of locality.

However, as Peskin-Schroeder and everyone else quickly show, this worry isn't justified. The nonzero value of the Feynman propagator for spacelike separations really means that the vacuum state is "entangled". There are quantum fluctuations of $\Phi(x)$ everywhere but if the value of $\Phi(x)$ at some point $x$ is randomly "elevated to more positive values", it's very likely that $\Phi(y)$ at nearby points $y$ will also be "positively elevated" and close to $\Phi(x)$. It's this correlation of the quantum fields at nearby points – which results from the continuity of the fields i.e. from the spatial derivative terms in the Hamiltonian – that is responsible for the nonzero value of the propagator.
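This correlation-from-gradient-terms story may be illustrated with a free field on a lattice. A numpy sketch (a 1D chain with unit spacing; the mass and the chain length are arbitrary choices) of the Euclidean two-point function $(m^2-\nabla^2)^{-1}$, which is nonzero and positive for nearby sites and decays with the separation:

```python
import numpy as np

# Free scalar on a 1D lattice: correlation matrix ~ (m^2 - Laplacian)^{-1}.
# The gradient (nearest-neighbor) terms are what correlate nearby sites.
N, m2 = 50, 0.5
K = (m2 + 2.0) * np.eye(N)
idx = np.arange(N - 1)
K[idx, idx + 1] = -1.0
K[idx + 1, idx] = -1.0       # discrete Laplacian couples neighbors

C = np.linalg.inv(K)         # two-point function <phi_i phi_j> (Euclidean)

i = N // 2
print(C[i, i + 1] > 0)               # True: neighbors are positively correlated
print(C[i, i + 1] > C[i, i + 10])    # True: the correlation decays with distance
```

If you drop the off-diagonal terms (no gradients), $C$ becomes diagonal and the correlation between distinct sites disappears – exactly the mechanism described in the paragraph above.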

In spite of that nonzero value, you can't use it to influence spacelike-separated points. There may be various ways to justify this assertion but most authors including Peskin-Schroeder use one important insight: Whether an action at point $x$ influences predictions for the point $y$ is always encoded in commutators, not generic products of fields, and the commutators are zero for spacelike-separated $x,y$.

The commutator can actually be written in terms of the propagation amplitudes (Wightman functions) $D(x-y) = \langle 0|\Phi(x)\Phi(y)|0\rangle$ as well:
$$ [\Phi(x),\Phi(y)] = D(x-y) - D(y-x). $$
The effect of events at $x$ on the measurements at $y$ is given by the commutators only. And the commutator is equivalent to the difference of the amplitude to propagate from $y$ to $x$ and the amplitude to propagate from $x$ to $y$. These two terms may be understood as the contribution from "particles" and "antiparticles". Each of those may in some sense "propagate faster than light" but all physically observable quantities always receive contributions both from "particles" and "antiparticles" and once you subtract the two amplitudes, the result is already vanishing at spacelike separations. The antiparticles' and particles' "superluminal" effects exactly cancel.

Now, this point is often left unexplained. Why is the effect of the measurement at $x$ on the measurements at $y$ zero if the commutators of the fields at those points are zero? The advanced and retarded propagators above are enough to argue that the "normal unitary" evolution of the physical system respects locality. But there's also the other "part" of the evolution of the wave function, the reduction associated with the measurement.

In fact, all "willful interventions" that can be done at/around point $x$ may be considered to be measurements. If you break something, you also look whether you broke it and have some fun – that's why you did it. If you don't look and don't have fun, you're just a dead part of the physical system whose harmlessness for locality was already proven.

OK, if you're an observer who observes some quantity $L$ around the point $x$, you find an eigenvalue $\lambda$ of $L$ and the wave function immediately collapses to
$$ \ket\psi \to P_\lambda \ket\psi. $$
Similarly, if you use a density matrix, it changes as $\rho\to P_\lambda \rho P_\lambda$. After the collapse, you may renormalize $\ket\psi$ or $\rho$ for its norm or trace to be one again. But now, the projection operator $P_\lambda$ may be considered to be a function of $L$, and in quantum field theory, $L$ is a functional of quantum fields near the point $x$. So $P_\lambda$ is a functional of the fields $\Phi(x\pm \epsilon)$, too.
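The collapse rule is a few lines of numpy. A sketch with a made-up observable $L$ and a made-up pure state (any Hermitian $L$ and valid $\rho$ would do):

```python
import numpy as np

# A made-up 2x2 observable L and a pure density matrix rho.
L = np.array([[1.0, 0.5], [0.5, -1.0]])
psi = np.array([0.6, 0.8])
rho = np.outer(psi, psi)

# Spectral decomposition of L gives the projectors P_lambda.
evals, evecs = np.linalg.eigh(L)
P = [np.outer(evecs[:, k], evecs[:, k]) for k in range(2)]

# Probability of outcome lambda_k, and the collapsed, renormalized state:
k = 0
prob = np.trace(rho @ P[k]).real
rho_after = P[k] @ rho @ P[k] / prob

print(round(np.trace(rho_after).real, 12))          # 1.0: a valid state again
print(round(np.trace(rho_after @ P[k]).real, 12))   # 1.0: repeating the
                                                    # measurement gives the
                                                    # same outcome for sure
```

The second print is the standard consistency check of the projection postulate: once collapsed onto the $\lambda_k$ eigenspace, the state stays there.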

We want to know whether the predictions for the fields close to $\Phi(y)$ – and their functional $M$ – are affected by whether the collapse took place due to the measurement of $L$ near $x$. The probability that $M$ will have the eigenvalue $\mu$ is given by ${\rm Tr}(\rho P_\mu)$ where $P_\mu$ is the projection operator for the eigenvalue $M=\mu$.

What is the probability if a measurement of $L$ took place before? It's
$$ {\rm Tr} ( P_\lambda \rho P_\lambda P_\mu ). $$
This expression depends on both eigenvalues $\lambda,\mu$ and knows about all the correlations. But the experimenter who measures $M$ doesn't know the result of the measurement of $L$. We want to know whether the act of the measurement has affected him. So he must sum the probabilities over all possible outcomes of the measurement of $L$:
$$ \sum_\lambda {\rm Tr} ( P_\lambda \rho P_\lambda P_\mu ) = \dots $$
In general, there would be no way to truly simplify this sum. In particular, we can't use the fact that $\sum_\lambda P_\lambda = 1$ to simplify the sum of products $\sum_\lambda P_\lambda P_\mu P_\lambda$. However, because all the fields around $x$ and $y$ commute due to their spacelike separation, their functionals $P_\lambda$ and $P_\mu$ commute with one another, too. So we may exchange their order above:
$$ \dots = \sum_\lambda {\rm Tr} ( P_\lambda \rho P_\mu P_\lambda ) = \dots $$
But due to the cyclic property of the trace, the last $P_\lambda$ may be moved to the beginning of the trace where we get $(P_\lambda)^2$ and that's just $P_\lambda$ because $P_\lambda$ is a projection operator:
$$ \dots = \sum_\lambda {\rm Tr} ( P_\lambda \rho P_\mu ) = \dots $$
But now it's easy. We simply have the sum of $P_\lambda$ over all eigenvalues. But the sum of all the possible projection operators is simply $1$, so the result is
$$ \dots = {\rm Tr} (\rho P_\mu). $$
The probability that $M=\mu$ is the same as if the measurement of $L$ didn't take place. The vanishing commutators of the fields around the points where the observables $L,M$ live were needed to prove this result.
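The whole chain of identities may be verified numerically. A sketch on a two-qubit Hilbert space where $L$ acts on the first factor and $M$ on the second, so that the projectors commute (the state is a random, generally entangled, made-up density matrix):

```python
import numpy as np

# Two-qubit space: L lives on factor 1, M on factor 2, so the projectors
# P_lambda = p (x) 1 and P_mu = 1 (x) q commute with each other.
I2 = np.eye(2)
rng = np.random.default_rng(0)

# Random density matrix rho = A A^dag / Tr (positive, unit trace)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Projectors onto the computational basis of each tensor factor
P_lam = [np.kron(np.diag([1.0, 0.0]), I2), np.kron(np.diag([0.0, 1.0]), I2)]
P_mu = np.kron(I2, np.diag([1.0, 0.0]))

# Probability of M = mu WITH a prior, unread measurement of L ...
with_L = sum(np.trace(P @ rho @ P @ P_mu).real for P in P_lam)
# ... equals the probability WITHOUT any measurement of L:
without_L = np.trace(rho @ P_mu).real
print(abs(with_L - without_L) < 1e-12)   # True
```

If you replaced $P_\mu$ by a projector that does not commute with the $P_\lambda$ (e.g. one acting on the same qubit in a rotated basis), the two probabilities would generally differ – which is exactly why the vanishing commutators at spacelike separation are the key ingredient.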

There are probably more elegant proofs (one may also focus on the fact that the probabilities of the measurements $L=\lambda,M=\mu$ factorize into the product of probabilities – as expected for independent, uncorrelated properties, assuming that the measurement events of $L,M$ weren't affected by a common cause in the intersection of their past light cones – because the initial state is a tensor product in that case, and "everything" probability-like factorizes) but the conclusion is more important than the possibly messy ways that convince you that it's true: The vanishing of the commutators at spacelike separations is what guarantees that the measurements – and therefore all willful acts – done in one region don't influence the predicted probabilities in the other region.
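The factorization mentioned in the parenthesis is also a one-line check: for a product state $\rho_1\otimes\rho_2$, the joint probabilities of the two commuting measurements factorize (the density matrices below are made-up numbers, chosen to be positive with unit trace):

```python
import numpy as np

# Product state rho1 (x) rho2: joint probabilities of measurements on the
# two factors factorize, as for independent classical coin flips.
rho1 = np.array([[0.7, 0.2], [0.2, 0.3]])
rho2 = np.array([[0.4, 0.1], [0.1, 0.6]])
rho = np.kron(rho1, rho2)

p = np.diag([1.0, 0.0])                 # single-qubit projector
P_lam = np.kron(p, np.eye(2))           # L = lambda on factor 1
P_mu = np.kron(np.eye(2), p)            # M = mu on factor 2

joint = np.trace(rho @ P_lam @ P_mu)
product = np.trace(rho1 @ p) * np.trace(rho2 @ p)
print(abs(joint - product) < 1e-12)     # True
```

For an entangled initial state the joint probability no longer factorizes – that's the EPR correlation – but, as the trace computation above showed, it still cannot be used for signalling.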

This was the last thing needed to establish the fact that the "strong locality" or "local causality", as Alford calls it, holds in any healthy relativistic quantum field theory. Why did Alford skip his 4th or 5th lecture of the introductory quantum field theory course?

Strong locality or whatever Alford calls it holds because the fields (graded) commute at spacelike separations. And they do. One may establish this result in many different ways. For example, at equal times, the commutators are zero for spacelike separations due to the "canonical quantization" and the identification of the canonical momenta. The commutator remains zero for non-equal times (but spacelike separation) because the evolution is generated by the Hamiltonian which is an integral of the Hamiltonian density, so it only modifies $\Phi(x)$ by fields around $x$ that also commuted with $\Phi(y)$, and vice versa. The commutators remain zero if $x-y$ is spacelike.

You may see that his serious papers, e.g. this most cited paper co-authored by Alford, Rajagopal, and Wilczek in 1997, always contain some formalism of quantum field theory, some integrals that are familiar to students in quantum field theory courses. His paper about the "ghostly action" doesn't contain anything of the sort. It only contains rather childish pictures and claims that directly contradict the basic material that the students of the quantum field theory courses normally learn. It's really 12 pages of garbage that is as offensive as this third paragraph of the paper:
Given some reasonable-sounding assumptions about causation (see Sec. III), the violation of strong locality in EPR experiments implies that there are causal influences that travel faster than light. The main goal of this paper is to give an extremely simple non-technical explanation of how EPR experiments lead to this striking conclusion.
No, there is no superluminal action and no violation of "strong locality" in Nature.

One must conclude that the "ghostly action" stuff is really some hobby that is completely disconnected from the normal, professional life of Mark Alford. Like Superman, he seems to live two lives. It looks like when Alford was writing about the "ghostly action", he must have been completely unaware of the fact that he has co-authored many papers that actually use quantum field theory – and not just the stuff from the first five lectures. Has Mark Alford forgotten all of his QFT expertise? Did he ever know it? Or doesn't he realize that when he's writing papers about the "violations of strong locality" in Nature, he discusses questions that should properly be discussed by the same formalism – that of QFT – that was used in his serious papers? And that the conclusion is clearly the opposite one than the conclusion of his "ghostly" paper?

I always have so many questions about the people's irrational behavior but I almost never get clear answers. People's irrationality probably cannot be rationally explained.