Friday, January 20, 2017

QM is self-evidently free of causality paradoxes

Someone sent me a 2012 preprint by Aharonov and 3 co-authors that claims that one may prove some acausal influence – future decisions affect past outcomes – with the help of the problematic "weak measurement" concept.

This is such a self-evident piece of rubbish that I am amazed how any physics PhD may ever fail to see it.

In the v5 arXiv version of the paper, the paradox is described as a bullet-point experiment on page 12 of 15. In the morning, they measure some spins weakly; in the evening, they do so strongly; and some alleged agreement between the two types of measurements is said to prove that the "later randomly generated numbers" were already known in the morning.




Before I discuss why not only their main claim but pretty much every sentence in their story is wrong, I want to remind you how extremely simple, unambiguous, and self-evidently consistent the general rules of quantum mechanics are.




As the 17th-century "father of liberalism" John Locke already knew, all knowledge about the state of Nature comes from sensory reception. I just mentioned Locke, a darling of the U.S. founding fathers (who came later), to emphasize that the basic "quantum mechanical philosophy" was already appreciated by wise men more than 200 years earlier, so the excuse that the quantum mechanical way of thinking is "too new" just doesn't hold much water.

In quantum mechanics, all of one's knowledge about the physical system is encoded in the density matrix \(\rho\), which reduces to \(\rho=\ket\psi \bra\psi\) in the special case of pure states (the maximum knowledge allowed by quantum mechanics).
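To make the pure-vs-mixed distinction concrete, here is a minimal numpy sketch – my own illustration, not anything from the paper – contrasting the density matrix of a pure superposition with that of a 50/50 classical mixture of the same two basis states:

```python
import numpy as np

up = np.array([1.0, 0.0])            # |up> in the z-basis
down = np.array([0.0, 1.0])          # |down> in the z-basis
psi = (up + down) / np.sqrt(2)       # pure superposition (|up>+|down>)/sqrt(2)

rho_pure = np.outer(psi, psi.conj())            # rho = |psi><psi|
rho_mixed = 0.5 * (np.outer(up, up.conj())      # 50/50 classical mixture
                   + np.outer(down, down.conj()))

print(rho_pure)    # the 0.5 off-diagonal entries encode the relative phase
print(rho_mixed)   # diagonal only: strictly less knowledge than the pure case
print(np.trace(rho_pure), np.trace(rho_mixed))  # both traces equal one
```

Both matrices predict the same 50/50 outcome for a measurement along \(z\), but only the pure one remembers the phase information that decides the outcome of a measurement along \(x\).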

The observer has determined \(\rho\) from his previous observations – from previous sensory reception, if I use Locke's synonym. And he may use it to predict the probabilities of his subsequent observations. If he observes the status of a Yes-No question determined by the projection operator \(P=P^\dagger\), \(P^2=P\), the probability of getting Yes is \({\rm Tr}(P\rho) = {\rm Tr}(P\rho P)\) – which reduces to \(\bra \psi P \ket \psi\) for pure states. And once the answer Yes or No is known, the density matrix changes to\[

\rho \to P\rho P \quad {\rm or}\quad \rho \to (1-P)\rho (1-P)

\] in the case of Yes or No, respectively, or the pure state changes to the projection\[

\ket\psi \to P \ket\psi \quad {\rm or}\quad \ket\psi\to (1-P) \ket\psi

\] which may be used for subsequent predictions.
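If you want to see how mechanical these two rules really are, here is a hedged numpy sketch of them – the function names are my own, and the renormalization discussed below is already built in:

```python
import numpy as np

def born_probability(P, rho):
    """Probability of the Yes answer for projector P, i.e. Tr(P rho)."""
    return float(np.real(np.trace(P @ rho)))

def update(P, rho, answer_is_yes):
    """Project rho according to the observed answer and renormalize."""
    Q = P if answer_is_yes else np.eye(len(rho)) - P
    new_rho = Q @ rho @ Q
    return new_rho / np.trace(new_rho)   # the trace equals the probability of that answer

# Example: a spin-1/2 prepared along +x, asked "is the spin up along z?"
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
P_up = np.array([[1.0, 0.0], [0.0, 0.0]])      # projector onto |up> along z

print(born_probability(P_up, rho))             # 0.5
print(update(P_up, rho, answer_is_yes=True))   # collapses to |up><up|
```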

These projections play exactly the same role as Bayesian inference has always played in the probability calculus. You learn new data (evidence) so you must adjust your subjective beliefs (probabilities) about everything in Nature. In particular, you "erase" all the possibilities that have been ruled out etc.

The only new aspect of quantum mechanics is that all the probability distributions are encoded in the density matrix with off-diagonal, generically complex, entries (or in the corresponding state vectors). But this complex-matrix-generalized probability calculus still works perfectly.

Well, the projection should be followed by the "renormalization" of \(\rho\) or \(\ket\psi\) to keep their trace or norm (the total probability) at one, i.e. by\[

\rho \to \frac{\rho}{{\rm Tr} \,\rho}\quad {\rm or} \quad \ket\psi\to \frac{\ket\psi}{\sqrt{\langle\psi|\psi\rangle}}.

\] No division by zero may ever occur because the probability that "the denominator is zero" is equal to zero – the probability of the particular result is the denominator! ;-)

Now, all measurements – everything we can ever learn about Nature – may be reduced to Yes/No observations. Is \(x\gt 0\)? Is \(x\gt 5\)? And so on. I could have discussed more general measurements that produce the eigenvalue of a general Hermitian operator \(L\) but I wanted to be really simple and Yes/No questions are sufficient as elementary building blocks for all measurements.
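To see why Yes/No questions are enough, recall the spectral decomposition \(L=\sum_i \lambda_i P_i\): measuring \(L\) is the same as asking the Yes/No questions "is the eigenvalue equal to \(\lambda_i\)?" for each \(i\). A small numpy check of this claim – the operator \(S_x\) is just my example:

```python
import numpy as np

Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])   # spin-x in units of hbar

eigvals, eigvecs = np.linalg.eigh(Sx)
projectors = [np.outer(eigvecs[:, i], eigvecs[:, i].conj()) for i in range(len(eigvals))]

# The Yes/No projectors rebuild the operator and each obeys P^2 = P = P^dagger
L_rebuilt = sum(l * P for l, P in zip(eigvals, projectors))
print(np.allclose(L_rebuilt, Sx))                       # True
print(all(np.allclose(P @ P, P) for P in projectors))   # True
```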

The projection operators may be written in terms of other observables. A full quantum mechanical theory has an algebra (a non-commuting algebra) of observables. Their time dependence is determined by the Heisenberg equations of motion; that evolution in time may be replaced by the time dependence of \(\rho\) or \(\ket\psi\) if you prefer the Schrödinger picture where the operators don't depend on time.
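Here is a tiny numerical check – with a Hamiltonian and an observable I made up for the purpose – that the two pictures yield identical predictions, which is the only thing that physically matters:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -1.0]])        # some Hermitian Hamiltonian (hbar = 1)
A = np.array([[0.0, 1.0], [1.0, 0.0]])         # some observable
psi = np.array([1.0, 0.0], dtype=complex)
t = 0.7

U = expm(-1j * H * t)
exp_schrodinger = (U @ psi).conj() @ A @ (U @ psi)   # evolve the state, fix the operator
A_t = U.conj().T @ A @ U                             # evolve the operator, fix the state
exp_heisenberg = psi.conj() @ A_t @ psi

print(np.allclose(exp_schrodinger, exp_heisenberg))  # True
```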

So the rules above are complete. They tell us how quantum mechanics allows us to determine new truths from old truths: it calculates the probabilities that statements about future observations are correct, according to a well-defined formula. Once a particular outcome is produced by the measurement, \(\rho\) or \(\ket\psi\) changes according to a well-defined prescription, the projection, and that new \(\rho\) or \(\ket\psi\) may be used to predict additional observations, and so on.

Once you know how Hilbert spaces and operators on them work and once you learn the relevant mathematics – which contains no physics and cannot be physically controversial – you only need to understand two things: How the matrix elements of operators or traces predict the probabilities of outcomes by Born's rule, and what you need to do with \(\rho\) or \(\ket\psi\) once you learn the outcome of another measurement.

That's it.

My point is that the people who write dozens or hundreds of pages about the "confusing things in quantum mechanics" and still fail to understand the simple and self-evidently consistent and complete rules above are just incredibly stupid people. They may claim to be merely bigots who insist on classical physics ("realism" etc.) but in that case, they're masking a big part of the truth. They're not just bigots, they are very stupid bigots.

The simple rules above may be and should be applied to everything in Nature – in principle not just spins or electrons in the atoms but also to falling trees in the forests, moons orbiting their planets, and indeed, particular patterns seen in the cosmic microwave background. If you want to describe anything in Nature fundamentally correctly, you need to talk in terms of density matrices and observables, everything that you may know or test about Nature must be phrased in terms of observables i.e. operators acting on the Hilbert space, and all predictions about such testable things must be made with the help of Born's rule.

If you're trying to talk about some different "truth" about the state of Nature that is unrelated to your observations – and therefore unrelated to particular operators on the Hilbert space and accompanied by the collapse of \(\rho\) or \(\ket\psi\) – then you are just not doing proper modern physics. If you think that anything about the rules above may be paradoxical or that the random numbers may be forecast in advance, then you couldn't have possibly understood the simple rules I have described.

Back to stupidities about the weak measurements

OK, let me return to the Aharonov et al. paper about acausality. On page 12, we read:

1. On morning, several weak spin measurements were performed on \(N\) particles, resulting in even \(\uparrow/\downarrow\) distributions. These outcomes were recorded, thereby becoming definite and irreversible.
This sentence implicitly claims that the results of a weak measurement are either up or down, just like for a regular strong measurement. But these people must have forgotten the very first paper in which they joined the weak measurement movement. The title was "How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100".

So the results of the weak measurements may be pretty much any number, not just "up" and "down", and the sentence (1) totally misrepresents what may actually happen.
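To quantify the point: the "weak value" of an observable for a pre-selected state \(\ket\psi\) and a post-selected state \(\ket\phi\) is \(\bra\phi A\ket\psi / \langle\phi|\psi\rangle\), and for nearly orthogonal pre/post-selection it may lie far outside the eigenvalue range. A hedged numpy illustration with states I chose myself:

```python
import numpy as np

Sz = 0.5 * np.diag([1.0, -1.0])                      # spin-z, eigenvalues +1/2 and -1/2

theta = 0.5 * np.arccos(1.0 / 200.0)                 # chosen so that <phi|psi> = 1/200
psi = np.array([np.cos(theta), np.sin(theta)])       # pre-selected state
phi = np.array([np.cos(theta), -np.sin(theta)])      # nearly orthogonal post-selected state

weak_value = (phi.conj() @ Sz @ psi) / (phi.conj() @ psi)
print(weak_value)   # ~100, although every strong measurement yields +1/2 or -1/2
```

So the pointer readings of a weak measurement are spread over a huge range and are certainly not a clean list of sharp "up" or "down" results.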

Incidentally, those who speak English would say "in the morning", not "on morning".
2. Then on evening, all the particles underwent strong measurements, on spin orientations chosen randomly, hence unknown beforehand, even to the experimenter himself.
Similarly, people say "in the evening", not "on evening". ;-)

In quantum mechanics, one can't ever know everything precisely, due to the uncertainty principle. But one knows something – the probabilities – beforehand. The previous weak measurements have brought the system to a state and this state may be used to make probabilistic predictions. Probabilistic predictions aren't completely sharp but they imply that the outcomes are not "completely" unknown.

So the sentence above is "partly misleading" and "certainly fuzzy". When the predicted probabilities for the "strong" measurement in the evening are 0% and 100% – and this may happen – there is certainty, and the sentence above, claiming that the result was "unknown beforehand", becomes an unequivocal falsehood.
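A trivial numpy example of such a sharp prediction – the state and the question are my own choice, not the paper's setup:

```python
import numpy as np

rho = np.array([[1.0, 0.0], [0.0, 0.0]])     # system known to be spin-up along z
P_up = np.array([[1.0, 0.0], [0.0, 0.0]])    # evening question: "up along z?"
print(np.trace(P_up @ rho))                  # 1.0, i.e. 100%: hardly "unknown beforehand"
```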
3. All these evening measurements exhibited Bell inequality violations within each pair.
This is complete nonsense. The Bell inequality is an inequality obeyed by some statistical quantities, averages, probabilities, and especially correlations (degrees of correlation). So it just cannot possibly be "applied within each pair"! Whether or not the Bell inequalities are violated may only be decided when \(N\to \infty\) measurements are being made!
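To see just how statistical the statement is, here is a hedged simulation – the angles and the sample size are my own choices – that reproduces the singlet correlations \(E(a,b)=-\cos(a-b)\) and estimates the CHSH combination, which only exceeds the classical bound of 2 as an average over many pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(theta_a, theta_b, n):
    """Estimate E(a,b) for a singlet from n simulated pairs of +/-1 outcomes."""
    a = rng.choice([1, -1], size=n)
    same_prob = np.sin((theta_a - theta_b) / 2.0) ** 2   # P(equal outcomes) for a singlet
    b = np.where(rng.random(n) < same_prob, a, -a)
    return np.mean(a * b)        # a statistical average, meaningless for a single pair

n = 200_000
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlation(a0, b0, n) - correlation(a0, b1, n)
     + correlation(a1, b0, n) + correlation(a1, b1, n))
print(abs(S))   # ~2*sqrt(2) > 2, but each individual pair only contributes one product +/-1
```

An individual pair only gives you two numbers equal to \(\pm 1\); whether the inequality is violated is a property of the ensemble.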

And it goes on and on and on. These people just don't have a damn clue what they are talking about. In the end, the very "program" to find a new acausal paradox with the help of a "weak measurement" is just another plain idiocy. There can't be any paradox like that and the "weak measurement" can't possibly be useful for anything like that.

The reason is simple. Whenever the term "weak measurement" has a well-defined meaning, all stories that include this term may be rephrased in terms of something more fundamental and easier to understand, namely the regular ("strong") measurement. Why?

Just read e.g. this exposition on Wikipedia. You want to measure something you can't touch or measure properly ("strongly"), let me call it a babe B. It should be easy to remember because some babes don't want to be touched etc. But you may make B interact with some other object, ancilla A, and after A and B get entangled a bit, you may measure the ancilla A. The whole demagogy of the "weak measurement" is that this strong measurement of A is sold as a weak measurement of B.

But if the interaction between A and B were non-existent, the measurement of A would clearly tell you nothing about B whatsoever! And even if the interaction exists but is somewhat weak, you should still realize that you're mainly measuring A, not B. The adjective "weak" totally distorts what's wrong with your measurement. The problem isn't that the "measurement of B is weak" in some intrinsic way. The problem is that what you're doing is a measurement of A, not B! ;-) I've discussed this problem in related words in 2012 – the "weak measurement" really depends on all the details of the measurement protocol etc., it's not a pure measurement of B but really a measurement of A+B – actually done as a "strong" measurement of A only.
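Here is a hedged toy model of this point: a qubit B is coupled to an ancilla qubit A by a controlled rotation whose angle plays the role of the coupling strength, and then A is measured strongly. The names, the form of the coupling, and the numbers are my own illustration, not the protocol of the paper:

```python
import numpy as np

def coupling(g):
    """Rotate the ancilla A by the angle g only if the system B is in |1>."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    R = np.array([[np.cos(g), -np.sin(g)],
                  [np.sin(g),  np.cos(g)]])
    return np.kron(P0, np.eye(2)) + np.kron(P1, R)

psi_B = np.array([1.0, 1.0]) / np.sqrt(2)          # system B: (|0>+|1>)/sqrt(2)
psi_A = np.array([1.0, 0.0])                       # ancilla A starts in |0>

for g in (0.02, np.pi / 2):                        # weak vs. effectively strong coupling
    state = coupling(g) @ np.kron(psi_B, psi_A)
    rho = np.outer(state, state.conj())
    P_A1 = np.kron(np.eye(2), np.diag([0.0, 1.0])) # "the ancilla pointer has moved"
    p_moved = np.real(np.trace(P_A1 @ rho))
    rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # trace out the ancilla
    print(g, p_moved, abs(rho_B[0, 1]))
    # weak g: the strongly measured ancilla almost never reveals anything
    # (p_moved is of order g^2) and the coherence of B is almost untouched
```

The only thing that is ever measured is A; the weaker the coupling, the less the reading tells you about B and the less B is disturbed. There is nothing mysterious to be extracted from this trade-off.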

OK, take the 2012 "paradoxical" paper by Aharonov et al. and simply expand all the steps involving the "weak measurement" by substituting the definition of the weak measurement from Wikipedia. So they're not really measuring the spins B (they are the "babe" here) only. They're measuring the system A+B that includes the ancilla A. You make a sequence of measurements of some observables. Each possible outcome of such a measurement is predicted by quantum mechanics according to Born's rule I have explained above. Quantum mechanics says that the random choice is really determined at the moment of the measurement and is random – unless the predicted probabilities are 0 or 100 percent. And then quantum mechanics tells you how to update your density matrix.

That's it.

It's spectacularly obvious that there's no illegitimate influence on the future, reading the future random decisions in advance, or anything of the sort. The addition of "weak measurements" only adds room for mistakes and stupidity. This terminology encourages you to forget about the subsystem A altogether. By doing so, you may forget e.g. that all the post-measurement states are still unavoidably orthogonal to each other (because the ancilla-related factors of the post-measurement state vectors are orthogonal to each other). It is absolutely obvious that these "weak measurements" can't possibly bring anything fundamentally interesting to this debate. Weak measurements aren't needed, they aren't fundamental, and they aren't even useful in practice. All the talk about "weak measurements" is just a misleading language to make gullible people think that they're doing something completely different than what they are doing.

In this blog post, I wanted you to understand that the quantum mechanical prescription for following and predicting the behavior of Nature is simple. It's a simple sequence of "predict probabilities from \(\rho\) by Born's rule", "update \(\rho\) by the projection when you learn the actual outcome", "predict probabilities from \(\rho\) by Born's rule", "update \(\rho\) by the projection when you learn the actual outcome", and so forth. It's just two simple steps that are being repeated all the time.
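As a sketch of that loop – with projectors and a random seed that are purely my own choices – the whole procedure fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(P, rho):
    """One cycle: Born probability, random outcome, projection, renormalization."""
    p_yes = float(np.real(np.trace(P @ rho)))
    yes = bool(rng.random() < p_yes)
    Q = P if yes else np.eye(len(rho)) - P
    new_rho = Q @ rho @ Q
    return yes, new_rho / np.trace(new_rho)

P_z = np.diag([1.0, 0.0])                          # "spin up along z?"
P_x = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])     # "spin up along x?"

rho = np.eye(2) / 2                                # start with maximal ignorance
for P in (P_z, P_x, P_z, P_x):                     # keep alternating the two questions
    answer, rho = measure(P, rho)
    print(answer, np.round(rho, 3))
```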

When you understand what these two steps do, you should apply them in many situations because this is really how quantum mechanics wants you to understand everything in Nature, including the large objects etc. that were previously described by classical physics rather well. When you combine this knowledge with some mathematical proofs, you should also be able to understand
  1. why the predictions of quantum mechanics become basically equivalent to those of classical physics for "large" systems – why classical physics becomes an OK approximation in certain regimes
  2. why there is never any non-local influence; in quantum field theory, spacelike-separated operators (graded) commute with each other, which is why the decision "what to measure" by a distant observer can't modify your local probabilistic predictions (see the toy check below)
and several other things. But all the claims that paradoxes follow from the universal postulates of quantum mechanics are self-evident junk and it shouldn't be hard for you to see why. Quantum mechanics just tells you how to evolve the operators (or the density matrix) in time, how the probabilities of outcomes are calculated from the matrix elements or traces, and how to update the density matrix once you learn an actual outcome. That's what all the research of Nature by a quantum physicist looks like. You can't ever get any information about the state of Nature without a measurement – without Locke's sensory reception – which is why it's obviously illegitimate to "demand" that the fundamental theory of Nature "must" describe facts that are detached from any measurement.
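Here is the toy check of the locality point promised above – a hedged finite-dimensional analogue, with a state and measurement bases I chose myself, of the statement that a distant measurement cannot change your local predictions:

```python
import numpy as np

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # entangled pair (|00>+|11>)/sqrt(2)
rho = np.outer(psi, psi.conj())

def reduced_B(rho):
    """Local (reduced) density matrix of the nearby qubit B."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)   # trace out the distant qubit A

def measure_A(rho, basis):
    """Non-selective measurement of the distant qubit A in the given orthonormal basis."""
    out = np.zeros_like(rho)
    for k in range(basis.shape[1]):
        P = np.kron(np.outer(basis[:, k], basis[:, k].conj()), np.eye(2))
        out = out + P @ rho @ P
    return out

z_basis = np.eye(2)
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

print(np.allclose(reduced_B(rho), reduced_B(measure_A(rho, z_basis))))   # True
print(np.allclose(reduced_B(rho), reduced_B(measure_A(rho, x_basis))))   # True
```

Whatever the distant observer decides to measure, your reduced density matrix – and therefore every probability you can compute locally – stays the same.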

Nothing can possibly be contradictory or incomplete about quantum mechanics. For example, if you aren't sure whether you have observed/perceived/experienced/measured something or what it was (or if your sensory receptors or your brain are malfunctioning), then you won't be able to calculate meaningful correct predictions but it's not the fault of quantum mechanics. If you didn't know whether or what you observed in a classical world (or if your sensory receptors or brain were malfunctioning over there), you would have been unable to make correct meaningful predictions, too!

So quantum mechanics doesn't prevent you from making predictions in any situation in which classical physics would have been capable of saying something. Quantum mechanics is just as consistent, complete, and predictive as classical physics – it's just a different thing generating different predictions that can't be imitated by any classical theory, a more empirically successful theory, and arguably (from a pure theorist's viewpoint) a more general, natural, and prettier kind of a physical theory.

