Monday, March 07, 2016

Claims about contradictions in first kind measurement are crackpottery

On his blog, Florin Moldoveanu (who describes himself as a researcher in the foundations of quantum mechanics) has written lots of tedious blog posts that were rich in formalism, and some of them looked valid.

Even though most of these texts look much more cryptic than necessary, I was repeatedly tempted to think that he must ultimately understand quantum mechanics. But with sufficient frequency, he posts something so self-evidently wrong that the belief that he might be more than just another anti-quantum zealot evaporates almost instantly. These never-ending bogus claims about "problems" within quantum mechanics are the only thing that most of the "community" of anti-quantum zealots calling themselves "researchers in quantum foundations" keeps on producing.

That was also the case of Moldoveanu's recent text

Use and abuse of von Neumann measurement of the first kind
in which he claimed that quantum mechanics with the "first kind measurement" – which, in von Neumann's jargon (and von Neumann's measurement scheme), is basically the most ordinary, canonical type of measurement in quantum mechanics, as I will discuss – suffers from a "flaw" forcing us to sacrifice at least one of the three good features below:
  1. The wave function of a system is complete.
  2. It evolves via a linear equation.
  3. Measurements yield sharp outcomes.
Now, to understand quantum mechanics means to be sure about many things, including the validity of all three statements above – and about numerous other facts. Florin doesn't get it, so he obviously doesn't understand quantum mechanics.

What is this "first kind" stuff? In 1932, John von Neumann published a book (Mathematische Grundlagen der Quantenmechanik) that contained some discussion of a "predecessor of decoherence". He discussed the quantum states of the measurement apparatus and the entanglement (without using this word, which was only coined in 1935) between the measured system and the apparatus, among other things.

Much of this stuff is really unnecessary to understand any physical aspect of quantum mechanics but at least, von Neumann's formalism worked within proper quantum mechanics. Many decades later, Everett and even Zurek and others have largely copied all the "good stuff" from von Neumann but Everett and his followers began to distort some of the rules and abandon quantum mechanics.

You may check this section at Wikipedia to learn some basics of von Neumann's measurement scheme.

The measured system is found in the state \(\ket\psi\) and may always be decomposed into the superposition\[

\ket\psi = \sum_n c_n \ket{\psi_n}

\] where \(\ket{\psi_n}\) are the eigenstates of the observable \(L\) that we are going to measure. Now, the basics of quantum mechanics simply say that \(P_n = |c_n|^2\) are the probabilities that we will measure the eigenvalue \(L=\ell_n\) and that the wave function of the system of interest collapses to \(\ket{\psi_n}\) after the measurement.
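As a numerical sanity check on the Born rule just stated, here is a minimal sketch (not from the original discussion; the amplitudes are made up for illustration):

```python
import numpy as np

# Made-up amplitudes c_n of |psi> in the orthonormal eigenbasis of L
c = np.array([0.6, 0.8j])

# Born rule: P_n = |c_n|^2
probs = np.abs(c) ** 2
assert np.isclose(probs.sum(), 1.0)   # the state is normalized

# A measurement yields the eigenvalue l_n with probability P_n, and the
# wave function collapses to the corresponding eigenstate |psi_n>:
n = np.random.choice(len(c), p=probs)
post = np.zeros(len(c), dtype=complex)
post[n] = 1.0                         # |psi_n> in this basis (up to a phase)
```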

That's simple and it was known years before von Neumann began to play with these things – it was understood by the fathers of quantum mechanics, Heisenberg, Pauli, Jordan, Bohr, and perhaps Dirac. Von Neumann wanted to study some of the "modern" stuff so he also analyzed the wave function of the measurement apparatus etc. – even though many of the "modern" people doing this stuff think that this aspect of their analyses makes them more "high brow" than the early workers in quantum mechanics.

Historically, this belief in their superiority is indefensible. Macroscopic objects – including measurement apparatuses (but also lattices, metals, gases, molecules of any size etc.) – were being described by the newly found formalism of quantum mechanics since the very early years of quantum mechanics. No serious physicist doubted that quantum mechanics described all of those things, not just individual atoms etc.

OK, what happens if we analyze the system of interest as well as the measurement apparatus? The system of interest (the object we measure) is found in the wave function \(\ket\psi\) before the measurement while the apparatus is in the state \(\ket\phi\). They're not entangled yet which is why the state of the "composite system" is simply the tensor product \(\ket\psi \ket\phi\).

The apparatus is able to measure the observable \(L\) acting on the wave function \(\ket\psi\). The measurement starts by allowing the system \(\ket\psi\) to influence, or imprint itself on, the apparatus \(\ket\phi\). So the composite system (originally described by a non-entangled state vector) unitarily evolves to the entangled state vector\[

\ket\Psi = \ket\psi \ket\phi \to \sum_n c_n \ket{\psi_n} \ket{\phi_n}.

\] In the final state, we see that the states of the measured object as well as the apparatus are entangled. One is in the \(n\)-th basis vector if the other is in the corresponding \(n\)-th basis vector. The basis vectors \(\ket{\psi_n}\) are eigenstates of \(L\).
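For the smallest example – a qubit measured by a two-state pointer – this premeasurement unitary can be taken to be a CNOT gate. A sketch (the amplitudes are made up):

```python
import numpy as np

# Premeasurement for a qubit and a two-state pointer: a CNOT copies the
# eigenbasis label of the object into the apparatus, |psi_n>|phi> -> |psi_n>|phi_n>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

c = np.array([0.6, 0.8])        # made-up amplitudes c_n
psi = c                         # object state c_0|0> + c_1|1>
phi = np.array([1.0, 0.0])      # apparatus state before the measurement
Psi_in = np.kron(psi, phi)      # unentangled product state |psi>|phi>
Psi_out = CNOT @ Psi_in         # entangled state c_0|0>|0> + c_1|1>|1>

assert np.allclose(CNOT.T @ CNOT, np.eye(4))   # the evolution is unitary
```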

Now, we're only really interested in the object described by the wave functions denoted by the letter \(\ket\psi\) so we trace over the degrees of freedom of the apparatus to obtain the reduced density matrix of the object which happens to be\[

{\rm Tr}_{\phi} \ket\Psi \bra\Psi = \sum_n |c_n|^2 \ket{\psi_n} \bra{\psi_n}

\] The density matrix for the measured object has been diagonalized: the diagonal entries are the probabilities, while the off-diagonal elements vanish because of the coupling with the apparatus. Note that this reduced density matrix differs from \(\ket\psi \bra \psi\); the latter isn't diagonal (it contains all the nonzero off-diagonal entries, too).
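The partial trace above can be checked numerically. A sketch with made-up amplitudes, tracing the apparatus out of \(\ket\Psi\bra\Psi\):

```python
import numpy as np

c = np.array([0.6, 0.8])                 # made-up amplitudes c_n
d = len(c)

# Entangled state sum_n c_n |psi_n>|phi_n>; the index i*d + k means |psi_i>|phi_k>
Psi = np.zeros(d * d)
for n in range(d):
    Psi[n * d + n] = c[n]

# Full density matrix |Psi><Psi|, reshaped so the indices are (i, k, j, l)
rho_full = np.outer(Psi, Psi).reshape(d, d, d, d)

# Trace over the apparatus indices (k = l): the reduced matrix comes out diagonal
rho_obj = np.einsum('ikjk->ij', rho_full)

# Compare with the pure-state matrix |psi><psi|, which keeps off-diagonal terms
rho_pure = np.outer(c, c)
```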

The coupling with the appropriate apparatus had the consequence of "forgetting" the relative phases between \(c_n\). Only the squared absolute values \(|c_n|^2\) – the probabilities – are preserved after the "premeasurement". I haven't said it yet: the aforementioned coupling between the object of interest and the apparatus is referred to as the "premeasurement".

Our transition from the general state \(\ket\psi \ket\phi\) to the reduced density matrix is sometimes called the "weak von Neumann projection" while the "strong von Neumann projection" is the subsequent event of the collapse, the actual measurement which abruptly changes the state vector to one of the vectors \(\ket{\psi_n}\).

For a degenerate spectrum of \(L\), we must adopt a slight generalization of von Neumann's formulae above, namely the Lüders projection, where \(|c_n|^2\) are summed over the degenerate basis vectors. Using projection operators, the forgetting of the relative phases is expressed not by the previous "weak von Neumann projection" above but by the sum\[

\rho \to \sum_n P_n \rho P_n

of projection operators \(P_n\) projecting onto the subspaces corresponding to the eigenvalues \(\ell_n\).
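A minimal sketch of the Lüders rule for a degenerate eigenvalue (the state and the splitting of the spectrum are made up): coherence between distinct eigenvalues is erased, while coherence inside a degenerate subspace survives.

```python
import numpy as np

# Eigenvalue l_1 is doubly degenerate (basis vectors 0, 1); l_2 is non-degenerate (vector 2)
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

psi = np.array([0.5, 0.5, 1 / np.sqrt(2)])   # made-up normalized state
rho = np.outer(psi, psi)

# Lüders projection: rho -> sum_n P_n rho P_n
rho_after = P1 @ rho @ P1 + P2 @ rho @ P2
```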

Simple. All of that is uncontroversial. It is really not adding anything to quantum mechanics. We just study a composite system that contains the measured object as well as the apparatus. It doesn't allow us to make quantum mechanics any less quantum. We still need an actual observer to perceive one of the states \(\ket{\phi_n}\) of the apparatus and therefore one of the states \(\ket{\psi_n}\) of the object, too. And the apparatus wasn't what we were really interested in, either. The addition of the "apparatus layer" to the discussion of the measurement (the addition by von Neumann) was not essential from a physics viewpoint.

If there's the first kind measurement, there should also be the second kind measurement, right? Yes, and there is one. The second kind measurement is one starting with a similar premeasurement evolution\[

\ket\psi \ket\phi \to \sum_n c_n \ket{\chi_n} \ket{\phi_n}

\] but in this case, the vectors \(\ket{\chi_n}\) that we have used for the object of interest are not orthogonal to each other. (It also means that they can't be eigenstates of a Hermitian operator corresponding to differing eigenvalues.) They're more general vectors.

The states \(\ket{\phi_n}\) of the apparatus are still orthogonal to each other (and consequently, the individual terms in the sum describing the composite system are also orthogonal to each other) so we effectively measure some property of the apparatus again. But once this property of the apparatus is measured, we can't deduce a corresponding property of the "system of interest" in such a way that the different outcomes would be strictly mutually exclusive. The information we learn about the measured object can't be interpreted classically and this "second kind measurement" may also be shown to produce irreversibility because the post-collapse probabilities differ from the pre-collapse ones.
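The failure of mutual exclusivity can be quantified by the overlap of the post-measurement states. A sketch with made-up non-orthogonal vectors \(\ket{\chi_n}\):

```python
import numpy as np

# Made-up non-orthogonal post-measurement states of a second-kind measurement
chi_A = np.array([1.0, 0.0])
chi_B = np.array([1.0, 1.0]) / np.sqrt(2)

# The overlap |<chi_B|chi_A>|^2 is nonzero, so the post-measurement states
# corresponding to outcomes "A" and "B" are not strictly mutually exclusive
# and the second-kind measurement cannot be repeatable.
overlap = abs(np.vdot(chi_B, chi_A)) ** 2
```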

OK, so the "first kind" measurement means that the states of the object of interest have to be orthogonal to each other; they are not orthogonal to each other for the "second kind" measurement. Moldoveanu never mentions or discusses this "orthogonality" associated with the "first kind" at all. As far as I can say, he can't possibly know what the "first kind" adjective actually means. He only uses words because they sound fancy but he just doesn't know what he's talking about.

Now, the key "negative" claim by Moldoveanu is that this "first kind measurement" as described by von Neumann cannot be repeatable. This is an extremely bizarre statement because this "first kind measurement" is the most ordinary, the simplest, the most problem-free, the most controllable evolution before the measurement and during the measurement that textbook quantum mechanics involves. Moldoveanu should know that what he claims about the "non-repeatability" is just plain rubbish but he doesn't know it.

In fact, almost all the credible texts using the phrase "first kind measurement" explain almost immediately that those are "repeatable" (for discrete spectra). You may find a proof in the third chapter of Max Jammer's book. (Or check this page by Pechenkin to see that "first kind" and "repeatable" are basically synonymous.) Jammer's book has been the #1 source of wisdom about the foundations of quantum mechanics for other serious authors, e.g. for Wheeler and Zurek, who edited their 1983 volume about the subject.

Alternatively, open this Busch-Lahti 1996 paper and search for the word "repeatable". You will find it at 14 places (and "first kind" at 10 places) and learn that "repeatable" and "first kind" are pretty much equivalent for discrete spectra. Repeatable measurements are of the first kind. Problems occur with continuous spectra – the eigenstates aren't normalized – and since the 1930s, there have been discussions about the possibility to use the "first kind" formalism for continuous spectra etc. Also, if you want to measure two non-commuting observables, you will find a conflict with repeatability – a way to describe the uncertainty principle.

But for the situation in which we really deal with discrete spectra – and therefore with the sums – there can't be any contradiction between "first kind" and "repeatable". You may check Moldoveanu's detailed propositions about the conflict and see that they're just plain stupid. He asserts that there exists no unitary map that could map\[

\begin{aligned}
|\psi_A\rangle \otimes |M_0 \rangle &\rightarrow |\psi_A \rangle \otimes |M_A \rangle \\
|\psi_B\rangle \otimes |M_0 \rangle &\rightarrow |\psi_B \rangle \otimes |M_B \rangle \\
(c_A|\psi_A\rangle + c_B|\psi_B\rangle)\otimes |M_0\rangle &\rightarrow c_A|\psi_A\rangle \otimes |M_A\rangle + c_B|\psi_B\rangle \otimes |M_B\rangle
\end{aligned}

\] Oh, really? Why should such a unitary operator be forbidden? The third condition follows from the first two and linearity – it's just a damn superposition of them. So we may obviously ignore it because we look for a linear operator, anyway.

And the first two conditions simply say that two particular normalized, mutually orthogonal vectors \(\vec v_1,\vec v_2\) are mapped to two particular normalized, mutually orthogonal (you may check all the adjectives) vectors \(f(\vec v_1),f(\vec v_2)\). Does a unitary operator or a unitary matrix with these properties exist? You bet. The conditions simply tell us what two columns of the matrix are (in a basis that includes \(\vec v_1,\vec v_2\)). The norms and the inner product of the vectors \(\vec v_1,\vec v_2\) are preserved by \(f\), so there's no conflict with unitarity here. And the remaining columns of the matrix (if other outcomes besides \(A,B\) exist) may be completed by the Gram–Schmidt orthogonalization process.
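One can construct the allegedly non-existent unitary explicitly. Here is a sketch (the vectors are made-up stand-ins for \(\ket{\psi_A}\otimes\ket{M_0}\) etc.) that completes the two prescribed orthonormal input and output vectors to full bases by Gram–Schmidt and builds \(U=\sum_i \ket{w_i}\bra{v_i}\):

```python
import numpy as np

def complete_basis(family, d):
    """Extend a list of orthonormal real vectors to a full orthonormal basis
    of R^d by Gram-Schmidt against the standard basis."""
    basis = [np.asarray(v, dtype=float) for v in family]
    for e in np.eye(d):
        v = e - sum(np.dot(b, e) * b for b in basis)
        if np.linalg.norm(v) > 1e-10:
            basis.append(v / np.linalg.norm(v))
    return np.array(basis)

d = 3
v1 = np.array([1.0, 0.0, 0.0]); v2 = np.array([0.0, 1.0, 0.0])   # made-up inputs
w1 = np.array([0.0, 0.0, 1.0]); w2 = np.array([1.0, 0.0, 0.0])   # made-up targets

V = complete_basis([v1, v2], d)   # rows: a full basis starting with v1, v2
W = complete_basis([w1, w2], d)   # rows: a full basis starting with w1, w2
U = W.T @ V                       # U = sum_i |w_i><v_i|, so U v_i = w_i

assert np.allclose(U @ v1, w1) and np.allclose(U @ v2, w2)
assert np.allclose(U.T @ U, np.eye(d))   # unitary (orthogonal, since real)
```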

To misunderstand these claims means to fail an exam in the first or second semester of an undergraduate linear algebra course. What Moldoveanu says about the non-existence of the unitary operator isn't just wrong; it is utterly idiotic. It is idiotic from the viewpoint of someone who has mastered the basics of linear algebra. It is equally idiotic from the viewpoint of someone who is thinking about quantum mechanics because if you think in terms of quantum mechanics, every damn observation of the world we ever do (at least when it comes to discrete spectra, and due to limitations on accuracy etc., we always measure discrete spectra, strictly speaking) starts with a unitary transformation that Moldoveanu claims not to exist.

The mathematical claim about the non-existence of the "first kind measurement" unitary operator is dumb. I've said that the very usage of "first kind measurement" by Moldoveanu was just a snobbish ritual because Moldoveanu didn't use the difference between "first kind" and "second kind" (about the orthogonality) at all – the adjective was inserted just to make his texts look more incomprehensible. But there are other aspects of his blog post that prove that he is simply not thinking about physics quantum mechanically.

At the beginning, he talks about the states of the apparatus\[

\ket{M_0},\quad \ket{M_A}, \quad \ket{M_B}

\] where \(\ket{M_A}\) and \(\ket{M_B}\) play the role of the states \(\ket{\phi_n}\) I mentioned previously and they describe the state of the apparatus that has detected the corresponding eigenvalue in the discrete spectrum. But as you can see, Moldoveanu has added one more state of the apparatus, \(\ket{M_0}\), which he describes as the "apparatus ready for measurement".

Now, feel free to talk about this state of the apparatus. If the apparatus has a display, \(A\) and \(B\) may describe letters that may appear on the display while the display may also be showing nothing, especially before the measurement is completed, so there is another state of the apparatus. There isn't any corresponding state of the measured object. What corresponds to \(\ket{M_0}\) is "ignorance" about the measured object.

A funny thing is that the correct discussion of the "first kind measurement" doesn't include any \(\ket{M_0}\). After the successful measurement, we're guaranteed not to get the state \(\ket{M_0}\). If we got this state, it would mean that we're still ignorant about the state of the measured object. In other words, it would mean that the measurement hasn't taken place or hasn't been successful!

And if the right hand side of the "premeasurement" contained some extra term\[

c_0 \ket{\psi_0}\ket{M_0}

\] with a nonzero amplitude (Moldoveanu suggests that this term isn't there, \(c_0=0\)), we would run to the following problem: What is \(\ket{\psi_0}\)? We don't have that many basis vectors in the Hilbert space of the measured object because the label \(0\) shouldn't be associated with any particular eigenvalue. As we said, this "ready for measurement" is associated with "ignorance" about the measured object.

So this term would either lead to an immediate contradiction, or we would have to define \(\ket{\psi_0}\) as a superposition of the actual eigenstates \(\ket{\psi_n}\). That would mean that \(\ket{\psi_0}\) fails to be orthogonal to all the vectors \(\ket{\psi_n}\), and the premeasurement would therefore be of the second kind, not the first kind! At any rate, there's no way to "undo" our previous proof that the "first kind" unitary operator governing repeatable measurements – the one Moldoveanu claims not to exist – always exists.

I think there's a well-known fallacious reason why he added the state \(\ket{M_0}\) to the list of states of the apparatus. He doesn't like quantum mechanics, so he wants to get rid of the observers. He wants to "objectivize" the question of whether an observation has taken place or not, and all his states contain either \(\ket{M_0}\) or \(\ket{M_A},\ket{M_B}\), which say "No" and "Yes", respectively. But quantum mechanics always depends on observers, and superpositions of all states – including the states "the apparatus has detected something" and "the apparatus hasn't detected anything yet" – are always allowed by the universal superposition postulate. You don't gain any clarity by adding \(\ket{M_0}\). As I said, after a successful measurement, such an "ambiguous" state vector must have probability zero, anyway.

These anti-quantum zealots are just obnoxious. Now, more than 90 years after the birth of quantum mechanics, they will flood your Internet and journals with arbitrarily preposterous claims about alleged "problems" of quantum mechanics, claims that intelligent undergraduate freshmen have to find laughable. This "community" allows (and maybe encourages) itself to emit arbitrarily dumb statements as long as they are used to say that there is some problem with quantum mechanics. Such people should be instantly stripped of their physics or mathematics degrees. Instead, what happens is that they prove their membership in the community of "researchers" in the quantum foundations.

Like the creationist movement, it's a bunch of cranks. The deterioration of that "community" was gradual. Von Neumann started this discussion of the entangled states involving the apparatus etc. It was redundant but basically correct – and if an error appeared, it was admitted and corrected. EPR wrote mostly wrong things about quantum mechanics but the people allowed to call themselves "experts in foundations of QM" agreed that EPR were wrong. In 1927, experts dismissed the pilot wave theory and even Bohm's revival of it in the 1950s was considered "not even wrong" by Pauli (and all other big shots of that time). When Everett wrote his bizarre thesis, it was still the case that the experts agreed he was wrong, especially all the suggestions that quantum mechanics had a flaw or needed to be "fixed" in some way, and Everett was rightfully denied a postdoc job. In the 1970s, the field belonged to folks like Max Jammer who were still writing correct things. In the 1980s, the modern decoherence scheme was born along with consistent histories etc. and most of the experts influencing that field still understood quantum mechanics correctly. Since the end of the 1980s, the number of incompetent hacks who just aren't capable of understanding quantum mechanics grew, and they're the majority of that field today.
