Mark Alford's wrong paper claiming that there's nonlocality has received its first followup: Anthony Sudbery's physics.hist-ph paper

The future's not ours to see

Sudbery has chosen a formidable foe in the physics world, namely Doris Day.

He presented evidence that her 1956 song is, in fact, incorrect and that he may be an even better physicist than Doris Day. ;-)

Doris Day was singing:

Que sera sera

Whatever will be will be

The future’s not ours to see

Que sera sera.

Sudbery sensibly argues:

I will argue that quantum mechanics casts doubt on the second line of the song, which suggests that even if we can’t know it, there is a definite future...

We may say that the future isn't definite and isn't yet decided – because of the probabilistic character of quantum mechanics and especially results such as the "free will theorem", which may also be phrased as an argument against "fatalism".

We don't even know what will be the right detailed questions in the future. The question "Who wins World War III" will only be relevant if there will be such a war, and so on. The observers will choose their own relevant questions – and those will depend on their previous observations.

Unfortunately, the argumentation by Sudbery doesn't quite make sense because if the second line ("whatever will be will be") is wrong, then also the first and fourth lines ("que sera sera") are wrong because they're just translations of the second line to Spanish! (Doris Mary Ann Kappelhoff picked the Spanish phrase despite her all-German ancestry.)

So if he were treating the whole song carefully, he would know that the truth value of all three of these lines, and not just the second one, is the same. Moreover, he would also know that despite the "free will theorem", they are valid because they're really tautologies. Even if I believe in free will, it's still true that "whatever will be will be". ;-)

OK, a paper attacking a singer is funny. But there are more serious misunderstandings. Sudbery quoted Alford's sentence:

In ordinary life, and in science up until the advent of quantum mechanics, all the uncertainty that we encounter is presumed to be... uncertainty arising from ignorance.

Quantum mechanics introduces a new source of uncertainty, thanks to the uncertainty principle. But if one is careful, he will see that Alford's sentence above is illogical, too. The fact that he denies between the lines is that

according to quantum mechanics, all uncertainty arises from ignorance, too!

Ignorance and uncertainty are de facto synonyms. They are inseparable. At most, we may use them for slightly different quantities. "Ignorance" is largely used for our not knowing the truth value of Yes/No (or one/zero) propositions; while "uncertainty" is largely used for quantities with different, more complicated, especially continuous spectra – e.g. for \(\Delta x\).

But in the end, if we are ignorant about the truth value of a statement about a continuous quantity, e.g. \(x\lt 0\), then there is an uncertainty in \(x\), and if there is an uncertainty \(\Delta x\), then there exist binary propositions about \(x\), such as \(x\lt 0\), whose truth value we are ignorant about. So while the precise linguistic usage of the words "ignorance" and "uncertainty" may favor one word or the other in various contexts, the ideas that they convey are exactly the same. They follow from one another; and they may be viewed as special examples of one another, too.

One of the many points that Alford and many others don't understand is that

the uncertainty principle states that there exists a certain minimum amount of ignorance.

The uncertainty principle imposes the lower bound not only on products of continuous uncertainties such as \[

\Delta x \cdot \Delta p\geq \frac \hbar 2.

\] It also implies an unavoidable ignorance about the truth value of Yes/No propositions. For example, if \(\Delta p\lt \infty\), then the probability that \(x\) is greater than a specific number in the interval \(\langle x\rangle\pm \Delta x\) is a number strictly in between 0% and 100%.

Similar facts may be easily derived from the commutators just like the original uncertainty principle. The point is that the Yes/No statement such as \(x\lt 0\) is represented by a linear Hermitian projection operator \(P_{x\lt 0}\). And this operator may be written as a function of the operator \(x\) – in this case,\[

P_{x\lt 0} = \theta(-x).

\] Because \([x,p]\neq 0\), we have \([P_{x\lt 0},p]\neq 0\), too. So unless \(p\) is completely unknown, the uncertainty of the operator \(P_{x \lt 0}\) is unavoidably positive. But that conclusion is exactly equivalent to the statement that the probability that \(x\lt 0\) holds is a number different from 0% as well as 100%.
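The conclusion above can be checked numerically. The following is a minimal sketch, assuming \(\hbar=1\), a finite grid, and an arbitrarily chosen Gaussian wavepacket (the center \(x_0=1\) and width \(\sigma=1.5\) are illustrative assumptions, not anything from the paper): a state with finite \(\Delta p\) assigns the proposition \(x\lt 0\) a probability strictly between 0 and 1, so \(\Delta P\gt 0\).

```python
import numpy as np

# Discretized toy model: hbar = 1, finite grid (grid parameters are assumptions).
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# A Gaussian wavepacket centered at x0 = 1 with a finite momentum spread.
x0, sigma = 1.0, 1.5
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize to unit norm

# Probability of the Yes/No proposition "x < 0", i.e. <psi| P_{x<0} |psi>:
p1 = np.sum(np.abs(psi[x < 0]) ** 2) * dx
dP = np.sqrt(p1 * (1 - p1))   # uncertainty of the projector P_{x<0}

print(0.0 < p1 < 1.0)   # True: the truth value of "x < 0" stays uncertain
print(dP > 0.0)         # True: Delta P is unavoidably positive
```

For this packet, \(p_1\approx 0.25\) (the Gaussian tail below zero), so the ignorance about the binary question is manifestly nonzero.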

**Again, the uncertainty principle tells us that probabilities strictly in between 0% and 100% are unavoidable in physics.**

But we may still say that "ignorance" and "uncertainty" refer to the same intrinsic thing. Alford and his soulmates try to pretend that these two words are completely different, but they never give any coherent explanation of the sense in which they could be different. Well, there can't be any coherent explanation because they're obviously not different.

In the end, the attempt to "segregate" the uncertainty into two completely different effects is nothing else than a sign of their anti-quantum zeal. They want to talk about the uncertainty that they already knew in classical physics (which is a good one that they can tolerate); and the uncertainty that quantum mechanics introduced (which they want to erase or misinterpret as something completely different).

On page 3, Alford divides the uncertainty into two types:

1. Uncertainty arising from our ignorance. The outcome of the measurement could be predicted given accurate knowledge of the initial state of the object and the laws governing its evolution, but we don’t have sufficiently accurate information about these things to make an exact prediction.
2. Fundamental uncertainty: the outcome of the measurement has an essentially random component, either in the evolution of the system or its effect on the measuring device. In a sense the system gets to “decide on its own” how to behave.

But that's not how Nature works. No internal mechanisms, and no implied nonlocality, exist in the world around us. Even the uncertainty that follows from the uncertainty principle should be interpreted as an equivalent description of the ignorance of the observer, not as some extra pseudorandom generator hidden inside the objects. Instead, the uncertainty principle says that the ignorance about a question in a given situation can't decrease below a certain lower bound. The uncertainty implied by the uncertainty principle is new and fundamental; but it must still be considered a part of the uncertainty described in (1).

In quantum mechanics, the most general description of the state of a physical system is in terms of the density matrix \(\rho\). The probability that a Yes/No statement encoded in the projection operator \(P\) is right is simply \({\rm Tr}(P\rho)\); it's the expectation value of the operator \(P\) (whose eigenvalues are zero and one). The expectation value of a more general quantity \(x\) is \({\rm Tr}(x\rho)\). The squared uncertainty \((\Delta x)^2\) of an operator is \[

{\rm Tr}(x^2\rho) - ({\rm Tr}(x\rho))^2

\] Now, the density matrix \(\rho\) is the exact quantum counterpart of the probability distribution \(\rho(q_i,p_i)\) on a classical phase space in classical statistical physics. So whenever \(\rho\) has several nonzero eigenvalues, there is some uncertainty – of the same kind that existed in classical physics – about the state of the system. This is analogous to the classical function \(\rho(q_i,p_i)\) that is supported by more than one point in the phase space.
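The formulas above are easy to evaluate in a finite-dimensional toy model. The following sketch assumes an illustrative three-level system in which the "position" operator \(x\) has eigenvalues \(-1,0,+1\); the two pure states mixed into \(\rho\) are arbitrary choices, not anything canonical.

```python
import numpy as np

# Toy 3-level system: x has eigenvalues -1, 0, +1 (an illustrative assumption).
x = np.diag([-1.0, 0.0, 1.0])
P = np.diag([1.0, 0.0, 0.0])          # projector encoding the question "x < 0"

# A mixed density matrix rho = (|a><a| + |b><b|) / 2, with unit trace:
a = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(a, a) + 0.5 * np.outer(b, b)

prob = np.trace(P @ rho)                    # Tr(P rho): probability of "x < 0"
mean = np.trace(x @ rho)                    # Tr(x rho): expectation value of x
var = np.trace(x @ x @ rho) - mean ** 2     # (Delta x)^2 = Tr(x^2 rho) - Tr(x rho)^2

print(prob, mean, var)  # 0.25 0.0 0.5
```

Because \(\rho\) here has several nonzero eigenvalues, there is a spread of the same kind a classical distribution \(\rho(q_i,p_i)\) supported on several phase-space points would produce.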

Can you get rid of this uncertainty? In classical physics, in principle, you can, and if you do so, the probability distribution \(\rho(q_i,p_i)\) becomes a delta-function localized at a particular point \((q_i,p_i)\) of the phase space. Can you do it in quantum mechanics?

In quantum mechanics, the *closest* thing that you can do is to guarantee that your density matrix \(\rho\) only has one nonzero eigenvalue (equal to one); all other eigenvalues are zero. This is equivalent to \[

\exists \ket\psi:\,\,\rho = \ket\psi\bra\psi

\] The density matrix becomes a simple density matrix calculated from a pure state \(\ket\psi\). If you look at the values of \(x,p\) that this pure density matrix represents, you may make them pretty well-defined but\[

\Delta x \cdot \Delta p \geq \frac\hbar 2

\] will always hold. So in the phase space, the maximally localized state may occupy a "fuzzy cell" of the area \(2\pi\hbar\) – but not a smaller area (or volume; for \(N\) position-momentum pairs, the volume of the cell is \((2\pi \hbar)^N\)).
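The saturation of this bound can be seen numerically. The sketch below assumes \(\hbar=1\), a finite grid, and a pure Gaussian state (whose width \(\sigma\) is an arbitrary choice); a Gaussian is the state that saturates \(\Delta x\cdot \Delta p = \hbar/2\), the smallest possible "cell".

```python
import numpy as np

# Check Delta x * Delta p for a pure Gaussian state (hbar = 1 assumed).
N, L = 4096, 60.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3                                 # illustrative width
psi = np.exp(-x ** 2 / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

delta_x = np.sqrt(np.sum(x ** 2 * np.abs(psi) ** 2) * dx)  # <x> = 0 by symmetry

# Momentum-space wavefunction via FFT on the conjugate grid:
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / L
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dk)              # normalize in k-space
delta_p = np.sqrt(np.sum(k ** 2 * np.abs(phi) ** 2) * dk)  # <p> = 0 by symmetry

print(delta_x * delta_p)  # ~0.5, i.e. hbar / 2
```

Any non-Gaussian deformation of \(\psi\) only pushes the product above \(\hbar/2\); it can never go below.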

One of my points is that the localization of \(\rho\) may be viewed as a completely analogous process. In classical physics, it may go all the way to the point where \(\rho(q_i,p_i)\) equals a delta-function and the ignorance goes to zero. In quantum mechanics, that's not possible. The uncertainty principle guarantees that instead of a delta-function, the maximally localized distribution in the phase space occupies the area \(2\pi\hbar\). So there will always be some uncertainty in the values of \(x\) and \(p\) or most of their functions. Most pairs of operators refuse to commute with each other, so if the value of one is known, the other is uncertain, etc.

But the space of allowed density matrices \(\rho\) is a compact, continuous, convex space. It is not divided into pieces; and it doesn't have any canonical subspaces. The *interpretation and consequences* of the uncertainty in quantum mechanics are exactly the same as the interpretation and consequences of the uncertainty encoded in a "spread" function \(\rho(q_i,p_i)\) on the phase space. What's different is that quantum mechanics postulates or guarantees that the ignorance or uncertainty about all physically meaningful questions can't ever go to zero.

In classical physics, models had the property that \(\rho\) could have been a delta-function and the ignorance was zero. But you could have always viewed this feature as an accidental feature of the simple enough models we considered. There has never been any *important principle* that would tell you that the statistical description of any theory of physics *must allow* the phase-space distribution to be equal to the delta-function.

Let me be more precise. You could have assumed and postulated this principle – it was true in all the models we call "classical" today – but this assumption has never been important for the agreement between the theory and experiments. It was never possible to use this philosophical assumption to improve the agreement between the theory and the data. It was only useful to make the theories "simple" in some way. Models of classical statistical physics were "simple" in the sense that they were always a "direct derivation" out of some deterministic theories where the uncertainty and ignorance were zero.

In quantum mechanics, it's no longer the case. Quantum mechanics involving a density matrix generalizes the descriptions in classical statistical physics with \(\rho(q_i,p_i)\) on the phase space. But the quantum mechanical models in terms of the density matrix \(\rho\) can no longer be derived from a simplified model where the uncertainty and ignorance completely disappear. The nonzero commutators redefine the realm of questions you can ask and quantities you can measure and their mutual relationships; and the omnipresent nonzero commutators guarantee that the ignorance and uncertainty cannot go away.

Instead, the description of a quantum mechanical theory that minimizes the uncertainty and ignorance is the description in terms of a pure state \(\ket\psi\). It's the "counterpart" of the delta-functions on the phase space except that the minimal blobs aren't quite delta-functions. They have the area \(2\pi\hbar\), and this nonzero area is connected with the fact that virtually all observables \(L\) have a nonzero uncertainty \(\Delta L\); the same holds for the projection operators \(P\), whose nonzero uncertainty means that the probabilities are strictly in between 0% and 100%. In case you need to know: if you compute \((\Delta P)^2\) according to the same formula used for \(\Delta x\) etc., you will get\[

(\Delta P)^2 = p_1-p_1^2 = p_1(1-p_1)

\] which only vanishes when \(p_1=0\) or \(p_1=1\), i.e. in the absence of any ignorance about the Yes/No proposition encoded by \(P\).
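The identity holds for any projector and any density matrix, because \(P^2=P\) implies \({\rm Tr}(P^2\rho)={\rm Tr}(P\rho)=p_1\). A generic numerical check (the dimension 5, the rank 2, and the random seed below are arbitrary illustrative choices):

```python
import numpy as np

# Generic check of (Delta P)^2 = p1 (1 - p1). All sizes/seeds are arbitrary.
rng = np.random.default_rng(0)

# A random rank-2 projector in a 5-dimensional space: P @ P == P, P == P.T.
Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))
P = Q @ Q.T

# A random density matrix: positive semidefinite with unit trace.
B = rng.normal(size=(5, 5))
rho = B @ B.T
rho /= np.trace(rho)

p1 = np.trace(P @ rho)                    # probability of the Yes/No question
varP = np.trace(P @ P @ rho) - p1 ** 2    # same variance formula as for x

print(np.isclose(varP, p1 * (1 - p1)))    # True
```

So the variance of a projector is just the binomial-style expression \(p_1(1-p_1)\), vanishing only for sharp Yes or sharp No.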

While the uncertainty or ignorance is bounded from below in quantum mechanics, it's completely misguided to try to divide the ignorance into "two pieces with a totally different explanation". The explanation of both is in terms of the same mathematical rules – and all the parts of the uncertainty and ignorance should always be attributed to the observer. The new feature of quantum mechanics is that it guarantees that there just can't be any "better observer" who could get rid of all the uncertainty; the commutators are nonzero for any observer, so a lower bound on the ignorance or uncertainty is a universal law that no one can circumvent, not even God or an Argentine left-wing pundit who abuses Him. The usual equations involving the density matrix \(\rho\) describe the uncertainty or ignorance of "both types" and they can't be quite separated from each other once you start to write the density matrix as a sum of many terms.
