Thursday, July 19, 2018

Postulates of quantum mechanics almost directly follow from experiments

Most of the ordinary people who have tried to understand modern physics find the novel logical framework of quantum mechanics challenging. Some of them have become full-blown anti-quantum zealots, which means that they scream "it cannot be true" and "physics must ultimately be governed by the logic of classical physics" – well, they love to use different words but this is exactly what they mean.

Although many of these people may have achieved various things, I find it impossible to consider them intelligent. I have explained the consistency of quantum mechanics (its internal consistency and its compatibility with observations) and the failure of every "realist" proposal to replace quantum mechanics from many perspectives.

Here, I want to argue that it's pretty much obvious that certain experimental facts just cannot possibly have a different explanation than the precise framework of quantum mechanics.




Anti-quantum zealots love to chop the empirical knowledge into tiny pieces. Ideally, they do something very specific – like following Bell's recipe to entangle two spins in some way and measure this and that (there is absolutely nothing special or interesting about the particular two-spin experiment discussed by Bell) – and because it's so specific, they fool themselves into thinking that there must be lots of theories different from Copenhagen quantum mechanics that explain the observations equally well.

They think that they just need some mess for randomness to arise, and since there's a lot of mess everywhere, the problem of explaining the "messy" observations must be easy within "realist" theories.




But this very methodology of thinking only about overly particular examples is just a stupid strategy for finding the truth. It's really a deceitful method for those folks to delude themselves and others. Quantum mechanics isn't just a theory of Bell's two-spin experiment or two or three or twenty similar experiments. It's a theory of everything. Well, "a theory of everything" is usually meant to denote quantum mechanics including the knowledge of the dynamics ("the Hamiltonian" or whatever generalizes it). So maybe I should say that quantum mechanics is the framework for theories that explain all experiments that have ever been made – and most likely, all experiments that may be done in the future, too.

And the randomness we need to explain the data isn't just some "mess". In fact, it isn't any "mess" at all. It's a very particular kind of randomness whose statistical features are exactly calculable from very nice and simple formulae in quantum mechanics. Outside quantum mechanics, you have no chance to get the right formulae.

Why don't we try to theoretically explain more general empirical facts? What about the uncertainty principle? For \(x,p\), it says\[

\Delta x \cdot \Delta p \geq \frac\hbar 2.

\] But why don't we talk about the uncertainty principle for two more general observables \(A,B\) instead?\[

\sigma_A \sigma_B \geq \left| \frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right| = \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle \right|.

\] You know, these \(A,B\) may be functions of many observables such as \(x,p\) or fields, particles' spins, and other things. The inequality above is an experimental fact that may be verified on infinitely many choices of the pair \(A,B\) which may be arbitrary functions or functionals of positions, momenta, spins, and/or fields.

Imagine you have zillions of graduate students. You divide them into groups. Each group gets some choice of \(A,B\) and the task to minimize the product \(\sigma_A \cdot \sigma_B\). They will work hard. Because they're smart, their best, i.e. minimal, product will be exactly what the uncertainty principle requires. In some approximation, the commutator on the right hand side may be approximated by \(i\hbar\) times the classical Poisson bracket.
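
If you want to see how sharp this statement is, here is a minimal numerical sketch (Python with numpy; the dimension, the seed, and the random "observables" are my illustrative choices, not anything from an actual lab): for random Hermitian operators and random pure states, the product of the standard deviations never dips below the commutator bound.

```python
# A minimal numerical check of the Robertson inequality
#   sigma_A * sigma_B >= (1/2) |<[A,B]>|
# for random Hermitian "observables" and random pure states.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # dimension of the toy Hilbert space

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def sigma(op, psi):
    """Standard deviation of a Hermitian operator in the state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ (op @ psi)).real
    return np.sqrt(max(mean_sq - mean**2, 0.0))

for trial in range(10_000):
    A, B = random_hermitian(dim), random_hermitian(dim)
    psi = random_state(dim)
    lhs = sigma(A, psi) * sigma(B, psi)
    rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))
    assert lhs >= rhs - 1e-10, (lhs, rhs)
print("The Robertson bound held in all 10,000 random trials.")
```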

So there's some uncertainty in Nature. Typically, if \(A\) is known accurately enough, almost all other quantities \(B\) have to be inaccurate. The outcomes of measurements of \(B\) are guaranteed to be random and the distribution has a certain minimum variance that is calculable from the commutator of some operators – approximately from the Poisson bracket (times Planck's constant). Does it imply something big? You bet.

This result of the many graduate students – I suppose that the dear reader isn't so stupid as to think that they will actually get wrong answers, i.e. that quantum mechanics really fails – says that Nature doesn't contain just "some" uncertainty and "some" randomness. The products of the uncertainties – which measure the amount of randomness – are precisely linked to the expectation value of the commutator of \(A,B\), i.e. approximately to \(\hbar\) times the Poisson bracket.

You can try it on arbitrary examples of \(A,B\) in mechanics or field theory, polynomial in \(x,p\) or more complex ones. So clearly, if you want to predict some experimentally measurable quantities – such as the minimum \(\Delta A \cdot \Delta B\) in a given experimentally prepared state – you will find out that the Poisson bracket is sort of important. And indeed, you may easily find out that you need to replace the Poisson bracket by the commutator.

For example, you may consider \(A,B\) to be components of the spin of a spin-1/2 particle along two axes given by unit vectors \(\hat a,\hat b\) i.e. \[

A = \hat a \cdot \vec j, \quad B = \hat b \cdot \vec j.

\] You may prepare any initial state of the particle's spin and measure \(\Delta A\cdot \Delta B\). The inequality will be obeyed. Well, in many cases involving the spin, it will not be possible to saturate the inequality for a generic choice of axes and states; but in continuous examples such as \(x,p\), the inequality may be saturated (by Gaussian wave packets).
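
Here is a sketch of the same check for this spin-1/2 example (again Python with numpy; the random axes \(\hat a,\hat b\) and the number of trial states are my illustrative choices, and \(\hbar\) is set to one for convenience).

```python
# The same check for the spin-1/2 example: A = a_hat . J, B = b_hat . J,
# where J = (hbar/2) * (Pauli matrices).
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [0.5 * hbar * s for s in (sx, sy, sz)]

def spin_along(n):
    """Spin component along the unit vector n."""
    return sum(ni * Ji for ni, Ji in zip(n, J))

def sigma(op, psi):
    m = np.vdot(psi, op @ psi).real
    m2 = np.vdot(psi, op @ (op @ psi)).real
    return np.sqrt(max(m2 - m * m, 0.0))

a_hat = rng.normal(size=3); a_hat /= np.linalg.norm(a_hat)
b_hat = rng.normal(size=3); b_hat /= np.linalg.norm(b_hat)
A, B = spin_along(a_hat), spin_along(b_hat)

smallest_gap = np.inf
for _ in range(5000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    lhs = sigma(A, psi) * sigma(B, psi)
    rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))
    assert lhs >= rhs - 1e-12
    smallest_gap = min(smallest_gap, lhs - rhs)
print("smallest (lhs - rhs) over the random states:", smallest_gap)  # never negative
```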

In proper quantum mechanics, you may prove the inequality for any \(A,B\) and any state in which you want to measure these two observables. It's a straightforward proof and its existence implies that quantum mechanics explains all these inequalities. The proof is so straightforward that its ingredients aren't just sufficient. They seem pretty necessary, too. If you change any piece that matters, the proof breaks down.
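
For completeness, here is a sketch of the standard (Robertson) derivation that the previous paragraph refers to. Define\[

\ket f = (\hat A - \langle \hat A\rangle)\ket\psi, \quad \ket g = (\hat B - \langle \hat B\rangle)\ket\psi.

\] The Cauchy–Schwarz inequality gives \(\sigma_A^2 \sigma_B^2 = \langle f|f\rangle\langle g|g\rangle \geq |\langle f|g\rangle|^2\), and because\[

{\rm Im}\,\langle f|g\rangle = \frac{1}{2i}\left( \langle f|g\rangle - \langle g|f\rangle \right) = \frac{1}{2i}\langle [\hat A,\hat B]\rangle,

\] we get \(\sigma_A\sigma_B \geq |\langle f|g\rangle| \geq |{\rm Im}\,\langle f|g\rangle| = \frac{1}{2}\left|\langle[\hat A,\hat B]\rangle\right|\). Every step uses the fact that the observables are linear operators and that the expectation values are computed from Born's rule.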

The proof works because all the quantities that are in principle observable – such as functions of \(x,p\) or fields or components of the spin – must be assigned the corresponding observables, which are linear operators on a complex vector space, and these operators generally refuse to commute with each other. To explain the experiments by the zillions of your graduate students, you simply need a theory where every function of \(x,p\) etc. is described by a linear operator on the Hilbert space. Similarly, you need the predictions to be probabilities of the outcomes of measurements of the observables (operators), and these probabilities must be calculated via Born's rule.

Just try to imagine that you want to explain the general uncertainty principle in a "realist" i.e. fundamentally classical theory. So an anti-quantum zealot will probably admit that the operators are "useful" in some way but they're just a caricature of some "deeper", classical theory. Now you must ask: Can such a hypothetical classical theory have a justification for the uncertainty principle? A reason that implies that if \(A\) is accurately measurable in the prepared state, \(B\) must be less accurate, and vice versa? And can it get the right bound for any choice of \(A,B\)?

In a "realist" theory, if the outcome of the measurement of \(A\) and \(B\) that you're going to get is knowable in advance, there's just no reason why there should be some unavoidable uncertainty. It doesn't matter what is the precise type of your "realist" theory – many worlds, Bohmian mechanics, objective collapse theory, and so on.

All these theories are "realist", which means classical when it comes to their basic logic. It means that there exists a "set \(S\) of possible states" at the given moment – when the set is a continuum, we call \(S\) the phase space, and we should use this terminology for a general \(S\), too. \(S\) may contain elements that remember all the information about the pilot wave as well as the "actual" positions of the Bohmian particles; or, if you talk about the many worlds, all the information about the number of worlds in which Adolf Hitler has won, and everything else of this sort.

In principle, the precise element of \(S\) that describes the current state is knowable. So God may know the perfect values of \(x_i,p_i\) that parameterize the right point on the phase space. In practice, the precise point isn't known to a mortal and we need to describe our knowledge by a probability distribution\[

\rho(x_i,p_i)

\] that is defined on the whole set \(S\). OK, what is the minimum value of the product \(\Delta A \cdot \Delta B\) in a realist theory? Well, a realist theory admits God or a "super-observer" who just knows the right element \(X\in S\) before the measurement – regardless of the things that the imperfect human experimenters did at the beginning of the experiment when they were preparing the initial state. For that element \(X\), there are some values of \(A\) and \(B\) which God knows, so He gets\[

\Delta A =0, \quad \Delta B = 0,\quad \Delta A\cdot \Delta B = 0.

\] In that world, graduate students don't have any fundamental obstacle that prevents them from approaching omniscient God increasingly closely, so these graduate students may only derive a vacuous inequality\[

\Delta A \cdot \Delta B \geq 0.

\] Too bad, they get a much weaker inequality which means that they have no explanation for the correct, stronger inequality – one that has a quantity proportional to \(\hbar\) on the right hand side.

This fatal disease kills all realist theories, whether they were promoted by David Bohm or any other fudging Bolshevik. All of these theories simply predict that the minimum possible product of uncertainties is zero.

OK, if you want him to fall, you need to shoot a Bolshevik thrice and hit him with your hands, too. So these Bolsheviks will protest: we mortals are separated from God – whom they will call George Soros or Joseph Stalin. Let me use the latter convention. So Joseph Stalin may know the precise \(X\in S\) with its values of \(A,B\), which means that His minimal \(\Delta A \cdot \Delta B\) is going to be zero.

But the graduate students can't do certain things because their apparatuses are unavoidably messy, so their \(\Delta A\cdot \Delta B\) will be bounded from below by the correct expression proportional to \(\hbar\).

Great theory, comrade. Now, all people are equal, right? What is it that allows Joseph Stalin to squeeze the uncertainties all the way to zero while the zillions of graduate students see the bound proportional to \(\hbar\) – a bound that Joseph Stalin knows to be spurious? Joseph Stalin is also equal to the students but he's even more equal, right?

Even if you invented some qualitative story that "explains" why the students must be expected to get a nonzero bound on the uncertainties, it would be just an infinitely small part of the problem you would have to solve to justifiably claim that you have a viable "realist" alternative to (Copenhagen) quantum mechanics. To sensibly claim that you have an alternative, you would actually have to offer a quantitative scheme that predicts the right lower bound for any choice of \(A,B\). Even if the uncertainty is just an artifact of the apparatuses' imperfection, these apparatuses are still governed by the laws of physics, and those laws must have some explanation of why the minimum uncertainty always seems to be exactly what the uncertainty principle claims, right?

This is an obvious yet huge task that none of the Bohms and similar stinky Bolsheviks has even attempted to solve. I think that all of them know that only the proper apparatus of quantum mechanics – in which the observables really are linear operators, and the calculable predictions really are subjective probabilities of outcomes – can achieve this triumph. The goal of the Bohmian, many-world, and similar theories is just to fake quantum mechanics – to "embed" quantum mechanics in some "realist" framework and claim that it's the better one.

But there's a problem with "faking". The things you're proposing are still "fake". If the comrades try to fake the capitalist economy but they impose all the Bolshevik constraints such as egalitarianism, they still get just the communist economy which totally sucks. The magic of capitalism and its prosperity strictly contradicts the communist axioms such as egalitarianism. You simply can't fake the capitalist economy within communism – and you can't fake quantum mechanics within a "realist" theory.

Your "realist" theory doesn't fundamentally associate the observable quantities with linear operators. So it's infinitely unlikely that the millions of bounds obtained by the zillions of graduate students will agree with the quantum mechanical right hand side, \(\langle [A,B] \rangle\). The probability isn't just infinitely small. It's really \[

P \approx \frac{1}{\infty^\infty}

\] because there are infinitely many experiments or choices of \(A,B,\ket\psi\), and for each of them, your fundamentally non-operator theory should predict the right lower bound on the product of uncertainties. An infinite amount of fine-tuning would be necessary for you to fake quantum mechanics in a "realist" theory – but even an infinite amount of fine-tuning wouldn't be sufficient because Joseph Stalin still knows that \(\Delta A \cdot \Delta B = 0\).

So the main message of my text is that you should ask:

Has someone ever asked why the Bohmian theory or any other "realist" theory predicts the uncertainty principle with the correct right hand side?

And the answer is a resounding No. No one has ever made even the tiniest first steps towards that goal. "Realist" theories really predict that you should always be able to reduce the uncertainties further i.e. closer to zero. But the experiment speaks in a clear language. The product of uncertainties just can't get beneath the bound proportional to \(\hbar\). The "realist" theories are falsified.

There's one cute, almost equivalent, way to kill the "realist" theories. And it's the universality of \(\hbar\). Tell all your teams of graduate students – who didn't know Planck's constant to start with – to write the minimum value of their \(\Delta A\cdot \Delta B\) as \(\frac{\hbar}{2} \left|\langle\{A,B\}\rangle\right|\) where the braces denote the classical Poisson bracket and \(\hbar\) is a new coefficient to be fitted. The fascinating thing is that regardless of their \(A,B,\ket\psi\), all of them will get the same value of \(\hbar\)!

This universality of Planck's constant is also totally incompatible with any realist theory simply because realist theories don't have and can't have any universal constant whose units are those of \(\hbar\). There's just no room for such a constant in classical or "realist" physics! The classical Hamiltonian dynamics is fully given by the Hamiltonian \(H\) whose units are just joules, but \(H\) isn't a universal constant and an overall rescaling of \(H\) only rescales the time variable, anyway. All other universal constants in classical physics are various coefficients defining various terms in \(H\) etc. and those apply differently to different degrees of freedom – they are not universal. For example, if some students try to determine the energy-to-frequency ratio \(\hbar\) from \(E=\hbar\omega\), "realist" theories predict that they must get different values of \(\hbar\) from different particle species etc. A universal value of \(\hbar\) would require an infinite amount of fine-tuning because there can't be a reason for it in a "realist" theory.

So if many groups of graduate students try to extract a constant with units of \(\hbar\) from their experiment, it's basically guaranteed that each group will have a different answer for \(\hbar\): "realist" theories predict that nonzero quantities with the units of \(\hbar\) simply cannot be universal constants of Nature! This is perfectly falsified by Nature where \(\hbar\) may be extracted from infinitely many different experiments (with particles or fields or strings or branes of any kinds, or any combinations of those) and it always has the same value, despite the high precision of the modern experiments.
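
To illustrate what this universality looks like inside quantum mechanics itself, here is a toy consistency check (Python with numpy; the "hidden" value of \(\hbar\), the truncation of the oscillator basis, and the particular pairs \(A,B\) are my illustrative choices): the ratio of the commutator to the symmetrized Poisson bracket comes out as one and the same constant for very different pairs of observables and very different random states – exactly because the observables are operators built with a single \(\hbar\).

```python
# Toy consistency check: within quantum mechanics, one and the same hbar relates
# the commutator [A,B] to the (symmetrized) classical Poisson bracket {A,B}
# for very different pairs of observables and very different states.
import numpy as np

HBAR = 0.7319  # a fixed "hidden" value in arbitrary units; the check recovers it

N = 40  # truncated harmonic-oscillator basis (with m*omega = 1)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # lowering operator
x = np.sqrt(HBAR / 2) * (a + a.conj().T)
p = 1j * np.sqrt(HBAR / 2) * (a.conj().T - a)

def expval(op, psi):
    return np.vdot(psi, op @ psi)

# (A, B, symmetrized operator version of the classical Poisson bracket {A,B})
pairs = [
    (x, p, np.eye(N)),                    # {x, p} = 1
    (x @ x, p, 2 * x),                    # {x^2, p} = 2x
    (x, p @ p, 2 * p),                    # {x, p^2} = 2p
    (x @ x, p @ p, 2 * (x @ p + p @ x)),  # {x^2, p^2} = 4xp, symmetrized
]

rng = np.random.default_rng(2)
for A, B, pb in pairs:
    psi = rng.normal(size=N) + 1j * rng.normal(size=N)
    psi[-5:] = 0          # stay away from the truncation edge
    psi /= np.linalg.norm(psi)
    comm = expval(A @ B - B @ A, psi)
    extracted = (comm / (1j * expval(pb, psi))).real
    print(f"extracted hbar = {extracted:.6f}")  # the same value every time
```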

"Realist" failure to get quantized measured values

The uncertainty principle is just one famous, and almost defining, consequence of the basic rules of quantum mechanics. But there are many others. Such as the quantized spectrum of the energy. The hydrogen atom only has (in the non-relativistic approximation) the energy levels\[

E_n = -\frac{E_0}{n^2}, \quad n = 1,2,3,\ldots

\] where \(E_0\approx 13.6\,{\rm eV}\). That can be seen in the emission or absorption spectra: the emitted or absorbed photons only carry the energies \(E_n - E_{n'}\). Great. Can a "realist" theory actually predict the discrete energy spectrum of atoms?
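
To see how specific the quantized prediction is, here is a small sketch (Python; the rounded constants \(E_0\approx 13.6\,{\rm eV}\) and \(hc\approx 1240\,{\rm eV\cdot nm}\) are the textbook values): the allowed photon wavelengths form a discrete list – e.g. the Balmer series – rather than a continuum.

```python
# Discrete hydrogen spectrum from E_n = -E_0 / n^2 (non-relativistic approximation):
# the emitted photon energies E_n - E_{n'} form a discrete set, not a continuum.
E0 = 13.6057  # eV, approximate hydrogen binding energy (ground state)
HC = 1239.84  # eV*nm, approximate value of h*c

def photon_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    delta_e = E0 * (1.0 / n_lower**2 - 1.0 / n_upper**2)  # positive, in eV
    return HC / delta_e

for n in range(3, 8):
    print(f"Balmer {n} -> 2: {photon_wavelength_nm(n, 2):7.1f} nm")
# H-alpha (3 -> 2) comes out near 656 nm, H-beta (4 -> 2) near 486 nm, etc.
```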

You may embed the mathematics of the wave functions in your "realist" theory. But the interpretation of the wave function will be wrong – the wave function will be misinterpreted as a classical wave – and this misinterpretation has far-reaching consequences.

One of them is that every observable that you can measure will have a continuous spectrum.

The reason is utterly simple. If you interpret the wave function as a classical wave, your phase space \(S\) is a connected, infinite-dimensional continuum. It's as continuous as you can get. If your apparatus ends up measuring the energy of a photon, \(E_\gamma\), you know that a priori, all positive values of the energy of a general photon must be allowed. If the transformation mapping the initial state to the final state is continuous in any way, it's obvious that you may perturb the desired final value of \(E_\gamma\), run the evolution backwards, and find an appropriately perturbed initial state that leads to this non-quantized value of the photon's energy.

Bohm's theory can't be constructed for relativistic particles such as photons (i.e. for Quantum Electrodynamics) but a Bohmian would surely say that they explain the measured quantized energy because the pilot wave gets reduced to several beams and the real Bohmian particle sits in one of them. Great, but this sleight-of-hand won't work if you measure other observables that aren't reduced to positions – such as the voltages in our brains, which are how we actually perceive things in the end.

All "realist" theories are in serious, and mostly fatal, tension with the quantized spectrum of many observables.

There are many more rudimentary, universal, empirical facts about the world governed by quantum mechanics that really instantly kill all "realist" theories. But a few are enough for today, so let me mention the final one:

Another fact about quantum mechanics: If you make a measurement of the observable \(L\) and the wave function collapses to an eigenstate \(\ket\psi\) of that observable, all the parts of the wave function that "existed" (outcomes that were possible) before the measurement completely disappear and they have exactly zero impact on anything after the measurement of \(L\).

Now, this is an extremely general empirical fact – i.e. a fact that you may experimentally check in millions of different situations involving thousands of different physical systems, particles of all types, with or without spins, fields, strings, branes, whatever you like.

Prepare the electron with its spin along some axis \(\hat n\). Measure \(j_z\). Suppose you get "up". Then all the probability distributions are fully calculable from the new initial state "up". The erased parts of the collapsed wave function are totally eradicated, totally forgotten. You may do anything to your experiment, try ingenious methods to persuade your clever apparatus to "remember" or "recall" the erased parts of the wave function. But your clever apparatuses will not be able to say anything about the number \(a\) that defined the state before the collapse\[

\ket \psi = \frac{ \ket\uparrow + a \ket\downarrow }{\sqrt{1+|a|^2}}.

\] Once you know that the spin is "up", the coefficient \(a\) is set to zero. Precisely and genuinely. No kidding.
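
Here is a toy Monte Carlo of what the quantum rules – and the experiments – say (Python; the choice to measure \(j_x\) after the collapse and the particular values of \(a\) are just illustrative): the statistics conditioned on the "up" outcome are identical, whatever the value of \(a\) was.

```python
# After a j_z measurement returns "up", the post-measurement state is |up>, and any
# later statistics are independent of the pre-measurement coefficient a.
import numpy as np

rng = np.random.default_rng(3)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def p_jx_plus_given_jz_up(a, shots=200_000):
    """Prepare (|up> + a|down>)/norm, measure j_z, keep the 'up' outcomes,
    then measure j_x and return the observed frequency of j_x = +hbar/2."""
    psi = (up + a * down) / np.sqrt(1 + abs(a) ** 2)
    p_up = abs(psi[0]) ** 2
    n_up = rng.binomial(shots, p_up)        # how many runs collapsed to |up>
    plus = (up + down) / np.sqrt(2)         # j_x eigenstate with eigenvalue +hbar/2
    p_plus = abs(np.vdot(plus, up)) ** 2    # Born rule applied to the collapsed state
    return rng.binomial(n_up, p_plus) / n_up

for a in [0.0, 0.3, 2.0 + 1.0j, -5.0]:
    print(f"a = {a!s:>10}:  P(j_x = + | j_z = up) ~ {p_jx_plus_given_jz_up(a):.4f}")
# All four numbers cluster around 0.5; nothing remembers a.
```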

If you think about it for a second, this trivial fact totally contradicts any natural (not fine-tuned) "realist" theory. Take Bohmian mechanics as an example. In that "realist" theory, there's the objective pilot wave, a classical field/wave whose numerical values are chosen to fake the quantum mechanical wave function, and then there are the objective "real" positions of the particles.

Now, the pilot wave guides the "real" particle somewhere, and you may measure the real particle and say something about the spin – Bohmian mechanics doesn't allow the spin directly so the spin measurement has to be reduced to some measurement of the position. So the Bohmian particle is known to be at the place corresponding to "up". However, the pilot wave still exists in the region that would correspond to "down", too.

The point is that this "wrong part of the pilot wave" hasn't been cleaned or forgotten. The pilot wave is coupled to other degrees of freedom in the physical system so, in principle, it should be observable. However, experiments clearly say that whatever you do, you just can't observe this "wrong part of the wave function". To avoid the contradiction with the basic empirical facts, Bohmian mechanics really needs some "janitors" that remove the zombie parts of the pilot wave at the places where the particle wasn't seen.

It's not just some aesthetic requirement for David Bohm to hire the illegal Mexican comrades. He really fudging needs it because if that mess isn't cleaned, his theory predicts that the mess will almost certainly show up in some doable measurements. This mess is coupled to the degrees of freedom remembering the particles' "genuine" positions, so whatever is done, it will be imprinted into some future measurements – which is not observed.

To say the least, the janitor hired by David Bohm has to make the evolution irreversible because cleaning is irreversible. You know, in quantum mechanics, there's absolutely nothing wrong about the irreversibility of the measurement. Measurements are fundamentally irreversible because the learning of the new data (just like in the classical Bayesian inference – and the collapse of the wave function is just a complex, non-commuting generalization of Bayesian inference) has a logical arrow of time. Schrödinger's equation governing the evolution of the wave function – the probability amplitude – is still perfectly reversible.
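
To make the analogy with Bayesian inference concrete, here is a tiny side-by-side sketch (Python; the fair/loaded die is my illustrative example): the classical update renormalizes a probability distribution after an outcome is learned, while the collapse projects and renormalizes the state vector in the same spirit – except that the quantum "hypotheses" are associated with non-commuting projectors.

```python
# The collapse as a non-commutative cousin of Bayesian updating.
import numpy as np

# Classical: a die is either fair or loaded; we learn that the roll was a six.
prior = {"fair": 0.5, "loaded": 0.5}
likelihood_six = {"fair": 1 / 6, "loaded": 1 / 2}
unnormalized = {h: prior[h] * likelihood_six[h] for h in prior}
norm = sum(unnormalized.values())
posterior = {h: v / norm for h, v in unnormalized.items()}
print("classical posterior:", posterior)

# Quantum: the state is projected onto the observed eigenspace and renormalized.
psi = np.array([0.6, 0.8j])              # some pre-measurement state
P_up = np.array([[1, 0], [0, 0]])        # projector onto the "up" outcome of j_z
collapsed = P_up @ psi
collapsed = collapsed / np.linalg.norm(collapsed)
print("collapsed state:", collapsed)     # exactly |up>; the other branch is gone
```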

However, if the wave function is "faked" and misinterpreted as some classical wave, all hell breaks loose. Your laws of physics have to include the irreversible janitors which make the fundamental classical transformations – the part of your theory that is governed by differential equations analogous to Schrödinger's equation – irreversible. That's also bad because the reversibility of Nature at the fundamental level is another experimental fact.

In quantum mechanics, all the irreversibility only exists at the subjective or psychological level. "Realist" imitations of quantum mechanics don't have any subjective or psychological level. So they either include janitors which violate the experimental fact about the fundamental reversibility of all unobserved processes in Nature; or they violate the experimental fact that the "not realized" part of the wave function is perfectly forgotten.

The "realists" don't bother to tell you which of these two failures they prefer. They don't discuss their janitors. That's very ironic because these janitors are totally essential to have a coherent theory of the measurement. Quantum mechanics has a coherent story about the measurement – it's a part of the universal postulates. The anti-quantum zealots didn't like something about the "measurement theory" in quantum mechanics (because it differs from the classical physics' story about the measurement which is the only "acceptable" one for them) and that was their excuse to pursue "realist" theories in the first place. But because they say nothing about the janitors – not even whether they should exist – they make it clear that the anti-quantum zealots have nothing to say about the measurement at all!

Whatever they would try to say about the cleaning of the not-realized portions of the wave function, it just doesn't work and it doesn't matter at all into which sect of the anti-quantum zealots – Bohmian, many-worlds etc. – a given Bolshevik belongs. (Well, in "many worlds", some information may be totally forgotten by being thrown away into a "different world" but the irreversibility – why you cannot return back from a different world – remains unexplained and if it holds, a massive fine-tuning has to be present. Moreover, if it's really impossible for the "different world" to influence yours again, that "different world" should be considered unphysical according to the empirical or operational rules.)

The "realist" theories are just old-fashioned classical theories which fundamentally contradict the uncertainty principle, the universality of Planck's constant, the quantization of energy and other observables, and the elimination of the not realized parts of the wave function and/or the reversibility of all unobserved processes in Nature – among other fundamental facts that I discussed elsewhere. And all the people persistently (in 2018) trying to negate the basic rules of the game as articulated in Copenhagen are morons – sadly, in most cases, pompous morons.

And that's the memo.

