What about e.g. the difference that one of them predicts all phenomena in Nature correctly while the other doesn't predict a single one correctly?

I was rehearsing a presentation for teachers – it's about the foundations of quantum mechanics. This has pushed me to make some arguments and explanations even more concise – and also comprehensive when it comes to the basic incorrect "alternative interpretations" that are being spread by the anti-quantum zealots.

First, let us recall how quantum mechanical theories work in general. These rules are called the "general postulates of quantum mechanics" and the anti-quantum zealots love to call them "the Copenhagen Interpretation" to make the postulates look relative or debatable which they are not.

Fine. The new rules of physics as understood since 1925 are the following:

- Physical theories of this new kind – quantum mechanical theories – may only predict probabilities of outcomes of measurements. All efforts to make the predictions non-probabilistic or "less ambiguous" are bound to fail because they build on incorrect assumptions about Nature.
- Statements whose probabilities may be predicted are statements about observables. Every observable must be mathematically represented by a Hermitian linear operator on a complex Hilbert space describing the particular quantum mechanical theory; and vice versa, every Hermitian linear operator corresponds to an observable that may in principle be measured by a sufficiently refined apparatus.
- The wave function \(\ket{\psi(t)}\) or its mixed generalization, the density matrix \(\rho(t)\), represents the observer's knowledge about the external world and there cannot exist any "more direct" description of the external world than through an observer's knowledge. In particular, the wave function or the density matrix is not observable, not even in principle.
- The transformation of knowledge from one moment to another is achieved by transforming the wave function (Schrödinger picture) or the operators (Heisenberg picture) by the unitary evolution operator \(U\) where \(U=\exp(Ht/i\hbar)\) if a Hamiltonian exists. More general "direct" transformations such as the S-matrix may be calculable in some theories. Intermediate pictures may exist and are mathematically proven to be equivalent.
- The eigenvalues \(\lambda_i\) from the equation \(L \ket{\psi_i} = \lambda_i \ket{\psi_i}\) – the spectrum of \(L\) – represent all the possible results of the measurements of \(L\).
- When an observer is preparing to measure the observable \(L\), he decomposes the state vector \(\ket\psi\) to the sum i.e. superposition \(\sum_i c_i \ket{\psi_i}\) of the normalized eigenvectors \(\ket{\psi_i}\) of \(L\). (In general, the eigenvalues may be degenerate and a slight yet self-evident generalization of these rules is needed – I avoid this complexity because it is clear what the generalizations are and this generalization has nothing to do with the alleged "philosophical subtleties" of QM.)
- The probability of getting the eigenvalue \(\lambda_i\) is obtained by Born's rule, as \(|c_i|^2\) where \(c_i\in\CC\).
- As soon as an observable \(L\) is measured, the changed knowledge about the state of the world is represented by the "reduction of the wave function" i.e. the replacement of \(\ket\psi\) by the term \(\ket{\psi_i}\) from the decomposition above. This reduction is just a complex generalization of the replacement of subjective probabilities in Bayesian inference.
- The wave function and the aforementioned operations with it are observer-dependent or "subjective" in general. Wigner's friend is the simplest example in which two observers use different wave functions at a given moment. No sharp contradictions between observers' conclusions may arise in this way, one may prove.
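The measurement-related postulates above may be sketched numerically. The following is a minimal illustration – the 2×2 Hermitian matrix and the state vector are invented for the example – showing the spectrum as the list of possible outcomes, Born's rule, and the reduction of the state vector:

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical Hermitian observable L and a normalized state |psi>
# (both made up for this illustration).
L = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi = np.array([0.6, 0.8j])

# The spectrum of L lists all possible measurement outcomes (lambda_i).
eigvals, eigvecs = np.linalg.eigh(L)   # columns of eigvecs are |psi_i>

# Decompose |psi> = sum_i c_i |psi_i> in the eigenbasis.
c = eigvecs.conj().T @ psi

# Born's rule: the probability of the outcome lambda_i is |c_i|^2.
probs = np.abs(c) ** 2                 # these sum to one

# One measurement: pick an outcome and reduce ("collapse") the state.
i = rng.choice(len(eigvals), p=probs)
psi_after = eigvecs[:, i]              # the post-measurement state |psi_i>
```

Note that the "collapse" in the last line is just the replacement of \(\ket\psi\) by the relevant \(\ket{\psi_i}\), the quantum counterpart of updating subjective probabilities after learning an outcome.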

Everyone who has studied these matters for years but who is incapable of verifying that the rules above are internally logically consistent or that these are the only rules that are actually supported by the amazing body of experimental verification is a moron. Although the previous sentence is somewhat interdisciplinary – on the border of physics and psychology – it could be added as another postulate of quantum mechanics to the list above.

Great. The three basic types of the anti-quantum revisionism are the following ones:

- R1: Bohmian mechanics
- R2: Many Worlds Paradigm
- R3: Objective collapse of the wave function models

R1: Bohmian mechanics says that the wave function objectively exists and is labeled a pilot wave. It evolves by Schrödinger's equation. On top of that, there also exists an unambiguous classical trajectory of an additional particle that is governed by Prince Louis de Broglie's guidance equation in which the pilot wave appropriately repels the particle from the interference minima and attracts it to the interference maxima so that the end points of the classical trajectories are statistically distributed just as QM predicts, assuming that the initial locations were properly distributed. Fans of R1 believe that this emulation of some behavior of quantum mechanics is enough to replace quantum mechanics. The variables added to the wave function, in the simplest case the particle position \(x(t)\), are generally called the beables.
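For concreteness, the standard one-dimensional form of the guidance law is \(v(x,t) = (\hbar/m)\,{\rm Im}\,[\partial_x \psi/\psi]\). A tiny sketch – in made-up units with \(\hbar=m=1\) – checks it on a Gaussian packet carrying momentum \(\hbar k\), for which the Bohmian velocity field is \(\hbar k/m\) everywhere:

```python
import numpy as np

hbar, m, k, sigma = 1.0, 1.0, 2.0, 1.0   # toy units for the illustration

def psi(x):
    # A Gaussian packet with momentum hbar*k (unnormalized; the guidance
    # law only uses the ratio psi'/psi, so the normalization drops out).
    return np.exp(1j * k * x - x**2 / (4 * sigma**2))

def guidance_velocity(x, dx=1e-6):
    # v(x) = (hbar/m) * Im( psi'(x) / psi(x) ), derivative taken numerically
    dpsi = (psi(x + dx) - psi(x - dx)) / (2 * dx)
    return (hbar / m) * np.imag(dpsi / psi(x))

# For this packet the guidance velocity is hbar*k/m = 2.0 at every point:
for x0 in (-1.0, 0.0, 0.7):
    assert abs(guidance_velocity(x0) - hbar * k / m) < 1e-4
```

Near interference minima of a more complicated \(\psi\), the same formula produces the large velocities that steer the trajectory away – the "repulsion" mentioned above.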

R2: Everett's Many Worlds Paradigm says that the wave function is an objectively existing wave that never collapses and its appropriate split into terms defines "many worlds" – alternative histories which simultaneously exist – and the observers in these parallel worlds feel that they exist separately from the other worlds and that they have detected random results.

R3: The Ghirardi-Rimini-Weber (GRW) models also say that the wave function is an objectively existing wave. They claim that no observers play an active role in physics. Instead, Nature uses "its own observers" that, according to a Poisson process, prevent the particles' locations from being too ambiguous or uncertain. For every particle, on average once per time \(T\), the whole wave function collapses so that the post-collapse reduced density matrix for the particle is a 3D Gaussian of width \(R\).
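A toy 1D version of such a GRW "hit" – with illustrative, non-empirical parameters – shows the structure: the state is multiplied by a Gaussian of width \(R\) whose center is sampled with a Born-like weight, which destroys a "dead-alive" superposition of two distant packets:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
R = 1.0                 # toy localization width, not the empirical GRW value

# A "dead-alive" superposition: two well-separated packets at x = -4 and x = +4
psi = np.exp(-(x + 4)**2 / 2) + np.exp(-(x - 4)**2 / 2)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

def grw_hit(psi):
    # Weight of each candidate collapse center x0: the squared norm the state
    # would have after the multiplication by the localizing Gaussian.
    weights = np.array([np.sum(np.abs(psi * np.exp(-(x - x0)**2 / (4 * R**2)))**2)
                        for x0 in x])
    x0 = rng.choice(x, p=weights / weights.sum())
    psi_new = psi * np.exp(-(x - x0)**2 / (4 * R**2))
    return psi_new / np.sqrt(np.sum(np.abs(psi_new)**2) * dx)

psi_after = grw_hit(psi)
# The hit almost always lands near one of the two packets, so the post-collapse
# state is concentrated on one side and its spatial variance shrinks.
```

The Born-like weighting of the collapse center is what lets these models reproduce the usual \(|\psi|^2\) statistics for sharp measurements.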

**Lethal flaws of these revisionist frameworks**

*The labels R1,R2,R3 in each title indicate which of the frameworks suffer from the given flaw*

OK, let's go. The bold titles after each subset of R1,R2,R3 will convey *correct* statements; the "interpretations" say the opposite.

**R1,R2,R3 vs The wave function isn't measurable**

All the "interpretations" above – and all "realist critics' interpretations" – postulate that the wave function (or the density matrix) represents a collection of objectively existing complex numbers. This statement about the objective existence *must physically mean* that the value of \(\psi(x,y,z,t)\) for an electron must be measurable in principle (in one repetition of an experiment).

It's obviously not the case. No one has ever designed magnetic sawdust or a voltmeter that would show us "what the wave function looks like in space", and if the laws of QM are at least somewhat correct, QM clearly implies that this will never be possible. So the "interpretations" R1,R2,R3, and others are assuming the existence of measurement protocols for \(\ket\psi\) that almost certainly can't exist in Nature – so they almost certainly contradict the observations.

Similarly, all the pictures add some extra information that is in principle objective but that, for some reason, must have zero influence on any experiment. R1 postulates the reality of the extra classical particles, R2 postulates the reality of the counters for every type of world (and perhaps the particular detailed events in all the other worlds), R3 postulates the reality of some extra added collapses (flashes). The condition that all of them remain unobserved (which seems necessary to match the experiments) amounts to an infinite number of conditions and they can't be simultaneously obeyed in any of the alternative frameworks – unless one adjusts the framework to be an "emulation of QM including the observer" which means that infinitely many prescriptions for "what an observer 'should' do and measure in various situations" have to be added as parts of the definition of the alternative theory. Due to the infinite number of prescriptions and their parameters, these alternative pictures would be completely unpredictive.

**R1,R2,R3 vs The molecules' heat capacity is of order \(k_B\)**

According to the universal methods of statistical physics, the heat capacity of a molecule may be calculated as a certain change of the entropy per unit temperature, \(dS/dT\), or a multiple of it, with some other variables fixed and/or with some extra constant terms. That's because at the thermal equilibrium, the probability gets evenly divided among all mutually distinguishable states of the molecule.

Experimentally, all molecules' heat capacities at normal temperature are of order \(k_B\), the Boltzmann constant, which means that a molecule's detailed inner workings only carry a small number of bits, also of order one. Quantum mechanics achieves this very small information per molecule because only the ground state (lowest energy eigenstate) and a small number of excited states have sizeable probabilities and contribute to the information. It's essential that two nearby, non-orthogonal wave functions are not mutually distinguishable according to QM – this non-reality of the wave function is needed for the entropy to be given by the logarithm of the dimension of the Hilbert space, and not the number of points in the Hilbert space (or in the complex projective space of "rays").
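A one-formula check of the claim for the simplest quantum system: a two-level system with gap \(\Delta\) has heat capacity \(C = k_B\, x^2 e^x/(e^x+1)^2\) with \(x=\Delta/k_B T\) (the standard Schottky formula, derived from \(Z = 1 + e^{-x}\)), whose maximum is about \(0.44\,k_B\) – of order \(k_B\) at every temperature:

```python
import numpy as np

kB = 1.0   # work in units where the Boltzmann constant equals one

def heat_capacity(T, Delta=1.0):
    # Two-level system: Z = 1 + exp(-Delta/(kB*T)); C = d<E>/dT reduces to
    # the Schottky formula C = kB * x^2 * e^x / (e^x + 1)^2, x = Delta/(kB*T).
    x = Delta / (kB * T)
    return kB * x**2 * np.exp(x) / (np.exp(x) + 1)**2

T = np.linspace(0.05, 5.0, 1000)
C = heat_capacity(T)
# The peak (the "Schottky anomaly") is about 0.44*kB; C never exceeds O(kB),
# unlike a system with many objectively real, distinguishable microstates.
```

A system whose state were a collection of objectively real, mutually distinguishable complex numbers would have vastly more equally accessible microstates and a correspondingly huge \(dS/dT\).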

All realist interpretations assume lots of detailed degrees of freedom – especially the objective wave function (R1,R2,R3) and sometimes also the exact positions of particles (R1) and/or the counters saying how many worlds of each type are in a certain state (R2). So all these pictures – all realist pictures – predict a much greater "real" information per molecule, and therefore much higher heat capacities than something of order \(k_B\).

None of the defenders of these revisionist pictures realizes it because they lack the background in statistical physics and/or don't want to analyze inconvenient facts. They avoid the detailed description of the collapse which would be needed in a full theory – and when it is incorporated, each such completed theory predicts a huge information per molecule at thermal equilibrium, and therefore a huge heat capacity, in conflict with observations.

In this sense, we may say that the very title of Carroll's book, "Something Deeply Hidden", is completely wrong. The observation about thermodynamics in the previous paragraphs – and many no-go theorems about the hidden variable models – make it clear that "Something Deeply Hidden" cannot exist at all! If there were any deeply hidden information (e.g. about a molecule) in Nature, its heat capacity would have to be much higher than \(O(k_B)\) but it's not. The likes of Carroll seem to miss that they're "walking in the exact opposite direction" than the one quantum mechanics took. Quantum mechanics says that there's "less" possible maximum information that can exist about each system than assumed classically – e.g. the phase space has fewer points (because they're clustered into the \((2\pi\hbar)^K\) cells) than the infinite number in classical physics. The likes of Carroll want to *add* hidden variables – exactly the opposite of what you need to account for the quantum phenomena.

Quantum mechanics postulates "less information" about a fixed real-world object than classical physics – but allows this information to be studied in new ways, probabilistically through any of the many Hermitian linear operators/matrices.

**R1,R2,R3 vs Any proper understanding of the measurement must admit that "something collapses"**

Proper quantum mechanics contains the rule that, just like in Bayesian inference, the wave function or density matrix collapses when the theory is applied and when a measurement is made. This is needed to allow controllable, simple enough initial states that are used for any doable calculation; as well as the "cleaning of junk" at the end.

Proponents of R1,R2,R3 don't like the collapse but it's clearly needed at the end – because we repeatedly start to describe the world or objects in it with clean, controllable initial states – and they have no replacement for the collapse. A part of the engine is missing, no replacement is substituted, the theories don't work at all and don't allow one to justify any of the controllable calculations that begin with a nice initial state.

**R1,R2,R3 vs There exists no canonical or unique split of the wave function into terms, no universally preferred observables**

Proper quantum mechanics doesn't postulate and doesn't need any particular split of the wave function into terms; and it doesn't need any preferred observables. These choices must be made by the observer (this is sometimes called "the Heisenberg choice"). The observer must know what observable he is going to measure, and the probabilities of its outcomes may only be calculated once this choice is inserted and the wave function is decomposed into the eigenstates of that observable.

Quantum mechanics may calculate the answers – but to do so, it needs the precise questions, including the choice of observables, from the user. It's unavoidable and "fair" that the observer has to insert something to pinpoint the right question – a requirement codified in one or two of the rules of QM above. There exists no working replacement for this "division of responsibilities" between the theory and its users.

Fans of R1,R2,R3 don't "like" that the observer is fully responsible for articulating the question whose answer should be calculated by quantum mechanics. So they implicitly or explicitly say that Nature itself is responsible for the choice of preferred observables or preferred bases or decompositions of the wave function. According to their assumptions, these preferred observables must therefore be given uniquely by mathematical rules.

R1 needs to choose preferred observables which produce the "beables". That's normally claimed to be the position of the particle but no analogous "natural" choice may be made for particles with spin, quantum field theories, or string theory. R2 needs to split the wave function into "terms interpreted as the many separate worlds" but there exists no canonical split of this kind. (Decoherence cannot be said to be "quite a unique" prescription, either, because decoherence proceeds assuming some semi-arbitrary separation of the system into the environment and the interesting core, and the details of this separation require someone like an observer, anyway, because labeling something "environment" really means "I won't be able or willing to observe these variables".) Locality doesn't help because e.g. the quantum mechanical description of a brain, a compact object, involves a pretty much connected wave function. R3 needs to generalize the notion of the "particle's position" that is objectively reduced by the added collapse phenomena.

But it's obvious that no such observer-independent yet preferred observables, bases, or decompositions may exist. It's really a postulate of QM – and an experimentally proven one – that with various choices of the apparatus, different – and mutually non-commuting – observables that are properties of an object may be measured, according to the observer's choice of the apparatus or behavior. The very postulate of preferred observables, bases, or decompositions in R1,R2,R3 is indefensible and any such preferred objects would be extremely mathematically unnatural. For example, the choice of a preferred basis or observable on the 2-dimensional Hilbert space of the electron's spin would pick a preferred axis in the 3D space and break the rotational symmetry. It's important that no such symmetry-breaking extra structures exist in the theory. To say the least, no such extra superstructures have observable consequences, otherwise we could observe the breaking of the rotational symmetry (in our example). But if these superstructures are agreed to have no observable consequences, they should be considered physically non-existent.

**R1,R2,R3 vs State-of-the-art quantum mechanical theories include very different observables from non-relativistic QM**

There exist no defensible generalizations of the realist "interpretations" for the theories that high-energy physicists (and even condensed matter physicists etc.) actually use today – theories whose particles have spin; are obtained as quanta of fields; and have an internal structure because they're strings or branes or black holes with the information linked to the event horizon.

R1 has no meaningful picture of what the sensible "beables" generalizing the particle positions could be in the realistic theories. R2 has no generalization of the "distance between particles" that could be used as a criterion to split the worlds. R3 has no observable generalizing the particle positions whose values are "localized" by the extra collapses – in the more realistic theories that are actually not constructed from particle positions as fundamental degrees of freedom.

**R1 (and maybe R2,R3) vs Phenomena of Nature are Lorentz-covariant, relativity holds**

The R1 Bohmian mechanics is a textbook example of a theory that violates locality. For many particles, the pilot wave (wave function) is a multi-local object that directly influences distant places at the same moment. A particular slicing of the spacetime is needed and a transformation of the description to a different inertial frame is impossible. So a postulate of special relativity is violated by R1. Consequently, the conclusions of relativity are almost certainly violated, too.

In particular, some signals will propagate faster than light. The suppression of the faster-than-light signals amounts to infinitely many conditions and it's almost certainly impossible to obey all of them even when the fine-tuning of a finite number of parameters is allowed.

Quantum mechanics (Copenhagen) doesn't have any problem with relativity because it postulates that the particular slicing or procedure to collapse the wave function is immaterial and must only be done correctly before the next prediction is needed. The wave function is unobservable in QM which means that there are no observable consequences of the detailed timing of the collapse, or the choice of the spacetime slice on which the wave function collapse looks instantaneous. Only the probabilities are considered physical predictions and they may be proven to be Lorentz-covariant. The Lorentz covariance of a QFT is manifest in the Heisenberg picture as well as Feynman's path integral approach.

R2,R3 would actually violate the Lorentz covariance as well – as soon as the missing structure were inserted. The many worlds of R2, if one needed to define which parts of the wave function are separate from each other, would actually require quantifying something like a "spatial distance between two packets in a wave function" and this distance is only well-defined assuming a spacetime slice. The separation into many worlds would therefore be inertial-system-dependent, too, but that contradicts the statement of the R2 paradigm that the splitting into worlds is objective and observer-independent.

R3 needs to add the extra collapses whose frequency (number of collapses per unit time) is generally proportional to the number of particles or whatever generalizes it. But the number of particles is an integral over the 3D space that depends on the slicing, too. So a proper generalization of the R3 theories to a quantum field theory would require the theory to count the "number of particles \(N\)" which would make the extra added structure, the additional objective collapses, inertial-frame-dependent, too. Also, due to the Unruh effect, the rate of the collapses would be predicted to be very different in accelerating frames (particle production changes \(N\)) and theories of this kind would also be incompatible with the equivalence principle because gravity and acceleration could be distinguished by the number of extra collapses per unit time.

To summarize, pretty much all the realist "interpretations" postulate an extra superstructure that is bound to pick a preferred inertial frame and break the Lorentz symmetry (and/or the equivalence principle) – and therefore conflict with the observation that the Lorentz symmetry holds and preferred inertial frames don't exist.

**R2 vs Physics needs to have axioms (Born's rule in QM) that talk about probabilities**

This is a problem of the Many Worlds Paradigm only, R2. Proper QM says that all predictions of the theory are fundamentally the collections of calculable probabilities. R2 doesn't speak about probabilities at all which means that it doesn't allow *any* of the actual predictions of QM to be connected with observations. R2 just pretends that probabilities aren't needed – but when probabilities are banned or discarded, then all of the quantitative physics is discarded.

R2 really naturally postulates that all the "many worlds" are equally real and should be equally likely. Defenders of R2 disagree on whether Born's rule is consistent with their picture and/or derivable from their picture without circular reasoning. More importantly, all of them want to erase all axioms of physics that are formulated in terms of probabilities – like the Born's rule postulate of QM above – but none of them has any idea how physics could work without an axiom involving probabilities.

R1,R3 don't suffer from this problem because they still postulate that \(|\psi|^2\) affects some probabilities. In R1, this density is a part of the pilot wave that makes the "real trajectories" more or less likely near the interference maxima or minima. In R3, the final state of the added objective collapses depends on \(|\psi|^2\) at one point or another.

**R3 vs There are no new dark-matter-like flashes from the foundations of QM**

R3, the objective collapse framework, basically adds a "kick" to the particle, localizing it to a packet of width \(R\) roughly once per time \(T\). Its proponents incorrectly think that they need to add such an extra "localization mechanism" because they want the wave function to describe how the real world "really feels", and the real world feels "sharp, not diluted, and free of dead-alive mixtures".

In QM, no such collapses are needed because the wave function describes the probabilistic distributions and when their support is wide or ambiguous, it doesn't mean that the phenomena or outcomes of measurements are fuzzy. Outcomes of measurements are predicted by QM to be sharp – just uncertain, and that's why the probabilities are "spread"!

But if one adopts the "need to objectively shrink the wave function" and prevent it from developing dead-alive macroscopic superpositions and too wide packets that "feel fuzzier than what we feel", then both the width \(R\) and the time \(T\) need to be small enough, for the localization to be sharp enough and for the "strange dilution of the wave function" to be avoided even temporarily.

But if \(R,T\) are this small, then one adds too many "kicks" per unit time in which a particle's momentum is changed by \(\hbar/R\) or so, by the uncertainty principle. These extra kicks/collapses manifest themselves as flashes analogous to the spontaneous ionization of an atom or the radioactive decay of a nucleus. Even in very pure huge canisters of water, no such processes are observed. The upper bounds on their frequency are extremely tight e.g. in the proton decay experiments. It follows that \(R,T\) simply cannot be too small. At least one of them must be very large and the localization therefore "doesn't exist" in practice (the added collapses are effectively invisible) and cannot therefore be the right explanation why we "feel the world to be sharp".
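A back-of-the-envelope estimate of the \(\hbar/R\) kicks – taking, as assumptions, the commonly quoted GRW parameters \(R\sim 10^{-7}\,{\rm m}\) and one hit per \(T\sim 10^{16}\,{\rm s}\) per nucleon – shows the trade-off: with these values each kick deposits a tiny energy and the heating is unobservable, while shrinking \(R\) or \(T\) to make the localization "sharp" inflates both the heating and the flash rate that the null experiments constrain.

```python
# Momentum kick and spontaneous heating from the GRW collapses, using the
# commonly quoted GRW parameters as assumptions (not measured values).
hbar = 1.054e-34          # J*s
m_n = 1.67e-27            # kg, nucleon mass
R, T = 1e-7, 1e16         # collapse width (m) and mean time between hits (s)

dp = hbar / R             # momentum kick per collapse, ~1e-27 kg*m/s
dE = dp**2 / (2 * m_n)    # energy deposited per kick, ~3e-28 J
N = 6.0e23                # roughly the number of nucleons in a gram of matter
heating = N * dE / T      # spontaneous heating power of ~1 g, in watts

print(f"energy per kick: {dE:.1e} J, heating of ~1 g: {heating:.1e} W")
```

Since \(dE \propto 1/R^2\) and the rate scales as \(1/T\), the heating grows as \(1/(R^2 T)\) – which is why "sharp enough" localization parameters clash with the observed absence of flashes.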

**R1,R2,R3 vs Lots of additional choices would ruin the predictivity of a theory**

R1 needs some precise generalization of the pilot wave equation for the more complex, QFT-like or string-like frameworks, and aside from the choice of the "beables" which is unclear, there would be many ways to add the extra "piloting" terms that would still yield the same statistical distributions of trajectories but different detailed individual trajectories. The number of these – absolutely unobserved – parameters would probably be infinite in a complex enough theory. All of them are absolutely unobservable.

R2 needs someone to define special "cuts", the critical distance between packets or something else that allows two parts of a wave function to "become" separate worlds. The "cut" could depend on many other circumstances. The precise definition of such a "cut" would almost certainly depend on infinitely many parameters, too. Not a single phenomenon linked to these parameters is observable.

R3 was said to reduce the particle's wave function to a Gaussian of width \(R\) after the collapse. The choice of the Gaussian is clearly arbitrary. A general theory of this kind allows an arbitrary post-collapse shape of the wave function. That's clearly infinitely many added parameters. None of them seems observable because the whole class of postulated new phenomena seems to be physically invalid. In fact, the most general GRW-like mechanism of the extra added collapse would be described by many more parameters than just the shape of the post-collapse wave function.

**Note that most of the problems above aren't just some aesthetic details – they are rather sharp contradictions with the experimental facts. The observations say that "many phenomena are almost certainly invisible or non-existent" while the alternative paradigms say that "many such phenomena should almost certainly be real and observable" – or vice versa.**

This was my proof that all the people claiming that the proper, "Copenhagen", quantum mechanics should be completely replaced by an alternative "interpretation" (or supplemented by some superstructure) are idiots. QED.
