Stephen Hsu has posted a preprint on the many worlds interpretation, *The measure problem in no-collapse (many worlds) quantum mechanics* (which he announced on his blog as well). In the arXiv comments, the arXiv admin added a note saying that the text significantly overlaps with a preprint that Hsu posted in 2011. I agree with the content of the admin's critique. Hsu and all these people just keep on repeating the same verbal flapdoodle; they have absolutely no new results or ideas, and nothing that makes sense.

However, I don't like the idea that an anonymous admin "edits" the submitted preprints (I don't remember seeing such a thing in hep-th) which is why I suspect that this particular "admin" could have been an even more radical anti-quantum (or pro-many-worlds) jihadist than Hsu himself.

At any rate, the main claim of Hsu's new paper is right but not new at all; almost all his remaining comments are totally wrong. His main claim is that the many worlds interpretation postulates that all possible histories objectively exist and it implies that almost all of these histories are the so-called "maverick histories", a euphemism that Everett invented for histories whose probability is so tiny that it's zero for all practical purposes (in plain English, histories that we never see occur). And Hsu says that all attempts to derive the Born rule or eliminate "maverick histories" suffer from circular reasoning.

So far so good. But this objection against the "many worlds interpretation" in any form (there exists no form of this metaphysical wishful thinking that would be at least remotely viable) is one that *every intelligent person* must have raised when she was told what the "many worlds interpretation" is supposed to achieve.

Also, you don't need to have a vast mathematical training to realize that the "maverick branches" are one of the lethal problems that instantly, totally, and irreversibly show that the "many worlds interpretation" is a non-starter. You just need to be sufficiently intelligent.

The guy behind the InspiringPhilosophy channel has recorded numerous films defending the Christian beliefs – that I mostly find about as irrational as Hsu's paper and similar papers – but he has also recorded several videos coherently explaining why quantum mechanics is incompatible with what is euphemistically known as "realism". And those videos are much more intelligent than e.g. Hsu's paper.

I've embedded the 17-minute critique of the "many worlds interpretation" at the top and it is as crisp as you can get. Most of the basic ways to show that the "many worlds interpretation" doesn't work are presented very clearly. The problem of deriving the Born rule – and the "maverick branches" are just a particular way to describe this problem – is discussed from 4:32 in the video.

Classical physics used to be able to predict the truth values of propositions about Nature with certainty – if you inserted the full information about the initial conditions which was in principle always possible. You start with two planets at two points, with given velocities, and classical mechanics may unambiguously calculate whether they will collide in the next 100 years or not.

But quantum mechanics doesn't and can't work like that. The uncertainty principle (which mathematically follows from the nonzero commutators between most pairs of observables) is a key feature that distinguishes quantum mechanics from classical physics – from any classical theory – and this principle implies that even if one knows as much as possible (in principle) about the initial state, most questions about the future will have uncertain answers (therefore the name, "uncertainty principle") and the theory can "only" calculate the probabilities that one property of the final state or another will be satisfied when the final state is observed.

All the predictions in classical physics could have been phrased in terms of the "correct final values of observables" which were exact and objective. But in quantum mechanics, all the predictions are probabilities of one outcome or another. The values of these probabilities matter. All the scientific knowledge is encoded in the values of such probabilities! Very rarely, quantum mechanics predicts probabilities to be 0% or 100% exactly (usually some violated conservation laws or causality have to be involved when some probabilities are strictly 0%). Most of the time, what matters is whether quantum mechanics predicts something to be more likely or less likely. The union of many "unlikely" predictions is "extremely unlikely" – the value of \(P\) is tiny – and that's quantum mechanics' way of eliminating possibilities.
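As a toy illustration of how the Born rule turns a state vector into the probabilities that carry all of the predictive content, here is a minimal numpy sketch (the amplitudes are hypothetical numbers chosen purely for illustration):

```python
import numpy as np

# A toy state vector over four mutually exclusive outcomes.
# The amplitudes are hypothetical, chosen only to illustrate the rule.
psi = np.array([0.70, 0.70, 0.14, 0.01], dtype=complex)
psi /= np.linalg.norm(psi)          # normalize so the probabilities sum to 1

probs = np.abs(psi) ** 2            # Born rule: P_i = |<i|psi>|^2

assert np.isclose(probs.sum(), 1.0)
# The last outcome is possible in principle but so unlikely that we may
# de facto ignore it -- quantum mechanics' way of "eliminating" it.
print(probs)
```

The tiny last entry is the quantitative meaning of "extremely unlikely": the possibility formally exists, but the value of \(P\) makes it irrelevant in practice.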

Because the probabilities play the central role, they give a natural measure on the sets of possible outcomes. Some outcomes may "exist as possibilities" but because they are calculated to be very unlikely, we may de facto ignore them.

For example, Hitler had arrested some people who might have assassinated him later. But they had a nonzero probability to perform quantum tunneling and escape from the prison. The probability of quantum tunneling is clearly nonzero – this is one of the novelties that define the new quantum physics. So the "many worlds interpretation" allows the future to contain a world war of the kind we actually encountered; but most of the histories that are a priori "possible" (or a posteriori possible when you only care about probabilities' being zero or nonzero) look like histories with people walking through the walls and assassinating leaders by their minds – they focused so strongly that the upward-fluctuating electromagnetic radiation coming from their brains happened to be strong enough to shoot Hitler, and so on.

A discussion of ludicrous histories like that could be funny – insert your own fiction here – but I don't want to distract you. My point is that all these "crazy histories" are actually totally generic.

*Most* of the histories that are "possible" or that occur with "at least slightly nonzero" probabilities are insane. Even when you use Feynman's path integral to calculate the evolution, you must sum over *literally all histories*, including one in which the whole Earth visits the neighborhood of Jupiter for a while tonight. Before you insert a sensible dynamical measure, "almost all" histories for the Earth contain the planet leaving the Solar System for a while tonight.

Those "crazy" histories are weighted by the same \(\exp(iS/\hbar)\) Feynman integrand as all other histories. The absolute value of the integrand is always the same – for crazy as well as unsurprising histories. However, the complex phases of the "crazy" histories tend to oscillate extremely quickly and the overall amplitude almost exactly cancels. The "crazy" histories are said to be "basically impossible" in quantum mechanics because of destructive interference; and because the squared absolute value of the complex amplitudes has to be interpreted as the probability, the ultimate "weight".
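The destructive interference of the rapidly oscillating phases can be seen in a one-dimensional toy model. In the sketch below, \(S(x)=x^2\) is a stand-in for the action and `lam` plays the role of \(1/\hbar\); both are illustrative choices, not a real physical system:

```python
import numpy as np

# Stationary-phase toy model: contributions exp(i*S/hbar) from histories
# far from the classical (stationary-action) one cancel destructively.
# S(x) = x**2 is a stand-in action; lam plays the role of 1/hbar.
lam = 200.0
dx = 1e-4

x1 = np.arange(-0.5, 0.5, dx)        # "histories" near the stationary point
near = np.sum(np.exp(1j * lam * x1**2)) * dx

x2 = np.arange(2.0, 3.0, dx)         # "crazy" histories, far away
far = np.sum(np.exp(1j * lam * x2**2)) * dx

# The rapidly oscillating phases of the far-off window nearly cancel.
print(abs(near), abs(far))
```

The window far from the stationary point contributes orders of magnitude less than the window around it, which is the mechanism suppressing the "crazy" histories in the path integral.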

If you're intelligent and if you're neither a retarded young wannabe nor a senile has-been, you must totally understand that for the right theory to be able to suppress the "maverick histories" – the generic possibilities that are clearly wrong in practice – the probability has to play a fundamental role in the theory. It cannot be "derived" or "emergent" or "handwaved" in any way. In a (wrong) realist theory, the "maverick histories" have to be made real by the basic rules of the theory. Quantum mechanics makes them "basically forbidden" by calculating tiny probabilities for them – and by saying that *everything in physics* is about the values of the probabilities.

But if you have any *realist* theory that says that "something is real" or "something is not" in this binary way, it is spectacularly obvious that the "maverick histories" have to be real, not unreal, and the theory will therefore predict that it's absolutely possible for us to end up on those "maverick branches". This prediction is wrong. It's not just one particular prediction. It's a template for *any prediction* of *absolutely everything or anything* that you have ever seen or you could see in the world – in all of science and in the everyday life, too. The "many worlds interpretation" fails in totally everything and the failure is maximal.

It is totally clear that if you want to have any theory that doesn't acknowledge the fundamental role of probabilities in the laws of physics – and that instead tries to objectively say that "something is real" and "something is not" – you will run into this problem. All the people who have continued to work on this nonsensical metaphysical program for some 50 years must be at least borderline mentally ill. What they're doing is exactly like trying to calculate \(1+1\) again and again and hoping that the rules of mathematics or logic will suddenly change and the result will be \(5\). Sorry to dash your hopes, Everettians, but it will *never* be \(5\).

A month ago, on Sunday at 4:30 am or so ;-), in one of the most expensive apartments in Prague on the National Avenue, and with several glasses of a very expensive vodka provided by our host, I asked a Czech physicist what really made him believe in this totally hopeless "many worlds" paradigm (which he previously endorsed). And I immediately got an answer. Sometimes, if you want to know why people believe utterly idiotic things, you have to interact with them for the whole night.

When he was an undergrad, he saw a simulation of a wave packet, something like the video above (I badly wanted better videos but it's not easy to find good ones). A wave packet approaches a barrier or something like that and it splits into two separated packets, a reflected one and a transmitted one, and those continue like two different histories that are geometrically separated from one another. Both of these two new packets "exist" and you may find yourself in one of them so the whole world may be phrased in this way. The only new feature of the complex situations is that the two sub-packets are separated in a very high-dimensional (Hilbert) space.
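The splitting he saw is easy to reproduce. The following is a minimal split-step sketch (units with \(\hbar=m=1\); all parameter values are illustrative, not taken from any particular simulation) of a Gaussian packet hitting a finite barrier:

```python
import numpy as np

# Split-step evolution of a 1D Gaussian packet hitting a finite barrier.
# hbar = m = 1 and all parameters are illustrative.
N, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # angular wavenumbers

k0, sigma = 1.0, 10.0                           # mean momentum, packet width
psi = np.exp(-(x + 60) ** 2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

V = np.where(np.abs(x) < 2.0, 0.45, 0.0)        # barrier near x = 0
dt, steps = 0.1, 1200

for _ in range(steps):                          # potential / kinetic / potential
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

prob = np.abs(psi) ** 2 * dx
R = prob[x < -2.0].sum()                        # reflected weight
T = prob[x > 2.0].sum()                         # transmitted weight
print(R, T)    # both pieces carry an appreciable share of the probability
```

The evolution is unitary, so `R + T` stays (essentially) equal to one; the interesting physical content is the *values* of `R` and `T`, which the mere picture of two separated lumps does not capture.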

Except that it's complete nonsense whenever you try to go beyond these dumb and brief slogans of the type "many worlds interpretation akbar". The InspiringPhilosophy video at the top sketches most of the key problems:

- Observers are known to affect the outcomes. Depending on how we measure things, we may actually pick different bases and the measurement "collapses" the state vector to one of the basis vectors (that can't be specified without any role of the observer). It doesn't help to simulate the whole system, observer+observed, because there still needs to be "someone" who is external and who picks the basis and what is interesting.
- The branches are said to be "real" by MWI and it's forbidden in MWI to attach probabilities as the fundamental concepts. For this reason, probabilities are at most "arbitrary bureaucratic labels" attached to the histories that don't have any physical implications. So there's no way to reconcile modern science (in the sense of making predictions) with the MWI framework because all scientific predictions are in terms of probabilities. In particular, the "maverick branches" i.e. wrong predictions totally dominate.
- The "splitting wave packet" suggests that there is a preferred basis but in almost all the real-world situations, this is demonstrably not the case. The wave packets generally and generically do not split into separated pieces and even when they do, the separated pieces usually can't be interpreted as independent worlds.

But you know, some 25 years ago, the physicist saw the final wave packet that is composed of two geometrically separated components. In that case, he could assign "a classical bit of information" to the question which outcome took place. However, this is clearly a way to delude himself because no prediction in quantum mechanics ever involves classical information. Quantum mechanics has to predict probabilities and they're probabilities of propositions. Propositions are connected with projection operators – and those are almost always non-commuting and therefore non-classical.

The bold assertion computed by quantum mechanics isn't that "two outcomes could take place" but "one outcome was 95% likely, another one was 5% likely, and others were negligible". Those – generically unequal – probabilities always matter while the picture of two components of the wave packet carries no information about that. Also, the options have to be specified by an observer – what outcomes he may distinguish. There is no preferred basis.

In the wave packet case, the components may look isolated but this is simply not how the evolution generically takes place. InspiringPhilosophy mentions the radioactive nucleus. At every moment of time, it has some chance to decay. So if you don't observe it, the part of the wave function saying that "it hasn't decayed yet" is exponentially decreasing while the other parts of the wave function saying that "a decay has already taken place" are gradually increasing. At the level of the wave function, the process is totally continuous. It looks nothing like a sequence of "splittings of the history" that would resemble the partly transmitted, partly reflected wave packet.
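This continuous leakage is easy to see in the standard effective (non-Hermitian) description of an unstable state – a sketch of the bookkeeping, not a full treatment of the decay:

```python
import numpy as np

# Effective description of an unstable state (a common sketch): the
# undecayed component's amplitude obeys c'(t) = (-iE - Gamma/2) c(t),
# so its weight shrinks smoothly while the "decayed" weight grows.
# E and Gamma are illustrative numbers.
E, Gamma = 1.0, 0.2
t = np.linspace(0.0, 30.0, 3001)
c = np.exp((-1j * E - Gamma / 2) * t)   # survival amplitude

p_undecayed = np.abs(c) ** 2            # equals exp(-Gamma * t), continuous
p_decayed = 1.0 - p_undecayed

# The evolution is smooth at every instant -- nothing in the wave function
# resembles a sudden, discrete "splitting of the world" into branches.
assert np.allclose(p_undecayed, np.exp(-Gamma * t))
```

The survival probability decreases strictly monotonically and continuously; there is no moment one could honestly single out as "the splitting".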

Instead of a decaying nucleus, take something healthier. A brain. Quantum mechanics describes the processes inside the brain as well, right? Let me make the experiment very "clean". ;-) Extract all the gas from your lungs, fill your lungs with some powder, and surround your body and head tightly with some shiny metal so that it becomes a closed system. You will have a minute of life left which is enough for our purposes. ;-)

Quantum mechanics predicts the probabilities of what you may feel in this last minute, and so on. But it is rather obvious that there are no "sharply and naturally separated histories" in this physical system at all. The whole system we study is a piece of solid with some liquids in it – tightly packed atoms that your body and brain are made of. There is no gas (in gas, there are big gaps between atoms). There is no separation here. So it will be impossible to find any "obvious" or "canonical" ways to divide the wave function (in the \(10^{26}\)-dimensional Hilbert space for all the nuclei and electrons) into pieces. The wave function of your body+brain is one big connected piece in the big Hilbert space for your body. The probability densities for nuclei and electrons are basically uniform in the whole body etc. The "split wave packet" is absolutely inapplicable as a metaphor.

My point is that an observer is still needed to define the preferred basis. And in fact, it is not true at all that the preferred basis has to correspond to portions of wave functions that look "geometrically separated". After all, even in the double slit experiment, the electron may go through one slit or the other slit – and they are separated and distant from each other. But it is the whole point of the double slit experiment that you *cannot* consider the two slits as separated classical histories. The parts of the wave function from the two slits may easily "reunite" and interfere again.

In principle, no irreversible "splitting" may ever take place in quantum mechanics. And in fact, even if it were true that the "geometric separation" in the Hilbert space defines "different classical worlds" – and it is not true, as we said – we would have no precise definition of "which separation is big enough"; and no explanation why the separation should play such a role and what the role actually is (what the physical implications are). The very idea that the geometric separation should "fundamentally matter" is silly because it relies on some preferred coordinates on the Hilbert space – which basically reduce to the \(x\)-like coordinates of particles. But everyone who has understood the first semesters of quantum mechanics must know that there's nothing more fundamental about \(x\) relative to \(p\) or other observables. What distinguishes them is that the Hamiltonian is basically "local in \(x\)" but this locality is a technical difference that affects the evolution, not a sign that the \(x\) observable is superior in any metaphysical way.

The physicist's belief must be that as the world keeps on evolving, it is basically "splitting" because the wave function produces an increasing number of "gaps" between the components. And he must believe that all the other components or regions of the Hilbert space – a persistently increasing majority of regions in the Hilbert space – may be forgotten by every observer "finding himself" in one region or subpacket. But this is a totally inadequate description of the indisputable *mathematical shape* of the evolved wave function. A much more accurate description is the opposite one. As the wave function keeps on evolving in the unitary fashion, it is *filling* all the gaps that could have previously existed. Think about the ergodic hypothesis in classical statistical mechanics. A bump in the phase space is chaotically evolving and the nonzero probability basically fills all the accessible points in the phase space (those with the same values of the conserved quantities). There won't be any gaps left. The evolution of the wave function for a solid system is analogous. The wave function basically fills everything.

To make any predictions, one must pick a basis and use the Born rule to compute the probabilities of each possible outcome. The basis of "possible outcomes" must be actively chosen by an observer. There can't exist any "canonical" or "objective" way to pick the right basis for the Hilbert space. If the people were thinking about actual physical problems and not some idealized propagandist clichés that are designed to make the MWI paradigm look viable, even though it is not, they would know that what they claim to be possible clearly isn't possible.

At the end, people like Hsu are driven to write all this repetitive nonsense because they aren't capable of "believing" that the world is ultimately not classical. The last paragraph of Hsu's blog post says:

> It seems to me absurd that many tens of thousands of papers have been written about the hierarchy problem in particle physics, but only a small number of theorists realize we don't have a proper (logically complete) quantum theory at the fundamental level.

The hierarchy problem is arguably an overrated problem but at least a somewhat rationally justified one. One can use some Bayesian methods to estimate the values of low-energy parameters and it seems that "generically", the scalar particles etc. should be much more massive than the Higgs boson is measured to be. Lots of interesting models and mechanisms emerged from papers that were driven by the hierarchy problem.

On the other hand, the bloated literature claiming that there is a "problem" with quantum mechanics that needs to be solved – a literature that has obviously never presented a viable new solution for anything and never will – is driven purely by people's prejudices and stupidity.

We've had the final, new, well-defined, totally logically satisfactory framework for all of science – the theoretical framework known as quantum mechanics – since the 1920s. Every claim that there is something incomplete or unsatisfactory about it is a sign of systemic defects in the critic's mental activities.

In the paper, Hsu repeats the sentences that are often employed to claim that there is some problem with quantum mechanics:

> Quantum mechanics, as conventionally formulated, has two types of time evolution. An isolated system S evolves according to the Schrödinger equation \[\ket{\Psi(t)} = \exp(-iHt)\,\ket{\Psi(0)},\qquad (1)\] where the time evolution operator \(U(t) = \exp(-iHt)\) is unitary: \(U^\dagger U = 1\). However, when a measurement is made the state undergoes non-unitary von Neumann projection (i.e., collapses) to an eigenstate corresponding to the eigenvalue observed by the measuring device. The two types of time evolution are so radically different that a rigorous definition of exactly when each of them apply is essential. Unfortunately, as is widely acknowledged, the conventional interpretation does not supply a satisfactory definition – see, e.g., *Against Measurement* by J.S. Bell [2].

Note that these crackpots love to refer to (bogus) authorities all the time. Crackpot John Bell could have written rants "against measurement" but those rants couldn't have changed the fact that measurements are the processes from which we acquire *all the information about Nature* and that the laws of physics, as we have understood them for 90 years, can only be formulated in terms of rules in which measurements play a fundamental role.
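The unitarity claim in the quoted equation (1) is easy to verify numerically; in this numpy sketch the Hamiltonian is just a random Hermitian matrix chosen for illustration:

```python
import numpy as np

# For any Hermitian H, U(t) = exp(-iHt) satisfies U^dagger U = 1,
# so the norm of |Psi> is conserved. H here is a random Hermitian
# matrix, purely for illustration.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian "Hamiltonian"

t = 0.7
w, v = np.linalg.eigh(H)                    # H = v diag(w) v^dagger
U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T   # U(t) = exp(-iHt)

assert np.allclose(U.conj().T @ U, np.eye(4))       # U^dagger U = 1

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
assert np.isclose(np.linalg.norm(U @ psi0), 1.0)    # norm conserved
```

This is the uncontroversial half of the story; the disagreement is only about what the measurement step means.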

The right text to cite isn't an anti-science rant by John Bell but the breakthroughs by Heisenberg, Born, Pascual Jordan, Bohr, Dirac, and a few others who have discovered the new fundamental framework for science; or basic textbooks that introduce students to this essential subject. Many of the famous physicists have received well-deserved Nobel prizes, unlike John Bell or Hugh Everett who didn't even deserve a postdoc job. To place these two groups of men on the same level – or even place the Everettians above the founders of modern physics – is just lunacy. Yes, already the founders of quantum mechanics pointed out that the "evolution" has two components. The unitary evolution of the wave function or the Heisenberg evolution of operators is what plays the same role as the classical equations of motion in classical physics.

But the state vector and operators cannot be interpreted as "observations" or "measurements" right away. "Observations" or "measurements" are events in which an observer – someone or something associated with a particular, non-unique (i.e. in principle subjective or observer-dependent) system for assigning truth values to some propositions – actually gains the information about the external world. The wave function encodes the probabilities of observations, not the character of a particular observation itself.

The observations and measurements existed in the classical theory as well but they played a "passive role". What we observed was just a "reflection" of some objective reality that was the same for everyone. But quantum mechanics authoritatively says that it was wrong – according to science, that assumption of classical physics is no longer true. This assumption has been falsified in the very same sense and to the same extent as geocentrism or phlogiston or creationism or aether or any other wrong theory once believed to be right.

An observer is needed to specify the exact question – the observable that may be measured or, equivalently, the preferred orthogonal basis representing the mutually exclusive outcomes (those are the same because the basis is the basis of eigenstates of the operators) – and quantum mechanics can calculate the probabilities of each outcome. Once the observer actually sees or otherwise detects an outcome, it's one of the outcomes with non-negligible predicted probabilities and the wave function with which the observer can make predictions for future measurements is abruptly changed ("collapsed") to the corresponding eigenstate of the observable (the preferred basis vector, or a projection upon the space connected with an eigenvalue or a set of eigenvalues).
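The measurement rule just described can be sketched in a few lines; the state, the chosen basis, and the random seed are all hypothetical choices for illustration:

```python
import numpy as np

# Measurement sketch: the observer picks an observable (an orthonormal
# eigenbasis), quantum mechanics supplies the Born probabilities, Nature
# picks one outcome, and the state used for future predictions collapses
# to the corresponding eigenvector. All numbers are illustrative.
rng = np.random.default_rng(1)

psi = np.array([0.6, 0.8j, 0.0], dtype=complex)   # normalized toy state
basis = np.eye(3, dtype=complex)                  # observer's chosen basis

probs = np.abs(basis.conj() @ psi) ** 2           # Born rule: |<e_i|psi>|^2
outcome = rng.choice(len(probs), p=probs)         # Nature's choice

psi_after = basis[outcome]                        # "collapsed" state
assert np.isclose(probs.sum(), 1.0)
assert np.isclose(np.linalg.norm(psi_after), 1.0)
```

Note that the third outcome has probability zero, so `outcome` can only ever be 0 or 1: this is how quantum mechanics "forbids" a possibility without any metaphysical machinery.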

This "collapse" is nothing else than the quantum description of the adjustment of the observer's knowledge forced upon him by the new data. It is totally analogous to – a quantum description of – the Bayesian inference in classical logic. Before the new evidence arrives, a person believes that the probabilities of various states of the world (various hypotheses) are \(P(A_i)\) and \(P(B_j)\), the prior probabilities. I divided the probabilities into the probabilities of statements that will be measured now, \(A\), and those that will be measured later, \(B\).

Once the new evidence \(E\) arrives – i.e. once the new measurement or observation is made – we get some particular answers to the questions about \(A\). Each answer may emerge from the measurement and the probability is \(P(A_i)\). Once \(A\) is measured, the probabilities are replaced by 0% or 100% for the a priori possible outcomes that weren't or were measured, respectively.

Meanwhile, the probabilities for \(B\), the future measurements, are replaced by posterior probabilities \(P(B_j|E)\) that are calculable via Bayes' theorem. These probabilities change abruptly when the new data arrive – when the measurement is made. That's how it worked even in classical physics. The case of quantum mechanics and the measurement is absolutely analogous except that the probabilities \(P\) aren't the "most fundamental" mathematical objects to describe our knowledge. Instead, all the probabilities \(P\) are extracted by the Born rule from a more profound time-dependent entity, the state vector (or density matrix) which encodes the complex probability *amplitudes*. But again, the act of the measurement "collapses" some probability amplitudes to \(1\) and some to \(0\) and there are the Born-rule-based formulae to calculate the new probabilistic predictions for \(B\).
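The classical Bayesian analogy can be made concrete. In this sketch, \(H\) is a hidden hypothesis, \(A\) is what is measured now (the evidence \(E\)) and \(B\) is what will be measured later; all the numbers are hypothetical:

```python
import numpy as np

# Classical analogy for the "collapse": Bayesian updating.
# H: hidden hypothesis; A: measured now (evidence E); B: measured later.
P_H = np.array([0.5, 0.5])               # prior over two hypotheses
P_A_given_H = np.array([0.9, 0.2])       # P(A = yes | H), illustrative
P_B_given_H = np.array([0.3, 0.8])       # P(B = yes | H), illustrative

# Before any measurement, predictions marginalize over the hypotheses:
P_B_prior = (P_B_given_H * P_H).sum()

# Evidence E arrives: the measurement of A returns "yes". Bayes' theorem:
post = P_A_given_H * P_H
post /= post.sum()                       # posterior P(H | E)

# The prediction for the future measurement B changes abruptly:
P_B_post = (P_B_given_H * post).sum()
print(P_B_prior, P_B_post)
```

The abrupt jump from `P_B_prior` to `P_B_post` is not a mysterious physical process; it is the observer updating his knowledge, exactly as the text describes for the quantum amplitudes.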

The whole framework of quantum mechanics – with the unitary evolution and the measurement dictated by the Bayesian inference – is exactly as complete as the Bayesian inference was in a classical world, especially in classical statistical physics. There is no quantum counterpart (analogy) of the deterministic classical physics describing "how things objectively are", however, because quantum mechanics makes it clear that objects' properties are *not* objective. You may team up with your crackpots and write thousands of rants "against measurement" but this irrational activity of yours won't be able to change how Nature works, and Nature works in the way understood by Dirac and all those people, not according to the speculation and prejudiced wishful thinking of Bell and his equally prejudiced or retarded fans.

Again, quantum mechanics requires the observer to define the right question or measured observable or preferred basis – this is sometimes called the Heisenberg choice – and Nature picks one of the options according to the calculable probabilities – it is the Dirac choice. Does this procedure to describe Nature depend on what we consider a "measurement"? Absolutely. And it has to. Since the 1920s, we have known that the correct laws of physics can't be independent of measurements. The measurements are determining the reality – whether you call this "abrupt event" a "collapse" or an "observation" or a "measurement" or "Bayesian inference" or "perception". These are just words. The content is always the same and it is absolutely fundamental that the beef exists.

Do you protest that you don't know whether you have actually perceived or measured or... something? Great, if you don't know whether you have measured some result or not, you won't know how to adjust (or not to adjust) the state vector. If you use a wrong state vector, you may get wrong predictions for future measurements. But it is *your fault*, not a fault of quantum mechanics. Quantum mechanics is a scientific theory and is only usable by people who know what they are doing.

Analogous problems exist in classical physics, too. They're not used against classical physics; anti-quantum zealots only deploy them against quantum mechanics because they're prejudiced, dishonest demagogues. Take classical physics and try to predict the motion of planets. You take a fancy new digital telescope and observe a new planet which would affect the trajectory of Neptune if it existed.

But you may say that you don't know whether you may trust the new telescope. You may believe your own eyes but you didn't see the new planet with your eyes. You used a telescope. It may give you misleading information about Nature. (For a while, the Catholic officials asserted that the observations made by Galileo's telescopes couldn't have been trusted.) At the beginning, you don't know how it works. If it is digital and run by software, the software may contain a virus or a prank. So maybe the planet doesn't exist!

Take mapy.cz – great Czech maps of Europe (which also offer you an Android app (and Windows Phone and iOS) containing downloadable offline maps of every individual European country, among other things, recommended). Maybe you shouldn't trust the maps. Maybe if you open mapy.cz and press five letters, iddqd, an airplane will appear and you will be able to navigate it with arrow keys. Maybe if you press idkfa, the aircraft becomes a bomber and you may bomb Europe by pressing the CTRL key (try it, maybe these possibilities are accidentally true). ;-)

Similarly, you don't know whether you may trust the telescope or other devices (or biological organs) that you believe to perceive the reality. If you observe a planet that isn't really there (or vice versa, if you overlook a planet that exists), you may be producing wrong predictions. But this is clearly not a defect or a sign of incompleteness of the classical laws of physics. If you are uncertain about some observations you have made, the laws of physics may become useless. The laws of physics – if you really want to use them – simply assume that you know something about Nature, you know what you observed.

Exactly the same is true in quantum mechanics. Quantum mechanics simply demands that you know which observables you have measured – and caused the corresponding "collapse". Be sure that you may only measure or perceive a limited set of observables connected to your retinas, ears, nerves on your skin and tongue and nose, and cells in your brain that are normally defended from the external world but they can measure their state, too. If you don't know what a measurement is or what information about what observable you have extracted, you won't be able to use the laws of quantum mechanics. Quantum mechanics isn't demanding anything new that didn't exist in science before. When you want to predict properties of Nature in the future, you must know something about the properties in the past (or present) and you must know what you know. You only know something about the physical objects if you measure/observe/perceive/detect them and to do so, you need some senses and a trustworthy connection between your very internal awareness about the measurement, and the physical observables that you believe to be sensitive to.

The key difference between classical physics and quantum mechanics is the uncertainty principle or complementarity of quantum mechanics. In classical physics, there could be a "complete ensemble of information" that allowed you to know everything about the physical system and therefore everything about its future evolution, too. But in quantum mechanics, it's not the case. If you know something, the value of an observable \(L\), you are unavoidably uncertain about most other observables – those that don't commute with \(L\) and almost all observables refuse to commute with \(L\), indeed.

In practice, the predictions for the future in classical physics depended on your knowledge obtained from the observations in the past but you could think about an idealized observer who has measured "everything". In quantum mechanics, there can't be an observer that measures "everything" because the non-commuting observables just can't be measured – and therefore can't "exist" – at the same moment.
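The impossibility of measuring "everything" at once boils down to non-commuting operators, which one can check directly with the Pauli matrices:

```python
import numpy as np

# Two observables that refuse to commute: the Pauli matrices sigma_x
# and sigma_z. An eigenstate of one is maximally uncertain about the
# other, so no state can "know" both values at the same moment.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert not np.allclose(sx @ sz, sz @ sx)       # [sigma_x, sigma_z] != 0

up = np.array([1, 0], dtype=complex)           # sigma_z eigenstate, value +1
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # sigma_x eigenstate, +1

# Born rule: measuring sigma_x on |up> yields +1 or -1 with 50% each.
p_plus = np.abs(np.vdot(plus, up)) ** 2
assert np.isclose(p_plus, 0.5)
```

A state with a sharp \(\sigma_z\) is maximally uncertain about \(\sigma_x\): the two values simply cannot "exist" simultaneously, which is the content of the complementarity argument above.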

It's always the same difference that is, in one way or another, being taught even in introductory classes of quantum mechanics for the undergraduates. If you don't see that the new framework is qualitatively different from classical physics, but it is perfectly sensible and sufficient to settle any scientifically meaningful dynamical question about any physical system whose "operator algebra" is known, and there is no incompleteness and no contradiction, then you are simply not intelligent enough to understand the foundations of modern physics that have been around for 90 years, and it doesn't matter whether your name is Einstein, Zweistein, Bohm, Homeless, Bell, Zwell, Hsu, King-Kong, 't Hooft, Pchooft, Weinberg, Rudolph, Pusey, Pussy, Barrett, Carroll – let me omit 5483 different anti-quantum zealots.

Heisenberg and pals were not only good enough students to understand how it worked already 90 years ago – and to defeat the psychological obstacles that prevent most laymen (including those who consider themselves to be more than laymen) from "getting" quantum mechanics. They were ingenious enough to actually *discover* this new set of rules that govern Nature. The gap between those who could discover the new framework; and those who can't even understand it when it was around for 90 years and when they have been taught about it for years seems so huge to me that I am not willing to say that an anti-quantum zealot may ever be "in the same league" with the founding fathers of quantum mechanics, regardless of his mastery of some detailed technical questions.
