**The Multiverse Interpretation of Quantum Mechanics**

The ordinary multiverse, with its infinitely many bubbles whose possible vacuum states are located at 10^{500} different stationary points of the stringy configuration space, was way too small for them. So they invented a better and bigger multiverse, one that unifies the "inflationary multiverse", the "quantum multiverse", and the "holographic multiverse" from Brian Greene's newest popular book, *The Hidden Reality*.

Yes, their very first bold statement is that parallel universes in an inflating universe are the same thing as Everett's many worlds in quantum mechanics! ;-)

Sorry to say but the paper looks like the authors want to stand next to Lee Smolin whose recent paper - as much crackpottish as any paper he has written in his life so far - is about "a real ensemble interpretation" of quantum mechanics. Bousso and Susskind don't cite Smolin - but maybe they should! And in their next paper, they should acknowledge me for pointing out an equally sensible and similar paper by Smolin to them. ;-)

While your humble correspondent would always emphasize that the "many worlds" of Everett's interpretation of quantum mechanics are completely different "parallel worlds" from those in eternal inflation or those in the braneworlds, these famous physicists say: on the contrary, they're the same thing!

However, at least after a quick review of the paper, drugs seem to be the only tool that you can find in the paper, or in between its lines, that could convince you it's the case. ;-)

It's a modern paper involving conceptual issues of quantum mechanics, so it treats decoherence as the main mechanism for addressing many questions that used to be considered puzzles. Good. However, everything they actually say about decoherence is a little bit wrong, so their attempts to combine those new "insights" with similar "insights" resulting from similar misunderstandings of the multiverse - and especially of how the outcomes of measurements should be statistically treated in a multiverse - inevitably end up as double gibberish cooked from two totally unrelated components, such as stinky fish and rotten strawberries.

**In what sense decoherence is subjective**

One of the first starting points for them to unify the "inflationary multiverse" and the "many worlds" of quantum mechanics is the following thesis about decoherence:

> Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment".

That's a loaded statement, for many reasons. First of all, decoherence isn't really a version of the collapse. Decoherence is an approximate description of the disappearing "purity" of a state in macroscopic setups, with various consequences; one of them is that there is no collapse. The probabilities corresponding to different outcomes remain nonzero, so nothing collapses. They're nonzero up to the moment when we actually learn - experimentally - what the outcome is. At that point, we must update the probabilities according to the measurement. Decoherence restricts which properties may be included in well-defined questions - for example, insane linear superpositions of macroscopically different states are not good "basis vectors" from which to build Yes/No questions.

As first emphasized by Werner Heisenberg, and then by anyone who understood the basic meaning of proper quantum mechanics, this "collapse" is just a change of our knowledge, not a real process "anywhere in reality". Even in classical physics, dice may have probabilities 1/6 for each number, but once we see "6", we update the probabilities to (0,0,0,0,0,1). No real object has "collapsed". The only difference in quantum physics is that the probabilities are not "elementary": they're constructed as squared absolute values of complex amplitudes - which may interfere, etc. And while in classical physics we may imagine that the dice had their state before we learned it, in quantum physics this assumption is invalid.
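A throwaway sketch (my own toy numbers, not anything from the paper) of the point above: in both cases, the "collapse" is just an update of a probability table; the only quantum novelty is that the probabilities are squared absolute values of complex amplitudes.

```python
# Classical die: uniform prior; once we see "6", we update our knowledge.
prior = [1 / 6] * 6
posterior = [0, 0, 0, 0, 0, 1]   # nothing physical "collapsed" - we just learned

# Quantum: probabilities are |amplitude|^2 for each outcome.
amps = [0.6, 0.8j]                    # a hypothetical two-outcome state
probs = [abs(a) ** 2 for a in amps]   # 0.36 and 0.64, summing to 1
```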

It may help many people confused by the foundations of quantum mechanics to formulate quantum mechanics in terms of a density matrix "rho" instead of the state vector "psi". Such a "rho" is a direct generalization of the classical distribution function "rho" on the phase space: it only receives extra off-diagonal elements (many of which quickly go to zero because of decoherence), so it's promoted to a Hermitian matrix. (The opposite side of the coin is that the indices of "psi" may only involve positions, or only momenta, but not both - the complementary information is included in some phases.) But otherwise the interpretation of "rho" in quantum mechanics and of "rho" in classical statistical physics is analogous. They're just gadgets that summarize our knowledge about the system via probabilities. Now, "psi" is just a kind of square root of "rho", so you should give it the same qualitative interpretation as "rho", which is similar to "rho" in classical statistical physics.
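The relation between "psi" and "rho", and the way decoherence pushes "rho" toward a classical probability table, can be sketched in a few lines. This is a toy two-level model of mine, not code from any paper; the damping factor is an ad hoc stand-in for the effect of the environment.

```python
def outer(psi):
    """Pure-state density matrix: rho_ij = psi_i * conj(psi_j)."""
    return [[a * b.conjugate() for b in psi] for a in psi]

def decohere(rho, damping):
    """Suppress the off-diagonal (interference) terms; the diagonal
    probabilities - the classical content of rho - are untouched."""
    n = len(rho)
    return [[rho[i][j] if i == j else rho[i][j] * damping
             for j in range(n)] for i in range(n)]

psi = [0.6, 0.8j]                # "psi" as a kind of square root of "rho"
rho = outer(psi)                 # diagonal: probabilities 0.36 and 0.64
rho_late = decohere(rho, 1e-10)  # nearly diagonal: effectively a classical
                                 # distribution, as in statistical physics
```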

Second, is decoherence "subjective"? This question is totally equivalent to the question whether "friction" or "viscosity" (or other processes that dissipate energy) are subjective. In fact, both of these phenomena involve a large number of degrees of freedom, and in both of them, it's important that many interactions occur and lead to many consequences that quickly become de facto irreversible. So both of these processes (or classes of processes) share the same arrow of time, which is ultimately derived from the logical arrow of time, too.

First, let's ask: Is friction or viscosity subjective?

Well, a sliding object on a flat floor or quickly circulating tea in a teacup will ultimately stop. Everyone will see it. So in practice, it's surely objective. But is it subjective "in principle"? Do the details depend on some subjective choices? You bet.

Focusing on the tea, there will always be some thermal motion of the individual molecules in the tea. But what ultimately stops is the uniform motion of bigger chunks of the fluid. Obviously, to decide "when" it stops, we need to divide the degrees of freedom in the tea into those that we consider part of the macroscopic motion of the fluid and those that are just some microscopic details.

The separation into these two groups isn't God-given. The calculation always involves some choices that depend on intuition. But the dependence is weak: after all, everyone agrees that the macroscopic motion of the tea ultimately stops. In the same way, during decoherence, the information about the relative phase "dissipates" into a bigger system, a larger collection of degrees of freedom - the environment. The qualitative analogy between the two processes is very tight, indeed.

But the punch line I want to make is that decoherence, much like viscosity, isn't an extra mechanism or an additional term that we have to add to quantum mechanics in order to reproduce the observations. Instead, decoherence is an **approximate method to calculate the evolution in many situations, one that ultimately boils down to ordinary quantum mechanics and nothing else**. It's meant to simplify our life, not to add extra complications. Decoherence justifies the "classical intuition" about some degrees of freedom - what it really means is that interference phenomena may be forgotten - much like the derivation of the equations of hydrodynamics justifies a "continuum description" of the molecules of a fluid.

Clearly, the same comment would be true about friction or viscosity. While the deceleration of the car or the tea is usefully described by a simplified macroscopic model with a few degrees of freedom, in principle, we could do the full calculation involving all the atoms etc. if we wanted to answer any particular question about the atoms or their collective properties. However, we should still ask the right questions.

When Bousso and Susskind say that there is an ambiguity in the choice of the environment, they misunderstand one key thing: removing this ambiguity is part of asking a well-defined question! The person who asks the question must make sure that it is well-defined; it's not a job for the laws of physics. Returning to the teacup example, I may ask when the macroscopic motion of the fluid slows to 1/2 of its speed, but I must define which degrees of freedom are considered macroscopic. Once I do so - and, needless to say, there are lots of subtleties to be refined - the question becomes a fully calculable, well-defined question about all the molecules in the teacup, and quantum mechanics offers a prescription to calculate the probabilities.

The case of decoherence is completely analogous. We treat certain degrees of freedom as the environment because the state of these degrees of freedom isn't included in the precise wording of our question! So when Bousso and Susskind say that "decoherence is subjective", it is true in some sense but this sense is totally self-evident and vacuous. The correct interpretation of this statement is that "the precise calculation [of decoherence] depends on the exact question". What a surprise!

In practice, the exact choice of the degrees of freedom we're interested in - the rest being the environment - doesn't matter much. However, we must obviously choose properties whose values don't change frantically because of interactions with the environment. That's why the amplitude in front of the state "0.6 dead + 0.8i alive" isn't a good observable to measure - the interactions with the environment make the relative phase evolve terribly wildly. Decoherence thus also helps to tell us which questions are meaningful. Only questions about properties that are able to "copy themselves to the environment" may be asked. This effectively chooses a preferred basis of the Hilbert space, one that depends on the Hamiltonian - because decoherence does.
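One can see in a toy calculation (my own sketch, not the authors') why such a relative phase is useless: if the environment kicks the phase randomly, the off-diagonal interference term averages to zero, while the diagonal terms - properties that "copy themselves to the environment" - have no phase to scramble and survive.

```python
import cmath
import random

random.seed(0)
samples = 20000  # number of environmental "kicks"/histories we average over

# The off-diagonal term carries a relative phase exp(i*theta); random kicks
# from the environment make theta effectively uniform, so the average dies off.
off_diag = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
               for _ in range(samples)) / samples

print(abs(off_diag))  # of order 1/sqrt(samples), i.e. tiny
```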

To summarize this discussion: at least in this particular paper, Bousso and Susskind suffer from the same misconceptions as the typical people who deny quantum mechanics and want to reduce it to some classical physics. In this paper's case, the fact is reflected in the authors' desire to interpret decoherence as a version of the "nice good classical collapse" that used to be added to the QM framework as an extra building block. But decoherence is nothing like that. Decoherence doesn't add anything. It's just a simplifying approximate calculation that properly neglects lots of irrelevant microscopic stuff and tells us which parts of classical thinking (namely the vanishing of the interference between two outcomes) become approximately OK in a certain context.

Let's move on. They also write:

> In fact decoherence is absent in the complete description of any region larger than the future light-cone of a measurement event.

If you think about it, the purpose of this statement is inevitably elusive, too. Decoherence is not just "the decoherence" without adjectives. Decoherence is the separation of some particular eigenstates of a particular variable, and to specify it, one must determine which variable and which outcomes we expect to decohere. In the real world, which is approximately local at low energies, particular variables are connected with points or regions in spacetime. What decoheres are the individual possible eigenvalues of such a chosen observable.

But the observable really has to live in "one region" of spacetime only - it's the same observable. The metric in this region may be dynamical and take different shapes, but as long as we talk about eigenvalues of a single variable - and in the case of decoherence, we have to - it's clear that we also talk about one region only. Decoherence between the different outcomes will only occur if there are enough interactions, space, and time in the region for all the processes that dissipate the information about the relative phase to take place.

So it's completely meaningless to talk about "decoherence in spacelike separated regions". Decoherence is a process in spacetime and it is linked to a single observable that is defined from the fundamental degrees of freedom in a particular region. Of course, the region B of spacetime may only be helpful for the decoherence of different eigenvalues of another quantity in region A if it is causally connected with A. What a surprise. The information and matter can't propagate faster than light.

> However, if one restricts to the causal diamond - the largest region that can be causally probed - then the boundary of the diamond acts as a one-way membrane and thus provides a preferred choice of environment.

This is just nonsense. Even inside a solid light cone, some degrees of freedom are the interesting, non-environmental degrees of freedom we're trying to study - if there were no such degrees of freedom, we wouldn't be talking about the solid light cone at all. We're only talking about a region because we want to say something about the observables in that region.

At the same moment, for decoherence to run, there must be some environmental degrees of freedom in the very same region, too. Also, as argued a minute ago - by me and by the very authors, too - the spacelike-separated pieces of spacetime are completely useless when it comes to decoherence. It's because the measurement event won't affect the degrees of freedom in those causally inaccessible regions of spacetime. Clearly, this means that those regions can't affect decoherence.

(A special discussion would be needed for the tiny nonlocalities that exist e.g. to preserve the black hole information.)

If you look at the light sheet surrounding the solid light cone and decode a hologram, you will find out that the separation of the bulk degrees of freedom to the interesting and environmental ones doesn't follow any pattern: they're totally mixed up in the hologram. It's nontrivial to extract the values of "interesting" degrees of freedom from a hologram where they're mixed with all the irrelevant Planckian microscopic "environmental" degrees of freedom.

They seem to link decoherence with the "holographic" degrees of freedom that live on the light sheets - and a huge black-hole-like entropy of A/4G may be associated with these light sheets. But those numerous Planckian degrees of freedom don't interact with the observables we're able to study inside the light cone, so they can't possibly contribute to decoherence. Indeed, if 10^{70} degrees of freedom were contributing to decoherence, everything, including the position of an electron in an atom, would be decohering all the time. This is of course not happening. If you want to associate many degrees of freedom with light sheets, be my guest - it's probably true at some moral level that local physics can be embedded into the physics of the huge Bekenstein-Hawking-like entropy on the light sheet - but you must still accept (more precisely, prove) that the detailed Planckian degrees of freedom won't affect the nicely coherent approximate local physics that may be described by a local effective field theory; otherwise your picture is just wrong.

The abstract - and correspondingly the paper - gets increasingly crazy.

> We argue that the global multiverse is a representation of the many-worlds (all possible decoherent causal diamond histories) in a single geometry.

This is a huge unification claim. Unfortunately, there isn't any evidence, as far as I can see, that the many worlds may be "geometrized" in this way. Even Brian Greene in his popular book admits that there is no "cloning machine". You can't imagine that the new "many worlds" have a particular position "out there". The alternative histories are totally disconnected from ours geometrically. They live in a totally separate "gedanken" space of possible histories. By construction, the other alternative histories can't affect ours, so they're unphysical. All these things are very different from ordinary "branes" in the same universe, and even from other "bubbles" in an inflating one. I don't know why many people feel any urge to imagine that these - by construction - unphysical regions (Everett's many worlds) are "real", but at any rate, I think they agree that these regions cannot influence physics in our history.

> We propose that it must be possible in principle to verify quantum-mechanical predictions exactly.

Nice, but it's surely not possible. We can only repeat the same measurement a finite number of times, and in a few googols of years, or much earlier, our civilization will find out it's dying. We won't be able to tunnel our knowledge elsewhere. The number of repetitions of any experiment is finite, and that is not just a technical limitation.

There are many things we only observe once. Nature can't guarantee that everything may be tested infinitely many times - and it doesn't guarantee that.

> This requires not only the existence of exact observables but two additional postulates: a single observer within the universe can access infinitely many identical experiments; and the outcome of each experiment must be completely definite.

In de Sitter space, the observables are probably not exactly defined at all, and even in other contexts this is the case. Observers can't survive their own death, or the heat death of their surrounding Universe, and the outcomes of most experiments can't be completely definite. Our accuracy will always remain finite, much like the number of repetitions and our lifetimes.

In the next sentence, they agree that the assumptions fail - but because of the holographic principle. One doesn't need the holographic principle to show such things. After all, the holographic principle is an equivalence of a bulk description and a boundary description, so any physically meaningful statement holds on both sides.

At the end, they define "hats" - flat regions with unbroken supersymmetry - and link their exact observables to some approximate observables elsewhere. Except that this new "complementarity principle" isn't supported by any evidence I could find in the paper, and it isn't well-defined, not even partially. In quantum mechanics, complementarity means something specific - it's what ultimately allows you to write "P" as "-i.hbar.d/dx", a very specific construction that is well-defined and established. For black holes, complementarity allows you to explain why there's no xeroxing; the map between the degrees of freedom isn't expressed by a formula, but there is evidence for it. But what about this complementarity involving hats? There's neither a definition nor evidence or justification (unless you view the satisfaction of manifestly invalid and surely unjustified, ad hoc assumptions as a justification).

If you read the paper, it is unfortunately motivated by misunderstandings of the conceptual foundations of quantum mechanics. In the introduction, they ask:

> But at what point, precisely, do the virtual realities described by a quantum mechanical wave function turn into objective realities?

Well, when we measure the observables. Things that we haven't measured will never become "realities" in any sense. If the question is about the classical-quantum boundary, there is obviously no sharp boundary. Classical physics is just a limit of quantum physics, but quantum physics fundamentally works everywhere in the multiverse. The numerical (and qualitative) errors we make if we use a particular "classical scheme" to discuss a situation may be quantified - decoherence is one of the calculations that quantifies such things. But classical physics *never* fully takes over.

> This question is not about philosophy. Without a precise form of decoherence, one cannot claim that anything really "happened", including the specific outcomes of experiments.

Oh, really? When I say that it's mostly sunny today, it's not because I preach a precise form of decoherence. It's because I have made the measurement. Of course, the observation can't be 100% accurate because "sunny" and "cloudy" haven't "fully" decohered from each other - but their overlap is just insanely negligible. Nevertheless, the overlap never becomes exactly zero. It can't. For more subtle questions - about electrons etc. - the measurements are more subtle, and indeed, if no measurement has been done, one cannot talk about any "reality" of the property because none could have existed. The very assumption that observables - especially non-commuting ones - had some well-defined values leads to contradictions and wrong predictions.

Decoherence cannot be precise. Decoherence, by its very definition, is an approximate description of reality that becomes arbitrarily good as the number of environmental degrees of freedom, their interaction strength, and the time I wait become arbitrarily large. I think that none of the things I say here are speculative in any way; they concern the very basic content and meaning of decoherence, and I think that whoever disagrees has just fundamentally misunderstood what decoherence is and is not. But the accuracy of this emergent macroscopic description of what's happening with the probabilities is never perfect, just like the macroscopic equations of hydrodynamics never exactly describe the molecules of tea in a teacup.
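The scaling behind "arbitrarily good but never perfect" is easy to illustrate with toy numbers of mine: if each environmental degree of freedom distinguishes the two branches only partially, retaining, say, an overlap of 0.9, the total interference term is suppressed by 0.9^N - astronomically small for macroscopic N, but never exactly zero.

```python
overlap_per_dof = 0.9  # hypothetical per-particle overlap of the two branches

for n_env in (10, 100, 1000):
    suppression = overlap_per_dof ** n_env
    print(n_env, suppression)  # shrinks exponentially, yet stays positive
```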

> And without the ability to causally access an infinite number of precisely decohered outcomes, one cannot reliably verify the probabilistic predictions of a quantum-mechanical theory.

Indeed, one can't verify many predictions of quantum mechanics, especially about cosmological-size properties that we can only measure once. If you don't like the fact that our multiverse denies you this basic "human right" to know everything totally accurately, you will have to apply for asylum in a totally different multiverse, one that isn't constrained by logic and science.

> The purpose of this paper is to argue that these questions may be resolved by cosmology.

You know, I think that there are deep questions about the information linked between causally inaccessible regions - whether black hole complementarity tells you something about the multiverse, and so on. But this paper seems to address none of it. It seems to claim that cosmological issues influence even basic facts about low-energy quantum mechanics and the information moving in it. That's surely not possible. It's just a generic paper based on misunderstandings of quantum mechanics and on desperate attempts to return the world under the umbrella of classical physics, where there was a well-defined reality in which everything was in principle 100% accurate.

But the people who are not on crack will never return to the era before the 1920s because the insights of quantum mechanics, the most revolutionary insights of the 20th century, are irreversible. Classical physics, despite its successes as an approximate theory, was ruled out many decades ago.

I have only read the few pages that I considered relevant and quickly looked at the remaining ones. It seems like they haven't found or calculated anything that makes any sense. The paper just defends the abstract and the introduction that they have apparently pre-decided to be true. But the abstract and the introduction are wrong.

You see that those would-be "revolutionary" papers start to share lots of bad yet fashionable features - such as the misunderstanding of the conceptual issues of quantum mechanics and the flawed idea that all such general and basic misunderstandings of quantum physics (or statistical physics and thermodynamics) must be linked to cosmology if not the multiverse.

However, cosmology has nothing to do with these issues. If you haven't understood the double-slit experiment in your lab, or the observation of Schrödinger's cat in your living room, and what science actually predicts about any of these things using the degrees of freedom in that room only; or if you haven't understood why eggs break but don't unbreak, using the degrees of freedom of the egg only - then be sure that the huge multiverse, regardless of its giant size, won't help you cure your misunderstanding of the basics of quantum mechanics and statistical physics.

The right degrees of freedom and concepts that are linked to the proper understanding of a breaking egg or decohering tea are simply not located far away in the multiverse. They're here and a sensible scientist shouldn't escape to distant realms that are manifestly irrelevant for these particular questions.

And that's the memo.

Excellent post! I was just scanning through the paper this morning and my stomach started reacting. Your post gave me some comfort ;)

Is this philosophy masquerading as physics? From an excellent philosophical article by the mathematician Wolfgang Smith: "Richard Feynman once remarked: 'I think it is safe to say that no one understands quantum mechanics.' To be sure, the incomprehension to which Feynman alludes refers to a philosophic plane; one understands the mathematics of quantum mechanics, but not the ontology." Click here for more info on the proper relationship between philosophy and physics.

Excellent post indeed. Thanks for educating me on decoherence... I'll follow you anywhere! :)

ReplyDeleteLubos, I have a question for you that is a bit involved. I've read several of your articles on misunderstandings of quantum theory.

1. The Heisenberg uncertainty principle shows us that we cannot empirically obtain a classical picture under QM, no matter how hard we try. However, it does not demonstrate that no such classical picture exists, only that we have an epistemic horizon preventing us, in principle, from obtaining it.

2. However, the combination of Bell's inequality and Dirac's thermodynamics argument that you provided recently pretty much shows that there cannot be a classical picture under QM, not even a classical picture that we can't obtain. It isn't that there are exact trajectories we can't obtain, it's that there simply are no exact trajectories of classically-acting "particles." They simply don't exist.

3. The wave-function can't be "real" in some sense because updates to the wave-function happen FTL after observations are made. It's not a real classical wave.

So, I'm wondering if the following is a valid way to look at quantum theory.

1. The wave-function is simply a representation of all the information we possess about a system. The reason that so-called collapse (actually decoherence) is probabilistic is that measurements give us more information. It is impossible to know what information you will get before you get it, because then it wouldn't be information, but redundancy. Thus, updates to the wave-function due to observation obviously cannot be predicted, not even in principle.

2. Black hole thermodynamics shows us that there is a finite, objectively calculable limit (in principle) to the amount of physically meaningful information in a bounded region of space.

3. Causality, in QM, is encoded in the commutators (and anti-commutators) of Hermitian operators. In QFT, where all operations are defined in terms of field operators on Fock space, every possible change in the information of a system is determined by the field equations and the space of allowable states.

It thus seems, to me, that what's really "real" in quantum theory is information of the Shannon theory type. All physically meaningful things appear to go back to information. Causality being encoded in the canonical commutation (or anti-commutation) relations of field operators shows that information (but not redundancy or noise) cannot propagate faster than light. The various field operations that can be performed on the allowable states in Fock space simply represent the various ways that different information can be "processed" in an abstract sense.

Just like relativity shows that three dimensional length is an artifact of one's Lorentz frame, the wave-function (the so-called "state of a system") is just an artifact of our information. Classical reality is thus seen as a phenomenon which emerges from information processing.

Thus, we live in a universe with a background made of space-time (4D in QFT, 11D in M-theory), where the only real physical process is some kind of information processing, and where the rules of information processing are largely determined by various forms of symmetry.

Is this a correct or useful way to view QM? Is it possible, given a combination of quantum theory and relativity, to conclude that the universe is, in some abstract sense, a computer? And no, I'm not saying that the universe is a simulation.

Is this wrong?

Dear u. seeker, it's kind of annoying. Am I the only person in the world who understands QM? You may see answers to all your questions hundreds of times on this blog - and elsewhere. Why do you kindly give me this huge amount of work again?

1. The nonzero commutator *does* mean that there's no classical picture because in any classical picture, the observable quantities such as X,P are ordinary c-number-valued functions of some underlying parameters and those *do* commute, which is a contradiction. This is a proof by contradiction. You may prefer some fuzzy verbal tricks to circumvent this indisputable argument, but then you're not acting rationally or scientifically.

OK, sorry, the rest makes no sense to me and I don't think it's right for me to waste time with it. You're mixing apples and oranges. The uncertainty principle for X,P has no direct link to the black holes or Shannon entropy or anything else. You're just confused about everything and I can't possibly help you.

LM

You seem to think that I believe there's an underlying classical picture. I do not. I think you've either misread my post or I didn't communicate what I was saying well enough.

An addendum: I was not saying that commutation relations had anything to directly do with black hole thermodynamics. I included the comment on BHT as an additional fact.

ReplyDelete"It seems to claim that the cosmological issues influence even basic facts about low-energy quantum mechanics and the information that is moving in it. That's surely not possible. "

That is a statement that may turn out to be untrue.