## Saturday, September 12, 2009 ... /////

### Schrödinger's virus and decoherence

The physics arXiv blog, Nature, Ethiopia, Softpedia, and many people on Facebook were thrilled by a new preprint about the preparation of Schrödinger's virus, a small version of Schrödinger's cat.

The preprint is called

Towards quantum superposition of living organisms (click)
and it was written by Oriol Romero-Isart, Mathieu L. Juan, Romain Quidant, and J. Ignacio Cirac. They wrote down some basic stuff about the theory and a pretty clear recipe for how to cool the virus down and how to manipulate it (imagine a discussion of the usual "atomic physics" devices with microcavities, lasers, ground states, and excited states of a virus, and a purely technical selection of the most appropriate virus species).

It is easy to understand the excitement of many people. The picture is pretty and the idea is captivating. People often think that living objects should be different from the "dull" objects studied by physics. People often think that living objects - and viruses may or may not be included in this category - shouldn't ever be described by superpositions of well-known "privileged" wave functions. Except that they can be, and it is sometimes necessary. Quantum mechanics can be baffling but it's true.

A rational viewpoint

Let me admit, I don't share this particular excitement because it's damn clear what will be observed in any experiment of this kind. It's been clear since the 1920s - and all the "marginal" issues were clarified in the 1980s. People often say that the interpretation of quantum mechanics is confusing and they expect similar experiments to lead to surprising or uncertain results. However, they don't.

As long as decoherence is as small for the virus as for any other microscopic dipole, it will behave as a quantum dipole. It will interfere and do all the things you expect from the small things. Once it becomes large, it will behave as a cat. ;-)
See also: Entanglement, Bell's inequalities, interpretation of quantum mechanics, decoherence (lecture 26)

The Copenhagen school may have said some confusing things about the "collapse" of the wave functions and about "consciousness" but they surely knew how to predict experiments where both quantum mechanics and classical physics played a role. They realized that
1. only probabilities may be predicted in this quantum world: the usual QM calculations are helpful
2. "small" objects behave according to the quantum logic while "large" objects behave according to the classical logic; they interact according to the "measurement theory"
Nothing has changed about the point (1) whatsoever. All attempts to deny or weaken (1) have been ruled out. This "probabilistic" rule seems to be a completely fundamental and universal feature of the real world. There can be no "hidden variables" because the "probabilistic character" of the predictions is not emergent but fundamental.

Limitations of the Copenhagen interpretation

On the other hand, the Copenhagen rule (2) was phenomenological in character. It allowed them to predict what's going on when microscopic and macroscopic objects interact. However, it didn't allow them to explain several related questions, namely
1. Where is the boundary between classical objects and quantum objects located?
2. What's exactly happening near this boundary, in the marginal situations?
3. How can this boundary be derived?
The question (1) has led to all kinds of philosophical speculations and quasi-religious delusions.

The ability to "reduce" the wave function and to "perceive" the results has been attributed to mammals (mammal racism), humans (anthropocentric racism), the white people (conventional racism), the author of the sentence and no one else (solipsism), macroscopic objects above a micron (approximate truth but not quite exact and universal), and to many other categories of "objects" and "subjects".

While it was known that the cat behaved classically, the question (2) looked pressing. People wanted to know what happens when objects in one category "cross the boundary" and behave according to the other set of rules. They thought that the co-existence of the "two philosophies" - behind quantum and classical objects - was problematic. That was why Schrödinger invented his cat.

People had no idea how to calculate the answer to the questions (2) and (3), i.e. how to derive the location of the quantum-classical boundary and the precise behavior near the boundary.

Decoherence as the cure for these Danish imperfections

Your humble correspondent prefers the "Consistent Histories" (associated with the names of Gell-Mann and Hartle; Omnes; Griffiths; and others) as the most concise, state-of-the-art framework that tells you which questions are legitimate in quantum mechanics; and how the answers to these questions - i.e. the probabilities of different histories - should be calculated.

But we should realize that the Consistent Histories are just a formalism. The actual physics needed to overcome the difficulties of the Copenhagen interpretation is called "quantum decoherence" or "decoherence" for short, pioneered primarily by Wojciech Zurek (see the picture, paper).

Decoherence is a universal, omnipresent process that destroys coherence, i.e. the information about the relative phases of distinct quantum complex amplitudes. I will discuss it in the rest of the article. Decoherence is important because:
Decoherence is the only process in Nature that leads to the transition from the quantum rules to the ordinary Joe's familiar classical rules of physics.
If you ask whether a system is allowed to be found in superpositions of well-known states, whether it has the right to "perceive" its own state, whether it exhibits "consciousness", and so on, decoherence is the only physical consideration that determines the boundary between the quantum and classical worlds.

Of course, objects and subjects may be more or less able to manipulate the information, to remember it, and so on, but all objects or collections of degrees of freedom that are strongly influenced by decoherence have the same qualitative behavior as humans when it comes to the ability to "reduce" wave functions.

How decoherence works

Imagine two quantum states of a virus, |ψ¹⟩ and |ψ²⟩. And imagine that the Hamiltonian destines them to emit and/or reflect a photon (which will be the representative of any kind of environmental degrees of freedom) in such a way that the corresponding states of the photon created by |ψ¹⟩ and |ψ²⟩ are orthogonal to each other.

That will usually happen if |ψ¹⟩ and |ψ²⟩ are chosen to be "natural" states that have well-defined "local properties" or other observables that can be "seen" by the photon. The "locality" properties are determined by the Hamiltonian. That's why the Hamiltonian also contains the information about the "privileged basis" of the Schrödinger virus's Hilbert space.

At any rate, if the initial state is a superposition
|ψ⟩ = a|ψ¹⟩ + b|ψ²⟩
with complex amplitudes "a,b" (and you may want to press "ctrl/+" if the superscripts are too small), it will evolve into
|final⟩ = a |ψ¹⟩ |photon¹⟩ + b |ψ²⟩ |photon²⟩.
Note that this simple "tensor squaring" of the terms only works in one privileged basis of states. For example, it is not true that the initial state
(|ψ¹⟩ + |ψ²⟩)
evolves into a simple "square" of it,
(|ψ¹⟩ + |ψ²⟩) (|photon¹⟩ + |photon²⟩).
It can't! Expand the product above to see that it contains unwanted "mixed terms". The "squaring rule" cannot hold for all states because such an evolution would violate the quantum xerox no-go theorem, which is a simple consequence of the evolution operator's being linear rather than quadratic.
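The point can be checked numerically. Here is a minimal numpy sketch, assuming toy two-dimensional virus and photon spaces and illustrative amplitudes a = 0.6, b = 0.8 that I chose only so that |a|² + |b|² = 1: a linear map defined to send |ψⁱ⟩|photon⁰⟩ to |ψⁱ⟩|photonⁱ⟩ turns the superposition into the entangled state, not into the naive "squared" product state with its mixed terms.

```python
import numpy as np

# Toy 2-dim virus and photon spaces (illustrative choices, not from the paper)
psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ph0 = np.array([1.0, 0.0])                               # photon before scattering
ph1, ph2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # orthogonal final states

# Linear map defined by its action on the two relevant basis vectors:
#   |psi_i>|ph0>  ->  |psi_i>|ph_i>   (an isometry on their span)
U = (np.outer(np.kron(psi1, ph1), np.kron(psi1, ph0)) +
     np.outer(np.kron(psi2, ph2), np.kron(psi2, ph0)))

a, b = 0.6, 0.8                                  # |a|^2 + |b|^2 = 1
initial = np.kron(a * psi1 + b * psi2, ph0)      # superposition times |ph0>
final = U @ initial

entangled = a * np.kron(psi1, ph1) + b * np.kron(psi2, ph2)
product   = np.kron(a * psi1 + b * psi2, a * ph1 + b * ph2)

print(np.allclose(final, entangled))   # True: linearity gives the entangled state
print(np.allclose(final, product))     # False: no "square" with mixed terms
```

The product state differs from the true output precisely by the nonzero mixed terms a·b |ψ¹⟩|photon²⟩ and b·a |ψ²⟩|photon¹⟩, which a linear evolution never produces here.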

Nevertheless, it is easy to see that this evolution is what happens for a properly chosen basis. The following steps are clear. The photon quickly disappears somewhere in the environment. It becomes impossible (or at least useless) to follow its state - or the state of all other particles it influences in the future. Because we can only study the virus (or we want to study the virus only), we must trace over the photonic part of the Hilbert space.

The usual rules to trace over give us the final density matrix, after the photon was emitted:
|final⟩ = a |ψ¹⟩ |photon¹⟩ + b |ψ²⟩ |photon²⟩,
ρ = |a|² |ψ¹⟩⟨ψ¹| + |b|² |ψ²⟩⟨ψ²|.
The Greek letters starting the two lines above are pronounced "rho" and "psi". Note that the information about the relative phase of "a,b" has been forgotten. The relative phases have been forgotten because they would only survive in the off-diagonal elements of the density matrix. But all the off-diagonal elements were abruptly set to zero because of the orthogonality of the photonic states. Only the absolute values of "a,b" are remembered. The latter may be interpreted as "classical probabilities".
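The partial trace can be verified directly. A minimal numpy sketch, with toy two-dimensional spaces and the amplitudes "a,b" of the text set to illustrative values 0.6 and 0.8: build the pure density matrix of the virus-photon pair, trace over the photon, and watch the orthogonality of the photon states kill the off-diagonal elements.

```python
import numpy as np

a, b = 0.6, 0.8                                          # illustrative amplitudes
psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ph1, ph2   = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal photon states

# |final> = a |psi1>|photon1> + b |psi2>|photon2>, a vector in the 4-dim space
final = a * np.kron(psi1, ph1) + b * np.kron(psi2, ph2)

# Pure density matrix of the full virus+photon system
rho_full = np.outer(final, final.conj())

# Trace over the photon: reshape to (virus, photon, virus, photon) indices
# and sum over the diagonal photon indices
rho_virus = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_virus)
# diagonal entries |a|^2 = 0.36 and |b|^2 = 0.64; the off-diagonal elements
# (and with them the relative phase of a, b) have vanished
```

Had the photon states not been orthogonal, the off-diagonal elements would only be suppressed by the overlap ⟨photon¹|photon²⟩ rather than set exactly to zero.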

How quickly does it work?

In general, the photon states are not exactly orthogonal. But when you calculate how quickly the process destroys the off-diagonal elements of the density matrix, it is extremely fast. Even the interactions with the cosmic microwave background are enough for a very tiny speck of dust to decohere within a tiny fraction of a second (if we care about the off-diagonal elements between position states separated by as little as the CMB wavelength or even less).

The rate of decoherence gets faster for larger, hotter, denser, and strongly interacting environments. You must really cool the viruses very brutally to have any chance to avoid decoherence.

Also, the typical time dependence of the off-diagonal matrix elements is schematically "exp(-exp(t))", i.e. expo-exponential. (I omitted many coefficients, to make the function more readable.) It's much faster than an exponential decrease. Once decoherence begins, it destroys the information about the relative phase immediately: let us accept an approximate yet pretty accurate convention (for all conceivable purposes) that probabilities smaller than 10^{-2000} are identified with zero. ;-)

The expo-exponential dependence emerges because the number "N" of degrees of freedom that the state of the virus influences grows exponentially with time (an exploding, cascading propagation of information, "N=exp(t)"), and each degree of freedom adds a small multiplicative factor to the inner products of the environmental degrees of freedom ("ρ(12)=exp(-N)").
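A quick numerical illustration of the schematic "exp(-exp(t))" suppression, with all coefficients set to one as in the text, so the numbers only show the shape of the decay:

```python
import numpy as np

t = np.arange(0, 6)            # schematic time steps (all coefficients set to 1)
N = np.exp(t)                  # N = exp(t): degrees of freedom touched so far
rho_offdiag = np.exp(-N)       # off-diagonal element ~ exp(-N) = exp(-exp(t))
plain = np.exp(-t)             # ordinary exponential decay, for comparison

for ti, d, p in zip(t, rho_offdiag, plain):
    print(f"t={ti}:  exp(-exp(t)) = {d:.3e}   exp(-t) = {p:.3e}")
```

By t = 5 the expo-exponential has dropped below 10⁻⁶⁰ while the ordinary exponential is still above 10⁻³, which is the sense in which decoherence destroys the relative phase "immediately" once it begins.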

Decoherence, i.e. the "classical-quantum boundary in action", has been routinely observed in the labs since the 1996 experiments by Raimond, Haroche, and others.

Preserving the probabilistic character of physics

Fine, so the quantum, interfering, complex "probability amplitudes" (that routinely violate Bell's inequalities) have been transformed to classical probabilities (that obey Bell's inequalities). Now, you may ask how the theory makes the second step: how does it transform the classical probabilities into particular, sharply determined classical answers?

This cat is way too similar to Lisa whom I have fed (and discussed quantum mechanics with) for 10 days in late August. Ouch.

Well, it never does. Even for macroscopic objects, the probabilistic character of the predictions is real. The outcomes of the experiments can't be determined by any hidden variables, not even in principle. They are genuinely random. It is a fundamental fact about Nature.

The only reason why "determinism" seems to arise in the macroscopic world is that the probabilities predicted by quantum mechanics are nearly equal to zero for all outcomes except a small neighborhood of the "classical result". That's why the macroscopic world looks approximately deterministic, given a finite accuracy of the measured eigenvalues. But it never becomes "fundamentally deterministic".

Decoherence, i.e. the liquidation of the information about the relative phases, is the only transformation that physics is doing in order to make the quantum world behave as a classical one. All the ideas that "something else is needed" to get macroscopic, conscious, large objects similar to us and the cats (vital forces, holy spirits, additional privileged "beables" that differ from ordinary "observables", or gravitational collapses of wave functions) are delusions and deep misunderstandings of quantum mechanics.

You may ask whether Schrödinger's cat can ever "feel" as being in a linear superposition of those two states that will quickly decohere. Well, it can't. Its "feelings" are an observable whose only allowed eigenvalues are "dead" and "alive". Whenever you make any observations (including a "poll" in which you ask the cat about its feelings), you will see that the cat is either dead or alive.

In fact, because the probabilities have been transformed into ordinary classical probabilities that don't interfere, you may always imagine that one of the answers about the cat's condition - "dead" or "alive" - was true even before you made the observation. More generally, you can always "imagine" that the "reduction" of the wave function took place immediately when decoherence became strong.

With this assumption, you can't ever run into any contradictions because the classical probabilities do obey Bell's inequalities and all the similar conditions. But you may still ask whether the "reduction" was real - whether the cat was "really" in one of the states before you measured it.

Because you just don't know what the state was - and an observation is the only way to find out - the question whether the answer was decided "before your measurement" is unphysical. For microscopic systems that don't decohere much, it can be shown that the outcomes couldn't have been determined before the measurements. For macroscopic objects that do decohere, you can't prove such a thing. In fact, one can prove that no one can prove such a thing. ;-)

So it's consistent to imagine that decohering degrees of freedom had one of their allowed "classical values" even before the measurement. That's what most people do, anyway. The Moon is over there even if no mouse is watching it.

Alternatively, if you're a solipsist, you may keep linear superpositions as a description of all objects (and cats) inside the Universe and only "reduce" this wave function when you want to determine what your brain feels.

Your predictions will always be identical to the case when you "reduce" the wave function for all degrees of freedom as soon as they decohere. And if "two theories" give identical predictions for all situations that are measurable, at least in principle, they are physically identical, regardless of the gap between the feelings that these "two theories" create in our minds.

Decoherence: arrow of time

The arrow of time is being frequently discussed on the physics blogs. Decoherence has its own arrow of time, too. The states tend to be "pure" (vectors in the Hilbert space) in the past but "mixed" (density matrices) in the future.

Our derivation of decoherence instantly shows that the "decoherent arrow of time" is inevitably correlated with the logical arrow of time. We are tracing over the environmental degrees of freedom because we're either "forgetting" about them, or we "want to forget" about them. The photons won't matter for the future life of the virus which is why we are able to "eliminate them" by tracing over their Hilbert space. Our ability to predict things about the virus is not reduced at all.

This process can't be reversed because information that doesn't exist or has been forgotten can't suddenly be "created again" or "unforgotten". ;-) Yes, all these arguments assume an asymmetry between the past and the future - in the way we remember the past, and not the future, and so on.

These assumptions are called the "logical arrow of time" and no logical or logically sound argument relevant for the evolution of anything in time can ever avoid the "logical arrow of time". When we think about time, the "logical arrow of time" is a part of the basic logic. And the basic logic is more fundamental than any "emergent process" that someone could imagine to "explain" the arrow of time by some convoluted dynamics.

The "decoherent arrow of time" is also manifestly aligned with the "thermodynamic arrow of time", determining the direction in which the entropy increases. After all, if you define the entropy as the uncertainty present in your density matrix, i.e. as the coarse-grained entropy
S = -Tr ρ ln(ρ),
it's clear that the evolution of pure states into mixed states (by tracing over some degrees of freedom, as in decoherence) increases "S".
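The increase of "S" can be verified directly. A minimal sketch, with the amplitudes "a,b" set to illustrative values 0.6 and 0.8: compute S = -Tr ρ ln ρ from the eigenvalues of ρ, once for the pure superposition and once for the decohered mixture.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr rho ln rho, evaluated via the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                    # drop zero eigenvalues (0 ln 0 = 0)
    return float(-np.sum(p * np.log(p)))

a, b = 0.6, 0.8                         # illustrative amplitudes
pure  = np.outer([a, b], [a, b])        # pure superposition: rank-1 projector
mixed = np.diag([a**2, b**2])           # density matrix after decoherence

print(von_neumann_entropy(pure))        # 0: no uncertainty in a pure state
print(von_neumann_entropy(mixed))       # about 0.653: the entropy has increased
```

The pure state has eigenvalues 1 and 0 and therefore zero entropy; tracing over the environment replaces it with the mixture, whose entropy -( |a|² ln|a|² + |b|² ln|b|² ) is strictly positive whenever both amplitudes are nonzero.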

So these two arrows of time coincide (and in fact, even the rate of decoherence is pretty much linked to the rate of the entropy growth) but it's not a new insight - it's an insight that sane physicists have understood since the very moment when they started to discuss decoherence, or even the density matrices (computed as partial traces).

Summary

So all these things are cool and sexy and we're used to viewing them as mysterious. And we often love the profound feelings of mystery. But in reality, there is no genuine question concerning the behavior of Schrödinger viruses (or even cats) that would remain uncertain as of 2009.

And that's the memo.

#### snail feedback (6) :

Dear Lubos,
thanks so much for this post.

I've been frustrated by hand waving about "wave function collapse" for so many years and now I have something that at least makes sense. I have to read it a few times more to make sure I really get it.


Dear Mike,

thanks for your compliments. The Palmer preprint is here (click).

It's the kind of talk about a new picture of quantum mechanics that unifies it with emergent geometry and clarifies everything, blah blah blah - something that may materialize sometime in the future.

However, this particular paper looks like another confused diatribe about Bohmian mechanics - with fractals etc. un-quantitatively added to the mixture - so I'm not gonna study it in detail. (It has 0 citations, so I am probably not the only one who decides in this way.)

Best wishes
Luboš

Mike,

I forgot to say: this text about contextual and other observables explains one key aspect of QM that people like Palmer don't understand, namely that all observables are and must be treated by the same mathematical structure.

All classical quantities are promoted to linear operators, all of them have a spectrum (eigenvalues), all these eigenvalues can only be predicted probabilistically (probabilities of different outcomes), no eigenvalue exists for "certain" prior to the measurement, and all these basic postulates are true and must be inevitably true regardless of the spectrum's discreteness, continuity, or mixed character.
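These postulates can be sketched in a few lines of numpy (the observable and the state below are my own toy choices, not anything from Palmer's paper): an observable is a Hermitian matrix, its eigenvalues are the only possible outcomes, and the Born rule assigns them probabilities.

```python
import numpy as np

# A toy observable: any Hermitian matrix will do
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)     # outcomes and their eigenstates

psi = np.array([1.0, 0.0])                        # toy state to be measured
probs = np.abs(eigenvectors.conj().T @ psi) ** 2  # Born rule: |<lambda|psi>|^2

for lam, p in zip(eigenvalues, probs):
    print(f"outcome {lam:+.0f} occurs with probability {p:.2f}")
# outcomes -1 and +1, each with probability 0.50; neither is decided beforehand
```

Note that the same recipe applies whether the spectrum is discrete (as here), continuous, or mixed; no observable gets a privileged "classical" treatment.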

Also, it doesn't matter whether one can create a good Bohmian model for a given observable or not. All observables - operators - in quantum mechanics (and in our real quantum world) are equally "real" or "unreal".

Cheers
LM