Friday, January 16, 2009

Killing the information softly

The information had been complaining, until 1996 or so, in this way:



Yes, Strominger plays with his fingers, stringing her life with his words. I wanted to make sure that almost everyone finds something enjoyable in this posting. ;-)

Moshe Rozali has written something about the black hole information paradox. He praised Juan Maldacena's 2001 paper about the information inside eternal AdS black holes, the paper that eventually helped convince Stephen Hawking to admit that the information is preserved. And he discussed the preservation of the information.

Information is not lost, in principle

In the AdS/CFT context, a black hole may also be described as a generic thermal "gas" of gluons and quarks. A new particle that enters this bath will eventually distribute its energy among all the other particles.




On the other hand, the gravitational description only sees some "macroscopic" degrees of freedom of the bath rather than the detailed information about the hot gluons and quarks. So it is not shocking that in the gravitational description, it looks like the information is getting lost. But it is not.

There is a lot of information that the description in general relativity doesn't see. Its amount - proportional to the event horizon area - indicates that this information may be linked to the horizon. And the density is so huge - Planckian - that the effective theory of gravity would inevitably break down. Or at least, it would be on the edge.

This picture is morally universal. Today, we know that quantum mechanics doesn't have to be thrown out of the window because similar subtle effects can preserve - and almost certainly do preserve - the information. A detailed story in the gravitational variables is not available and we may only speculate whether such a cool story - a thrilling paper - will ever appear or not.

Damping and information loss

But Matthias Rampke, a reader of Moshe's blog, wrote something that I have already seen coming from Dmitry Podolsky's keyboard (and brain). It is a myth that is so widespread that I decided to dedicate this posting to it. Let me emphasize that Moshe, your humble correspondent, and everyone else who knows what they are talking about (yes, I admit that the size and identity of this set is disputable) agree with me. ;-)

Dmitry and Matthias write something like this:
I don’t see where information would be lost, even if the perturbations die off exponentially. They are still there after any finite amount of time, aren't they?

As a simple example, take a lightly damped classical harmonic oscillator. Its amplitude falls off exponentially. But it never comes to a complete halt if you are looking closely enough.
I've corrected some Germanisms such as "perturbetions", "Oszillator", and "falls of". :-)

Well, Matthias says that the traces of the initial state never drop strictly to zero, which means, he thinks, that the information is still there because it can be reconstructed with a microscope of exponentially increasing resolution.

Except that it can't.

Unitarity: classical and quantum

You often hear that quantum mechanics has to be unitary. What does it mean? Well, it means that the evolution operator - something that dictates how any initial state evolves - has to be unitary. What does that mean? Well, a "unitary operator" is a slightly generalized, complex version of an "orthogonal transformation", a rotation. And the "orthogonal" or "unitary" transformation is one that preserves the length of every vector.

For vectors in the Hilbert space of states, the squared length (of their projections to appropriate subspaces) is interpreted as the probability that a certain quantity has a certain value or that another condition is satisfied by the state, depending on the subspace we choose.

So the preservation of the length is the preservation of the probability. And the total probability of all alternatives or the probability of the statement "there is anything" must remain 100% at all times. ;-) When we demand a quantum mechanical theory to be unitary, we want the evolution operator to be unitary (which is pretty much equivalent to a Hermitean Hamiltonian, whenever the latter exists), and we also usually demand the physical Hilbert space to be positively (semi-)definite, to avoid negative probabilities (which would be hard to agree with observations because we rarely observe an event minus 2009 times).
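If you want to see this preservation of the length explicitly, here is a minimal numerical sketch (the Hamiltonian, the time, and the initial state below are random toy choices of mine, with hbar set to one): exponentiating a Hermitean Hamiltonian gives a unitary evolution operator, and the norm of the evolved state - i.e. the total probability - stays equal to one.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# a random Hermitean Hamiltonian on a toy 4-dimensional Hilbert space
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# the evolution operator U = exp(-i H t) is unitary
t = 1.7
U = expm(-1j * H * t)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary

# a random normalized initial state
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

psi_t = U @ psi0
print(np.linalg.norm(psi_t))                    # 1.0: the total probability is preserved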

Is there a counterpart of this statement - unitarity - in classical physics? You bet. When it comes to the last condition we added, the volume form on the phase space must be positive (to avoid negative probabilities). More importantly, the total probability must be preserved. If we deal with specific points on the phase space, it is somewhat trivial: we have one state at all times.

But if we deal with statistical distributions on the phase space, which is useful if we don't know the precise microscopic state (or we're not interested in it), this statistical distribution evolves but the total probability must remain equal to one. For sensible models of mechanics, it does. This fact is known as the Liouville theorem.
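For the simplest example, take a frictionless harmonic oscillator with unit mass and unit frequency (my toy choice): its Hamiltonian flow is literally a rotation of the phase space, so any region - and any probability distribution supported on it - keeps its area. A minimal sketch:

import numpy as np

# Liouville theorem for a frictionless oscillator (unit mass and frequency):
# the exact flow (x, p) -> (x cos t + p sin t, -x sin t + p cos t) is a rotation,
# so the area of any phase-space region is preserved.

def flow(points, t):
    x, p = points[:, 0], points[:, 1]
    return np.column_stack([x*np.cos(t) + p*np.sin(t),
                            -x*np.sin(t) + p*np.cos(t)])

def area(polygon):
    # shoelace formula for a closed polygon given by its vertices
    x, p = polygon[:, 0], polygon[:, 1]
    return 0.5*abs(np.dot(x, np.roll(p, -1)) - np.dot(p, np.roll(x, -1)))

theta = np.linspace(0.0, 2*np.pi, 200, endpoint=False)
ellipse = np.column_stack([1.0 + 0.3*np.cos(theta), 0.5*np.sin(theta)])

print(area(ellipse))             # ~0.471 = pi * 0.3 * 0.5
print(area(flow(ellipse, 7.3)))  # the same: the distribution just rotates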

Are damped processes unitary?

Imagine that you have a harmonic oscillator, just like Matthias proposes, whose initial perturbation "x(0)" is exponentially damped in time, something like
x(t) = cos(omega t) exp(-gamma t) x(0)
and similarly for the velocity if you allow me to simplify a bit. Well, in principle, you can always evolve this equation back in time to find "x(0)". But does it mean that the information is not lost? The answer is a resounding No.

The information is being killed slowly by the friction.

Of course, in actual physical processes, the information doesn't completely die "instantly", all 100% of it. That's because pretty much all candidate laws of physics (including the ugly and non-unitary ones) are continuous equations and such a lethal, abrupt murder of the information is never possible, not even a priori. No physicist was ever afraid of such a sudden death because it was absurd.

But black holes looked much like the damped harmonic oscillator that is losing the information slowly, as I will explain, and this was a problem before people realized that a huge number of additional degrees of freedom are able to remember the information, assuming tiny violations of causality that can be identified as quantum tunneling.

Fine. So you insist: why can't we reconstruct x(0)?

The problem is that if you start with an ellipse in the phase space spanned by "x,p" - or any other region you want to choose - the damping will shrink this ellipse. Its area will decrease. This result violates the Liouville theorem.
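To make the shrinking ellipse quantitative, here is a minimal sketch with a toy damping coefficient of my own choosing. Writing the damped oscillator as a linear flow, the determinant of the flow map - the factor multiplying any phase-space area - equals exp(-2 gamma t), strictly smaller than one:

import numpy as np
from scipy.linalg import expm

# damped oscillator x'' + 2*gamma*x' + x = 0 written as a linear flow
# d/dt (x, p) = A (x, p); the flow over time t is the matrix expm(A t)
# and its determinant equals exp(trace(A) * t) = exp(-2*gamma*t)
gamma = 0.1
A = np.array([[0.0, 1.0],
              [-1.0, -2.0*gamma]])

for t in [0.0, 5.0, 10.0, 20.0]:
    M = expm(A*t)
    # the two printed numbers agree: every phase-space area shrinks by exp(-2*gamma*t)
    print(t, np.linalg.det(M), np.exp(-2.0*gamma*t))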

In a more realistic model of classical mechanics, this violation wouldn't occur. Why? Well, "x(t), p(t)" are not the only two degrees of freedom in the world. There are all the atoms of air and the rope supporting the harmonic oscillator. And these degrees of freedom actually get the energy of the harmonic oscillator and transform it to heat. The oscillator couldn't lose the energy if it were not interacting with the air - or the individual molecules of the rope. They also remember the information and if you consider all atoms of the harmonic oscillator, rope, and air, the Liouville theorem will hold, much like it does for undamped oscillators.

But let's return to the picture where "x(t), p(t)" are the only two degrees of freedom and where the friction was added by hand. Can't we still reconstruct their values at "t=0" by an exponential magnification of their values at time "t", at least in principle? While such a procedure would clearly become unrealistic in practice - because the resolution of microscopes can't double every second for many years :-) - you could still think that it is possible in principle. And you would be right in the sense of strict mathematics describing classical physics. Differential equations are "in principle" reversible although the accuracy required from your measurement devices and computers exponentially skyrockets.
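You can also see numerically why the reconstruction only works "in principle": if your measurement of the damped state carries any finite error - the 10^{-9} below is a made-up figure - running the equation backwards amplifies that error by roughly exp(gamma t). A sketch, reusing the toy flow above:

import numpy as np
from scipy.linalg import expm

gamma, t = 0.1, 60.0
A = np.array([[0.0, 1.0], [-1.0, -2.0*gamma]])

z0 = np.array([1.0, 0.0])            # the "secret" initial state (x, p)
zt = expm(A*t) @ z0                  # the damped state you actually measure

noise = 1e-9 * np.random.default_rng(1).normal(size=2)   # a tiny measurement error
z0_reconstructed = expm(-A*t) @ (zt + noise)

print(np.linalg.norm(zt))                      # ~ exp(-gamma*t) ~ 2.5e-3: the signal has largely decayed
print(np.linalg.norm(z0_reconstructed - z0))   # the 1e-9 error was amplified by roughly exp(gamma*t) ~ 400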

You may simply sacrifice the Liouville theorem and insist that the information was preserved. However, statistical physics and thermodynamics care about the exponential increases of the required resolution. They have powerful tools that classify friction as an irreversible process, after all.

Nearby quantum states

Is it possible to reconstruct the information in quantum mechanics? The answer is No. That's the key point of this posting. Let me show why.

In classical physics, an exponentially damped configuration may look like
Fields(t) = Fields(t=infty) + exp(-t) QN(0)
where "Fields(t=infty)" is the asymptotic value in the far future and "QN" is a quasinormal mode or another kind of perturbation that is dying off exponentially. You might think that the same thing can be done with quantum objects, e.g. the wave function:
Psi(t) = Psi(t=infty) + exp(-t) Phi
Imagine that you normalize "Psi(t)" so that its norm equals one. Is such an evolution possible? Well, you can write this equation but it can surely not arise from a Hermitean Hamiltonian. The evolution is not unitary.

One of the consequences of orthogonal and unitary transformations is that they preserve angles. So if you transform or evolve two orthogonal vectors, "u0" and "v0", they will become new vectors "u1" and "v1" that are still orthogonal to one another. This is not an independent condition. You can prove that it actually follows from the conservation of the length as long as the conservation law holds for all vectors, including combinations of "u0" and "v0".
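A quick way to see it is to expand the squared length of a sum of two vectors,
|u0 + v0|^2 = |u0|^2 + |v0|^2 + 2 Re (u0 . v0)
Once the lengths of "u0", "v0", and "u0 + v0" (plus "u0 + i v0" in the complex case) are all preserved, the inner product - and therefore the angle - has to be preserved, too.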

But if only "Phi" depends on the initial state, and not "Psi(t=infty)", you see that the impact of the initial state is exponentially decreasing. Even if you begin with two orthogonal states, they will evolve to a state that approaches "Psi(t=infty)" in the far future. So the angle in between these two states will go to zero. It is not a unitary transformation.

You might say that you don't care about angles. But as I have already written, a map that fails to preserve the angles must also fail to preserve the lengths of some vectors. Imagine that you begin with two normalized, orthogonal microstates "alpha" and "beta" of a black hole, and they will evolve into the "Psi(t)" above, with only "Phi" different for "alpha" and "beta". At "t=infty", they go to the same "Psi(t=infty)".

Because the evolution operator is linear, it will evolve the initial state "alpha+beta", whose length equals "sqrt(2)" by the Pythagorean theorem, to "2 Psi(t=infty)" whose length is "2": a wrong length. (You could also normalize the initial length to be "1": the final one would be "sqrt(2)".)
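If you prefer to see the failure numerically, here is a minimal sketch in a toy three-dimensional Hilbert space; the particular parametrization of the decaying states is my own and is only chosen so that each of them stays normalized. The overlap of the two evolved states creeps up to one, and the image of "alpha + beta" acquires the wrong length:

import numpy as np

# toy 3-dimensional Hilbert space: alpha, beta are orthonormal initial states,
# psi_inf is the common final state they both decay towards
alpha   = np.array([1.0, 0.0, 0.0])
beta    = np.array([0.0, 1.0, 0.0])
psi_inf = np.array([0.0, 0.0, 1.0])

def evolved(initial, t):
    # "Psi(t) = Psi(t=infty) + exp(-t) Phi", with coefficients chosen
    # so that each evolved state stays normalized
    return np.sqrt(1.0 - np.exp(-2.0*t))*psi_inf + np.exp(-t)*initial

for t in [0.0, 2.0, 5.0, 10.0]:
    a_t, b_t = evolved(alpha, t), evolved(beta, t)
    overlap = np.dot(a_t, b_t)        # 0 initially, creeps towards 1: angles are not preserved
    # a linear operator would have to map alpha + beta to a_t + b_t,
    # whose length approaches 2 instead of the original sqrt(2)
    print(t, overlap, np.linalg.norm(a_t + b_t))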

Now, you could continue to try to dismantle quantum mechanics, by allowing non-linear evolution operators that rescale things to keep all the vectors normalized. You would end up with a complete mess where the laws of logic no longer hold. The probability of "A or B" for mutually exclusive "A,B" would no longer be the sum of the two probabilities, the interactions would no longer be local, and so forth.

I can write about these things later but now you should believe me: nonlinear quantum mechanics would be a complete mess. However, you might still focus on the question whether the information is preserved, regardless of the legality of the evolution of the states we used, if the state behaves as indicated in the formula above. Well, even if you accepted the crazy non-unitary evolution, the information is not preserved.

One way to see it is to calculate the entropy. A coarse-grained formula for the entropy defines it as
S = - Tr (rho ln(rho))
where "rho" is the density matrix. Imagine that you construct a density matrix out of a 50%:50% mixture of the states "alpha,beta" discussed above. Its entropy is going to be "ln(2)": it is always "ln(N)" where "N" is the number of (orthogonal) microstates if these microstates contribute equally to the density matrix.

However, this density matrix would evolve into a density matrix constructed out of the same "Psi(t=infty)": the density matrix would actually describe a pure state. Its entropy would be zero. That's bad, too. This problem is the opposite of the problem that we normally talk about: we usually start with a pure state, like "alpha" itself, and end up with a mixed state - because there's a lot of random (mixed) thermal radiation around the black hole that I have neglected. Both effects contradict unitarity.
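If you want the two entropies explicitly, here is a tiny sketch in which the two orthogonal microstates are just basis vectors of a toy two-dimensional space: the 50%:50% mixture has entropy ln(2), while any pure state - such as the common final state of the decay - has entropy zero.

import numpy as np

def von_neumann_entropy(rho):
    # S = -Tr(rho ln rho), evaluated through the eigenvalues of rho
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p*np.log(p)))

alpha = np.array([1.0, 0.0])
beta  = np.array([0.0, 1.0])

rho_mixed = 0.5*np.outer(alpha, alpha) + 0.5*np.outer(beta, beta)
rho_pure  = np.outer(alpha, alpha)          # any pure state has a rank-one density matrix

print(von_neumann_entropy(rho_mixed))       # ln(2) ~ 0.693
print(von_neumann_entropy(rho_pure))        # 0: the decay would have to erase the entropy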

At any rate, the exponential decrease of a perturbation is the textbook example of the information loss. You can't ever get a worse information loss in physics than that. All quantitative measures of the information will tend to decay exponentially, too. They will never drop strictly to zero but the information is clearly getting lost.

Small effects vs small probabilities of finite effects

You see that our conclusions were slightly different for "Psi" representing wave functions of quantum mechanics and for "Fields" representing classical observables. Where does the difference come from?

Well, there's a whole world of radical differences between classical physics and quantum physics and "Psi" is simply not another type of a classical variable, even though the "hidden variable" magicians want you to believe otherwise. But I want to focus on a specific aspect of this difference, namely the physical meaning of weak effects.

In classical physics, you could have an arbitrarily weak effect. For example, an electromagnetic wave could have been arbitrarily weak and whether or not you could listen to the radio depended on the strength of your antenna. Also, classical physicists thought that they could take arbitrarily good pictures of dancers at night, assuming a good enough brand of the camera.

But quantum mechanics dictates otherwise. Electromagnetic waves are composed of photons. The energy carried by an electromagnetic wave of frequency "f" cannot be arbitrarily small: the energy "hf" is the minimal allowed positive value because it corresponds to one photon.

Many people who haven't really understood quantum mechanics yet often imagine that the expectation value of the energy in a wave can be smaller than "hf", for example "hf/50". Well, it can be. But this expectation value is never measured in a single experiment. It is not physical, much like the forbidden energy levels of a Hydrogen atom. The expectation value is just the average of the results of many experiments.

In a single experiment, you can get either 0 photons or 1 photon, but nothing in between. Sometimes you get 0 photons and sometimes you get 1 photon and the average can be 1/50. But it is not 1/50 in a particular experiment. Quantum mechanics is quantum and the number of photons is quantized.

So when we weakened the electromagnetic wave, we replaced the arbitrarily weak electromagnetic wave and its arbitrarily small amplitude by an arbitrarily small probability that we obtain at least one photon. But the probability is something completely different than an amplitude. What's the main difference?

Well, the main difference between an amplitude and a probability is that an amplitude can be measured accurately in one experiment, at least in principle. On the other hand, if you want to "measure" a probability (or the wave function that was used to predict the probability) accurately, you need to repeat the very same experiment many times. And if you're trying to take a picture of a dancer at night, that's pretty hard because she might have problems to move in the exactly same way many times in a row. :-)

For the digital cameras at night, that's pretty difficult. A light bulb can emit 10^{18} "visible" photons per 0.01 second, the time you need for a sharp picture. In a typical situation, only 10^{14} of them will hit a dancer and only 10^{13} of them will get reflected. Out of those, 10^{8} will enter your camera. That's pretty bad. If it is a 10-megapixel camera with 10^{7} pixels, each pixel will get 10 photons on average. Statistics dictates that the number of photons is almost never exactly "N"; you will typically get "N +- sqrt(N)" of them, so with my particular numbers, a pixel will receive "10 +- 3" photons, a 30% uncertainty in the intensity. That's quite a lot of noise and the picture won't look terribly sharp. It can be even worse.
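The arithmetic above is easy to redo; here is a minimal sketch using the illustrative photon budget from the previous paragraph, with the per-pixel noise given by Poisson statistics:

import numpy as np

# illustrative photon budget from the text: bulb -> dancer -> reflection -> camera
photons_emitted   = 1e18      # visible photons per 0.01 s exposure
photons_on_dancer = 1e14
photons_reflected = 1e13
photons_in_camera = 1e8
pixels            = 1e7       # a 10-megapixel sensor

mean_per_pixel = photons_in_camera / pixels   # 10 photons per pixel
shot_noise     = np.sqrt(mean_per_pixel)      # Poisson fluctuation ~ sqrt(N)
print(mean_per_pixel, shot_noise, shot_noise/mean_per_pixel)   # 10, ~3.2, i.e. a ~30% uncertainty

# simulate a few pixels: the counts really scatter as N +- sqrt(N)
counts = np.random.default_rng(0).poisson(mean_per_pixel, size=10)
print(counts)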

Let me emphasize that this is probably not a problem of your camera manufacturer or the technology they use. Quantum mechanics implies a fundamental limitation on the sharpness of photographs of dancers that you can ever make with any camera of a fixed size. If the calculated number of photons per pixel drops well below one, you will simply see a black picture. You may perhaps extract a lower-resolution picture by combining clusters of pixels but you can never get a sharp, high-resolution picture out of the limited number of photons, not even in principle.

Well, that's why professional photographers sometimes need bigger cameras. It's not just because of their old-fashioned sentiments. ;-)

To summarize, the exponential decrease of a quantity was "irreversible" in the thermodynamic sense even in classical physics. But quantum mechanics makes this irreversibility really indisputable. If some degrees of freedom are exponentially dropping towards an equilibrium point and there are no other degrees of freedom whose properties are intensely affected by details of the initial state, it is a canonical example of a situation where someone is killing the information softly.

And that's the memo.
