## Friday, December 17, 2010

### Event horizons and thermodynamics: more than an analogy

Black holes' exceptional thermodynamic properties have been gaining prominence for 35+ years

Thanks to Jorge P. for the screenshot.

The concept of black holes began with the "frozen stars" that Einstein and others noticed to be possible according to general relativity. However, whether they would be created in the real world remained controversial. Only in the 1960s, when John Wheeler gave "black holes" their catchy name, did people find the courage to realize that their birth is inevitable, even under relatively mild conditions. The singularity theorems made this "inevitability" very concrete.

Black holes used to be thought of as the ultimate cemeteries. If you fall into one, you're permanently doomed. Once you're fully swallowed and destroyed by the singularity, nothing is moving inside the black hole anymore. So there's no temperature, either. The region of space is totally empty and structureless.

However, things began to change in the early 1970s. Stephen Hawking was able to show that the total area of the event horizons never decreases.

When black holes swallow some surrounding matter, their mass and area obviously increase. But even if two black holes merge, the total area goes up. In 3+1 dimensions, the black hole radius is proportional to the mass. So when two black holes of mass M merge into a single black hole of mass 2M, the radius doubles and the final area is 4 times greater than either initial black hole's area, i.e. 2 times greater than the total event horizon area at the beginning.
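
The area arithmetic above can be checked in a few lines of Python (rounded SI constants; only the ratios matter):

```python
# Schwarzschild radius r = 2GM/c^2 and horizon area A = 4*pi*r^2 in 3+1 dimensions.
import math

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def horizon_area(mass_kg):
    """Area of a Schwarzschild horizon for a given mass."""
    r = 2 * G * mass_kg / c**2
    return 4 * math.pi * r**2

M = 1.989e30                    # one solar mass, kg
A_one = horizon_area(M)         # area of each initial black hole
A_final = horizon_area(2 * M)   # area after the merger

print(A_final / A_one)          # 4.0: the radius doubles, so the area quadruples
print(A_final / (2 * A_one))    # 2.0: the total horizon area doubles, as the area theorem allows
```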

Jacob Bekenstein was about to make his most important argument. The finding above - that the total area never decreases - reminds us of a similar law in physics, the second law of thermodynamics, which states that the total entropy never decreases. Is there an analogy between the entropy and the black hole horizon area?

There had to be one. After all, black holes are also physical objects, which is why they can't reduce the entropy. If you could reduce the entropy of your messy room by throwing the whole house into a black hole that has no entropy, the second law would be violated. So the black holes had better remember the entropy in a quantity that is guaranteed not to drop. The area of their event horizon was a totally natural candidate.

(If the entropy confined in the black hole interior had to stay confined forever, its value would arguably be unphysical for the observers outside, and all paradoxes following from it would lose their sharp edge. However, as we will see, black holes can disappear and the confinement ends. So we have to know what their entropy is.)

So Bekenstein realized that the black holes should actually carry some entropy,
S = coefficient * Area
For dimensional reasons, the coefficient should only depend on the fundamental constants that control black hole physics: the speed of light, Planck's constant, Boltzmann's constant, and Newton's gravitational constant. The units have to match. In particular, it follows that up to a purely numerical coefficient, the entropy (divided by Boltzmann's constant) should equal the event horizon area divided by the Planck area.
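
The dimensional argument can be made explicit: the only combination of the constants with the dimension of area is the Planck area,

```latex
% The Planck area built from c, hbar, G:
\ell_P^2 = \frac{\hbar G}{c^3} \approx 2.6\times 10^{-70}\,\mathrm{m}^2
% Hence, up to a purely numerical coefficient "const",
\frac{S}{k_B} = \mathrm{const}\cdot\frac{A}{\ell_P^2} = \mathrm{const}\cdot\frac{A\, c^3}{\hbar G}
```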

Stephen Hawking went further. He analyzed what happens to a quantum field in the background of a black hole that forms and becomes stable. He was able to show that in the future, the black hole inevitably emits some radiation. The radiation exactly coincides with Planck's black body radiation at a particular temperature. The temperature is equal, if you allow me to use the Newtonian terminology, to the gravitational acceleration at the event horizon in some natural units.

Bekenstein and Hawking thus found the maps
Entropy ↔ Black hole event horizon area
Temperature ↔ Acceleration at the black hole surface
Just to be sure, it was known that even if a black hole is rotating and its surface is not spherically symmetric, the gravitational acceleration is constant all over the event horizon. This is much like the temperature that becomes uniform once you reach thermal equilibrium.

The precise calculation - a semiclassical calculation in quantized general relativity, which is accurate enough for large enough black holes (with small curvature and matter density) - produced the coefficient as well:
S = A/4G
The Bekenstein-Hawking entropy is equal to the area divided by four times Newton's constant; the additional powers of the speed of light, Planck's constant, and Boltzmann's constant are omitted. Similarly, the normalization constant in the relationship between the temperature and the acceleration was found as well.
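
Restoring all the constants, S = k_B c³ A / (4ħG) and T = ħc³ / (8πGMk_B) for a Schwarzschild black hole. A quick numerical sanity check for a solar-mass black hole (rounded SI constants; the orders of magnitude are what matter):

```python
import math

G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B  = 1.381e-23    # Boltzmann's constant, J/K
M    = 1.989e30     # one solar mass, kg

r_s  = 2 * G * M / c**2         # Schwarzschild radius, ~3 km
A    = 4 * math.pi * r_s**2     # horizon area, ~1.1e8 m^2
l_P2 = hbar * G / c**3          # Planck area, ~2.6e-70 m^2

S_over_kB = A / (4 * l_P2)      # Bekenstein-Hawking entropy, ~1e77
T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)   # Hawking temperature, ~6e-8 K

print(f"S/k_B ~ {S_over_kB:.2e}")
print(f"T_H   ~ {T_H:.2e} K")
```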

#### Related developments

Note that Hawking's argument was "macroscopic": he first derived the temperature, from the wavelengths of the photons and other particles that are emitted, and then he used it to deduce the entropy. The entropy wasn't initially derived microscopically, by counting possible arrangements of some atoms.

Because the Planck area is so tiny, 10^{-70} square meters or so, the black hole entropy is gigantic. It was totally unclear where the ultimately boring object - a black hole - could store so much information to account for this huge entropy. We will discuss this point later.

Bill Unruh was able to derive the Hawking-like radiation that is observed by an accelerating observer. The creation and annihilation operators mix because the accelerating observer's "vacuum" is something else than the normal vacuum - it is the ground state of a different Hamiltonian, one that includes a boost generator. Unruh's calculation for the flat space - and the Rindler space, a way to look at a wedge of the flat space from the accelerating observer's viewpoint - could have been done before Hawking's calculation because it is easier, but the history of physics chose the more bizarre chronology: Unruh's radiation is a simplified model of Hawking's radiation, yet it was found after Hawking's more elaborate calculation.
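
The Unruh temperature is T = ħa / (2πck_B) for a proper acceleration a. A tiny numerical illustration shows why the effect has never been directly measured:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m/s
k_B  = 1.381e-23   # Boltzmann's constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 * pi * c * k_B) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

# Earth's surface gravity yields an absurdly small temperature, ~4e-20 K:
print(unruh_temperature(9.81))
# To reach a mere 1 kelvin, you need an acceleration of order 2.5e20 m/s^2:
print(unruh_temperature(2.5e20))
```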

It was also realized that in the Feynman path integral approach, where one can compute properties of thermal systems by making the time Euclidean and periodic, the Hawking temperature is exactly such that it allows one to continue the black hole solution (for any black hole) to a Euclidean geometry that is smooth, even near the black hole horizon. This is a very logical conclusion: the Feynman thermal path integral automatically knows about all the configurations that include the black holes and their right temperature.
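
For the Schwarzschild solution, this smoothness requirement fixes the temperature in a few lines (a standard textbook computation, in units G = c = 1 except where restored):

```latex
% Euclidean Schwarzschild, with f(r) = 1 - 2M/r:
ds^2 = f(r)\,d\tau^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2
% Near the horizon, in terms of the proper distance \rho from r = 2M, the (\rho,\tau) part becomes
ds^2 \approx d\rho^2 + \kappa^2\rho^2\,d\tau^2, \qquad \kappa = \tfrac{1}{2}f'(2M) = \frac{1}{4M}\ \ \text{(surface gravity)}
% This is a smooth plane in polar coordinates iff \kappa\tau has period 2\pi, i.e. iff
\tau \sim \tau + \beta, \qquad \beta = \frac{2\pi}{\kappa} = 8\pi M
% which reproduces the Hawking temperature T = \hbar c^3 / (8\pi G M k_B) once the units are restored.
```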

#### Ted Jacobson

In early 1995, Ted Jacobson found an intriguing way to "localize" the relationship between the temperature and entropy on one side, and the acceleration and areas on the other side. In fact, he derived Einstein's equations from a well-known thermodynamic equation. See also Einstein's equations as equations of state.

What did he do? He decided to impose the usual equation of thermodynamics, "δQ = T dS", in the following situation involving an accelerating observer:

The heat "δQ" that crosses a small surface "A" may be computed as "T_{ab} k^a k^b" (times "A", of course) for a light-like vector "k" that encodes the directions, up to some constants. It shouldn't be shocking that the quantity is just a component of the stress-energy tensor.

The change of the entropy is computed as the change of the area "A", times the usual constants above. The area changes with time because the space is curved - some kind of gravitational lensing. The temperature "T" is related to the acceleration of the accelerating observer. After some choices and manipulations, "T dS" is written in the form "R_{ab} k^a k^b" (times "A", of course).

Because the "thermodynamic" identity has to hold for any null direction "k", we learn that "R_{ab}", the Ricci tensor, is proportional to the stress-energy tensor. That is of course wrong as it stands - the Ricci tensor should be replaced by Einstein's tensor - but because only null vectors "k" were used, the derivation determines the equation only up to scalar multiples of the metric tensor, as Jacobson explains.

The right scalar multiple changes the Ricci tensor to Einstein's tensor - plus an undetermined value of the cosmological constant.
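
Schematically, the bookkeeping looks like this (η denotes the assumed entropy-per-area coefficient; units with c = k_B = 1):

```latex
% The three ingredients:
\delta Q = T\,dS, \qquad T = \frac{\hbar\kappa}{2\pi}\ \ \text{(Unruh)}, \qquad S = \eta\,A.
% The heat flux and the focusing-induced area change across a local Rindler horizon give
\delta Q \propto T_{ab}\,k^a k^b, \qquad T\,dS \propto \hbar\eta\, R_{ab}\,k^a k^b
% for every null k^a, hence, up to a scalar multiple f of the metric,
R_{ab} + f\,g_{ab} = \frac{2\pi}{\hbar\eta}\,T_{ab}.
% Conservation \nabla^a T_{ab} = 0 plus the Bianchi identity force f = -R/2 + \Lambda:
R_{ab} - \tfrac{1}{2}R\,g_{ab} + \Lambda\,g_{ab} = \frac{2\pi}{\hbar\eta}\,T_{ab},
% i.e. Einstein's equations with 8\pi G = 2\pi/(\hbar\eta), i.e. \eta = 1/(4G\hbar): S = A/(4G\hbar).
```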

It's a cute heuristic argument - an argument similar to the derivation of the gravitational redshift because it also uses local frames cleverly - but one should be very careful about its meaning. First of all, the entropy associated with the Rindler horizon is not "objectively real". It only exists relative to an accelerating observer who decides never to look at the space behind the horizon. From the viewpoint of the whole spacetime, and of coordinates that cover the whole spacetime, random areas in the spacetime obviously carry no entropy!

This basic confusion has also partly led Erik Verlinde to write down his delusions about gravity as the entropic force. The important reality is that if there are no event horizons, there is no huge black-hole-like entropy. It's completely nonsensical to assign a huge entropy to a pair of neutron stars just because their masses and distances are not far from those of similarly sized and comparably heavy black holes. Neutron stars, and even orbiting ones, carry a negligible entropy.

Jacobson also claimed a related nonsensical conclusion that already appears at the end of his abstract:
This perspective suggests that it may be no more appropriate to canonically quantize the Einstein equation than it would be to quantize the wave equation for sound in air.
This is, of course, totally preposterous. Even if the heuristic argument were totally valid from an accelerated perspective viewpoint, and this point is somewhat questionable, it can't say anything whatsoever about the "need to find a quantum description" of the reality.

Quantum mechanics is needed because we observe phenomena that prove its relevance. Objects are generally found in linear superpositions of different states. Different states have different positions, momenta etc. - and therefore also different gravitational fields. It's obvious that because some objects in the real world are demonstrably probabilistic in character, all objects have to be probabilistic because all objects may be affected by the objects whose probabilistic nature has been established.

Jacobson's reasoning at the end of the abstract (and corresponding places of the paper) is a striking reminder of the irrational thinking of some 19th century physicists - including some very well-known ones - who just couldn't live without the luminiferous aether. The fact that you may imagine that the electromagnetic or gravitational waves are analogous to the sound waves - which require atoms - doesn't imply that the electromagnetic and gravitational fields are composed of atoms similar to gas atoms in the air. They're definitely not.

One surely needs quantum mechanics to describe any object in the real world properly. Also, the entropy used in Jacobson's heuristic derivation is observer-dependent. It is not the entropy that can be defined in sensible coordinates that cover the whole spacetime; if there are no black holes in the spacetime, the latter entropy is negligible relative to the entropy attributed to the macroscopic surfaces.

#### Strominger and Vafa

Many formal and vague arguments why the black holes had the right entropy - which is huge - have been proposed by people like Steve Carlip but none of them was controllable. Everything was just handwaving. If you really want to calculate the result, and to be sure that the terms you neglected should have been neglected, among other things, you need a consistent theory of quantum gravity.

The only one that exists is string theory, but people had to wait until the Second Superstring Revolution, when many things took place. For example, Joe Polchinski found new dynamical p-dimensional objects, the Dp-branes. Using them as well as excited strings, Strominger and Vafa could finally calculate the total number of bound states of strings and branes with the right values of the three charges that are enough to produce a macroscopic event horizon around a 4+1-dimensional black hole.

In January 1996, their result exactly agreed with the Bekenstein-Hawking prediction. But in the new case, it was obtained from genuine statistical physics - from counting of the elementary degrees of freedom (that are owned by strings and branes). Black holes look like smooth curved objects at strong coupling but if you adiabatically reduce the coupling constant - and the strength of gravity - the corresponding black hole microstates will look like knitted configurations of vibrating strings and branes occupying a nearly flat space.
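
For the D1-D5-P system that Strominger and Vafa studied, the microscopic count reduces to Cardy's formula S = 2π√(c_eff·n/6) with c_eff = 6·Q1·Q5, giving S = 2π√(Q1·Q5·n), which matches the horizon area over 4G. A sketch of the counting (the normalization is the standard one for this system):

```python
import math

def cardy_entropy(c_eff, n):
    """Cardy's asymptotic log-count of states at level n in a 2D CFT of central charge c_eff."""
    return 2 * math.pi * math.sqrt(c_eff * n / 6)

def strominger_vafa_entropy(Q1, Q5, n):
    """Microscopic D1-D5-P entropy: Cardy's formula with c_eff = 6 * Q1 * Q5."""
    return cardy_entropy(6 * Q1 * Q5, n)

# For large charges the count is huge: S = 2*pi*sqrt(Q1*Q5*n).
S = strominger_vafa_entropy(100, 100, 100)
print(S)   # 2*pi*1000, roughly 6283.2
```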

Because string theory is a consistent theory of gravity, it had to work. However, the internal mechanism that makes the agreement possible remained - and still partly remains - somewhat mysterious. The term "consistency of string theory" seems to be too far-reaching, and many people often dream about more detailed explanations of the "miracles" that almost directly follow from the "consistency". The calculations on the string theory side are completely different from those performed by Hawking in his semiclassical calculation in general relativity. Nevertheless, the result is identical for large classes of black holes in many dimensions.

The black holes that have been checked include 7-parameter rotating near-extremal black holes, non-extremal black holes, black holes that can be seen in the telescopes (the extremal Kerr black hole), and others. The stringy calculation pretty much universally boils down to an SL(2,R) group identified at the horizon and Cardy's formula to count highly excited states.

There is no longer any sensible doubt that any extra black hole in string theory whose entropy is calculated in any well-established calculational framework will produce the right entropy. So further checks became largely unmotivated. But there are still some mysterious questions about black holes at the quantum level.

#### Exceptional thermodynamic properties of black holes

The black hole entropy is a fundamental property of quantized space. The information may be generally attributed to surfaces - the event horizons. In Jacobson's derivation of Einstein's equations via Rindler spaces - much like in the case of the "cosmic horizon" of the de Sitter space - these event horizons depend on the observer. But there are also event horizons that are shared by pretty much everyone who lives in the Universe - the event horizons of localized black holes that causally separate their finite internal volume from the rest of the Universe.

Black hole entropy is the initial insight that has led to the holographic principle. If the maximum information in some region may be identified with the boundary of the region, why shouldn't we be able to describe the whole dynamics by a theory that is only defined at the boundary? The AdS/CFT correspondence - where the boundary is located at infinity, where the proper distances are infinitely large - remains the only truly well-established example of the holographic principle in action.

It's highly questionable, to say the least, whether similar "correspondences" may be accurately defined for boundaries that are localized in "generic regions of space". After all, generic surfaces in space don't have a fixed shape or area, so the corresponding theory isn't fixed, either. (The AdS boundary at infinity is so large that its geometry is fixed; you would need an infinite energy to change the asymptotic conditions in an infinite spacetime.) Moreover, one should be able to find theories with finite-dimensional Hilbert spaces for finite volumes. However, no large yet finite-dimensional Hilbert spaces actually arise as a natural description of a physical system.

It seems that finite-dimensional Hilbert spaces in physics only and universally arise as truncations of infinitely-dimensional ones - in which one "microcanonically" focuses at states with the energy in a desired range (aside from other conceivable restrictions). However, the states of the right energy never seem to be separated from others at the level of the defining equations of the theory.

However, the black holes themselves, because of their ability to maximize the entropy, are also the "generic states" for a given value of the mass and charges. If you look at ever heavier elementary particles, you may go from the massless ones - such as gravitons, gluons, photons - to the neutrinos and electrons, quarks, W- and Z-bosons, Higgs bosons, superpartners, and others. Near the GUT scale and string scale, you may see new heavy gauge bosons and/or excited string states.

When you get to the Planck scale, you will inevitably see many new metastable states that are universal - the black hole microstates. For masses well above the Planck scale, almost all states - that may appear as resonances etc. - are black hole microstates. The density of these states is completely determined by long-distance physics because the heavy black holes are large in size and they are long-distance objects, with small curvatures (everywhere except for the vicinity of the singularity, which is arguably unphysical for all external observations of the object).
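
In 3+1 dimensions, a Schwarzschild black hole of mass M has S/k_B = 4π(M/M_Planck)², so the number of microstates, exp(S), grows much faster than the exp(const·M) Hagedorn-like growth of perturbative string states. A toy comparison of the log-degeneracies (the string-state slope below is an illustrative placeholder, not a derived value):

```python
import math

def log_bh_degeneracy(m):
    """ln(number of Schwarzschild microstates); m = M/M_Planck, S/k_B = 4*pi*m^2 in 4D."""
    return 4 * math.pi * m**2

def log_string_degeneracy(m, slope=4 * math.pi):
    """Hagedorn-like growth, ln(degeneracy) ~ slope * m; the slope is an illustrative choice."""
    return slope * m

# Quadratic vs linear growth: far above the Planck mass, black hole microstates dominate.
for m in (1, 10, 100):
    print(m, log_bh_degeneracy(m), log_string_degeneracy(m))
```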

#### Black holes exhibit the UV/IR connection

To summarize, black holes are the most universal localized objects in any theory that includes general relativity. They're not just random siblings of comets, planets, stars, and quasars: they're special. They maximize the density of many things. Despite the large matter densities that are needed to produce them, black holes are connected with "apparently empty" space, space without matter. Their properties therefore have something to do with the properties of the empty space itself.

The prominence of these objects survives in the quantum theory, too. In the quantum theory, they can be shown to be the highest-entropy states that are inevitably the final points of a gravitational collapse and that dominate the trans-Planckian superheavy spectrum of the quantum theory.

There exists a surprising connection between very light physics - fields at low curvatures - and the very massive states - the black hole microstates. This sort of a UV/IR connection arises because the massive black holes are very large. A consistent theory must smoothly extrapolate the "light physics" to the "heavy physics" so that the relationship continues to hold.

This is a damn tough constraint (and be sure that all the claims that it has been satisfied somewhere outside string theory are fraudulent): while the black holes must correspond to "new heavy particle species" - the black hole microstates, so that the quantum theory of gravity still continues to be a "field theory with infinitely many species" - all the statistical properties of the massive species must be determined by their long-distance classical descriptions.

That's why you can't really produce a consistent theory of quantum gravity by inventing a high-energy limit and/or inventing a low-energy limit. If you extrapolate either of them to the other end, the theory begins to misbehave. It's infinitely unlikely that it will respect the UV/IR connection I mentioned.

String theory exhibits these UV/IR connections all the time. In the context of perturbative string theory, it was really the first insight that gave birth to the whole field. I am talking about the "world sheet duality" that led the people to use the term "dual models" in the early years of string theory in the late 1960s and early 1970s. According to the world sheet duality, a thin rectangle may be rotated by 90 degrees and interpreted as a very fat rectangle.

This seemingly obvious point always links the properties of many high-energy fields and excitations to the properties of low-energy physics. And this link, the UV/IR connection, survives even non-perturbatively. Non-perturbatively, it is needed for the consistency of any quantum theory of gravity. The UV/IR connection in quantum gravity requires the non-linear equations of general relativity. So its explanation doesn't seem as easy as the rotation of a rectangle by 90 degrees. But the constraint remains equally important.

#### Black hole microstates are composite and elementary at the same time

There is another consequence of the UV/IR connection: this connection makes the black hole microstates simultaneously composite as well as elementary.

They're elementary because they can be thought of as some excitations of a collection of strings and branes. Even if you had a different theory, it would have to be able to calculate the black hole entropy from the number of excited states of "something". This "something" would play the very same role as strings in perturbative string theory. And differently excited strings must be interpreted as different fields in spacetime - in an effective field theory. The same has to hold for "something" whatever it is.

On the other hand, the number of black hole microstates is so large that there must exist many ways to excite the "something", some internal degrees of freedom.

String/M-theory remains the only theory that smoothly interpolates between the experimentally established long-distance physics and the high-mass black hole physics that is also established because of the macroscopic size of the black holes. The "map of possible physical theories" that we have become increasingly familiar with suggests that this is not due to our lack of imagination; string/M-theory is really the only mathematically possible solution.

Well, it's a theory and it has many solutions to its own equations, too. But that's a different issue. If you don't want to consider string theory a "single theory" (you should!) and you want to count the solutions to string theory individually, they're the only viable candidates for "detailed theories of quantum gravity". Their number is whatever it is.

In different situations, we have seen that there exist very many different mathematical ways to describe its physics - and their relationship to the "strings" that gave string theory its name is often confusing or absent. But it just seems inevitable that any other type of a consistent theory of quantum gravity has to be connected by some dualities or transformations or shared defining equations with a background of string theory we already know: the network of string theory in the realm of ideas has simply become way too dense already.

For this reason, we will probably continue to use the term "string theory" for any consistent definition of a theory of quantum gravity, even though the relative importance of one-dimensional objects may keep on decreasing. Whoever doesn't understand that this is the likely outcome must completely miss the fact that string/M-theory is a consistent theory of quantum gravity, even though it is not established as the accurate predictive theory of the real world, and that it contains consistent realizations of very many powerful ideas in theoretical physics.

The only way to show that one can't identify the terms "quantum gravity" and "string/M-theory" is to show that string/M-theory is actually not a consistent theory of quantum gravity. And given the miraculously overwhelming evidence of the consistency of string theory, I assure you it is not a realistic project. ;-)

Black holes' role in checking the consistency of a quantum gravity background will never disappear again. Their existence also guarantees that the "variable properties" of a particular background of quantum gravity only influence the "intermediate scales", not the "trans-Planckian heavy scales".

#### snail feedback (3) :

The calculation of the Unruh effect was done before Hawking's work - the history of physics did choose the correct chronology - because Stephen Fulling described the mixing of creation and annihilation operators in Rindler space in his 1973 paper in PRD, a year before the Black Hole Explosions? paper of Hawking's in Nature.
