Monday, May 07, 2012

Wrong log corrections to BH entropy exclude LQG, inconsistent theories of QG

Ashoke Sen's new research is much more interesting when it comes to positive claims

The correct value of the black hole entropy is an important litmus test for assessing the consistency of candidate theories of quantum gravity.

In the 1970s, Jacob Bekenstein decided that black holes should carry a nonzero entropy that should be proportional to the area of their event horizon. But if an object has a variable nonzero entropy, it must also have a variable nonzero temperature.

Stephen Hawking made his famous calculation of the Hawking radiation and determined the black hole temperature. For any black hole, it's proportional to the gravitational acceleration at the event horizon; the coefficient is universal. For example, for the \(D=4\) Schwarzschild black hole, the temperature is \(T = 1/(8\pi M)\).
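
Restoring the SI units, the same temperature reads \(T=\hbar c^3/(8\pi G M k_B)\); here's a quick numerical check (my own illustration, not taken from any of the papers discussed below):

```python
# Hawking temperature of a Schwarzschild black hole, restoring SI units:
#   T = hbar * c**3 / (8 * pi * G * M * k_B)
# (the formula T = 1/(8*pi*M) in the text uses hbar = c = G = k = 1)
import math

hbar  = 1.054571817e-34   # J*s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # J/K
M_sun = 1.98892e30        # kg

def hawking_temperature(M):
    """Hawking temperature (in kelvins) of a Schwarzschild black hole of mass M (in kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))  # ~6.2e-8 K, far colder than the CMB
```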

(Later, Bill Unruh deduced the same temperature for an equivalently accelerating observer in flat space. The history was a bit acausal here because Unruh's calculation is arguably simpler and "more elementary", but the extra complexity hadn't deterred Hawking.)

The entropy could have been determined from thermodynamics indirectly, via \[dQ= T\cdot dS.\] For large black holes, the resulting entropy may be expressed as\[

S = \frac{A}{4G}

\] in the \(\hbar=c=k=1\) units where \(A\) is the event horizon's area. However, at that time, one couldn't calculate the entropy using the conventional methods of statistical mechanics.
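
For the \(D=4\) Schwarzschild case, this thermodynamic integration is a one-line exercise which I spell out for convenience (with \(G=1\)):\[
dS = \frac{dQ}{T} = \frac{dM}{T} = 8\pi M\,dM \quad\Rightarrow\quad S = 4\pi M^2 = \frac{16\pi M^2}{4} = \frac{A}{4},
\] using \(A = 4\pi r_s^2 = 16\pi M^2\) for the Schwarzschild radius \(r_s = 2M\).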




In statistical mechanics, the temperature (or rather \(kT/2\) in SI units) is, roughly speaking, the energy per degree of freedom. More fundamentally, the entropy is given by the formula engraved on Ludwig Boltzmann's tomb:\[

S = k\cdot \ln (N)

\] where \(N\) is the number of microstates that are macroscopically indistinguishable from the given state of the system. To calculate the entropy by statistical methods, you need to find lots of microstates – lots of ways to rearrange the system's numerous elementary building blocks.
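
A toy illustration of the counting – my own example with ordinary two-state "atoms", nothing black-hole-specific:

```python
import math

# Toy illustration of S = ln(N): a system of n independent two-state "atoms"
# has N = 2**n equally likely microstates, so S = ln(2**n) = n*ln(2) nats.
def entropy_nats(n):
    """Entropy (in nats) of n independent two-state building blocks."""
    return n * math.log(2)   # computed without ever forming the huge number 2**n

for n in (10, 1000, 10**76):
    print(n, entropy_nats(n))  # the entropy grows linearly with the number of atoms
```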

That initially looked hard if not impossible because black holes seem to be completely empty – after the short time the hole needs to digest all its food in the singularity. So they don't seem to have any internal structure or "atoms" that could remember a huge amount of information such as \(S=A/4G\). And be sure it's huge because in the relativistic quantum units, \(G\) is the Planck area, about \(10^{-70}\,{\rm m}^2\) in the case of our Universe. A black hole with a solar mass has a radius of a few kilometers, so the entropy is of order \(10^{77}\) nats (and a comparable number of bits).
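
A quick numerical sanity check of that figure (my own back-of-the-envelope script):

```python
import math

# Sanity check of the ~1e77 figure: S = A / (4 * l_P^2) for a solar-mass hole.
G, c, hbar = 6.67430e-11, 2.99792458e8, 1.054571817e-34
M_sun  = 1.989e30                # kg
l_P_sq = hbar * G / c**3         # Planck area, ~2.6e-70 m^2
r_s    = 2 * G * M_sun / c**2    # Schwarzschild radius, ~3 km
A      = 4 * math.pi * r_s**2    # horizon area, ~1.1e8 m^2
print(A / (4 * l_P_sq))          # ~1.1e77 nats
```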

People tried to find the microstates somewhere but the successful calculations were only kickstarted at the beginning of 1996 when Andrew Strominger and Cumrun Vafa calculated the entropy of the first black hole with a macroscopically large event horizon using the statistical methods. The entropy precisely agreed with the Bekenstein-Hawking prediction even though the agreement looked like a miracle.

They converted the calculation of the number of black hole microstates to a calculation of the number of microstates of a bound state of D-branes, strings, and units of momentum with the same charges. The D-branes etc. are the right description of the object for a small value of \(g\), the string coupling constant; the black holes become manifest at a large \(g\). For the microstates on both sides to match, they needed solutions with a sufficient amount of supersymmetry. However, the simplest supersymmetric black hole solutions usually have \(A=0\) in the classical limit. So they needed to pick a supersymmetric extremal black hole that carried three types of charges. That's where they got their amazing agreement for a black hole solution in five-dimensional general relativity (embedded into type II string theory).
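
In the modern D1-D5-P language – a schematic sketch that suppresses well-known subtleties – the count boils down to Cardy's formula for a two-dimensional CFT with central charge \(c = 6 Q_1 Q_5\) carrying \(n\) units of momentum:\[
S_{\rm micro} \approx 2\pi\sqrt{\frac{c\,n}{6}} = 2\pi\sqrt{Q_1 Q_5 n},
\] which exactly reproduces the Bekenstein-Hawking entropy \(A/4G\) of the corresponding five-dimensional extremal black hole.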

The calculation was later extended to many other black holes, including up to 7-parameter families of near-extremal ones, non-extremal ones, black holes in many different spacetime dimensions and many different shapes (and topologies). Corrections to all orders were more recently calculated for some classes of black holes by OSV (Ooguri-Strominger-Vafa). Some "essence" of the methods was isolated so that the entropy calculation could have been extended to certain black holes whose embedding into string theory doesn't have to be quite well-known (e.g. extremal Kerr in four dimensions).

Those were checks of the consistency of string/M-theory as a theory of quantum gravity and everything has worked perfectly. It's clear that people get bored at some point. You won't really revolutionize the field by verifying another black hole. It's interesting but it has to work. As the physicists are internalizing the reasons why string theory is a consistent theory of gravity, they are increasingly frequently saying that even the Strominger-Vafa calculation simply had to work. No expert has serious doubts that string theory gives what it should for every black hole that will be computed in the future.

Now, loop quantum gravity.

Years ago, you could read TRF articles about the quasinormal story – posts on quasinormal modes and related topics. People doing the so-called "loop quantum gravity" also wanted to claim that they could address the black hole entropy. However, what they found was just a ridiculous parody of string theory's successes.

A not-so-subtle difference was that nothing works in LQG. First of all, LQG doesn't give you any "dual" description of the black holes (or anything else). So the black holes always look like black holes in classical general relativity – except for the pathological attempt to discretize things. Now, the entropy of such discretized ordinary black holes is expected to have volume-extensive terms because there's still some junk inside the black holes. The LQG black holes have atoms; too many of them, in fact. To eliminate the volume-extensive terms, one has to truncate the description by hand, i.e. introduce a new ad hoc rule that the objects describing black holes in LQG shouldn't have any interior whatsoever. It's wrong but you must assume it.

You eliminate some other obviously wrong terms and you end up with a setup that only contains some degrees of freedom at the event horizon. If you do so, then of course the entropy has a chance to be proportional to the horizon area, by construction (you chose to hide all other terms). So far, nothing has worked correctly, so you at least hope to recover the factor of \(1/4\) in the Bekenstein-Hawking entropy. However, what you get is wrong by factors such as \(\ln(2)/\sqrt{3}\). So you rescale the coefficient by hand – with a justification that doesn't really make sense – by a factor that is claimed to be related to "quasinormal modes". An intriguing numerological hint by Shahar Hod, later analytically proved by your humble correspondent (and in another paper by me and Andy Neitzke), gives you a great hope that it will work. Except that the quasinormal-mode/entropy relationship can easily be shown to fail for every other black hole.
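
For reference, the asymptotic formula proved in those papers says that for the four-dimensional Schwarzschild black hole,\[
\omega_n \;\to\; T_H \ln 3 + 2\pi i\, T_H \left(n+\frac 12\right), \qquad n\to\infty,
\] which is where Hod's \(\ln 3\) came from; no analogous universal \(\ln 3\) – or any other number that the LQG rescaling would need – shows up in the asymptotics of other black holes.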

So it's a sequence of disasters. Those folks were ready to make their speculations arbitrarily vague and sloppy and to reduce their standards by any amount, but it still didn't work. The only reason why some people still deny that the black hole entropy tests have excluded LQG as a consistent theory of quantum gravity is that they have no scientific integrity at all. I am talking about people such as Smolin, Rovelli, and a few others.

Ashoke Sen's new paper

Ashoke Sen, who is among the world's top five experts in black hole thermodynamics, to say the least, has just analyzed another cute aspect of the black hole entropy that may be used as a test of the consistency of a proposed theory of quantum gravity:
Logarithmic Corrections to Schwarzschild and Other Non-extremal Black Hole Entropy in Different Dimensions (arXiv)
As I have suggested, the term \(S=A/4G\) is just the dominant term in the entropy of large black holes. The formula isn't quite accurate: you should imagine additional terms proportional to other (e.g. zero and negative) powers of the area \(A\), or of other parameters if there are any. However, there's one very important subleading term,\[

S = \frac{A}{4G} + K\cdot \ln{\zav{\frac{A}{4G}}}

\] one proportional to the logarithm of the area. The logarithm of one trillion is only twelve (times the logarithm of ten), which is vastly smaller than a trillion, so these extra terms are negligible for really large black holes. But they still produce contributions that grow to infinity as the black hole event horizon area does.
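
To get a feeling for the relative sizes (with an illustrative coefficient \(K\) of order one – an assumption of mine for the example, not a value from any paper):

```python
import math

# Leading area term vs. the logarithmic correction for a solar-mass black hole
# (K is an illustrative order-one coefficient).
S_leading = 1.1e77                    # A/(4G) in nats, from the estimate above
K = 1.0
correction = K * math.log(S_leading)  # ~177 nats
print(correction)
print(correction / S_leading)         # ~1.6e-75: negligible, yet unbounded as A grows
```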

Such logarithmic corrections have been calculated for various extremal black holes using the methods of Euclidean gravity; it turns out that not only the leading Bekenstein-Hawking piece but also the coefficient of the logarithm is fixed by the long-distance behavior of the black hole.

In the newest paper, Ashoke Sen calculated the coefficient of the logarithm for various non-extremal black holes, too. The world's first black hole solution is, of course, the Schwarzschild solution in \(D=4\) pure gravity (the metric tensor is the only field). Sen had to be careful about the integration over zero modes, differences between various ensembles, etc. But he was able to calculate the logarithmic correction to the number of "singlet states",\[

\Delta S_{\rm singlet} = \zav{ \frac{212}{45} - 3 } \ln a \sim +1.71111\cdot \ln a.

\] The constant \(a\) is the black hole radius in Planck units. The constant \(212/45\) looks amazingly awkward but it's still fully dictated by the consistency of the long-distance physics. Had Schwarzschild not died shortly after returning from the battlefront, and had he been able to continue his research for a century, he would have found that his black hole solution inevitably leads one to this constant. (Not a bad achievement for a guy who is exactly 100 years older than me, within two months.)
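
For the record, the coefficient quoted in two ways – \(212/45-3\) here and \(77/45\) below – is trivially the same number:

```python
from fractions import Fraction

# The coefficient of ln(a) in Sen's Schwarzschild "singlet" result:
coeff = Fraction(212, 45) - 3
print(coeff)          # 77/45
print(float(coeff))   # 1.7111...
```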

Of course, if you have a random would-be theory that claims to calculate the entropy of black holes, it's unlikely that it will get numbers such as \(212/45\) by chance. And indeed, the people in loop quantum gravity claimed that the right result should have been\[

\Delta S_{\rm singlet} = - 2 \cdot \ln a.

\] Note that the coefficient is far less "complex" and it has the wrong sign, too (just to be sure that you won't try to claim that \(1.711\) is "essentially" two – the kind of breathtaking sloppiness that is completely common in non-stringy papers on black hole microstates).

Sorry, Gentlemen, but this really disproves loop quantum gravity. The multiplicative discrepancy of \(\ln 3/\sqrt{2}\) or \(\ln 2/\sqrt 3\) that they tried to "fix" by occultist references to quasinormal modes was serious and awkward, but you could still hope that there was some simple explanation. The constant \(77/45\) in front of the logarithmic correction nails things down. I think that they know that this value can't possibly follow from loop quantum gravity or any future "refinement" of it. And indeed, the loop quantum gravity people themselves have claimed that even within their framework, the size of the logarithmic correction can't be modified by any more detailed corrections. It's non-refundable. The "undead" loop quantum gravity is really dead by now.

For the Schwarzschild solution, we're currently not able to calculate the logarithmic correction in string theory, which is, given the findings above, good news. We don't have any "canonical embedding" of the Schwarzschild black hole in string theory, after all. However, that doesn't mean that string theory hasn't passed many such logarithmic tests. It has. It is able to microscopically calculate (i.e. using the methods of statistical physics) the logarithmic corrections for extremal black holes in many vacua. And in all these cases, the results agree with the macroscopic expectations.

It's kind of fun, so let me list the vacua for which the solution already exists in the literature. For \(\NNN=4\) supersymmetric CHL models in \(D=4\) and type II on \(K3\times T^2\) with \(n_V\) matter multiplets, the logarithmic correction vanishes by the macroscopic methods, and string theory's microscopic calculation agrees.

That was too easy. A wrong theory could fabricate a zero, too. And sloppy or dishonest physicists would be more than eager to do so.

For type II on \(T^6\), the logarithmic contribution is \(-8\ln \Lambda\) where \(\Lambda\) scales like the charges. String theory agrees.

That was harder.

But now look at BMPV in type IIB on \(T^5/\ZZ_N\) or \(K3\times S^1/\ZZ_N\) with \(n_V\) vectors preserving 16 or 32 supercharges. The logarithmic corrections go like\[

-\frac{1}{4} (n_V \pm 3) \ln \Lambda

\] where the sign \(\pm\) is plus if the angular momentum vanishes and minus if it scales with the charges, \(Q_1\sim Q_5\sim n\sim \Lambda \sim J^{2/3}\). These are pretty nontrivial formulae – a strange fraction in front, a dependence on the number of fields, a strange shift – but string theory gets an A* from these logarithmic tests.
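
If you want to play with these numbers, here is a tiny evaluator of the formula above (the values of \(n_V\) and \(\Lambda\) are placeholders of mine, not data from Sen's paper):

```python
import math

# Evaluating -(1/4)*(n_V ± 3)*ln(Lambda) for the BMPV examples above;
# the sign is "+" when the angular momentum vanishes, "-" when J ~ Lambda^(3/2).
def bmpv_log_correction(n_V, Lam, rotating):
    sign = -1 if rotating else +1
    return -0.25 * (n_V + sign * 3) * math.log(Lam)

# n_V = 27 and Lambda = 1e6 are illustrative placeholders only.
for rotating in (False, True):
    print(rotating, bmpv_log_correction(27, 1e6, rotating))
```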

It seems obvious to me that string theory would also yield the right factors such as \(77/45\) for the four-dimensional Schwarzschild black hole if we embedded it in string theory and if we could make a controllable calculation that doesn't rely on supersymmetry. However, we now see that the numerical constant in the result is contrived, which suggests that the calculation should be somewhat hard. And string theory clearly knows about the complexity of the calculation. It "chooses" not to tell us the right result too quickly and that's good for "her".

Loop quantum gravity, like any other man-made would-be theory, fails to know that non-extremal black holes are really "much more complicated" when it comes to the numerical constants in similar formulae. So these theories may offer you some careless ways to "calculate" the coefficients, and you may immediately see that they are just wrong.

I want to reiterate this point. Some people think that supersymmetric theories are "more complicated" because they add new fields, new interactions, and new charges for black holes etc. But the logarithmic corrections to the black hole entropy are just another way to see that the supersymmetric and extremal objects are actually more elementary and inevitably produce simpler calculations. The people who think that the right starting point is a non-supersymmetric theory or a non-supersymmetric black hole or a non-extremal black hole and the addition of superpartner fields or extremal charges "makes things more contrived" have it upside down. The simpler, more fundamental starting point is a supersymmetric theory and one needs a potentially artificial procedure – supersymmetry breaking – to get to non-supersymmetric or non-extremal objects. Making objects non-supersymmetric inevitably adds some complexity and the appearance of the fraction \(212/45\) is an example of the complications we add if we decide to go non-supersymmetric.

