Friday, May 29, 2020

Insightful corrections and the weak gravity conjecture

Natalie Wolchover wrote an unnaturally good update about the topic of "corrections to the Weak Gravity Conjecture and their far-reaching implications" for the Quanta Magazine:
Black Hole Paradoxes Reveal a Fundamental Link Between Energy and Order
I wonder whether she did it herself or whether there is a "real physicist" (one or many?) behind the article.



Centory: Take It to the Limit. 1995 was a really good year for Eurodance (and pop music).

She starts by saying that physicists like to investigate the properties (and events) in the extreme regime because qualitatively new things often happen there and it's exciting. Black holes have provided extreme enough conditions for half a century or a full century, depending on what you count as the beginning. She mentions the Hawking-Bekenstein discovery of the thermal traits of black holes in the 1970s.

Then she focuses on our 2006 Weak Gravity Conjecture (to find a new deep lesson from a black hole, you gotta take it to the limit), explains some basic equivalent forms of it, and chooses two recent papers that made some advances.



One of them is the 2018 Remmen-Cheung-Liu paper that calculated some higher-dimension operators' corrections to the naively extremal black holes. They found out that the entropy goes up because of these corrections, which helps the black holes to decay.

Well, it was really our (or my) goal to start this industry of "corrections to black holes in the WGC" before June 2006, when we finally released the paper with Megha Padi and Yevgeny Kats, two smart Harvard students. That paper recently surpassed 100 citations as well.



You know, the extremal black hole has \(M=Q\) in some rather natural units: the charge is equal to the mass. The charge may be quantized and doesn't get any "continuous" corrections; but the mass isn't naturally quantized, so it may continuously shift due to generic physical effects and interactions, and it really does so. Note that the black hole mass is equivalent to the energy via Einstein's \(E=Mc^2\). The energy is shifted by every new interaction, every new "higher-dimension operator".
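To recall where \(M=Q\) comes from (standard textbook stuff, in geometrized units where \(G=c=1\) and the charge is measured in the same units): the Reissner-Nordström solution has the horizons\[

r_\pm = M \pm \sqrt{M^2 - Q^2}

\] which are real only for \(M\geq Q\); the extremal case \(M=Q\) is the one where the two horizons coincide and the Hawking temperature drops to zero. Anything with \(M\lt Q\) would be a naked singularity in the classical theory.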

So my basic hypothesis was that the quantum corrections should generally allow \(M=Q-\epsilon\), the situation in which the black hole looks "a bit over the extremality bound": the sign of the correction to \(M-Q\) should be universal and the right one. Yevgeny and Megha did most of the hard work, of course, and I think that their or our conclusions weren't too clear-cut. The sign usually wants to be aligned with the WGC but sometimes it's left ambiguous because of some free parameters in the theory, sometimes the correction cannot be saturated near zero which is also surprising, and so on.
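Schematically – my notation, not the precise coefficients of any of the papers – a higher-derivative operator with a small coefficient \(\epsilon\) shifts the extremality bound to\[

M_{\rm ext}(Q) = Q\left(1 - \frac{c\,\epsilon}{Q^2} + O(\epsilon^2)\right)

\] and the Weak Gravity Conjecture wants \(c\,\epsilon\gt 0\), so that the formerly extremal black holes end up with \(M\lt Q\) and may decay. The \(1/Q^2\) suppression is why the corrections matter most for the small black holes.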

Of course I expected (and wanted) other people to do these analyses even more correctly and in the more relevant contexts (and effective Lagrangians), but this industry only exploded more than a decade later. Why didn't the same people already join us in 2006? Maybe they had better things to do and now they ran out of better things? Needless to say, these nice 2018-2019 papers may be close to what I wanted the three of us to write in June 2006, but we didn't quite find the beef. Surprisingly, my saying "dear Megha and Yevgeny, please write the paper about the tests of the WGC from BH higher-derivative corrections that would otherwise be written in 12-13 years from now" wasn't enough. ;-)

OK, as Wolchover discusses, the black hole entropy \(S=A/4G\) as calculated by Hawking is also just the leading approximation. It has corrections due to higher-dimensional operators as pointed out by Wald in the early 1990s – and beautifully expanded to the supersymmetric stringy context by Ooguri-Strominger-Vafa and others in the early 21st century. The identity \(M=Q\) for the extremal black holes leaves the answer to the question "which of them is really larger" as "too close to call". Classically, you think that \(M\gt Q\) almost always; but quantum mechanically, it's important (as the Weak Gravity Conjecture explains) that \(M\lt Q\) is possible in a tiny (extreme) portion of the parameter space.
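For the record, Wald's entropy is computed from the full effective Lagrangian \(\mathcal{L}\) as\[

S = -2\pi \oint_\Sigma \frac{\partial \mathcal{L}}{\partial R_{\mu\nu\rho\sigma}}\, \epsilon_{\mu\nu}\,\epsilon_{\rho\sigma}\, \sqrt{h}\,d^2x

\] where \(\Sigma\) is the horizon's cross section and \(\epsilon_{\mu\nu}\) is its binormal; for the pure Einstein-Hilbert term, the formula reduces to \(A/4G\), and the higher-derivative operators produce the corrections.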

Fine. Wolchover's article discusses the 2-year-old Remmen-Cheung-Liu article as well as the 2019 Penco-Goon paper which was accepted to PRL two months ago. Penco and Goon finally claim that the corrections to the entropy match those to the charge-minus-mass. It is nontrivial because their "correction to the entropy" is purely thermodynamic while Wald's formula is purely field-theoretic.
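If I condense their claim into a formula (my paraphrase of the paper's main relation), it says\[

\frac{\partial M_{\rm ext}(Q,\epsilon)}{\partial\epsilon} = \lim_{M\to M_{\rm ext}} \left( -T\,\frac{\partial S}{\partial\epsilon}\Big|_{M,Q} \right)

\] where \(\epsilon\) is the coefficient of the perturbation of the Lagrangian: the shift of the extremal mass at a fixed charge is given by the shift of the entropy at a fixed mass and charge, multiplied by the (near extremality, tiny) temperature, with a minus sign. So a positive correction to the entropy means a negative correction to the extremal mass – exactly the sign that the Weak Gravity Conjecture demands.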

OK, lots of people add their opinions and conclusions. Jorge Santos points out that it is really cool to see that this WGC principle – which naively talks about forces in general and has no special relationship to string theory – is nontrivially confirmed in one class of string theory situations after another. String theory really allows you to support the Weak Gravity Conjecture at least anecdotally or experimentally, in a very large number of diverse anecdotes. Because we end up saying that "electromagnetic-like forces are stronger than gravity", it is clear that the "pure gravity limit" of the theory into which other forces are being added as if they were small corrections or decorations must be inconsistent. That's why ideas like loop quantum gravity are dead. A consistent theory of quantum gravity must also have other forces and be a theory of unification that links the strengths of all these gravitational and non-gravitational forces, like string theory does! (Of course, I have been making this point for over a decade, too.)



Another Jorge, namely Jorge Pullin – whose main work is AFAIK to regularly send me new nice videos of Rube Goldberg machines but who still seems to be a part-time defender of dead ideas such as loop quantum gravity – was quoted as saying that "dead" was too strong an adjective for loop quantum gravity. A more politically correct expression could be "resting" (as in the famous sketch above) or "life challenged" (or what was George Carlin's word for "dead"?). Some variations of loop quantum gravity with other forces may also exist – they are just not being shown to us because the LQG researchers are currently focusing on sending the Rube Goldberg machine videos instead. ;-)

(Don't dare to attack Jorge for sending these cool videos, their proliferation is much more intellectually valuable than all the work that has ever been done in loop quantum gravity combined.)

OK, Jorge, you don't seem to get the point at all. We're not saying that you can't fudge a crackpot theory or distort its conclusions; you can, you have done it for decades. Instead, we say that we can prove that your preferred theory is wrong. We really have strong evidence that the "starting point" of "pure quantum gravity" where some other forces are non-existent or weak should be inconsistent. So if your framework says that the other forces are just unimportant additions (that can be done later) and the model is equally consistent with or without electromagnetism, then your whole framework is just wrong. (We have dozens of other reasons to be sure that your framework is wrong.) You haven't really gotten anywhere in finding any actual correct answers about quantum gravity – what is generally allowed and what is generally forbidden when gravity and quanta get combined.

Gary Shiu of Wisconsin (who was not previously at Columbia, thanks, Bill LOL) also added a rather bizarre comment. "While stupider colleagues naively expect the entropy to go up, I can show you counterexamples. They are acausal but they are great and imply that I am smarter." I am sorry, Gary, but you are full of *it and the asterisk stands for "š". The increasing entropy is the second law of thermodynamics and it is valid totally universally, regardless of the spectrum of forces or the detailed list of higher-dimension operators.

I have written the proof many times but I think it is wrong to just send you elsewhere. So let us post the proof again. The evolution \(A\to B\) between two ensembles of microstates \(A,B\) where \(S(B)\gt S(A)\) is more likely than the opposite evolution \(CPT(B)\to CPT(A)\) because the probabilities of both evolutions may be written in terms of the probabilities of the microstates' evolutions \(P(a_j\to b_k)\) which are averaged over the initial microstates but summed over the final microstates. It's because the ensembles of microstates mean "or" and "or" translates as the "summing of probabilities" if "or" is inserted in between the final microstates; but the initial probability pie has to be "divided" (i.e. we are averaging probabilities) if "or" is placed in between the initial microstates.

For this reason, we have (I am assuming equal weights of the initial microstates; but they could also be unequal and we could have a weighted average below)\[

P(A\to B) = \frac{1}{N_A} \sum_{j,k} P(a_j\to b_k)

\] Similarly,\[

P(CPT(B)\!\to\! CPT(A)) = \frac{1}{N_B} \sum_{j,k} P(CPT(b_k)\!\to\!CPT(a_j))

\] We assume \(N_{CPT(B)} = N_B\). Because\[

P(CPT(b_k)\to CPT(a_j)) = P(a_j\to b_k),

\] we see that \(P(A\to B)\) and \(P(CPT(B)\to CPT(A))\) only differ by the numerical prefactor, i.e. by the factor of \(N_B/N_A = \exp(S_B-S_A)\). So if the entropy is increasing in \(A\to B\), then this process is \(\exp(S_B-S_A)\) times more likely than the time-reversed process. And because \(\exp(S_B-S_A)\) is an exponentially huge number and \(P(A\to B)\leq 1\), \[

P(CPT(B)\to CPT(A))\leq \exp(S_A-S_B).

\] In other words, the probability of a process with a decreasing entropy is exponentially tiny and "practically zero" for macroscopic entropy changes! Gary, I am not sure how you want to circumvent these totally universal laws. Of course, if you present "acausal" models where the identity of the past and future is obfuscated by some drugs you took, then you also obfuscate the question whether "the entropy increases towards the future or the past". But as a physicist, you shouldn't obfuscate such things or take drugs. The logical arrow of time must always exist because it is needed for consistent rules to deal with probabilities (and their sums and averages) and the thermodynamic arrow of time may be derived from the logical one and they just agree.
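If you want to see the counting at work, here is a minimal numerical sketch in Python (a hypothetical toy model of mine, not from any paper): a few dozen microstates, a random microscopically reversible transition matrix, and the two macrostates \(A,B\).

import numpy as np

# Toy model: N_A + N_B microstates; macrostate A owns N_A of them and
# macrostate B owns N_B, so S_B - S_A = log(N_B / N_A).
rng = np.random.default_rng(0)
N_A, N_B = 3, 50
N = N_A + N_B

# Microscopically reversible transition probabilities: P[i, j] = P[j, i],
# rescaled so that every row sums to at most 1 (the remainder is the
# probability of "staying put", which never produces an A <-> B jump).
P = rng.random((N, N))
P = (P + P.T) / 2
np.fill_diagonal(P, 0.0)
P /= P.sum(axis=1).max()

A, B = range(0, N_A), range(N_A, N)

# "Or" among the initial microstates divides the pie (average);
# "or" among the final microstates adds the probabilities (sum).
P_AtoB = sum(P[a, b] for a in A for b in B) / N_A
P_BtoA = sum(P[b, a] for b in B for a in A) / N_B

print(P_AtoB / P_BtoA)  # N_B / N_A = exp(S_B - S_A), about 16.67 here
print(N_B / N_A)

The printed ratio equals \(N_B/N_A\) exactly: the two double sums are identical thanks to the microscopic symmetry, so only the \(1/N_A\) versus \(1/N_B\) prefactors differ, which is the whole content of the argument above.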

Of course, there are many directions in which this research may expand. Some of the results have been interesting but I don't really think that we have "659 full-blown new discoveries that followed the WGC", as the citation count could suggest. The "right way to formulate the WGC", and whether it exists at all, remains elusive, and so does a general proof. But in the end, I think that the WGC itself shouldn't be the final word, just like the Heisenberg uncertainty principle (as an inequality \(\Delta X \cdot \Delta P \geq \dots\)) isn't the true final thing we want to find.

Instead, we want to find an explanation that uses some deeper mathematics. In Heisenberg's case, the inequality really followed from \(xp-px = i\hbar\). I think that a similar "algebraic identity"-type understanding for the WGC – and other apparently true swampland constraints – should exist.
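To remind you how the Heisenberg case works: for any two observables, the Robertson inequality gives\[

\Delta A\,\Delta B \geq \frac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|

\] so \([x,p]=i\hbar\) immediately implies \(\Delta x\,\Delta p\geq\hbar/2\). The inequality is a derived consequence; the commutator is the fundamental object.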
