## Friday, July 21, 2017

### Does weak gravity conjecture predict neutrino type, masses and cosmological constant?

String cosmologist Gary Shiu and his junior collaborator Yuta Hamada (Wisconsin) released a rather fascinating hep-th preprint today

> *Weak Gravity Conjecture, Multiple Point Principle and the Standard Model Landscape*
They combine some of the principles that are seemingly most abstract and most stringy, and use them in such a way that they seem to deduce estimates for utterly observable quantities: a realistic magnitude of the neutrino masses, their Dirac character, and a sensible value of the cosmological constant, too.

What have they done?

In 2005, when I was happily watching him, Cumrun Vafa coined the term swampland for the "lore" that was out there but hadn't been clearly articulated before. Namely the lore that even in the absence of a precisely identified vacuum of string theory, string theory seems to make some general predictions and ban certain things that would be allowed in effective quantum field theories. According to Vafa, the landscape may be large but it is still just an infinitely tiny, precious fraction embedded in a much larger and less prestigious region, the swampland – the space of possible effective field theories which is full of mud, feces, and the stinking, putrefying corpses of critics of string theory such as Mr Šmoits. Vafa's paper is less colorful but be sure that this is what he meant. ;-)

The weak gravity conjecture – the hypothesis (justified by numerous very different and complementary pieces of evidence) that the consistency of quantum gravity really demands gravity between elementary particles to be weaker than the other forces – became the best-known example of this swampland reasoning. But Cumrun and his followers have pointed out several other general predictions that may be made in string theory but not without it.
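To appreciate how comfortably our Universe obeys the inequality, compare the electrostatic repulsion and the gravitational attraction between two electrons – a minimal numerical sketch using standard SI constants:

```python
import math

# Ratio of the Coulomb force to the gravitational force between two electrons.
# The weak gravity conjecture morally demands that this ratio exceed one for
# some charged particle; for the electron it does so spectacularly.
e = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
G = 6.67430e-11          # Newton's gravitational constant [m^3 kg^-1 s^-2]
m_e = 9.1093837015e-31   # electron mass [kg]

ratio = e**2 / (4 * math.pi * eps0 * G * m_e**2)
print(f"F_Coulomb / F_gravity = {ratio:.2e}")  # about 4 x 10^42
```

So gravity between two electrons loses by some 42 orders of magnitude – the conjecture is satisfied with a gigantic margin.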

Aside from the weak gravity conjecture, Shiu and Hamada use one particular observation: that theories of quantum gravity (=string/M-theory in the most general sense) should be consistent not only in their original spacetime but it should also be possible to compactify them while preserving the consistency.

Shiu and Hamada use this principle for the Core Theory, as Frank Wilczek calls the Standard Model combined with gravity. Well, it's only the Standard Model part that is "really" exploited by Shiu and Hamada. However, the fact that the actual theory also contains quantum gravity is needed to justify the application of the quantum gravity anti-swampland principle. Their point is highly creative. When the surrounding Universe including the Standard Model is a vacuum of string/M-theory, some additional operations – such as extra compactification – should be possible with this vacuum.

On top of these swampland things, Shiu and Hamada also adopt another principle: the multiple point criticality principle of Froggatt, Nielsen, and Donald Bennett. The principle says that the parameters of a quantum field theory are chosen on the boundaries of the maximum number of phases – i.e. so that something special seems to happen over there. This principle has been used to argue that the fine-structure constant should be around $\alpha\approx 1/(136.8\pm 9)$, the top quark mass should be $m_t\approx 173\pm 5\GeV$, the Higgs mass should be $m_h\approx 135\pm 9\GeV$, and so on. The track record of this principle looks rather impressive to me. In some sense, this principle isn't just inequivalent to naturalness; it is close to its opposite. Naturalness would favor points in the bulk of a "single phase"; the multiple point criticality principle favors points in the parameter space that are of "measure zero" to a maximal power, in fact.
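In its simplest incarnation – applied to the Standard Model Higgs field $h$ – the principle demands two degenerate minima of the effective potential, one at the electroweak scale and one near the Planck scale; schematically, the conditions read

$$ V_{\rm eff}(v_{\rm EW}) = V_{\rm eff}(v_{\rm Planck}), \qquad \left.\frac{\partial V_{\rm eff}}{\partial h}\right|_{v_{\rm EW}} = \left.\frac{\partial V_{\rm eff}}{\partial h}\right|_{v_{\rm Planck}} = 0. $$

These extra conditions fix otherwise free parameters, and it is this degeneracy requirement that historically produced the top and Higgs mass estimates quoted above.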

Fine. So Shiu and Hamada take our good old Standard Model and compactify one or two of its spatial dimensions on a circle $S^1$ or a torus $T^2$ – you shouldn't be afraid of doing such things with string-theoretical vacua, and our Universe is one of them. When they compactify it, they find out that aside from the well-known modest Higgs vev, there is also a stationary point where the Higgs vev is Planckian.

So they analyze the potential as a function of the scalar fields and find out that, depending on the unknown facts about the neutrinos, these extra stationary points may be destabilized by various new decay channels. Now, they also impose the multiple point criticality principle and demand that our 4-dimensional vacuum be degenerate with the 3-dimensional compactification – where one extra spatial dimension becomes a short circle. This degeneracy is an unusual, novel, stringy application of the multiple point criticality principle, which had previously been used for boring quantum field theories only.
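Schematically – with all numerical coefficients suppressed, as a sketch rather than their actual computation – the lower-dimensional potential for the radius $R$ of the compactification circle combines the tree-level contribution of the 4D cosmological constant with the one-loop Casimir contributions of the light fields:

$$ V(R) \sim \Lambda_4 + \sum_i (-1)^{F_i}\, n_i\, \frac{c_i}{R^4}\, f_i(m_i R), $$

where $n_i$ counts the degrees of freedom of the $i$-th light field, $(-1)^{F_i}$ gives bosons and fermions opposite signs, and the profile functions $f_i$ are exponentially suppressed once $m_i R \gg 1$. Because the photon, the graviton, and the neutrinos are the lightest fields, it is the neutrino masses that decide whether the bosonic and fermionic Casimir terms can balance and produce (or destroy) the new stationary points – and that's how the meV scale sneaks into the game.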

This degeneracy basically implies that the neutrino masses must be of order $1-10\meV$. Obviously, they knew in advance that they wanted to get a similar conclusion because this conclusion seems to be most consistent with our knowledge about neutrinos. And neutrinos should be Dirac fermions, not Majorana fermions: Dirac neutrinos are needed for the spin structure to forbid a decay via Witten's bubble of nothing. On top of that, the required vacua only exist if the cosmological constant is small enough, so they also get a new justification for the smallness of the cosmological constant, which must be comparable to the fourth power of these neutrino masses – and as you may know, that is a good approximate estimate of the observed cosmological constant.
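One can check this numerology in a few lines – the quartic root of the observed dark energy density indeed lands in the meV window. (A rough sketch; the Hubble constant and the dark-energy fraction below are assumed, Planck-like values.)

```python
import math

# Quartic root of the observed dark-energy density, expressed in meV.
G = 6.674e-11              # Newton's constant [m^3 kg^-1 s^-2]
c = 2.998e8                # speed of light [m/s]
eV = 1.602e-19             # one electronvolt [J]
hbar_c = 1.97327e-7        # hbar * c [eV * m]
H0 = 67.7e3 / 3.0857e22    # assumed Hubble constant, ~67.7 km/s/Mpc, in [1/s]
Omega_L = 0.69             # assumed dark-energy fraction of the critical density

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density [kg/m^3]
rho_L = Omega_L * rho_crit * c**2          # dark-energy density [J/m^3]
rho_L_eV4 = rho_L / (eV / hbar_c**3)       # the same density in units of eV^4
print(f"Lambda^(1/4) = {rho_L_eV4**0.25 * 1e3:.1f} meV")  # roughly 2 meV
```

So $\Lambda^{1/4}\approx 2\meV$, right in the ballpark of the neutrino masses deduced above.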

Note that back in 1994, Witten still believed that the cosmological constant had to be zero and he used a compactification of our 4D spacetime down to 3D to construct an argument. In some sense, Shiu and Hamada are doing something similar – although they don't cite that paper by Witten – except that their setup is more advanced and it produces a conclusion that is compatible with the observed nonzero cosmological constant.

Jožin from the Swamp(land) – the monster from Ivan Mládek's classic Czech song – mainly eats the inhabitants of Prague. And who could have thought? He can only be dealt with effectively with the help of a crop duster.

So although these principles are abstract and at least some of them seem unproven or even "not sufficiently justified", there seems to be something correct about them, because Shiu and Hamada are able to extract rather realistic conclusions out of them. But if they are right, I think that they did much more than apply existing principles: they applied them in truly novel, creative ways.

If their apparent success were more than just a coincidence, I would love to understand the deeper reasons why the multiple criticality principle is right and many other things that are needed for a satisfactory explanation why this "had to work".