Quanta Magazine has published a report, *A New Theory to Explain the Higgs Mass* by Natalie Wolchover, that promotes a one-month-old preprint, *Cosmological Relaxation of the Electroweak Scale* by Graham, Kaplan, and Rajendran. So far, the paper has one (conference-related) citation but has already received favorable appraisals, e.g. from Giudice, Craig, and Dine – and less favorable ones, e.g. from Arkani-Hamed.

The Higgs mass, \(125\,{\rm GeV}\) or so (and with it the electroweak scale), is about \(10^{16}\) times lighter than the Planck mass, the characteristic scale of quantum gravity. Where does this large number come from? The usual wisdom, with a correction I add, is that the large number may be explained by one of three basic ideas:

- naturalness, with new physics (SUSY, compositeness) near the Higgs mass
- anthropic principle, i.e. lots of vacua with different values of the Higgs mass, mostly comparable to the Planck mass; the light Higgs vacua are chosen because they admit life like ours
- Dirac's large number hypothesis: similar large dimensionless numbers are actually functions of the "age of the Universe" which is also large (but not a universal constant) and therefore evolve, or have evolved, as the Universe was expanding; see TRF

What is the proposal and why is it a hybrid? Well, it needs inflation and an axion, possibly the QCD axion, whose role is to drive the Higgs field to a region where its mass is low.

The axion \(\phi\) is coupled to the QCD instanton density \(G\wedge G\), and the theory dynamically generates a potential for this axion that may be schematically written as\[
V(\phi) = a(\phi/f) \cos (\phi / f) + b(\phi/f)
\] where \(a,b\) are very, very slowly varying functions of the axion \(\phi\); whether this slowness may be natural is debatable. So the field \(\phi\) has many very similar minima – as in the usual anthropic explanations. Around each minimum, the Higgs mass is different. But the right one isn't chosen anthropically, by metaphysical references to the need to produce intelligent observers. Instead, the right minimum is selected by cosmology in a calculable way.
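To see how such a potential produces a dense landscape of nearly degenerate minima, here is a minimal numerical sketch. The parameterization of \(a,b\) and all the scales are my own toy choices in arbitrary units, not the authors' model; the only point is that when the envelope and the tilt vary slowly compared to the cosine, one local minimum appears per period:

```python
import numpy as np

# Toy potential (hypothetical parameterization, not the paper's):
# the amplitude a and the background b vary by only O(eps) per
# cosine period, so V has many nearly degenerate local minima.
f = 1.0      # axion decay constant (arbitrary units)
eps = 1e-3   # hypothetical slow-variation parameter, eps << 1

def V(phi):
    x = phi / f
    a = 1.0 + eps * x   # slowly growing amplitude
    b = eps * x         # slowly sloping background
    return a * np.cos(x) + b

# Count local minima on a fine grid by comparing each interior
# point to its two neighbors.
phi = np.linspace(0.0, 200 * np.pi * f, 400_000)
v = V(phi)
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
n_minima = int(is_min.sum())
print(n_minima)   # one minimum per 2*pi*f period, i.e. ~100 here
```

Because \(a,b\) drift by only \(O(\epsilon)\) per period, the minima are nearly degenerate – which is exactly what allows cosmology, rather than anthropics, to pick among them.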

As inflation continues, the Universe tries out numerous minima of the axion. Because it's the axion that drives the relaxation, it may be called the "relaxion". At some moment, this testing period stops and the Universe settles near one of the minima where the Higgs mass happens to be much lower than the Planck mass. From then on, we're left with the Standard-Model-like physics around the minimum that looks like ours. No anthropic selection is needed in their picture, and they also claim – controversially – that all the required coefficients in their model take technically natural values.
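The selection mechanism can be caricatured numerically. In the sketch below – with hypothetical toy scales chosen by me, not the paper's values – the relaxion rolls down a tilt of slope \(gM^2\), the effective Higgs mass-squared \(M^2 - g\phi\) crosses zero, the vev \(v\) grows, and QCD-like wiggles with maximal slope \(\sim \Lambda_{\rm QCD}^3 v / f\) appear; the field stops where that wiggle slope first matches the tilt:

```python
import numpy as np

# Toy scan (hypothetical numbers, not the paper's parameters).
M, g, f, lamQCD, lam = 1.0, 1e-12, 1.0, 1e-3, 0.25

# Scan phi just past the point where m2 = M**2 - g*phi crosses zero.
phi = np.linspace(M**2 / g, (M**2 / g) * (1 + 1e-6), 2_000_000)
v = np.sqrt(np.clip(g * phi - M**2, 0.0, None) / lam)  # Higgs vev v(phi)
barrier_slope = lamQCD**3 * v / f                      # max wiggle slope
tilt = g * M**2                                        # slope of the tilt
phi_stop = phi[np.argmax(barrier_slope >= tilt)]       # first stopping point
v_stop = np.sqrt((g * phi_stop - M**2) / lam)
print(v_stop / M)                                      # ~1e-3 for these inputs
```

In this caricature the stopping value \(v \sim g M^2 f/\Lambda_{\rm QCD}^3\) comes out automatically far below \(M\) for tiny \(g\) – the sense in which the light-Higgs minimum is selected by dynamics rather than by observers.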

Their specific model should probably be viewed as a guess. I don't believe it's unique, and even if it were unique, it could be modified in various ways. It's more sensible to treat it as a paradigm. As I said, it's a mixture of the naturalness explanation – because the coefficients are said to be natural – with the anthropic explanations – because there are tons of minima to choose from – and with Dirac's large number hypothesis – because the large numbers are linked to the duration of cosmological eras, although only the inflationary era (which has already ended) is relevant here.

Arkani-Hamed says that for the right minima to be chosen, inflation has to take billions of years – much, much longer than the tiny split second we usually expect. There are other features that you could consider big disadvantages of the model. For example, the extremely long range of the axion \(\phi\)'s inequivalent values conflicts at least with simple versions of the "weak gravity conjecture" arguments. The authors cite axion monodromy inflation as another example of a model in the literature that seems to circumvent this principle.

But where does the huge ratio of the Planck mass and the Higgs mass come from in their model? They need to get some large numbers from somewhere, right? Well, my understanding is that the particular values ultimately come from parameters that they need to insert by hand, mainly through \(g\ll m_{\rm Pl}\) and \(\Lambda \ll m_{\rm Pl}\). These hierarchies are said to be technically natural because the parameters \(g,\Lambda\) "break symmetries".

I tend to think that if you're satisfied with this narrow form of technical naturalness, you could find other, conceptually different, solutions. In the end, a complete microscopic model should allow you to calculate the ratio \(10^{16}\) in *some way* – perhaps as the exponential of some more natural expressions – and as far as I can see, they haven't done so.
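The classic example of what "exponential of more natural expressions" means is dimensional transmutation, where a logarithmically running coupling generates a tiny ratio of scales. A toy computation – with O(1) inputs I picked only so the output lands near the electroweak hierarchy, not values taken from the paper:

```python
import math

# Dimensional transmutation: Lambda / m_Pl = exp(-2*pi / (b * alpha))
# with an O(1) one-loop coefficient b and coupling alpha.  The numbers
# below are illustrative, chosen so the result lands near 10**-16.
b, alpha = 1.705, 0.1
ratio = math.exp(-2 * math.pi / (b * alpha))
print(math.log10(ratio))   # ~ -16: sixteen orders of magnitude from O(1) inputs
```

This is the kind of calculable origin for \(10^{16}\) that, as far as I can see, the relaxion paper does not provide.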

When it comes to the details of the model, I think that it's at most a guess, a "proof of concept", that I wouldn't take too seriously. On the other hand, the idea that models may exist that explain the large numbers in ways that are neither "full-fledged, metaphysical, anthropic explanation" nor "naturalness with new physics around the electroweak scale" is a correct one. There *are* other possibilities, possibilities that could make even the large dimensionless numbers "totally calculable" sometime in the future.
