Low-energy lepton physics in the MRSSM: \((g-2)_\mu\), \(\mu\to e\gamma\), and \(\mu\to e\) conversion by Kotlarski, Stöckinger, and Stöckinger-Kim – a Polish, German, and German-Korean ;-) trio. They revisit the 2007 Kribs-Poppitz-Weiner model known as the MRSSM. The acronym stands for the same thing as MSSM with an extra "R-symmetric" inserted in between.

There's an extra \(U(1)\) R-symmetry in the model: the Standard Model particles are neutral under it while the superpartners are charged. The model also contains some special new superpartners. Compressed spectra are assumed in this scenario, so the LHC bounds aren't violated even though some superpartner masses are below \(200\GeV\). Instead of the LHC, they predict new phenomena to be seen at experiments "directly converting muons to electrons" such as COMET.

Now, the main paper I want to discuss is

String Landscape and Fermion Masses by Andriolo, Yan Li, and Tye. Henry Tye is of course a brilliant, playful man and this paper – building on some previous papers by a similar group – shows it.

I have repeatedly discussed that a scientist who tries to go beyond the well-established theories, at least a little bit, *always* needs "something like naturalness": a mental framework that tells him which values of the parameters labeling the latest established theory may be considered natural. The word "natural" really means "likely" in general, which means that there must be a statistical distribution. No special patterns that are "very unlikely" according to the distribution should appear in Nature.

The phenomenologists like the usual naturalness that is based on uniform or nearly uniform distributions. So if an angle may lie between \(0\) and \(2\pi\), for example, they assume a uniform distribution on that interval. If that's the case, it just shouldn't happen that the angle, such as the QCD theta-angle, is below \(10^{-8}\) or so. If it is this small, you had better have an explanation.

Tye et al. also have a distribution but in many important cases, it is not uniform at all. Instead, they claim to derive some distributions from a picture of string theory. Such distributions are typically non-uniform, they claim. In fact, the probability distribution \(P(\Lambda)\) for quantities such as the cosmological constant \(\Lambda\) is peaked near \(\Lambda=0\). Well, if that argument is generally true – and I haven't really understood it too well so far – it means that whenever we observe some unnaturally small values of parameters, they're evidence in favor of string theory!

That would imply a trade-off. If you were e.g. a nasty bitch who hysterically attacks naturalness, you would strengthen string theory whenever you do it, and vice versa. It's neat – with a rather big "If", of course.

OK, I want to encourage some people to read the papers more carefully – and I don't want to reproduce all the cool ideas from the papers. Instead, I want to throw in some motivating excerpts that could make the reading of their papers more intriguing. They present some arguments claiming that it's natural – likely according to some distributions peaked near zero – that the cosmological constant is "exponentially small" in the Planck units, just like observed.

The cosmological constant has been considered in previous papers by Tye et al. In the newest paper, they focus on the fermion masses. Your time may be expensive so let me extract the wonderfully simple, almost childish-looking claim they're making. They're saying that the zeroth-order distribution \(P(m)\) for the masses is also peaked near \(m=0\) but because of some statistical evaluation of the string vacua, the individual fermion masses (the eigenvalues of the mass matrices, I guess) are ultimately independently (the lack of correlation surely makes me suspicious) distributed according to a Weibull distribution, which has the form\[

f(m; k,l) = \frac{k}{l} \left( \frac{m}{l} \right)^{k-1} \exp \left[ -\left(\frac{m}{l}\right)^k \right]

\] That's their equation (3.2). And to make you really breathless, they have determined that the right values of the parameters are \(k=0.269\) and \(l=2.29\GeV\). That's quite some high-precision numerology, indeed! They have extracted these values of the parameters from the simple list of the 6 quark masses and 3 charged lepton masses.
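To get a feel for how wide a distribution with such a small \(k\) is, one can sample from it by inverting the Weibull CDF \(F(m)=1-\exp[-(m/l)^k]\). A quick stdlib-Python sketch with the quoted quark-sector parameters – purely illustrative, not anything from the paper:

```python
import math
import random

random.seed(0)
k, l = 0.269, 2.29  # the paper's quark-sector Weibull parameters (l in GeV)

def sample_weibull(k, l):
    # Inverse-CDF sampling: if u ~ U(0,1), then
    # m = l * (-ln(1-u))**(1/k) is Weibull(k, l) distributed
    u = random.random()
    return l * (-math.log(1.0 - u)) ** (1.0 / k)

masses = sorted(sample_weibull(k, l) for _ in range(10000))

# With k << 1, the draws span a huge number of orders of magnitude:
print(masses[0], masses[len(masses) // 2], masses[-1])
```

The median of the samples sits near \(l\,(\ln 2)^{1/k}\approx 0.59\GeV\), while the extremes of even a modest sample differ by more than ten orders of magnitude – exactly the hierarchy-friendly behavior discussed below.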

What's totally wonderful about this childish distribution is that the quark and charged lepton masses very nicely cover the interval of allowed masses. You may calculate the percentile \(Y\%\) for each quark or charged lepton mass and you will get 14.5%, 17.4%, 34.7%, 57.5%, 69.1%, 95.9% for the quarks, and 19.1%, 58.9%, 85% for the charged leptons. If those aren't percentiles that quasi-uniformly cover the interval from 0% to 100%, nothing is.

I have cheated a little bit. The values of \(k,l\) listed above produce the aforementioned percentiles for the six quark masses. If you want to derive the percentiles for the charged lepton masses, you need to use different values of \(k,l\) for the Weibull distribution (they also propose another possible Ansatz for the distribution). You need \(l=0.164\GeV\) for the charged leptons and... \(k=0.269\). Holy cow, the optimum value of the dimensionless parameter \(k\) is the same for the six quark masses and the three charged lepton masses. ;-)
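You can reproduce these percentiles yourself from the Weibull CDF. A stdlib-Python sketch – the mass inputs are my assumed PDG-like central values in \(\GeV\), which may differ slightly from the paper's exact inputs:

```python
import math

def weibull_cdf(m, k, l):
    """Weibull CDF: F(m) = 1 - exp(-(m/l)^k)."""
    return 1.0 - math.exp(-((m / l) ** k))

# Assumed PDG-like central values in GeV: u, d, s, c, b, t and e, mu, tau
quark_masses = [0.0022, 0.0047, 0.095, 1.27, 4.18, 173.0]
lepton_masses = [0.000511, 0.10566, 1.77686]

k = 0.269          # shared shape parameter for both sectors
l_quarks = 2.29    # scale parameter for quarks (GeV)
l_leptons = 0.164  # scale parameter for charged leptons (GeV)

quark_pct = [100 * weibull_cdf(m, k, l_quarks) for m in quark_masses]
lepton_pct = [100 * weibull_cdf(m, k, l_leptons) for m in lepton_masses]

print([round(p, 1) for p in quark_pct])   # close to 14.5 ... 95.9
print([round(p, 1) for p in lepton_pct])  # close to 19.1, 58.9, 85
```

With these inputs the nine percentiles come out within a few tenths of a percent of the quoted ones, quasi-uniformly filling the interval from 0% to 100%.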

How intriguing do I find this agreement between the values of \(k\) extracted for the two groups of fermion masses? Pretty intriguing, indeed. So while we can't predict the individual values of the 9 masses, we can describe their rather strange distribution by a functional form that only depends on 2 parameters. All these claims sound like weird numerology but at some level, I do find it plausible that there is some truth behind these observations. The Standard Model has a relatively high number of elementary particle masses, \(N\gg 1\), which means that there could be an idealized description of their behavior that uses the \(N\to\infty\) "thermodynamic" limit. And in such limits, many things could simplify and many distributions could become peaked, indeed. Assuming some genericity of our vacuum, some statistical properties of the rather large number of fermion masses in our Universe *could* become predictable, much like things become rather precisely predictable in the thermodynamic limit of statistical physics!

So I still hold the default assumption that it's silly numerology that cannot work but the possibility that it's no coincidence is attractive, indeed.

Note that the observed fermion masses – like their Yukawa couplings – differ from each other by orders of magnitude. Their Weibull distribution is capable of producing percentiles that are neither too close to 0% nor too close to 100% because \(k\lt 1\) if not \(k\ll 1\), and the distribution therefore allows masses that span many orders of magnitude. In effect, I think that they determine the scale \(l\) roughly as the geometric average of the mass eigenvalues while \(k\) is fixed by the "standard deviation" of the number of orders of magnitude by which you should deviate from the mass scale \(l\).
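This reading can be made quantitative. For a Weibull variable, \(\ln m\) follows a Gumbel distribution with mean \(\ln l - \gamma/k\) and standard deviation \((\pi/\sqrt{6})/k\), where \(\gamma\) is the Euler-Mascheroni constant, so matching the log-moments of the six quark masses gives a crude estimate of \(k\) and \(l\). This is my own moment-matching sketch, not the paper's fit, and the mass inputs are assumed PDG-like central values:

```python
import math
from statistics import mean, pstdev

EULER_GAMMA = 0.5772156649015329

# Assumed quark masses in GeV: u, d, s, c, b, t
quark_masses = [0.0022, 0.0047, 0.095, 1.27, 4.18, 173.0]
logs = [math.log(m) for m in quark_masses]

# For Weibull(k, l), ln m is Gumbel-distributed with
#   mean = ln l - gamma/k  and  std = (pi/sqrt(6))/k,
# so invert those two relations:
k_hat = (math.pi / math.sqrt(6)) / pstdev(logs)
l_hat = math.exp(mean(logs) + EULER_GAMMA / k_hat)

# Empirical geometric mean of the masses, for comparison
geo_mean = math.exp(mean(logs))

print(k_hat, l_hat, geo_mean)
```

The crude estimates land in the same ballpark as the paper's \(k=0.269\), \(l=2.29\GeV\), and the geometric mean of the quark masses (about \(0.3\GeV\)) indeed matches the Weibull geometric mean \(l\,e^{-\gamma/k}\approx 0.27\GeV\) built from the paper's parameters, supporting the interpretation above.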

Cute. 6+3 masses isn't much but it's marginally enough to start determining the values of the parameters \(k,l\) and maybe the question whether the eigenvalues want to repel each other etc.

They also apply similar "methods" to the three neutrino mass eigenvalues and remarkably enough, they claim\[

\sum_{i=1}^3 m_{\nu_i} = 0.0592\eV

\] with an error margin around 0.1%. Quite a prediction! The sum of the three neutrino masses is almost certainly smaller than \(0.066\eV\), they say, and all these statements pretty much mean that the lightest neutrino mass should be of order \(0.001\eV\), smaller than all the neutrino mass differences, roughly speaking.
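To see why the claimed sum implies such a light lightest neutrino, one can invert the normal-ordering relations \(m_2=\sqrt{m_1^2+\Delta m_{21}^2}\), \(m_3=\sqrt{m_1^2+\Delta m_{31}^2}\) numerically. A sketch with assumed rough global-fit \(\Delta m^2\) values – not necessarily the paper's exact inputs:

```python
import math

# Assumed oscillation mass-squared differences in eV^2 (normal ordering);
# rough global-fit values, not taken from the paper:
DM21_SQ = 7.5e-5
DM31_SQ = 2.5e-3

def mass_sum(m1):
    """Sum of the three neutrino masses (eV) for lightest mass m1."""
    m2 = math.sqrt(m1**2 + DM21_SQ)
    m3 = math.sqrt(m1**2 + DM31_SQ)
    return m1 + m2 + m3

# Bisect for the m1 that reproduces the claimed sum of 0.0592 eV
target, lo, hi = 0.0592, 0.0, 0.02
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mass_sum(mid) < target:
        lo = mid
    else:
        hi = mid
m1 = 0.5 * (lo + hi)

print(m1)  # of order 10^-3 eV, below both mass splittings
```

With these inputs \(m_1\) comes out around \(5\times 10^{-4}\eV\), i.e. of order \(0.001\eV\) and smaller than both \(\sqrt{\Delta m_{21}^2}\approx 0.009\eV\) and \(\sqrt{\Delta m_{31}^2}\approx 0.05\eV\), consistently with the statement above.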

Again, so far, I think it's unlikely that any of these claims are more than childish numerological games. But Tye is a smart guy. Those things are intriguing. It does seem plausible to me that because the number of parameters and elementary objects in the Standard Model is much greater than one, some statistical features of the distribution of these numerous parameters – such as the lepton masses – could be "calculable".

To understand their papers or even go beyond them, you primarily need to understand

- how these claims about the distributions may be justified by combining string theory with some clever statistical thoughts
- how they're applied to our Universe, given the known parameters, whether the agreement is good, and whether there is anything nontrivial about the fact that you may apparently fit the elephant.

As I have already mentioned, one of the suspicious claims they make is that the distribution for the mass eigenvalues factorizes – there are no correlations between the individual quark masses. They have some argument in favor of this claim but why should it be the case? Is it "exactly" the case or just in the \(N\gg 1\) limit? There are obviously many questions.

At any rate, their paper shows that in the presence of a large landscape of solutions, one's expectations about the physics predicted by string theory depend not only on the uncontroversial calculational techniques of string theory but also on some "philosophical statistical principles or a probabilistic axiomatic system for the vacua". Tye et al. have presented a picture that may be considered a competitor both to the "uniform phenomenologists' naturalness" and to the "generic string vacuum or Douglas' naturalness" – or to the really fishy anthropic methods trying to maximize the number of observers. All these "statistical additions" to the mental framework for how to think about string theory with its set of solutions are vastly different from each other. But one of them – or another one that hasn't been clearly proposed yet – could be true. At this moment, we don't really know which, if any, is true.
