Tuesday, May 15, 2012

Focus point supersymmetry

The mass of the soon-to-be-discovered Higgs boson, \(125\,\GeV\) or so, is below the threshold of \(135\,\GeV\), which means that it is compatible with the MSSM, the Minimal Supersymmetric Standard Model, in which the observed Higgs boson could be the lightest one among the five faces of the God particle.

It is definitely the mass region that favors the SUSY particle content in its simple form – and it's this MSSM form that is known to lead to gauge coupling unification, which means that there exist other reasons aside from "simplicity" (which is a problematic, aesthetic guide) why a significant fraction of the phenomenological research into supersymmetry should be devoted to the MSSM. (Non-minimal models of SUSY allow the Higgs mass to be larger but they usually destroy the gauge coupling unification miracle and have other undesirable effects.)

However, \(125\,\GeV\) is still a bit larger than the values predicted by the most naive attempts to incorporate SUSY via the MSSM.




In the MSSM, one may calculate the Higgs mass. At the tree level – i.e. when we ignore all Feynman diagrams with quantum loops – the Higgs boson must be lighter than the Z-boson, which had been known to be wrong for a decade. However, the one-loop diagrams usefully correct this tree-level estimate and, especially because of the top-quark and top-squark loops, they allow the Higgs boson to be heavier than the Z-boson.
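
For orientation, the standard textbook approximation for this one-loop bound (not a formula specific to the new paper; \(M_S\) denotes a common stop soft mass and \(X_t=A_t-\mu\cot\beta\) the stop mixing parameter) reads \[

m_h^2 \lesssim m_Z^2\cos^2 2\beta + \frac{3m_t^4}{4\pi^2 v^2}\left[ \ln\frac{M_S^2}{m_t^2} + \frac{X_t^2}{M_S^2}\left(1-\frac{X_t^2}{12 M_S^2}\right) \right],\qquad v\approx 174\,\GeV.

\] The first term is the tree-level bound, at most \(m_Z\approx 91\,\GeV\); the second term is why one needs either a large logarithm (heavy stops) or a sizable \(X_t\) (large mixing, maximized at \(X_t^2=6M_S^2\)) to push \(m_h\) up to \(125\,\GeV\).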

However, if we want to achieve \(125\,\GeV\) in the MSSM, the mass of the stop squarks – the superpartners of the top quark – should be either very heavy or there should be a near-maximal mixing between the two stop squarks (see a Dec 2011 paper on these issues that just appeared in PRD), at least if we ignore other possibilities such as a large R-parity violation (see MFV SUSY, which also just appeared in PRD). Recall that the top quark is described by a Dirac spinor which may be thought of as a collaboration of two Weyl spinors or two Majorana spinors. The superpartner of each of these spinors is a complex scalar field, so we have two complex Klein-Gordon fields that describe the stop squarks. They may have two different masses – mass eigenvalues – and the mass eigenstates may be rotated relative to the basis "left-handed stop, right-handed stop". That's what we call the mixing.
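
To fix the notation (this is the standard MSSM expression, not anything specific to the paper), the two stop squarks arise as the eigenstates of the \(2\times 2\) mass matrix in the \((\tilde t_L,\tilde t_R)\) basis, \[

\mathcal{M}^2_{\tilde t} = \begin{pmatrix} m_{Q_3}^2 + m_t^2 + \Delta_L & m_t\,(A_t-\mu\cot\beta) \\ m_t\,(A_t-\mu\cot\beta) & m_{U_3}^2 + m_t^2 + \Delta_R \end{pmatrix},

\] where \(\Delta_{L,R}\) are small electroweak D-term contributions. The off-diagonal entry \(m_t X_t\) controls the mixing angle between the mass eigenstates \(\tilde t_1,\tilde t_2\); a large A-term therefore directly translates into a large mixing.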

If the stop squarks are as heavy as needed for their loop corrections to produce a Higgs boson that is as heavy as \(125\,\GeV\), well, they must be about \(10\,\TeV\) in mass which is a lot. With these heavy masses, unless we discover a new mechanism, the fine-tuning needed to produce a light enough Higgs is comparable to one part in 10,000 or more which may be a lot of fine-tuning according to some people's taste. It's still better than a quadrillion, the fine-tuning needed in the non-supersymmetric Standard Model, but even 10,000 is a large enough number to make many people feel uncomfortable. The probability that Nature got fine-tuned "naturally" in this way is just 1/10,000: it is very small.

How exactly do we quantify the fine-tuning? To discuss this question, let us follow a fresh preprint

A Natural \(125\,\GeV\) Higgs Boson in the MSSM from Focus Point Supersymmetry with A-Terms
by Jonathan L. Feng and David Sanford. It discusses "focus point" SUSY, which was previously ridiculed by Nima Arkani-Hamed. But the lethally low value of the A-term in focus point models was the only criticism in Nima's talk that I could really understand, and because this paper explicitly says that it has large A-terms, I will at least temporarily forget about all the criticism coming from Nima, regardless of his high caliber and my respect for it and him.

Fine. So let us repeat a question I asked before: How do we quantify the amount of fine-tuning? Fine-tuning is related to problems similar to the hierarchy problem, e.g. the puzzling question why the Z-boson mass is so much lower than some other scales where new physical processes exist, such as the GUT scale near \(10^{16}\,\GeV\), which is 14 orders of magnitude higher. Even if we don't know what's exactly going on near the GUT scale, we know almost certainly that something is going on over there. So there probably exists an effective theory at this scale, assuming that QFT is valid at those high but slightly sub-Planckian scales. Its Lagrangian contains some parameters such as \(a\), whatever it is.



Mr Karel Gott: Oh Miss SUSY, one who is moving in a picturesque way, I experienced shock and awe out of you but I was just a smoke for you. ... I will change that and I will break the dams. I already have a plan for that. As soon as I find you, SUSY, I will rent a cottage and I will be there with you alone.

The real problem is that \(a\) – a parameter expressed relative to the GUT scale – must be adjusted very accurately to a very particular nonzero value for the resulting low-energy parameters such as the Higgs-boson or Z-boson masses to remain small and close to their values near \(100\,\GeV\), at least with a modest accuracy. More precisely, the Z-boson mass depends very sensitively on changes of the GUT parameter \(a\). Because the Z-boson mass results from almost exact cancellations between many terms, the parameter \(a\) has to be fine-tuned with a very high accuracy if we want \(m_Z\) to be what it is with a much more modest accuracy.

Feng and Sanford quantify the amount of fine-tuning via the sensitivity coefficient \(c\) given by \[

c\equiv \max_a\{c_a\},\qquad c_a \equiv \left|\frac{\partial \ln m_Z^2}{\partial \ln a^2}\right|

\] So \(c\) is taken to be the maximum among the values of \(c_a\) calculated for the individual GUT-scale parameters \(a\). Each quantity \(c_a\) is usually larger than one and when the fine-tuning is really bad, they're much greater than one. For example, if \(c_a=10{,}000\), it means that if we change \(a^2\) by 0.0001 percent, the squared Z-boson mass will change by 1 percent. So a very fine adjustment of the fundamental parameters is needed to obtain physics that is at least qualitatively similar to the physics we know.

(The logarithms in the formula above guarantee that we talk about the percentage changes of all the parameters. Some older definitions had \(\ln a\) instead of \(\ln a^2\) in the denominator.)
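
Just to make the definition tangible, here is a tiny numerical sketch (my own illustration, not code from the paper; `mz_squared` is a hypothetical stand-in for a real spectrum calculator that would run the RG evolution down from the GUT scale) that estimates \(c_a\) by a finite-difference derivative of the logarithms:

```python
import math

def mz_squared(params):
    """Hypothetical stand-in: a real analysis would run the RG equations
    down from the GUT scale and minimize the Higgs potential to get m_Z^2.
    Here, a toy near-cancellation of two large terms mimics the
    fine-tuned situation described in the text."""
    a, b = params["a"], params["b"]
    return a**2 - 0.99999 * b**2

def sensitivity(params, name, eps=1.0e-6):
    """Finite-difference estimate of c_a = |d ln m_Z^2 / d ln a^2|."""
    up = dict(params); up[name] *= (1.0 + eps)
    dn = dict(params); dn[name] *= (1.0 - eps)
    d_ln_mz2 = math.log(mz_squared(up)) - math.log(mz_squared(dn))
    d_ln_a2 = 2.0 * (math.log(up[name]) - math.log(dn[name]))
    return abs(d_ln_mz2 / d_ln_a2)

gut_inputs = {"a": 1000.0, "b": 1000.0}   # GUT-scale parameters (toy units)
c = max(sensitivity(gut_inputs, p) for p in gut_inputs)
print(c)   # ~1e5: m_Z^2 is a tiny leftover of a near-cancellation
```

The large value of \(c\) simply quantifies how violently the output reacts to a tiny percentage change of the input.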

This fine-tuning makes particle physicists feel uncomfortable because it seems unlikely, by an ordinary Bayesian calculation of the odds, that Nature adjusted those things properly by chance, naturally. And because we don't want to introduce the anthropic excuses, we just think that a theory that requires the fundamental parameters to be adjusted very accurately (the percentage error has to be tiny) is contrived and therefore unlikely. This is just a different application of the same principle that leads many of us to disfavor the literal interpretation of the Bible, which implies that Jesus Christ should have accidentally violated many laws of hydrodynamics when he was walking on the water. It's plausible but it seems very unlikely, much like the flying saucers. The claim that the Universe was brutally fine-tuned at the beginning is analogous to the flying saucers, many of us think.

The Standard Model, which assumes that the Higgs boson is an elementary particle at least up to the GUT scale, has \(c\) comparable to roughly \(10^{30}\). It's hopelessly fine-tuned. The squared Higgs mass has a lot of contributions comparable to the squared GUT mass but they must almost exactly cancel, and only a leftover that is \(10^{30}\) times smaller survives. The GUT-scale parameters can't possibly be chosen this accurately "by chance". The only way out is to look at the values of the GUT-scale parameters that are compatible with intelligent life. In this anthropic view, the required huge fine-tuning ceases to be a problem but we may feel that we have thrown the baby out with the bath water by allowing the anthropic selection as an argument.
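
The order of magnitude is easy to check with a back-of-the-envelope estimate (mine, not a number taken from the paper): the squared Z-boson mass is the small leftover of cancellations among contributions of order the squared GUT scale, so \[

c \sim \frac{M_{GUT}^2}{m_Z^2} \approx \left(\frac{10^{16}\,\GeV}{91\,\GeV}\right)^2 \approx 10^{28},

\] which is in the same ballpark as the quoted \(10^{30}\); pushing the cutoff towards the Planck scale makes the number even larger.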

Alternatively, people thought that the Higgs boson was composite. So it was connected with no parameters at the GUT scale. Instead, there was a compositeness scale, which is higher than \(10\,\TeV\) as we know today, and this compositeness scale was low because of some logarithmic renormalization group running similar to that in QCD. This running didn't depend on any parameters \(a\) at the GUT scale, at least not much, so no fine-tuning of such parameters was needed. However, because this compositeness scale is higher than \(10\,\TeV\) or so, as we know from various LHC searches and from other methods, the required sensitivity coefficient \(c\) is of order tens of thousands. It's bad.
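
Again a quick estimate of mine rather than a number from the paper: measuring squared masses against squared masses, \[

c \sim \frac{\Lambda_{\rm comp}^2}{m_Z^2} \approx \left(\frac{10\,\TeV}{91\,\GeV}\right)^2 \approx 1.2\times 10^4,

\] which is indeed "tens of thousands".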

SUSY in some form clearly remains the most promising mechanism allowing us to reduce the fine-tuning. Feng and Sanford claim to be able to find big regions of the MSSM parameter space whose fine-tuning is as low, i.e. as tolerable, as \(c=100\). They achieve this with focus point (FP) supersymmetry which, in general, is claimed to reduce the sensitivity coefficient \(c\) about 30 times or so. What is FP SUSY?

It's a choice of the parameters at the GUT scale which are correlated in such a way that they have an interesting property: if you extrapolate them by the renormalization group to low energies, you will discover the Z-boson and other particles' masses near the electroweak scale regardless of the detailed adjustment of the GUT-scale parameters. The RG equations "focus" the curves ("beams") to the right place regardless of the initial directions. We could say that the log-enhanced corrections to the Higgs mass automatically disappear in the result, an interesting coincidence that may have other interpretations and additional justifications. The result is summarized in their equation (15). If you pick universal values of the up-type Higgs mass, up-type quark singlet mass, quark doublet mass, and the trilinear coupling \(A\) – a parameter that has the unit of mass and that used to be chosen tiny in FP but is large in the new paper – at the GUT scale, you will find simple solutions of the renormalization group equations (4)-(8) in which the sensitivity is eliminated.
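
To see what such a focusing can look like, consider a simplified one-loop caricature of the original focus point mechanism (my own illustration, with universal scalar masses and with the A-terms and gauge contributions dropped – the new paper works with large A-terms and more general boundary conditions). With \(m_{H_u}^2=m_{U_3}^2=m_{Q_3}^2=m_0^2\) at the GUT scale, the top-Yukawa terms in the one-loop RG equations read \[

16\pi^2\frac{dm_{H_u}^2}{dt}\approx 6y_t^2\,\Sigma,\qquad 16\pi^2\frac{dm_{U_3}^2}{dt}\approx 4y_t^2\,\Sigma,\qquad 16\pi^2\frac{dm_{Q_3}^2}{dt}\approx 2y_t^2\,\Sigma,

\] where \(t=\ln Q\) and \(\Sigma\equiv m_{H_u}^2+m_{U_3}^2+m_{Q_3}^2\). The sum obeys \(16\pi^2\,d\Sigma/dt = 12 y_t^2\,\Sigma\), so \(\Sigma(t)=3m_0^2\,I(t)\) with \(I(t)=\exp\left[\frac{12}{16\pi^2}\int_{t_{GUT}}^{t}y_t^2\,dt'\right]\), and therefore \[

m_{H_u}^2(t) = m_0^2 + \frac{1}{2}\left[\Sigma(t)-3m_0^2\right] = \frac{3I(t)-1}{2}\,m_0^2.

\] For the measured top-quark mass, \(I\) happens to run close to \(1/3\) near the electroweak scale, so \(m_{H_u}^2\) – and with it \(m_Z^2\), which is determined by \(m_{H_u}^2\) and \(\mu\) through the minimization of the Higgs potential – comes out almost independent of \(m_0\): the RG trajectories "focus".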

The detailed values of most (or all?) GUT-scale SUSY breaking parameters become irrelevant. But this mechanism still makes a prediction for the top-quark sector of the MSSM. It's because the four parameters with the dimension of mass are expressed in terms of two independent dimensionless parameters \(x,y\) and a single overall scale \(m_0\), i.e. three parameters in total – and four quantities parametrized by three numbers means that one correlation among them is predicted.

In particular, some FP regions of the MSSM parameter space allow relatively light stop squarks with a large mixing. Those can reduce the amount of fine-tuning to the 1-percent sensitivity which is acceptable to me and probably many others. Such a required "accident" is equivalent just to a 2-sigma "not so big miracle", roughly speaking. Such things may happen naturally.

The authors also mention papers [59] and [60] which may contain the material needed to derive the required correlations of the GUT-scale parameters from string theory. They also propose their regions of the MSSM parameter space as a new replacement or "simple update" of the mSUGRA/CMSSM parameter spaces – a few-dimensional, somewhat ad hoc subspaces of the 105-dimensional MSSM moduli space that have been rather strongly disfavored relative to some other regions by the 2011 LHC data and that many SUSY phenomenologists have already abandoned, at least silently if not loudly ;-), even though the exclusion of mSUGRA/CMSSM isn't quite complete and rock-solid yet. So they really want the future LHC searches to impose limits on their FP models rather than on mSUGRA/CMSSM.

At any rate, I want to emphasize a general point: the MSSM – the minimal supersymmetric extension of the Standard Model – remains compatible with all the observations and there are regions in it which are not only consistent with all the data but also require an amount of fine-tuning small enough to seem tolerable to me. This is partly linked to the fact that, among the limits on models of new physics, the lower limits on superpartner masses are among the mildest ones – in particular, the stop squark may still be pretty close to the top quark in mass and many other superpartners may be below \(1\,\TeV\). So supersymmetry is still the "least constrained" model of new physics which, roughly speaking, implies that it is capable of producing the least fine-tuned explanations of the unbearable lightness of the God particle's being.

And that's the memo.

