Thursday, July 22, 2010

SUSY and hierarchy problem

Previous pieces of evidence supporting SUSY:

Prev: Why string theory implies supersymmetry
Prev: SUSY and dark matter
Prev: SUSY and gauge coupling unification
Next: SUSY exists because the number 3/2 can't be missing
Experimental hints suggest that the Higgs boson is almost certainly lighter than 200 GeV and very likely to be lighter than 160 GeV if not 130 GeV: its mass isn't too much larger than the mass of the W and Z bosons which are close to 80 and 90 GeV, respectively.



A basic introduction to the Higgs mechanism...

In What a light Higgs would mean for particle physics, I also explained the theoretical reasons why the Higgs can't be heavier than 800-1,000 GeV. At those values, the quartic self-interaction would be very strong and would quickly run to even stronger values, eventually falling into the "Landau pole" trap of a divergent interaction that probably makes the theory inconsistent as a separate theory (and surely useless in its perturbative form).
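
If you want to see where the Landau pole kicks in, here is a small Python sketch; the one-loop coefficient 3/(2 pi^2) and the tree-level matching "lambda_0 = m_H^2/(2 v^2)" are standard but convention-dependent assumptions, so only the orders of magnitude should be trusted:

```python
# Toy one-loop estimate of the Higgs quartic Landau pole (a sketch, not a
# full Standard Model analysis; conventions differ by order-one factors).
import math

V = 246.0  # electroweak vev in GeV

def landau_pole_scale(m_higgs):
    """Scale where 1/lambda(mu) = 1/lambda0 - b*ln(mu/m_higgs) hits zero."""
    lam0 = m_higgs**2 / (2 * V**2)   # tree-level quartic coupling
    b = 3.0 / (2 * math.pi**2)       # toy one-loop coefficient: beta = b*lambda^2
    return m_higgs * math.exp(1.0 / (b * lam0))

for mH in (130, 200, 500, 800):
    print(f"mH = {mH:3d} GeV  ->  Landau pole near {landau_pole_scale(mH):.1e} GeV")
```

A 130 GeV Higgs keeps the pole safely above the Planck scale while an 800 GeV Higgs pushes it down to a few TeV - which is the origin of the 800-1,000 GeV bound above.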

Now, let us ask: do we understand why the Higgs mass is so light?

The Planck scale is close to 10^{19} GeV and chances are that an effective field-theoretical description - without strings etc. - should be valid at least up to 10^{17} GeV or so which is not too far from the Planck scale. But the Higgs mass is just 10^{2} GeV or so, fifteen orders of magnitude smaller!




Moreover, the Higgs mass term in the Lagrangian is proportional to "mass squared" so its coefficient in the Planck units is 10^{-30} rather than just 10^{-15}. That's a pretty small number. However, there is even more shocking news.

If you want to determine what coefficient you actually have to include for the Higgs mass term in an effective theory with a near-Planckian cutoff, it is not 10^{-30}. In the Standard Model, it is actually of order one but it must be fine-tuned very accurately. Why?

Because this "bare mass" that you directly insert into the Lagrangian is going to be corrected. For example, a Higgs boson may split into a virtual top quark-antiquark pair. This process is given by a Feynman diagram with a top quark loop (circle) with two Higgs external lines leaving the loop at the opposite sides, from cubic Yukawa interaction vertices.

Such a Feynman diagram is quadratically divergent (p^4 from the integration measure; 1/p^2 from the two top quark propagators), so it shifts the Higgs squared mass by something proportional to "Planck mass squared". This divergence is multiplied by the top quark Yukawa coupling squared.
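
Numerically, the damage from the top loop is easy to estimate; the formula below is the standard one-loop expression "delta m_H^2 ~ -3 y_t^2 Lambda^2/(8 pi^2)" whose sign and prefactor are convention-dependent, so treat it as a sketch:

```python
# Order-of-magnitude size of the quadratically divergent top-loop correction.
import math

yt = 1.0          # top Yukawa coupling, close to one
Lambda = 1.0e18   # near-Planckian cutoff in GeV, as in the text

delta_mH2 = -3 * yt**2 / (8 * math.pi**2) * Lambda**2
print(f"delta(mH^2) ~ {delta_mH2:.1e} GeV^2")          # about -4e34 GeV^2
print(f"|delta mH|  ~ {abs(delta_mH2)**0.5:.1e} GeV")  # about 2e17 GeV, vs ~100 GeV
```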

Other quarks and leptons have other (and smaller) Yukawa couplings but they also contribute. Bosonic loops - with the Higgs external lines attached to a quartic vertex - give similar quadratically divergent contributions to the Higgs squared mass. Their coefficients are pretty random and opposite in sign to the fermionic ones. But there's no reason for them to cancel because bosons such as the W and Z bosons don't know anything about the top quark Yukawa coupling.

It means that the bare mass must be a number of order one but it must be adjusted so that it almost cancels against the loop corrections above, and higher-order corrections, but not quite. You may imagine that the bare Higgs squared mass in the Lagrangian is
3.14159 26535 89793 23846 26433 83280
in the squared Planck masses. But the quantum corrections contribute
-3.14159 26535 89793 23846 26433 83279
so the total observed Higgs squared mass will turn out to be
0.00000 00000 00000 00000 00000 00001
in Planck units, as expected from the experiments. It's plausible but it seems pretty crazy that such an accurate yet imperfect cancellation occurs in the high-energy effective field theory, right? This (at least aesthetic) problem with the Higgs mass is called the hierarchy problem and, aside from the cosmological constant problem, it is the most important known example of "unnaturalness" in physics.
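
You can make this cancellation explicit in a few lines of Python; the two 30-digit numbers are exactly the illustrative ones above:

```python
# The fine-tuning made explicit with 35-digit decimal arithmetic.
from decimal import Decimal, getcontext

getcontext().prec = 35
bare    = Decimal("3.141592653589793238462643383280")   # bare mass^2, Planck units
quantum = Decimal("-3.141592653589793238462643383279")  # quantum corrections
print(bare + quantum)   # 1E-30: the observed Higgs squared mass in Planck units
```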

In Why naturalness should be expected, I explained why uncontrived theories without adjustable (and contrived) parameters predict all dimensionless constants to be of order one - unless there is a comprehensible explanation why they're not of order one. But I noticed that this point is just not "felt" by most laymen (and by some self-described physicists).

Estimating a random sine

Let me spend some more time with it. Imagine that someone tells you that a quantity can be computed from a fundamental theory and the right formula is actually
Sin[Pi Exp[Pi Sqrt[162]]]
i.e. some mess of the same kind that you expect to get from fundamental theories. (In reality, you rarely get analytical formulae for things you care about.) Your task is to estimate the value of the number above.

Now, you see that Sqrt[162] is a pretty random number between 12 and 13. It gets multiplied by Pi, so you get almost 40. That number gets exponentiated, so you obtain something between 10^{17} and 10^{18}. And this pretty random number must be multiplied by Pi before you compute the sine (in radians).

Well, the sine of a real number is between -1 and +1. And the sine of a random large number is a random number between -1 and +1, right? Well, in this particular case, the probability distribution actually favors results that are close to +1 or -1 because the sine varies more slowly over there - and thus spends more time near the corners.

(If you care about the full formula, the distribution is uniform in the angle "phi" which means that it is "dP = k.d phi". In terms of "v = sin(phi)", assuming "phi" between "-pi/2" and "+pi/2", we have "dP = k.d arcsin(v) = k.dv / sqrt(1-v^2)" which is uniform for "v" closer to zero but diverges as "1/sqrt(1-v)" for "v" approaching "+1". So the probability that "v" - a random sine - is closer than 10^{-100} to zero goes like 10^{-100} but that it is closer than 10^{-100} to +1 or -1 goes like 10^{-50}; the latter is a bit larger.)
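
If you don't want to trust the arcsine algebra, a short Monte Carlo sketch confirms both scalings (the tolerance "eps" is an assumption, chosen large enough to get decent statistics in a few seconds):

```python
# Monte Carlo check: for phi uniform on the circle and v = sin(phi),
# P(|v| < eps) ~ 2*eps/pi  while  P(|v| > 1-eps) ~ 2*sqrt(2*eps)/pi.
import math, random

N, eps = 10**7, 1e-4
near_zero = near_one = 0
for _ in range(N):
    v = abs(math.sin(2 * math.pi * random.random()))
    near_zero += v < eps
    near_one += v > 1 - eps
print(near_zero / N, 2 * eps / math.pi)                # both ~ 6.4e-5
print(near_one / N, 2 * math.sqrt(2 * eps) / math.pi)  # both ~ 9.0e-3
```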

However, you know that it is unlikely that the sine will be very close to zero, right? I mean something like one billionth. Why should it be? Exp[Pi Sqrt[162]] would have to be very close to an integer. And that's unlikely. So if you're rational, you will make an estimate that the sine above is a number smaller than one, but of order one. And indeed, the result is -0.177, not too bad.

Now, I modify the formula just a little bit. Let us change 162 to 163. You should calculate:
Sin[Pi Exp[Pi Sqrt[163]]]
The reasoning is identical, isn't it? It's a sine of a random large number. So the result will be a random number between -1 and +1. You surely don't expect such a compact expression (the absolute value of it) to be smaller than, for example, 10^{-11}, do you? The probability that it is smaller than 10^{-11} is comparable to 10^{-11} because near the vanishing values of the sine, the distribution of possible values of a sine is nearly uniform.

Because you happen to be a millionaire (or a billionaire) who wants to be even richer, you make a 100-to-1 bet against your humble correspondent that the result can't be smaller than 10^{-11}. Those 100-to-1 odds are much more attractive than the 10^{11}-to-1 odds that would seem fair to you. You offer $200,000 and want to win $2,000 from me. Well, you shouldn't have done it. (Please ask me in fast comments how you can send your contribution.)

The result of the sine with 163 is
-2.36 x 10^{-12} = -0.00000 00000 0236
or so. You lost. Do you agree that you're surprised that you lost?
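
Both sines can be checked with arbitrary-precision arithmetic, e.g. Python's mpmath; 50 digits are plenty because "Exp[Pi Sqrt[163]]" is only about 2.6 x 10^{17}:

```python
# High-precision evaluation of both bets (needs the mpmath package).
from mpmath import mp, mpf, pi, sqrt, exp, sin

mp.dps = 50  # 50 significant digits
for n in (162, 163):
    value = sin(pi * exp(pi * sqrt(mpf(n))))
    print(n, mp.nstr(value, 10))
# 162 -> about -0.177 ; 163 -> about -2.36e-12, as quoted in the text
```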

However, in this case, I can offer you a mathematical explanation of this bad luck of yours. In Excitement about numerical coincidences, I explained that 163 is the highest Heegner number (the list is 1, 2, 3, 7, 11, 19, 43, 67, 163). And for such numbers, Exp[Pi Sqrt[H]] can be re-expanded using the j-invariant function that is known from toroidal, one-loop computations in string theory. You can actually find an explicit formula that makes it clear that such an exponential is an integer, up to a tiny correction.
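
The near-integer property is also easy to verify directly - the relevant integer is the famous "640320^3 + 744" from the j-invariant expansion:

```python
# Exp[Pi Sqrt[163]] misses the integer 640320^3 + 744 by only ~7.5e-13.
from mpmath import mp, mpf, pi, sqrt, exp

mp.dps = 40
print(mp.nstr(640320**3 + 744 - exp(pi * sqrt(mpf(163))), 5))   # ~ 7.5e-13
```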

So there was some esoteric mathematics that did explain the qualitative fact that the sine-expression with 163 was insanely tiny. However, for the Higgs boson, such an explanation is not available. You bet that the Higgs mass has to be of order one at the Planck scale but it ends up being 10^{-15} or so.

Solutions: anthropic, compositeness, extra dimensions, SUSY

The first possible explanation is that you don't give a damn about the smallness of such numbers. Sensible, impersonal probability distributions mean nothing to you, much like solid maths. The number simply has to be small for life to exist, so it's small, right?

The weakness of gravity relative to the other forces implies that stars - the results of gravitationally collapsed hydrogen - contain a large number of atoms and may burn for a long time to allow evolution. If gravity were much stronger and closer to the other forces, i.e. if the hierarchy (gap) were much more modest, the typical number of atoms in the stars would be much smaller - much closer to the natural value of one - which wouldn't be enough for evolution to take place.

This answer is an example of the anthropic reasoning. Things may be arbitrarily fine-tuned as long as they satisfy the condition that intelligent life - and maybe even we - have to exist. After all, the very main task for the anthropic principle is to create humans in His own image; all other aesthetic details or calculations are secondary. According to this philosophical principle, curiosity (and science) is a heresy. You should never be surprised and ask why. Things are what they are because the holy anthropic principle has selected us to exist and some fine-tuning was necessary. Shut up and never calculate anything. ;-)

Such an answer can be used to answer questions about any curious observation or pattern in the world around us. Except that in almost all other cases, we know that such an answer is wrong because there's actually a better way than the anthropic reasoning to explain the patterns or calculate numbers. The better way is known as science.

So if you're dissatisfied with the anthropic answer, you want to search for better ones.

Compositeness

There exists another gap between the Planck scale and the scale of a gauge theory whose large magnitude we do understand. The 1 GeV protons are much lighter than the Planck scale, too. Why is it so? The proton mass is controlled by the QCD scale, which is the scale at which the strong coupling constant becomes comparable to one - where the strong force starts to intensely confine the quarks and gluons.

It's about 150 MeV or, if you prefer the units of distance, 10^{-15} meters or so - the nuclear radius.

Why is the proton 19 orders of magnitude lighter than the Planck scale? Well, it's because - as we can explicitly calculate - the strong coupling constant is logarithmically running as a function of the scale. It is not quite a "constant"; instead, it slowly (logarithmically) depends on the energy scale where we measure it. The dependence is approximately
1/g^2 = 1/g_0^2 + K log(Lambda/Lambda_0)
So if you decide that "g_0^2" is comparable to "1/25" at the Planck scale "Lambda_0", which is a pretty natural value for a coupling constant, you may ask what is the scale "Lambda" at which "g^2" becomes of order one and the strong force begins to confine. Because "Lambda/Lambda_0" appears inside a logarithm, this ratio is actually the exponential of something like (25-1), divided by "K", so it's not shocking that you may get 17 orders of magnitude.
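
Here is a sketch that solves the formula above for the confinement scale; the coefficient "K" depends on conventions and on the particle content, so the values below are merely meant to illustrate how an order-one "K" manufactures a gigantic hierarchy:

```python
# Scale where g^2 reaches one, from 1/g^2 = 1/g0^2 + K*log(Lambda/Lambda0).
import math

def confinement_scale(Lambda0, inv_g0_sq, K):
    return Lambda0 * math.exp((1.0 - inv_g0_sq) / K)

# With 1/g0^2 = 25 at Lambda0 = 1e19 GeV, the ratio is exp(-24/K):
for K in (0.55, 0.8, 1.1):
    print(f"K = {K}:  Lambda ~ {confinement_scale(1e19, 25.0, K):.1e} GeV")
```

For "K" around 0.55 you land near 1 GeV - nineteen orders of magnitude below the starting point - from nothing more dramatic than a coupling of 1/25.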

However, the Higgs mass is not determined by any confining scale, at least not manifestly so, so we can't use the very same argument to show that it's unsurprising that the Higgs is much lighter than the Planck scale.

Well, actually, there exists a framework in which the previous paragraph is exactly false: compositeness and technicolor. In these pictures, the Higgs boson is a bound state of several smaller particles that are analogous to quarks. They're called techniquarks and the bound state itself is analogous e.g. to pions. The techniquarks would carry a new type of charge analogous to the color, the technicolor. The explanation of the Higgs-Planck gap would proceed analogously to the QCD scale.

Even though the filmmakers have used Technicolor to make color movies for decades, technicolor remains an unconvincing story in physics. It just doesn't seem to work when you try to make these things work in practice. High-precision experiments imply that even if the Higgs etc. were made out of smaller particles, those particles must be damn small, so there would still be a smaller, yet unexplained, hierarchy between the Higgs "radius" and the "radius" of its ingredients.

Also, it seems impossible to get a realistic spectrum of the techniquarks' bound states in any model that has been proposed. Technicolor also suffers from all the problems that "generically" appear in supersymmetry - such as the flavor-changing neutral currents (FCNCs). However, in the case of supersymmetry, they can usually be solved. In the case of technicolor, they seem to be lethal flaws.

Even though it would seem natural to argue that the Higgs is analogous to pions and the strong force is analogous to the weak force, it just doesn't seem to be this way. Quite obviously, the experiments are telling us that the weak force is much less strongly coupled than the strong force and its key particles seem to be elementary, indeed. So it's very likely that we need a different solution to the hierarchy problem.

(The strong and weak forces also have a different fate: one of them is confining, the other one is spontaneously broken. These two fates may actually be S-dual to each other in some moral sense so that the unbearable lightness of the Higgs could be explained along the "strong template", after all. But no one knows how to do it now.)

Extra dimensions

Any solution of the hierarchy problem is a classic nontrivial argument that can make a new model convincing - which is why model builders often start with the question "how do I solve the hierarchy problem?". If you propose a random model and hope that it agrees with the reality in detail, it's wishful thinking. Why should your model be the right one? What does Nature get for allowing your model to imitate Her?

However, when you have a solution of the hierarchy problem, it is a nontrivial argument that your model could be right - or at least, it could be on the right track. It has passed a nontrivial test because the probabilistic expectation from a generic model would be that it will fail to produce the hierarchy. That's not enough to prove your model right but Nature may be at least more likely to recognize your model as one of Her chromosomes.

Extra dimensions may explain why gravity is weak but they can only explain it if these extra dimensions are either unusually large or unusually curved. If the extra dimensions are large, like in the Arkani-Hamed-Dimopoulos-Dvali (ADD) models of 1998, and if the Standard Model is stuck on a brane so that its particles don't move in the extra dimensions, it's only gravity that propagates everywhere. By definition, gravity is the dynamics of the curvature of space. So wherever you have space, you also find gravity.

If the total volume of the extra dimensions in the fundamental high-dimensional Planck units is large, the gravitational force gets "diluted" in this large volume while the other forces don't - they're stuck on a brane. Consequently, you naturally explain why gravity is so much weaker. The price you pay is that you have to explain why the extra dimensions are so large. The new mystery is not too much smaller than the old one - but it can be smaller. To say the least, the extra-dimensional perspective may offer you new tools to solve both problems.
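
The dilution arithmetic is short; the relation "M_Pl^2 ~ M_*^{n+2} R^n" holds up to order-one factors and powers of 2 pi that I drop, and the TeV value of the fundamental scale "M_*" is an assumption:

```python
# Size of n equal ADD extra dimensions for a ~TeV fundamental scale.
M_PL = 1.2e19     # four-dimensional Planck mass, GeV
M_STAR = 1.0e3    # assumed fundamental scale, GeV
HBARC = 1.97e-16  # GeV*m conversion: 1/GeV in meters

for n in (1, 2, 3, 6):
    R = (M_PL**2 / M_STAR**(n + 2)) ** (1.0 / n)  # radius in 1/GeV
    print(f"n = {n}:  R ~ {R * HBARC:.1e} m")
# n = 1 comes out astronomically large (excluded); n = 2 comes out near a
# millimeter (up to the dropped factors), which is why short-distance
# tabletop tests of gravity became so interesting around 1998.
```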

Another, more satisfying way to explain the hierarchy problem via extra dimensions employs the warped geometry i.e. the Randall-Sundrum (RS) models. Their Universe would have 4+1 pretty large dimensions, resembling a 5-dimensional anti de Sitter space (smaller dimensions that are tiny enough to be neglected in the RS approximation may always be a part of the picture). It can be visualized as a 4-dimensional Minkowski space fibered over the 5th coordinate "y". But all the distances in the 3+1-dimensional Minkowski space are rescaled by a factor that depends on "y", typically as "exp(k.y)".

If you move in the "y" direction, the same coordinate differences in the Minkowskian 3+1 dimensions get translated to very different proper distances and proper volumes. Most of the volume may be concentrated in the region where the proper distances are large - large "g_{tt}(y)", for example. That's also where "most of the power of gravity" is hiding. We may live at places where "g_{tt}(y)" is small, at an electroweak brane, and only feel a small fraction of the force of gravity.

An advantage of this picture over ADD is that the "weakness of the gravity" naturally comes out to be an exponential of a natural number: the exponential in the warping factor wasn't fabricated to make things look better: it can actually be derived from the Minkowski slicing of the maximally symmetric anti de Sitter (negative 5D cosmological constant) space. Much like in the case of the QCD-Planck gap, we explain the largeness of the ratio by showing that it is an exponential of another number. And of course, the argument of this exponential may be much more sensible - dozens instead of quadrillions.
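
The corresponding arithmetic for the warped case is a one-liner - to bridge the sixteen orders of magnitude between a TeV and the Planck scale, the exponent "k.y" only needs to be around 37:

```python
# Exponent needed in the warp factor exp(k*y) to bridge the Planck-TeV gap.
import math
print(math.log(1.0e19 / 1.0e3))   # ~ 36.8: "dozens instead of quadrillions"
```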

The Randall-Sundrum models that would be able to solve the hierarchy problem would also predict that the LHC energy is not too far from the mass of the smallest higher-dimensional black holes that deserve the name - aside from strings and other exotic objects you expect from a fundamental theory of quantum gravity.

I personally estimate the probability of any large dimensions to be observable by the LHC to be below 2% but what do I know? To be honest, my main problem with these large or warped extra dimensions is that one has to sacrifice gauge unification and the fact that grand unification naturally occurs near the Planck scale in the conventional models with 3+1 large dimensions.

By the way, I looked at the Wikipedia article on the hierarchy problem and noticed that, aside from some not-too-relevant foam, there are many formulations that look crisp and meaningful to me. Of course, I eventually realized that before hundreds of edits were made, it was your humble correspondent who started that article, much like hundreds of others.

Supersymmetry

Supersymmetry remains the most natural solution to the hierarchy problem. For each boson, there is a partner fermion, and vice versa. If we talk about the supersymmetrized Standard Model sector, the masses of the two partners never differ by more than hundreds of GeV or 1 TeV or so - the superpartner scale (we assume low-energy SUSY).

This naturally explains why the Higgs doesn't want to be driven to the Planck mass. There are several related - and, essentially, equivalent - ways to see it.



First, if you calculate the loop diagrams explicitly, you will see that the contribution from the top quark diagram (top of the picture) that we have already discussed will be exactly canceled by the contribution from the stop squark diagram (bottom of the picture). More precisely, the cancellation would be exact if the top and stop masses were equal. Because these two masses differ by hundreds of GeV, as a result of SUSY breaking, you expect a similar "leftover" from that cancellation. The Higgs mass is naturally a sum or difference of many terms of order 1 TeV, so it is not too shocking if the result is 115 GeV or so.
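
A schematic numerical version of this cancellation is below; the log-enhanced remainder formula and the 800 GeV stop mass are illustrative assumptions, not a real two-loop computation:

```python
# Top/stop cancellation: the Lambda^2 pieces cancel because SUSY ties the
# stop's quartic coupling to yt^2; the leftover is set by the mass splitting.
import math

yt, Lambda = 1.0, 1.0e18        # top Yukawa; near-Planckian cutoff in GeV
m_top, m_stop = 173.0, 800.0    # GeV; the stop mass is an assumption

pref = 3 * yt**2 / (8 * math.pi**2)
top_loop  = -pref * Lambda**2   # fermion loop, quadratically divergent
stop_loop = +pref * Lambda**2   # scalar loop with the SUSY-related coupling
leftover  = pref * (m_stop**2 - m_top**2) * math.log(Lambda**2 / m_stop**2)

print(f"Lambda^2 pieces: {top_loop + stop_loop:.1f}")   # exactly zero
print(f"leftover ~ {leftover:.2e} GeV^2 ~ ({math.sqrt(leftover):.0f} GeV)^2")
```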

Second, the Higgs boson has a partner itself - the higgsino. It's one of the particles that are being mixed with others (photino, zino) to obtain the neutralinos. But the higgsino is a spin-1/2 fermion and there exists a natural reason why such fermions want to stay light. Note that the neutrinos are light for a good reason: for example, a strictly conserved left-handed (2-component i.e. Weyl) neutrino has to be exactly massless. There are no massive chiral fermions.

Although the masslessness is ultimately destroyed by the Majorana mass terms (or by ordinary Dirac mass terms, assuming that Nature completes the neutrinos into electron-like 4-component spinors by additional right-handed neutrinos), it can still be understood why it is a good zeroth approximation to assume that the higgsino is massless.

Because the Higgs boson is linked to the higgsino by supersymmetry, it would have to stay massless, too. In reality, supersymmetry has to be broken but the breaking only generates mass differences in hundreds of GeV, so the Higgs boson mass stays in this range, too. Well, all the neutralinos end up with similar masses, too.

That still doesn't quite answer why the Higgs is as light as it is - but it removes the "obvious" large contributions to its mass. You must still add a mechanism that makes the supersymmetry breaking scale - and also the superpartner masses - smaller than the Planck scale; but such an outcome is not "obviously contrived" and realistic models exist.

Supersymmetry therefore provides you with a new way to look at all these things - much like the j-invariant explanation why the sine involving 163 was so tiny. It was shocking to see the Higgs mass being so much lighter than the Planck mass before you understood SUSY. But with SUSY, if this framework is correct, the smallness becomes understandable.

That's how you may understand all hierarchies and their solutions. A dimensionless number that is vastly different from one - either smaller or greater - is simply something that should inflate and excite your curiosity because you don't expect something like that to occur by chance. If you divide observations into the interesting, striking ones and the others - and you should - an observed large hierarchy is always striking. It's one of the hints you shouldn't overlook.

Because the small or huge parameters are "qualitative violations" of the probabilistic expectation that Nature produces lots of numbers that are unlikely to be too small or too large - i.e. because the unnatural numbers are really striking "fireworks" that offend our scientific eyes - there should also exist a "qualitative explanation" why (almost) each of these unnatural numbers occurs in Nature.

Aside from the much more speculative large or warped extra dimensions, supersymmetry is an excellent candidate tool that may explain how the world actually works and why it has some features that are surprising from different viewpoints.


snail feedback (2) :


reader coraifeartaigh said...

hi lubos, why is a reasonable model of susy breaking elusive? what is the nature of the problem?


reader Lumo said...

Dear Cormac,

I would probably disagree that a "reasonable" model is elusive. There are many reasonable models. What is elusive is a unique or simple model. Or the demonstrably right one, for that matter. ;-)

Why is it less unique than e.g. electroweak symmetry breaking? Let's compare. One can make a minimal thing that breaks the electroweak symmetry - the Higgs doublet - and a minimal thing that breaks SUSY - like a superpotential of your dad's (the O'Raifeartaigh model).

However, the main difference is that the Higgs doublet itself breaks the symmetry for the rest of the Standard Model automatically. The fields that break supersymmetry must operate "completely" outside the Standard Model: they're decoupled at first. And there are simply many ways in which these hidden sectors may interact with the Standard Model indirectly - how the SUSY breaking is "mediated" to the SM: there are many types of mediation of SUSY breaking.

Maybe we're missing the minimal one - and I am eager to believe that there exists a very simple model that no one has managed to see. But we just don't see it, so we only have many pretty complex models of SUSY breaking.

I would like to emphasize that while the single Higgs doublet is the "minimal" way to break the electroweak symmetry, it's not the only one and it hasn't been shown to be the right one, either. So the only difference is the existence of a "minimal model" in the electroweak case that is missing in the SUSY case.

Cheers
LM