As always, Peter Woit wrote that he did not want to believe anything; and since he does not believe one thing, he finds it natural not to believe anything else either, and therefore the whole of particle physics and string theory must be useless, and so forth. Well, in the case of the anthropic "calculations" of the supersymmetry breaking scale, those that lead to two totally contradictory "results" (Dine vs. Douglas), I would agree with his judgement. In other cases, Peter Woit's way of thinking does not seem to be relevant for anything in science.

The only technical point in his criticism is the two-loop error that "plagues" the gauge coupling unification. I've asked Lisa Randall and Nima Arkani-Hamed. Nima has tried to explain the situation with the errors to me in detail. First of all, let me emphasize that Nima finds the agreement behind gauge coupling unification amazing. Let's describe the basic situation qualitatively:

- The Standard Model has the SU(3) x SU(2) x U(1) group with three independent coupling constants
- This group can be embedded into a simple group like SU(5) or SO(10) or E_6 and physics is described by Grand Unified Theory in which the big group such as SU(5) is spontaneously broken to the Standard Model group at some high energy scale (GUT scale), much like the electroweak symmetry is broken to the electromagnetic U(1) at the electroweak scale
- Such a simple group has only a single coupling, which must therefore be the unified coupling constant of all the factors of the Standard Model group
- You must be careful about how the Standard Model group is embedded. This leads to a natural normalization of the couplings (which is slightly different from the simplest convention for the hypercharge) - this natural normalization is identical for SU(5), SO(10), and E_6 because all of them contain SU(5) as a subgroup
- The values of the coupling constants depend on the energy scale; the beta-function tells you how quickly
- The beta function has contributions from one-loop diagrams, two-loop diagrams, and the higher ones can be neglected, at least with the accuracy of current experiments; the running of each coupling is affected by all particles in your particular theory that carry charges under that group
- You can extrapolate the naturally normalized coupling constants, as measured at low energies, to high energies, using the running that you've calculated

What will you find at high energies?

- If you calculate the running in the nonsupersymmetric GUT theory, whose spectrum resembles the Standard Model at the energies between the GUT scale (10^16 GeV or so) and the low energies, you will see that the three lines don't quite meet at the same point. There are three independent intersections of the three pairs of lines
- If you do the same thing in a supersymmetric GUT theory, whose spectrum resembles the minimal supersymmetric Standard Model (MSSM) at energies below the GUT scale, you will see that these lines do meet at the same point.

- Two lines nearly always intersect, but three lines rarely intersect at a single point - so there is one non-trivial prediction that is successfully tested. This non-trivial fact is a powerful argument in favor of SUSY combined with grand unification, especially because the energy scale of the unification is also pretty close to the Planck scale of 10^19 GeV, whose fundamental character is expected in a theory of quantum gravity
- Moreover, the agreement is not just about one number. One can construct a whole unified theory, a Grand Unified Theory, that naturally combines all quarks and leptons of one generation into "efficient" representations of the grand unified group - for example, 5bar+10 of SU(5), the 16-dimensional complex spinor of SO(10), or the complex fundamental representation 27 of E_6. In the supersymmetric version, all the fermions also have bosonic superpartners

- The numerical value of the GUT scale, roughly 10^16 GeV, leads to a natural prediction for the neutrino masses - by the seesaw mechanism. According to the seesaw mechanism, the electroweak scale (roughly 250 GeV) should be the geometric average of the tiny neutrino mass scale and the huge GUT scale - a numerical prediction of neutrino masses roughly in the 10^-3 eV to 1 eV ballpark - something that seems to be confirmed experimentally
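The claims in the list above can be checked with a quick one-loop sketch. This is only a toy estimate (one loop, no threshold corrections, rounded low-energy inputs), not the two-loop analysis discussed below; the one-loop coefficients and the rough inputs at m_Z are standard textbook values:

```python
# One-loop running of the three GUT-normalized inverse couplings,
#   alpha_i^{-1}(mu) = alpha_i^{-1}(m_Z) - b_i/(2*pi) * ln(mu/m_Z).
# Toy sketch only: one loop, no thresholds, rounded inputs.
from math import log, pi

m_Z = 91.19                        # GeV
inv_alpha_mZ = [59.0, 29.6, 8.5]   # alpha_1^-1, alpha_2^-1, alpha_3^-1 at m_Z (approx.)

b_SM   = [41/10, -19/6, -7]        # Standard Model one-loop beta coefficients
b_MSSM = [33/5, 1, -3]             # MSSM one-loop beta coefficients

def run(inv_alpha, b, mu):
    """Extrapolate the inverse couplings from m_Z up to the scale mu."""
    t = log(mu / m_Z)
    return [a - bi / (2 * pi) * t for a, bi in zip(inv_alpha, b)]

mu_GUT = 2e16                      # GeV, roughly where the MSSM lines meet
sm   = run(inv_alpha_mZ, b_SM, mu_GUT)
mssm = run(inv_alpha_mZ, b_MSSM, mu_GUT)
print("SM:  ", sm)                 # spread of several units in alpha^-1: the lines miss
print("MSSM:", mssm)               # all three close to 24: the lines (nearly) meet

# Seesaw arithmetic from the last bullet: m_nu ~ v^2 / M_GUT
v = 250.0                          # GeV, electroweak scale
m_nu_eV = v**2 / mu_GUT * 1e9      # convert GeV to eV
print(m_nu_eV)                     # milli-eV scale, in the seesaw ballpark
```

Even this crude version shows the qualitative point: the MSSM spectrum pulls the three lines together while the Standard Model spectrum leaves them several units of alpha^-1 apart.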

OK, now I want to discuss the impressive precision. Nima recommends the following paper by Ghilencea and Ross:

http://arxiv.org/abs/hep-ph/0102306

First, all such papers are based on calculations of one-loop diagrams and two-loop diagrams, with a reasonable estimate of the threshold corrections - these threshold corrections carry the information about the precise structure of the particles and their interactions near the GUT scale where the running slows down and the full GUT symmetry is getting restored.

The gauge coupling unification is clearly non-trivial. The usual parameters describing the unification are alpha_3 and sin^2(theta_W), both measured at the electroweak scale (at m_Z, more precisely). The former is the QCD running coupling, while the latter (the Weinberg angle) encodes the ratio of the two electroweak couplings - whose value at the GUT scale is completely fixed: sin^2(theta_W) equals 3/8 there. Look at page 4 of the paper cited above, namely

http://arxiv.org/PS_cache/hep-ph/pdf/0102/0102306.pdf

You will see that the unification only allows the parameters alpha_3 and sin^2(theta_W) to belong to a very thin vicinity of a hyperbola-like curve, and the observed values fit there even though the allowed area is just 0.2% - 2% of the a priori plausible region of the parameter space. The supersymmetric Standard Model passes where the Standard Model fails.

If you calculate the value of sin^2(theta_W) from unification and the known value of alpha_3, you will obtain a result that differs from the measured value by roughly one percent. Note that sin(theta_W), for example, would then differ by approximately half a percent, and so forth. Remember that the value of sin^2(theta_W) is roughly 0.234.

That's an amazing, better-than-one-percent accurate result, and with reasonable assumptions about the threshold corrections, one can get exact agreement with experiments. One may then face the problem that the appropriate threshold corrections typically predict e.g. a slightly faster proton decay than desired, so there is some potential for residual tension, but it certainly does not look like a disaster.

So what is the 10-15% error that Peter Woit (and Gordon Kane, in his TASI lectures) mentioned? It is the error of alpha_3 at m_Z, predicted by the unification, assuming the known value of sin^2(theta_W). Why are the errors parameterized in two different ways so different? Denote y=alpha_3 and x=sin^2(theta_W). A good toy model is to imagine that the relation between x,y is y=x^{-7} where 7 can be replaced by another large number. You then see that the relative error of y will be 7 times bigger than the relative error of x.
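The amplification in this toy model is easy to verify numerically. The sketch below checks that a small relative error in x = sin^2(theta_W) is magnified seven times in y = x^{-7}, and also that taking a square root (as in passing from sin^2(theta_W) to sin(theta_W)) halves the relative error:

```python
# Numerical check of the toy model y = x**-7: a small relative error in x
# produces a relative error in y that is amplified by a factor of 7.
x = 0.234                  # roughly sin^2(theta_W)
eps = 1e-6                 # a tiny relative shift in x
y = x ** -7
y_shifted = (x * (1 + eps)) ** -7
amplification = abs(y_shifted / y - 1) / eps
print(amplification)       # -> very close to 7

# The same first-order logic halves the error under a square root:
s = x ** 0.5               # sin(theta_W) from sin^2(theta_W)
s_shifted = (x * (1 + eps)) ** 0.5
halving = abs(s_shifted / s - 1) / eps
print(halving)             # -> very close to 0.5
```

This is just the chain rule: for y = x^n, the relative error of y is |n| times the relative error of x.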

It's not hard to see why alpha_3 at low energies depends so strongly on sin^2(theta_W). Look at the picture with the unification. You see that the inverse of alpha_3 at low energies is pretty small (QCD is slowly becoming strongly coupled), and therefore the same absolute error in the inverse of alpha_3 has a bigger relative effect at low energies than at high energies. Also, alpha_3 runs faster than the other two couplings, so a small change of sin^2(theta_W) changes the unification scale by an amount that has a rather big effect on alpha_3.

The criterion by Ghilencea and Ross seems like a reasonable tool to resolve these contradictory numerical values of the errors, and the probability that the agreement works by coincidence is, according to their counting, roughly one percent (by measuring the area in the two-dimensional parameter space).
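The logic of such an area counting can be illustrated with a toy Monte Carlo. Everything below is made up for illustration - the power-law "unification curve", the a priori ranges, and the observed point are hypothetical numbers, not the actual two-loop relation or the Ghilencea-Ross measure:

```python
# Toy Monte Carlo illustrating the area-counting logic with invented numbers.
import random
random.seed(0)

def y_pred(x):
    # Hypothetical curve relating x = sin^2(theta_W) to y = alpha_3,
    # using the x**-7 toy scaling from the discussion above.
    return 0.118 * (x / 0.2312) ** -7

x_obs, y_obs = 0.2312, 0.119        # pretend observation, slightly off the curve
tol = abs(y_obs - y_pred(x_obs))    # how badly the observed point misses

N = 100_000
hits = 0
for _ in range(N):
    x = random.uniform(0.20, 0.30)  # a priori plausible range for sin^2(theta_W)
    y = random.uniform(0.05, 0.20)  # a priori plausible range for alpha_3
    if abs(y - y_pred(x)) <= tol:   # at least as close to the curve as observed
        hits += 1
print(hits / N)                     # a percent-level fraction with these toy numbers
```

The fraction of the a priori rectangle that agrees with the "prediction" at least as well as the observed point is the convention-independent quantity; that is what makes it a percent-level result regardless of whether you quote the error in x or in y.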

Peter Woit may have decided that he would never believe SUSY, but other physicists usually prefer to believe their eyes. Look at the picture, click it to zoom in, and judge whether gauge coupling unification is a serious hint, or a piece of shit, as Peter Woit tries to present it.

Lubos,

Sorry, this is unrelated to the topic at hand. I have a question, and here is my best attempt to formulate it.

String theory starts with embedding the worldsheet of a string in an N-dimensional space + time, quantizing, discovering unique anomaly cancellation conditions, etc. But perhaps we'd like to start with the string as fundamental, and space-time as somehow constructed from the string. Now, in the (1,1) dimensions of the worldsheet, what physically distinguishes the space dimension from the time dimension? Is it simply that we can impose periodic boundary conditions on one dimension but do not think of doing so on the other? If there is a microscopic arrow of time, should it not emerge from the string itself? Why should the signature of the emergent spacetime be (-1, +1, +1, ..., +1)? Why not (-1, -1, -1, ..., +1, +1, ..., +1)?

On the mathematical side: Is it true that any 2-surface with a metric signature of (-1, +1) can be embedded in a space of signature (-1, +1, +1, ..., +1)? Perhaps we need (-1, -1, +1, +1, ...)?

-Arun

Hi Arun!

Perturbative string theory *does* start with strings being the fundamental starting point, and spacetime (and locality in spacetime and target space effective theory) is a derived concept.

The worldsheet conformal field theory describing large dimensions is a theory of scalars (a nonlinear sigma model, if the spacetime is curved), and what distinguishes the time direction is that it is the scalar with the "wrong" sign of the kinetic term (the opposite of the others).

The Polyakov action for strings in a flat space background is, up to the overall normalization 1/(4 pi alpha'),

S = integral d^2 sigma eta^{mu nu} partial_a X_mu partial^a X_nu

You simply see that it splits into separate contributions from the different directions, and the time direction differs from the spatial ones by the sign of its term in the action.
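Written out with eta = diag(-1, +1, ..., +1), the split is explicit:

```latex
S \;\sim\; \int d^2\sigma \left( -\,\partial_a X^0 \,\partial^a X^0
      \;+\; \sum_{i=1}^{D-1} \partial_a X^i \,\partial^a X^i \right)
```

The X^0 term carries the opposite sign: this is exactly the "wrong-sign" scalar of the previous paragraph.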

One can analytically continue to different spacetime signatures. Although there is a Lorentz symmetry between space and time, there are subtle differences between them - for example, we don't want to have closed timelike curves even though spacelike ones are OK.

In a sense, string theory predicts the signature with one time. One time is the maximal number of time dimensions that leads to a ghost-free theory. "Ghosts" have two meanings; here I mean the bad ghosts, i.e. vectors in the Hilbert space with a negative norm (squared). The conformal symmetry on the worldsheet is able to "cancel" the set of negative-norm oscillators coming from a single time dimension (together with one associated spatial direction) - the same power as the gauge symmetry in spacetime, which kills two polarizations of the photon, leaving the two physical ones.

String theory with no time dimensions would not have "used" its gauge symmetry efficiently, and if you have more than one time dimension, you find negative probabilities.

Locally, you can embed any worldsheet with the right signature into a spacetime with sufficiently many space and time dimensions. Globally, there can be constraints - for example, a smooth surface embedded into a (k+1)-dimensional spacetime will never have closed timelike curves, even though a general 1+1-dimensional "manifold" can have them.

Yes, if you allow 2 time dimensions, then your statement may be correct globally, but it is irrelevant because the topological expansion based on a 1+1-dimensional worldsheet would be confusing anyway. We usually use the mixed signature of the worldsheet for actual calculations only in the light-cone gauge, where the topology is simply splitting and joining cylinders.

These things may seem more confusing than necessary. The path integral in string theory has a much more transparent topological behavior with a Euclidean worldsheet and Euclidean spacetime, which is what we prefer. It is analogous to field theory, where we also prefer the momentum integrals in Euclidean momentum space. In the Euclidean approach to string theory, we integrate over all Euclidean worldsheets embedded into the Euclidean spacetime, and they can indeed give you all possible 2D shapes - and vice versa, in a sense - even though the embedding into a spacetime of course carries more information than just the induced geometry, and all this information about the embedding is important physically.

Best

Lubos

Hi Lubos,

Thanks for clearing that up. I figured Witten must have some reason for quoting the 1% accuracy figure, even though the prediction for \alpha_3 was off by 10%.

I guess I'm still not completely convinced that the "1% accuracy" claim is more justifiable than the "10% accuracy" claim. As you point out, given a calculation of one quantity with a certain error, you can always reduce the error by looking at it as a calculation of another quantity related non-linearly to the first.

This calculation is certainly by far the strongest evidence for supersymmetric grand unification. Still, even if it were accurate to 1%, the other problems with the whole picture seem to me so severe that I'll be extremely surprised if superpartners show up at the LHC.

Peter

Hi Peter,

Yes, it is a matter of conventions how you parameterize the space of possible values, and the relative error depends on these conventions.

Nevertheless, the counting mentioned in that article sounds reasonable to me. You just take all the variables (alpha_3 and sin^2(theta_W) are enough here), define a more or less natural measure on that space, and then you count the percentage of the volume/area of that space covered by the points whose deviation from the theoretical value is at most as large as that of the observed point.

Using this counting, you will see that it is a 1% result.

Do you want to make a bet about SUSY? I've already bet 1000 dollars on SUSY being seen at the LHC. ;-)

All the best

Lubos

Lubos,

Digesting your answer, slowly...

So, "multi-fingered time" is ruled out in string theory because it would lead to negative-norm worldsheet states - that is neat!

Later,

-Arun

Arun--

http://arxiv.org/abs/hep-th/0008164

Well, Arun, be cautious before you accept everything that Itzhak Bars says about two times. It's potentially interesting stuff, but I don't think it is directly related to your question - which seemed to be focused on "real" string theory and its obvious generalizations to different signatures, while Itzhak seems to talk about slightly different theories.

Arun,

One funny thing about 2D metrics of signature (-,+): the 2-sphere doesn't admit one! (If it did, we could use it to construct a nowhere zero vector field on the 2-sphere, which we know doesn't exist.)
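The obstruction can in fact be stated precisely (a standard fact, quoted here without proof): a closed surface admits a Lorentzian metric if and only if it admits a nowhere-zero line field, which happens exactly when its Euler characteristic vanishes. So:

```latex
\chi(S^2) = 2 \neq 0 \;\Rightarrow\; \text{no } (-,+) \text{ metric on the 2-sphere},
\qquad
\chi(T^2) = 0 \;\Rightarrow\; \text{the torus does admit one.}
```

The torus indeed carries the flat Lorentzian metric familiar from the closed-string worldsheet cylinder with periodic time.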

What this says about string theory calculations for genus zero, I am not so sure. Many things are done with Euclidean signature, and then there is some hand waving and citing Mr. Wick's name...

arkadas ozakin