Vijay Kumar and Wati Taylor wrote a new paper,

*String universality in six dimensions*, in which they argue that every consistent low-energy supergravity action (or set of equations of motion) in six dimensions is realized (and UV-completed) as a stringy compactification.

Although the stringy constraints may look very different from the field-theoretical constraints, especially the anomaly cancellation conditions (with the cute number 243 that you will see again if you read the paper), it turns out that they are mostly equivalent. String theory reproduces all the constraints of field theory and it may add additional ones - namely those that exclude the "swampland".
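For concreteness, the most famous of these field-theoretical constraints is the cancellation of the irreducible gravitational anomaly, which relates the numbers H, V, T of hyper-, vector-, and tensor multiplets (a standard fact about 6D (1,0) supergravity, not a new result of the paper):

```latex
H - V + 29T = 273
```

For T = 1, the case of the perturbative heterotic string, this reduces to H - V = 244.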

Except that the six-dimensional setup, they claim, is an example where the set of these additional constraints may be empty. If true, that would mean that without loss of generality, you may always assume that a six-dimensional supergravitational action is actually part of a stringy compactification: string/M-theory is universal, in this sense.

**Discrete classification of vacua only**

Now, one should appreciate the limitations of this conclusion. They're only talking about the strict infrared limit - the massless fields and their interactions. So the supergravitational effective theories they consider can't possibly contain any information about the spectrum of massive states, e.g. the continuous values of the masses themselves. The latter - including excited strings, branes, and black hole microstates - inevitably carry the stringy/M signature.

Also, the case of six dimensions is special and differs from the four-dimensional theories that are directly relevant for phenomenology. Six dimensions is of the form D=4k+2, where gravitational anomalies can appear, so there are many potential types of anomalies, and the field-theoretical consistency constraints are consequently very strong.
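Recall that in D dimensions the chiral anomalies are summarized by a (D+2)-form anomaly polynomial; in six dimensions this is an 8-form, and the Green-Schwarz mechanism requires it to factorize. Schematically (the coefficients alpha_i are model-dependent numbers):

```latex
I_8 \;\propto\; \left(\operatorname{tr} R^2 - \sum_i \alpha_i \operatorname{tr} F_i^2\right)
\left(\operatorname{tr} R^2 - \sum_i \tilde\alpha_i \operatorname{tr} F_i^2\right)
```

It is the need for this factorization, on top of the cancellation of the irreducible terms, that makes the 6D constraints so restrictive.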

I've mentioned that the "field theories" contain no information about the massive states and their physics. But in six dimensions, it seems that there are really no "dimensionless couplings", the additional continuous parameters that we are familiar with from four dimensions, either. For example, the counterpart of the scale-invariant "lambda phi to the fourth" theory in four dimensions is "lambda phi cubed" in six dimensions. But such a theory is unstable because the cubic potential is unbounded from below; it is also incompatible with supersymmetry (and has other problems).
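The underlying dimensional analysis is a one-line check: a scalar field in D dimensions has mass dimension (D-2)/2, so the couplings of the quartic and cubic interactions have dimensions

```latex
[\lambda_{\phi^4}] = D - 4\cdot\tfrac{D-2}{2} = 4 - D, \qquad
[\lambda_{\phi^3}] = D - 3\cdot\tfrac{D-2}{2} = \tfrac{6-D}{2},
```

which vanish exactly at D = 4 and D = 6, respectively.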

In fact, the non-gravitational six-dimensional field theories of the (2,0) type are determined only by discrete data, too. Recall that the N=4 d=4 gauge coupling arises from the shape of a two-torus on which one compactifies the (2,0) theory.
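Concretely, when the (2,0) theory is compactified on a two-torus with complex structure parameter tau, the resulting N=4 gauge theory has the complexified coupling

```latex
\tau \;=\; \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}
```

identified with the shape of the torus, so the SL(2,Z) modular group of the torus becomes the S-duality group of the gauge theory: a continuous coupling arises from geometry, not from a continuous parameter of the six-dimensional theory itself.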

In six dimensions, the minimum supersymmetry is (1,0), carrying one chiral spinor with 8 real components, i.e. 8 supercharges. That's enough for the moduli spaces to be exact - both in field theory and string theory. The moduli give a continuous degeneracy to each of the qualitatively "different models".

That's also the source of many differences between the 6D case and the 4D case. In the latter case, when we're doing phenomenology, we care about stabilized vacua in which the values of the scalars are fixed by a potential - a potential that is often forced to vanish when there are 8 supercharges, e.g. in six dimensions.

**Differences between 4D and 6D**

So the "countable" sets of stringy vacua can actually match the countable classes of six-dimensional effective field theories that are classified only by the matter content and other discrete or "qualitative" data. In my opinion, this makes it impossible to generalize such conclusions to four dimensions because in four dimensions, the anomaly constraints are much weaker and most of the interesting physics involves particles and fields that are massive, at least a little bit.

The countable set of stringy vacua clearly allows at most a countable set of electromagnetic U(1) fine-structure constants, among other parameters. So one surely can't cover the whole continuous parameter spaces of four-dimensional low-energy field theories.

The strongest variation of the Kumar-Taylor conjecture would therefore have to talk about a "dense subset", and it is very questionable how "dense" the subset of the parameter space would have to be (how dense it actually is in reality, and how you would even define the density). If the density is low, the "quantitative" differences in the coupling constants may begin to look "qualitative", after all. And I feel that there's no real, qualitative difference between "quantitative differences" and "qualitative differences" between four-dimensional effective field theories. ;-)

Also, to make such a weakened conjecture relevant for phenomenology, it would have to include some light fields in the effective field theory - because the strictly massless limit of the theory of our Universe only contains gravitons and photons, while other particles are massive. But it would be very subtle to separate the "light" particles from the more general "massive" ones: there is no sharp boundary between them. So I doubt that there is any simple yet robust extension of their conjecture to the four-dimensional case.

**Predictivity in general**

Kumar and Taylor stress that whether or not a candidate theory of quantum gravity (manifested primarily at the Planck scale) implies new predictions for low-energy physics has absolutely no impact on the probability that this candidate theory is valid. Well, no doubt about that. We or someone else may "wish" to derive new predictions for other contexts, besides the regimes where the new dynamics obviously manifests itself as new effects of order 100% (i.e. besides the Planck scale), but this attitude may remain wishful thinking, not a criterion of validity.

(Except that particular technical work, e.g. in F-theory phenomenology, shows that it is not just wishful thinking but a reality. The recent arguments that the realistic vacua come from an E_8 gauge group are a great example.)

But I think that there is a subtle linguistic or logical point at which the authors (K+T) are fooling themselves when they interpret their findings (in six dimensions, and morally extended to four dimensions without much evidence) as implying that "string theory" has no consequences for low-energy physics. What they have actually collected is some evidence supporting the statement that, using the jargon of Hartle and Srednicki, the triple

{string theory, a probability distribution for the vacua, a xerographic distribution}

implies nothing about the low-energy physics. However, in their sloppy jargon, they use the term "string theory" for the whole triple above. Needless to say, they mostly assume the anthropic probability distribution. It's equivalent to the assumption that all vacua of string theory must be considered equally "real" and "possible" - which includes the assumption that no new vacuum selection mechanism will be found in the future.

**Unwanted predictions that appear, anyway**

In fact, their omission is worse than that. If the number of vacua is very large, and it arguably is, even the (unmotivated) "democratic" probability distribution may lead to "peaks" in the low-energy parameter space, analogous to those that you encounter when you use the saddle-point evaluation. So even if you start with distributions that seemingly imply no predictions, the large number of vacua is able to make some predictions, after all.
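This mechanism can be sketched with a toy model (mine, not taken from the paper): label the vacua by hypothetical flux integers (a, b), weight each vacuum equally, and look at the induced distribution of a derived "coupling" a/(a+b):

```python
# Toy model (not from the Kumar-Taylor paper): a flat, "democratic"
# weighting over many discrete vacua still produces peaks in a derived
# low-energy parameter, because the number of vacua mapping to a given
# parameter value varies.
from collections import Counter

N = 200  # hypothetical number of allowed flux quanta per cycle
counts = Counter()
for a in range(1, N + 1):
    for b in range(1, N + 1):
        lam = a / (a + b)           # toy "coupling" of the vacuum (a, b)
        counts[round(lam, 1)] += 1  # coarse-grain into bins of width 0.1

# Each vacuum carries equal weight, yet the induced distribution over
# the coupling is peaked around 0.5 rather than flat:
print(counts[0.5] > counts[0.1], counts[0.5] > counts[0.9])  # True True
```

The peak plays the role of the saddle-point-like concentration mentioned above: a prediction emerges even though no individual vacuum was preferred.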

They realize this "problem" - although one would usually view this observation as good news. But it kind of doesn't fit their desired, preconceived conclusion that the low-energy physics should be unaffected by Planck-scale considerations, ;-) so they want to "overlook" the mechanism from the previous paragraph. In other words, they're not really using an a priori well-defined anthropic probability distribution for the vacua. They're modifying their distribution on the fly, i.e. a posteriori, and their preferred distribution is one that is compatible with their desired conclusion - i.e. assumption - that the Planck-scale physics won't influence the low-energy physics. :-)

That's why their conclusions are pretty much vacuous for the vacuum-selection discussion - they only "deduce" as much as they assume from the beginning. The authors are unfortunately trying to "hide" some of their assumptions. The "actual" probability distribution for the landscape, and the xerographic distribution, is not quite known at this moment - yet it is needed for conclusions of the type that Kumar and Taylor want to make. Because the conclusions depend on the distributions, Kumar and Taylor simply can't establish such "decoupling" conclusions before the actual distribution is known - unless they're making a logical mistake, which they probably are.

And that's the memo.

**Bonus: a nice string field theory paper**

The first hep-th paper today is about bosonic string field theory. Martin Schnabl and Ted Erler construct the closed string vacuum solution of bosonic string field theory in a new gauge, the (nonlinear Erler-Schnabl) "dressed B_0 gauge", which seems to lead to much shorter expressions and verifications of Sen's conjectures about tachyon condensation than the original Schnabl-gauge solution.

They explicitly construct the transformation proving the equivalence of the solutions, too.
