Today, I consider the first paper by Jacob Bourjaily, Nima's student at Princeton.

[Photo: the National Academy of Sciences (yes, I took the picture).]

It's natural to discuss this topic again.
Why local models
Because gravity is so weak, i.e. because the Planck energy scale is so high, it makes sense to consider the non-gravitational forces in our world to be approximately decoupled from gravity. If this decoupling has a simple interpretation within string theory, the non-gravitational forces and matter must arise from a local segment of the compactified geometry. This means that the vicinity of the relevant locus of the compactification may be approximated by a non-compact geometry.
In order for the visible sector not to be decomposed into decoupled pieces, the whole sector must arise from a single singular locus. In M-theory, we consider real 7-dimensional manifolds at each point of the visible space. In F-theory, we study real 8-dimensional (complex 4-dimensional) manifolds at each point: in this counting, the two real dimensions of the toroidal fiber are included.
The gauge group always arises from real co-dimension 4 loci (think about the standard ADE singularities); in the F-theory case, two of these transverse dimensions belong to the toroidal fiber so the co-dimension is 2 on the base space (recall Kodaira's classification of singular fibers). This counting means that the gauge group lives on real 3-dimensional (M-theory) or real 4-dimensional (F-theory) cycles in the compactified geometry; in the F-theory case, imagine a gauge multiplet carried by a del Pezzo surface.
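The codimension counting in the two paragraphs above can be spelled out explicitly; the cycle names below are just illustrative labels, and the conventions (real dimensions, fiber directions counted in the total space) are those of the text:

```latex
% Gauge groups arise on loci of real co-dimension 4 (ADE singularities).
% M-theory: compactification manifold $X_7$ with $\dim_{\mathbb R} X_7 = 7$,
%   so the gauge cycle $\Sigma_M$ has dimension $7 - 4 = 3$ (a three-cycle).
% F-theory: elliptically fibered $X_8$ over a base $B$ with
%   $\dim_{\mathbb R} B = 6$; two of the four transverse directions lie
%   along the $T^2$ fiber, so the co-dimension seen in the base is
%   $4 - 2 = 2$, and the gauge cycle $S_F$ is a four-dimensional surface
%   (e.g. a del Pezzo).
\dim_{\mathbb R}\Sigma_M = 7 - 4 = 3, \qquad
\dim_{\mathbb R}S_F = 6 - (4 - 2) = 4.
```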
Bourjaily (see his scary CV) argues that in both cases, E_8 is the only possible "master" gauge group - or singularity type - that is powerful enough to incorporate the known forces and matter. He claims that the whole matter spectrum arises from the adjoint representation of the "large" group by a decomposition under a subgroup, a statement that could be technically wrong.
He explains that the light chiral matter imposes strong constraints on the geometry, which in turn implies nontrivial constraints on the matter spectrum, especially in the case of M-theory, where one N=1 chiral multiplet has to arise from each real co-dimension 7 point in the geometry. In F-theory, one gets full N=2 hypermultiplets from the co-dimension 6 places where the matter fields live (they're therefore massive, unless the right fluxes change this conclusion), and one is freer to choose the multiplicities, etc.
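The same counting convention (co-dimension measured in the total space, as in the text) applies to these matter loci; a brief sketch:

```latex
% M-theory: matter at real co-dimension 7 loci of $X_7$, i.e.
%   $\dim = 7 - 7 = 0$: isolated points, each carrying one $N=1$
%   chiral multiplet.
% F-theory: matter at real co-dimension 6 loci of $X_8$, i.e.
%   $\dim = 8 - 6 = 2$: real two-dimensional matter curves
%   (complex curves in the base), carrying $N=2$ hypermultiplets.
\dim_{\mathbb R} P_M = 7 - 7 = 0, \qquad
\dim_{\mathbb R} C_F = 8 - 6 = 2.
```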
He also explains that many couplings tend to be naturally zero in M-theory and F-theory, even though you would think that there is no (symmetry or related) reason for such a vanishing in field theory. M-theory and F-theory also give you new methods of "non-unification" that nevertheless link the properties of different sectors of matter. These models naturally incorporate a heavy top quark and many other things.
I think that this guy is extremely smart, creative, and broad. It is very conceivable (and likely) that his knowledge of algebraic geometry is less comprehensive than the knowledge of the big shots in this subfield - the AGTs (algebraic geometry technocrats), a group that includes many ex-students of Cumrun and ex-students of other senior people. ;-)
On the other hand, I am absolutely convinced that this guy sees the big picture and the links between geometry and physics much more clearly than most of the AGTs do (but I am certainly not including Cumrun himself among the AGTs at this point!). So the AGTs should do their best to organize their ideas about the model building according to Bourjaily's template because he's simply a more intuitive, organized thinker who doesn't get easily distracted by irrelevant details. At the same time, he should probably study algebraic geometry more systematically. So should I. ;-)
I am sure that many AGTs think that much of the nice stuff that Bourjaily is writing is mere marketing. But they're mostly wrong. This "marketing" is actually the basic framework that has to exist in physics. What many AGTs are doing is working out some details in very local, relatively small pieces of the framework. That's important, but by itself it is clearly not sufficient to constitute this part of physics research.
Sociology: anthropic people
Bourjaily's papers also exhibit a striking contrast with the anthropic papers. Bourjaily is still applying "physics inference" that has been so crucial in all major, conceptual developments of science. He observes (or "induces") some qualitative features of the real world and tries to guess which qualitative features of the underlying (stringy) description are likely to be correlated with the observed patterns.
The models that have the right qualitative properties are subsequently investigated and classified in detail, leading to predictions and new possibilities. What's important is that the research is concentrated under the lamppost - namely the lamppost of rational, qualitative arguments. One is not searching a haystack, trying to find a needle by chance; one searches mostly in the places where the concentration of promising models is high.
As our knowledge gets more extensive and more accurate, the light from the lamppost is getting ever more focused. We know what we're doing increasingly accurately and the "measure" determining where we are searching is getting increasingly non-uniform.
What the anthropic people apparently like is the idea that a random (according to a quasi-egalitarian measure) vacuum in the landscape reproduces the observed world "by chance". And they think that their "chance" is higher if they have many candidate vacua. They clearly disagree with common sense: common sense (and Bayesian inference) implies that if a model agrees with the observations just "by chance", it probably means that the model is wrong. ;-)
A model is only promising if it agrees with the aspects of observations that are likely to be more than just a coincidence!
So having many candidate vacua cannot possibly increase their probability of being correct. In The Daily Show segment about the LHC, the LHC alarmist Walter Wagner calculated the probability of the world's destruction by the LHC to be 50% because there are two possible answers, Yes and No, so the probability of each had to be 1/2 = 50%. Even John Oliver, the journalist, was able to guess that this is probably not how probability works. ;-)
However, Wagner's is basically the same understanding of "probability" that the anthropic people are implicitly or explicitly using all the time. It's breathtakingly stupid, too. The number of possible answers has nothing to do with their overall probability. Sometimes, a higher number of detailed realizations of an answer may increase the probability of that answer. In other cases, it's the other way around: having too many choices implies that none of them is right. If you're searching for the king in a country, he's likely to be the guy who looks different from 10 million other people rather than one of those 10 million people who look alike! In some cases, more is definitely less.
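The contrast between Wagner-style counting and actual Bayesian updating can be made concrete with a toy calculation. This is only a sketch: the `posterior` helper and all the numbers are illustrative assumptions, not physics.

```python
# Toy Bayesian comparison: the number of hypotheses alone does not
# determine their probability; the likelihoods (how well each hypothesis
# fits the evidence) do the real work.

def posterior(priors, likelihoods):
    """Bayes' theorem: normalize prior * likelihood over all hypotheses."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Wagner-style reasoning: two possible answers ("destroyed", "safe"),
# so each gets 1/2 -- this treats the likelihoods as equal, ignoring
# the evidence entirely.
wagner = posterior([0.5, 0.5], [1.0, 1.0])
print(wagner)  # [0.5, 0.5]

# Proper reasoning: the prior may be even, but the evidence (here an
# assumed, purely illustrative likelihood ratio) overwhelmingly favors
# the second answer, so the posterior does too.
proper = posterior([0.5, 0.5], [1e-9, 1.0])
print(proper)
```

The point of the sketch: counting the candidates only fixes the (possibly uniform) prior; a hypothesis becomes probable by fitting the observations, which is exactly the "physics inference" argued for above.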
A rational approach to determining which answers are correct must obviously care about the detailed physical questions - qualitative features of the spectrum, exact and approximate symmetries, the smallness or largeness of individual couplings, and the correlations between all these features, among other things. Whoever is trying to find the right theory or the right vacuum while paying no attention to these real dynamical issues is clearly not doing physics properly.
And that's the memo.