Jacques Distler, our blogging string-theoretical friend, just spoke about

- The fun with random polynomials

He generalized the analysis to the case of arbitrary supersymmetric renormalizable (quartic) potentials for the "N" fields that play the role of the redundant anthropic superstructure. Recall that in the picture of Nima, Savas, and Shamit, there are "2^N" vacua because the quartic potential for each of the "N" scalar fields has two minima. For "N=400" or so, this is a huge number of vacua. Some of them will have realistic (i.e. very small) values of the cosmological constant and the Higgs mass, but the observation of Nima et al. is that under certain assumptions, all other parameters may have nearly constant values across this "friendly neighborhood".
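A toy numerical sketch of this counting (my own illustration, not code from the talk): if the N fields decouple and each has a double-well quartic potential, every field independently sits in one of two minima, so the vacua are in one-to-one correspondence with sign choices and there are 2^N of them.

```python
from itertools import product

# Toy model (my own illustration): each of N decoupled scalar fields has a
# double-well quartic potential V_i(phi) = lambda_i * (phi**2 - v_i**2)**2,
# with two minima at phi = +v_i and phi = -v_i. A vacuum of the full theory
# picks one minimum per field, so the number of vacua is 2**N.

def count_vacua(N):
    """Enumerate the sign choices (+v_i or -v_i) for each of the N fields."""
    return sum(1 for _ in product((+1, -1), repeat=N))

for N in (1, 3, 10):
    assert count_vacua(N) == 2**N

# For N = 400 the count 2**400 is astronomically large, which is why one
# resorts to statistical arguments about the distribution of vacua.
print(count_vacua(10))  # 1024
```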

Jacques considered generic cubic superpotentials of "N" chiral superfields that are constrained to preserve a "Z_4" R-symmetry under which the fields are odd, and therefore the superpotential must be an odd function. Renormalizability implies that there are linear and cubic terms only. He fixed the "GL(N,C)" symmetry in such a way that his model only differs from the superpotential of Nima et al. by an extra trilinear term

- sum_{i < j < k} b_{ijk} phi^i phi^j phi^k

You essentially know how to do the counting if you know that the coefficients of a quadratic equation are (up to sign) the sum and the product of its roots, and that similar rules apply to higher-degree polynomials, even polynomials of many variables. It requires some neat linear algebra and characteristic polynomials.
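For concreteness, here is the quadratic case of the root-coefficient rules just mentioned (Vieta's formulas), checked numerically in a short sketch of my own:

```python
import math

# Vieta's formulas for a monic quadratic x**2 + b*x + c = (x - r1)(x - r2):
# the coefficients are b = -(r1 + r2) and c = r1 * r2. Analogous
# symmetric-function identities hold for higher-degree polynomials.
r1, r2 = 3.0, -5.0
b, c = -(r1 + r2), r1 * r2  # b = 2.0, c = -15.0

# Recover the roots with the quadratic formula and compare.
disc = math.sqrt(b * b - 4 * c)
recovered = sorted(((-b + disc) / 2, (-b - disc) / 2))
assert recovered == sorted((r1, r2))
print(b, c)  # 2.0 -15.0
```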

This was the less controversial - the mathematical - part of the talk. The more controversial part was the physical interpretation. Jacques intended to avoid the anthropic principle, but he could not, simply because he was talking about "generic values of b_{ijk}" for which some conclusions apply, and so forth. This very fact essentially separated the room into two political parties of theoretical physics, defined - more or less - by their relation to the anthropic principle:

- Jacques and Nima defended the straightforward and somewhat ad hoc procedures.
- The rest of us who participated in the discussions - especially Cumrun Vafa and Nati Seiberg, who is visiting us because of the Sidneyfest on Friday and Saturday - were dissatisfied with the vague anthropic rules of the game.

The anthropic rules of the game, as presented, involve two steps:

- First, one talks about various distributions in the "b" space - where "b" are the coefficients of the trilinear terms in the superpotential.
- Second, one decides on some values of "b" and talks about the "landscape" of different values of the "phi's", and about the distribution in this landscape assuming fixed values of "b".

Cumrun complained about the vague and arbitrary sense in which the word "generic" was used in the sentence "generically, the qualitative conclusions of Nima et al. are not changed". He argued, and I agreed with him, that there are infinitely many "measures" that can decide which choices of the couplings "b" are generic and which are not. For example, Cumrun's "generic" choice would be to pick "b" in such a way that the resulting superpotentials "W" are distributed according to any distribution we like. We believe that both of these choices - the naive one in the "b" space and Cumrun's alternative - are equally (un)justified.

Moreover, I was trying to convince others that for a "naturally generic choice of b", one will generically be very *close* to one of the bad regions where the approximation breaks down and where the "b" imply a very different picture from the case of Nima et al. (where all "b" equal zero). More concretely, the statement that the other couplings have a small variance will fail. It's simply because there are roughly one million different coefficients "b" in Jacques' superpotential, and it is more or less guaranteed that at least one of them will fall in some very special interval whose width is 1/100,000 of the allowed interval for the values of any "b". I think that Nati was trying to convey a similar point, in a sense.
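A back-of-the-envelope check of this counting argument (my own sketch; 10^6 coefficients and a relative width of 1/100,000 are the rough figures quoted above):

```python
import math

# With M ~ 10**6 independent coefficients b, each (say) uniformly
# distributed, the chance that at least one lands in a "special" interval
# of relative width w = 1/100000 is
#     1 - (1 - w)**M  ~  1 - exp(-M*w)  =  1 - exp(-10),
# i.e. essentially certain, as claimed above.
M = 10**6
w = 1.0 / 100_000
p_at_least_one = 1.0 - (1.0 - w)**M

assert p_at_least_one > 0.9999                             # near certainty
assert abs(p_at_least_one - (1.0 - math.exp(-M * w))) < 1e-4  # Poisson limit
print(p_at_least_one)
```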

The simplified point of the anti-anthropic people is that there exists a huge number of ad hoc procedures with different outcomes, and one should use physical, not sociological, principles to select the right rules. Of course, according to Cumrun, myself, and probably others, the preferred physical principles that decide which choices are generic and natural should be based on the actual dynamics of our theory itself, as exemplified by the Hartle-Hawking wave function in Cumrun+Hiroshi+Erik's paper about the entropic principle. They should not be based on some extra external (or politically correct) assumptions about "uniformity" or "democracy" of something.

Once again, the viewpoint that I share with Cumrun is that no arbitrary assumptions about the selection etc. should be added to our theory. If we know what the dynamics of our theory is - something equivalent to its action - all physically meaningful predictions should be obtained simply by an appropriate mathematical manipulation of this action (or whatever replaces it). Dividing fields and constants into a 1st and a 2nd category, and assuming various randomly chosen and unjustified probability distributions for these variables, is simply not a scientifically satisfactory approach.

## snail feedback (4):

I agree absolutely with Lubos that the OVV procedure is the way to go. It's notable that JD's fear and loathing of the wavefunction of the Universe has led him to this pass. It would be great if you could write in more detail about the OVV stuff.

Lubos:

You don't get it. Surely I know the Nastase paper was in January. But who would care about a paper on the arXiv? 99% of the publications on the arXiv are just junk. The story becomes a little bit more interesting once it spills over to the public media, like Slashdot and the BBC.

Things are always a little bit more interesting once they stir some disturbance in the public media. Whether they are legitimate or not is another story.

Dimopoulos gave a good talk here at Caltech some time ago in which he discussed the landscape approach to fundamental physics. Allow me to paraphrase some of his remarks: The fact that we live on a planet that has just the right properties to allow complicated life to develop is not so surprising if we know that there are many planets out there. The probability of some of them having the right properties to sustain life is not vanishing, and of course if we are here it's because we live on one of them. We shouldn't try to explain the specific properties of our planet in terms of fundamental laws, because the only thing special about our planet is that we happen to live on it. Dimopoulos recalled how Giordano Bruno in the 16th century very controversially declared that there was not one Earth but rather innumerable Earths orbiting innumerable suns. At the time this was just a bald assertion, but eventually telescopes revealed that those suns and planets did exist.

Dimopoulos said that it's not unreasonable to think that maybe the fundamental laws of physics produce innumerable universes with different physical properties, and that we shouldn't worry so much about explaining the specific properties of our own universe (like the size of the cosmological constant), since we just happen to live in a universe where those properties are such that we were able to exist in the first place. The problem, he emphasized, is that we are at the Giordano Bruno stage where this is just an assertion: there is no equivalent of the telescope.

I've been thinking about the value of anthropic arguments for a while, ever since Steve Hsu, Michael Graesser, Mark Wise and I wrote a critique of Weinberg's anthropic explanation of the size of the cosmological constant (hep-th/0407174). I think it's not fruitless to think anthropically, as long as we keep in mind that we have no telescope, and that we don't have, as yet, any truly compelling argument that the fundamental laws of physics predict innumerable universes (a "landscape").

In our paper we explained that Weinberg's argument works well if only the cosmological constant varies between universes. If other cosmologically relevant parameters, like the parameters of the inflaton potential, also vary, then our observed universe looks anthropically unlikely, because life would still be possible if we changed several parameters by a lot, in such a way that galaxy formation remains possible. That is, our own universe would not be typical of universes where life is possible.

The paper by Arkani-Hamed, Dimopoulos, and Kachru that you mention tries to deal with this criticism by giving a toy model of a landscape in which only the cosmological constant and the Higgs mass vary significantly between universes. Again, I don't think that this is a fruitless thing to think about, because it goes to the problem of whether anthropic explanations for things like the cosmological constant could work even in principle. But until we have good reason to believe that something like that landscape is predicted by fundamental physics (or, in some way I can't imagine, is experimentally observed), we should remain a bit wary of taking this too seriously.

Of course, this leaves us with another philosophical issue: Why do we expect our universe to be typical of those in which life is possible? (Recall that what our argument in 0407174 boiled down to was that if things beside the cosmological constant vary over the landscape, then our own universe is not typical of anthropically-allowed universes). It’s like drawing one poker hand without knowing what cards are in the deck and concluding that what we got is typical. Once again the problem is having a single data point: our own universe.

-Alejandro Jenkins (Caltech)

Lubos,

Thanks for the post - your description of talks at Harvard and especially discussions afterwards are a public service that I very much appreciate.

I wanted to address one of the "anti-anthropic" group's objections: the question of what kind of principle is used to decide about "genericity" and "naturality". It is useful to begin by defining our scenario and what we mean by these two phrases. We assume (as a phenomenological scenario) that there is a separation of scales between the physics that determines the couplings of a landscape sector and the physics of the sector itself. We do not attempt to understand the very high energy physics, but instead make the assumption that it results in some (unspecified) distribution of coupling constants.

In our models, the space of couplings is algebraic, and in this spirit our notion of "genericity" is an algebraic (not measure-theoretic!) one: a property is non-generic if it occurs in complex codimension one or greater in the space of couplings. Our philosophy is to try to use algebraic methods to understand as much as we can about the space of vacua, for ANY input distribution.

One might now object that the distribution could well be far from smooth or simple, and might even be so singular that the algebraic notion of genericity is a poor guide to questions of naturality. But that's the point - naturality is a different question, one that can be answered by carefully applying our machinery to a given input distribution. That is, our output has to do with naturality, and algebraic genericity is merely a tool towards that end.
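To make the algebraic notion of genericity concrete with a deliberately simple toy example of my own (real rather than complex, and much lower-dimensional than the actual coupling spaces discussed above): in the two-dimensional coupling space (b, c) of a quadratic x^2 + b x + c, the non-generic locus of degenerate roots is the codimension-one curve b^2 - 4c = 0, so couplings drawn from any continuous distribution essentially never hit it.

```python
import random

# Toy illustration of codimension-one (non-)genericity: a quadratic
# x**2 + b*x + c has a repeated root iff its discriminant b**2 - 4*c
# vanishes -- a codimension-one condition in the (b, c) coupling space.
# Couplings sampled from a continuous distribution land on that curve
# with probability zero.
random.seed(0)
degenerate = 0
for _ in range(10_000):
    b = random.uniform(-1.0, 1.0)
    c = random.uniform(-1.0, 1.0)
    if b * b - 4.0 * c == 0.0:  # exact degeneracy: a measure-zero event
        degenerate += 1

assert degenerate == 0
print(degenerate)  # 0
```

The point of the sketch is only that "generic" in the algebraic sense is distribution-independent: any continuous input measure avoids the degenerate locus.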

Now, we do go further and ask if there are features that are broadly independent of the nature of the input distributions - under certain very mild assumptions on the shape of the input distribution, we can use the limit theorems that arise for large numbers of random variables (following Nima and Savas) to make some fairly universal predictions. Of course, these "mild assumptions" may well be violated by our underlying physics, so these results are less solid.

Regardless, it is clear that as a string theorist, I would be much happier if I knew something about the underlying physical dynamics. Actually, as a baby step in this direction, Willy Fischler and I have recently been thinking about trying to apply Wheeler-DeWitt and Hartle-Hawking-like methods to these scenarios.

Uday
