Tuesday, January 11, 2005

Predictive landscapes

Tonight, a new paper by Nima, Savas, and Shamit - one that will definitely be very exciting for many of us - will appear.

They propose the following "compromise" between the predictive vacua and the anthropic unpredictive landscape:

The landscape is divided into "countries" (my word) which they call "friendly neighborhoods". In each country, the dimensionless constants such as the gauge couplings and the Yukawa couplings are effectively fixed, but the dimensionful parameters - namely the cosmological constant and the Higgs mass - take many different values and are subject to the anthropic selection.

You know, the cosmological constant has been a big problem in particle physics, and people are more willing to accept Weinberg's anthropic argument for this particular parameter. On the other hand, the hierarchy problem has been solved without the anthropic lack of principles, but it's still OK to view it as a problem. These two numbers - the C.C. and the gap between the Planck scale and the electroweak scale - can both be described as dimensionful parameters.

More quantitatively, they consider a large number "N" of fields, and the number of vacua grows exponentially with "N". On the other hand, the relative fluctuations of the dimensionless couplings go to zero, namely as "1/sqrt(N)" or even faster. They consider various "countries" that describe the Standard Model, the MSSM, and Split SUSY.
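To get a feeling for the scaling, here is a toy numerical illustration - mine, not their construction: each of the N landscape fields sits in one of two minima and contributes a random piece of order 1/N to a dimensionless coupling, so the number of vacua grows like 2^N while the central limit theorem makes the relative spread of the coupling shrink like 1/sqrt(N).

import numpy as np

# Toy model, not their construction: N landscape fields, each sitting in one of two
# minima, each contributing a random piece of order 1/N to a dimensionless coupling.
# The number of vacua grows like 2^N, while the spread of the coupling across vacua
# shrinks like 1/sqrt(N) (central limit theorem).
rng = np.random.default_rng(0)

for N in (10, 100, 1000):
    per_field = rng.uniform(0.8, 1.2, size=N) / N    # O(1/N) contribution of each field
    vacua = rng.integers(0, 2, size=(5000, N))       # a sample of vacua; the full set has 2^N elements
    coupling = 0.5 + vacua @ per_field               # value of the toy coupling in each sampled vacuum
    print(f"N = {N:4d}: relative spread = {coupling.std() / coupling.mean():.4f},"
          f" 1/sqrt(N) = {N ** -0.5:.4f}")

For N = 1000, the toy coupling is essentially frozen across an astronomically large set of vacua, which is the "friendly" behavior advertised for the dimensionless couplings.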

The anthropic part of their argument is constructed to have the following results:
  • the existence of atoms constrains the ratio of the QCD and the electroweak scale
  • because this ratio turns out to be quite small, as in the real world, it solves the hierarchy problem - once again, here it is solved by an anthropic argument
  • the mu and the doublet-triplet splitting problems are also claimed to be solved in this setup
  • another anthropic argument solves the hierarchy problem by requiring the existence of baryons and vacuum stability
The models predict new physics at a TeV, which includes a dark matter candidate. The simplest model predicts light Higgsinos, and the Standard Model couplings unify near 10^14 GeV.
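For orientation, here is the standard one-loop running of the gauge couplings with pure Standard Model content - approximate textbook inputs, not the calculation from their paper, whose extra light states (e.g. the Higgsinos) change the beta functions:

import numpy as np

# Rough illustration only: one-loop running of the inverse gauge couplings with pure
# Standard Model content.  The inputs at M_Z and the beta coefficients (with the usual
# GUT normalization of hypercharge) are approximate textbook values; the models in the
# paper add light states that change b_i and therefore move the meeting point.
M_Z = 91.19                                     # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])      # 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z (approximate)
b = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])  # one-loop SM beta coefficients

def alpha_inv(mu):
    # 1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i/(2*pi) * ln(mu/M_Z)
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

for exponent in (13, 14, 15, 16, 17):
    print(f"mu = 10^{exponent} GeV: 1/alpha_(1,2,3) =", np.round(alpha_inv(10.0 ** exponent), 1))

With SM-only content the pairwise crossings are spread over roughly 10^13-10^17 GeV; according to the paper, the light Higgsinos of the simplest model modify the running so that the couplings meet near 10^14 GeV.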

Do I understand it?

Not quite. I'm probably too slow, but the difference between the input and the output is not quite clear to me so far. In field theory, one could always have fine-tuned the parameters - especially those that are potentially problematic, namely the C.C. and the Higgs mass. These constants may be tuned in field theory - the rules allow it. Field theorists have always done so, at least before they tried to look for explanations of the small values of these parameters.

We may add a completely new decoupled sector - the landscape sector - that gives you some "microscopic feeling" for why the different values of these dimensionful parameters are scanned. But from a scientific viewpoint, we're not getting any new prediction beyond what we've inserted, and Occam's razor tells us that we should not add structures if they're not necessary.

We may separate these "nicely, reasonably behaving" parameters from the "bad, hierarchical, strange, dimensionful" parameters, but in field theory we can do it by hand anyway. Surely their mechanism is not meant as a solution to something in field theory where tuning is allowed. It is meant as a toy model for what can arise from a deeper theory - namely string theory.

But the relation of their picture to string theory seems even more confusing to me. They realize very well that the known ensembles of stringy vacua do not fit their definition of the "friendly neighborhood" - i.e. classes of vacua where the "nice dimensionless" parameters are more or less fixed, while the "bad dimensionful" parameters are scanned. They're right that it's probably possible to find classes of vacua which are "friendly". These classes have the advantage that the "nice" parameters can be predicted in them, while the "bad" parameters that normally come out incorrectly are not predicted. ;-)

But I think that the obvious question is then "Why should we be living in a friendly neighborhood?" Is it because Nature is nice and She wants us to be able to calculate? Did She deliberately place us in a special friendly class of vacua - where the things we seem to understand in 2005 are fixed, while the things that we don't understand in 2005 vary - so that we could eventually describe something about Her beauty? This would really be like the proverbial search for the keys only under the lamppost, I think. Well, sure, if we can't do better, it could still make sense to look under the lamppost, at least, with some chance of success.

Unless there is some other independent mechanism or reason to pick these privileged vacua - beyond the argument that they're nicer and allow us to calculate a little bit more - I think it is not rationally justified to favor them. Also, I don't quite understand how the specific models they propose differ from the usual minimal models of various types that are still compatible with the observations. Surely, whenever anyone constructed realistic models, she was always checking whether they're compatible with the parameters we see in the Universe, including the mechanisms that had to work in the early and later cosmology. If these things are checked, then - of course - one also satisfies the galactic, atomic, and even the anthropic principle: if all the constants have the right values, we're here.

They use the anthropic argument to replace the "minimality argument" - but I don't quite understand how the anthropic reasoning can ever tell you that things should be simple. Moreover, things in the real world are not as simple as possible. Life could probably exist without the third generation of fermions, for example. There may be essentially nothing beyond the Standard Model, or there may be a lot of new physics. In the past, the predictions that there was no new physics were often incorrect - for example, when people believed that neutrons, protons, electrons, and neutrinos were the only elementary particles.

Both options - no new physics as well as a lot of new physics - lead to viable theories that not only admit life, but that can even agree with the real Universe. I don't understand what the question "how much new physics is there" has to do with the anthropic reasoning, and what is the new justification that tries to twist the answer in some specific direction.

I suppose that the main point is that we should be using a different "measure" to decide which new models are natural and which are not - but I still don't understand how exactly this anthropic approach differs from the "classical" approach, in which models are constrained by the data that we know and understand well, left free in the parameters that we don't understand well, and chosen to be as simple as possible.

Also, I thought that the argument would imply a very strict separation between the dimensionful parameters and the dimensionless ones - the former being subject to the anthropic principle, the latter not - and that this separation would be respected. But in the end, they also note that the first family of fermions is rather light, and even though it is described by dimensionless parameters that should be predicted, they propose an environmental/anthropic explanation for the first family (whose Yukawa couplings are small) as well.

Isn't it then fair to say that we simply invoke the anthropic principles and mechanisms - or God - for all the things that we don't understand yet and that are unnaturally small, while expecting the other things to be subject to the old-fashioned rules of predictable physics? Isn't such an approach just a less transparent way to admit our ignorance?

One could generalize this type of thinking by allowing a continuous label "S" (strangeness) for each parameter. The higher "S" is, the more the parameter would vary in the neighborhood, and the more it would be subject to the anthropic selection "rules". Obviously, the things that we find unnatural - especially unnaturally small - would have a high value of "S". Isn't such a label equivalent to covering the "success stories" in gold and the "stories of failure" in fog? :-)

Happy end

I want to wrap this up with a happy ending. These guys are so smart that I actually believe that one of their models has a significant chance to be confirmed experimentally. However, I believe that the children of the 22nd century will learn the funny story of how Nima, Savas, and Shamit obtained their model by thinking about some weird anthropic ideas, much like Maxwell constructed his equations while he was thinking about the luminiferous aether. ;-) Yes, this statement of mine also implies that with my current understanding, I could not believe the anthropic reasoning even if one of these models turned out to be right.


snail feedback (26) :


reader Quantoken said...

Lubos wrote:

"The landscape is divided into "countries". In each country, the dimensionless constants such as the gauge couplings and the Yukawa couplings are effectively fixed, but the dimensionful parameters - namely the cosmological constant and the Higgs mass - take many different values and are subject to the anthropic selection."

How could you vary any of the dimensionful parameters at all, once all the dimesionless ones are fixed? You can't. Dimesionless parameters are all the physics there is. Once you take that away, what remains is just the arbitrariness of unit selections, which purely depends on random events in the evolution of human civilization, like exactly how much one kilogram is, and how much one meter is.

In another civilization, they will have completely different unit sets and all their dimensionful parameters will be different from ours. But I bet they will measure the same alpha, the dimensionless fine structure constant, as well as all other dimensionless parameters exactly the same as ours.

So dimensionless parameters are the only thing physically meaningful, and not affected by the bias of human civilization's random selection of unit sets.

You can go to Paris and chop half off that standard Kilogram, and then all of a sudden all our dimensionful parameters, like the numerical value of hbar, would change, but the physics of nature would not change, nor would any dimensionless physics parameters.

Quantoken


reader Lumo said...

Hi Quantoken, your point is OK, but you might have misunderstood the terminology a bit.

The dimensionful constants are those that affect the low energy physics a lot - the very relevant operators, in the technical sense. It includes the Higgs mass and especially the vacuum energy. You may choose your parameters in any way you want, but it still makes sense to ask what are the dimensions of the operators.
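To spell out the standard counting behind the jargon - this is textbook effective field theory, nothing specific to their paper: in four dimensions, the coefficient of the unit operator (the vacuum energy) has mass dimension 4, and the coefficient of |H|^2 (the Higgs mass squared) has mass dimension 2, while all the other couplings of the Standard Model are dimensionless or of negative dimension,

\mathcal{L} \supset \Lambda_{\rm vac}^4 \cdot 1 \;+\; m_H^2\,|H|^2 \;+\; (\hbox{dimensionless couplings}) \times (\hbox{dimension-4 operators}), \qquad [\Lambda_{\rm vac}^4]=4,\quad [m_H^2]=2.

These two super-renormalizable coefficients are the "dimensionful parameters" that are allowed to scan in the friendly neighborhoods.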


reader George said...

From Wikipedia ( http://en.wikipedia.org/wiki/Anthropic_principle )

"Proponents of the anthropic principle suggest that we live in a fine-tuned universe, i.e. a universe that appears to be "fine-tuned" to allow the existence of life as we know it. If any of the basic physical constants were different, then life as we know it would not be possible."

I think the most we can say is the existence of life and consciousness impose constraints on the physical laws that we find apply to our universe; not that some transcendental intelligence "fine-tuned" the parameters that otherwise could have had any values.

Physics should be consistent with, and ideally able to explain, consciousness, but clearly it's not there yet. Buddhism - and of course Hinduism and Taoism too - have been studying consciousness empirically for millennia (it's called meditation).

Buddhism, in for example the Surangama Sutra, seems (to me at least, and expressing it in modern terms) to be saying that life and consciousness are fields. There are even some fairly clear parallels to the Heisenberg uncertainty principle and (in the Avatamsaka Sutra) t-duality.

My interpretation of the Sutras is that the universe is integrated, of one piece, that "nothing exists by itself," as the Lotus Sutra puts it. Nothing irrational about that, in my opinion, as long as we filter out the sixth-century b.c. science!


reader Plato said...

The higher "S" is, the more the parameter would vary in the neighborhood, and the more it would be a subject to the anthropic selection "rules". Obviously, the things that we find unnatural - especially unnaturally small - would have a high value of "S". Is not such a label equivalent to covering the "success stories" by gold and the "stories of failure" by fog? :-)----------------------------------------

The color glass condensate might have been a high "S" to some without this physics:), but we know there is physics to it.

Now on Peter's site I ask a question about summing over all topologies... and ask that we focus on a specific area.

Such mathematical forays are obviously highly abstract and would seem to have an even higher "S" without the physics to orient the math. So would it be fair to say that S is only as valuable as the physics at which the math is pointed? :)

Hope this made sense.


reader Quantoken said...

Lubos said:
"The dimensionful constants are those that affect the low energy physics a lot - the very relevant operators, in the technical sense. It includes the Higgs mass and especially the vacuum energy. You may choose your parameters in any way you want, but it still makes sense to ask what are the dimensions of the operators."

I did not misunderstand the terminology. Except for measurements of dimensionless parameters, ANY dimentsionful parameter has to be measured against some sort of "ruler".

To measure the Higgs mass, you need to first pick your ruler, which could either be the standard Kilogram in Paris, which is arbitrary and non-physical, or a natural ruler, like the electron mass; then the Higgs mass will be expressed as a dimentionless parameter, the Higgs/electron mass ratio, times the electron mass.

You can express all particle masses as a dimensionless parameter times the electron mass. Then, once you fix all the dimensionless parameters, it is no longer physically meaningful to "change" the only dimensionful parameter left, the electron mass, since it really makes no difference whether the electron is heavier or lighter if all the dimensionless mass ratios have been fixed. Actually, a variation of the electron mass would be undetectable because it is its own ruler!!!

That's what I meant when I said that dimentionless parameters are all the physics there is. Once all the dimensionless parameters have been completely fixed, there is no longer any physical meaning in "varying" the remaining dimensionful parameters.

Quantoken


reader Anonymous said...

I agree with the sentiment of your last sentence, Lubos. It is dangerous to dismiss something worked out by people as smart as them. However, my discomfort level with their work is high. I find the observation that a scanning mechanism for only the super-renormalizable interactions could explain their smallness intriguing and I have no doubt that this is correct. But in order to *explain* things, a unique mechanism would have to be found that leads to scanning of superren. couplings but not the others. It is fair to say in the present paper this is not accomplished. Instead, they use words like "friendly" for regions in moduli space they like. I cannot help saying that this reminds me of loop quantum gravitists who exclude "inconvenient" configurations from their path integrals.

Best,
Dan


reader Lumo said...

Dear Dan,

we probably agree about most of this stuff. Initially I thought that there would be an independent, sound mechanism that makes the superrenormalizable terms vary while the others stay fixed. It does not look so now.

Nima explains that the whole point is extending Weinberg's reasoning about the C.C. - and one's position towards their approach directly reflects how seriously one takes Weinberg's argument. If Weinberg's reasoning is any good, then it must be true that only the C.C. is allowed to vary while the other things are kept fixed - this is how Weinberg was looking for acceptable values of the C.C. So they create a framework in which Weinberg's assumptions are part of the assumptions about the actual dynamics.

There are two basic attitudes. The first: Weinberg's argument is not sound because it only varies the C.C., while the natural thing would be to vary everything. The other attitude is that Weinberg is a king, and because he only varied the C.C., it must be very important in Nature that only the C.C. is varied in the anthropic thinking, and we end up in a friendly neighborhood. The "proof" that we are in a friendly neighborhood goes as follows: Weinberg is smart, and his argument would be unconvincing unless it's true that the other parameters don't vary. Because smart people like Weinberg can only produce convincing arguments, it must be true that the other parameters don't vary. ;-) It just looks like a sociological argument to me.

Because the success of the anthropic reasoning for the C.C. - and its relative failure for the other parameters - seems to be the only argument to separate the parameters, it looks like one is making as many assumptions about the (classes of) vacua that we want to study as are necessary for the argument to work. It does not quite seem to reduce the number of independent unknown features of the Universe.

All the best
Lubos


reader Quantoken said...

Peter said:
"The problem with the quantized superstring is not that it can't describe interesting physical degrees of freedom, but that it can describe almost anything. It's too ill-defined for anyone to be able to show it is inconsistent, but it is increasingly clear that it is VACUOUS ."

I find the last word very amusing. Thanks for the humor, but string theoreticians are over-qualified linguists when it comes to inventing new words :-)

ST doesn't just describe "almost" anything. It describes much, much more than everything in the world adds up to. That's exactly the problem. When the complexity of a theory allows it to describe more than what this universe can hold, it is no longer a physics theory, since physics strictly limits itself to the observables within this universe only.

I keep seeing STers sing the praises of how rich ST is in mathematical structures and how much beauty you can find within the theory, and they can't believe that a mathematical framework so rich in content could have nothing to do with the real world in the end.

What they don't realize is that this is exactly the problem: the theory is too rich and provides much more structure than what is needed to encompass the whole observable universe!!! It makes the whole thing "vacuous" when you realize just what an insignificantly small portion of the theory actually describes the universe :-)

One could propose a theory living in 137 dimentions; it would surely provide even richer structures than the current 11-D string theory. It would also be more "vacuous".

I think even a flat 4-D theory is a little too big for what the universe can hold, as we already see. A theory of flat, infinitely extending 4-dimentional spacetime with precise coordinates provides more structure and information than what the actual universe holds. The result is that we see a universe that is limited in size and has fuzzy coordinates (quantum effects). The richness of the universe is bounded, so it is described by the conditions on the 3-D boundary of the 4-D spacetime. That's the holographic view.

Quantoken


reader Lumo said...

Dear Quantoken,

I assure you that you cannot find any 137-dimensional theory that is as meaningful and consistent as M-theory. Especially if you still don't know how to spell the word "dimension".

String theory is a very sharp, well-defined, and unique mathematical structure, and it allows one to calculate most things within its framework quantitatively, and gives sharp conclusions about many things. For example, one can sharply say that your comments are silly.

Best
Lubos


reader Anonymous said...

Hi Lubos, thanks for your comments.

I probably should not give you any advice because clearly you are smart enough to get along without it, but why in the world have you stopped deleting quantoken's rubbish?!? ;))

Quite apart from the lack of quality of his comments, I have developed a dislike for guests who need to be told more than once that they are not welcome. Also, judging by the amount of littering in your blog, (s)he feels quite encouraged by any response of yours.

Best,
Dan


reader Lumo said...

Dear Dan,

keeping Quantoken's research ;-) is a way to minimize the tension and the censorship. If you can't live with one of his or her postings, you may order it to be deleted!

All the best
Lubos


reader Quantoken said...

Lubos said:
"I assure you that you cannot find any 137-dimensional theory that is comparably meaningful and consistent as M-theory. Especially if you still don't know how to spell the word "dimension".

Thanks for your teaching; I know it's an "s", not a "t", in "dimension". It's habitrary that I keep typing it wrong. But I guess it is OK for me to mis-spell a word if it's OK for string theoreticians to keep inventing new English words all the time, like vacua, moduli, landscapius, etc. :-) Next time you see me mis-spell it again, just say "Hacuuna Matata" :-)

Lumos also said:
"String theory is a very sharp, well-defined, and unique mathematical structure, and it allows one to calculate most things within its framework quantitatively, and gives sharp conclusions about many things."

Fine, give me just one very sharp and well-defined prediction about nature; then anyone can just do experiments to confirm or rule it out. I have not seen anything like that coming from the superstring camp so far. That is still OK, because developing a theory may just take some more time. The problem is that many people in your camp are now giving up any hope of doing so altogether, and still cling to the religious belief that string theory is right after all.

It would be OK if string theoreticians could come up with just one landscape, one unambiguous and well-defined description of nature, and say "this is it and I bet all my money on it". But that's not the case. You come up with 10^122 different interpretations. Whatever nature turns out to be, you can always find one set that fits it. So no one knows exactly which one of the 10^122 outfits is right. No one could even say your theory is wrong. But such a thing really does not have any predictive power and is useless as far as science is concerned.

Quantoken


reader torbjorn said...

On a tangent (or rather an orthogonal note): regarding scrap posts, there has to be a balance between humouring and tension, as Lumo says.

Some posters are anyway encouraged by fantasy alone, some by being able to post. Or by making some offtopic post if not able to contribute ontopic. ;-)

Anonymous him/herself makes quite a number of postings :-) but may find it as hilarious as I do that 'quantum' is 'kvanta' in Swedish, with 'kvant-' in compounds, and 'idiot' is 'tok', with '-en' for the definite form; 'quanttoken' is 'the quantum idiot'. (Slightly perturbed, of course. ;-)

Well, 'Hakuna matata'! (Swahili for 'No problem'.)


reader Quantoken said...

Torbjorn:

I do not think people should be picking on each other's name. But since you started picking on my name first, it becomes necessary for me to return the favor. Sorry, Lumos, I have to do this. If you feel you need to delete such messages, then for fairness delete his also, or else keep both.

First, Torbjorn, I do not speak Swedish. And it is one of the least spoken language on this planet. Second, do you know what your name sounds like in the language most widely spoken by the most people in this world? (No, not English) Tor-Bjorn would sound something like Head(Brain)-Flat. It's to-bien, i.e., flat brain, or Tofu brain.

Take it easy, you are not flat brain after all. Hakuna Matata!

Quantoken


reader Anonymous said...

Hi Lubos,

I was looking at your blog for fun (after seeing the Smolin paper--I usually look to see your comments when there is a LQG paper :-) ). I noticed some of your comments on the predictive landscapes. I just want to clarify a few things about it:

(1) The argument that "data" seems to point toward a predictive landscape isn't sociological. Rather, it notices that Weinberg's argument predicts the correct magnitude for the CC only if you assume the other parameters aren't varying. You can then take two attitudes: (A) it is ridiculous for only the CC to vary, therefore the landscape approach must be wrong and the apparent success of Weinberg is an accident, or (B) the success of his argument in predicting the order of magnitude of the CC is not an accident, and suggests that there is a landscape, but one where only the CC (and perhaps the Higgs mass) is varied. We are taking attitude (B). The question then arises: is it crazy to just have the CC and Higgs mass scan? The answer is NO. The very simple field theory landscapes we construct provide examples of theories where those are the only parameters that scan. In non-SUSY examples, this still requires a small accident, which as we discuss MUST happen for the relevant operators. In our SUSY examples with discrete R symmetries, no additional assumptions are needed; you can prove that the CC and Higgs mass are the only things that can scan. Depending on detailed properties of the landscape sector, you can get a variety of low-energy theories, such as the SM with only the CC and Higgs mass scanned, the MSSM with only the CC scanned, and Split SUSY with only the CC and Higgs mass scanned. I stress again that in these cases, symmetries of the landscape sector guarantee that these and no other parameters scan.

As you know we are *not* claiming that the entire stringy landscape is like this--indeed we comment that the parts of the IIB flux vacua that have been well-studied definitely are not like this. Why should we end up in a predictive neighborhood when there are non-predictive ones out there? I don't know; this is a "global" question about the landscape. But it is going to be a hell of a long time before we understand the landscape from the top down to answer this question--and as far as I'm concerned, we have exactly three years, till the LHC turns on! So we are trying to short-circuit the process, to take a hint from data about what our local neighborhood of the landscape looks like, should it even exist.

(2) To me the really interesting thing about the paper is that, once you accept the idea that only the relevant operators scan, you are left with a very rigid set of rules for building models, that lead to small theories with few new particles and parameters that are very predictive, and can be confirmed or excluded at the LHC. We give a number of examples of theories like this, two of which have a new way of solving the hierarchy problem using very basic environmental requirements. The last model I like especially, since it is so minimal yet accomplishes so much: it provides an explanation for the Higgs mass being exponentially smaller than the cutoff using the requirement of baryonic structure, gives baryogenesis, a weak-scale dark matter candidate, and has gauge coupling unification!

(3) A general comment about the philosophy of model building in general and this sort of landscape model-building in particular: you likely know all this stuff but perhaps other readers of your blog might find it helpful. One might complain that our approach doesn't predict a unique theory--this is a complaint model-builders often hear from string theorists too long removed from data, (and also from crackpots whining from the sidelines). The complaint is that if one model is ruled out we'll just build another. Yup. We model-builders will happily stick our necks out and have them chopped off again and again--until we nail the right model. What is not apparent to people who don't build models for a living is how hard it is to build consistent theories that address problems and aren't manifestly ruled out by experiment. This is why in the 25+ years people have been trying to extend the standard model, only a few approaches have been found--strong dynamics, SUSY, extra dimensions, now landscape motivated models. It isn't for lack of trying--it is a tough business. But successful theories of nature have often been built by model-builders trying to build concrete theories that address puzzles, and taking hints from experiments to start approaching the correct theory. Maxwell was a model-builder. So were the architects of the Standard Model.

In this enterprise, having a general framework, such as 4D gauge field theories, higher-D effective field theories, and now this new landscape framework, can be very helpful in focusing us on qualitatively different possibilities for attacking the big problems (such as the CC and hierarchy problems) that we have a hunch hold the keys to constructing correct theories. A framework is a good one if it helps you build concrete models with sharp predictions. This is exactly what happened in the case of the Standard Model. As you know, a big realization leading to the SM was that gauge theories should be important to the description of nature. This was of course ridiculed by most people in the 60's--all field theory was in bad repute. Nonetheless, a small group of people took this general idea seriously; it was their framework. But it did not uniquely predict 3-2-1 gauge theory! Only after many hints from data and lots of back and forth did this particular theory emerge; there were a number of competitors: the Georgi-Glashow model, the Prentki-Zumino model, an SU(3) \times SU(3) model of Weinberg, and many others, especially during a period in 77 or so when the experimental situation got very confusing. Critics of gauge theory model-building (and there were many of them!) said "It's unconstrained! You can build so many models!". But that doesn't matter! What *does* matter is that all the *specific* models made *specific* predictions, that could be confirmed or excluded experimentally. Which is what happened. When the dust settled, we had 3-2-1.

I think in this sense the new landscape framework for building models is really good: ironically, in all my years of model-building, it is the most constraining and predictive framework I have seen. The reason is that every *specific* model of this type has tiny particle content, few new parameters, and a qualitatively new character of prediction for the values of dimensionful couplings. Split SUSY was the first example of a model of this sort, and as you know it has several sharp quantitative predictions; it can most certainly be spectacularly confirmed. It can also be totally excluded if e.g. a single scalar superpartner is found, or any other hints of a natural theory (technicolor, low-scale gravity etc.) are found. In our new paper, we propose a number of other models like this--each model in itself is highly predictive. You can read the paper, section 3, for descriptions of the models (since this is the first paper on the subject all the details aren't fully worked out, but they can be and will occupy many graduate student hours :-)). These models are the analogs in the landscape framework of trying 3-2-1, Georgi-Glashow, Prentki-Zumino, SU(3) \times SU(3) and so on in the gauge theory framework, to try and scope out the set of possibilities. It also allows us to explore the pattern of predictions for the LHC. Of course a given model makes a given set of predictions, but interestingly, all the landscape motivated models have signals that are qualitatively different from naturalness motivated models. For instance, in naturalness motivated models, one expects large jet multiplicities at the LHC + missing energy, and in particular a large excess of top quark production, given that some partner of the top is cutting off the top loop quadratic divergence to the Higgs mass. In the landscape motivated models, there is no such thing. Instead, one expects to make electroweak particles, which either decay to or are produced in association with a Higgs, + missing energy. It looks very different at a hadron collider. And, in the landscape models, if there are extra colored states (such as the gluino in split SUSY), they will always be long-lived--leading to spectacular signals such as decays many meters displaced from the beam-pipe.

Amusingly, it has turned out to be important to explore these signals even for experimentalists! For instance, there are specific hardware triggers at the LHC that are about to be finalized; if they had been finalized, it would have been very hard to find displaced decays. But we're in touch with experimentalists and they will be modified.

As you know I, and most other professional model-builders, am a pragmatist in physics. When I am working on a set of ideas, it is psychologically useful to believe they must be true, so I work harder on them and strive more to overcome obstacles, so I can push them to a point where they can make concrete enough predictions to be ruled in or out experimentally. When I feel this has been done to my satisfaction, and the subject is in the capable hands of other people, I move on, and drop my psychological belief that they must be true, since it no longer serves a purpose. After all, experiment will ultimately decide what is going on at the TeV scale! I did this with large dimensions, and also with the little Higgs. Right now, I am in the midst of landscape thoughts. But I have to say that for the first time, I feel that I am working with a possible complete picture for much of physics, at least as related to the tuning problems. I'm not struggling with solving the hierarchy problem while ignoring the CC, I'm not fiddling with model-building tricks to solve the "little" hierarchy problem. There is one, big, answer to almost all the riddles that have vexed me and my friends for 20+ years. Of course it could all be wrong, it could still be that there is a "deep" mechanism for the CC. I have failed many times to find one; that doesn't mean one doesn't exist, and I will very likely go back and think about it again soon. But again as a pragmatist, I can't resist taking an approach that *did* work--Weinberg's--seriously, especially since it further leads to promising ideas that allow me to make very concrete predictions. The LHC will turn on, and the predictions will be proven right or wrong. If right, great! If wrong, we'll all have more clues needed to construct the right theory. The fun really starts in 2008.

All the best,
Nima [Arkani-Hamed]


reader Peter Woit said...

First, a historical comment on the analogy between the present situation and that in the seventies:

"But it did not uniquely predict 3-2-1 gauge theory! Only after many hints from data and lots of back and forth did this particular theory emerge; there were a number of competitors: the Georgi-Glashow model, the Prentki-Zumino model, an SU(3) \times SU(3) model of Weinberg, and many others, especially during a period in 77 or so when the experimental situation got very confusing."

I was a student at Harvard in 1977, just starting to sit in on or take graduate level courses in particle physics and attend seminars and colloquia about particle physics. SU(3)xSU(2)xU(1) with the SM particle content was not just one of a huge array of competing models floating around. It was pretty well set in stone by then, with the model builders already hard at work on GUT and supersymmetric extensions of the SM. Some phenomenologists were certainly actively working on checking the consistency of the newest experimental data with the SM, and the phenomenon of confusing initial data from some experiments was not unknown. But the SM was already being called the "Standard Model" for a reason. It already had such a canonical status that theorists were hopeful that a new experimental result would falsify it and make the game interesting again. The other models mentioned certainly had nothing like the status of the SM in 1977.

Some general comments on model-building. It's certainly a good thing that model-builders are pursuing as wide a range of models as possible in preparation for the LHC, especially if they're talking to the people who are working on the LHC triggers. People trying to do bottom-up phenomenology and model-building have been in a tough situation for years, with no experimental guidance to go on. Hopefully the LHC will change all that.

The real problem with particle theory these days is that the top-down approach has been nearly completely destroyed by the concentration on one idea of how to do this (string theory) and the refusal to acknowledge the colossal failure this has led to. Embedding model-builder's work in the vacuous ideological picture of string theory with its 10^500 or whatever vacua doesn't help with this disastrous situation.


reader Quantoken said...

Peter said:
"The real problem with particle theory these days is that the top-down approach has been nearly completely destroyed by the concentration on one idea of how to do this (string theory) and the refusal to acknowledge the colossal failure this has led to. Embedding model-builder's work in the vacuous ideological picture of string theory with its 10^500 or whatever vacua doesn't help with this disastrous situation."

No, contrary to what you believe, Peter, there hasn't been a colossal failure of string theory. So string theoreticians really have no failure to acknowledge at this point.

To me, the failure of a theory means it makes a concrete prediction, expecting it to be confirmed, but instead it is falsified by experiments. That I would call a failure. In that sense a failure would be a good thing, since you know what is wrong and not working, and you can move on to something else.

String theory has NOT met such a failure. It has not made a concrete prediction that can be tested experimentally. So there is not even a failure available. In principle it could still be possible that if they explore it along that direction long enough, they could still succeed at the end of the day. The problem is no one can know for sure.

25 years of the lifetimes of some of the smartest people have been devoted to this quest, fruitless so far. One does not have many 25-year spans to live. The question is how one could continue to throw his own life into this, knowing full well that the previous 25 years haven't been very successful, and who knows whether the next 25 years will be different or not.

I say make your predictions now, instead of waiting until 2008. Then the LHC may answer once and for all whether you were right or wrong. If you wait until the first experimental result is out before you correct your model, you can always continue building models this way and you will never fail, because you always have the next superstring model on which to hinge your hope of success - you have plenty (10^122) of different landscapes to try, don't you? But it may also take longer than what your lifetime can afford.

Quantoken


reader torbjorn said...

Since the blog owner is refreshingly insistent on facts, I would like to correct even off-topic posts:

"I do not think people should be picking on each other's name."

Agreed! I was making a (mostly) general observation on something I took for a badly chosen handle, not a given name. My apologies!!! :-(

"..., I do not speak Swedish. And it is one of the least spoken language on this planet."

Initially I thought so too, but since most countries have fragmented language sets, apparently the some 10^7 speakers in Sweden and Finland (10%) make it a medium-sized language. (This is because Sweden was briefly a European superpower, and kicked some Russian butt.)

Furthermore it is fairly well understood in the Nordic countries as well as in Iceland and Greenland. (This is because the Nordic Vikings not so briefly were a European superpower. And again kicked some Russian butt, amongst others. These poor Russians! :-)

And funnily enough, since Swedish and Dutch are both Germanic languages, Swedes and the Dutch can understand the gist of each other's written languages. (This is probably just a random convergence on the landscape of languages.) Now we are talking! ;-)


reader torbjorn said...

Aargh! "since most countries have fragmented language sets," was incorrect of course.

I meant to say that most of our 3000+ languages are small and fragmented over the surface of Earth. China may have some 50 languages but Mandarin is still the largest of them all...


reader Paul Frampton said...

I agree with Nima on the status of the 3-2-1 GSW model in the year 1977, as there was experimental confusion from the atomic parity-violation experiments, which failed to detect the parity-violating coupling of the Z to the electron. This was resolved in favor of 3-2-1 only in 1978, when the data on polarized electron-deuteron scattering from the Prescott-Taylor SLAC experiment settled the issue - by a very clever technique they found the P-violating Z-electron effects. This all came together at the Rochester conference in Tokyo that summer.


reader Peter Woit said...

Well I remember this history pretty clearly and quite differently. The atomic parity violation experiments were a very new type of experiment of a difficult kind, just barely sensitive enough to be able to see the effect they were looking for. Since these experiments dealt with atoms, not elementary particles, there were potentially complicated atomic physics effects confusing the issue. Particle physicists in general certainly didn't see these experiments as conclusive falsification of the standard model. I was working at SLAC on an experiment during the summer of 1978 and attended the announcement of the Prescott-Taylor results. People were very impressed with the experiment and the way it definitively removed the doubts raised by the atomic experiments, but they weren't surprised at all that the results agreed with the standard model.

Here's a quote from an interview with Weinberg on this topic about what happened to the standard model when he and others started looking for alternatives that would conserve parity in this situation: "In an odd way that made it more convincing. You really couldn't do anything much to it without dreadful consequences" (from Crease and Mann, "The Second Creation").

It's just not true to say about the standard model in 1977 that "Only after many hints from data and lots of back and forth did this particular theory emerge". It had emerged back in 1973-74, and by 1975 Glashow and others were already referring to it as the "Standard Model". Sure, in 1977 people were looking to see if there were alternatives that would be consistent with the atomic physics results, but none of the ones people were finding were convincing.


reader Jim Graber said...

Peter,
I appreciate your historical perspective (and was even willing to accept your take on the history until I read Paul Frampton's comment), but either way I think you are missing the real point. Nima wrote "1977 or so"; perhaps he should have written "1972 or so". In either case, the real point is the emergence of 3-2-1 from a small "landscape" of competing models.
Best,
Jim Graber


reader Peter Woit said...

Hi Jim,

If there's any particular thing about my take on this history you don't believe, let me know and I can provide you some references to back it up. What I wrote is based on clear personal recollections, as well as some research I've done in recent years for a project that involves some history of the standard model.

If Arkani-Hamed and Frampton had referred to "1972 or so" instead of "1977 or so", I wouldn't be writing in here to make an objection on a point of history. For one thing, in 1972 I was in high school and had no idea what was going on in particle theory. But things were very different in particle theory in 1977 than they were in 1972.

I'm not sure precisely what Arkani-Hamed was trying to claim was analogous to the handful of alternatives to the standard model that he mentioned, so I wasn't complaining about that. If it's the entire "Landscape", that's pretty ludicrous. If he's able to use the ideas in his new paper to narrow things down to four or so possible different predictions for what the LHC will see, and one of them turns out to be right, he'll well deserve his Nobel Prize. I wouldn't bet on it.

Peter


reader Paul Frampton said...

My earlier post was cryptic, so here's a clarification. Steve Weinberg has told me he was certain of the SM not later than 1973, and Shelly Glashow in the same year was assuming it in SU(5) grand unification, so the SM was by then already established in the minds of its inventors.

Nevertheless, experimental verification of the electron NC came only 5 years later. The electron NC is dominated by electromagnetism, so it only shows up as a 1 in 10^7 interference effect. The SM could be, and was, modified - but only inelegantly - to agree with the null atomic experiments. It is a good point that the atomic wave functions were so badly known that comparison with theory could be at best inconclusive. It was nevertheless very curious that neither Fortson in Seattle nor Sanders in Oxford could find any non-zero atomic parity violation, and this caused serious doubts, as I mentioned, throughout 1977, until the Prescott-Taylor data appeared the following year.

It was 5 more years until 1983 and the discovery of the W and Z, but the verification of the NC couplings was sufficiently complete in 1978 to discard competing models from the "landscape".


reader Anonymous said...

"This is why in the 25+ years people have been trying to extend the standard model, only a few approaches have been found--strong dynamics, SUSY, extra dimensions, now landscape motivated models. It isn't for lack of trying--it is a tough business."


Has there been any concrete result in model building in the last 25 years that will ultimately appear in a textbook?

