Monday, July 25, 2011

Theory vs phenomenology in the days of experimental reckoning

Exactly twenty years ago, in August 1991, Paul Ginsparg launched the arXiv - which was known for many years under its original Los Alamos URL,

The first hep-th paper was a black hole paper by Horne and Horowitz. However, Paul Ginsparg himself and Sheldon Glashow succeeded in constructing a time machine in 1994, returned to 1986, and submitted the essay Desperately Seeking Superstrings - five years before Ginsparg created the arXiv and eight years before Al Gore invented the Internet (1994) and Ginsparg invented the time machine.

Congratulations, Paul (both for the birthday and the time machine)!

The only decent celebration of the arXiv in the mainstream media, as far as I know, was published 10 years ago in the New York Times. Your humble correspondent couldn't avoid recalling some humble memories, haha. ;-)

Using the modern terminology, Paul's background is that of a clear and serious theorist. Like others, he was able to see two loosely separated, partly overlapping - but nevertheless distinct - groups of non-experimenters in high-energy physics.

It seemed reasonable for him to separate the archives of preprints dedicated to these groups because many people only wanted to follow one of them. He had to invent clever names for the two archives. So he reserved the word "theory" for the "more formal theorists" such as himself - even though the other group could also have called itself "theorists" had there been no Ginsparg - while he invented a funny new name for the other group: "phenomenologists".

Phenomenology: terminology

Much like the term "the Big Bang", the word "phenomenology" was meant to be derogatory or humiliating. It was a friendly way to make fun of the other group. Why was it funny? A science-oriented person may think that "phenomenology" is related to "phenomena", so it is surely focused on observations, and the word must therefore be associated with the objective, cold, hard scientific way of looking at the real world.

However, if you check an encyclopedia, you will find out that the term "phenomenology" had long been known and widely used in philosophy and psychology. Despite its seemingly "objective" flavor, phenomenology is actually all about the focus on subjective perceptions, consciousness, and all the philosophical mumbo jumbo that the philosophers actively want to be confused about. ;-)

But over the years, the term "phenomenology", when used by particle physicists, has lost its funny connotations, much like "the Big Bang". It's been used as a serious word.

Differences between hep-th and hep-ph

So the theorists such as Ginsparg himself would be given the hep-th archive - "high-energy physics, theory" - while the phenomenologists would be sending papers to the hep-ph archive - "high-energy physics, phenomenology". I still haven't explained what the difference is.

Well, phenomenologists are not experimenters: their main devices are pen, paper, and computer rather than experimental apparatuses. In this sense, they are "generalized theorists". But they still think that the main purpose of physics is to study the phenomena that were recently observed or that could perhaps be observed in the very near future. So they think of themselves as the people who constantly interact with the experimenters.

One also uses the term "bottom-up approach". High-energy physics is a modern name for particle physics, which studies the elementary particles that everything in the world is made of. The main content of the "high-energy" terminology is that the higher the energy you're able to squeeze into a particle, the deeper it may penetrate into another particle, and the shorter the distance scales you're able to resolve.
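To put rough numbers on the energy-distance relation, here is a back-of-the-envelope sketch using the standard conversion constant \(\hbar c \approx 197\,{\rm MeV\cdot fm}\) (the function name is just illustrative):

```python
# Rough sketch: the distance scale a collision can resolve is about hbar*c / E.
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV * femtometers

def resolvable_distance_fm(energy_gev: float) -> float:
    """Approximate length scale (in femtometers) probed at the given energy."""
    return HBAR_C_MEV_FM / (energy_gev * 1000.0)  # convert GeV to MeV

print(resolvable_distance_fm(1.0))     # ~0.2 fm at 1 GeV: roughly a proton radius
print(resolvable_distance_fm(1000.0))  # ~0.0002 fm at 1 TeV: LHC-scale resolution
```

The factor-of-1,000 jump in energy buys you a factor-of-1,000 shorter resolvable distance, which is why energy is the "currency" discussed below.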

So high energy is an important "currency" in our knowledge. We need the energy of the collisions at the accelerators to be high if we want to learn very new things. The phenomena in which the energy per particle is low belong to "low-energy physics", which is known from day-to-day life and is studied by more ordinary disciplines of the physical sciences, such as condensed matter physics.

On the other hand, the phenomena in which the energy per particle is "high" are the potentially new phenomena that we haven't seen so far and where there is some room for adjustments of our theories or for completely new discoveries. The boundary between "low" and "high" is flexible, of course: you may think that 1 TeV is the current boundary (though a "low" and a "high" Higgs mass are separated by 135 GeV, and there are other special contexts in which the boundary lies elsewhere).

The bottom-up approach is the approach that assumes that the optimum strategy (and, if you're a hardcore phenomenologist, the only strategy) to increase our knowledge is to start with the solid low-energy physics we know from many previous experiments and to gradually increase the energy. Experimenters should be looking at experiments with a gradually increasing energy; theorists (well, phenomenologists) should try to guess what will be seen tomorrow. In this way, we may be getting to ever higher energies and ever shorter distances.

On the other hand, the hep-th "formal theorists" are mostly "top-down theorists". The top-down approach has always assumed - and now acknowledges - that there are certain basic things we may learn about the real world that are true even though the corresponding energies are much higher than the "frontier" that the particle colliders have been able to reach.

In particular, general relativity shows that if you collide particles whose energy (vastly) exceeds the Planck energy, they will create a black hole. So even though this is very high-energy scattering, with much higher energies than what the LHC may achieve, we actually do know what happens. So there is another known "island" or "continent" at the top - with energies so high that another description, general relativity with black holes, becomes applicable. Even classical general relativity becomes the best approximation in the limit of very high energies.

Now, from this new island of collisions with very high energies, we may try to dig a tunnel towards the particle physics that can be studied by the colliders. Because we're going from the (known) regime at high energies down to the (unknown) regime at intermediate and then lower, but still collider-inaccessible, energies, the approach is called the "top-down approach".

Aspects of hep-th research

Those comments about black holes wouldn't tell us much about particle physics in general. But the key insight is that people figured out that the reconciliation of the black-hole-dominated high-energy scattering with the low-energy physics described by quantum field theories (with the Standard Model as the key quantum field theory for the phenomena we may observe today) is a very hard, constraining problem.

And what is more important is that physics has found a consistent solution to the problem - moreover, one that seems to be unique. It's still called string theory. It makes perfect sense, and it is the only "big surviving theme" in the hep-th approach to particle physics that actually offers some new insights about genuinely high-energy physics. Other approaches are either applications of low-energy effective quantum field theory - which includes things such as Hawking's amazing semiclassical analysis of black hole radiation - or they have been proved wrong, or they have been fads that could only look promising for a while but never led to any convincing or conclusive results (and not even to interesting mathematics with lots of consistency checks and/or surprising exact patterns and relations).

String theory has been and still is the most important theme underlying the hep-th archive, and it is fair to say that a large majority of the valuable results published as hep-th papers during the two decades have depended on string theory in one way or another. The list of successes and fundamental breakthroughs in string theory during the recent 30 or 40 years - and even during the recent 10 or 15 years - is extensive. All those things have radically transformed the ways we can think - and do think - about all kinds of questions, and those insights won't be unlearned. These insights have affected not only "unification in physics" but even many other disciplines of the physical sciences, including superconductivity, heavy ion physics, fluid dynamics, and others.

Aspects of hep-ph research

On the other hand, the hep-ph research has been motivated by contact with experiments that are doable in the near future. It wasn't appreciated, but it has always been the case that this is an extremely risky strategy, especially when many people invest millions of man-hours into this research - simply because there didn't have to be any new physics "around the corner".

Even if there were new physics, it could confirm at most one model in the literature. But the hep-ph archive is filled with hundreds or thousands of models of what could be seen right around the corner. It's true that out of 1,000 distinct models of new physics below 1 TeV, at least 999 had to be wrong - although many people apparently failed to realize or appreciate this trivial observation, which reduces the expected value of a paper by three orders of magnitude. And the newest LHC data are increasingly pointing towards the conclusion that the right number is not 999 but 1,000, so it is more than three orders of magnitude. ;-)
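The arithmetic behind the "three orders of magnitude" is trivial and can be sketched as follows (toy numbers of my own choosing, assuming the models are mutually exclusive so that at most one can be right):

```python
# Toy sketch: 1,000 mutually exclusive sub-TeV models, at most one correct.
n_models = 1000
p_right = 1.0 / n_models   # chance a randomly picked model is the right one
full_value = 1.0           # normalized value of a model that turns out correct
expected_value = p_right * full_value
print(expected_value)      # 0.001, i.e. three orders of magnitude below 1
```

If even the one "right" slot turns out to be empty - the Standard Model with nothing new - the expected value drops below this figure, which is the point of the paragraph above.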

Extinction of models

Some people seem to be shocked. (Our commenter "M" is not among them because he was being ironic while largely agreeing with my attitude.) Why? The hep-ph literature is full of papers that study various kinds of fireworks below 1 TeV, so "M" and apparently others seem convinced that Nature has to agree with the "consensus of the papers" as well. But the LHC increasingly clearly and conservatively says something different: there don't seem to be any new fireworks below 1 TeV. Physics of the Standard Model works pretty much flawlessly, and the Higgs sector is the main portion of the physical laws whose existence seems almost inevitable but whose details are still awaited by particle physicists.

The extermination of the models is fair and color-blind. Think of any random buzzword - leptoquarks, W' bosons, Z' bosons, preons, very low-mass superpartners (clearly the most convincing candidate for new physics, in the past as well as today), light black holes, fourth-generation quarks, and so on. All these things and many others - as long as their proponents linked them to a sub-TeV energy scale - are approaching extinction. People may be unhappy, but a scientist should ultimately be happy about any truth he or she learns about Nature.

So Nature begs to differ: it doesn't want to join a "consensus" with the hep-ph arXiv. The only hep-ph papers it agrees with are those that modestly studied the Standard Model, which probably looks boring to other phenomenologists - and sometimes to the researchers of the Standard Model themselves.

The statement that Nature doesn't give a damn about the random distribution of some papers written by a particular group of humans should be obvious and understandable. But I want to make one more related point. Even if the LHC found and confirmed one of the new sub-TeV models, it wouldn't mean that Nature had joined the hep-ph consensus. Why? Because there's simply no consensus among the models on the hep-ph arXiv to start with. The models are inequivalent, so they disagree with each other.

You may say that many of them agree when it comes to the question whether the Standard Model should be superseded or extended by new physics below 1 TeV. In this particular war, the Standard Model faces a diverse group of foes :-) and the proponents of any of these foes could "unify" to fight against the Standard Model.

But this logic is irrational because the question - whether the Standard Model is right up to a TeV - is a completely cherry-picked, contrived, and unnatural one. There's no reason to introduce the polarization in which the Standard Model stands against everyone else. One could also ask different questions in which the Standard Model would have some allies with the same answer - or would even be part of a majority facing a smaller group of foes.

The only special thing about the Standard Model is that it's the "minimal" theory (when it comes to counting the fields etc.) that is compatible with the data we had known before the LHC. And indeed, so far it looks like this minimal theory is the most accurate one even up to a TeV or so: any qualitative change with a low enough mass seems to make it incompatible with the observations. Trying to extrapolate your known theories as far as you can is the obvious strategy to try - and you should only give up when you discover some inconsistencies (internal, or with the observations). This is pretty much the strategy of the top-down hep-th theorists, and indeed the LHC data seem to support the wisdom of that strategy.

The "null" sub-TeV data that keep on coming from the LHC kill not only the individual models from the hep-ph literature that wanted to offer "spectacular" predictions right around the corner and make their authors famous in case those predictions were confirmed. The "null" sub-TeV LHC data kill and exclude whole philosophies, whole ways of thinking.

For example, particle physics has long talked about the hierarchy problem: why is the Higgs so much lighter than the Planck scale even though it could be heavier and quantum corrections naturally want to make these two masses very similar? And some phenomenologists extended it to the "little hierarchy problem", which is effectively the claim that there can't even be a small gap - like one order of magnitude - on the energy scale between the Higgs mass and the new physics that protects its smallness.
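Schematically - this is the standard textbook estimate, not anything specific to the models above - the hierarchy problem is the observation that the loop corrections to the squared Higgs mass are quadratically sensitive to the cutoff scale \(\Lambda\),

\[ \delta m_H^2 \;\sim\; \frac{\lambda^2}{16\pi^2}\,\Lambda^2, \]

where \(\lambda\) is a generic coupling. If \(\Lambda\) is taken to be the Planck scale, \(10^{19}\,{\rm GeV}\), the natural value of the Higgs mass would be dragged dozens of orders of magnitude above its observed value unless the individual corrections cancel with fantastic precision.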

It's becoming increasingly clear that the claim that "a little hierarchy shouldn't be allowed in Nature" is wrong: Nature doesn't respect this law. And of course, as the LHC continues to push the lower limit on the energy of new physics towards higher values, it is making not only the little hierarchy problem but even the normal hierarchy problem more questionable.

So the LHC has the capacity to de facto exclude the whole philosophy of the "little hierarchy problem" and many other propagandist paradigms whose purpose was to irrationally justify the phenomenologists' sensationalism, their assumption that Nature was obliged to offer us new physics right around the corner.

I am convinced - and, unless new breakthroughs occur, will remain convinced - that the naturalness arguments are fundamentally right. But one must be careful and avoid their versions that are not really robust and that resemble black magic or numerology. The little hierarchy problem is the statement that Nature doesn't want to cancel things with a relative accuracy of 1/50 or better because it's "unlikely" that this would occur by chance.

Well, the odds could be calculated to be 2% in some straightforward way, but 2% is extremely far from zero. It's just extremely dangerous to build your world view on such extremely weak arguments of a statistical character, especially if there's no rational justification for the a priori probability distribution that you have used.
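To see how soft a 2% argument is, here is a toy Monte Carlo. Every choice in it - including the uniform prior - is my illustrative assumption, which is exactly the point: the answer depends on a prior nobody can justify.

```python
import random

# Toy sketch: how often do two unrelated O(1) contributions happen to
# add up to something smaller than 1/50, under an (arbitrary) uniform prior?
random.seed(0)
trials = 200_000
tuning = 1.0 / 50.0
hits = sum(
    abs(random.uniform(-1, 1) + random.uniform(-1, 1)) < tuning
    for _ in range(trials)
)
print(hits / trials)  # roughly 0.02 - "unlikely", yet extremely far from zero
```

A 2% coincidence happens all the time in a literature with thousands of papers, so elevating its absence to a law of Nature was always a gamble.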

Many people expected that the LHC would produce lots of new data incompatible with the Standard Model and that there would be lots of interaction between the hep-ph archive - and hep-ph researchers - on one side and the experimenters on the other. However, the outcome so far seems to be that there is no interaction at all. Despite the phenomenologists' wishful thinking and their sometimes hysterical effort to be as close to the experimental frontier as one can get, their work in recent decades seems to be irrelevant for the observations at the LHC.

(David Gross's "Oskar Klein and Gauge Theory" is a nice historical example showing that even the big shots of 1930s physics, such as Heisenberg, suffered from the disease of expecting everything we know to break down around the corner - including quantum mechanics. They expected the postulates of QM themselves to break down at the Compton wavelength of the electron, and so forth. It's crazy, it's been shown wrong, but physicists still suffered from this disease even in the 21st century.)

The bulk of the model building work has been irrelevant for deeper and mathematical questions as well because most of the hep-ph research has been mathematically shallow.

Hep-th is different

The situation is very different in formal theory because formal theory wasn't developed to address the experiments to be done next year or in the next five years. Hep-th is a successful effort to extract qualitatively new - and sometimes quantitative and accurate - insights about Nature from a careful mathematical reconciliation and analysis of the empirical insights that have been accumulated over the centuries and millennia during which humans have observed Nature.

No hep-th theorist has ever claimed or boasted that the bulk of his work had much in common with the data produced by the next-generation collider, so of course the hep-th work isn't really affected by the "null" results from the LHC. Everyone who has at least a clue about modern physics - aggressive crackpot fans of their fellow crackpot Peter Woit are surely not among them - knows that the majority of the string-theory phenomena being investigated are associated with the Planck scale, \(10^{19}\,{\rm GeV}\), which is clearly not directly accessible to doable experiments. Nevertheless, this research is tightly connected with observations (made decades or centuries ago) because the phenomena near and above the Planck scale are dictated by general relativity.

Many theorists, and many string theorists - but not all - would feel more excited if the LHC were generating totally new phenomena, and their phenomenologist friends would be really thrilled. However, it's still true that the theorists don't care as much as the phenomenologists do.

What I really want to say is that most of the phenomenological work has been a waste of human resources and time. Instead of producing 1,000 models that could be relevant for the sub-TeV observations, those people could have just waited for a few years and let Nature speak. And it seems that Nature has spoken - and it may still speak in an ever clearer language - and so far, the answer is that the right model of these phenomena is called the Standard Model.

If you speculate about future observations, you're always speculating. If you make a guess, it is a guess. You can't force Nature to produce some phenomena of a certain kind just by wishful thinking. It doesn't work this way and it can't work this way. Nature does whatever She likes to do. In effect, bottom-up phenomenologists have never had any rational reason to think that they're any closer to the future observable phenomena than the top-down theorists.

And because the "apparent proximity" to experiments was the only reason for them to believe that they knew what they were doing, they really didn't have any rational reason to believe that they were on the right track.

Meanwhile, the top-down theorists realized that they didn't have any qualitatively new data and didn't know what the next qualitatively new data could be, if any. In fact, a defining feature of the top-down approach is that one expects the known observed theory, with small additions - such as supersymmetry at a few TeV - to work all the way up to extremely high energy scales such as the GUT scale near the Planck scale, and this high-energy scale is where most of the interesting things that should be studied take place. You introduce new physics at intermediate scales only if you are forced to by the consistency of the high-energy and low-energy phenomena - you're not supposed to do such things just for fun.

Indeed, the GUT scale is unobservable directly, but new direct observations are simply not the only way to learn new things about Nature. A more mathematically interconnected and accurate analysis of the known observations is relevant, too.

So over those decades, hep-ph model builders constructed lots of rather shallow models that were meant to be interesting because of the hope that their new phenomena could be observed soon. If you just place those new phenomena at 50,000 TeV, they're not too interesting. That's because the sensation doesn't come from some extraordinary features of these models per se - it comes from the cheap hype connected with the (probably wrong) expectations that these new phenomena could be seen soon.

So if the hep-ph archive had stocks, the price of the stock probably dropped by 80% or more during the recent week and the decline may continue.

On the other hand, the hep-th archive was largely unaffected, simply because interaction with the next collider has never been the driving force of hep-th research. Hep-th theorists never tried to speculate about things they didn't know and that could have millions of possible answers - including the most obvious answer (the Standard Model) - and instead they worked hard on aspects of physics that they actually had a chance to pin down by clever arguments and hard mathematical work.

It's ironic, but the most valuable parts of the hep-ph research after the "extinction" are those that overlap with the hep-th research. In the future, phenomenologists may continue to play with and use the tools and new ideas that were inspired by string theory or overlap with it - extra dimensions, gauge theories with complicated quiver diagrams, and a few other major examples. But all the analyses of detailed models that depended on the new physics' energy scale being almost equal to the electroweak scale - and indeed, the "new physics" part of the hep-ph archive is pretty much dominated by these things - are becoming worthless at a dramatic rate. The probability that a random physicist will read one of those papers dropped by an order of magnitude between last week and today.

Meanwhile, string theory has produced fascinating insights that didn't go away and will almost certainly never go away. A continued confirmation of the Standard Model by the LHC will lead many people to rethink what is well-motivated in research and what is not. I sincerely hope that almost everyone will start to appreciate that betting on the expectation that a particular new phenomenon will be observed next year - even though you don't have a glimpse of a proof - is not necessarily the wisest way to organize your time and priorities.

We know quite a lot about the Universe, but we still need to know how those partial insights fit together. So I hope that instead of shifting the energy scales from 200 GeV to 1,400 GeV and continuing the random guessing, many phenomenologists will buy some string theory textbooks and begin to think about the Universe at a slightly deeper, less sensationalist level.

And that's the memo.


There is some deliberately propagated confusion - if you want me not to use the word "lies" - among the hacks of the blogosphere. A notorious, immoral, and professional demagogue named Peter W*it (sorry for the rude language) says that someone is "throwing supersymmetry under the bus". I am surely not throwing supersymmetry under the bus. I am convinced that supersymmetry is a part of Nature and that it is broken at a scale much lower than the Planck scale. What I have thrown under the bus are models that have been experimentally excluded, because that's what scientists do with results from experiments. I would do the same with supersymmetry as a principle if there were powerful evidence that it is not valid in Nature.

For example, the LHC has shown that the strongly interacting superpartners (at least gluinos and a majority of squarks) can't be lighter than a TeV. Because I don't see any reason whatsoever to think that these experiments are invalid, it simply implies that there are no gluinos below a TeV. They have to be heavier if they exist.

However, the LHC's "null" findings are not just about supersymmetry - and Peter W*it is just using his usual nasty demagogy when he cherry-picks SUSY.

The LHC has eliminated, in a color-blind fashion, all major claims that new physics would appear at this stage: and be sure that there have been lots of people at various levels of competence who boasted that their theory had "testable predictions at the LHC", and most of those predictions are gone (some of them were gone as soon as they were proposed because they disagreed with things that were known long before the LHC).

W*it himself was among the deeply counterproductive and misguided individuals who deliberately tried to spread the atmosphere in which people have to claim that they have some testable predictions for the next experiment - a machinery that was shown to produce exactly nothing in the whole world, because this is not the right way to do physics at this point. Instead of being ashamed and disappearing from the scientific world, where he has no business to verbally oxidate, this despicable man continues to expose the Internet to his dishonest rants.

Some sub-TeV physics may still be seen with a higher integrated luminosity, but some sub-TeV physics is already excluded. A scientist has no problem immediately learning the lesson. And I have never believed that something has to exist below a TeV to solve the hierarchy problem. 5 TeV is good as well.

My odds that SUSY will ever be found by the LHC are about 50-50 at this point. The recorded luminosity of the LHC is increasing almost exponentially, and the chances to find new physics are approximately logarithmically spread over the luminosity-times-energy scale, so possible discoveries will keep arriving as a Poisson process. Also, the Higgs sector is yet to be fully analyzed, and its structure may provide us with indirect signs that support or disfavor the simplest version of SUSY, the MSSM.

We will see. You may also check "What if the LHC doesn't see SUSY".
