Just offices and cemeteries
It's refreshing that I may sometimes fully agree with a text by Matt Strassler:
BBC's Pallab Ghosh has quoted Christopher Parkes of LHCb who has said "Supersymmetry may not be dead but these latest results have certainly put it into hospital."
Even if one (or two) gets into a hospital, it doesn't mean he's not a supersymmetric hero (a superhero for short). A shoulder surgery isn't the end of the world.
But nothing like that is possible in science.
Supersymmetry is a symmetry, a principle added to the list of conditions we expect from models. But it is not a particular model. There are many supersymmetric models or supermodels for short.
Some of them, like Petra Němcová, were named envoys for Haiti and they're much more than richly decorated hollow skulls. ;-)
But let's not get distracted too much. What I want to say is that one must carefully distinguish the confirmation or falsification of a particular model; and the confirmation or falsification of the whole principle or framework. These are totally different things. Moreover, in contrast with the opinion of Pallab Ghosh and his sources, the framework doesn't get falsified or "nearly falsified" if you falsify 80% or 90% of the particular models.
Of course, I have discussed the same issue many times, e.g. in Bayes and SUSY (May 2012).
If you falsify 90% of the models in a framework – SUSY models within SUSY, say – and the fraction is admittedly very hard to quantify, because we deal with continuous, noncompact parameter spaces equipped with an ill-defined measure, then the belief that the framework is still right only requires the believer to think that he had bad luck of a kind that occurs in 10% of cases.
But it's not such a big deal to accept this modest amount of "bad luck". It's equivalent to a less-than-2-sigma "deficit" arguing against your general point – in this case, supersymmetry. But less-than-2-sigma excesses and deficits are almost everywhere. They're just not terribly strong arguments for assertions either way. The other, partly theoretical arguments for and against SUSY are arguably much stronger than that.
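To see why eliminating 90% of the models is such a weak statistical argument, here is a minimal sketch using Python's standard library; the one-sided Gaussian convention is my illustrative assumption, not anything dictated by the argument itself:

```python
from statistics import NormalDist

def bad_luck_sigma(surviving_fraction):
    """One-sided Gaussian sigma corresponding to an outcome whose
    probability under the framework was `surviving_fraction`."""
    return NormalDist().inv_cdf(1.0 - surviving_fraction)

# Falsifying 90% of the models leaves a 10% "bad luck" probability:
print(round(bad_luck_sigma(0.10), 2))  # about 1.28 sigma, well below 2 sigma
```

With the two-sided convention the number would come out somewhat higher (about 1.6 sigma), but either way it stays comfortably below the 2-sigma fluctuations that are seen "almost everywhere".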
The idea that theories – in this case SUSY – may be sent to hospitals is based on an application of "collective guilt" to models. One assumes that supersymmetric models represent a nation or a family and they share the pain of each other. So if some of the relatives – models – are killed, the others suffer.
But this ain't the case. Two models may share some properties – for example, they may respect the principles of broken supersymmetry – but the truth of one lends no support to the other. In fact, their truth values are negatively correlated, because if one model is right, we know that the other, inequivalent model must be wrong! This negative correlation is there regardless of any similarities in the assumptions or technical properties of the models.
Despite all the similarities between the models, the death of another model is always good news for a model that hasn't been killed, simply because the competition gets less severe. Moreover, the "clustering" of the models into the "families" that someone may prefer is artificial and there are many other ways to organize physical models into "families". The experimental signatures that models predict (e.g. lots of events with many leptons) may often be more important than their deepest assumptions (such as SUSY). No reliable conclusion may depend on an arbitrary way of arranging the models into "families".
Let me give you two similar examples showing why the "hospital" idea is logically flawed. You take a bus on a nice trip. In a road accident, 90% of the passengers in your bus get killed. You were a bit lucky and avoided all injuries. Now the question is:
Should you be taken to the hospital?
The answer is obviously No. You shouldn't be treated as an ill person. The past proximity to several people who were killed by a truck going in the opposite direction isn't a disease – let's ignore the psychological shocks you may have experienced (but frankly speaking, I don't really believe that the treatment of people with such shocks is too sensible or useful, either).
Your life goes on even though you have belonged to a group of people whose majority is gone. After all, all of us belong to a group – all people who have ever been born – whose majority is already dead. While the current world population is 7 billion people, the total number of people who have ever walked the globe is significantly higher, over 100 billion. So over 90% of the people who have ever been born are dead by now. Does it mean that your existence and health are a contradiction? I don't think so. Life is all about the circulation of material between dead and living organisms; it's about selection.
Science is totally analogous to life. Evidence falsifies some theories – counterparts of life forms, species, and individual organisms – that were not sufficiently viable and it focuses the confidence and probability – a counterpart of the resources on the Earth – to those that have survived. In this way, the theories are getting more accurate, more sophisticated, more viable – much like the species and organisms.
It's completely incorrect to say that the people who live today are not viable just because they belong to the group of 100 billion people most of whom are already dead.
Higgs search and elimination of possibilities
My second example is the search for the Higgs boson. Let's look at the situation we were experiencing months before December 2011 when the confidence level for the 126 GeV Higgs boson surpassed 4 sigma and sensible people became pretty much sure it was there.
Before December 2011, experimenters were only able to eliminate intervals of masses that the Standard Model Higgs boson couldn't have (let's assume the Standard Model is right – or at least a relevant approximate step in our improving knowledge).
A priori, the Higgs boson mass could have been anything between 0 GeV and 1,000 GeV. The prior probability that the mass would be above 600 GeV was already small, for various reasons, so let's shrink the window to 0-600 GeV. By 2011, the vast majority of this interval had been eliminated. Around the summer of 2011, only the interval 115-130 GeV remained viable. But no Higgs had yet been discovered via a stronger-than-3-sigma signal.
Now you may think about it and say that it was strange. The Higgs had not been discovered yet and only an interval of width 15 GeV – 1/40 of the overall interval 0-600 GeV – remained possible. Pallab Ghosh could have said that 39/40 = 97.5% of the Higgs boson idea had been falsified; the Higgs boson as an idea should be taken to an intensive-care unit, the BBC could have written.
(If you decide that it's natural for the Higgs mass scale to be any number between 0 and the GUT scale, 97.5% could even be replaced by 99.99999999999999999%. In fact, it should be close to thirty digits "9", because it is the squared mass that could be uniformly distributed.)
But we know it would be a completely wrong conclusion. The Higgs boson was there, somewhere in the remaining interval. There has never been any good reason to doubt that some Higgs boson had to exist. The gradual shrinking of the "habitat" wasn't a sign of the Higgs boson's deteriorating health. Instead, it was a gradual improvement of our knowledge of Higgs' properties.
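For concreteness, the fraction in the parenthetical remark above can be reproduced in a few lines; the GUT scale of \(10^{16}\) GeV and the prior that is uniform in the squared mass are illustrative assumptions:

```python
import math

GUT_SCALE = 1e16  # GeV; an assumed value for illustration

# Surviving window 115-130 GeV as a fraction of a prior that is
# uniform in the squared mass between 0 and the GUT scale squared:
surviving = (130.0**2 - 115.0**2) / GUT_SCALE**2
nines = math.floor(-math.log10(surviving))  # leading 9s in the excluded fraction

print(f"surviving fraction ~ {surviving:.1e}")  # ~ 3.7e-29
print(nines)  # 28 leading nines, i.e. close to thirty digits "9"
```

Pushing the assumed cutoff a bit higher, or using a slightly different prior, shifts the count by a nine or two, which is why only the rough order matters.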
Elimination is easier than discovery
If you think about the numbers, you will easily understand why it's pretty ordinary that new particles are usually discovered after a big majority of the parameter space has been eliminated. The reason is simple: it's easier to eliminate a point in the parameter space (assuming that the point is really wrong) than to discover something in it (assuming that it's there). Why?
Well, the reason is simple. Physicists are usually satisfied with a 95% confidence level (2-sigma) exclusion but they demand a roughly 99.9999% confidence level (5-sigma) discovery. Now, 5 sigma is 2.5 times greater than 2 sigma, but the significance only grows like the square root of the number of collisions (or whatever else is being accumulated), so you need 2.5² = 6.25 times more "data" for a discovery than for an exclusion at the same point.
So if you assume that Nature sits at a generic point, you may make the following estimate. Find the moment at which about 50% of the parameter space is excluded. Multiply the amount of data collected by that moment by the factor of 6. And you will get an estimate of the amount of data that's needed for the discovery.
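The arithmetic behind this rule of thumb, under the standard assumption that significance grows like the square root of the accumulated data, fits in a few lines:

```python
EXCLUSION_SIGMA = 2.0   # ~95% CL exclusion threshold
DISCOVERY_SIGMA = 5.0   # conventional discovery threshold

# Significance ~ sqrt(data), so the required data scales as sigma squared:
data_factor = (DISCOVERY_SIGMA / EXCLUSION_SIGMA) ** 2
print(data_factor)  # 6.25
```

The rounding of 6.25 down to "a factor of 6" in the estimate below is harmless given how crude the generic-point assumption already is.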
Now, this is just an estimate, not a strict rule, of course. The actual amount of data you need may be 10 times smaller or 10 times greater than this estimate and it's still not shocking. But if you apply those numbers to the Higgs boson or supersymmetry, you will realize that there was no reason to be "worried" about the general Higgs boson idea in the fall of 2011; and there's no reason to be "worried" about the general idea of supersymmetry today.
People should try to think a bit rationally and realize that the "collective guilt" principle can't be applied to physical models because the "clustering of theories into collectives" is completely artificial, man-made, and inconsequential for the validity of individual models. The fates of individual models are independent. And if you need some strong enough negative evidence against a whole framework, you need to eliminate 99.7% (3-sigma equivalent) or 99.9999% (5-sigma equivalent) of the parameter spaces. The elimination of 90% of a parameter space doesn't give us much useful information. It is only as powerful an argument as any other 1.5-sigma bump seen anywhere.
And that's the memo.
Exactly two years ago, I described a Danish research project focusing on Tycho Brahe's remains in Prague. He could have been murdered with mercury etc., perhaps even by Johannes Kepler himself. Today, the BBC tells us that the Danish+Czech research is over. There was mercury in his beard but the level was normal, not deadly. Moreover, Kepler's description of Brahe's declining health "matched a severe bladder infection".
Kepler was great but I, for one, wouldn't consider the stories written by a prime murder suspect as uncritically as they did.