Friday, July 13, 2012

ATLAS: a 2.5-sigma stop squark excess

A week ago, I discussed an intriguing excess of events in CMS's search for top partners. Some graphs showed clear excesses that the paper did not discuss, and unusual arguments were employed to dismiss parts of the parameter space.

What about CMS's competitor, ATLAS? Does it see some stop-squark-related excesses in its published preprints, too?

ATLAS recently released several searches for stop squarks. Let me look at the following one, which is two weeks old and based on 4.7/fb of 2011 collisions (much like many of the newest papers by both collaborations, including the tantalizing CMS paper from a week ago):
Search for direct top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in \(\sqrt{s}= 7\TeV\) \(pp\) collisions using \(4.7\,{\rm fb}^{-1}\) of ATLAS data
The CMS paper has used events with missing transverse momentum and many jets. The ATLAS paper discussed in this blog entry requires one isolated lepton in addition to the jets and missing energy.

These events are selected by 5 filters – signal regions A, B, C, D, E. As you march towards higher ASCII codes of the letters, you are increasing the required minimum magnitude of the missing energy and other quantities. The inequalities are imposed on \(E_T^{\rm miss}\), the missing transverse energy; its ratio to the square root of \(H_T\), the scalar sum of the four selected jets' momenta; and the transverse mass \(m_T\), defined in a footnote on page 3.
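In code, each signal region is just three lower cuts applied per event. Here is a minimal sketch – the `passes` helper and the example kinematics are hypothetical, only the thresholds come from the paper:

```python
from math import sqrt

# Thresholds from the paper's signal-region definitions:
# (min E_T^miss [GeV], min E_T^miss/sqrt(H_T) [GeV^(1/2)], min m_T [GeV])
CUTS = {
    "A": (150, 7, 120),
    "B": (150, 9, 120),
    "C": (150, 11, 120),
    "D": (225, 11, 130),
    "E": (275, 11, 140),
}

def passes(region, met, ht, mt):
    """True if an event with the given kinematics lands in the signal region."""
    met_min, ratio_min, mt_min = CUTS[region]
    return met > met_min and met / sqrt(ht) > ratio_min and mt > mt_min

# A hypothetical hard event that survives the tightest region E:
print(passes("E", met=300.0, ht=400.0, mt=150.0))   # True: 300/sqrt(400) = 15 > 11
```

Note that the regions are nested in spirit but not literally: E tightens all three cuts relative to A, which is why its background drops to a couple of events.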

The conditions for the signal regions are as follows:\[

\begin{array}{l|ccccc}
\,& {\rm SR A} & {\rm SR B} & {\rm SR C} & {\rm SR D} & {\rm SR E}\\
\hline
E_T^{\rm miss}\,[\GeV]\gt &
150& 150& 150 & 225 & 275\\
\frac{E_T^{\rm miss}}{\sqrt{H_T}}\,[\sqrt{\GeV}]\gt &
7& 9& 11 & 11 & 11\\
m_T\,[\GeV]\gt &
120& 120& 120 & 130 & 140
\end{array}

\] Now, what are the results? The Standard Model predicts a background: mostly \(t\bar t\) events, sometimes with an extra vector boson, or a single top, or multijet events. The total background expectations and the actual numbers of observed events are summarized in the table below:\[

\begin{array}{l|ccccc}
\,& {\rm SR A} & {\rm SR B} & {\rm SR C} & {\rm SR D} & {\rm SR E}\\
\hline
\text{tot. bg.} &
42\pm 6& 31\pm 4& 13\pm 2 & 6.4\pm 1.4 & 1.8\pm 0.7\\
\text{obs. ev.} &
38& 25& 15 & 8 & 5\\
p\text{-value} &
0.5& 0.5& 0.32 & 0.24 & 0.015
\end{array}

\] By the way, you should adjust the font size in your browser so that you see all columns but there is not much space left after column SR E. (This only holds for the dark green template!)

You see that most of the columns are compatible. But look at the last column, the signal region E. There were \(1.8\pm 0.7\) predicted events but \(5\) events were observed. That's an excess. They quantified it by the \(p\)-value of \(0.015\), which means that if it is noise, such noise (or larger) should only appear in roughly one of every 67 measured numbers. As evidence for a signal, it's worth almost 2.5 sigma.
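These numbers can be sanity-checked with a few lines of Python. This is not the collaboration's method – their quoted \(p\)-value comes from a full likelihood including the background uncertainty – but a naive Poisson tail and a Gaussian conversion get you into the right ballpark:

```python
from math import exp, factorial
from statistics import NormalDist

def poisson_tail(n_obs, b):
    """P(N >= n_obs) for a Poisson-distributed background with mean b."""
    return 1.0 - sum(exp(-b) * b**k / factorial(k) for k in range(n_obs))

def p_to_sigma(p):
    """Translate a p-value into a two-sided Gaussian significance."""
    return NormalDist().inv_cdf(1.0 - p / 2.0)

# Naive Poisson tail for SR E (5 observed, 1.8 expected), ignoring the
# +-0.7 background uncertainty that the paper's full likelihood includes:
p_naive = poisson_tail(5, 1.8)   # roughly 0.04
# The paper's quoted p-value translated to sigmas:
z = p_to_sigma(0.015)            # roughly 2.4, i.e. "almost 2.5 sigma"
```

The two-sided convention is what makes \(p=0.015\) correspond to "almost 2.5 sigma"; a one-sided convention would give a smaller number.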

I find it irresistible to amuse you with the exclusion plot that ATLAS could derive from the signal region E.


The black dashed line is the expected exclusion curve (we expected to exclude the half-disk inside the contour). The actual exclusion derived from the observed data is represented by the red line. Oops, where is the red line? ;-) ATLAS seems a bit calmer than CMS when it answers this simple question in the caption:
Note that in SR E there is no observed exclusion limit due to an excess in data.
Nevertheless, the section "Results" still concludes that "No significant excess of events is found", despite the 2.5-sigma excess in signal region E. Is it legitimate to say that a 2.5-sigma excess in the last channel isn't significant? It depends on your conventions. Time will give us an increasingly clear answer on whether Nature forgave them this cavalier attitude in this particular case.

Now, the ATLAS and CMS searches differ in their details, so we shouldn't combine them. One should still mention that if one got two independent 2.5-sigma excesses, the overall significance would be about 3.5 sigma. And that's just the combined 2011 LHC data in one channel. If one also adds the 7 inverse femtobarns already recorded in 2012, the combination from both detectors could easily reach a 5-sigma discovery if there's something to be discovered...
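The 3.5-sigma figure is just the quadrature rule for independent Gaussian significances; a one-function sketch:

```python
from math import sqrt

def combine_sigmas(*zs):
    """Naively combine independent Gaussian significances in quadrature –
    a rule of thumb, not a substitute for a proper combined likelihood."""
    return sqrt(sum(z * z for z in zs))

print(combine_sigmas(2.5, 2.5))   # about 3.54
```

This rule is only valid when the measurements are truly independent and Gaussian, which is exactly why the different ATLAS and CMS signatures shouldn't be naively merged.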

If the signal is real, the ATLAS+CMS detectors in 2011 plus the recorded part of 2012 have already witnessed about 25 events in signal region E, instead of the expected \(9\pm 1.6\) or so. That would be quite some evidence of new physics. After the 2012 run, the previous sentence's numbers will hold for each detector separately, while the overall LHC score in SR E will/would be 50 events instead of the predicted \(18\pm 2.3\).
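These back-of-envelope counts follow from simple luminosity scaling: event counts grow linearly with integrated luminosity, Poisson-like uncertainties with its square root. A sketch under that assumption (it also ignores the higher 8 TeV cross-section, so it understates the 2012 yield):

```python
from math import sqrt

lumi_2011 = 4.7   # fb^-1 per detector (2011, 7 TeV)
lumi_2012 = 7.0   # fb^-1 per detector recorded so far in 2012
factor = 2 * (lumi_2011 + lumi_2012) / lumi_2011   # two detectors, both runs

expected_bg = 1.8 * factor           # ~9 background events in SR E
observed = 5 * factor                # ~25 events if the per-fb rate persists
bg_uncertainty = 0.7 * sqrt(factor)  # ~1.6, the scaled uncertainty
```

Dropping the leading factor of 2 gives the per-detector figures quoted for the end of the 2012 run.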

If some of those five events are due to stop-squark-related physics, note that the new physics only kicks in for missing energy above \(275\GeV\); see the definition of the signal regions at the top. That could be compatible with the idea that the missing energy is stored in two neutralinos. If the mass, i.e. the minimum energy, of one neutralino is \(140\GeV\), as extracted from the Fermi gamma-ray line and other guesses, the new events should indeed emerge above \(280\GeV\) or so of missing energy.

A Higgs joke by Alex Read, Oslo

Remotely related fun

Looking for the truth covered by some noise is fun once it's clear that there's some "nearby signal" represented by the noisy data. Here's an everyday-life example.

Minutes ago, I was asked to find two addresses in Ontario on Google Maps. I was told that they're in Cambridge, a neighborhood of Toronto, and the addresses were
xy Holland Cr, Canbridge, Ontario, NC3OE3
uv Baric St, Canbridge, Ontario, NIS-3A1
Search engines have no clue how to "automatically" correct all the mistakes – and believe me, the spelling of "Cambridge" is just the beginning. So I had to find out that Cambridge isn't a real neighborhood of Toronto, unlike Boston's Cambridge: it's 50 miles away!

And the right addresses were
xy Holland Cir, Cambridge, Ontario, N3C-0E3
uv Barrie St, Cambridge, Ontario, N1S-3A7
Note that a larger number of corrections had to be made almost simultaneously. I decided it was easier to start with the postal codes. Still, I needed to permute "3" and "C" in the first one, and reinterpret "O" as "0", and add a dash.
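The mechanical part of those corrections can be automated. Canadian postal codes follow the fixed pattern letter-digit-letter digit-letter-digit, so the O/0 and I/1 look-alikes and the adjacent letter/digit transposition are recoverable from the pattern alone; the 1-vs-7 mixup needs human context. A toy sketch, with a hypothetical `normalize` helper of my own:

```python
# Look-alike pairs that the pattern can disambiguate:
TO_DIGIT = {"O": "0", "I": "1"}
TO_LETTER = {v: k for k, v in TO_DIGIT.items()}

def normalize(code):
    """Repair a mangled Canadian postal code using the A1A-1A1 pattern."""
    chars = [c for c in code.upper() if c.isalnum()]
    if len(chars) != 6:
        return None  # not recoverable mechanically
    letter_slot = (True, False, True, False, True, False)
    # Pass 1: look-alike substitutions wherever the character type is wrong.
    for i, c in enumerate(chars):
        if letter_slot[i] and c.isdigit():
            chars[i] = TO_LETTER.get(c, c)
        elif not letter_slot[i] and c.isalpha():
            chars[i] = TO_DIGIT.get(c, c)
    # Pass 2: swap adjacent pairs that each sit in the other's slot.
    for i in range(5):
        a, b = chars[i], chars[i + 1]
        if letter_slot[i] and a.isdigit() and b.isalpha():
            chars[i], chars[i + 1] = b, a
        elif not letter_slot[i] and a.isalpha() and b.isdigit():
            chars[i], chars[i + 1] = b, a
    return "".join(chars[:3]) + "-" + "".join(chars[3:])

print(normalize("NC3OE3"))   # "N3C-0E3": the 3/C swap and O -> 0 fix
print(normalize("NIS-3A1"))  # "N1S-3A1": I -> 1; the final 1-vs-7 is left alone
```

Both repairs match the corrections described above; the residual "1" that should have been "7" is exactly the error no pattern can catch.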

In the other address, "I" had to be reinterpreted as "1" – something I tried deliberately because I know that Czechs can't possibly understand that a simple vertical line of North Americans means "one". For a similar reason, "1" had to be changed to "7" because Czechs can't possibly understand that if the vertical line has an extra broken part on the left side, it can no longer be a "one" under any conditions: it becomes a "seven" regardless of the precise direction. ;-)

With these postal codes fixed and the neighborhood checked, I could also find out that "Cr" should have been "Cir" and "Baric" (probably a brother of Mladič or Miloševič) should have been "Barrie". :-) Time will tell whether similar final steps towards finding the truth also await us in the case of the stop squarks soon.


  1. Ha ha, I just LOVE the last exclusion plot LOL ... :-D !

    Can Phil not do a combination plot for both detectors ... ;-) ?
    Now I'm exploding with impatience to see what happens when all available data are used in the analysis ...

  2. Wow, this looks like a great sign! I mean, sure, it's only 2.5 sigma, but it definitely seems like there is a coherence among these low-statistics excesses: CMS's strange chart, the diphoton rate, and now this. I can't wait to see what 20/fb at 8 TeV holds!

  3. Canadian postcodes are always in the form letter-number-letter number-letter-number to eliminate any ambiguities such as 1 vs I or 0 vs O. Very practical, us Canadians!

  4. Dear Dilaton, it would be great to do combinations except that you can't add apples and oranges. I don't see the same search by CMS - the two papers I mentioned use different signatures, different channels...

  5. Hm whoops, yep, it seems I was too enthusiastic (because I really want something to be there!) and therefore too fast. Good that the weekend is near and I'll have time to look at and consider things more carefully, as is appropriate (feeling somewhat embarrassed about myself ...)

  6. Dear Dilaton, you have no reason to be embarrassed. And maybe you will even find the corresponding papers by the other collaboration studying the same signals! ;-)

  7. Hey Lubos, as long as we're toying with conspiracy theories, let me get your opinion on this. I was trying to figure out why I didn't see the ATLAS paper in my RSS feeds, since I try to monitor all the relevant ones...

    I just noticed that this document is filed under the "Detectors and Experimental Techniques" subject category. Is that as weird as it sounds like it should be?

  8. Nice one, Cliff. ;-) Maybe you're right and it's intelligent design. :-) Just to be sure, below are the standard URLs where I look for the new notes by CMS, ATLAS.

    You may want to bookmark them, look a few times a week, and perhaps ignore the other source of yours...

  9. Yes, I've added this one now. I had just thought myself into a knot on that one. Thanks!

    One other random question: I don't suppose any nice logic exists for why decays with a single lepton should be special?

  10. Very interesting. Hopefully we will have SUSY updates at SUSY 2012 next month in Beijing, if not before.
    The present Implications meeting is also worth watching.

  11. I don't know, Cliff! Maybe there's a reason, maybe there's none. In the latter case, the true reason is that this channel exists because it's different and it's being reported before the more natural channels because the more natural channels have much higher numbers of events and they are afraid of publishing the bold claim they will eventually have to publish. ;-) But again, maybe that's a wrong interpretation.

  12. Stop squarks, WAHOO!! Raise the roof. Stop squarks and the Higgs boson in one year, what is next, a legitimate TOE! What do you think Lumo? How long would it take to get 4 or 5 sigmas of data for the Stop?

  13. Hi Lubos, Thanks for the illuminating post. Perhaps you could clarify a few things for me (assuming the signal is real of course).
    1) Do they know that the original particle is spin 0 from the decay products?
    2) How is it that this channel only sees a stop and not something else? (i.e. is this channel SUSY-skeptic proof? :)
    3) The paper talks about the LSP as a decay product. I thought the stop was the LSP?!?


  14. Dear Lubos,

    It is true that a modest excess in the signal region SR E has been observed; however, it is somewhat unlikely that the excess is due to stops. The problem is that the excess, if interpreted as a signal, points to a region of somewhat heavy stops, between 450 and 500 GeV. This is manifestly shown in Fig. 2, where you see that the exclusion is worse than expected precisely in this high mass range. Unfortunately the range between 450 and 500 GeV is mostly excluded by a different (all-hadronic) search of ATLAS, see
    So I would be very cautious before interpreting this excess as a signal; unfortunately, it is most likely a fluke.

  15. Dear Jan, good questions.

    1) Nope, I think that they can't prove what the spin could be. In particular, the missing energy may carry any spin.

    2) All searches - including the recent Higgs discovery - start with a falsification of the "null hypothesis" without new particles and establishing that it has the right properties is extra work. With this being said, they probably have a reason to say that this channel is extra sensitive to these particles. It can no longer be e.g. the gluino because they're looking near 400 GeV or something like that.

    3) Stops are definitely not LSPs. The LSP, in R-parity-conserving SUSY, is stable, so it behaves as dark matter. Dark matter can't be electrically charged because it wouldn't be dark. ;-) In these scenarios, stops are just the lightest strongly interacting superpartners. The LSP is either a neutralino or a gravitino – or the superpartner of some completely new particles in new sectors. In these discussions of light stops, the assumption is that the LSP is a neutralino, and this paper is pretty consistent with the interpretation that the LSP is a 140 GeV neutralino.

    Yes, neutralino and many other particles may still easily be lighter than 200 GeV. It's not easy to clearly produce them - and/or exclude them - with hadron colliders.

  16. Dear AGK, the link you posted is exactly the paper with single lepton I just discussed. I think that you meant this paper:

    But the exclusions in this paper only apply to neutralinos below 100 GeV (see Figure 6 in the paper linked on the previous line), while the filters in Signal Region E suggest that those are events with a neutralino near 140 GeV, something I expect for other reasons anyway. So I don't think you have offered any valid evidence that it is a fluke.

  17. Thanks Lubos. I can see that the ATLAS and CMS teams need to be very careful about what they say. Ideally they would like to spend many months to be sure before announcing something important like a SUSY partner. So who is the analogue of Peter Higgs for SUSY ... i.e. whom should they invite to the (hypothetical) announcement? Jan

  18. Dear Jan, Peter Higgs wasn't the only theorist-founder invited to the Higgs discovery, either, but you may have overlooked this "detail". ;-)

    SUSY has more founding fathers because it's much broader an industrial structure than the Higgs mechanism. This comparison SUSY-Higgs is very similar to the Quantum-Relativity comparison. Relativity could have been found by one person because it's really a much smaller subject than quantum mechanics.

    In a similar way, SUSY has lots of aspects and complicated history:

    The text above doesn't really discuss the true beginnings in the West, linked to Pierre Ramond and fermions in string theory. However, it does discuss the Russians who found SUSY for mathematical reasons. Then Wess and Zumino were important for SUSY phenomenology, but so, much later, were Dimopoulos and Georgi with the MSSM. Applications of SUSY in theory are uncountable because SUSY is really the pillar of most of modern theoretical high-energy physics in the last 30+ years. One could get carried away - there's lots of supercool stuff, Seiberg+Witten, SUGRA fathers, and lots of BPS stuff etc. One must be restrictive. So in some sense, if a particle of the MSSM were found, one would probably have to look for the MSSM folks who first discussed it. In that way, one would easily realize that these weren't necessarily the most important folks because the true ideas behind SUSY are much deeper than ideas about a single ordinary new particle or several particles.

    It's a good idea to be careful. If the stop squark rumors from late January 2012 were right, and of course I don't really know whether they were, they've been already working on it for half a year.

  19. Your excitement is cool. ;-) We're not there yet.

    As I wrote, if this were really due to new physics with no noise modifying it, 5 sigma would already be included in the combined ATLAS+CMS data for the signal region collected so far, and after the 2012 run, one detector itself would be enough to declare 5 sigma from their combined 2011+2012.

    It's a big If. ;-)

  20. Dear Lubos, G. Kane has advocated quite "universal" predictions from superstring scenarios built in a top-down way, while just requiring as a constraint that they match the Standard Model as a low-energy limit and respect cosmological observations. This was also restated recently, e.g. in arXiv:1204.2795.
    On the other hand, he used to say: "Theoretically the only well motivated values for squark masses are very large, tens of TeV, because they are generically predicted in compactified string/M-theories when the associated moduli satisfy cosmological restrictions."
    It seems quite paradoxical that a light stop could emerge from the LHC in a highly "nonuniversal" scheme.

    My question is: is there any top-down scenario (or class of them) from superstrings able to accommodate such a light stop (~500 GeV) while not being in conflict with cosmology? If there are no such scenarios yet, how could low-energy data from the LHC help one prune the most general compactification schemes that would otherwise lead to wrong physical expectations?

  21. Dear NumCracker, I know, Gordy has presented this scenario on this very blog. I think it's a very important possibility to keep in mind, so its probability of being true is surely not tiny. On the other hand, I don't understand the proof of uniqueness or inevitability of this scenario, if there exists one, which is why I am keeping an open mind.

    Of course there are stringy constructions with a light stop, see e.g.

    I am among those who pay the maximum attention to lessons that string theory could be teaching us. On the other hand, I don't think that the bottom-up arguments are irrelevant and the straightforward enough interpretations of the naturalness I am really able to understand do seem to predict a light stop.

  22. Good links and context, Phil. Today's talks at the present meeting have been published as PDFs; they seem to repeat some of the same recent findings in a rather coherent way, e.g. by unifying the channels with 0, 1, 2 leptons into colorful exclusion charts etc.

  23. From a certain slide in Gordy Kane's talk I concluded (maybe wrongly?) that my new friend with the very nice CKM matrix would not have problems with a light stop ... :-P :-D ;-)
    But Gordy Kane said they have other troubles, so I don't really know ...

  24. Dear Lubos, while reading some literature I get the impression that the "cosmological moduli problem" is something universal (needing fine-tuning to be avoided) that plagues all string compactifications except the ones taken à la G. Kane. Is this right?

    For instance, there is also no explicit model able to deal with these features. As stated in hep-ph/0512081: "A particular compactification yielding the MSSM spectrum in this sub-class of models is yet to be found. In this paper we take the bottom up approach and assume that this has been achieved."

    So I would ask you: is there any explicit, yet universal, moduli stabilization scheme which doesn't conflict with cosmology but allows the light stops you mentioned to appear?

    Sorry for being somewhat annoying, but it is a hot and interesting topic!

  25. Dear Numcracker, cosmological moduli are a potential threat for all theories/compactifications with moduli. If one assumes 4D supergravity, like Gordy et al., one may show that all scalars are multi-10-TeV heavy.

    I don't quite believe yet that it's necessarily true that 4D supergravity is a good approximation for those questions – in other words, various braneworlds may circumvent those things and produce light stops and many other things that are nongeneric in 4D SUGRA. In this sense, I feel that Kane's models are excessively constrained and not too stringy.

    As far as I can say, there are other viable solutions, especially Randall-Thomas weak-scale inflation

    to solve the cosmological moduli problem.

  26. One of the things I was thinking about when I asked this question was the possibility of R-parity violation.

    Just in case you haven't seen it, Raman Sundrum had an interesting presentation at ICHEP. I wish I could hear the actual talk, but the slides are interesting too (to me at least, quite possibly less so for yourself):