The University of Michigan is organizing an interesting conference in the Kahn Auditorium of its biomedical sciences building, and at an exciting time:
On Tuesday, the participants will hear the two key talks about the first LHC data as analyzed by the ATLAS and CMS detectors. We will see whether the folks have been very fast and whether Nature decided to behave as an exhibitionist during the first 47 inverse picobarns collected by each detector.
However, Tom Wright of Fermilab - whose older talk was previously discussed on this blog in the context of the bottom-quark-related Higgs rumors - is going to give an interesting talk tomorrow, at 11:20 local time:
What is it?
Well, the slide says the following:
MSSM Higgs search results: three graphs are attached. D0, with 4.3/fb of data, shows the b-tau-tau curve within the 2-sigma band. The excess is just 1.5 sigma or so - a possible (but not necessary) problem discussed in the fast comments. Another problem is that the interpretation below requires tan(beta) above 70 or so - not too attractive - as a well-informed commenter mentions in the comments.
* no significant excess observed in the b-tau-tau channel
* each experiment sees a 2+ sigma excess in the bbb channel at 120-140 GeV
* will be interesting to see this with full Run II data sample
However, let's be a bit more excited for a while: the two bottom graphs show something else: they are about the "b phi goes to bbb" channel. CDF with 2.2/fb shows a 2-sigma excess or higher between 120 GeV and 160 GeV; D0 with a larger 5.2/fb shows a more than 2-sigma excess between 110 and 130 GeV.
Correct me if I am wrong, but two independent 2-sigma excesses may be combined into a 2 x sqrt(2) i.e. roughly 2.8-sigma excess, which rounds to 3 sigma. So I think that a naive combination of the data would produce an approximately 3-sigma excess, at least for an MSSM Higgs boson mass roughly between 120 GeV and 130 GeV, a very sensible interval.
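The naive combination used here is Stouffer's method for independent Gaussian significances: add the z-scores and divide by the square root of their number. A quick sketch (standard statistics, nothing experiment-specific):

```python
from math import sqrt

def combine_z_scores(z_scores):
    """Stouffer's method: combine independent z-scores with equal weights."""
    return sum(z_scores) / sqrt(len(z_scores))

# two independent 2-sigma excesses combine to 2 * sqrt(2) sigma
print(round(combine_z_scores([2.0, 2.0]), 2))  # 2.83
```

This is only valid if the two excesses are statistically independent and both are quoted as Gaussian significances; a real combination would also have to line up the mass hypotheses of the two experiments.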
The graphs seem somewhat confusing because the CDF x-axis is called "m_H", suggesting a CP-even particle, while the D0 x-axis is called "m_A", suggesting a CP-odd particle. But there probably exists a simple explanation of this difference between the graphs; I suppose that both CDF and D0 actually allow the intermediate Higgs to be of any type.
Note that Tom Wright (CDF) is the speaker who completely omitted any reference to the bbb MSSM Higgs channel in the previous talk we carefully followed; only Marco Verzocchi (D0) had discussed the hints. Now we see why: what he has to say pretty much confirms the rumors that existed and gives them a very specific flavor. As far as I know, only now are we learning that CDF sees its more than 2-sigma excess as well, despite having analyzed only 2.2/fb.
Confidence level: some discussion
Note that 3 standard deviations translate to a 99.7% confidence level that something new is going on. Now, my rule of thumb is that quantities I have never considered important and have never carefully followed often exhibit 3-sigma deviations by chance. So I am not interested in 3-sigma bumps in graphs of quantities that I see drawn for the first time. Most of them will go away because they were chosen - cherry-picked - from a large bath of possible sources of bumps. It is statistically guaranteed that someone will find some bumps somewhere if he tries many combinations and disciplines.
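The translation between "n sigma" and a confidence level is just the cumulative normal distribution; for the two-sided convention used above, it is a one-liner:

```python
from math import erf, sqrt

def two_sided_cl(n_sigma):
    """Probability mass within +- n_sigma of the mean of a normal distribution."""
    return erf(n_sigma / sqrt(2.0))

print(round(two_sided_cl(3.0), 4))  # 0.9973
```

The same function gives the familiar 68.3% for 1 sigma and 95.4% for 2 sigma.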
On the other hand, there are a few - much less than 300 - quantities that I consider "canonical litmus tests". The three bottom quarks are among them. Let me just clarify what the quantity that sees an excess really is.
The hypothetical process
Two protons collide and sometimes may create an MSSM Higgs boson together with a bottom quark. The bottom quark then continues as a "b-tagged jet", a stream of many hadrons going in the same direction; b-tagging is a method for the experimenters to decide that the jet almost certainly included a bottom quark.
Meanwhile, the Higgs boson may decay into two bottom quarks, producing two extra b-tagged jets. So at the very end, you have three b-tagged jets.
Now, what do they do with the data to draw the graphs? They're essentially trying to show that the intermediate Higgs boson didn't exist at all in the events that ended with the three b-tagged jets. Of course, experiments can never show such a conclusion "strictly": the experimenters can only deduce "upper bounds" from their data.
On which quantity do they want to derive upper bounds from the data? Well, they want to derive an upper bound on the cross section - a measure of the probability of outcomes in collisions - for two protons colliding and producing 3 bottom quarks, with the additional assumption that 2 of them came from the decay of an intermediate MSSM Higgs boson. So they really want to derive an upper bound on the following quantity:
σ(pp → H + b-jet) x BR(H → bb*). The sigma factor is the cross section for two protons creating a Higgs and a b-tagged jet; the BR factor is the "branching ratio", i.e. the percentage of the Higgs bosons that decay into a bottom quark-antiquark pair (b* is a local TRF symbol for the antiquark). By multiplying the factors, you are computing the probability of the whole process, including the intermediate Higgs and its decay.
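As a purely arithmetic illustration (all numbers below are invented, not the Tevatron's), the expected number of such signal events is the integrated luminosity times this product, times the detector's efficiency for actually reconstructing the three b-tagged jets:

```python
def expected_signal_events(sigma_pb, branching_ratio, lumi_inv_pb, efficiency):
    """N = L * sigma * BR * eff -- expected count of fully reconstructed signal events."""
    return lumi_inv_pb * sigma_pb * branching_ratio * efficiency

# hypothetical numbers, purely to show the bookkeeping:
# a 10 pb cross section, 90% branching ratio to bb, 5.2/fb = 5200/pb
# of data, and a 1% efficiency for triple b-tagging
n = expected_signal_events(sigma_pb=10.0, branching_ratio=0.9,
                           lumi_inv_pb=5200.0, efficiency=0.01)
print(round(n))  # 468
```

The upper bounds the experiments quote are bounds on the sigma x BR product; the luminosity and efficiency are what convert those bounds into statements about how many events could have been hiding in the data.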
Calculating the upper bound on the probability
How large an upper bound on the product above can they derive? Well, it depends on the mass of the intermediate Higgs boson. For example, if it is assumed to be very heavy, one can show that almost no bbb events actually contained the superheavy intermediate Higgs, because the b-tagged jets would be predicted to have a much higher average energy than what has been seen. So the upper bounds will be very small cross sections for very high values of the Higgs boson mass, corresponding to the insight that it is "really almost excluded" that the events were due to the (superheavy) Higgs.
However, there are two ways how you may derive the upper bounds: by looking at the actual data, and by the climate science method. The climate science method means to deny all the observations and to run a computer model many times instead. Of course, the climate science readers are thrilled by the question whether the two methods actually agree. ;-)
If you run a computer model - one that denies not only the observed data but also the theoretical existence of any natural MSSM Higgs particle :-) - then you can calculate the upper bound on the product above for any value of the hypothetical Higgs mass. For any value of the mass, the upper bound will depend on the particular computer run, and over many runs the bounds trace out a distribution that is approximately normal.
Instead of running the computer model that denies any natural MSSM Higgs effect, you may also deduce the upper bounds from the actual observed data. If the assumptions of the computer models - such as the non-existence of an MSSM Higgs - are correct, the two methods should agree, within the margin of error. It's unlikely for them to differ by "many standard deviations".
The 3-sigma statement above is nothing else than the assertion that the upper bound derived from the actual data significantly differs - for some values of the Higgs masses - from the upper bounds that you should be able to derive from the data if the reality followed your computer model.
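Here is a toy version of that comparison, under heavily simplified assumptions (a pure counting experiment with a perfectly known, invented background of 10 events; a crude scan for the limit rather than the CLs machinery the real experiments use): generate background-only pseudo-experiments, compute the 95% CL upper limit on the signal in each to get the "expected" band, and compare it to the limit derived from a hypothetical observed count that contains an excess.

```python
import random
from math import exp

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson-distributed count with mean mu."""
    term, total = exp(-mu), exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def poisson_sample(mu, rng):
    """Draw one Poisson(mu) random count (Knuth's multiplication method)."""
    threshold = exp(-mu)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def upper_limit(n_obs, background, cl=0.95, step=0.01):
    """Smallest signal s with P(N <= n_obs | background + s) <= 1 - cl:
    a classical counting-experiment upper limit, found by a brute-force scan."""
    s = 0.0
    while poisson_cdf(n_obs, background + s) > 1.0 - cl:
        s += step
    return s

# Background-only pseudo-experiments give the band of "expected" limits.
rng = random.Random(42)
b = 10.0  # hypothetical, perfectly known background
toys = sorted(upper_limit(poisson_sample(b, rng), b) for _ in range(200))
median_expected = toys[len(toys) // 2]

# A hypothetical observation of 16 events over the background of 10:
# the observed limit comes out weaker (higher) than the median expected
# one -- the situation the plots display as a curve above the band.
observed = upper_limit(16, b)
print(observed > median_expected)  # True
```

The sorted list of toy limits is exactly what the 1-sigma and 2-sigma "Brazil bands" summarize; an observed curve poking above the band is the excess being discussed.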
In particular, we could only derive a higher - less strict - upper bound on the cross section multiplied by the branching ratio from the observed data than from the computer models, because we did observe an excess of events that actually looked like they came from an MSSM Higgs. Unlike the climate models, where everything is shaky, the only truly controversial assumption that these models make about particle physics is that there are no other, i.e. new, particles that could influence these processes - in particular, no intermediate MSSM Higgs boson.
So if there is a large deviation of the computer models from the observed data, or vice versa, it is evidence that the most shaky assumption of the computer model is wrong - and that the MSSM Higgs boson actually does exist. The combined measurement indicates, at roughly the 99.7% confidence level, that the MSSM Higgs boson exists and that its mass could be in the 120-130 GeV range. ;-)
Of course, there can be many problems and we want much more solid proof. So I am not yet asking Jester of Resonaances to send me the ten thousand dollars that I will win, according to the terms of our bet, one year after SUSY is discovered if the discovery claim survives. (He is sure that SUSY doesn't exist and he is also willing to throw up whenever he sees a sign of SUSY: 2011 may be remembered as the year of Jester's constant vomiting.) But the moment could be closer than he thinks. ;-)
In particular, his stomach could already be in trouble on Tuesday if the LHC happened to have found even clearer signs of what the Tevatron's detectors are just silently suggesting. :-) Well, sleep well, Jester, ATLAS on Tuesday will only show one tri-jet event with 100 GeV missing transverse energy and one single-muon event with 118 GeV missing transverse energy which is still edible for SUSY deniers. :-)
But don't sleep too well because the data they have evaluated only come from 0.35/pb i.e. less than 1% of the current ATLAS data. Moreover, there's another detector, the CMS, that could have done more work than ATLAS and whose SUSY talk is not yet online. ;-)
And that's the memo.
P.S. Once the SUSY CMS talk appeared online, it became clear that it only discusses methods and takes 0.011/pb into account which is why I didn't find it important enough to copy-and-paste the URL here.