The confidence level exactly matches the ATLAS contest top scores
MIT released a cute press release 12 hours ago (it promotes a new paper in Nature Physics):
Fresh evidence suggests particle discovered in 2012 is the Higgs boson
Findings confirm that a particle decays to fermions, as predicted by the Standard Model.
Markus Klute of MIT and his collaborators looked for traces of a Higgs boson decaying to a pair of tau leptons (Evidence for the direct decay of the \(125\GeV\) Higgs boson to fermions, Nature Physics):\[
h \to \tau^+\tau^-
\] See also a related CERN press release. Note that in July 2012, the Higgs boson was originally discovered by looking at processes in which it was produced and then decayed either to two photons or to two Z-bosons:\[
h \to \gamma\gamma, \quad h\to Z^0 Z^0
\] The processes with a pair of fermions in the final state are somewhat less frequent. Note that the fermions get their masses from the Higgs mechanism, which means that the heavier fermions have stronger interactions with the Higgs. That's why the third-generation fermions are reasonably easy to see.
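The proportionality between mass and coupling is easy to quantify: at tree level, the Standard Model Yukawa coupling is \(y_f=\sqrt{2}\,m_f/v\) with the vacuum expectation value \(v\approx 246\GeV\). A minimal sketch (the masses below are approximate PDG values):

```python
from math import sqrt

# Higgs vacuum expectation value in GeV (approximate)
V_HIGGS = 246.22

def yukawa(mass_gev):
    """Tree-level Standard Model Yukawa coupling: y_f = sqrt(2) * m_f / v."""
    return sqrt(2) * mass_gev / V_HIGGS

# Third-generation fermions (approximate PDG masses in GeV)
for name, m in [("tau", 1.777), ("bottom", 4.18), ("top", 172.7)]:
    print(f"{name:>6}: y = {yukawa(m):.4f}")
```

The top quark's coupling comes out close to one, while the tau's is about a percent — still large enough for the tau channel to be within the LHC's reach.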
Why did I say that the press release is cute? It's because of the numbers that the team associates with their confirmation of the Standard Model. The Higgs mass reconstructed from the tau decays is \(125\GeV\). But what about the significance level?
Let me offer you the following quote from the press release:
They were able to confirm the presence of decay to tau leptons with a confidence level of 3.8 standard deviations — a one in 10,000 chance that the signal they saw would have appeared if there were no Higgs particles.

LOL. That's cute for those who follow ATLAS' machine learning challenge. That contest focuses on a realistic set of computer-generated events describing exactly this potential decay of the Higgs boson to a tau pair.
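The translation between "standard deviations" and the quoted odds is just the one-sided Gaussian tail probability. A quick sanity-check sketch (in fact, 3.8 sigma corresponds to odds closer to one in 14,000, so "one in 10,000" rounds the figure generously):

```python
from math import erfc, sqrt

def one_sided_p(n_sigma):
    """One-sided Gaussian tail probability: P(X > n_sigma) for X ~ N(0, 1)."""
    return 0.5 * erfc(n_sigma / sqrt(2))

p = one_sided_p(3.8)
print(f"p = {p:.2e}  (about 1 in {1/p:,.0f})")
```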
And what about the significance level they're able to get? The leaderboard shows that the current leader, a hybrid of the Northern Lights #5 and X Haze marijuana seeds, has a score of 3.793, pretty much touching the very same confidence level of 3.8.
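For the record, the leaderboard score is not literally a number of standard deviations but the challenge's approximate median significance (AMS). A minimal sketch of the metric, assuming the formula published with the challenge, including its regularization term \(b_{\rm reg}=10\) (the event counts in the example below are made up):

```python
from math import log, sqrt

def ams(s, b, b_reg=10.0):
    """Approximate median significance used to score HiggsML submissions.
    s, b: sums of signal/background weights in the region selected as signal;
    b_reg: regularization constant (the challenge used 10)."""
    return sqrt(2.0 * ((s + b + b_reg) * log(1.0 + s / (b + b_reg)) - s))

# Hypothetical counts just to illustrate the formula; for s << b,
# the AMS reduces to the familiar s / sqrt(b) estimate.
print(ams(s=100.0, b=1000.0))
```

So a leaderboard score of 3.793 is a statement about the median expected significance of a selection, which is why it can be compared with the experiment's quoted 3.8 sigma at all.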
It's the same process and the same confidence level. Is it a coincidence that the significance levels agree so precisely? It's plausible that it's no coincidence at all and that Klute et al. were working with a dataset that is pretty much indistinguishable from the dataset given to the ATLAS machine learning contest, including the normalization of the weights that determines the strength of the signal (the confidence level).
Maybe, it's even correct to interpret the agreement by saying that Klute et al. is as good in extracting the signal-rich regions of the parameter space as the leading contestants of the challenge! ;-)
There are some subtleties that real-world experimental particle physicists have to deal with and that were suppressed in the contest – some (rare) negative weights, some extra systematic errors that the contest pretended not to exist, and so on. But otherwise what the contestants were given to work with is almost exactly like the real-world data. The contest is highly realistic, and it was a good idea to make it as similar to the real world of the LHC experimenters as possible.
Well, I made the agreement look better than it is. Klute et al. used the CMS data, not ATLAS data, and the 3.8 significance is actually a combined significance from both the tau and bottom decay channels:\[
h \to b \bar b
\] So their separate significance for the tau channel is lower and they couldn't compete with the leaders of the Higgs ATLAS challenge such as your humble correspondent if the normalization were the same! ;-)