Monday, May 30, 2016

The change of the significance level by 0.7 sigma doesn't justify any "big words"

Rajesh: the next, tenth season of TBBT may be the last one

Stops: CMS sees an excess over 2 sigma in their search for top squarks, see page 3, Figure 2, lower left. \(600\GeV\) stops may be "indicated" by that picture.

Adam Falkowski often acts as a reasonable (and educated) man. But I think that his title
CMS: Higgs to mu tau is going away
is a typical example of the insane exaggeration or distortion that I would expect from unethical journalistic hyenas, not from a particle physicist.

What's going on? Both ATLAS and CMS have shown some weak hints of a possible "flavor violation", namely the possible decay of the \(125\GeV\) Higgs boson to a mixed lepton pair\[
h \to \mu^\pm \tau^\mp
\] Note that the muon and the tau are "similar" but so far, we've always created a muon-antimuon or tau-antitau pair. The individual lepton numbers \(L_e, L_\mu, L_\tau\) of the generations have been conserved. And the Standard Model makes such a situation natural (although one may predict some really tiny flavor-violating processes even within the Standard Model).

Because the muon of one sign is combined with the tau with another sign – with a particle from a different generation of leptons – the process above, if possible, would be one of the so-called (and so far unseen) flavor-violating processes.

ATLAS and CMS have seen excesses in the 2012 run. They have large error margins so nothing is conclusive at all but the branching ratio (the percentage of Higgses that decay according to the template) for \(h\to \mu\tau\) was measured "somewhat positive" by ATLAS and CMS.

In 2012, ATLAS and CMS had \[
\begin{aligned}
{\rm ATLAS:} \quad & B(h\to \tau\mu) = 0.53\%\pm 0.51\%\\
{\rm CMS:} \quad & B(h\to \tau\mu) = 0.84\%\pm 0.37\%
\end{aligned}
\] which are 1-sigma and 2.3-sigma excesses, respectively, combining to 2.5 sigma or so (that's Falkowski's figure). Now, in the 2015 dataset which is 6 times smaller or so, CMS found\[

{\rm CMS:}\quad B(h\to \tau\mu) = -0.76\%\pm 0.81\%

\] The mean value of the branching ratio is reported to be negative. It's a sign of a deficit except that we know that the branching ratio cannot be negative. So a treatment that acknowledges the asymmetry of the distribution and the error margins would probably be highly appropriate here.

But OK, let us ignore this complaint and just combine the excesses and deficits from the assumed Gaussians blindly.

If you combine the 2012 and 2015 data, you get something like\[

{\rm combo:}\quad B(h\to \tau\mu) = +0.55\%\pm 0.30\%

\] or so. I calculated it using my "brain analog computer". When switching to the combo, the mean value has dropped relative to the 2012 figure and the significance of its being nonzero is some 1.8 sigma. That's less than 2.5 sigma but it's still an excess, a well over 90% confidence level that something is there.
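The blind Gaussian combination can be checked with a few lines of Python. This is just a naive inverse-variance weighting of the central values and errors quoted above, not the collaborations' full likelihood treatment, so it only roughly reproduces the figures in the text (the full combo comes out slightly below 2 sigma here):

```python
def combine(measurements):
    """Naive inverse-variance (Gaussian) combination.

    Each measurement is a (central value, 1-sigma error) pair, in percent.
    Returns (combined mean, combined error, significance in sigmas).
    """
    weights = [1.0 / err ** 2 for _, err in measurements]
    total = sum(weights)
    mean = sum(w * val for (val, _), w in zip(measurements, weights)) / total
    error = total ** -0.5
    return mean, error, mean / error

# Central values and errors quoted in this post, in percent.
atlas_2012 = (0.53, 0.51)
cms_2012 = (0.84, 0.37)
cms_2015 = (-0.76, 0.81)

print(combine([atlas_2012, cms_2012]))            # 2012 alone: ~2.4 sigma
print(combine([atlas_2012, cms_2012, cms_2015]))  # combo: ~0.55% +- 0.28%
```

The 2012-only combination comes out near 2.4 sigma and the full combination near \(0.55\%\pm 0.28\%\), consistent with the rough estimates above.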

When the significance level decreases from 2.5 to 1.8 sigma, it's a decrease but it's in no way a decisive decrease. 1.8 sigma will be found attractive by a smaller number of physicists than 2.5 sigma but not a "dramatically smaller" number (perhaps 3 times smaller?). In particular, the title
CMS: Higgs to mu tau is going away
is just rubbish. CMS has recorded a deficit but in a smaller amount of data. In this small dataset, no confirmation or refutation of the previous excess could have been expected, and it wasn't given. If this small amount of data were enough for a (5-sigma) discovery of the flavor-violating processes, then the 2015 data would pretty much sharply contradict the 2012 data. If the 2015 data were enough to "safely rule out" the flavor-violating decays around 0.5%-1%, then they would need a big deficit that would disagree with the 2012 data (and the Standard Model), too.

It just isn't possible to decide about the tantalizing signal quickly, and it couldn't have been decided in this way.

What's really untrue about the title is the tense, "is going away". This title pretty much explicitly says that (according to Adam Falkowski) there has been an established downward trend that will continue. But this is just a plain lie. In the short run, the confidence level behaves as a random walk.

So whether the latest changes in a very small dataset (or amount of time) have increased or decreased the confidence level is a matter of chance. Both possibilities are equally likely for a short enough period of time – and this was clearly an example of a short enough period of time. The decrease in this short period of time does not imply the decrease in the future because Nature's random generator in individual collisions (or their small enough sets) acts independently.
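The random-walk character of a running confidence level is easy to simulate. In this toy sketch (all parameters are made up for illustration), the true effect is exactly zero, yet the accumulated significance wanders up and down and short stretches routinely show "trends" in either direction:

```python
import random

random.seed(2016)

# Toy model: each batch of collisions yields an independent Gaussian
# measurement of a quantity whose true value is exactly zero.
true_value, batch_error = 0.0, 1.0

total = 0.0
for n in range(1, 21):
    total += random.gauss(true_value, batch_error)
    running_mean = total / n
    # The error of the running mean shrinks like 1/sqrt(n).
    running_error = batch_error / n ** 0.5
    significance = running_mean / running_error
    print(f"after {n:2d} batches: {significance:+.2f} sigma")
```

Rerunning with a different seed gives a completely different sequence of ups and downs; a drop between two snapshots tells you nothing about the next one.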

I am not saying that the flavor-violating decay is there. I really believe it's unlikely, perhaps it has a 10% probability at this point. But even if the claim is untrue – or reasonably believed by an informed physicist to be untrue – I can still see that somebody isn't describing the evidence honestly.

The point is that the 2.5-sigma hint is a weak one, and Falkowski has all the good reasons to be skeptical because flavor violation is a somewhat extraordinary claim (although not as extraordinary as some people like to suggest). However, the decrease from 2.5 sigma to 1.8 sigma is a change by a mere 0.7 sigma, which is itself (much) smaller than 2.5 sigma.

If someone is laughing at 2.5 sigma but a change by 0.7 sigma is enough for him to say that the "debate is over", he is simply not acting fairly.

Needless to say, the misinterpretation of some random wiggles in a random walk as a "long-term trend" or a "law of physics" isn't unique to particle physics. An identical discussion has repeatedly – and much more characteristically – taken place over the weather data. In a 1-year or 10-year or 40-year period, a change of the global mean temperature was observed and some people misinterpreted it as a "long-term trend that has to continue".

For a long enough period of time, there could be some reason for such an interpretation. But if you pick a 6 times shorter period and start to pretend that the trend in this 6 times shorter period may be trusted as much as the trend from the longer dataset, you are simply a demagogue. The shorter the periods of time (or the smaller the collections of collisions) you consider, the more likely it is for the excesses or deficits to be due to chance.

Falkowski's "is going away" spin is virtually identical to the stupid pissing contests of low-brow climate skeptics and low-brow climate alarmists who see some weather in a recent week and use it as a prediction of the weather in 2100.

The CMS 2015 data are formally a 1-sigma deficit relative to the Standard Model – which is almost certainly due to chance. If you believe that the branching ratio is some 0.8% as (optimistically) indicated in 2012, the deficit shown in the 2015 data is some 1.8 sigma relative to the flavor-violating extension of the Standard Model. That's larger but not "dramatically different from 1 sigma"; it's a deficit that is there, anyway, and it's not equivalent to the 1.8-sigma excess in the bigger dataset – simply because a larger dataset measures the excesses or deficits more accurately than a smaller one.
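The two "pulls" of the 2015 measurement are simple to recompute, using the quoted \(-0.76\%\pm 0.81\%\) and an assumed 0.8% branching ratio (plugging in the 2012 CMS central value of 0.84% instead would give a slightly larger number):

```python
# Pull of the 2015 CMS measurement against two hypotheses, in sigmas.
cms_2015_mean, cms_2015_err = -0.76, 0.81  # percent

# Standard Model hypothesis: B(h -> tau mu) = 0
pull_sm = (0.0 - cms_2015_mean) / cms_2015_err
# Flavor-violating hypothesis suggested in 2012: B around 0.8%
pull_flv = (0.8 - cms_2015_mean) / cms_2015_err

print(f"deficit vs Standard Model:      {pull_sm:.1f} sigma")
print(f"deficit vs 0.8% FLV hypothesis: {pull_flv:.1f} sigma")
```

This gives roughly 0.9 and 1.9 sigma, in line with the rough figures in the text.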

The smaller datasets and the shorter periods are unavoidably more affected by the random numbers than their larger siblings.

To 2-sigma exclude the nonzero branching ratios indicated by the mild 2-sigma excesses in 2012, we will probably need an amount of collisions that is at least as large as the 2012 dataset. To "decide" much earlier than that is simply statistically indefensible. And you need to at least 2-sigma exclude the reasonable theory with a nonzero FLV branching ratio to claim to have evidence that "the signal is going away". Falkowski doesn't have it. He has basically used 0.7-sigma evidence to "settle" a question – and that's much worse than using some 2.5-sigma evidence to do the same thing.
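The claim that a dataset comparable to the 2012 one is needed can be sketched with the usual \(1/\sqrt{L}\) scaling of statistical errors – a back-of-the-envelope estimate that ignores systematics and changes in the collision energy:

```python
# Back-of-the-envelope: how much more data would CMS need before a null
# result could 2-sigma exclude a ~0.8% branching ratio?
err_2015 = 0.81          # 1-sigma error of the 2015 measurement, in percent
target_branching = 0.8   # branching ratio suggested by the 2012 excess

# For a 2-sigma exclusion with a central value at zero, the error must
# shrink to half of the tested branching ratio.
required_err = target_branching / 2.0

# Statistical errors scale like 1/sqrt(luminosity).
luminosity_factor = (err_2015 / required_err) ** 2
print(f"needs about {luminosity_factor:.1f}x the 2015 luminosity")
```

That's about 4 times the 2015 luminosity, i.e. a dataset roughly comparable to the 2012 one (which, as noted above, was about 6 times larger than the 2015 one).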
