**Yes, it is as likely as their arriving 15 years after one another.**

Matt Strassler has joined Jester and me in noticing the intriguing excess of the \(3.5\keV\) X-ray line in galaxy clusters that resembles a dark matter particle of some sort (a sterile neutrino, axino, axion, or modulus, we may ultimately learn).

But here I want to focus on a cute sentence that Matt wrote in parentheses. After he noticed another, possibly decreasingly convincing, gamma-ray line near \(130\GeV\) in the Fermi data, he wrote:

> You can invent types of dark matter that would give you both signals – but it would take a small miracle for two signals of the same dark matter particles to show up in the same year.

It's an amusing argument against the possibility that "both signals are real dark matter" – but is the argument valid? I sometimes make similar arguments (or am at least tempted to do so), too. So the logic may deserve a few words.

First, let us neglect all the problems of "two dark-matter-particle models" that have nothing to do with Matt's argument about "the same year". In other words, this paragraph and the following one are the last comments on this issue. People would normally assume that dark matter was composed of *one* new particle species because it seemed more economical. Also, to make the invisible particle stable, we need the theory to have some "special feature". For example, the conserved R-parity makes the lightest supersymmetric particle, or "LSP" for short, stable. For two stable particle species, you need two features similar to the conserved R-parity.

It is nevertheless possible to make a dark matter particle "very long-lived" by suppressing its interactions. So one may have dark matter composed of axions, very weakly coupled scalar particles. In fact, there may be many types of axions, a whole "axiverse", and some physicists argue that this is indeed a natural expectation resulting from string theory phenomenology. One may also have a nearly stable NLSP, in a slightly R-parity-violating theory, which decays to the LSP, the gravitino. The NLSP and LSP may co-exist. They may also co-exist with the axion or axions or sterile neutrinos and other candidates. Several dark matter signals are also naturally incorporated into the recently proposed eXciting dark-matter models.

But those are not the things I want to discuss here. I want to discuss the usage of the *calendar* to estimate whether two different hints of dark matter could be simultaneously right. You may be surprised that a feature of the discovery unrelated to the expertise of the physicists – the calendar – could affect the probability that a statement is right. And your complaint would be justified: it is not really possible to use such arguments.

Before we get to the more analytic arguments for why such reasoning can't work, we should understand why we are tempted to think that it might work. Why does Matt believe that it is a valid argument? Well, I don't see into the deepest corners of his skull but I think that I can still have a peek at his internal reasoning. The logic is that "our theory and logic" predicts that it is rather unlikely for two similar major discoveries to occur within a year, and good theories shouldn't predict unlikely things. So something must be wrong with the theory!

Dark matter as a concept has only been around since the important 1933 insights by Fritz Zwicky. But let's be more generous and assume that there are two dark-matter particles and they may have been discovered in any year between 1914, the beginning of the First World War, and 2014, the beginning of the Third World War or whatever the ongoing year is supposed to become. ;-)

The lighter dark-matter particle species (or its lower-energy imprint) is discovered in the year \(Y_1\). The heavier dark-matter particle is independent, so it may be discovered in any year \(Y_2\) between 1914 and 2014 (or a bit later). At any rate, the point is that the probability that \(Y_1=Y_2\) is comparable to 1/100. It is rather small. This small probability (a \(p\)-value) can be converted to something like a 2.5-sigma "bump". It is not a huge anomaly but it exceeds 2.5 sigma, so people may start to notice.
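The arithmetic in this paragraph can be checked in a few lines of Python. This is only a toy sketch, assuming a 100-year window, two independent uniformly distributed discovery years, and the usual two-sided convention for converting a \(p\)-value to a Gaussian significance:

```python
from statistics import NormalDist
from fractions import Fraction

# Two independent discovery years, each uniform over a 100-year window
# (1914-2014): the chance that they land in the same year is exactly 1/T.
T = 100
p_same_year = Fraction(1, T)   # 1/100

def p_to_sigma(p: float) -> float:
    """Convert a two-sided p-value into the equivalent Gaussian significance."""
    return NormalDist().inv_cdf(1 - p / 2)

print(float(p_same_year))                        # 0.01
print(round(p_to_sigma(float(p_same_year)), 2))  # 2.58 -- the "2.5-sigma bump"
```

With the one-sided convention the same \(p\)-value would correspond to about 2.33 sigma; either way, it is in the ballpark of the "2.5-sigma" figure quoted in the text.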

I think that Matt Strassler has a knee-jerk instinct to look for anomalies and he's good at it. He is a natural-born bump hunter. And we are dealing with an observation – involving the proximity of two discoveries – which is unlikely. So it is a bump and something must be wrong with the theory. Matt concludes that at least one of the dark-matter candidates must be wrong because of the calendar considerations.

Is that reasoning OK? It may be marginally OK to say that "something is wrong with the theory" even though 1/100 is not a terribly unlikely probability and 2.5 sigma is not a terribly huge bump. However, what's surely wrong is Matt's "derived claim" that the bump implies that one of the candidates has to be wrong. In fact, nothing of the sort follows. If there's something wrong with the reasoning or the theory, it's something other than the assumption that two discoveries may occur within a year. The fallacy is exactly the same one that we sometimes experience when we think about dice, for example.

Throw three dice and get 6-6-6 once; the probability of that is \(1/6^3=1/216\). Now you throw the dice again. Can you get 6-6-6 again? That would be really extremely shocking because you would get a 6 "six times in a row". So some people are inclined to think that the probability that you get 6-6-6 again, assuming that you just got 6-6-6, must be smaller than \(1/216\). However, the actual probability is still \(1/216\). The dice don't care about their previous adventures at all! I hope that this point doesn't need to be explained but you may always voice your doubts.
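Because the two throws are independent, the conditional probability can even be computed exactly by brute-force enumeration of all equally likely outcomes. A small sketch in Python (three dice per throw, two consecutive throws):

```python
from itertools import product
from fractions import Fraction

# All 6^6 equally likely outcomes of two consecutive throws of three dice:
# the first three entries are throw #1, the last three are throw #2.
outcomes = list(product(range(1, 7), repeat=6))

first_666 = [o for o in outcomes if o[:3] == (6, 6, 6)]
both_666 = [o for o in first_666 if o[3:] == (6, 6, 6)]

# P(second throw is 6-6-6 | first throw was 6-6-6)
p_conditional = Fraction(len(both_666), len(first_666))
print(p_conditional)   # 1/216 -- exactly the unconditional probability
```

The enumeration makes the point with no statistics at all: restricting attention to the 216 histories that start with 6-6-6 changes nothing about the second throw.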

Matt's logic is exactly the same. If one discovers one dark-matter particle, it becomes *really unlikely* that another dark-matter particle may be discovered in the same year. However, a better model is that the probability that a dark-matter candidate (at a certain level of "quality") turns out to be legit is always the same probability \(p\). And for the second candidate, it is still \(p\). Well, more precisely, it drops to \(p/2\) or so (an approximation valid for \(p\ll 1\)) because there's only one candidate left to be found now – but it's not "parametrically" smaller than \(p\).
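The "better model" above is just two independent Bernoulli trials. A quick Monte Carlo sketch (the value \(p=0.3\) is an arbitrary illustrative choice, not anything extracted from the data):

```python
import random

random.seed(2)
p = 0.3            # hypothetical chance that a candidate "makes it"
TRIALS = 100_000

first = [random.random() < p for _ in range(TRIALS)]
second = [random.random() < p for _ in range(TRIALS)]

# Frequency of the second candidate succeeding, restricted to the runs
# where the first candidate already succeeded:
cond = sum(f and s for f, s in zip(first, second)) / sum(first)
print(round(cond, 2))   # close to p = 0.3: the first success doesn't suppress it
```

Conditioning on the first candidate being real leaves the second candidate's success rate at \(p\), just as conditioning on the first 6-6-6 leaves the dice at 1/216.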

So the discovery of one dark-matter particle doesn't make it impossible that you discover another one within a year. The discoveries don't "repel" each other, much like the 6's you see on the dice don't repel each other! Matt would call the discovery of two dark-matter particles in the same year a "miracle" and I reminded him of something:

> “But it would take a small miracle for two signals of the same dark matter particles to show up in the same year.”
>
> That’s why the year 1905 is known as “annus mirabilis”, right? The special theory of relativity, the right explanation of the Brownian motion, and the right theory of the photoelectric effect occurred not only in the same year but in the same 6-week window and they were contributed by the same person among billions of candidates.
>
> Two-dark-matter-particles theories of dark matter have certain problems but the two particles’ being discovered in the same year can’t be one of these problems, Matt.

Matt would react in this way:

> There’s a difference between several independent discoveries being made in the same year in the same subject and having one genius solve several long-standing problems in the same year. In the latter case, there’s an obvious correlation: one genius at the center of it all. There’s no such correlation in these two excesses; completely different measurements, different places, different groups, different strategies, different implications...

Columbia didn't quite agree with Matt:

> The flipside is that there has been a sizable increase in the frequency of dark matter experiments, with a large amount of experiments producing results in the last three or four years. If there was a double signal, it wouldn’t be completely crazy if it fell within this interval.

Our Bill Zajc of Columbia added a good footnote to Columbia's comment (well, Columbia could be Bill, too, or not):

> And... if you had some less than compelling evidence you were working to refine, you might be more likely to push it out the door (publish) if another group had already published a claim. Stimulated emission.

So the clumping could have a good reason. In fact, the clumping of the two dark-matter discoveries anywhere in the world within one year would be much less extreme than the clumping of Einstein's important 1905 papers. Both of these "anomalies" could have been helped by the fact that research doesn't proceed quite "uniformly", which is why both theoretical and experimental discoveries tend to be attracted to each other and concentrated in periods with a higher pace of research.

(Isaac Newton had his "miraculous year" in 1666. He observed an apple falling from a tree, developed the universal law of gravitation and calculus, and made some important advances in optics. The year was so amazingly fruitful and habitable for quality work in solitude due to some extraordinarily lucky circumstances, especially the plague epidemic at Cambridge University LOL.)

Such things – like several similar important discoveries within one year – surely do occur sometimes. Statistics guarantees that, and a \(p\)-value of 1/100 isn't extreme enough in any sense to make something impossible. But what's really wrong about Matt's reasoning is that by declaring that one of the two would-be discoveries must be invalid, he is not addressing the actual rational reason why he could legitimately expect "two such discoveries within a year" to be unlikely.

The only such justifiable reason is a low value of the probability \(p\) that a similar dark-matter candidate really "makes it" to a verifiable discovery. Even if one of the candidates happens to be discovered, the other candidates have the same probability \(p\) of being right. If \(p\) is low, the candidate will probably go away. If \(p\) is close to one – if you really think that the hints already look like "almost clear discoveries" – then correct reasoning *predicts* that the two discoveries will be made within the same year!

If the possible systematic errors and blunders are eliminated in the case of the \(130\GeV\) Fermi line, for example, the question about its legitimacy may be reduced to the statistical errors. Once the excess gets (much) stronger than 2.5 sigma, for example, it becomes a (much) stronger argument in favor of the legitimacy of the candidate than Matt's 2.5-sigma "calendar" argument against the genuineness of the candidate. The "same year coincidence" may look like a miracle but no adjustments in assumptions about physics may ever help to "explain" such a coincidence.

In other words, the date when a potential discovery is announced has nothing whatsoever to do with the chances that the discovery is true, so "coincidences" just can't be used to label discoveries and double discoveries illegitimate or nearly impossible! Saying that the probability \(p\) for a new-particle candidate to grow into a real discovery is much smaller than one is the only valid potential argument that is similar to Matt's invalid "calendar argument".

And that's the memo.

There is a joke about a guy who is afraid to fly because there might be a bomb on the plane. His statistician friend advises: Why don't you simply bring your own bomb aboard? The chances of two bombs on one plane are extremely small.

One number to rule them all, no free parameters, testable origin on a bench top, arXiv:1005.3310, 0907.2562. It's all about the second shoe dropping, or in reduction to practice, 6.68×10^(22) pairs of them massing 40 grams total.

Theorists are growing dark matter from peanut butter with a grape press. Things will not go better for trying chunky.

I am reminded of the fellow who always carried a bomb on a plane because there was such a low probability of two bombs being on the same plane.

"The flipside is that there has been a sizable increase in the frequency of dark matter experiments, with a large amount of experiments producing results in the last three or four years. If there was a double signal, it wouldn’t be completely crazy if it fell within this interval."

This seems like a pretty good explanation. It would be interesting to see a plot of sensitivity over time for relevant experiments. I assume it's grown exponentially, with a doubling time of order one year. (Yeah, I'm pretty sure it's between 0.1 and 10 years ;) )

Is dark matter an epicycle?

To me it seems that Matt Strassler even contradicts himself in his reply to Lumo:

"There’s a difference between several independent discoveries being made in the same year in the same subject and having one genius solve several long-standing problems in the same year. In the latter case, there’s an obvious correlation: one genius at the center of it all. There’s no such correlation in these two excesses; completely different measurements, different places, different groups, different strategies, different implications..."

Two events cannot be uncorrelated and repel each other at the same time, no...?

Right, Dilaton, lack of correlation isn't the same thing as anticorrelation (or repulsion). But even the former, i.e. the absence of correlation, may predict low probabilities of coincidences... They are just "linearly" low, though, not insanely low. One would need some repulsion to get higher powers (of the ratio "delay between discoveries" over "whole history-of-dark-matter timescale") and more extremely low probabilities.
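For concreteness, here is the tiny arithmetic behind the "linearly low" remark – a toy sketch assuming a 100-year window; the extra "repulsion" factor is a hypothetical illustration, not a model of any physical mechanism:

```python
from fractions import Fraction

T = 100  # years in the window

# Uncorrelated, uniform discovery years: an exact same-year coincidence
# is only "linearly" suppressed, by one power of 1/T.
p_uncorrelated = Fraction(1, T)          # 1/100

# A hypothetical repulsion suppressing coincidences by one extra power
# of 1/T would make them quadratically unlikely instead.
p_with_repulsion = Fraction(1, T) ** 2   # 1/10000

print(p_uncorrelated, p_with_repulsion)
```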

Maybe I have been blacklisted on this blog like Galileo Galilei was by the Inquisition? ;)
