Wednesday, February 27, 2019

"Five-sigma proof" of man-made climate change is complete nonsense

An analogy between cosmological and climatological anomalies

In this short remark, I want to discuss two very different "discrepancies in science" simultaneously because they share the same basic point. One is about cosmology, the other is about the climate.

Bill sent me a pretty nice article by Dennis Overbye in The New York Times,
Have Dark Forces Been Messing With the Cosmos?
The text mostly describes the slight contradictions in the measurement of Hubble's constant (which quantifies how quickly the Universe is expanding now) – and various proposals to explain the discrepancy.

Hubble's constant is the coefficient that you may multiply by the distance of a galaxy from us to obtain the speed with which it is escaping away from us. If you think about the units, Hubble's constant is basically the inverse of the age of the Universe. For practical reasons, cosmologists usually express Hubble's constant in "kilometers per second per megaparsec", however.
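If you want to see the "basically inverse" claim in numbers, here is a minimal sketch (in Python, with standard unit-conversion constants) that turns a Hubble's constant in km/s/Mpc into a "Hubble time" in billions of years; the true age of the Universe also depends on the matter and dark-energy content, so \(1/H_0\) is only a rough proxy:

```python
# Minimal sketch: convert Hubble's constant from km/s/Mpc to the
# "Hubble time" 1/H0 in billions of years.
KM_PER_MPC = 3.0857e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16    # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Return 1/H0 in Gyr for H0 given in km/s/Mpc."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC    # H0 in 1/s
    return 1.0 / (h0_per_second * SECONDS_PER_GYR)

for h0 in (67.0, 72.0):
    print(f"H0 = {h0} km/s/Mpc  ->  1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
# prints roughly 14.6 Gyr and 13.6 Gyr, the same ballpark as the age of the Universe
```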

In these units, the Hubble Space Telescope determined the constant to be 72. Adam Riess and colleagues confirmed this value and became confident that the error margin was just 2.4%. However, Europe's Planck spacecraft produced the value 67 which is about 7% away. Too bad. It looks like a rather large discrepancy although it is still modest enough so that it could possibly be due to a fluke, too. What's going on?

Overbye discusses various theoretical proposals such as the 2014 axiverse solution by Devin Walker, a (black) guy whom I knew as a bright Harvard student, and two co-authors. Here, some new axion-like fields are introduced – fields that are arguably "predicted" by string theory – which affect the "different kinds of Hubble's constant" differently, thus erasing the discrepancy.

However, the spectrum of proposed solutions is much more diverse. Which one is right? I find it rather obvious that we don't have enough data to clearly pick the winner. In fact, as you saw above, we just have one number measured in two or three ways – the values are 67 or 72 – and one number is just too little information. Measured in bits, one such number carries only a few bits of information. So the idea that you may decode from it a whole paper full of "proven answers and explanations" of the discrepancy looks like a textbook example of a numerological superstition.

The discrepancy may grow to "five sigma" in some definition of the sigma. But the attribution may be either very exciting or very mundane or anything in between and we just don't know. The reason behind the difference of 67 and 72 may be some new great axion, or perhaps even a special kind of an axion that proves string theory. It may be a proof of the multiverse, inflation, whatever you like (well, not quite "whatever"). But it may also be some systematic error in either telescope, or a combination of three assorted smaller systematic errors at various stages of the measurement that just don't tell us anything.
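To see why the number of sigmas is so malleable, here is a schematic calculation of the tension between two measurements – the error bars below are just illustrative placeholders, not the actual published ones:

```python
import math

def tension_in_sigma(a, err_a, b, err_b):
    """Difference between two measurements in units of the combined standard error."""
    return abs(a - b) / math.sqrt(err_a ** 2 + err_b ** 2)

# Illustrative inputs only: a "local" value near 72 with a 2.4% error and a
# Planck-like value near 67 with an assumed 1% error.
sigma = tension_in_sigma(72.0, 0.024 * 72.0, 67.0, 0.01 * 67.0)
print(f"tension = {sigma:.1f} sigma")
# Shrink the assumed error bars and the very same 5-unit gap becomes "five sigma".
```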

I think that the idea that the precision of Hubble's constant, as we know it, should be 2.4% is just a matter of wishful thinking. Sometime since the late 1990s, cosmology may have started to look like a new high-precision science and cosmologists (and many of us) loved to brag about this new status. But many of the "very good agreements" could have been accidental and the actual reasonable error margin in our knowledge of the parameters may be a few times larger.

Don't forget that Hubble's original value of Hubble's constant was about 4 times larger than the values we discuss today. He had the qualitatively correct paradigm – the Universe was apparently expanding – but his age of the Universe was just some 3 billion years instead of 13.8 billion years. Oops. Relative to this several-hundred-percent error of Hubble's, even the 7% discrepancy we have today is tiny. One general aspect of numbers quantifying the error margins is that they usually have a large error margin themselves. ;-)

So this discrepancy may be just another stage of the cycle – cosmology turns out to be a "bit less precise than previously thought". There's no easy "fix" if that is the case. Our hopes for the precision of cosmology could have been simply exaggerated and we may be returning closer to Earth – even though cosmology arguably lives away from the Earth.

The disagreement in one measured number such as Hubble's constant carries too little information. It is not enough to attribute the discrepancy to some detailed story. Note that the situation is analogous to the anomalies e.g. in the muon's magnetic moment. There is also a 3-sigma or so discrepancy between the theory and the experiment. (Three sigma sounds like a lot but this quantity may still be measured and predicted with a precision of some 10 significant figures.) But the explanation may be... almost anything, from some very mundane errors of the apparatuses or humans (or subtle theoretical errors in the calculation) to the most exciting paradigm-shifting ideas in new physics.

If you realize how little information one experimentally measured number brings us, you may appreciate why high-energy colliders may be wonderful – in comparison e.g. with precision science in particle physics. New colliders may actually produce a new particle. If you actually see a new particle as a clearly localized bump and measure its interactions with itself and with other particles, you may easily explain some previously seen phenomena that may be attributed to it, and things make sense. The actual production of new particles tells us much more than some discrepancy in a quantity measured with a high precision. A new particle carries a lot of qualitative and quantitative information – what kind of particle it is, what its spin is, what its mass is, what its interactions with everything else are.

Colliders may produce new classes of particles only once in 20 years these days but it's still worth trying to get them because the other kinds of data we may obtain are much murkier.

Now, an example of a very similar point from... the climate science. The notorious climate fearmonger Gavin Schmidt tweeted the following:


He picks some three scientific teams and praises them for reaching the "gold standard" of science (which is how the journalists hype it) – a five-sigma proof of man-made global warming. The signal-to-noise ratio has reached some critical threshold – those five sigma – so man-made climate change is supposedly proven at the same level at which we needed e.g. the Higgs boson to be discovered by CERN's particle physicists.

It sounds great except it's complete nonsense. When we discover something at five sigma, it means something that clearly cannot be the case in climatology. When we discover new physics at five sigma, it means that we have experimentally ruled out a well-defined null hypothesis at the confidence level of 99.99997% or so, i.e. the \(p\)-value is around \(3\times 10^{-7}\). Note that a "well-defined null hypothesis" is always needed to talk about "five sigma".
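For comparison, this is what "five sigma" quantitatively means when a well-defined null hypothesis exists – the chance that a pure Gaussian fluctuation fakes the signal (a minimal sketch):

```python
import math

def one_sided_p_value(n_sigma):
    """Chance that a pure Gaussian fluctuation exceeds n_sigma (one-sided)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

for n in (3, 5):
    p = one_sided_p_value(n)
    print(f"{n} sigma: p = {p:.2e}, confidence = {100 * (1 - p):.5f}%")
# 3 sigma: p ~ 1.3e-3; 5 sigma: p ~ 2.9e-7, i.e. ~99.99997% confidence
```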

In the case of the man-made climate change discussion, there is clearly no such "well-defined null hypothesis". In particular, when Schmidt and others discuss the "signal-to-noise ratio", they don't really know which part of the observed data is "noise" and how strong it should be. The assumption must be that the "noise" is some natural variability of the climate. But we don't really have any precise enough and canonical enough model of the natural variability. The natural variability is undoubtedly very complex and has contributions from lots of natural and statistical phenomena and their mixtures. Cloud variations, irregular seasons, solar variability, volcanoes, even earthquakes, annual ocean cycles, decadal ocean cycles, centennial ocean cycles, 1500-year ocean cycles, irregularities in tropical cyclones, plants' albedo variations, residuals from the way the average is computed, butterfly wings in China, and tons of other things.

So we can't really separate the measured data into the "signal" and the "noise". Even if we knew the relevant definition of the natural noise, we just don't know how large it was before the industrialization began. The arguments about the "hockey stick graph" are the greatest tangible proof of this statement. Some papers show the variability in 1000-1900 AD as 5 times larger than others – so "5 sigma" could very well be "1 sigma" or something else.
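A toy example of that dependence (with purely made-up numbers): the very same observed trend may be "five sigma" or "one sigma" depending on how large you assume the natural noise to be:

```python
# Purely schematic numbers: the same observed trend judged against rival
# estimates of the natural variability ("noise").
observed_trend = 1.0                    # arbitrary units
for assumed_noise in (0.2, 0.5, 1.0):   # rival reconstructions of the noise
    print(f"assumed noise {assumed_noise}: "
          f"'significance' = {observed_trend / assumed_noise:.0f} sigma")
# A 5x larger estimate of the natural noise turns a nominal "5 sigma"
# signal into an unremarkable "1 sigma" one.
```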

Just like before Schmidt's tweet, it is perfectly possible that all the data we observe may be labeled "noise" and attributed to some natural causes. There may obviously be natural causes whose effect on the global mean temperature and other quantities is virtually indistinguishable from the effect expected from the man-made global warming.

If people observed some amazing high-frequency correlation between the changes of CO2 and the temperature, a great agreement between these two functions of time could become strong evidence of the anthropogenic greenhouse effect. But that's clearly impossible because we surely can't measure the effect of the tiny seasonal variations of the CO2 concentration – these variations are just a few ppm while the observed seasonal changes of the temperature are hugely pronounced and affected mostly by things other than CO2 (especially by the Sun directly).

So the growth of the CO2 was almost monotonic – and in recent decades, almost precisely linear. Nature may also add lots of contributions that change almost monotonically or linearly for a few decades. So the summary is that Gavin Schmidt and his fellow fearmongers are trying to make the man-made climate science look like a hard science – perhaps even like particle physics – but it is not really possible for the climate science to be analogous to a hard science. The reason is that particle physics and other hard sciences have nicely understood, unique, and unbelievably precise null hypotheses that may be supported by the data or refuted; the climate science doesn't have any comparably precise null hypotheses.

At most, the attribution of the climate change is as messy a problem as the attribution of the discrepancies between Hubble's constant obtained from various sources. It's just not possible to make any reliable enough attribution because the number of parameters that we may adjust in our explanations is larger than the number of inequivalent values that are helpful for the attribution and that we may obtain from observations. In effect, the task to "attribute" is an underdetermined system of equations: the number of unknowns is larger than the number of known conditions or constraints that they obey (i.e. than the number of observed relevant data points).
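Here is a schematic illustration of what such an underdetermined system looks like – the matrix and the "observed" numbers below are made up purely to show the structure of the problem:

```python
import numpy as np

# Made-up numbers: two observed quantities, three candidate "causes".
A = np.array([[1.0, 1.0, 0.5],     # how each cause contributes to observable 1
              [0.3, 1.2, 0.8]])    # ... and to observable 2
y = np.array([1.0, 0.9])           # the two observed numbers

# lstsq picks *one* of infinitely many exact solutions (rank 2 < 3 unknowns)
x, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("one possible attribution:", x, "  rank:", rank)
# Anything in the null space of A may be added to x without changing the fit,
# so the data alone cannot single out the "true" attribution.
```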

Gavin Schmidt and everyone else who tries to paint hysterical climatology as a hard science analogous to particle physics is simply lying. Particle physics is a hard science and "five sigma proofs" are possible in it, climatology is a soft science and "five sigma proofs" in it are just marketing scams, and cosmology is somewhere in between. We all hope that cosmology will return closer to particle physics but we can't be sure.

And that's the memo.
