Tuesday, April 26, 2016

Research in Prague quantifies dishonesty of literature on climate sensitivity

The power of meta-analyses in measuring the density of scammers in the climate alarmist industry

Although one or two climate blogs noticed the clever paper when it was published in August 2015, most of us were unaware of it. That included me – even though the paper was written by authors at my alma mater, Charles University in Prague (founded 1348). They're from the Faculty of Social Sciences (FSV UK) – a much more quantitative department than the name suggests. A classmate of mine (with whom I spent 6 years behind the same school desk) completed FSV UK and then went to the London School of Economics before becoming a director at Patria Finance for some time. ;-)

Figure 3 from the paper makes the bias obvious.

Sadly, the paper published in Energy & Environment remained nearly unknown for almost a year. Fortunately, Richard Tol sent the paper to Willie Soon, who sent it to me today. The 2015 paper is freely available on a Czech website:

Publication Bias in Measuring Anthropogenic Climate Change (also: a PowerPoint talk)
The authors are Dr Dominika Rečková (CZ) and Dr Zuzana Iršová (SK). Note that in Czech and Slovak, the -ová suffix indicates that they're female. That doesn't change the fact that their methodology is very clever.

They realize that the climate sensitivity – the warming (increase of the global mean temperature) expected from a doubling of CO2 if all other effects/changes are zero – is the most important quantity in the theme currently known as "climate change". The climate sensitivity may basically be described as the CO2-induced warming between 1750 AD (when the concentration of CO2 was 280 ppm) and 2080 AD (when it's expected to reach 560 ppm, the doubled value).

OK, how big is this warming? No one knows the exact value. If you neglect the water vapor, turbulence, and clouds in the atmosphere, computer models based on the absorption alone are feasible and you get about 1.2 °C for the sensitivity. The "central" estimates promoted as the most reasonable ones by the IPCC were about 3 °C in the Fourth IPCC Report, and closer to 2.5 °C in the newer Fifth IPCC Report.

"Positive feedbacks" are expected to matter by the proponents of the climate panic; estimates sometimes reach 5 °C or more. There also exist estimates, e.g. Lindzen-Choi, that are smaller than 1 °C i.e. smaller than the aforementioned "bare" value of 1.2 °C, and most skeptics think that it's very reasonable to expect that the overall feedback is negative.

Many of us have reported our impression from the literature that it seems "obvious" that a big part of the climate scientists try to inflate the figure, selectively pick the higher values and overlook the lower values, and so on. Individual claims are often just "suspicious" but if you're suspicious too often, you start to be certain that something illegitimate is going on.

But the two researchers in Prague found a simple yet clever method to quantify this bias, to make it visible to the naked eye. What is the method? We want to determine the value of this constant, the climate sensitivity \(\Delta T\). There exist many estimates in the literature; the present authors found 48 estimates in 16 papers. These estimates report some mean value for \(\Delta T\); and some standard deviation \(\delta \Delta T\) – I apologize for this awkward but hopefully comprehensible notation.

One optimistically hopes that the error margin \(\delta \Delta T\) should shrink as the science makes progress. That's not really happening quickly (if it is happening at all), but it is what should happen. But if you have one estimate \(\Delta T \pm \delta \Delta T\), the future estimates should pretty much agree with the statistical distribution indicated by the first estimate. The agreement may be inaccurate but the probability of too many too high deviations (in comparison with the error margins) should be low enough.

But if you have several estimates \(\Delta T \pm \delta \Delta T\), the future estimates – hopefully more accurate ones – should be compatible with some distribution extracted from all these intervals. They should be equally likely to be close to the "higher older estimates" and to the "lower older estimates".

This Figure 2 was borrowed from Sterne et al. (2000), a paper on meta-analyses that has over 800 citations now.

You see that the distribution of the estimates of the quantity and the error margins is left-right symmetric. The most accurate estimates (at the top of the triangle above) must be pretty much equally far from both sides.

It's important to realize that for many estimates, this expected symmetry follows from the rules of statistics and nothing else. It doesn't matter which physical theory explains one value of the parameter or another. You don't really need any theory at all. Regardless of the methodology, the basic rules of statistics and the probability calculus unavoidably imply that it is very unlikely for a large number of points to create a highly left-right-asymmetric pattern. A bias in the selection of numbers is necessary to explain a significant left-right asymmetry.
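This theory-independence is easy to check numerically. The following is a minimal sketch of my own (not the paper's code, and the numbers are made up): unbiased studies of a single quantity, each with its own precision, automatically produce a symmetric funnel – the standardized deviations from the true value have mean near zero and unit spread, regardless of what the true value is.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.7   # hypothetical "true" sensitivity, purely for illustration
n_studies = 500

# Each simulated study has its own precision; its reported mean scatters
# around the true value with that study's standard error.
std_errors = rng.uniform(0.1, 1.5, n_studies)
estimates = true_value + rng.normal(0.0, std_errors)

# Standardized deviations should be symmetric around zero: mean ~ 0, sd ~ 1.
z = (estimates - true_value) / std_errors
print(round(float(z.mean()), 2), round(float(z.std()), 2))
```

No physics entered the simulation; the symmetry is a consequence of honest reporting alone, which is exactly why a strongly asymmetric funnel is evidence of selection.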

Now, pick the studies in the climate literature that produce mean values \(\Delta T\) for the climate sensitivity and \(\delta \Delta T\) for the standard deviation (the error margin). What do you get if you draw these values in a plane? You get the green picture reported at the top – except that the authors chose to label the vertical axis differently. Their vertical axis is the "inverse standard deviation", \(1/\delta \Delta T\).

It's still true that points in the left part of the graph describe "mild global warming" while the "severe global warming" is on the right side of the graph; and accurate readings are those at the top while the inaccurate ones are those at the bottom (that's true for both graphs because the Sterne et al. 2000 diagram has "zero error margin" at the top).

At any rate, the green graph at the top of the blog post – the Figure 3 by the two Czechoslovak authors – is not left-right-symmetric at all. You may see a clear correlation: the more accurate readings are those that indicate a low climate sensitivity (global warming is insignificant); while the less accurate and very inaccurate readings are those that claim that the climate sensitivity is high (global warming is a problem).

The fact that the inaccurate high values of the climate sensitivity, unbacked by their more precise cousins, survive in the literature proves that there's a bias in the selection of the values of the climate sensitivity. It proves that the set of the authors of papers on climate sensitivity is dishonest. The paper makes this point very explicit in numerous equations that follow from the rules of statistics. Those rules are well known among the (good) statisticians doing meta-analyses, so the Prague-based researchers haven't invented them. But the climate sensitivity is an excellent example showing the power of these statistical techniques.
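For readers who want to see how such an asymmetry is quantified in practice, here is a toy version of the Egger-style meta-regression that this literature generally uses (my own illustration with made-up numbers, not the authors' exact model): if low estimates are censored out, imprecise studies clear the bar more easily, so the reported estimate grows with the standard error and the regression slope becomes clearly positive.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 1.2
std_errors = rng.uniform(0.1, 1.5, 2000)
estimates = true_value + rng.normal(0.0, std_errors)

# Simulated selective reporting: keep a study only if its estimate is
# "large enough" -- imprecise studies satisfy this more easily, so the
# funnel loses its left side.
published = estimates > 1.5
est_p, se_p = estimates[published], std_errors[published]

# Egger-style test: regress the estimate on its standard error.
# Honest literature -> slope ~ 0; this censored sample -> positive slope.
slope, intercept = np.polyfit(se_p, est_p, 1)
print(round(float(slope), 2))
```

With no censoring, the same regression gives a slope statistically compatible with zero; the significance of the slope is the funnel-asymmetry test.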

I should point out that the climate sensitivity \(\Delta T\) is often expressed as being proportional to \(1/(1-f)\) for some feedback parameter \(f\), i.e.\[
\Delta T = \frac{1.2\,{}^\circ {\rm C}}{1-f}
\]If \(f\to 1^-\), we can get a very high sensitivity which may also have a large error margin even if the error margin of \(f\) is modest. I think that the authors don't contradict this fact. But if that happens, it should still be true that you shouldn't find very accurate estimates with a low sensitivity on top of the medium accurate estimates with a very high sensitivity. In 2010, I argued that the climate sensitivity can't be high because an \(f\) close to one would still fluctuate and, over billions of years, it would visit the regime of instability \(f\gt 1\), which would have led to a catastrophic runaway effect – something that hasn't happened. But yes, let me say that I would feel much safer about their argument if the \(x\)-axis of their graph were showing the value of \(f\) rather than \(\Delta T\) because the \(f\to 1^-\) i.e. \(\Delta T\to\infty\) region is sort of singular and the naturally reconstructed distributions may be highly asymmetric (and therefore also non-Gaussian) in this region.
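The asymmetry near \(f\to 1^-\) is easy to demonstrate numerically. This sketch (my own, with made-up numbers for the spread of \(f\)) pushes a perfectly symmetric Gaussian uncertainty in \(f\) through the \(1.2\,{}^\circ{\rm C}/(1-f)\) formula and gets a visibly right-skewed distribution of \(\Delta T\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sensitivity formula from the text: dT = 1.2 C / (1 - f)
def sensitivity(f):
    return 1.2 / (1.0 - f)

# A modest, symmetric uncertainty in f (center and width chosen arbitrarily)
f_samples = rng.normal(0.6, 0.1, 100_000)
f_samples = f_samples[f_samples < 0.95]   # stay below the runaway regime

dT = sensitivity(f_samples)

# Because 1/(1-f) is convex in f, the dT distribution is right-skewed:
# its mean exceeds its median.
print(round(float(np.mean(dT)), 2), round(float(np.median(dT)), 2))
```

This is exactly why a symmetric-funnel assumption is most questionable in the high-sensitivity region: even honest reporting of \(f\) would produce asymmetric intervals in \(\Delta T\).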

But let's believe that this improvement wouldn't ruin their argument. (If it does or if there's another serious flaw in the paper, someone should write a paper that does what they did correctly.)

Once the authors acquire a high statistical confidence in the existence of the bias (an effect requiring the "physics of the society"), they add it as a term to their "meta-theory". This "meta-theory" – which takes both the genuine measurements of the climate sensitivity as well as the bias into account – allows them to adjust the literature for the bias and approximately determine the values of the climate sensitivity that the climate researchers should have gotten if they had avoided the bias. The Prague-based answer is written as "the values of climate sensitivity are compatible with the interval between 1.4 °C and 2.3 °C" while their most likely (or median?) value is about 1.7 °C.
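The general idea behind such a correction can be sketched in a few lines (this is a generic precision-effect extrapolation, not the authors' exact model, and all the numbers are made up): fit the estimate-versus-standard-error regression on the biased sample and read off the intercept, i.e. the value an infinitely precise study would report.

```python
import numpy as np

rng = np.random.default_rng(3)
true_value = 1.7   # hypothetical underlying sensitivity, illustration only
se = rng.uniform(0.1, 1.2, 3000)
est = true_value + rng.normal(0.0, se)

# Simulated publication bias: drop estimates below 1.5.
keep = est > 1.5
se_p, est_p = se[keep], est[keep]

naive = est_p.mean()   # the naive literature average, inflated by the bias
print(round(float(naive), 2))

# Precision-effect correction: extrapolate the estimate-vs-SE regression
# to SE = 0; the intercept recovers a value near the underlying 1.7.
slope, intercept = np.polyfit(se_p, est_p, 1)
print(round(float(intercept), 2))
```

The intercept lands much closer to the underlying value than the naive average does, which is the sense in which the "meta-theory" adjusts the literature for the bias.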

Also, one may calculate how high a percentage of the climate researchers should be fired. The result is approximately 97%. ;-)

P.S. 1: Rečková already defended a bachelor thesis on a similar topic in 2014. The abstract in English is available over there, too. She got the best grade for the work. It may mean she's great. It may also mean that at the Faculty of Mathematics and Physics, she would have faced more critical eyes. In both cases, the grade implies that the alarmist thought police must have been absent at the Faculty of Social Sciences of the Charles University in recent years because with that police around, a student would surely not be allowed to get an A for a thesis claiming evidence of dishonesty in the climate alarmist industry. At a moderate U.S. school, she would unavoidably get a C or worse. At Harvard, all deans and classmates would release a public statement asking why she stinks.

P.S. 2: I have found an alarmist blog, a self-described anti-Anthony-Watts physics blog, that noticed the paper last summer and found it inconvenient.

The blogger wants to argue that despite the statistical arguments, there is no bias. Instead, "it's because it is very difficult to develop a physically plausible argument as to why climate sensitivity should be this low". This is nothing else than an admission that the blogger would cherry-pick the higher values because of his prejudices, too – an admission of his own dishonesty. (Obviously, it is straightforward and extremely natural to have models with the sensitivity \(\Delta T\sim 1.2\,{}^\circ {\rm C}\): just assume that the feedbacks approximately cancel, \(f\sim 0\); and it's enough to consider some "cloud feedbacks" to believe that \(f\lt 0\), the feedbacks are negative, and \(\Delta T\lt 1.2\,{}^\circ{\rm C}\).) He completely misunderstands the point that statistical analyses of this sort make it possible to study the consistency of the claims and the biases encoded in them – regardless of any details of the "physics" by which the claims were derived!

The paper may be flawed but if that's so, it's not because the authors of the meta-analysis papers are obliged to learn all the physics that the authors of the individual papers have to know. If the Czechoslovak paper is flawed, it's flawed because they incorrectly assume that the distributions predicted by the literature are symmetric (or even Gaussian) although they are asymmetric and the asymmetry becomes particularly important in the \(f\to 1^-\) i.e. \(\Delta T\to\infty\) region. In the unlucky case for the authors, the asymmetry matters and when this omission is fixed, the conclusions of the paper qualitatively change.

P.S. 3: There also exists a related August 2015 working paper using a similar methodology, Selective Reporting and the Social Cost of Carbon (download PDF), by Tomáš Havránek (Charles Univ. and Czech National Bank), Zuzana Iršová (see above), Karel Janda (CERGE – a rather prestigious economic institute in Prague; I know many people over there), and David Zilberman (Berkeley). It was published in Energy Economics (thanks, Richard Tol). They find evidence for a bias eliminating intervals that include \(\Delta T = 0\) and the bias is stronger in the peer-reviewed literature than in the rest of the literature (because the peer review has been hijacked by those who are actually responsible for most of the bias, but the authors don't focus on these extra explanations).

Maybe the detailed strategy of this paper is safer. They estimate the social cost of carbon (the optimum carbon tax per ton) and get between $0 and $100 (they can't exclude zero). Their evidence of a bias is based on the avoidance of the zero (and negative) values. Clearly, there can't be anything that would prevent carbon from being a net benefit, so the statistical deficit of papers that allow a negative (or zero-compatible) social cost of carbon – a deficit which they do find – simply must be due to a bias.

