Because there has been some confusion - and maybe deliberate confusion - among some (alarmist) commenters about the absence of a statistically significant warming trend since 1995, i.e. over the last fifteen years, let me dedicate a full article to this issue.

**Update:** By the way, in 2010, this statement ceased to be a controversial issue in the climate debate when Phil Jones admitted to the BBC that there's been no statistically significant warming since 1995 - the very same statement.

I will use the UAH temperatures whose final 2009 figures are de facto known by now (with a sufficient accuracy) because UAH publishes the daily temperatures, too.

Mathematica can calculate the confidence intervals for the slope (warming trend) by concise commands. But I will calculate the standard error of the slope manually.

The UAH 1995-2009 slope was calculated to be 0.95 °C per century. And the standard deviation of this figure, calculated via the standard formula on the stattrek.com page linked in the code below, is 0.88 °C/century. So this suggests that the positivity of the slope is just a 1-sigma result - noise. Can we be more rigorous about it? You bet.

x = Table[i, {i, 1995, 2009}]

y = {0.11, 0.02, 0.05, 0.51, 0.04, 0.04, 0.2, 0.31, 0.28, 0.19, 0.34, 0.26, 0.28, 0.05, 0.26};

data = Transpose[{x, y}]

n = 15

xAV = Total[x]/n

yAV = Total[y]/n

xmav = x - xAV;

ymav = y - yAV;

lmf = LinearModelFit[data, xvar, xvar];

Normal[lmf]

(* http://stattrek.com/AP-Statistics-4/Estimate-Slope.aspx?Tutorial=AP *)

slopeError = Sqrt[Total[ymav^2]/(n - 2)]/Sqrt[Total[xmav^2]]

(* NB: the textbook formula uses the fit residuals, lmf["FitResiduals"], rather than ymav; with the residuals, the error comes out near 0.84 °C/century instead of 0.88 *)
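As a sanity check outside Mathematica, here is the same computation as a short, self-contained Python sketch (standard library only). It reproduces the 0.95 °C/century slope, and it computes the error both from the deviations around the mean (as in the Mathematica line above) and from the residuals around the fitted line (the textbook formula):

```python
from math import sqrt

# annual UAH anomalies 1995-2009, as listed in the text
x = list(range(1995, 2010))
y = [0.11, 0.02, 0.05, 0.51, 0.04, 0.04, 0.2, 0.31, 0.28, 0.19,
     0.34, 0.26, 0.28, 0.05, 0.26]
n = len(y)

x_av, y_av = sum(x) / n, sum(y) / n
sxx = sum((xi - x_av) ** 2 for xi in x)
slope = sum((xi - x_av) * (yi - y_av) for xi, yi in zip(x, y)) / sxx

# error from deviations around the mean (the formula used above) ...
err_mean = sqrt(sum((yi - y_av) ** 2 for yi in y) / (n - 2)) / sqrt(sxx)
# ... and from residuals around the fitted line (textbook formula)
resid = [yi - (y_av + slope * (xi - x_av)) for xi, yi in zip(x, y)]
err_resid = sqrt(sum(r * r for r in resid) / (n - 2)) / sqrt(sxx)

print(round(100 * slope, 2))      # slope: 0.95 °C/century
print(round(100 * err_mean, 2))   # 0.88 °C/century
print(round(100 * err_resid, 2))  # 0.84 °C/century
```

The two error estimates differ only slightly here; the residual-based one is what Mathematica's confidence intervals below correspond to.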

Mathematica actually has compact functions that can tell you the confidence intervals for the slope:

lmf = LinearModelFit[data, xvar, xvar, ConfidenceLevel -> .95];

The 99% confidence interval for the slope is (-1.59, +3.49) °C/century. Similarly, the 95% confidence interval is (-0.87, +2.8) °C/century, and the 90% confidence interval is (-0.54, +2.44) °C/century. All of these intervals contain both negative and positive numbers: no conclusion about the sign of the slope can be drawn at the 99%, 95%, or even the 90% confidence level.

lmf["ParameterConfidenceIntervals"]

Only the 72% confidence interval for the slope touches zero. It means that the probability that the underlying slope is negative equals one half of the remaining 28%, i.e. a substantial 14%.
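For reference, the quoted intervals follow from the fitted slope and its standard error combined with two-sided Student-t quantiles for n - 2 = 13 degrees of freedom (standard table values). The sketch below hardcodes the slope of 0.95 °C/century and the residual-based standard error of about 0.844 °C/century that Mathematica's intervals imply:

```python
slope, se = 0.95, 0.844   # °C/century; from the linear fit above
t13 = {0.99: 3.012, 0.95: 2.160, 0.90: 1.771}  # two-sided t quantiles, 13 dof

for level in (0.99, 0.95, 0.90):
    t = t13[level]
    print(f"{level:.0%} CI: ({slope - t * se:+.2f}, {slope + t * se:+.2f}) °C/century")
```

This reproduces the intervals above (the 95% upper end, +2.77, was rounded to 2.8 in the text); every one of them straddles zero.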

We can only say that it is "somewhat more likely than not" that the underlying trend in 1995-2009 was a warming trend rather than a cooling trend. Saying that the warming since 1995 was "very likely" is already a far more ambitious claim than the data support.

**Thirty years, monthly data**

Some questions were asked (and answered) in the comments under this article.

If you consider all the 30 years on the UAH record instead of 15 years, of course, you can robustly determine that the trend is significant at a 99% confidence level, to say the least. Of course, it doesn't mean that the trend is man-made and/or that it can be extrapolated.

Also, if you switch from the annual data to the monthly data, the error of the slope (warming per century) will drop by a factor of sqrt(12) or so, simply because there are twelve times as many data points over the same time span. The confidence will increase correspondingly. However, this increase of confidence is spurious.
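Where the sqrt(12) comes from: the slope error is inversely proportional to sqrt(Σ(x - x̄)²), and over the same 15-year span this sum grows roughly twelvefold when sampled monthly instead of annually. A quick sketch (the sampling grids below are schematic):

```python
n_years = 15
annual = [1995 + i for i in range(n_years)]
monthly = [1995 + i / 12 for i in range(12 * n_years)]

def sxx(xs):
    """Sum of squared deviations from the mean of the sampling times."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

ratio = sxx(monthly) / sxx(annual)
print(ratio)   # ~12, so the slope error shrinks by ~sqrt(12) = 3.46
```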

You can only claim that the monthly data since 1995 couldn't have appeared as monthly white noise (random, mutually uncorrelated figures for each month). However, the annual 1995-2009 data could easily have emerged even as white noise.

However, it is clear that the right model for the monthly data is closer to red noise (or another non-white color) than to white noise. So the right question is how robustly one can claim that the numbers didn't emerge as red noise. For example, what is the probability that red noise with statistical characteristics similar to the real data would produce the same month-on-month increments?

**Can't distinguish from red noise**

I have looked at it, too. The outcome of this calculation was completely obvious: one can't robustly distinguish the actual 1995-2009 monthly data from monthly red noise. The month-on-month global temperature changes extracted from the last 180 months represent a set of 180 numbers. They're pretty much normally distributed around +0.00089 °C per month, with a standard deviation of 0.122 °C per month.
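How big a 15-year trend does such red noise typically generate? Here is a minimal Monte Carlo sketch (an illustration under the stated assumptions, not the author's original calculation): it draws random walks with zero-mean Gaussian monthly steps of 0.122 °C - the standard deviation quoted above - and fits a linear trend to each walk:

```python
import random
from math import sqrt

random.seed(0)
n_months, sd = 180, 0.122   # 15 years of monthly steps; step size from the text

def ols_trend(series):
    """Ordinary least-squares slope of the series, per month."""
    n = len(series)
    xm = (n - 1) / 2
    sxx = sum((i - xm) ** 2 for i in range(n))
    ym = sum(series) / n
    return sum((i - xm) * (y - ym) for i, y in enumerate(series)) / sxx

# simulate random walks ("red noise") with zero-mean Gaussian monthly steps
trends = []
for _ in range(2000):
    t, walk = 0.0, []
    for _ in range(n_months):
        t += random.gauss(0.0, sd)
        walk.append(t)
    trends.append(ols_trend(walk))

rms = sqrt(sum(tr ** 2 for tr in trends) / len(trends))
print(100 * 12 * rms)   # typical |trend| in °C/century: of order 10
```

The typical trend comes out of order 10 °C/century, so the observed trend of about 1 °C/century is indeed unremarkable against this red-noise null hypothesis.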

The central value, +0.00089 °C per month, is statistically indistinguishable from zero if the monthly increments come from a normal distribution centered at zero with a standard deviation of 0.122 °C per month. Clearly, the average of 180 random, independent numbers drawn from a normal distribution centered at zero is centered at zero, too. The standard deviation of this average is 1/sqrt(180) times the standard deviation of each individual number, i.e. 0.122/sqrt(180) ≈ 0.0091 °C per month. The actual mean increment was smaller by an order of magnitude!
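In code, this back-of-the-envelope check is just a few lines (the numbers are taken from the text above):

```python
from math import sqrt

mean_inc = 0.00089        # observed mean monthly increment, °C/month (from the text)
sd_inc = 0.122            # observed std. dev. of the 180 increments, °C/month
se_mean = sd_inc / sqrt(180)

print(round(se_mean, 4))             # 0.0091 °C/month
print(round(mean_inc / se_mean, 2))  # ~0.1: a tenth of a sigma away from zero
```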

Needless to say, with these ludicrous numbers, we can make the same conclusion about the last 30 years, too. The monthly record is statistically indistinguishable from red noise. In fact, if the month-on-month temperature changes were normally distributed with a standard deviation of 0.122 °C/month, the expected accumulated temperature change over 30 years would be much higher than the observed one.
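The last claim is simple arithmetic: a random walk with such monthly steps typically drifts by sd × sqrt(N) over N months, which over 30 years is far larger than the actually observed change:

```python
from math import sqrt

sd_inc = 0.122                      # std. dev. of monthly increments, °C (from the text)
drift_30y = sd_inc * sqrt(30 * 12)  # typical random-walk drift over 360 months
print(round(drift_30y, 1))          # ≈ 2.3 °C
```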

To summarize, different kinds of tests of statistical significance may pass or fail. Moreover, when the tests pass, it is usually not because of some unnatural underlying dynamics but because of the inappropriateness of the null hypothesis. For example, the "monthly white noise" is a completely inappropriate null hypothesis which is why it could have been easily and robustly falsified.

The null hypothesis saying that "the temperature series are red noise" passes all tests - in fact, its main problem is that the 10-year or 30-year accumulated trends in reality seem to be much smaller than expected. The null hypothesis that only cares about the annual data and describes them as white noise - without a trend - also passes as long as we only consider the most recent 15 years but not more.

Neither white noise nor red noise is a good enough model to describe the temperature change at every timescale in a satisfactory way. A more correct model of Nature would know about the right autocorrelation with diverse lags. In some sense, it would have contributions from noises of different frequencies (and therefore colors). I am convinced that such an improved model could match the autocorrelation and the distribution of increments at all timescales and that the null hypothesis that the underlying trend is zero would statistically survive, too.
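One natural family interpolating between the white and red extremes is AR(1) noise, x_t = φ·x_{t-1} + e_t: φ = 0 gives white noise, φ → 1 approaches a random walk, and the autocorrelation at lag k is φ^k. A hypothetical sketch (the persistence φ and the sample size are arbitrary choices, not fitted to the temperature data):

```python
import random

random.seed(1)
phi, n = 0.6, 20000   # hypothetical persistence and sample size

x, series = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)   # AR(1) recursion
    series.append(x)

def autocorr(s, lag):
    """Sample autocorrelation of s at the given lag."""
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + lag] - m) for i in range(len(s) - lag))
    den = sum((v - m) ** 2 for v in s)
    return num / den

print(autocorr(series, 1))  # ≈ phi = 0.6
print(autocorr(series, 3))  # ≈ phi**3 ≈ 0.22
```

A realistic model would superpose several such components with different lags, which is the "diverse colors" idea above.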

That doesn't mean that I think that the CO2-induced warming trend is zero. I just think that nothing in the data so far allows us to demonstrate it at a satisfactory confidence level. After all, the overall theoretically calculated, feedback-free temperature change caused by our CO2 emissions since 1800 is close to 0.5 °C, while the largely (?) unpredictable natural temperature changes were above 1.5 °C. In some sense, one can say that the CO2 warming is a 0.3-sigma effect.

Such insignificance can be seen in many ways. For example, the 1910-1945 warming was equal to the 1974-2009 warming (both are 35-year periods) even though the man-made CO2 contribution in the earlier interval was smaller by a factor of four or more. It implies that the natural factors over 35-year periods must be of the same magnitude as the recent human influence. If you agree that most of the natural changes are unpredictable at this moment (and one prediction after another for the next 10 years consistently fails), it means that the recent 35 years of warming can't be more than a one-sigma effect - no signal of an unusual warming.

Of course, 100 years could turn it into a 3-sigma or 5-sigma effect if the recent 35 years of warming were indeed due to CO2 - i.e. if CO2 has the sensitivity that this assumption implies. If we see the overall 21st-century warming reach 1.5 °C or more, it will be relatively unlikely that it was purely due to natural factors (but the confidence level will still fail to be impressive).

But we're not there yet. We don't know whether the 21st century will see a change similar to the 20th century's or a bigger one. In the first case, the man-made global warming will remain inconclusive even in 2100, despite the extended datasets that will be available by then (unless scientists become much better and more accurate at understanding the natural factors).
