Tuesday, January 03, 2017

UAH AMSU: 2016 finally beats 1998 as warmest year, by Earth-shaking 0.02 °C

Update: On January 5th, the RSS AMSU data came out as well. Their December was 0.16 °C cooler than November. The result is almost the same as the UAH result below – 2016 was 0.02 °C warmer than 1998 and is the new record holder.

Dr Roy Spencer, one of the folks in the UAH AMSU team who calculate the temperature data from NASA satellites, published the value of the global mean temperature for December 2016 and therefore for the whole year 2016, too:
Global Satellites: 2016 not Statistically Warmer than 1998
In December, the global temperature anomaly (according to version 6.0 of their product) dropped by 0.21 °C (a lot for a single month) to +0.24 °C. Despite the strongest or second strongest El Niño on record, which peaked earlier in 2016, the global temperature ended the year just a quarter of a degree above the normal value for a December in the nearly 40-year satellite record.



Alexander Ač asked me to publish this video shot by his friend who is an astronaut – a video that irrefutably proves global warming. Czech Globe, the European center of excellence, is impressed by this new proof of climate change. (OK, the previous sentences were a prank but the broader message – that institutes like that employ complete idiots – holds.)

The five warmest years according to UAH (and their temperature anomalies) are:
01 — 2016 — +0.50 °C
02 — 1998 — +0.48 °C
03 — 2010 — +0.34 °C
04 — 2015 — +0.26 °C
05 — 2002 — +0.22 °C
You see that 2016 was 0.02 °C warmer than 1998, the year that had defended its gold medal against the 17 competitors that followed it.

Spencer writes that a difference equal to or greater than 0.10 °C would be needed to make the difference statistically significant – to be "95% certain", or "two-sigma certain", that the difference isn't just noise. The actual difference is smaller by a factor of five, despite the fact that the years were separated by an 18-year interval and both of them were affected by a very strong El Niño episode. This NOAA table shows that the 1997–98 and 2015–16 El Niño episodes were almost identical, with a peak ONI index of 2.3 in both cases, during the November–December–January 3-month period.
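
Just to indicate where a threshold like 0.10 °C may come from (a sketch; the per-year error of roughly 0.035 °C is my assumption reverse-engineered from the 0.10 °C figure, not an official UAH number): if the two annual anomalies carry independent errors,
\[
\sigma_{\Delta T} = \sqrt{\sigma_{1998}^2 + \sigma_{2016}^2} \approx \sqrt{2}\times 0.035\,{}^\circ{\rm C} \approx 0.05\,{}^\circ{\rm C}, \qquad 2\sigma_{\Delta T} \approx 0.10\,{}^\circ{\rm C}.
\]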




I know how the statistical significance is calculated but I think that the application of this standardized method is a bit naive in this context – and in most other contexts – because the sentence about certainty includes the word "noise" or "fluke", and that word has to be defined according to some statistical model. The standard calculations of statistical significance basically assume that the noise is normally distributed white noise – under the "null hypothesis" that we want to disprove (or to see survive), the temperature anomaly is a random, normally distributed number uncorrelated with the numbers from the other years.

This white-noise, normally distributed hypothesis is arguably the "simplest model without a trend" that we may want to compare with a hypothesis that includes a global warming trend. However, it's not necessarily the most accurate or realistic trendless hypothesis. A more realistic hypothesis would include some "inertia" or "autocorrelation" and could resemble red noise (i.e. a random walk), or at least pink noise, rather than white noise. And if you describe the global temperature as red noise, it's much more likely that the temperature will drift in one direction after some time, so the calculation of "whether it's normal to drift by X or Y centidegrees" has to be modified accordingly. The modification depends on the precise model we use for the natural noisy variations of the global mean temperature.
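
To see how much the noise model matters, here is a minimal Mathematica sketch (my illustration, not Spencer's calculation; the 0.1 °C year-to-year noise amplitude is an arbitrary assumption) comparing the typical 18-year drift under the two trendless models:

sigma = 0.1; (* assumed year-to-year noise amplitude in °C *)
(* white noise: 38 independent normal deviates per trial; red noise: their cumulative sum, i.e. a random walk *)
white = Table[RandomVariate[NormalDistribution[0, sigma], 38], {10^4}];
red = Accumulate /@ white;
drift[series_] := Abs[series[[38]] - series[[20]]]; (* change over 18 years *)
{Mean[drift /@ white], Mean[drift /@ red]}

The typical 18-year drift comes out three times larger for the red noise (the ratio of the standard deviations is \(\sqrt{18/2}=3\)), so a drift of a given size is correspondingly less "significant" there.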

For this reason, I would say that the most realistic models would require a different calculation of the significance level – and would yield a different result. And if we did the calculation with such a model, the significance would be even lower than the one quoted by Spencer.

The tiny 0.02 °C difference between 1998 and 2016 may be put in context by several additional well-defined observations:

First, 0.02 °C is so close to the natural error margins – e.g. the differences between the various teams that measure the global temperature – that it's unlikely that the other teams will report exactly 0.02 °C. (Well, RSS AMSU did end up very close to 0.02 °C, too. But the terrestrial teams will probably announce a much higher difference.) While UAH AMSU v6.0 has produced very sharp numbers without error margins, these numbers aren't exactly equal to "the most natural and canonical global mean temperature". If we interpret the UAH AMSU readings as estimates of that canonical global mean temperature, we must admit that there's an error margin, and that error margin is visibly greater than 0.02 °C. That's a reason to say that "within the error margin, the difference between the temperatures of 1998 and 2016 is zero".

Second, 0.02 °C is the difference between two years separated by 18 years that were very comparable due to the similar, very strong El Niño episodes etc. Divide 0.02 °C by 18 and you will get 0.0011 °C per year, or 0.11 °C per century. The trend you may extract from the two warmest years in the UAH AMSU dataset is clearly zero for all practical purposes. If many people are dying in 2017, the reason certainly won't be the fact that 2117 will be 0.11 °C warmer than 2017.
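
In equation form, the trend extracted from these two record years is
\[
\frac{0.02\,{}^\circ{\rm C}}{18\,{\rm years}} \approx 0.0011\,{}^\circ{\rm C}/{\rm year} = 0.11\,{}^\circ{\rm C}/{\rm century},
\]
an utterly negligible rate.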

Third, I have prepared a funny combinatorial calculation for you. Roy Spencer has ordered all 38 years by temperature on his blog – a simple command in Mathematica can do it, too. Let's check how well these 38 temperatures agree with the model of "persistently increasing temperatures".

If the temperatures were increasing every year, we would have
\[
\forall y_1 \lt y_2: \quad T(y_1) \lt T(y_2).
\]
In other words, the temperature of any "later year" is greater than the temperature of any "earlier year". Among 38 years, we may find
\[
\frac{38\times 37}{2 \times 1} = 703
\]
inequivalent pairs of years \(y_1\lt y_2\). For how many of them (and for what percentage) was the temperature in the earlier year \(y_1\) lower (colder) than the temperature in the later year \(y_2\)? If the temperatures were growing monotonically, it would be 703 pairs i.e. 100%.

What is the actual answer? Here's the simple Mathematica code to tell us the answer:
(* entries {rank, year, anomaly in °C}, sorted from the warmest year to the coldest one *)
a = {{1, 2016, 0.5}, {2, 1998, 0.48}, {3, 2010, 0.34}, {4, 2015, 0.26}, {5, 2002, 0.22}, {6, 2005, 0.2}, {7, 2003, 0.19}, {8, 2014, 0.18}, {9, 2007, 0.16}, {10, 2013, 0.13}, {11, 2001, 0.12}, {12, 2006, 0.11}, {13, 2009, 0.1}, {14, 2004, 0.08}, {15, 1995, 0.07}, {16, 2012, 0.06}, {17, 1987, 0.05}, {18, 1988, 0.04}, {19, 2011, 0.02}, {20, 1991, 0.02}, {21, 1990, 0.01}, {22, 1997, -0.01}, {23, 1996, -0.01}, {24, 1999, -0.02}, {25, 2000, -0.02}, {26, 1983, -0.04}, {27, 1980, -0.04}, {28, 1994, -0.06}, {29, 2008, -0.1}, {30, 1981, -0.11}, {31, 1993, -0.2}, {32, 1989, -0.21}, {33, 1979, -0.21}, {34, 1986, -0.22}, {35, 1984, -0.24}, {36, 1992, -0.28}, {37, 1982, -0.3}, {38, 1985, -0.36}}

n = 0;
For[i = 1, i <= 37, i++,
  For[j = i + 1, j <= 38, j++,
    (* the list is sorted from warmest to coldest, so the pair (i, j) obeys the warming-like ordering iff the warmer entry i is the later year *)
    n = n + If[a[[i, 2]] > a[[j, 2]], 1, 0];
  ]
];
{n, 37*38/2, 2.*n/37/38} (* agreeing pairs, all pairs, their ratio *)
OK, what will Mathematica return when you write this code?
{540, 703, 0.768137}
Only 540 of the 703 pairs of years, or 77%, agree with the global-warming-like ordering. (Note that the percentage would be even lower if you overrepresented the pairs of nearby years and higher if you overrepresented pairs of years that are further apart.) 77% is visibly lower than 100% and much lower than 1000%, Julian Assange's certainty that the hacked e-mails didn't come from Russia. ;-)
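
The parenthetical claim may be checked directly – a minimal sketch reusing the list a from the code above, comparing the fraction of warming-like orderings among nearby pairs of years and among distant ones:

pairs = Subsets[a, {2}]; (* each pair inherits the warmest-first ordering of a *)
frac[sel_] := N@Mean[Boole[#[[1, 2]] > #[[2, 2]]] & /@ Select[pairs, sel]];
{frac[Abs[#[[1, 2]] - #[[2, 2]]] <= 5 &], frac[Abs[#[[1, 2]] - #[[2, 2]]] >= 20 &]} (* years at most 5 apart vs at least 20 apart *)

The first fraction comes out visibly lower than the second one – at short separations, the year-to-year noise dwarfs any underlying trend.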

In this sense, you could say that global warming may be established by the recent 38 years of satellite data at the 77% confidence level – which is just a little bit more than 1 sigma (68% would be the most popular translation of 1 sigma into a confidence level). Note that particle physicists and other hard scientists demand the 5-sigma standard for a "discovery" of an effect. If global warming were to be discoverable in the satellite data, it would have to be five times as fast as it actually was.
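
For the record, the translation between confidence levels and sigmas used here follows the usual two-sided normal convention – a short sketch:

sigmas[cl_] := InverseCDF[NormalDistribution[], (1 + cl)/2];
{sigmas[0.6827], sigmas[0.77], sigmas[0.95]} (* roughly {1.00, 1.20, 1.96} sigma *)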

Just like particle physicists say that the LHC graphs with 1-sigma deviations are perfectly compatible with the Standard Model, we should say that the satellite data for the global mean temperature are perfectly compatible with the non-existence of any global warming. (The trend could be claimed to be more significant if you looked at intervals longer than 40 years. But the sources of natural variability that compete with the man-made explanation become more diverse if you switch to longer periods.)

The trend cannot be dangerous, let alone catastrophic – it isn't even statistically significant, and the magnitude needed to be "dangerous" is surely much larger than the magnitude needed to be "significant", i.e. "statistically detectable".



Incidentally, the current forecast for the temperature in Pilsen for Friday night is –16 °C. I am going to enjoy the last relatively tropical days near the freezing point. ;-) It will be very chilly but nowhere near the record cold for a January day in Pilsen – which was –24 °C.
