Friday, January 22, 2016

Temperature anomalies are naturally more accurate than the absolute global mean temperature

When skeptics' talking points degenerate to low-brow anti-science demagogy

Some climate skeptics sometimes offer talking points or express "anger" of a type that I don't share at all. And it's quite possible that I am much more annoyed by some of these criticisms than the average climate alarmist is. An example appeared at Anthony Watts' blog yesterday – and we debated it in an e-mail ring:

Failed Math: In 1997, NOAA claimed that the Earth was 3.83 degrees warmer than today
Funnily enough, the URL of that blog post contains the figure 5.63 °F instead of 3.83 °F. While screaming about "failed mathematics", Anthony wasn't quite able to compute 62.45 − 58.62. ;-) If you need to know, the bug arose because Anthony copied 58.62 as 56.82 somewhere. Hmm.

What's the cause of these skeptics' excitement? Tom Nelson found a NOAA page published in early 1998 that said that the "global average temperature for 1997 was 62.45 F" (the page was already intensely discussed one year ago, so it's not really a new finding). Now, 18 years later, NOAA quantified the global average temperature for 2015 as 58.62 °F, a whopping 3.83 °F i.e. 2.13 °C cooler than 1997.

Such a huge cooling would not only contradict the claims about the year 2015 as the warmest year; a multi-degree cooling in 18 years is obviously wrong by itself. So Nelson, Watts, and others are excited: NOAA keeps changing the data, it keeps talking about a quantity it has no idea about, and so on.

Except that the older reading of 62.45 °F was the value of a "different" global average temperature, one evaluated according to a different methodology. The "global average temperature" is a vaguely defined concept and the definition has to be "refined" for it to become quantitative or accurate. And there exist various "refinements". The most popular refinement within each agency keeps on evolving – hopefully in the direction of ever more natural and realistic notions of the "global average temperature".

So 62.45 °F could have been written as this very precise value (suggesting an error margin of 0.01 °F or so) if it's understood as a "particular January-1998-style definition of the global average temperature", i.e. a particular function of (or complicated procedure applied to) well-defined thermometer readings. This particular "operational" global average temperature is given almost precisely. But the "best procedures" evolve with time, and the January-1998-style calculation differs from the "idealized" global average temperature we might like to know instead – and this difference is the real error margin, arguably several °C.

A key point that Nelson, Watts, and many others still seem to misunderstand is that this huge uncertainty about (and methodology dependence of) the "global average temperature" does not imply that the values of the changes of this "global average temperature" must be at least equally inaccurate. It's simply not true. We can know the degree of warming (or cooling) of the world between two years much more accurately than we know the "global average temperature" for each year separately!

This is a fact that has nothing to do with CO2 or climate hysteria. It's a fact that every mathematically literate person who studies the climate must understand.

Using some thermometers, other gadgets, and some mathematical calculation, one may determine that the global average temperature in 1997 was\[

T(1997) = A \pm \Delta A

\] and it was similarly\[

T(2015) = B \pm \Delta B

\] in 2015. Here, \(A,B\) are some particular mean values and \(\Delta A,\Delta B\) are some error margins. Those may be very large, perhaps larger than 1 °C. But the point is that with a fixed methodology, it can happen that most of \(\Delta A\) is the same thing as \(\Delta B\). We say that the error is mostly a "systematic error". When it's so, the difference\[

T(2015) - T(1997) = (B - A) \pm \Delta(B-A)

\] may be calculated much more accurately, \(\Delta(B-A)\ll \Delta A\), because the big majority of the error margin simply gets subtracted. So the temperature difference may be measured with an accuracy comparable to 0.1 °C if not slightly better – even though the absolute temperatures are known much less accurately. The point is that many sources of the uncertainty of the global mean temperature depend on things like "details of the terrain" etc., and those didn't measurably change between 1997 and 2015.
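Numerically, the cancellation looks as follows. The "true" values and the sizes of the two error components below are invented purely for illustration: what matters is only that the systematic part is shared between the two years while the random part is not.

```python
import random
import statistics

random.seed(42)

# Made-up numbers for illustration: the "methodology bias" is shared by both
# years (systematic), while each year also gets a small independent error.
TRUE_1997, TRUE_2015 = 58.10, 58.65   # hypothetical idealized values (°F)
SYSTEMATIC_SIGMA = 1.5                # uncertainty of the methodology itself
RANDOM_SIGMA = 0.05                   # independent per-year measurement noise

readings_1997, differences = [], []
for _ in range(100_000):
    bias = random.gauss(0.0, SYSTEMATIC_SIGMA)   # same offset for both years
    t1997 = TRUE_1997 + bias + random.gauss(0.0, RANDOM_SIGMA)
    t2015 = TRUE_2015 + bias + random.gauss(0.0, RANDOM_SIGMA)
    readings_1997.append(t1997)
    differences.append(t2015 - t1997)

sigma_abs = statistics.stdev(readings_1997)   # ~1.5 °F: dominated by the bias
sigma_diff = statistics.stdev(differences)    # ~0.07 °F: the bias cancels
```

The absolute reading inherits the full systematic uncertainty, while the year-to-year difference only feels the small independent noise.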

A quantum intermezzo

Incidentally, there are blog posts on this blog which made the very same point – the differences or "relative" degrees of freedom may be known accurately even if the absolute terms are highly uncertain – in the context of the foundations of quantum mechanics. In an EPR experiment with entanglement, the relative polarization of the two photons is predicted with certainty even though the polarization of each photon is individually completely uncertain. There's nothing wrong about it.

It's simply not true that some quantities are "strictly fundamental" while others are "strictly derived" (like the differences of temperatures or spins) so that their error margin never drops below that determined from the error margin of their "strictly fundamental" cousins. It can drop. There are some quantities which can sometimes be expressed as functions of others – but each of them may be the "most accurately known one" under certain conditions!

Einstein was arguably the first man to publicly display his misunderstanding of this point. However, I think that Einstein's misunderstanding was restricted to the "novelties implied by quantum mechanics". He would have understood the "classical" issues with the uncertainties of the temperatures and their differences.

Back to the climate

The probability distributions may be "smeared" in different directions of the phase space or the space parameterized by all observables - and this is true both in classical physics and in quantum mechanics (where we have many "conceptually new" examples to show this effect).

OK, what are some sources of the uncertainty of the global mean temperature?

First, let us talk about power laws and "nonlinearity of averages". For the sake of simplicity, assume that the Earth is composed of two uniform, equally large (by area) regions. The temperature of the cooler one ("mostly polar, cool" places on the Earth) is about 268 K (-5 °C) while the temperature of the warmer one ("mostly tropical, warm" places) is about 298 K (+25 °C). What is the average temperature of this two-place globe?

Well, the arithmetic average of 268 K and 298 K is 283 K. But other kinds of averages may be equally if not more natural. The thermal radiation scales like \(T^4\), the fourth power of the absolute temperature (in kelvins). And you may check that the average temperature calculated from the "overall energy flux" ends up obeying\[

T_{average}^4 &= \frac{(268\,K)^4 +(298\,K)^4}{2}\\
T_{average} &\approx 284.2\,K

\] It differs from the arithmetic average by 1.2 °C. So if you wanted to determine the global average temperature with a sub-degree accuracy, you would need to be very careful about the question whether you are averaging the temperatures in kelvins or the energy fluxes associated with these temperatures. The results may differ by something like one degree, as this realistic example shows.
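The two averaging conventions from this example are easy to compare directly:

```python
# Arithmetic vs radiative ("T^4") averaging for the two-region toy globe.
T_cool, T_warm = 268.0, 298.0  # kelvins, equal areas

arithmetic_avg = (T_cool + T_warm) / 2.0
# average the energy fluxes (proportional to T^4), then convert back:
flux_avg = ((T_cool ** 4 + T_warm ** 4) / 2.0) ** 0.25

gap = flux_avg - arithmetic_avg  # ~1.2 K: the choice of convention matters
```

The flux-based average is warmer because the fourth power weights the warm region more heavily.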

Modrava in the Šumava Mountains (Bohemian Forest) on the Czech-Bavarian border has recorded the coolest winter night in Czechia so far in this season, –35.3 °C, which makes this place cooler than much of Siberia right now. ;-)

However, if you study the change of the temperature between two years, e.g. from 1997 to 2015, it won't matter much which of the two conventions for the "average" you adopt. The results will be the same, up to tenths if not hundredths of a degree. (See also Average temperatures vs average irradiances, 2008.)

There are other subtleties that hugely influence "what is your global average temperature". You measure the temperature with some weather stations and they have certain altitudes. You want the "global average temperature" to be some average of near-surface readings over all square meters of the Earth's surface. But the weather stations aren't located at representative altitudes.

If the average altitude of the Earth's surface is just 200 meters higher than the average altitude of the weather stations (and it could easily be, because people don't like to haul the material for weather stations up very high mountains), you should perhaps subtract 1 °C from your weather-station-based global average temperature, because you surely expect the temperature to decrease by some 5 °C per kilometer of altitude. Maybe you didn't make this subtraction of 1 °C in the past at all but you decided that you should have. Maybe you did make a subtraction but decided that you have a better estimate for the required subtraction. Your methodology to compute the "global mean temperature" may keep on evolving for such reasons.
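The arithmetic of this correction is trivial but worth writing down; the 5 °C/km lapse rate and the 200-meter altitude gap are the rough values assumed above:

```python
# Altitude correction for a station-based average, using the rough values
# from the text: stations sit ~200 m below the true mean surface altitude,
# and the temperature drops ~5 °C per kilometer of altitude.
LAPSE_RATE_C_PER_KM = 5.0
ALTITUDE_GAP_M = 200.0

correction_c = LAPSE_RATE_C_PER_KM * ALTITUDE_GAP_M / 1000.0  # = 1.0 °C
# subtract this from the station-based "global average temperature"
```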

Now, you have a big desert and assume that the average temperature of this desert is given by a particular weather station. But the temperature of the desert may actually be systematically almost always exactly 2 °C higher than the temperature indicated by the would-be representative weather station. You don't know the exact shift that is required. But whatever this uncertain shift is, the weather station may still be useful to estimate the temperature changes at any place of the desert.

An even simpler issue. Your methodology for "global average temperature" may completely remove the polar regions – some vicinity of the North Pole and the South Pole. (The satellite teams do it because their satellites don't see the poles too well.) By your "global average temperature", you may mean just the average with these "polar caps" of a certain size removed. The "global average temperature" with the polar regions recovered will be cooler and the temperature difference between these two kinds of a "global average temperature" may be estimated and will be pretty much constant in time.
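A toy calculation can illustrate this. The zonal temperature profile below is invented for the sake of the example (warm tropics, cold poles), and latitude bands are weighted by the cosine of latitude, proportionally to their area:

```python
import math

# Invented zonal-mean temperature profile: warm tropics, cold poles (°C).
def T(lat_deg, warming=0.0):
    return 30.0 - 60.0 * math.sin(math.radians(abs(lat_deg))) + warming

def area_weighted_mean(lat_max, warming=0.0):
    """Cosine-weighted mean of T over the latitudes |lat| <= lat_max."""
    num = den = 0.0
    for i in range(-900, 900):
        lat = (i + 0.5) / 10.0           # 0.1-degree latitude bands
        if abs(lat) > lat_max:
            continue
        w = math.cos(math.radians(lat))  # band area is proportional to cos(lat)
        num += w * T(lat, warming)
        den += w
    return num / den

full = area_weighted_mean(90.0)    # all latitudes included
capped = area_weighted_mean(70.0)  # polar caps beyond 70° removed
offset = capped - full             # constant, convention-dependent shift

# A uniform warming moves both conventions by exactly the same amount,
# so the *anomalies* agree even though the absolute averages do not.
anomaly_full = area_weighted_mean(90.0, warming=0.5) - full
anomaly_capped = area_weighted_mean(70.0, warming=0.5) - capped
```

The capped average is warmer by a roughly constant offset (here a degree or two), which drops out of any comparison of anomalies.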

Even seemingly minor issues may matter a great deal. When you talk about the "average over the surface", do you want each square meter of the tilted surface to be "equal", or do you want each square meter of the underlying horizontal "projection" to be equal? These two conventions produce yet another difference. How?

Imagine that the world has two parts again. A cooler, mountainous region has an average temperature of 268 K and an average surface slope of 0.2 radians; the warmer one is flat at 293 K. The horizontal areas underlying them are the same. So if you use the horizontal area as the measure for averaging, the average temperature will be (268 + 293)/2 = 280.5 K. However, if you use the actual area of the terrain, the wiggly part of the Earth has a larger area, by the factor of \(1/\cos(0.2)\approx 1.02\). So the non-horizontal averaging paradigm leads to the weighted average\[

\frac{1.02 \times 268\,K + 1 \times 293\,K}{2.02} \approx 280.4\,K

\] which is about 0.1 K cooler than the horizontal-projection averaging. My slopes were somewhat extreme and the perfect correlation between "warmth" and "slope" was unrealistic; this particular convention shifts the result only by a tenth of a degree or so, but it's yet another choice that has to be fixed before the "global average temperature" becomes a well-defined number.
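This toy weighting can be checked in a few lines; the temperatures, the slope, and the equal projected areas are the assumptions from the paragraph above:

```python
import math

# Horizontally-projected vs true-surface-area averaging for the toy globe:
# a sloped cold region (268 K, slope 0.2 rad) and a flat warm region (293 K)
# with equal horizontal (projected) areas.
T_mountain, T_flat = 268.0, 293.0
SLOPE = 0.2  # radians

w_mountain = 1.0 / math.cos(SLOPE)  # true area / projected area ≈ 1.02
w_flat = 1.0

horizontal_avg = (T_mountain + T_flat) / 2.0
surface_avg = (w_mountain * T_mountain + w_flat * T_flat) / (w_mountain + w_flat)
shift = horizontal_avg - surface_avg  # ≈ 0.13 K: the convention dependence
```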

A huge fraction of these uncertain or convention-dependent shifts cancels away if you talk about the anomalies. On its FAQ page, NOAA tells us:
7. Why use temperature anomalies (departure from average) and not absolute temperature measurements?

Absolute estimates of global average surface temperature are difficult to compile for several reasons. Some regions have few temperature measurement stations (e.g., the Sahara Desert) and interpolation must be made over large, data-sparse regions. In mountainous areas, most observations come from the inhabited valleys, so the effect of elevation on a region's average temperature must be considered as well. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.

Using reference values computed on smaller [more local] scales over the same time period establishes a baseline from which anomalies are calculated. This effectively normalizes the data so they can be compared and combined to more accurately represent temperature patterns with respect to what is normal for different places within a region.

For these reasons, large-area summaries incorporate anomalies, not the temperature itself. Anomalies more accurately describe climate variability over larger areas than absolute temperatures do, and they give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends.
These are some old and new examples. And I could invent dozens of rather different examples that imply the same message.

At any rate, the explanations by NOAA make sense. It is quite typical and easy to understand that the temperature anomalies – and temperature changes – may be quantified much more accurately than the absolute global average temperatures themselves simply because most of the uncertainty about the "global average temperature" is a systematic error of a sort. This error is basically independent of time which is why this error cancels when we compute anomalies (that were quantified according to the same methodology and conventions) – or, equivalently, when we compute differences between annual temperatures only.
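A minimal sketch of this cancellation, with made-up annual values and an arbitrary constant shift standing in for a change of methodology:

```python
# Two "methodologies" differing by a constant additive offset (e.g. a
# different altitude correction); their anomalies are identical.
true_series = [57.9, 58.1, 58.0, 58.3, 58.6]  # made-up annual values (°F)
OFFSET = 3.8                                  # systematic shift between methods

method_a = true_series
method_b = [t + OFFSET for t in true_series]

def anomalies(series):
    baseline = sum(series) / len(series)  # each method uses its own baseline
    return [t - baseline for t in series]

anom_a = anomalies(method_a)
anom_b = anomalies(method_b)
# anom_a and anom_b agree up to floating-point rounding
```

The absolute readings of the two methods disagree by 3.8 °F everywhere, yet the anomaly series are indistinguishable.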

Once again: The large uncertainty of the individual global average temperatures doesn't imply that the anomalies and temperature changes can't be more accurate than that. The differences and anomalies may be more accurate, after all. However, as I argued in Which global mean temperature... in 2009 and elsewhere, the large error margins in the individual global average temperatures indicate that the small, sub-degree temperature changes aren't important even if we can measure or calculate them this accurately. This is the right and important statement to be made here.

Note that all the details and ambiguities about the "right definition of a global average temperature" remain an issue even if you construct meteorological and climate models. So the weather and climate models basically can't predict the global average temperature too accurately – even though they may have the ambition to predict temperature changes much more accurately. There's no contradiction here.

In the case of the models, there are additional sources of additive shifts to the global average temperature. No one knows the Earth's albedo at all moments too accurately. No one really knows how much the greenhouse effect from water vapor (imagine that you remove all other greenhouse gases by hand) heats the surface – it may be 30 °C or 35 °C; the figure isn't known much more accurately than that. Obviously, if we could calculate the water greenhouse effect and all other natural factors this exactly, we could easily isolate the effect of CO2, too. We just can't. Given realistic contemporary knowledge of all the details, the right additive shift simply has to be fudged in all state-of-the-art global climate and meteorological models; no one can compute it from first principles. This "need for fudging" isn't due to the alarmists' evil or dishonesty. The fudge is needed simply because climate science hasn't become a full-fledged precision science yet.

We may only speculate whether we would be much further along if climate science hadn't been plagued by the climate hysteria of the recent 20 years. My guess is "no": the amount of nonsensical alarmism would be much smaller, but the amount of serious science would be about the same as it is today. Serious work would be a greater fraction of the activity in climatology, but the overall activity and funding for climatology would be way lower than today, so these two factors would approximately cancel. The rate of progress in serious climatology hasn't changed much in either direction due to the "alarmist funding boost"; only the non-serious part of the research has been inflated by an order of magnitude.

For these reasons, I view this criticism by Tom Nelson, Anthony Watts, and others as a cheap one. This is the kind of controversy in which the productive and unproductive sides of the debate are pretty sharply separated. NOAA (like UAH AMSU and others) actually has to produce some figures for the global average temperature and/or its anomaly – while for Watts et al., it's "enough" to criticize. Watts doesn't have any more accurate value for the global mean temperature (BTW I am somewhat confident, but not certain, that 59 °F is a more natural estimate than 62 °F) but he's doing fine just with destructive criticisms, without offering any alternatives. So even though these propositions are facts – the error margin of a temperature anomaly/difference is smaller than the error margin of the global average temperature itself – Watts pretends that these facts "work for him" and that he wins a match if he "calls out NOAA on those things".

Sorry, Anthony, but this is just anti-science populism. NOAA's work may be better or worse than e.g. the work of the satellite temperature teams but they're doing actual science related to the global averaging of temperatures – and they have to face the actual difficulties. You are not facing anything because you are not doing anything useful for this subdiscipline of science, you are just trying to score cheap points by hiding the fact that everyone who does these things seriously must face certain facts and difficulties.

As I have repeatedly clarified in the past, I do agree with some of the alarmists' critiques that – to put it more carefully – there is a significant overlap between the "popular movement criticizing the panicking climate science" and the "popular anti-science movement in general". Clearly, I don't want to have anything whatsoever to do with the latter. Even if and when CO2 is agreed to be one largely irrelevant factor among hundreds that influence the climate, it will still be true that scientists will try to talk about things like the averaging of temperatures and they will face most of the facts and challenges above just like they face them today. Those challenges have nothing specific to do with CO2 per se and whoever pretends that these challenges make him a "winner" is just a generic hater of science.

P.S.: Yesterday, I tried to offer an analogy to Anthony showing that the evolving adjustments are nothing to get justifiably angry about. (See also Adjustments done right are great, Feb 2015.) Take our calendar. We live in the year 2016 Anno Domini, supposedly 2016 years after the birth of Jesus Christ. Except that more careful analyses show that Jesus was born between 7 BC and 2 BC (of our standard calendar), most likely in 4 BC. Let me assume that 4 BC is right. If our calendar were literally claimed to be the most accurate representation of the time since Jesus' birth, we would call this year 2020 and World War II would be said to have ended in 1949.

If we took Jesus seriously, we could and would switch to the new calendar. This would require updating all older texts. There would be "no crime" if we switched to this new dating system. We don't do it because we don't consider Jesus' exact birth date too important. But if you consider the global average temperature an important quantity, you should be switching to ever more accurate methods to quantify it. It's a part of scientific progress.

Whether you switch to a new calendar or keep the old one, things still work. What the calendar gives you is the unique identification of the moments. And the uncertain and perhaps evolving overall time shift (4 years in my example) cancels whenever you calculate differences of years. So World War II took the same amount of time according to both calendar conventions because 1945−1939 = 1949−1943 = 6. ;-) So what's the big deal here? Whether or not you switch to the "more accurate" new conventions and calendars – and NOAA did something of the sort in the case of the global average temperature – should clearly have nothing to do with the question whether you believe CO2 to be dangerous.

It has much more to do with the question whether you prefer to do the hard work to find things out about the real world by the scientific method; or you prefer to scream that the scientific method basically doesn't work and those who do science are painful.

One more bonus graph.

A graph is sometimes worth one thousand words. This is how NOAA (or skeptics at the UAH AMSU satellite team, it doesn't really matter!) could present its probability distribution for the "idealized global average temperature in 1997" (horizontal axis) and "the same in 2015" (vertical axis) based on its real-world approximations of the "global average temperature". You see that both temperatures are highly uncertain, with a few degrees of error margin. However, their difference is \(0.55\pm 0.08\) °F, pretty much accurate. The Mathematica code that produced the graph is
DensityPlot[Exp[-(x + y - 119)^2/6]*Exp[-(x - y + 0.55)^2/2/0.08^2], {x, 58, 61}, {y, 58, 61}, ColorFunction -> "CMYKColors", PlotPoints -> 35, MeshFunctions -> {#3 &}, Mesh -> 3, MeshStyle -> {Black, Dashed}, PlotLegends -> Automatic, PlotRange -> All]
The mistake that folks like Tom and Anthony – and most laymen – make is to assume that all probability distributions have to factorize into products of probability distributions for "the preferred, fundamental variables", such as the temperatures of the two years. In terms of geometry, they think that these ellipses are always vertical or horizontal – that they can never get rotated or tilted.

But as the picture above shows, ellipses may be tilted or rotated nicely and easily, thank you for asking. ;-) The probability distributions don't have to factorize and there's nothing like the "fundamental variables" (which would be metaphysically more elementary than their more general functions). Tilted distributions are omnipresent; they just encode the correlations between the variables depicted by the two axes. Here, the high correlation between \(T(1997)\) and \(T(2015)\) means that their difference is known much more accurately than these two temperatures separately.
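One may sample from a tilted two-dimensional Gaussian of exactly this shape – broad along the \(x+y\) diagonal, narrow along \(y-x\) – and check the claim numerically; the parameters below mirror those in the Mathematica density above:

```python
import random
import statistics

random.seed(1)

# The tilted Gaussian from the plot: x + y (the "systematic" direction) is
# broad, y - x (the anomaly direction) is narrow, so the ellipse is rotated.
xs, diffs = [], []
for _ in range(100_000):
    u = random.gauss(119.0, 3.0 ** 0.5)  # x + y: sigma ≈ 1.73 °F
    v = random.gauss(0.55, 0.08)         # y - x: sigma = 0.08 °F
    x, y = (u - v) / 2.0, (u + v) / 2.0  # back to the two annual temperatures
    xs.append(x)
    diffs.append(y - x)

sigma_x = statistics.stdev(xs)        # each year's temperature: broad
sigma_diff = statistics.stdev(diffs)  # their difference: ~0.08 °F, narrow
```

Each year's temperature is an order of magnitude more uncertain than the difference between the years, exactly as the tilted ellipses encode.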

Maybe I should point out that the graph above assumes that something like the "idealized global average temperature" is at least carefully well-defined. As I discussed in the examples at the top of this blog post, that isn't really the case. So some of the uncertainty of \(T(1997)\) and \(T(2015)\) separately comes from the inability to determine the "idealized global average temperature" from the approximate and sparse thermometer readings; some of it comes from the ill-definedness of the term "global average temperature" itself. Do we weight according to the horizontal projected areas or the tilted areas of the surface? In a forest, do we take the temperature 1 foot above the soil or 1 foot above the grass, and are we supposed to measure the temperature inside the trees at the corresponding places? And so on. No one bothers to make all these things absolutely precise – although they influence the values of the global average temperature, sometimes by a degree or more – because these details don't influence what we really care about, namely the temperature changes; or at least their influence on the changes is much smaller than their influence on the "baseline".

NOAA is well aware of the fact that the anomalies and/or temperature changes are much more meaningful (and more accurately measurable) than the absolute individual "global average temperatures". It's Tom, Anthony, and other skeptics who don't realize that! Or at least, they pretend that they don't realize that.

And sorry to say, this kind of critique of Anthony's objections against NOAA should have naturally appeared on the science-dominated websites run by the alarmists. Sadly, there aren't any. Climate alarmists are much more occupied with pissing on skeptics' graves during their funerals (or with nonsensical demagogic constructions "allowing" them to say that one gallon of gasoline melts 200 tons of ice; or with predictions of imminent $500-per-barrel oil prices; or, if you're Al Gore, with a global apocalypse arriving next Tuesday) than with science.
