Friday, September 07, 2007

Quantifying climate change - too rosy a picture?

In our weekly dose of peer-reviewed skeptical literature about the climate, we look to Nature.

Stephen Schwartz has recently calculated the climate sensitivity to be around 1.1 Celsius degrees.

June article in Nature

In June, he co-authored a comment with Robert Charlson and Henning Rodhe that was published in Nature:
Quantifying climate change — too rosy a picture?
Their basic argument is obvious: if you fine-tune the parameters of your model to agree with the observed 20th century temperatures, you may get pretty close to the right answer - for example, the difference between the calculated and observed temperature may be smaller than 0.1 degree for every year. But that surely doesn't mean that the temperatures predicted for the 21st century will be equally accurate. Why? Well, because accurately reproducing a function over the fitting interval clearly doesn't mean that we know the underlying dynamics and coefficients equally accurately.

Fidel's thought experiment

To make this point obvious, imagine that Fidel Castro creates a model in which the global temperature depends on the CO2 emissions and the GDP per capita in Cuba. Surely he can adjust the response of the climate to his economy so that the resulting model will agree with the 20th century more accurately than the model without the GDP term - unless the GDP profile is "orthogonal" to the difference between the observed temperatures and those predicted by the GDP-free model, which is very unlikely. The more terms you add, the more accurate your fit can get.

But correlation is not causation.

The fact that you can reduce the error of your description of the 20th century data doesn't mean that the addition of the GDP term increases the accuracy of the 21st century predictions. I trust the reader agrees that the GDP of Cuba doesn't influence the climate that much, and if we include this unscientific term, we inevitably create a worse model with less accurate predictions, not a better one, regardless of any agreement with the 20th century graphs.
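To see the point in a toy calculation (everything below is invented for illustration, not climate data), fit a noisy linear trend once without and once with an extra, physically irrelevant random regressor playing the role of the Cuban GDP. Least squares guarantees that the in-sample error never increases when the extra term is added - even though the term is pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "20th century": a linear trend plus noise (invented numbers).
t = np.linspace(0.0, 1.0, 100)
temp = 0.6 * t + rng.normal(0.0, 0.1, t.size)

# An irrelevant regressor - the "Cuban GDP" of the thought experiment.
gdp = rng.normal(0.0, 1.0, t.size)

X_plain = np.column_stack([np.ones_like(t), t])        # honest model
X_gdp = np.column_stack([np.ones_like(t), t, gdp])     # model with the GDP term

def sse(X, y):
    """Sum of squared residuals of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

sse_plain = sse(X_plain, temp)
sse_gdp = sse(X_gdp, temp)

# Adding any regressor can never increase the in-sample error...
print(sse_gdp <= sse_plain)  # True
# ...but that says nothing about the accuracy of 21st-century predictions.
```

The inequality holds for any added column whatsoever, which is exactly why in-sample agreement by itself says very little about predictive skill.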
Commercial break: Václav Klaus spoke about the global warming hysteria in Italy
IPCC's misconceptions

Schwartz et al. demonstrate that the authors of the IPCC report don't distinguish these different sources of error in the predictions and use inconsistent assumptions about the uncertainty of the climate sensitivity. For example, the ensemble of models doesn't cover the whole interval of temperatures that would be expected from the magnitude of the uncertainties listed elsewhere. More quantitatively, the anthropogenic forcing in the IPCC report is very uncertain - between 0.6 and 2.4 watts per square meter (the ratio of the limits is four) - while the predicted temperature change resulting from this forcing is claimed twice as precisely - between 0.5 and 1.0 Celsius degrees (the ratio is two).
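The arithmetic behind that complaint is a one-liner, using only the two intervals quoted above:

```python
# Intervals quoted above from the IPCC report.
forcing_lo, forcing_hi = 0.6, 2.4   # anthropogenic forcing, W/m^2
temp_lo, temp_hi = 0.5, 1.0         # predicted temperature change, deg C

forcing_ratio = forcing_hi / forcing_lo   # 4.0
temp_ratio = temp_hi / temp_lo            # 2.0

# If the temperature response scaled roughly linearly with the forcing,
# a factor-of-four spread in the input could not shrink, by itself,
# to a factor-of-two spread in the output.
print(forcing_ratio, temp_ratio)  # 4.0 2.0
```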

The IPCC essentially deduces the uncertainty of the sensitivity from the agreement of the overfitted model with reality. The Fidel example above shows that this clearly leads to far too self-confident conclusions about the uncertainty. In this sense, the IPCC claims that the resulting temperature is known more accurately than the input parameters that determine the model, which is absurd.

This implies that the ensemble of their models doesn't coincide with the statistical ensemble that you would expect from the known uncertainties - not even if you assume that these models got everything qualitatively right except for some quantitative constants. It follows that the statistical uncertainties of predictions extracted from such model ensembles will be vastly underestimated.

The RealClimate.ORG group has dedicated a special new blog article to the paper by Schwartz et al., and they assure the readers that Schwartz et al. "misinterpret" the IPCC report. A funny detail is that while RealClimate.ORG links to two less relevant texts, they don't offer their readers the very paper by Schwartz et al. that is being "debunked".

Well, the readers don't even expect it. What the readers of RealClimate.ORG want is their daily dose of religion: they want to be assured by "experts" that the holy global warming is great and that anyone who tries to diminish His holiness - including Prof Schwartz - is a jester, kook, denier, or liar. And RealClimate.ORG is indeed optimized for readers at this intellectual level.

They have added an "update" that contains the link to the Schwartz et al. article.

Forster et al.: fog

Instead of the paper by Schwartz et al., they only offer a reply by Piers Forster et al. These people clearly haven't understood anything of the objection by Schwartz et al. that was sketched above (as Schwartz et al. explain in their reply to the reply at the end). They fully confirm the worries of Schwartz et al., and in their two pages of fog, they actually add several additional examples of their incompetence.

For example, in order to "disprove" the statement by Schwartz et al. that the IPCC model ensemble underestimates the uncertainty of the climate sensitivity, they celebrate the fact that different models predict significantly different values of the ocean heat uptake. This is kind of cute: they are criticized because the ensemble underestimates the uncertainty of the sensitivity, so they answer that the models overestimate the uncertainty of the ocean heat uptake.

Gavin's handicap is that one of his hands is 10 inches shorter than the other, but the advantage is that the other hand is actually 10 inches longer than the first one. :-)

Gaia vs analytical thinking

Are these Ladies and Gentlemen unable to see that these two quantities are different and can't be traded for each other? If both of them are wrong, then there are two problems with their model ensemble: two errors that, unfortunately for them, don't cancel. Most of these climate scientists are totally incapable of thinking analytically. They haven't yet grasped the very idea that in science, one must try to separate a complex system into components, and each component must be understood as accurately as possible in a setup where the influence of all other components is as reduced (or as controlled) as possible: different components of your models must be studied and verified in isolation whenever possible.

Instead, they only want to look at a linear combination of the components. They think that if they understand this combination, they understand every piece of it - which is of course complete nonsense.

Anthropocentric king vs natural loser

Finally, let me mention one trick that is often used by these people. Look at the following graphs taken from Forster et al.:

These three graphs show global, global land, and global ocean profiles of the 20th century temperatures. The black line shows the observations, the blue strip shows the predictions of the purely natural models, while the pink strip contains the predictions of the combined models involving man-made greenhouse gases. The pictures surely look like an impressive piece of evidence for the AGW theories.

What these vague captions don't tell you is that the graph compares the best fit based on the combined anthropogenic model with one of the worst fits among the purely natural models. The blue strip is not the result of the same optimization within the class of purely natural models that was used for the anthropogenic models - a class in which the proposed mechanisms (cosmic rays, sunspots, ocean turbulence) would be included with adjustable coefficients. Instead, the blue strip is simply obtained by taking the best anthropogenic model and removing one of its key hypothesized components - the enhanced greenhouse effect - by hand. Well, if you do so, you can be pretty sure that the results get worse. They would get worse in Fidel Castro's thought experiment, too.
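The trick can be made concrete in another toy least-squares sketch (all data invented): deleting a fitted term "by hand" from the best combined model is mathematically guaranteed to fit no better - and typically fits much worse - than honestly refitting the reduced class of models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data generated by two drivers, a "natural" and a "man-made" one.
t = np.linspace(0.0, 1.0, 200)
natural = np.sin(2.0 * np.pi * t)
manmade = t ** 2
y = 0.5 * natural + 0.8 * manmade + rng.normal(0.0, 0.05, t.size)

X_full = np.column_stack([np.ones_like(t), natural, manmade])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# (a) Amputation: keep the combined best fit but zero out the man-made term.
beta_amputated = beta_full.copy()
beta_amputated[2] = 0.0
sse_amputated = float(np.sum((y - X_full @ beta_amputated) ** 2))

# (b) Honest refit within the reduced "natural-only" class.
X_nat = np.column_stack([np.ones_like(t), natural])
beta_nat, *_ = np.linalg.lstsq(X_nat, y, rcond=None)
sse_refit = float(np.sum((y - X_nat @ beta_nat) ** 2))

# The refitted reduced model can never do worse than the amputated one,
# because the amputated model is just one member of the reduced class.
print(sse_refit <= sse_amputated)  # True
```

A fair comparison between model classes would have to optimize within each class separately; comparing a tuned model with an amputated one only proves that the amputated term was doing work in the fit.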

Imagine that someone has a theory that pi is exactly equal to a "man-made" fraction in which the unhappy number 13 is important. After some time, her best model is that pi equals 355/113. She "debunks" the deniers funded by the trigonometric industry who argue that you don't need any 13 - because pi equals a multiple of an arctangent - and to show how accurate her model is, she shows how 355/113 seems to be close to pi while 355/100 is very far from pi. Well, that's nice, but the trigonometric jesters don't say that pi is 355/100. They say that there is an accurate formula based on trigonometric functions rather than artificial, man-made fractions. And they are right. The easiest way to formulate their model is pi = 4·arctan(1), something that can be accurately evaluated by Taylor expansions.
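For what it's worth, the parable checks out numerically. A minimal sketch (plain Python, nothing climate-related): the fraction 355/113 is impressively close to pi, but the Taylor (Leibniz) series for arctan(1), accelerated by the standard trick of averaging two consecutive partial sums, beats it comfortably.

```python
import math

# The "man-made" fraction and its error.
frac = 355.0 / 113.0
err_frac = abs(frac - math.pi)   # roughly 2.7e-7

# Taylor (Leibniz) series: arctan(1) = 1 - 1/3 + 1/5 - 1/7 + ...
# The raw series converges slowly, but averaging two consecutive
# partial sums of an alternating series squares the convergence rate.
n = 100_000
s = 0.0
prev = 0.0
for k in range(n + 1):
    prev = s
    s += (-1) ** k / (2 * k + 1)
atan1 = 0.5 * (s + prev)

err_series = abs(4.0 * atan1 - math.pi)
print(err_series < err_frac)  # True
```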

Aerosols vs ocean graphs

There is one more technical detail about the graphs above that you should notice. The observed temperatures differ from a structureless, linearly increasing function in one particularly visible way, namely the cooling period after 1940 or so. In order to describe this feature of the graph, the anthropocentric believers attribute the cooling to aerosols. However, these aerosols only cool the land because they don't spread globally. Indeed, you can see that the middle graph - global land - shows a rather good agreement between the black line and the pink strip. However, if you look at the right graph, you see that the cooling after 1940 affected the oceans, too - and the pink strip is clearly unable to reproduce the peak around 1940.

What does it mean? Well, if you just look at the black curve, you may see that there was cooling both over the land and above the oceans: in fact, the cooling above the oceans after 1940 was significantly stronger than the cooling above the land. So any mechanism (such as aerosol emissions) that is largely confined to the land is likely to be falsified by the data. But as we have said, the Gaia people will never be able to reach such insights because they really only look at the far-left overall global graph, which doesn't show great agreement but is good enough for them to think that they have found the theory of everything about the climate. Except that they have clearly found a piece of exc*ement only.

And that's the memo.