Friday, October 26, 2007

Science: Climate uncertainties cannot go away

Science magazine has just released a peer-reviewed paper by Gerard Roe and Marcia Baker,

Why is climate sensitivity so unpredictable? (full text)
The authors argue that during the last 20 years, no significant progress has been made in reducing the uncertainties about the climate, especially the climate sensitivity, despite skyrocketing numbers of scientists, funding, and computer power. Moreover, they think that this fact won't change in the future. More concretely, they say that even if various quantities become known more accurately, we won't be able to say more unambiguously what the probability is that the climate sensitivity is very high, e.g. higher than 4.5 Celsius degrees. They invent a probability distribution for the climate sensitivity that decreases slowly for large values of the sensitivity and proclaim that this is the ultimate form that won't go away.
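Their mechanism can be illustrated numerically. If the total feedback factor f carries a symmetric (Gaussian) uncertainty, the resulting sensitivity S = S0/(1-f) automatically develops a slowly decaying right tail. A minimal Python sketch, with all numbers invented for illustration rather than taken from the paper:

```python
import random
import statistics

# Illustrative numbers only -- not taken from the Roe-Baker paper
S0 = 1.2                     # assumed "bare" no-feedback sensitivity (deg C)
f_mean, f_sd = 0.65, 0.13    # assumed Gaussian feedback factor

random.seed(0)
samples = []
for _ in range(100_000):
    f = random.gauss(f_mean, f_sd)
    if f < 1.0:              # keep only the stable branch f < 1
        samples.append(S0 / (1.0 - f))

median = statistics.median(samples)
mean = statistics.fmean(samples)
tail = sum(s > 4.5 for s in samples) / len(samples)
print(f"median {median:.2f} C, mean {mean:.2f} C, P(S > 4.5 C) = {tail:.3f}")
```

The sampled mean exceeds the median - the signature of a right-skewed distribution - and a nonzero fraction of samples lands above 4.5 Celsius degrees even though the feedback uncertainty itself is perfectly symmetric.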

Nude Socialist, American Thinker, and others assume that skeptics will be enthusiastic about these claims. What do I think?

First of all, I agree that during the last two decades, not much progress was made in these questions, especially if you look at the knowledge of mainstream scientists. But unlike Roe and Baker, I don't think that it is a consequence of fundamental limitations of such a chaotic system. It is a consequence of having too many incompetent, politically passionate, corrupt, and dishonest people in the discipline.

Can predictions about the climate get better?

The answer is obviously "Yes". Virtually every prediction in the past that an answer to a question would be forever unknown or forever inaccurate has been shown incorrect as soon as science advanced in an unexpected way. The predictions are always extracted from the assumption that the specific algorithm that the authors imagine is the only one that can be used to study the system. They are always proved wrong.

What are the exceptions? The main exception is the uncertainty principle in quantum mechanics. It wasn't really "predicted": when Heisenberg first formulated it, he already had a rigorous proof and almost the whole theory (matrix mechanics). ;-)

Back to the climate. Let us ask a frequently discussed question:

Can we predict the (long-term) climate without being able to predict (short-term and medium-term) weather well? The correct answer is that we don't yet know.

Some people correctly say that certain unimportant features of the weather may "average out" when you focus on the long-term climate. Consequently, the subtleties that appear in the short run may be irrelevant. They are right. It can be so. We know many examples from physics where an effective approximate theory correctly describes the low-energy, long-distance, or long-time limit of a physical system whose high-energy, short-distance, or short-time behavior is unknown. For example, one can determine the properties of the molecules without knowing nuclear physics.
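The "averaging out" claim can be made concrete with a toy model: add large, fast, unpredictable noise to a slow deterministic drift; the long-time mean recovers the drift even though no single short-term value is predictable. A hypothetical sketch (the model and numbers are invented):

```python
import random

random.seed(1)

# Toy model (hypothetical): slow "climate" drift plus large fast "weather" noise
def daily_temperature(day):
    slow = 0.001 * day                # slow, deterministic drift
    fast = random.gauss(0.0, 5.0)     # unpredictable day-to-day noise
    return slow + fast

days = 36_500                          # 100 model years
values = [daily_temperature(d) for d in range(days)]

noisy_mean = sum(values) / days        # long-term average of the noisy series
slow_mean = 0.001 * (days - 1) / 2     # average of the slow part alone

print(f"noisy mean {noisy_mean:.2f}, slow-only mean {slow_mean:.2f}")
```

With a hundred model years of data, the noise - five degrees per day - contributes only a few hundredths of a degree to the long-term mean, so the slow component is recovered accurately.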

But I emphasize that it is possible, not guaranteed, that the long-time behavior of the climate may be understood before details about the weather are fully mastered. It doesn't have to be like that. For example, the chaotic dynamics of the climate can very well lead to a scale-invariant behavior whose description is qualitatively equal - and equally uncertain - at all time scales.

But the size of the Earth is finite and it is unlikely that an object of this size can generate interesting new dynamics at arbitrarily long time scales. This "infrared cutoff" allows me to assume that the internal contributions to the long-term climate can be understood after some effort.

Possibility to learn is something different than actual knowledge

More importantly, we must realize that if something can be answered in principle, it doesn't mean that we have already answered it. The correct long-time effective theory may be subtle, may include unexpected degrees of freedom and unexpected interactions. It is almost certainly a different theory from the first guess that you write down and it is probably a different theory from those that are popular today. Less intelligent champions of the global warming theory usually assume that if science is correct, it follows that their fashionable guess is correct. But science is something different than scientists - and science is very different from bad scientists.

Central limit theorem

Even if the climate is inherently chaotic and non-deterministic, there exist quantities that are objective in character. The probability distribution for various values of the climate sensitivity would still be a legitimate target of science. In principle, we could find what it is. Analyze the data from the whole history of the Earth - or organize experiments with the same planet for hundreds or millions of years in the future - and determine the responses of temperature to CO2 changes in many types of circumstances.

If you do it many times, the answers will converge somewhere. Either they will converge to a well-defined value of the climate sensitivity - and the probability distribution will start to rapidly approach the normal distribution by the central limit theorem - or you will be getting different temperature changes in different cases but you will still be able to draw a well-defined distribution. Assuming that we classify all possible internal phases of the Earth, the distribution itself can't be ill-defined in principle.
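This convergence can be checked directly: average many independent noisy estimates and the spread of the average shrinks, in line with the central limit theorem. A sketch with an invented noise model:

```python
import random
import statistics

random.seed(2)
TRUE_SENSITIVITY = 1.0   # invented "true" value, for the demo only

def one_estimate():
    # a single noisy, deliberately skewed measurement (assumed noise model)
    return TRUE_SENSITIVITY + random.expovariate(2.0) - 0.5

def averaged_estimate(n):
    return statistics.fmean(one_estimate() for _ in range(n))

spread_1 = statistics.stdev(one_estimate() for _ in range(5_000))
spread_100 = statistics.stdev(averaged_estimate(100) for _ in range(5_000))
print(f"single spread {spread_1:.3f}, 100-average spread {spread_100:.3f}")
```

Averaging 100 estimates cuts the spread by roughly a factor of ten, and the distribution of the averages approaches a Gaussian even though each individual estimate is skewed.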

What do I actually think about the distribution?

First of all, the fact that the climate sensitivity is not yet described as a number with a normal, Gaussian distribution means nothing less than that this scientific discipline hasn't yet become a fully quantitative one.

Second, the "bare" climate sensitivity without feedbacks is a well-known and calculable number - at least for those who know how to compute the absorption by a CO2 molecule. This number is modified by responses of clouds and other effects - by feedbacks. Is the strength of these feedbacks well-defined? Well, it may certainly depend on other internal degrees of freedom of the climate system. For example, the feedbacks can be stronger or weaker during El Niño than during La Niña. But this is exactly the kind of subtlety that will go away assuming that the long-term frequency of El Niños and La Niñas (and similar effects) is well-defined, too. You can just take a weighted average. So a well-defined averaged number encoding the feedbacks should exist, after all.
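The weighted average can be written out explicitly. Assuming the long-run frequencies of the internal states are well-defined, a single effective feedback follows; the state names, feedback strengths, and frequencies below are made up for illustration:

```python
# Made-up feedback strengths and long-run state frequencies (illustrative only)
states = {
    "el_nino": {"feedback": 0.55, "frequency": 0.25},
    "la_nina": {"feedback": 0.40, "frequency": 0.30},
    "neutral": {"feedback": 0.48, "frequency": 0.45},
}

# The effective feedback is just the frequency-weighted average
effective_f = sum(s["feedback"] * s["frequency"] for s in states.values())

S0 = 1.2                                  # assumed bare sensitivity (deg C)
effective_sensitivity = S0 / (1.0 - effective_f)
print(f"effective feedback {effective_f:.4f}, "
      f"sensitivity {effective_sensitivity:.2f} C")
```

Once the frequencies stop drifting, the effective number is pinned down, and the state-dependent subtlety indeed disappears from the long-term answer.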

Third, we should avoid obvious bias in talking about the specific value of the sensitivity. Many kinds of arguments indicate that the sensitivity is around 1 Celsius degree. The observed temperature change in the last 100 years indicates that the sensitivity is around 1 Celsius degree and the bare value is not far either. Because of uncertainties, the correct number can be somewhat different. But if the best methods - including Svensmark's and Friis-Christensen's analysis of the patterns to isolate the effects of cosmic rays, El Niños, volcanoes, and low-frequency signals - continue to be refined, we may learn this number pretty accurately.

But this point should have been about the bias. I think that many scientists, including Roe and Baker, are far too interested in one type of values. They are interested in whether the climate sensitivity is higher than 2.0 or 4.5 Celsius degrees but they never seem to be interested in the probability distribution on the other side. It's as if one half of their brain were completely missing. If you start with the bare greenhouse effect value of 1 Celsius degree, the feedbacks can go both ways. The climate sensitivity can even be negative: CO2 can cause cooling in the long-term average. Be sure that every argument that sets the probability distribution equal to zero for negative (or even for small) values of the sensitivity is a mathematically faulty argument.

I surely do agree with Richard Lindzen that the feedbacks are going to be negative because feedbacks of apparently stable systems usually are negative. But even if you have a different opinion, all scientists should agree that both possible signs must be studied. People don't do it these days. Not even Roe and Baker are doing it. Well, it's because they are interested in applications: the main application is fearmongering. Their climate science is so "applied" that "corrupt and dishonest" could be more appropriate adjectives.

It is pretty clear that the more balanced assumptions about your calculations you make, the more symmetric the probabilistic distribution will be. The probability that the sensitivity is higher than a certain pretty high limit is similar to the probability that the sensitivity is negative.

Is there a reason why the probability distribution should be pretty much symmetric? Some people could argue that if you invert the relationship and ask what is the CO2 change needed for 1 degree of warming, the corresponding inverted distribution will be non-Gaussian (or asymmetric) even if the original distribution was Gaussian (or symmetric). So why do I assume that the first one should be Gaussian (symmetric) but the inverse one doesn't have to be?

Well, it's because in this hypothetical relationship and mechanism (the greenhouse effect), CO2 is the cause and temperature is the effect determined with a certain coefficient (sensitivity). It is these coefficients - that multiply the causes to obtain the effect - that normally have symmetric or Gaussian distributions.
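This asymmetry under inversion is easy to verify numerically: sample a symmetric (Gaussian) coefficient and look at its reciprocal; the reciprocal's mean drifts away from its median even though the original shows no such gap. A sketch with assumed numbers:

```python
import random
import statistics

random.seed(3)

# Assumed symmetric (Gaussian) coefficient, comfortably away from zero
samples = [random.gauss(3.0, 0.8) for _ in range(100_000)]
inverses = [1.0 / s for s in samples if s > 0.1]   # drop rare near-zero draws

# For a symmetric distribution, mean == median; a gap signals skew
gap_original = statistics.fmean(samples) - statistics.median(samples)
gap_inverse = statistics.fmean(inverses) - statistics.median(inverses)
print(f"mean-median gap: original {gap_original:+.4f}, "
      f"inverse {gap_inverse:+.4f}")
```

The original gap is consistent with zero while the inverted one is clearly positive, which is why only one of the two quantities can naturally carry the symmetric distribution.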

Summary

Ideally, climate science should become a science again. Once it is a science, it will recognize that there exist only two possibilities regarding whether we roughly know the climate sensitivity: either we don't know it, not even qualitatively, or we know it, at least qualitatively. In the latter case, the distribution inevitably approaches the normal distribution and further work is guaranteed to make the central value more accurate. In the former case, we shouldn't be pretending that the science is settled, and this admission should have important consequences for policymaking.

There exist many possible mechanisms, potentially scary, whose character is not fully understood. For example, "cow sensitivity" measures how much more angry a photon from cell phones makes a cow. When the cattle's grumpiness exceeds a certain value, all cows (and pigs) will stop reproducing and everyone will have to become a vegetarian. Due to some complexities of the cow brain, the "cow sensitivity" may be nonzero or high, indicating that we, meat-lovers, should abandon cell-phones. But I think that a rational attitude is not to think and talk about these relationships at all until there is some reason to think that they are nonzero or important. Thinking about every conceivable relationship has normally been referred to as "paranoia".

Climate science should avoid paranoia, it should dedicate equal effort to studying both kinds of answers to many questions, and it should do its best to make certain numbers accurate and increasingly accurate. Only when the methods to do so are found and used can its long-term predictions be taken seriously as scientific results. Once such scientific results are available, it is still not clear whether they are going to be important for policymakers. That's how science works: if you do it honestly, it is just not true that every result is "useful" for applications.

And that's the memo.

Hat tip: Marc Morano

