Reasonable climatological papers (including papers strongly relevant for the debate about the climate hysteria) began to appear more frequently again. Nicholas Lewis and Judith Curry have a new paper

The implications for climate sensitivity of AR5 forcing and heat uptake estimates (full text: free PDF) in the journal Climate Dynamics.

*[Table of the paper's TCR and ECS estimates. Click to zoom in.]*

Their goal is to take the newest IPCC AR5 WG1 temperature (and ocean heat) data going back to 1750 AD and analyze the accumulation of heat and the temperature changes as cleanly as possible: with a minimum dependence on climate modelling, which may introduce new uncertainties and biases, and with a special effort to eliminate the periods with volcanic eruptions etc. that also modify the heat budget by pretty much unknown contributions.

The main result is arguably summarized in the table above: the short-term warming obtained from a doubling of CO2 (TCR: transient climate response) seems to be most likely close to 1.3 °C while the long-term warming from a CO2 doubling (ECS: equilibrium climate sensitivity) is most likely to be close to 1.6 °C. These values are close to the "no-feedback" bare value 1.2 °C and suggest that the feedbacks are very weak. Because the values are compatible with many of the previous climate skeptics' estimates, the expected warming in the decades to come is negligible from any practical viewpoint.

Real Climate quickly and predictably wrote down a criticism that is rather incomprehensible to me, so I won't try to judge it in any way. Richard Millar of Oxford, who wrote it, seems to be more interested in political goals concerning the future CO2 emissions, while this paper has nothing to do with those policy issues; it is about the physical science.

However, while I think that the paper displays lots of expertise and calm heads, there is one aspect of this paper – and of lots of other papers – that I find totally inconceivable. It is the asymmetry of the 5%–95% ranges of the climate sensitivity. In particular, the huge values of the "still plausible" long-term climate sensitivity – the upper bound goes up to 4 °C – aren't really possible.

Lewis and Curry estimate the values of the climate sensitivity in many ways and the results indeed look surprisingly stable, suggesting that climatology has some chance to become a precision science. The short-term and long-term climate sensitivities (their most likely values) calculated from various intervals and with various choices really seem to be close to 1.3 °C and 1.6 °C, respectively, with deviations that are usually smaller than 0.1 °C. That would be a huge precision, indeed. The fact that all these partly independent estimates are close to these values could be a coincidence; but one may also say that it looks like evidence that the true values are really in the ballpark.

Despite this impressive harmony when it comes to the "most likely value", the intervals they ultimately quote as the "plausible interval" are very wide, especially in the case of the ECS, the long-term climate sensitivity. Note that this 90% (i.e. 1.64-sigma) confidence interval is something like 1.0–4.0 °C in the table I quoted; some other numbers scattered in the paper reach values as high as 9 °C or so. The "center" of this interval would be at (1+4)/2 = 2.5 °C; however, the most likely value is 1.6 °C, much lower.

This discrepancy means that Lewis and Curry, much like many others, think that the probability of an ever higher sensitivity decreases very very slowly – while the probability of a lower-than-median value decreases very very quickly so that tiny (and negative) values are ruled out. This asymmetry isn't really plausible. Why?

Well, because we know from other historical contexts that the climate sensitivity can't be anything like 4 °C or higher. To put it more mathematically and in extreme terms, such a slowly decreasing distribution would actually mean that there is a high probability that the climate sensitivity is infinite or "even higher", if I may put it that way.

What do I mean? The climate sensitivity is proportional to \(1/(1-f)\) where \(f\) is a coefficient quantifying the feedbacks and the value of \(f\) is more fundamental than the value of \(1/(1-f)\). So \(f\) itself, and not \(1/(1-f)\), is composed of various terms that pretty much add up additively. The values \(f\gt 1\) are excluded because they would predict an unstable system – any perturbation from the "right temperature" would exponentially grow. The Earth would have already been destroyed billions of years ago.
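To make the \(1/(1-f)\) scaling concrete, here is a minimal toy sketch (my own illustration, not code from the paper): the feedback amplification summed as a geometric series, which diverges as \(f\to 1\). The bare no-feedback doubling value of 1.2 °C is taken from the post; everything else is a toy choice.

```python
# Toy model: a feedback coefficient f amplifies an initial no-feedback
# warming dT0 through the geometric series
#   dT = dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f),  finite only for |f| < 1.

def equilibrium_warming(dT0, f, n_terms=10_000):
    """Sum the feedback series numerically; it has no finite sum for f >= 1."""
    if abs(f) >= 1:
        raise ValueError("f >= 1: runaway feedback, no finite equilibrium")
    return sum(dT0 * f**n for n in range(n_terms))

dT0 = 1.2  # bare no-feedback warming for doubled CO2, in deg C (from the post)
for f in (0.0, 0.27, 0.7, 0.9):
    print(f"f = {f:4.2f}: dT = {equilibrium_warming(dT0, f):.2f} C "
          f"(closed form {dT0 / (1 - f):.2f} C)")
```

The partial sum and the closed form agree for any \(f\) safely below 1; already at \(f=0.9\) the amplified warming is 12 °C, which shows how explosively the sensitivity grows near the instability at \(f=1\).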

So we know that a sensible probability distribution for \(f\) is, by the central limit theorem, very close to a normal distribution, and we know that the values \(f\gt 1\) must be excluded. Because \(f\) itself isn't a perfect constant over the geological history of our planet, it was almost certainly a bit higher at some points in the past than it is today (because \(f\) has at least some dependence on the arrangement of continents, the chemical composition of the atmosphere, the prevailing color of vegetation etc. – and those things have been changing over the millions of years). So \(f\) can't really be \(0.9\) today because that would mean that at some point billions of years ago, the value would have exceeded \(1.0\) due to the very long-term evolution of the climate. And that would mean that billions of years ago, the climate would have been unstable. However, such a runaway evolution has never occurred (or at least, it wasn't occurring during a huge majority of the recent hundreds of millions of years, to say the least).

It follows that the "bell curve" (the probabilistic distribution) for \(f\) must actually sit at a safe distance below \(f=1\). Because we are separated from \(f=1\), the nonlinearities are not strong in the plausible interval and \(1/(1-f)\) may therefore be approximated by a linear function rather well and has a nearly normal distribution, too. The huge asymmetry is simply impossible because of basic physical principles as well as rudimentary observations such as the longevity of the Earth.
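A quick Monte Carlo sketch makes this point explicit (my own toy numbers, not the paper's fit): if the normal distribution of \(f\) sits safely below 1, the implied ECS distribution is nearly symmetric; only when the distribution of \(f\) is wide enough to put non-negligible mass near \(f=1\) does a fat upper tail appear. The mean 0.25 and the two widths are hypothetical choices.

```python
import random

random.seed(0)
DT0 = 1.2  # bare no-feedback CO2-doubling warming (deg C), from the post

def ecs_samples(mu, sigma, n=100_000):
    """Sample f ~ Normal(mu, sigma), reject the runaway region f >= 0.99,
    and map each sample to an equilibrium sensitivity ECS = DT0 / (1 - f)."""
    out = []
    while len(out) < n:
        f = random.gauss(mu, sigma)
        if f < 0.99:
            out.append(DT0 / (1 - f))
    return sorted(out)

# Hypothetical widths: a narrow f-distribution vs. one with mass near f = 1.
for mu, sigma, label in [(0.25, 0.05, "f safely below 1  "),
                         (0.25, 0.35, "wide f, mass near 1")]:
    s = ecs_samples(mu, sigma)
    p5, p50, p95 = s[len(s) // 20], s[len(s) // 2], s[-len(s) // 20]
    print(f"{label}: 5% {p5:.2f}, median {p50:.2f}, 95% {p95:.2f} deg C")
```

In the narrow case, the 5% and 95% values straddle the median almost symmetrically; in the wide case, the upper tail stretches to several degrees while the lower tail stays pinned near 1 °C, i.e. exactly the kind of asymmetry that (I argue) requires \(f\) to have implausibly large mass near the runaway region.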

I have discussed this long-term stability argument against high positive feedbacks in the 2010 blog post "Why the feedback amplification cannot be both high and positive".

In other words, it means that if someone says that 1.6 °C is the most likely value for the ECS and a confidence interval allows 4.0 °C as well, which is higher by 2.4 °C, then she must pretty much admit that the values of the ECS may also be 2.4 °C lower, i.e. values not far from –0.8 °C (negative climate sensitivity) must be allowed, too. There may exist a proof or evidence that the negative feedbacks can't change the sign of the climate sensitivity but I am currently not aware of such a proof or evidence.

I am not the only guy who noticed that the slow decrease of the probability for high values of the sensitivity is implausible. James Annan, a somewhat moderate climate alarmist, has actually made very similar points about the statistical distributions (although with less input from physics) in the past.

**Off-topic:** kids have been used in brutal ways. They were forced to parrot lots of painful things about the climate. This video is different. The kids actually say things that are true and that make sense. It's both cute and insightful for the beginners. P.S. Stop the video at 1:18 when despicable ecoterrorists interrupt the kids and start to spread their outrageous lies.

## snail feedback (9) :

It all looks very neat and tidy. But it is based on the presumption that CO2 is the main driver. Phlogiston was the big thing at one time and probably had all sorts of statistical analysis to prove the theories. Didn't work out so well in the end. It's all above my paygrade. I'm just a suspicious observer who puts his 2 cents in now and again.

And in the high energy limit the local QFT theories in the worldvolume of NS5 branes in IIA and IIB are little string theories (2,0) and (1,1).

From all these geometric realizations of QFTs via String theory it is evident that String theory is the natural completion of QFT. I wonder whether all these hasty critics of the theory have any idea about these amazing facts.

I think that this paper used all of the assumptions of the latest IPCC report along with historical data to show that even if you accept everything that the IPCC report says and compare it to actual data the climate sensitivity is low. The blind acceptance of the logic of the IPCC report may be the source of this "Fat Tail." But it is still a good form of argument to assume everything the report says is true to reach conclusions that are much less catastrophic.

"Because we are separated from f=1, the nonlinearities are not strong".

I plugged in the numbers from the paper, and find the spread in f to be quite large. So for the ECS, they quote 5%=1.05 K, best=1.64 K and 95%=4.05 K. Taking a bare CO2 doubling of Tco2=1.2 K, one can plug in these dT's and find f from your formula dT=Tco2/(1-f). I get f(5%)=-0.143, f(best)=0.268, f(95%)=0.704, which is basically symmetrical around f(best) (-0.411 and +0.436), despite the fact that 1.05 K and 4.05 K are asymmetrical around 1.64 K (-0.59 and +2.41 K).

Similarly for TCR, I get f(5%)=-0.333, f(best)=0.098 and f(95%)=0.520, which is also symmetrical (-0.431 and +0.422) despite the TCR values of 0.90, 1.33 and 2.50 K being asymmetrical (-0.43 and +1.17 K).

So despite the central estimates coming out very similar, the 5%-95% uncertainty in f is so large that the nonlinearities do kick in at the higher end. That the central estimates are so close, while the spreads are so large seems a bit strange, maybe it is due to the relatively short timescales studied?
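The commenter's inversion can be checked in a few lines; this is just arithmetic on the numbers quoted above, with the bare doubling value of 1.2 K taken from the post.

```python
TCO2 = 1.2  # bare no-feedback warming for doubled CO2 (K), from the post

def feedback_from_warming(dT):
    """Invert dT = TCO2 / (1 - f)  =>  f = 1 - TCO2 / dT."""
    return 1 - TCO2 / dT

# 5%, best and 95% values quoted in the comment above (Lewis & Curry), in K:
ecs = {"5%": 1.05, "best": 1.64, "95%": 4.05}
tcr = {"5%": 0.90, "best": 1.33, "95%": 2.50}

for name, vals in (("ECS", ecs), ("TCR", tcr)):
    f = {k: feedback_from_warming(v) for k, v in vals.items()}
    below = f["best"] - f["5%"]
    above = f["95%"] - f["best"]
    print(f"{name}: " + ", ".join(f"f({k}) = {v:+.3f}" for k, v in f.items())
          + f"; spread below/above best: {below:.3f} / {above:.3f}")
```

The printed spreads come out near ±0.42 around the best value for both ECS and TCR, matching the commenter's observation that the intervals in \(f\) are nearly symmetric even though the temperature intervals are not.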

Thanks for the numbers, very explicit.

Thanks for "fat tail" - I should have used this phrase about 5 times in the blog post because that's the normal name for the main "hero" of my complaint.

I don't think that it's based on this assumption. The spread clearly means that there are other highly comparable sources, and the very methodology of the authors was working to remove some other drivers that they know to be at least as important, namely the volcanos.

Chern-Simons repair of Einstein-Hilbert action is chiral. Knots are chiral. Six ways of testing spacetime geometry with quantitative chiral atomic mass distribution can validate such theory and falsify alternatives. Geometric calorimetry and geometric molecular rotational temperature require but 24 hours in existing apparatus. A geometric Eotvos experiment contrasting chemically and visibly identical single crystal test masses in enantiomorphic space groups is 90 days.

Stop whining. Look.

Slightly OT/ I suspect that this workshop might be of interest to many people, starting with Lubos: Fine-Tuning, Anthropics and the String Landscape. Alan Guth, Lisa Randall, Tom Banks, Alexander Vilenkin and some other theoretical physicists will be there from today, and you can watch them in live streaming:

http://workshops.ift.uam-csic.es/iftw.php/ws/anthropic/page/260
