Saturday, May 13, 2006

Climate CO2 sensitivity

...and editorial policies

See also:

RealClimate: saturated confusion (directly related)
CO2 - temperature relationship is the other way around
Climate sensitivity is defined as the average increase of the temperature of the Earth that you get (or expect) by doubling the amount of CO2 in the atmosphere - from 0.028% in the pre-industrial era to the future value of 0.056% (expected around 2100).

Recall that the contribution of carbon dioxide to the warming is expected because of the "greenhouse" effect, and the main question is how large it is. The greenhouse effect is nothing other than the absorption of radiation (mostly the infrared radiation emitted by the Earth) by the "greenhouse" gases in the atmosphere, mainly water vapor - but in this case we are focusing on carbon dioxide, one of the five most important gases causing this effect after water vapor.

If you assume no feedback mechanisms and you just compute how much additional energy in the form of infrared rays emitted by (or reflected from) the surface will be absorbed by the carbon dioxide (refresh your knowledge about Earth's energy budget), you obtain the value of 1 Celsius degree or so for the climate sensitivity.

Dr Stephen Schwartz obtained 1.1 degrees for the full sensitivity; see also my reply to his critics.
While the feedback mechanisms may shift the sensitivity in either direction, Prof. Richard Lindzen of MIT, a world leader on the sensitivity issue, will convince you that the estimate is about right but that the true value, with the mostly unknown feedback mechanisms, is likely to be lower than the simple calculation suggests. One of the reasons, Lindzen's own, is a negative feedback from water vapor and clouds. There is, however, another issue here: the dependence of the temperature on the CO2 concentration is not linear but rather "sublinear". Why is it so?

You should realize that carbon dioxide only absorbs infrared radiation at certain frequencies, and it can absorb at most 100% of the radiation at those frequencies. By this comment, I want to point out that the "forcing" - the expected additive shift of the terrestrial equilibrium temperature - is not a linear function of the carbon dioxide concentration. Instead, the additional greenhouse effect becomes increasingly unimportant as the concentration increases: the expected temperature increase for a single frequency is something like
  • 1.5 ( 1 - exp[-(concentration-280)/200 ppm] ) Celsius

The decreasing exponential tells you how much radiation at the critical frequencies is able to penetrate through the carbon dioxide and leave the planet. The numbers in the formula above are not completely accurate and the precise exponential form is not quite robust either but the qualitative message is reliable. When the concentration increases, additional CO2 becomes less and less important.

In particular, there exists nothing such as a "runaway effect" or a "point of no return" or a "tipping point" or any of the similar frightening fairy-tales promoted by Al Gore and his numerous soulmates. The formula above simply does not allow you more than 1.5 Celsius degrees of warming from the CO2 greenhouse effect. Similar formulae based on Arrhenius' law predict that the derivative "d Temperature / d Concentration" decreases only as a power law - not exponentially - but it still decreases.

One might also want to obtain a better formula by integrating the formula above over frequencies.

In all cases, such a possible warming distributed over centuries is certainly nothing that a person with IQ above 80 should be producing movies about and nothing that should convince him to stop the world economy.

When you substitute the concentration of 560 ppm (parts per million), you obtain something like a 1 Celsius degree increase relative to the pre-industrial era. But even if you plug in the current concentration of 380 ppm, you obtain about 0.76 Celsius degrees of "global warming". Although we have only completed about 40% of the proverbial CO2 doubling, we have already achieved about 75% of the warming effect that is expected from such a doubling: the difference is a result of the exponentially suppressed influence of the growing carbon dioxide concentration.

As Richard Lindzen likes to say, it is just like when you paint your bedroom. The first layer of white makes a lot of difference in the amount of light in that room; additional layers make a smaller contribution.

The first calculation of the climate sensitivity, based on the Stefan-Boltzmann law, was published by the Swedish chemist Arrhenius in 1896: it had some problems but it was a fair starting point. The Carbon Dioxide Calculator is based on my simple exponential formula and you must take the exact resulting number with a grain of salt.

More exact treatment: Why is the greenhouse effect a logarithmic function of concentration?
However, my simple exponential formula agrees with the logarithmic Arrhenius formula to within plus or minus 50% up to 1000 ppm or so, expected around 2300. The changes in the emission by the surface of the Earth can be linearized even though the emission depends on the temperature as "T^4", because the expected increase of "T" is at most 2 degrees, less than one percent of the normal "room" temperatures of 290 degrees above absolute zero.
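The claimed level of agreement can be checked directly. In the sketch below, an Arrhenius-style logarithmic law is calibrated to the same per-doubling value as the exponential formula; the calibration choice and the 100-ppm grid are my own, for illustration only:

```python
import math

A_EXP, SCALE, C0 = 1.5, 200.0, 280.0              # constants of the exponential formula
S_DOUBLING = A_EXP * (1 - math.exp(-C0 / SCALE))  # its value at 560 ppm, ~1.1 deg C

def warming_exp(c):
    return A_EXP * (1 - math.exp(-(c - C0) / SCALE))

def warming_log(c):
    # Arrhenius-style logarithmic law, calibrated to the same doubling value
    return S_DOUBLING * math.log(c / C0) / math.log(2)

# The two laws stay within a factor of 1.5 of each other up to 1000 ppm.
for c in range(300, 1001, 100):
    ratio = warming_exp(c) / warming_log(c)
    assert 0.5 < ratio < 1.5
```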

In reality, the increase of the temperatures since the pre-industrial era was comparable to or slightly smaller than 0.76 Celsius degrees - something like 0.6 Celsius degrees. It is consistent to assume that the no-feedback "college physics" calculation of the CO2 greenhouse effect is approximately right, and if it is not quite right, it is more likely to be an overestimate rather than an underestimate, given the observed data.

The numbers and calculations above are actually not too controversial. Gavin Schmidt, a well-known alarmist from RealClimate, more or less agrees with the calculated figures, even though he adds a certain amount of fog - he selectively constructs various minor arguments that have the capacity to "tilt" the calculation above in the alarmist direction.

Richard Lindzen would tell you a lot about likely negative (regulating) feedback mechanisms (the iris effect?). Your humble correspondent finds all these mechanisms - positive or negative - plausible, but none of them can really be justified by the available, rather inaccurate data.

But the figure of 1 Celsius degree - understood as a rough estimate - seems to be consistent with everything we see and Schmidt himself claims that only intellectually challenged climate scientists estimate the sensitivity to be around 5 Celsius degrees (I forgot Schmidt's exact wording). It is also near the result of 1.1 Celsius degrees obtained by Stephen Schwartz in 2007.

Three weeks ago, Hegerl et al. published a text in Nature that claims that the 95 percent confidence interval for the climate sensitivity is between 1.5 and 6.2 Celsius degrees. James Annan decided to publish a reply (with J.C. Hargreaves). As you might know, James Annan - who likes to gamble and to make bets about global warming - is

  • an alarmist who believes all kinds of unreasonable things about the dangerous global warming;
  • a staunch advocate of the Bayesian probabilistic reasoning.

However, he decided to publish a reply arguing that

  • the actual sensitivity is about 5 times smaller than the Hegerl et al. upper bound which means that the warming from the carbon dioxide won't be too interesting;
  • Hegerl et al. have made errors in statistical reasoning; the error may be summarized as an application of rationally unjustified Bayesian priors which is an unscientific step.

The second point of Annan is based on the observation that Hegerl et al. simply use a "prior" (a random religious preconception that defines our "primordial state of ignorance" before the sin involving the apple, so to say) that is a crucial part of the Bayesian statistical reasoning. In this particular case, the Hegerl prior simply allows the sensitivity to be huge a priori - and such a dogma to start with is simply too strong and is not removed by the subsequent procedure of "Bayesian inference".

Such an outcome is a typical result of Bayesian methods in many cases: garbage in, garbage out. If your assumptions at the beginning are too bad, you won't obtain accurate results after any finite time spent thinking. Although I don't want to claim that Annan's reply was a great paper, I am convinced that the fact that Annan was able to appreciate these incorrect points of Hegerl et al. is partially a result of my educational influence on James Annan. ;-)
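A toy calculation can make the prior-dependence concrete. All numbers below are made up for illustration - a hypothetical Gaussian "measurement" of the sensitivity of 2.0 ± 1.5 deg C - and are not Hegerl's or Annan's figures. The only input that changes between the two runs is the prior, yet the 95% upper bound moves substantially:

```python
import math

# Toy Bayesian inference on a sensitivity grid (all numbers are made up).
grid = [0.01 + 0.005 * i for i in range(4000)]                 # sensitivity, deg C
lik = [math.exp(-0.5 * ((s - 2.0) / 1.5) ** 2) for s in grid]  # "data": 2.0 +- 1.5

def upper_95(prior):
    """95th percentile of the posterior obtained from the given prior."""
    post = [p * l for p, l in zip(prior, lik)]
    total = sum(post)
    acc = 0.0
    for s, p in zip(grid, post):
        acc += p / total
        if acc >= 0.95:
            return s

flat = [1.0] * len(grid)            # "anything up to ~20 deg C is equally plausible"
decaying = [1.0 / s for s in grid]  # a prior that does not presuppose huge values

print(upper_95(flat), upper_95(decaying))
```

The flat prior on a huge interval quietly injects the assumption that enormous sensitivities are a priori as plausible as small ones, and weak data cannot remove it - which is the substance of the complaint about the Hegerl prior.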

Nevertheless, Annan's reply was rejected by Nicki Stevens of Nature without review with the following cute justification:

  • We have regretfully decided that publication of this comment as a Brief Communication Arising is not justified, as the concerns you have raised apply more generally to a widespread methodological approach, and not solely to the Hegerl et al. paper.

In other words, Annan's reply could have the ability to catch errors that influence more than one paper, and such replies are not welcome. Imagine that Nicki Stevens is the editor of "Annalen der Physik" instead of Max Planck who received Albert Einstein's paper on special relativity. Even better, you can also imagine that Nicki Stevens is the editor who receives the paper on General Relativity whose insights apply more generally. ;-) Or any other paper that has any scientific meaning, for that matter, because meaningful science simply must be general, at least a little bit.

When we apply my reasoning more generally to a widespread methodological approach of many editors (and journalists), we could also wonder whether the person named Nicki Stevens realized that one half of the internet was going to discuss how unusually profound her misunderstanding of the scientific method was. She seems to believe that scientists should be just little ants who are adding small pieces of dust to a pyramid whose shape has already been determined by someone else, outside science, for example by Al Gore.

See also the Climate Swindle documentary.

Other frequently visited climate articles on The Reference Frame


snail feedback (17) :

reader Don said...

Just an added anti-catastrophic factor. Let's say the increase was one degree, or whatever. The increase would occur mostly at the poles, and mostly in the winter, and mostly at night. That doesn't leave much to lose sleep over. It is probably desirable. Plus, the added CO2 would make all vegetation increase, while using less water to do so, making our planet more lush, benefiting both wildlife and crops, and us. Cheer up!

reader sabesin2001 said...

No, actually temperatures remain about the same near the equator and increase several times the global mean increase near the poles. Ice melts, cities need to be evacuated. Don't cheer up too much.

reader kace said...

Wonderful article, thank you. Temperature increases decelerate -- the tipping point is really a plateau! ... Please keep writing on this subject. The time and money being wasted on global warming alarmism are shocking.

reader The Avenger said...

Great blog! Excellent insight into A.G.W. I think Dr. Lindzen is the leading authority on climate change after looking through all of the literature. You may well be headed that way. "Real Climate", what a joke. I wish I had found this blog months ago.

reader Lorin said...

Sorry to rain on the parade, but there is a tipping point, a point of no return. The thermohaline circulation is dependent on certain salinity levels and temperature ranges. Should these balances change, as a result of global warming or for whatever other reason, there is reason to believe that there may be a disruption of the thermohaline, the consequences of which we can't be fully certain (I personally doubt they'll be good).

reader Joseph said...

I've written an analysis where I estimate climate sensitivity at 3.46C. I believe the methodology is straightforward. Comments and scrutiny are welcome.

reader Lumo said...

Dear Joseph, sorry but I don't understand your derivation. More precisely, it looks like complete gibberish. You seem to be using input data that don't contain the information about the sensitivity - without other data - and you are dealing with them in very unclear ways.

Moreover, you are subtracting very large numbers from other very large numbers, which would generate huge errors even if your method were otherwise correct; this also means that you haven't isolated the CO2 signal from other signals at all.

I am very surprised that the most sensible answers about climate sensitivity are not in your poll at all, despite having so many options. The climate sensitivity is almost certainly "none of the above", between 0.3 and 1.2 deg C, and there exist much more robust and transparent ways to derive it than your derivations. Many of them are on this blog.

reader Joseph said...

So you don't believe that CO2 data combined with temperature and temperature change rate data is enough data to be able to derive sensitivity? Why not?

The results of the model I worked with are testable in ways that are rather clear. I'll come back with a link.

reader mathandphysics1 said...

I confessed to PW that I am a trickster, so only take my comment with a grain of salt.

I can only really offer a simpler derivation of your algebra.

You have defined sensitivity as:

S = (equilibrium temp at twice the current carbon conc.) - (equilibrium temp at the current carbon conc.) = a log 2

or for simplicity

S = T'2 - T'1 = a log 2

in terms of a

a = (T'2 - T'1)*(1/log 2)

Imbalance has been defined as:

I = (equilibrium temp at the current carbon conc.) - (observed temp) = T'1 - Tobs

You have also defined the instantaneous change of I (dI), where k1 and k2 are constants, as:

dI = k1*J - k2

which after some manipulation turns out to be:

dI = k1*T'2*(log C / log 2) - k1*T'1*(log C / log 2) - k1*Tobs - k2

In my mind, it all seems to say that if we knew what the equilibrium temperatures were, we could compute the sensitivity, and then we could compute the imbalance and the instantaneous change in imbalance as a function of observed temperature and carbon concentration.

So the question then becomes "What's the equilibrium temperature?"

I'm not sure that has been answered. There are a lot of other things that equilibrium temperature is dependent on. Constants a and b really are not constant (this is why you said "assuming all else is equal").

Having equilibrium temperature as a function of carbon concentration only is a "HUGE" assumption. This is the fundamental fallacy of most climate models.
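For what it's worth, the algebraic identity in the comment above can be spot-checked numerically; all constants below are arbitrary placeholders, not physical values:

```python
import math

a, b, k1, k2 = 2.5, 14.0, 0.3, 0.05   # made-up constants
C, Tobs = 380.0, 14.6                 # made-up concentration and observed temp

T1 = a * math.log(C) + b              # equilibrium temperature at concentration C
T2 = T1 + a * math.log(2)             # equilibrium temperature at 2C, so S = a log 2

J = T1 - Tobs - b
lhs = k1 * J - k2                     # dI as originally defined
rhs = (k1 * T2 * (math.log(C) / math.log(2))
       - k1 * T1 * (math.log(C) / math.log(2))
       - k1 * Tobs - k2)              # dI after the expansion in the comment
assert abs(lhs - rhs) < 1e-9          # the two forms agree
```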

reader Joseph said...

Here's a hindcast based on my analysis. I'm making the spreadsheet available so that formulas can be verified.

Mathandphysics: Thanks for reading my analysis.

While in the formulas I do assume the effect of imbalance is instantaneous, in reality I did find this is not the case. There's a lag of 3 years, and this is taken into account to produce the results of the analysis, as well as the hindcast.

It should be noted that the concept of doubling is an approximation of reality. Consider what would happen if the CO2 concentration becomes zero. Or what if there's doubling from 0.0001 ppmv to 0.0002 ppmv? The math doesn't add up. So I'd suggest a better model is

T' = log (C + k) + b

The concept of doubling says k = 0; it couldn't say anything else if it were exact.

Without getting into details, k is a small positive number relative to pertinent CO2 concentrations, so the concept of doubling is apparently a good approximation.

Of course, this equilibrium temperature calculation is based on the assumption that everything else is equal, which in reality is not. There are confounds, sure.
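The claim that a small offset k barely matters can be quantified. In this sketch k = 10 ppmv is a made-up illustrative value, not one estimated from data:

```python
import math

k, C = 10.0, 285.0                 # hypothetical offset and a pre-industrial-ish level
per_doubling_exact = math.log((2 * C + k) / (C + k))  # log-law warming factor with offset k
per_doubling_pure = math.log(2.0)                     # the plain doubling law (k = 0)

rel_error = abs(per_doubling_exact - per_doubling_pure) / per_doubling_pure
print("relative error:", round(rel_error, 3))   # a few percent
assert rel_error < 0.05
```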

reader Joseph said...

About lumo's range where sensitivity <= 1.2C, let me just say that's practically impossible.

If doubling causes an increase of 1.2C, consider what would happen if CO2 concentration goes from 285ppmv to 380ppmv. This would be:

1.2 * log (380 / 285) / log 2 ≈ 0.5 degrees

Already there has been a global warming since 1850 of about 0.7 to 0.8 degrees. It's really not possible for equilibrium temperatures to trail observed temperatures when net forcing is increasing, as it is.

A basic verification that should be done when people propose climate sensitivity estimates is to produce a time series of equilibrium temperatures and compare with observed temperatures. This is easy enough if you make an assumption about a point of equilibrium.
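The back-of-the-envelope number in this comment is easy to verify under the logarithmic law, taking the commenter's 285 and 380 ppmv endpoints as given:

```python
import math

# Warming implied by a 1.2 deg C per-doubling sensitivity for 285 -> 380 ppmv
implied = 1.2 * math.log(380.0 / 285.0) / math.log(2.0)
print(round(implied, 2), "deg C")   # about 0.5
assert 0.45 < implied < 0.55
```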

reader Lumo said...

Joseph, what you write is just completely stupid.

Concerning your question why temperature series and CO2 data by themselves are not enough to derive the sensitivity, I think that the answer is completely obvious: a major part (most) of the changes of temperature is due to non-CO2-related things.

Just look at the two graphs: they're surely not proportional to each other. They're not proportional to each other even if you try to make the model more complicated by including lags and similar things.

That's an obviously true "negative" proposition that is trivial to prove in many other ways. Temperature changes analogous to the present ones existed even when CO2 levels were stable, and on the contrary, after the 1940s, when the CO2 levels grew intensely, there was a slight net cooling for 30 years. Obviously, the other factors that made the cooling possible (aerosols, oceans, Sun variations etc. etc.) had to be at least as strong as the greenhouse effect. And their total size was probably much stronger because some of them probably came with the same (warming) sign as the greenhouse effect.

The figure 1.2 K is the bare climate sensitivity calculated without feedbacks (it also happens to agree with one that can be extracted from the 0.7 K of warming since the Industrial Revolution, because we have already made 1/2 of the doubling effect), and the reality can be both higher or lower than 1.2 K because the feedbacks can be positive or negative.

I am convinced that most of the key feedbacks are negative in all systems that are as stable as the Earth, but even if you disagree, I think that it is indisputable that you haven't provided us with any evidence for your "inequality" that the sensitivity always has to be higher than the first number one calculates. What you're doing is just pure junk science: you get some number that might be fair and then collect all possible dirty irrational pseudoarguments and arguments in one direction, to push the first number in the direction that you find more convenient.

As indicated above, I am pretty sure that the net sensitivity is lower than 1.2 K because the feedbacks are negative.

As Roy Spencer explained recently, it is also impossible to imagine that the CO2-temperature and cloud-temperature relationships are purely about the greenhouse effect plus feedbacks, even if you obtained a high correlation (which you don't), because the correlation between clouds and temperature can always be a consequence of the opposite causal relationship.

Clouds can e.g. amplify ocean-driven temperature variations which will increase the apparent simultaneous changes of the cloud cover and temperature - but it would be incorrect in this case to say that the clouds are strengthening the CO2 greenhouse effect because different driving forces can be amplified differently. Correlation is not causation.

I will be rejecting further comments of yours that will look as dumb as the recent ones.

reader mathandphysics1 said...


I think you might be getting into a trap using Excel's regression program. Regression analysis is a powerful tool, but I would caution you about using it without understanding its limitations.

reader mathandphysics1 said...

Just to correct a minor detail in my derivation

Jason has used d as a constant, which, although stated in his article, threw me off (I thought he was confused and was equating the differential element dI with the instantaneous rate of change dI/dt, but he was actually saying dTobs/dt = k*I). It is a minor correction.

He has defined the instantaneous rate of observed temperature change, with k3 a constant, as

R = dTobs / dt = k3*I = k3*(T'1- Tobs) = k3*T'1 - k3*Tobs = k1*J - k2


J = T'1 - Tobs - b


R = dTobs / dt = k1*(a log C + b) - k1*Tobs - k1*b - k2

which again becomes

R = dTobs / dt = k1*T'2*(log C / log 2) - k1*T'1* (log C / log 2) - k1*Tobs - k2

Sorry, just want to give the horse one last kick

reader mathandphysics1 said...

I think I said Jason; if I did, I meant Joseph.

reader cochise20001 said...

What were the CO2, methane, nitrogen, and oxygen atmospheric proportions in the Proterozoic and Phanerozoic geologic ages?

In particular what were the changes at the start and end of the Cambrian and of the Carboniferous?

What were the greenhouse gas proportions going into and coming out of the "Ice House World" epochs?

reader exmaple said...

Sir, why not chart the CO2 only scenario with recent satellite temps? It should show correlation, and make the climate science industry go nuts. No feedbacks, no crisis, no money.