Thursday, February 25, 2010

Why the feedback amplification can't be both positive and high

When no feedbacks are included, the greenhouse effect caused by CO2 adds about 1.2 °C per doubling of the CO2 concentration. This is a result of a rather clean physics problem. There's no real "complexity" in this problem: we reduce the Earth to a pretty manageable differential equation.

The doubling from the pre-industrial concentration of 280 ppm to 560 ppm of CO2 in the atmosphere will occur slightly before 2100, assuming business as usual. If the figure 1.2 °C were the total answer, and assuming that mankind has caused the whole 0.6-0.8 °C of warming we may have seen in the last century or so, it would mean that only 0.4-0.6 °C of man-made warming would be left by 2100 - less than the innocent 20th century change.
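
For concreteness, here is a minimal sketch of that arithmetic (mine, not part of the original argument), assuming only that the no-feedback response is 1.2 °C per doubling and therefore scales with the logarithm of the concentration:

```python
import math

NO_FEEDBACK_PER_DOUBLING = 1.2  # deg C per CO2 doubling, with no feedbacks

def bare_warming(c_new_ppm, c_old_ppm=280.0):
    """No-feedback warming for a CO2 change, assuming logarithmic scaling."""
    return NO_FEEDBACK_PER_DOUBLING * math.log2(c_new_ppm / c_old_ppm)

full_doubling = bare_warming(560.0)        # exactly 1.2 deg C for 280 -> 560 ppm
for observed in (0.6, 0.8):                # warming already attributed to man
    remaining = full_doubling - observed
    print(f"if {observed} C is man-made so far, {remaining:.1f} C is left by 2100")
```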

That's a completely unspectacular change. So this elementary greenhouse effect is not enough for the "applications" of the physical effect in policymaking. The advocates of carbon regulation and threats depend on some amplification of the man-made greenhouse effect, i.e. on positive feedbacks. The IPCC would like the warming per CO2 doubling to go as high as 5 °C, and some people would be thrilled to see even higher figures - figures that seem to completely disagree with the small rate of the recent warming.

Feedbacks: geometric series

Imagine that you add some CO2. That changes the temperature by the "bare mechanism" of the greenhouse effect. But the modified temperature also changes some other things in the climate that may change the temperature again. These "second round" effects are called the feedbacks and they may change the temperature in both directions.

If the "simply calculated" bare temperature change was "ΔT" and if the new increment was "f.ΔT" where "f" is a dimensionless coefficient, this "f.ΔT" of extra warming must actually be inserted to the feedback as an input once again. That adds additional "f^2.ΔT" of warming. And so on. The total warming is
ΔT(total) = ΔT (1 + f + f^2 + f^3 + ...) =
= ΔT / (1 - f).
Yes, this is the geometric series. While the total warming depends on "f" nonlinearly, it is the coefficient "f" itself whose probability distribution should be roughly uniform. After all, the feedback "f" is a sum of many diverse effects: it is "f" that behaves as an additive quantity, not "1/(1-f)".

The alarming scenarios depend on the assumption that "f" is really close to one, something like "f=0.8" if not "f=0.9", and the corresponding total warming is then very high. For example, for "f=0.8", we obtain "ΔT(total) = 1.2 °C / 0.2 = 6 °C". This is the type of result that people like James Hansen would love to be true (or at least to be believed to be true).
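
A tiny sketch of this bookkeeping, assuming nothing beyond the series above: it sums the terms explicitly and compares the result with the closed form ΔT/(1-f):

```python
BARE_DELTA_T = 1.2  # deg C per doubling, with no feedbacks

def total_warming(f, terms=200):
    """Sum the feedback series deltaT * (1 + f + f^2 + ...) term by term."""
    return BARE_DELTA_T * sum(f ** n for n in range(terms))

for f in (0.0, 0.4, 0.8, 0.9):
    closed_form = BARE_DELTA_T / (1.0 - f)
    print(f"f = {f}: series sum ~ {total_warming(f):.2f} C, "
          f"closed form {closed_form:.2f} C per doubling")
```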

However, values of "f" above one are essentially ruled out because the geometric series above is then divergent. That would physically mean that any initial perturbation would be amplified exponentially: the deviation from the would-be equilibrium would be exponentially increasing with time. (The normal behavior is that you approach an equilibrium in the future, and your distance from the equilibrium is exponentially shrinking.) The Earth's temperature would soon (in logarithmic time) escape from any hospitable interval. Everything would freeze over or evaporate.

This arguably hasn't happened for billions of years.
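
The divergence can also be seen in a toy time-stepped version (my own illustration, in which each "round" of feedback is taken as one step): for f < 1 the accumulated warming settles near ΔT/(1-f), while for f ≥ 1 it grows without bound:

```python
def accumulated_warming(f, bare_dt=1.2, rounds=50):
    """Total warming after a given number of feedback 'rounds'."""
    total, increment = 0.0, bare_dt
    for _ in range(rounds):
        total += increment
        increment *= f      # each round feeds back a fraction f of the last one
    return total

for f in (0.8, 1.0, 1.05):
    print(f"f = {f}: after 50 rounds, {accumulated_warming(f):.1f} C")
```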

It follows that "f" can't exceed one, at least not too often. It can be positive - feedbacks can be positive - but they can't be too positive. However, we may make a much stronger statement than this one. Why?

Because physical mechanisms make it pretty inevitable that "f" is not a universal dimensionless constant. For different "quasi-equilibria" - different chemical compositions of the atmosphere and the biosphere, different amounts of ice in the Arctic, different positions of the continents, and so on, i.e. for the various changes that the Earth has seen during its history - the values of the total feedback coefficient "f" must have been different. The coefficient "f" is inevitably variable. ("f" also depends on the location, but let's look at the global mean temperature only.)

By the central limit theorem, we may assume that at a random moment of the Earth's history, "f" took values from a normal distribution with the central value "f_0" and the standard deviation "SD". Because "f" is approximately a sum of contributions from many effects, there is no way "f" could be "automatically" prevented from exceeding one.

So by looking at the statistical distribution, we may determine the percentage of the Earth's history during which "f" actually exceeded "1". Whenever this occurred, if it ever occurred, the temperature was exponentially running away from the equilibrium value. Within a few decades, it would reach the boiling point or drop well below the freezing point. Life would die out. The geology would be very different.
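
To get a feeling for the numbers, here is a short sketch using Python's statistics.NormalDist; the spread SD = 0.1 is the illustrative value adopted later in the text. For the "alarming" mean values of "f", the runaway condition f > 1 would be satisfied at a few percent (or more) of randomly chosen moments - vastly more often than the bound derived below:

```python
from statistics import NormalDist

def prob_runaway(f_mean, f_sd=0.1):
    """P(f > 1) for a normally distributed feedback coefficient f."""
    return 1.0 - NormalDist(mu=f_mean, sigma=f_sd).cdf(1.0)

for f_mean in (0.4, 0.6, 0.8, 0.9):
    print(f"f_0 = {f_mean}: P(f > 1) ~ {prob_runaway(f_mean):.1e}")
```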

Let's assume that such an uncontrollable exponential runaway, with "f" exceeding one, would destroy life on Earth within 47 years, to make the numbers simpler. (I was approximately inspired by a stupid movie, Age of Stupid, when I chose this figure.) The Earth is roughly 4.7 billion years old, so its life contains 100 million periods whose length is 47 years.

Because none of those 100 million periods has contained the deadly exponential runaway behavior we are discussing, it follows that the probability that "f" exceeds one should be lower than "one in 100 million".

Inserting the numbers

But we have an explicit handle on the probability that "f" exceeds one. We said that "f" was distributed according to the normal distribution with "f_0" as the mean value and with the standard deviation "SD".

The maths is complicated, so let's be surprised by the power or lack of power in this argument. (I haven't made any calculation before I wrote this text: this is being written from scratch.)

My estimate for the fluctuations of "f" depending on the "regime" of the Earth is "SD=0.1" (for feedbacks "f" comparable to one, this is something like a 10% error). I think it's unlikely that "f" is determined much more accurately than that: it's much more likely that the uncertainty of "f" is higher than that. Now, what is the mean value "f_0" such that the probability that "f" exceeds one, given the standard deviation "SD=0.1", is lower than "1 in 100 million"?

Well, it's simple. If you look at the numbers describing confidence intervals, you will see that "1 in 100 million" is approximately equivalent to a "6 sigma" deviation from the mean. So the mean value must be at least 6 standard deviations below 1. But because I decided that "SD=0.1", it follows that "f_0" must be at most "1 - 6×0.1 = 0.4", which leads to a total warming of 1.2 °C / (1 - 0.4) = 2 °C per CO2 doubling. About 1 °C would be left for the 21st century.

If you managed to show that the standard deviation for "f" is "SD=0.2", the maximum allowed mean value of "f" would be "f_0 = 1 - 6×0.2 = -0.2". In other words, if you demonstrated that the deviations are as big as "SD=0.2", that would prove that the (average over time and space) feedback coefficient "f" actually has to be negative!
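
The whole chain can be put into one sketch. Note that the exact one-sided tail corresponding to "1 in 100 million" is about 5.6 sigma; the text rounds it to 6 sigma, so the code should give slightly weaker bounds (roughly 0.44 and -0.12 instead of 0.4 and -0.2):

```python
from statistics import NormalDist

AGE_OF_EARTH_YEARS = 4.7e9
PERIOD_YEARS = 47.0
max_prob = PERIOD_YEARS / AGE_OF_EARTH_YEARS   # ~1e-8 allowed for P(f > 1)

# one-sided tail of 1e-8 corresponds to about 5.6 standard deviations
z = NormalDist().inv_cdf(1.0 - max_prob)

for sd in (0.1, 0.2):
    f0_max = 1.0 - z * sd                      # largest admissible mean of f
    warming = 1.2 / (1.0 - f0_max)             # implied warming per CO2 doubling
    print(f"SD = {sd}: f_0 <= {f0_max:+.2f}, at most {warming:.1f} C per doubling")
```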

Now, I don't know how large "SD" actually is. One would have to look at the typical variability of water vapor and cloudiness in different epochs of the paleoclimatological and geological history. But whatever the exact numbers are, I think that this argument is very powerful and largely excludes the values of "f" - and distributions for "f" - that are too close to "f=1". Also, note that the normal distribution decreases very quickly: if I used a distribution with fatter tails, one that decreases more slowly than the Gaussian, I would get even stricter conditions for "f_0"!

I feel that the argument above is a quantitative explanation for the intuition that feedbacks in systems without a runaway behavior are much more likely to be negative than positive: they must be "repelled" from the unphysical runaway region of the parameter space. The argument above is no "rigorous proof" that the feedbacks can't be high but I think it is a sensible starting point to choose the "priors" for different values of "f" that are a priori conceivable. The priors should follow a natural distribution that should be pretty much negligible at "f=1". That mostly excludes any significant amplification of the bare greenhouse effect.

Of course, I have no doubts that the alarmists will deny the existence of general theoretical arguments that make similar "catastrophes" very unlikely. But others may want to look at arguments in both directions.

And that's the memo.

9 comments:

  1. That "1.2 deg C" from "CO2 in the air without feedback" is entirely in the realm of the hypothetical, because there is no way to measure it.

    That supposed figure is within natural variability of the atmosphere, and there is no way to decompose all the effects to ascribe a temperature change of that magnitude to a unique influence.

    This "greenhouse gas theory" is by far the most contemptible misuse of the word "science" in the history of human thought. It is so vile that it makes me angrier than I have ever been.

    ReplyDelete
  2. Dear Brian,

    it's not "hypothetical", it's "theoretical", because it's not a result of a random guess but a result of a calculation based on empirically established laws of physics.

    The calculation doesn't have to directly correspond to reality - and indeed, it quantitatively doesn't agree because the feedbacks are likely to exist, with one sign or another. But it's still an important calculation. The idealization in it is "not infinite" and my personal guess is that the total feedbacks won't be too high in either direction.

    Cheers
    LM

    ReplyDelete
  3. Don't the feedback factors have to be independent random variables to end up with a normal distribution and central limit theorem? Can we really say they are independent?

    I know it's not a scientific argument, but from a theological perspective any grand Designer wouldn't make a world with runaway feedbacks. We wouldn't do that ourselves, would we, if we were having a stab at designing it? Another way of looking at it (metaphysically, but something that atheists might like to buy into) is that of all the potential universes that could exist, only the ones with negative feedbacks could endure, and as this one certainly endures, it must generally have negative feedbacks.

    ReplyDelete
  4. Dear ScientistForTruth,

    do they have to be independent for the central limit theorem to hold? Yes and no. Not really.

    The sum of a large number of independent, identically distributed variables is the simplest context in which to prove the central limit theorem.

    But the theorem holds much more generally. What the "large number of independent quantities" really requires is that the "total amount of independence" in all these quantities has to be large. So there's absolutely no problem if you sum 100 different effects that are mutually correlated - but not perfectly correlated - with each other.

    I totally share your negative-feedback intuition although my explanation would be less divine. Well, if there are positive feedbacks, there are much larger chances that they will become runaway positive feedbacks, and these systems will jump to a runaway behavior and drift elsewhere, until the exponential runaway approximation breaks down.

    Then they settle into a different regime that is dominated by more negative feedbacks. So in some sense, my argument is Darwinian in character - systems with positive feedbacks are not the fittest who survive - but please don't view this alternative explanation as an attack on God. ;-)

    Cheers
    Lubos

    ReplyDelete
  5. Dear Lubos, the "measurement" of the consequences of the calculation is hypothetical, and I agree that the "calculation" of it is theoretical, and valid "calculations" of it are by no means unique.

    I prefer to refer to "damping", rather than "feedbacks", as the origin of the overall stability.

    ReplyDelete
  6. Lubos,
    I am a CAGW skeptic (and becoming doubtful of even significant AGW) and believe the clouds and long term ocean currents dominate the climate variation. However, I want to point out that the feedback analysis you made leaves out the T^4 feature of the gray or black body, which would limit runaway temperature. The linear feedback series is only a small variation approximation.

    ReplyDelete
  7. Dear Leonard, thanks for your interesting point.

    It is, however, a double-edged sword. If it protected the Earth from excessive fluctuations in the past, it may do the same in the current era, too.

    Moreover, it takes a lot of change for the nonlinearity of T^4 to kick in. Set the current temperature, 15 °C i.e. T = 288 K, to one, and write T as (1+t). Then (1+t)^4 = 1 + 4t + 6t^2 + ...

    Keeping only the 4t term is the linearization. The next term is 6t^2. It only equals the linear term at t = 2/3, which corresponds to a temperature change of 192 K. That is a huge temperature change. For temperature changes much smaller than 192 K, the linear approximation of T^4 is almost perfect.
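
    A quick numerical check of that expansion (my sketch, not part of the original exchange), comparing the linear term 4t, the quadratic correction 6t^2, and the exact relative change of T^4:

    ```python
    T0 = 288.0  # reference temperature in kelvins

    for delta_T in (1.0, 10.0, 50.0, 192.0):
        t = delta_T / T0
        exact = (1.0 + t) ** 4 - 1.0   # exact relative change of T^4
        print(f"dT = {delta_T:5.1f} K: 4t = {4 * t:.4f}, "
              f"6t^2 = {6 * t * t:.4f}, exact = {exact:.4f}")
    ```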

    There are other nonlinear things, too. But many of them are similarly accurately linear over the whole interval of temperatures that has dominated the history of the Earth. I don't want to claim that most processes in the climate are linear. But what I want to claim is that the global temperature variations from the mean admit a very nice linearized theory.

    Cheers
    LM

    ReplyDelete
  8. Here's a guy (I.J. Katz) who argues that runaway feedback (in both directions) is entirely normal for Earth (all that water makes for a bistable system): [http://lanl.arxiv.org/abs/1002.1672]. Life seems to have managed to survive both the ice ages and the tropical heats.

    ReplyDelete
  9. Dear Nick,

    life has surely survived ice ages and interglacials, but the conclusion in the paper - which is just a nice try - seems to be exactly upside down.

    The very graphs he uses show that the global mean temperature wants to be in the central region most of the time, while it always and quickly "reflects" from the extreme high and extreme low temperatures (which seem to be nearly unbreakable bounds).

    So it means that these "extreme" temperatures during the glaciation cycles are not stable at all. Quite on the contrary, they're the most unstable points in the graphs. And they're unstable exactly because of negative feedbacks that regulate the deviations from the long-term average whenever the deviations are high enough.

    So I think that the equivalences stated in the article and in your comment are just completely wrong. By the way, the intermediate temperatures don't persist at constant levels. But that doesn't mean that they're "unstable" in any sense, surely not "runaway unstable" or "more unstable than the extreme temperatures". Temperatures are changing all the time, because of internal variability (weather that accumulates) as well as external influences. But the very fact of a "change" doesn't imply an "instability". This is just a completely bogus, mathematically sloppy way of using all these terms.

    Best wishes
    Lubos

    ReplyDelete