OK, the New York Times published a hysterical article about the alleged acceleration of global warming and Kip Hansen (quite a surname for a skeptic!) wrote a response at Anthony Watts' blog. To add a third hyperlink, Grant Foster didn't like it so he published a rant titled How Deniers Deny.

Great.

Whether the noisy weather data for the global mean temperature include a clear enough "linear trend" – the global warming itself – is already a bit uncertain. In particular, whether this apparent increase is more than a fluke is uncertain. However, whether the *quadratic coefficient* of a nonlinear model fit is nonzero (and whether it's positive or negative if it is nonzero) is even more uncertain.

I want to show you how incredibly uncertain the quadratic coefficient is. We will use the RSS AMSU v4.0 satellite data from 1979 to 2019. If you change TLT to TMT in the URL, you may get the same data for the mid troposphere instead of the lower one.

Your humble correspondent tends to use the RSS data instead of UAH not because I believe that UAH's numbers are worse – but mainly because I believe that both teams are doing professional work and by using the data from skeptics John Christy and Roy Spencer, I would create an unnecessary extra opportunity for a stupid kind of *ad hominem* criticism.

Again, download the file with the lower troposphere temperature data. The first column is the year, the second column is the month, the third column is the averaged global anomaly, the remaining eight columns are regional. We may download the data to Wolfram Mathematica:

midTroposphere = False; (* True/False *)

replaceRSSbyUAH = False;

whereString = If[midTroposphere, "TMT", "TLT"]; (* the channel name in the URL *)

a = Import[
  "http://data.remss.com/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_" <> whereString <> "_Anomalies_Land_and_Ocean_v04_0.txt", "Table"];

Length[a]

The final Length command tells us that we have an array with 494 items (lines). We omit the first three lines (formatting) and store the third column of the rest in "c" (491 monthly anomalies):

aPure = a[[4 ;;]]; c = aPure[[All, 3]]; Length[c]

The corresponding graph that we work with looks like this:

ListLinePlot[c]

Nice. It's 491 months (41 years without one month) of the global mean temperature. It's mostly increasing, a bit over 0.5 °C in those 41 years, and the peak in the middle is mostly due to the 1997-1998 El Niño of the century.
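For readers without Mathematica, the parsing step can be sketched in Python. This is a minimal sketch: the inline sample only mimics the column layout described above (three header/formatting lines, then year, month, global anomaly), while the real file has eight more regional columns and 491 data lines.

```python
import numpy as np

# Inline sample mimicking the RSS file layout described in the text:
# three header/formatting lines, then columns year, month, global anomaly.
# (The real file has eight additional regional columns per line.)
sample = """RSS MSU/AMSU anomalies (header line 1)
header line 2
header line 3
1979  1  -0.123
1979  2   0.045
1979  3   0.010"""

# Skip the first three lines and keep the third column as "c",
# just like aPure = a[[4 ;;]]; c = aPure[[All, 3]] in Mathematica.
rows = [line.split() for line in sample.splitlines()[3:]]
c = np.array([float(r[2]) for r in rows])
print(len(c))  # with the real file, this would print 491
```

With the real download in place of the inline sample, `c` would be the same 491-entry series plotted above.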

We may now calculate the slope and/or the quadratic coefficients. Clearly, the particular values we get cannot be taken too seriously. To get some idea about the error margin, we will sample the possible coefficients by removing 0-99 of the initial months (up to more than 8 years) and 0-99 of the final months. We will calculate the linear fit (whose variables have "only" in their names) as well as the quadratic one:

linlist = {}; quadlist = {}; linonlylist = {};

For[cStart = 1, cStart <= 100, cStart++,
 For[cEnd = -100, cEnd <= -1, cEnd++,
  ckus = c[[cStart ;; cEnd]];
  c2 = Transpose[{Table[i, {i, 1, Length[ckus]}], ckus}];
  nlm = NonlinearModelFit[c2, abs + blin x + bquad x^2, {abs, blin, bquad}, x];
  nnlm = Normal[nlm];
  linlist = linlist~Join~{D[nnlm, x] /. {x -> 0}};
  quadlist = quadlist~Join~{D[D[nnlm, x], x]};
  nlmonly = NonlinearModelFit[c2, absonly + blinonly x, {absonly, blinonly}, x];
  nnlmonly = Normal[nlmonly];
  linonlylist = linonlylist~Join~{D[nnlmonly, x]};
  ]
 ]

Note that quadlist stores the second derivative of the fit, i.e. twice the coefficient of x^2. The command

{Length[linonlylist], Length[quadlist], Length[linlist]}

returns {10000, 10000, 10000} as expected – we have ten thousand linear "only" coefficients, ten thousand linear ones, and ten thousand quadratic ones. The results are visualized as histograms:

Histogram[linonlylist*1200]

Histogram[linlist*1200]

Histogram[quadlist*1200^2]

I multiplied the linear trends by 1200 to get the trend in "°C per century" instead of "°C per month", and the quadratic coefficients by 1200 squared to get "how much the quadratic term contributes per century" (by "the quadratic term", we always mean a quadratic function that has the extremum in December 1978, our *t=0*). The results are the following graphs:

This is the linear only slope, in Celsius degrees per century. The command

1200*Total[linonlylist]/Length[linonlylist]

returns 2.12883 °C per century. Visually, the error margin is some 0.20 °C per century but the actual error margin is larger because our sampling isn't sufficient to incorporate all sources of uncertainty. This "over two degrees per century" trend is greater for RSS than for UAH; the latter has a trend below 1.5 °C per century.
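The resampling loop above is easy to reproduce outside Mathematica. Here is a rough Python sketch using numpy's polyfit; it runs on synthetic data with a built-in trend of 1.8 °C per century (an assumption for illustration, not the real download), and it uses a coarser 10×10 grid of removed months rather than the full 100×100 one. The factor 1200 is the same months-to-century conversion as in the text.

```python
import numpy as np

# Synthetic stand-in for the anomaly series: a 0.0015 °C/month trend
# (i.e. 1.8 °C/century) plus noise. Illustrative only, not the RSS data.
rng = np.random.default_rng(0)
months = np.arange(492)
c = 0.0015 * months + 0.1 * rng.standard_normal(492)

linonly, lin, quad = [], [], []
for start in range(0, 100, 10):          # drop 0-90 initial months
    for drop_end in range(1, 101, 10):   # drop 1-91 final months
        y = c[start:len(c) - drop_end]
        x = np.arange(1, len(y) + 1)
        b1 = np.polyfit(x, y, 1)         # linear-only fit: [slope, intercept]
        b2 = np.polyfit(x, y, 2)         # quadratic fit: [a2, a1, a0]
        linonly.append(b1[0])
        lin.append(b2[1])                # slope at t = 0
        quad.append(2 * b2[0])           # second derivative, like quadlist

# Mean slope converted from °C/month to °C/century (factor 1200):
print(1200 * np.mean(linonly))
```

With the real 491-month series in place of the synthetic one, the printed number would play the role of the 2.12883 °C per century quoted above.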

The other two graphs are related to our quadratic fit which is more interesting. Below, I show the distribution for the linear coefficient and the quadratic one:

and

You see that when the quadratic term is allowed, the uncertainty of the linear coefficient becomes much higher. The command

1200*Total[linlist]/Length[linlist]

returns the trend 2.53226 °C per century but the error margin is now comparable to 1 °C per century. We already see that 2.53 is greater than 2.13 – which, as smart readers notice, means that the quadratic term will mostly contribute a negative term. You know, the difference between these two linear trends would be just 0.4 °C or so but because the quadratic term accelerates, the actual contribution will be much larger, given by the command

Total[quadlist]/Length[quadlist]

{%*1200^2, " degrees Celsius per century from quadratic term"}

The result is –2.78944 °C per century from the quadratic term, cool! ;-) The error margin of this quadratic term is comparable to 3-4 °C per century, too. So we can't be too sure about the sign, of course.

Now, let's extrapolate all our quadratic models to January 2100. That's 121 years (1452 months) after January 1979 i.e. we want to calculate the temperature for the month number 1453. ;-) Recall that the current month is 492 according to our satellite calendar. The command is simple:

fitsquad =
 Table[linlist[[i]]*1453 + quadlist[[i]]*1453*1453 - linlist[[i]]*492 - quadlist[[i]]*492*492, {i, 1, 10000}];

ListLinePlot[fitsquad]

Histogram[fitsquad]

Total[fitsquad]/Length[fitsquad]

The ListLinePlot returns a nice oscillating graph that looks like a thick band of variable thickness – the index going from 1 to 10,000 isn't "linear" in a meaningful sense (note that the 10,000 entries are always just 100 flattened rows with 100 items each). But the histogram is meaningful and tells us that between December 2019 and January 2100, we expect a "warming" of –1.59282 °C, i.e. a cooling by 1.6 °C, although the uncertainty of this result is comparable to 5 °C. But the mean value is negative simply because in 80 years, the quadratic terms will matter more than the linear ones and the mean value of the quadratic coefficient determined from the RSS AMSU fit is negative.
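The extrapolation itself is just arithmetic. A tiny Python sketch with made-up coefficients (not the fitted RSS values) mirrors the Table expression above, with lin playing the role of an entry of linlist and quad an entry of quadlist:

```python
# Mirrors the Mathematica Table[...] above: the modelled change between
# month 492 (December 2019) and month 1453 (January 2100), combining
# the per-month coefficients as lin*x + quad*x^2.
def change_492_to_1453(lin, quad):
    return lin * 1453 + quad * 1453**2 - (lin * 492 + quad * 492**2)

# Made-up illustrative coefficients in °C per month (not fitted values):
# a positive slope plus a tiny negative curvature.
print(change_492_to_1453(0.002, -2e-6))  # negative: the quadratic term wins
```

Even a curvature as tiny as a few parts per million per month squared overwhelms a healthy positive slope over 80 years, which is why the sign of the quadratic coefficient dominates these long extrapolations.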

You might object that we got a negative quadratic coefficient mainly because the warmest anomalous year, the 1997-1998 El Niño of the century, was near the middle of our intervals. That made our 1979-2019 temperature graphs closer to "inverse U-shaped". But if you start to manually remove such cherry-picked flaws, you are already in the realm of "manual improvements" where the dependence on the subjective feelings – and political agenda – becomes huge and the uncertainty of the results becomes even greater than the uncertainty of the quadratic coefficient as we calculated it (without ad hoc exceptions).

*Ježek's Bugatti Step, a composition about acceleration. The 1931 foxtrot was created to celebrate Louis Chiron's Bugatti victory at the 1930 Brno circuit although we like to rewrite the history and say that he celebrated Ms Eliška Junková, a famous Czechoslovak racer who also drove a Bugatti in that period. Ježek also composed Mercedes, a tango.*

Climate alarmists may switch from linear extrapolations of the temperature – which are clearly not dangerous because our civilization has shown that it doesn't have the slightest problem with the 1-2 °C per century of warming and it would just continue – to quadratic extrapolations. Quadratic functions are more dramatic because their slope grows with time, you know. But the flip side for the alarmists is that the quadratic coefficient may very well end up being negative in many cases – like in our case – so they could easily predict an accelerating global cooling after the future hypothetical "peak temperature" moment.

At any rate, the uncertainty about the quadratic coefficient is huge – much greater than the uncertainty of the linear slope – and the sign itself is uncertain, too. To pick a preferred sign or even a preferred value is a matter of guesswork and all the fancy science is pretty much useless for such guesswork.

We could try to reduce the uncertainty of the quadratic fit by using a longer dataset than the 41-year-long satellite record. The disadvantage is that these longer periods of time include a greater fraction of additional non-carbon-dioxide drivers (Pacific Decadal Oscillation or any other natural or non-CO2 man-made driver). So the results for the quadratic coefficient could look more precise but they could also be more misleading.

For theoretical reasons, I expect the "rate of global warming" to be pretty much constant. On one hand, I do think that mankind will keep on increasing the CO2 emissions which contribute something – if some suicidal countries reduce these emissions greatly, other countries will do something very different. On the other hand, the dependence of the greenhouse warming on the concentration is logarithmic or "sublinear" which means that it is slowing down. These two effects go in opposite directions and I think that they're comparable, i.e. ready to approximately cancel each other.
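This cancellation can be made quantitative in a toy model (all numbers below are illustrative assumptions, not fitted values): if the CO2 concentration grows exponentially, C(t) = C0·exp(kt), then a logarithmic temperature response λ·ln(C/C0) equals λ·k·t, i.e. it grows exactly linearly and the warming rate is constant.

```python
import math

# Toy model: exponential CO2 growth combined with a logarithmic
# greenhouse response gives warming that is exactly linear in time.
C0 = 400.0   # ppm, assumed starting concentration (illustrative)
k = 0.005    # assumed exponential growth rate per year (illustrative)
lam = 3.0 / math.log(2)  # °C per e-folding, if sensitivity were 3 °C/doubling

def warming(t):
    """Temperature change after t years in the toy model."""
    C = C0 * math.exp(k * t)
    return lam * math.log(C / C0)

# The warming per decade is the same early and late: the rate is constant.
print(warming(10) - warming(0), warming(90) - warming(80))
```

Analytically, warming(t) = λ·k·t, so the two printed decade-increments are identical; faster-than-exponential emissions would give acceleration, slower-than-exponential ones deceleration.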

In other words, the future expected warming rate won't be too different from the warming trend of 1-2 °C per century for many decades to come and the only question is whether this likely continuing trend is dangerous or not. I find it obvious that it is not dangerous at all, at least not for 200 more years.
