## Thursday, May 05, 2016 ... /////

### Recent 45 or 30 years: linear warming a better fit than the CO2-related warming

A climate alarmist named Tim Palmer gave a talk about chaos and global warming at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The talk was even discussed in The National Post – most actual physics talks at the PI surely don't enjoy this luxury.

He's an okay speaker but the content was very problematic. At the beginning (around 20:00) he started with a good question someone posed to him:

> At the Perimeter, people are used to talks about quarks and strings. So why would they listen to a monologue about the climate alarmism junk?

The correct answer is, of course:

> The climate alarmists are so shameless that they want to penetrate into every institution and every secret room of yours. And the Perimeter Institute contains lots of extreme leftists who are shameless enough to embrace any fashionable left-wing fad and claim that it's theoretical physics.

Tim Palmer's answer was:

> The climate alarmism is a big problem in theoretical physics, as I will convince you.

Well, it's not. But he got a big applause for this sentence, strengthening my own explanation above.

Palmer gave a short introduction to chaos theory of the kind you can get even in the most superficial pop-science publications, mentioned some hurricane in 1987 or so, and said that the future may be predicted despite the chaos. He promoted the usual misconception that the probability of a meteorological phenomenon X may be calculated as the percentage of models or model runs that produce a future obeying the conditions of X.

Needless to say, this is extremely far from the truth in the real world. The problem is that the models share very many systematic errors, and a big fraction of these errors is more or less completely shared by the different models (and they're certainly shared by different runs of the same model). In some cases, the term "systematic error" is a euphemism because the errors are as big as the effects the models are trying to predict – so the models are literally worthless for many purposes.

So by running a model many times and averaging the results (and calculating the fraction that defines the probability is an example of averaging: one averages the numbers 0 or 1 from the different models or runs), you just don't improve your accuracy much, and maybe you don't improve it at all.
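The point about averaging runs that share a systematic error can be illustrated with a tiny simulation. This is a minimal sketch with made-up numbers (`SHARED_BIAS` and `NOISE` are illustrative, not taken from any climate model):

```python
import random

random.seed(0)  # reproducibility

TRUTH = 0.0        # the true value of the predicted quantity
SHARED_BIAS = 1.0  # systematic error common to every model/run (made up)
NOISE = 0.5        # independent run-to-run scatter (made up)

def model_run():
    # each run carries the same shared bias plus its own random noise
    return TRUTH + SHARED_BIAS + random.gauss(0.0, NOISE)

# averaging more runs kills the noise (~1/sqrt(n)) but not the shared bias
for n in (1, 10, 1000):
    avg = sum(model_run() for _ in range(n)) / n
    print(f"n={n:5d}  average={avg:+.3f}  error={abs(avg - TRUTH):.3f}")
```

No matter how large the ensemble gets, the average converges to the shared bias, not to the truth.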

Richard Feynman made the same point – that averaging many opinions doesn't help – when he talked about "judging books by their covers". He was a committee member along with many other people who weren't doing their work properly, and he was annoyed by the herd instincts that were implicitly claimed to be more important than careful work:

> This question of trying to figure out whether a book is good or bad by looking at it carefully or by taking the reports of a lot of people who looked at it carelessly is like this famous old problem: Nobody was permitted to see the Emperor of China, and the question was, What is the length of the Emperor of China's nose? To find out, you go all over the country asking people what they think the length of the Emperor of China's nose is, and you average it. And that would be very "accurate" because you averaged so many people. But it's no way to find anything out; when you have a very wide range of people who contribute without looking carefully at it, you don't improve your knowledge of the situation by averaging.
Palmer and similar people obviously don't understand these things – or they do understand and pretend not to understand. So the main message of Palmer's talk was completely wrong. Too bad that he claims to be a leader of an Oxford climate predictability group.

But it made me play with some "repeated models" designed to describe the global temperature data and other things. At the end, I made a simple calculation that I had wanted to do for some time but hadn't done yet. The calculation gives an empirical answer to the question:
> When you try to approximate the global temperature as a simple increasing function of time, is the increasing function proportional to the CO2 greenhouse effect (which is supposed to be accelerating) a better fit than the simple linear warming?
This is basically equivalent to "is there empirical evidence for the acceleration of the warming rate"? If the linear fits are better, there's no evidence and it may look more likely that the warming we observe is some very slow process or cycle that could have existed already 200 years ago when the CO2 emissions were negligible.

Everyone knows what a simple linear fit is. The CO2-linked fit was a linear fit expressing the temperature as $T(t) = a+b\,d(t)$ where $d(t)$ is the number of doublings of the CO2 concentration. If $y$ denotes the year and $c(y)$ is the concentration in ppm:$\eq{ c(y) &= 280+120 \exp((y-2016)/57)\\ d(y) &=\frac{ \log[c(y)/c(1750)] }{\log 2} }$ For the temperature data, I used the HadCRUT4 dataset from 1850 through March 2016. What are the results?
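The CO2-linked predictor is easy to code up. A minimal sketch of the $c(y)$ and $d(y)$ formulas above (the 280 ppm baseline, the 120 ppm excess in 2016, and the 57-year e-folding time are taken from the formula in the text):

```python
import math

def c(y):
    """CO2 concentration in ppm: a 280 ppm baseline plus an exponentially
    growing excess with a 57-year e-folding time (formula from the text)."""
    return 280.0 + 120.0 * math.exp((y - 2016) / 57.0)

def d(y):
    """Number of CO2 doublings relative to the year 1750."""
    return math.log(c(y) / c(1750)) / math.log(2.0)

print(round(c(2016), 1))  # 400.0 ppm in 2016 by construction
print(round(d(2016), 2))  # roughly half a doubling since 1750
```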

Well, I found the best linear fit and the best CO2 fit in several cases – for the recent $N$ years. In the table below, you see the results for different values of $N$. The $\Delta T$ columns are the root mean square deviations of $T(t)$ from the best fit, expressed in Celsius degrees:$\begin{array}{|c|c|c|} \hline N & \Delta T_{\rm lin} & \Delta T_{\rm CO2} \\ \hline 160& 0.198 & 0.172\\ 130& 0.173 & 0.160\\ 100& 0.169 & 0.156\\ 80 & 0.175 & 0.159\\ 60 & 0.149 & 0.142\\ 45 & 0.133 & 0.135\\ 30 & 0.1305 & 0.1311\\ \hline \end{array}$ For shorter intervals, the differences between the quality of the fits become negligible. You see that the differences are small everywhere, but for 60 or more years, the CO2-linked fit is slightly better – by roughly 10% in the standard deviation when you take the recent 150 years or so.
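A sketch of the comparison: for each window of $N$ recent years, fit $T = a + bx$ by least squares with $x$ equal either to the year (linear fit) or to the number of CO2 doublings, and compare the RMS residuals. The real numbers in the table require downloading the actual HadCRUT4 anomalies, so a hypothetical stand-in series is used below just to show the procedure:

```python
import math

def co2_doublings(y):
    # concentration model from the text: 280 ppm plus an exponential excess
    c = lambda yy: 280.0 + 120.0 * math.exp((yy - 2016) / 57.0)
    return math.log(c(y) / c(1750)) / math.log(2.0)

def rms_residual(xs, ts):
    """Best least-squares fit t = a + b*x and its RMS residual."""
    n = len(xs)
    mx, mt = sum(xs) / n, sum(ts) / n
    b = (sum((x - mx) * (t - mt) for x, t in zip(xs, ts))
         / sum((x - mx) ** 2 for x in xs))
    a = mt - b * mx
    return math.sqrt(sum((t - (a + b * x)) ** 2
                         for x, t in zip(xs, ts)) / n)

# hypothetical stand-in for the HadCRUT4 anomalies (trend plus wiggles)
years = list(range(1850, 2017))
temps = [0.005 * (y - 1850) + 0.1 * math.sin(y / 7.0) for y in years]

for N in (160, 100, 45):
    ys, ts = years[-N:], temps[-N:]
    lin = rms_residual([float(y) for y in ys], ts)
    co2 = rms_residual([co2_doublings(y) for y in ys], ts)
    print(N, round(lin, 4), round(co2, 4))
```

Swapping the real anomaly series into `temps` reproduces the kind of table shown above.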

For the recent 45 years, the linear fit is better by two percent or so. That's an even tinier difference, but if the global temperature were linked to the CO2 concentration, you would expect to see some acceleration in those 45 years as well – and you see the opposite. Note that the CO2 concentration excess above 280 ppm (and, to a good approximation, the emissions) increases by a factor of $e$ every 57 years or so, as my simple exponential formula above says. This corresponds to a doubling time of 40 years or so. This exponential growth has fitted the CO2 data well for two centuries, and only recently, maybe, has the growth rate begun to slow down.
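The quoted doubling time follows directly from the 57-year e-folding time in the formula above; as a quick check:

```python
import math

# an e-folding time t_e implies a doubling time t_2 = t_e * ln(2)
doubling_time = 57 * math.log(2)
print(round(doubling_time, 1))  # 39.5
```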

Even if you take the recent 45 years – longer than the interval in which the human CO2 emissions normally double – there's still no (or a slightly negative) trace of an acceleration of the warming in the data. It may be due to chance, of course, but since it happened now, it may very well happen again in the following 40 years.

Even if CO2 were responsible for most of the trend-like changes of the global mean temperature, the data make it very clear that 45 years is too short a period of time to be able to see the acceleration or deceleration of the man-made emissions in the temperature data.

So if you think that a cooler globe is better (it's almost certainly not), that it's great to reduce the man-made emissions by 40% despite the huge economic costs, and that you will see some clearly detectable effects of such a reduction (a deceleration of the emissions) in the data by 2060, you are fooling yourself. You won't see anything of the sort. The acceleration – doubling – of the emissions in the recent 40+ years is invisible in the climate data, which means that any simple conclusion about the acceleration of the emissions – or about emission reductions – drawn from the global temperature data is known to be invalid if you only have some 45 years to see what's going on.

Some of the leftist satirical TV programs have shown old people who don't care about global warming because they're too old and won't be affected. A funny thing is that no one alive today will be threatened by the temperature consequences of a moderate change in the CO2 emissions, and most of us won't be able to observe any difference even with the finest devices in our lifetimes.

If there is a problem, it's demonstrably a science-fiction-style problem for generations in a very distant future – and these future generations will surely have a better idea what to do and what not to do, too.

As our anthem sung by some of the conservative pop stars correctly says: don't fret, the temperatures are normal. We're fine, it's just a little toasty weather that can bring us together. Climate change is hogwash. I actually like the song a lot (although the lyrics become "too strong" at some point) – the harmony is great, the people are singing well and smiling, the stupid blonde at the end isn't missing at all, and it's a worthy competitor to the songs by the Minnesotans for Global Warming and other excellent producers of this kind of music.