## Friday, June 05, 2015 ... //

### Karl et al. hiatus killer is "research" that began with conclusions

All the major teams that try to quantify the "global mean temperature" indicate that the warming/cooling trend has been essentially zero for almost two decades. RSS, a satellite record (built from the MSU/AMSU microwave sounders), shows a trend that is exactly zero (infinitesimally negative, I guess) over the most recent 18.5 years. If you tolerate small trends that are clearly not statistically significant, you will conclude that even longer recent periods show no sign of "global warming".

This absence of "global warming" in the recent decades is usually referred to as the "hiatus". I use the word myself – and so do most skeptics and alarmists (except for those alarmists who consider the very word a blasphemy, of course) – but if you asked me, I would probably reply that it is a silly name, because the word "hiatus" suggests a temporary break in a process that otherwise runs permanently.

I don't really believe that there is any sufficiently reliable, strong, persistent process that could be called "global warming". So the absence of a temperature trend in a 20-year window is as normal an outcome as you can get. Sometimes, the temperatures get higher in 20 years. Sometimes they get lower. An underlying "warming trend" from some source (e.g. CO2) can make the former possibility slightly more likely than the latter one. Sometimes the temperatures stay nearly constant. There's no rational need to invent catchy names for these three possibilities, especially not for the most mundane third possibility.

A whole discipline of pseudoscience – one pretending to be science, like most pseudosciences – has been created. It is the "climate change science" whose preachers – pretending to be scientists – shout that the sky is falling. The "hiatus" is an inconvenient truth for these "researchers", so, as of mid-May 2015, they had proposed 63 explanations of the hiatus.

The heat is hiding in the ocean and will marvelously jump out, debunk the second law of thermodynamics, and fry the world during Easter 2016. The "hiatus" is due to a Chinese chimney. It was eaten by a dog. And so on, and so on. Finally, as I saw on all the skeptical blogs – but Bill Z. made me much more interested – we have the 64th explanation of the "hiatus": the "hiatus" no longer exists, and global warming has returned to the most recent 20 years as well.

No group of 3 alarmists will agree on which of the 64 explanations is right, but you may be sure that they are collectively right, anyway. That's how consensus science works. They know the "right" conclusions even if they don't have the slightest idea what is going on or what the right explanation of anything could be.

The disappearance of the "hiatus", the 64th explanation of it, is the claim of a Science Magazine article

Possible artifacts of data biases in the recent global surface warming hiatus
by Thomas Karl and 8 coauthors from NOAA. They have "revised" their NCDC dataset (they have a new name to replace NCDC but I don't think that I should write down or you should memorize the new name because it's counterproductive to be distracted by irrelevant and mutating names of each piece of squashy junk that you may find in a cesspool) and the new NCDC record is the first one that "shows" global warming in the recent two decades, too.

You may compare an alarmist blog post with a skeptic blog post on this issue. The canonical alarmist blog post is Gavin Schmidt's
NOAA temperature record updates and the ‘hiatus’ (Real Climate)
and I am being extremely generous when I include the debatable letter "m" in Schmidt's name – while the best skeptical response (or an accumulation of responses) is one by Judith Curry:
Has NOAA ‘busted’ the pause in global warming? (Climate Etc.)
Thanks, Bill, for this URL!

First, the "global warming" trend they get (for the global mean temperature in the 1998–2014 window) is insignificant even by soft scientists' standards – below 2 sigma (they claim only a 90% confidence level, which corresponds to roughly 1.6 sigma). And this figure, 90%, is really a cherry-picked maximum from several conceivable, similar calculations. I may discuss these matters later.

But even the "microscopic tricks" that led to these adjustments make it extremely likely that the adjusted dataset is less accurate than the unadjusted one. The biggest change occurred in the oceans.

Now, one must understand that mainly due to the large heat capacity of water, the temperature variations above the ocean are smaller than those above the land. This comment applies to periodic oscillations as well as various recently observed "trends". This asymmetry between the land masses and the oceans has numerous consequences. For example, the seasonal variations of the global mean temperature emulate those of the Northern Hemisphere – because most of the variations come from the land masses and those lie mostly in the Northern Hemisphere.

You may talk about many trends – those in the ocean and above the land; and you may measure them in various ways, satellites, weather stations, weather balloons, buoys, or engine intake of marine vessels ;-).

You were expected to laugh when I mentioned the last source of data – the engine intake of marine vessels. It sounds great if one can extract some historical information about temperatures from such an unexpected source. But the intake wasn't a gadget designed to measure temperatures – unlike satellites, buoys, weather balloons, and weather stations. And unsurprisingly, such "amateur gadgets to monitor temperatures" have some problems that seem to be generally acknowledged. Dick Lindzen et al. cite heat conduction from the vessels' engines as one of them.

However, the shock is that the warming trend extracted from the marine vessels was copied onto the buoy time series. It means that an increasing linear function was simply added to the buoys' temperature time series – to make them more "well-behaved". Surprisingly for them ;-), once they added an increasing function to the temperature as a function of time, the slope (warming trend) extracted via linear regression increased! This surprising mathematical result must be the holy 884th sign of global warming that everyone was looking for!

You see what's going on. The warming trend indicated by the buoys – a project that was specifically designed by scientists to measure the temperature of the ocean – was completely erased by Karl et al. The time series was detrended and the trend was replaced by another trend extracted from a different source, one that wasn't meant to measure temperatures scientifically. Great.
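The arithmetic of this trick is trivial: adding an increasing linear function of time to a time series raises its least-squares slope by exactly the slope of the added function. A minimal sketch with made-up numbers (the 0.012 °C/yr "ship" trend and the noise level are illustrative assumptions, not the paper's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(t, y):
    """Ordinary least-squares slope of y regressed on t."""
    tc = t - t.mean()
    return np.dot(tc, y - y.mean()) / np.dot(tc, tc)

# Toy "buoy" series: annual anomalies for 1998-2014 with essentially no trend.
# All numbers here are illustrative assumptions, not the paper's actual data.
years = np.arange(1998, 2015, dtype=float)
buoys = rng.normal(0.0, 0.1, size=years.size)

# Copy a hypothetical "ship" warming trend onto the buoy series,
# i.e. add an increasing linear function of time.
ship_trend = 0.012  # degrees C per year, made up for this sketch
adjusted = buoys + ship_trend * (years - years[0])

# The regression slope of the adjusted series exceeds the original
# slope by exactly ship_trend, whatever the noise looks like.
print(ols_slope(years, buoys))
print(ols_slope(years, adjusted))
```

Whatever random noise you start from, the "adjusted" slope comes out higher by the full added trend – there is nothing surprising about it.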

They say that every year, the global warming has to be getting worse, and they have so much evidence. But they must be terribly unlucky, because in the only two decades in which we had numerous diverse projects specifically designed by scientists to measure temperatures, all of the professional ones seem to say that there is a "hiatus" throughout this 20-year period. So the faith finally has to depend on upside-down medieval trees and the engines of outdated marine vessels.

If you're not dull, you will ask: Why didn't they do just the opposite? They could have adjusted the trend from the marine vessels to agree with the lower trend from the buoys. You won't find any answer to this question except for the reminder that the marine vessels have existed for a longer time. That's great, but their longevity in no way implies that their measurement of the trend since 1998 is more accurate than the trend measured by the buoys.

I will be happy to listen to another explanation that I have overlooked, but the conclusion seems totally obvious to me. The trend was "copied" in this way in order to spread the highest trend that randomly appeared in one dataset to all the other datasets. It was done simply because the authors prefer higher trends. Higher trends mean higher grants.

Imagine that you have 10 sources of temperature measurements that have produced their estimates for the "global warming" trend (in °C per century). Let us assume that the numbers are randomly and Gaussian-normally distributed around the right value, which I call 1, with the standard deviation, also 1 (by the way, a pretty reasonable distribution for the warming trend). You will get numbers such as
{2.17, 0.34, -1.53, 0.95, 1.56, -0.18, -0.95, 0.29, 0.38, 1.66}
I produced this output with the Mathematica command
Round[RandomVariate[NormalDistribution[1, 1], 10], 0.01]
Well, the correct value of the trend is not accurately known if you only get the 10 random numbers. If you take the average of the 10 values, you get 0.469. It's not quite equal to the correct value 1 (fixed by construction), either, but it's closer. And the standard deviation of the arithmetic average is smaller than the original one by a factor of √10 ≈ 3.16 – about 0.32 instead of 1.
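For readers without Mathematica, here is the same thought experiment in Python (the seed, and hence the particular draws, is arbitrary; the only point is the √10 shrinkage of the standard error of the average):

```python
import numpy as np

rng = np.random.default_rng(42)

true_trend, sigma, n = 1.0, 1.0, 10

# Repeat the thought experiment many times: draw 10 noisy trend
# estimates from N(1, 1) and average them.
draws = rng.normal(true_trend, sigma, size=(100_000, n))
averages = draws.mean(axis=1)

# The spread of the averages is close to sigma / sqrt(10) ~ 0.316.
print(averages.std())
print(sigma / np.sqrt(n))
```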

What Karl et al. and most other preachers who decide to write a paper killing an inconvenient fact do is simply take the greatest value of the trend they may get in any way and declare it the truth. In my example, it's the number 2.17. They adjust all the data to match this highest trend. So the list of 10 trends above is replaced by
{2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17, 2.17}
Nice. The warming trend seems clear now. It's 4.6 times higher than the one we would have gotten from the average of the noisy numbers and 2.17 times greater than the correct one. The maximum among 10 random numbers from a distribution is clearly "very likely" to be higher than the actual mean value. To choose the maximum means to cherry-pick, to be biased.
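How biased is "keep the maximum of 10" on average? For 10 independent draws from N(1, 1), the expected maximum is about 2.54 – more than 1.5 standard deviations above the true value. A quick simulation of the rule (seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

true_trend, sigma, n = 1.0, 1.0, 10

# Apply the "declare the maximum of 10 estimates to be the truth"
# rule many times and look at its average outcome.
draws = rng.normal(true_trend, sigma, size=(100_000, n))
maxima = draws.max(axis=1)

print(maxima.mean())   # roughly 2.54, far above the true value 1.0
print(draws.mean())    # the honest average stays close to 1.0
```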

And Dick Lindzen and his Cato colleagues suggest that this step was taken repeatedly, even within this single paper. The authors also wanted a "better" reconstruction of the temperature in the Arctic Ocean. So they just copied the trends from the Arctic land masses.

This is likely to produce a bogus warming signal simply because much of the Arctic Ocean is covered by ice throughout the year. And when water and ice co-exist, the temperature has to be extremely close to 0 °C – the only temperature at which both phases may coexist (at atmospheric pressure) – and therefore the trend at these constantly frozen places is likely to be close to zero. (I hope that the U.S. readers agree that this is a rather easy-to-remember number for the freezing point, even if the little-known Democratic Party candidate called Chaffee or something like that, who wants to introduce the metric system in the U.S., loses, as everyone expects. He fell in love with it in Canada.)

Quite generally, all the trends and fluctuations associated with the land are greater than those around the ocean and this is bound to be true in the Arctic as well, probably even away from ice.

It is not possible to be "quite certain" that the desire to push the data in the "preferred" direction was the motivation behind any particular adjustment. However, when you see too many of them and a shockingly high percentage of them always works to increase the trend, you may become rather certain that something dishonest is going on. Your confidence may actually be calculated. If there are 20 adjustments whose signs should be independent and unbiased but you get 20 pluses, the probability that this occurs by chance is 1/2^20, about one part in a million.
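The 1/2^20 figure is just the one-sided sign test: under the null hypothesis that each adjustment is equally likely to raise or lower the trend, an all-plus outcome is binomially rare. A small check (the 20-of-20 and 17-of-20 counts are illustrative):

```python
from math import comb

# Probability that 20 independent, unbiased adjustments all carry a plus sign.
n = 20
p_all_plus = 0.5 ** n
print(p_all_plus)  # 1/1048576, about one part in a million

def sign_test_pvalue(k, n):
    """One-sided sign test: chance of k or more pluses out of n fair coins."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(sign_test_pvalue(20, 20))  # same one-in-a-million number
print(sign_test_pvalue(17, 20))  # even 17 of 20 pluses has p ~ 0.0013
```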

With twenty "plus signs" and zero "minus signs" among the adjustments that increase the alarmists' certainty that there exists an important persistent global warming trend, you may become 99.9999% certain of their misconduct. It's great if those folks may achieve 90% "certainty" that their ideology about the climate threats is right. Meanwhile, you may achieve a 99.9999% (roughly 5-sigma) certainty that they are crooks who should spend the rest of their life in prison – or hanging.

Lindzen and colleagues make one more general point that I have made several times in the past, too. These positive adjustments don't really make the case for "global warming" any stronger, because for it to get stronger, you need the trend to be high enough and the error margin to be small (so that the "certainty" or "confidence" that the trend is positive is sufficiently high as well). If they selectively increase the apparent or predicted trend measured in one way, it is good for the "faith" because there is a higher trend; but it is also bad for the "faith" because the differences between the different measurements (here especially between the marine-vessel and satellite measurements of the temperatures above the ocean) – and hence the error margin – go up, making the foundations shakier. So these selective, conclusion-driven adjustments are enough to convince everyone else that you lack scientific integrity, but they are not really enough to strengthen the case for a hypothesis that otherwise disagrees with the empirical data.

Judith Curry's blog contains lots of other (partially overlapping) observations about the paper by Karl et al. made by various people. It's staggering if you compare this serious scientific analysis by people who have apparently read the Karl et al. paper carefully (I didn't!) with the blog post by Gavin Schmidt that talks about virtually none of these technical issues. But one sentence by Schmidt seems to be his "main point":
The real conclusion is that this criteria for a ‘hiatus’ is simply not a robust measure of anything.
Funny. The fundamental dogmas of the warming church force you to say that a 90% confidence level is "near certainty" about some "very important findings" while no inconvenient truths can ever be "a robust measure of anything". That's nice, but the hiatus is still there and statistically significantly contradicts the predictions of the dangerous global warming theory. That theory claims that 20 years at current emission levels are enough for a statistically significant trend to emerge. That's why every decade (if not every year or every day!) of the "fight against climate change" is said to be important by the anti-carbon jihadists!

So if your theory predicts a significant trend in 20-year windows but the data show that the trend isn't there, your theory is still falsified and, with your fog or without it, you should still be arrested or hanged, Mr Schmidt.