Wednesday, November 13, 2013

Molecule painter and icy pal double the warming trend since 1997

...and the Real Climate guys instantly endorse them...

This is pretty hilarious. For years, we would hear from certain "researchers" that only a climate scientist may be taken seriously by the climate establishment (a string theorist with straight A's in every physics subject in grad school was never good enough). However, Stefan Rahmstorf of Real Climate has just written a text called

Global Warming Since 1997 Underestimated by Half
where he promotes a new paper in the Quarterly Journal of the Royal Meteorological Society titled
Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends
by Kevin Cowtan of York and Robert Way of Ottawa.

The climate alarmists didn't like the public's increasing awareness of the "global warming hiatus" – roughly the 350th proof that they're pseudoscientists – so they needed a paper to "explain" the hiatus away. You might think that the saviors would be climate scientists.

But if you click on the names above, you will learn that Cowtan is an expert in programming software that paints protein molecules, while geography student Robert Way is a cryosphere kid who likes to play with permafrost and a long-time assistant to the kook John Cook at the doubly ironically named "Skeptical Science" server.




To make the story short, they decided that HadCRUT4 can't be perfect if it shows too small a warming trend since 1997 or 1998. So a problem with HadCRUT4 had to be found. They decided that the gaps in the reported temperatures are the problem and that this problem may be exploited.




So they designed a new method to "guess" the missing temperature data at various weather stations and various moments by extracting some data from the satellites. They liked the result: it seemed to be able to calculate the missing figures – and especially because the warming trend since the late 1990s was raised by about 0.5 °C per century, which means that it doubled or tripled relative to the "statistical zero" reported by HadCRUT4.
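To fix the ideas, here is a minimal toy sketch of what "filling the surface gaps from satellite data" could look like. This is emphatically not their actual algorithm (their paper describes a more elaborate procedure), and every array, grid and offset below is a hypothetical placeholder:

```python
# A toy illustration of "hybrid" gap-filling (NOT the actual Cowtan-Way
# algorithm): missing cells of a surface-anomaly grid are filled with
# satellite anomalies shifted by a crudely estimated surface-minus-satellite
# offset. All arrays below are made-up placeholders.
import numpy as np

def hybrid_fill(surface, satellite):
    """surface: 2-D anomaly grid with np.nan where station data are missing;
    satellite: 2-D anomaly grid on the same cells, with no gaps."""
    filled = surface.copy()
    gaps = np.isnan(surface)
    # single global calibration offset, estimated where both datasets overlap
    offset = np.nanmean(surface - satellite)
    filled[gaps] = satellite[gaps] + offset
    return filled

rng = np.random.default_rng(0)
truth = rng.normal(0.3, 0.5, (4, 4))                    # hypothetical "true" anomalies
satellite = 0.8 * truth + rng.normal(0.0, 0.1, (4, 4))  # correlated proxy, no gaps
surface = truth.copy()
surface[0, 1] = np.nan                                  # two missing station cells
surface[2, 3] = np.nan
print(hybrid_fill(surface, satellite))
```

Even this toy version needs a relative calibration (the offset), and that is exactly where the systematic errors discussed below can sneak in.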

There are many reasons why I find it utterly insane for an alarmist to suddenly embrace such a development.

Don't get me wrong. The missing data are a problem for the integrity of a weather-station-based temperature dataset. But it's also a problem that has already been wrestled with. Richard Muller declared himself the leader of the world's best and most neutral team of researchers, called BEST (Berkeley Earth Surface Temperature), who decided to solve, in the best mathematical way imaginable, exactly this problem of how to fill the gaps.

(I have played with those geographical gaps as well, e.g. after I found out that 30% of weather stations have seen a cooling trend in their records, which are 80 years long on average, but I decided it would be better not to compete with Muller et al. because this work really needs several independent heads, some time, and funding to be done properly.)

In the end, Muller et al. concluded that it made sense to present temperatures starting from the first half of the 19th century, and for the 20th century they pretty much confirmed everything that HadCRUT and GISS had been saying. Now, a molecule painter in York, England arrives with his cryosphere kid pal in Ottawa, Canada, and they "show" that all these numbers were wrong.

It's likely that Cowtan and Way have not made a "silly numerical mistake", although I haven't verified their code, of course. They have followed a logic that may superficially look OK and they got a result. The result made the warming look larger – by coincidence – and this increased both their desire and their chances to publish the paper.

But we must ask: Is there any reason for such a "hybrid" dataset to tell us some more accurate information about the global warming trend than the non-hybrid datasets?

I am convinced that the answer is No. A paradoxical feature of their conclusion is that they used the satellite data to increase the (small) warming trend seen in HadCRUT – even though the satellite data actually show a cooling trend since 1997 or 1998. That's ironic, you could say. Why wouldn't you prefer to use the satellites for the whole Earth, anyway?

One must be extremely careful about splicing data from different datasets. It's very easy to fool yourself. It seems to me that they have no control over the error margins (especially the various kinds of systematic errors) that they introduced by their hybridization method. They just produced a computer game – a software simulation – that looked OK to them, but they made no genuinely scientific evaluation of the accuracy and usefulness of their method. The error margins may very well be larger than if they had only used one dataset.

Moreover, filling the gaps from a completely independent source of data may be fine for describing the local and temporary swings of the temperature, but it's the worst thing you can do when evaluating the overall long-term, global trend, exactly because each splice may introduce a new error from the relative calibration.

Consider this example: the gaps may occur mostly in the winter in recent years and mostly in the summer in older years (or the proportions may evolve, to say the least). If you substitute temperatures from a dataset with smaller variations (and that could be the case of the satellite data), you will increase the recent data and decrease the older ones, and you will therefore spuriously increase the warming trend (or turn a cooling trend into a warming one). If you realize how large the winter-summer temperature differences are, you may see that the effect on the calculated trend is substantial even though the local, temporary temperature oscillations may be reconstructed rather nicely. I don't see any evidence that they have protected their calculation of the trend against these obvious problems. They seem to be completely naive about such issues.
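This artifact is easy to demonstrate with made-up numbers. The sketch below constructs a "true" monthly series with a seasonal cycle and zero long-term trend, puts summer-shaped gaps into the early years and winter-shaped gaps into the recent ones, fills them from a lower-variance stand-in, and then fits a linear trend. Everything in it is an illustrative assumption:

```python
# A minimal simulation of the seasonal-gap artifact described above. The
# "true" monthly series has a seasonal cycle and ZERO long-term trend; the
# stand-in used to fill the gaps has the same cycle with half the amplitude.
# All numbers are made up for illustration, not real station or satellite data.
import numpy as np

years, months = 30, 12
t = np.arange(years * months)
month, year = t % months, t // months
season = np.cos(2 * np.pi * (month - 6) / months)   # +1 in "July", -1 in "January"
truth = 10.0 * season                               # true anomalies: no trend at all
proxy = 5.0 * season                                # lower-variance stand-in dataset

filled = truth.copy()
summer, winter = np.isin(month, [5, 6, 7]), np.isin(month, [0, 1, 11])
filled[(year < 15) & summer] = proxy[(year < 15) & summer]    # old gaps sit in summer
filled[(year >= 15) & winter] = proxy[(year >= 15) & winter]  # recent gaps sit in winter

annual = filled.reshape(years, months).mean(axis=1)
trend = np.polyfit(np.arange(years), annual, 1)[0]
print(f"spurious trend: {100 * trend:+.1f} °C per century")   # strongly positive
```

The fitted trend comes out strongly positive even though the underlying series has none – purely because of where the gaps sit and what they are filled with.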

For many reasons sketched above, I don't believe that their methodology gives more accurate estimates of the trend of the global mean temperature than Muller's BEST, for example.

But what I find remarkable is the weird sociological dimension of these "findings". After years in which everyone was told that the warming trend is known with certainty etc., it may easily change by 0.5 °C per century – even in the recent decades, which should provide us with the most controllable raw data. Half a degree Celsius per century is almost the whole warming trend we like to associate with the 20th century! So if a computer graphics programmer and a cryosphere kid can change the figure by 150% overnight, and Germany's most important alarmist under 50 instantly applauds them, someone else could surely change the trend in the opposite direction and the 20th-century warming would be gone, right?

The role of censorship or artificial endorsement of similar papers – which is likely to be influenced by politics and predetermined goals – would then become primary. That's not science.

Just to be sure, I don't believe that the uncertainty concerning the 20th-century warming trend is this high. The warming trend from 1901 through 2000, calculated by some linear regression from some hypothetical "exact global temperature data", is 0.7 °C plus or minus 0.2 °C, I would say. Changing it by 0.5 °C is a 2.5-sigma modification, a rather unlikely event. The probability that the number (0.7±0.2) °C is negative is equivalent to 3.5 sigma standard deviations – something like 1 in 3,000. It can't be "quite excluded" that the accurate warming trend was actually negative, but it is very unlikely.
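For completeness, this tail probability is a one-line check once the ± sign is read as a normal distribution (see the exchange with Eugene S in the comments below); this is just a back-of-the-envelope sketch:

```python
# Tail probabilities for the 0.7 ± 0.2 °C figure, read as the normal
# distribution N(mean=0.7, sd=0.2); the inputs are the rough estimates
# quoted in the text, not measured data.
from scipy.stats import norm

mean, sigma = 0.7, 0.2
print(0.5 / sigma)             # a 0.5 °C change is a 2.5-sigma move
p_neg = norm.sf(mean / sigma)  # P(trend < 0): the 3.5-sigma one-sided tail
print(p_neg, 1.0 / p_neg)      # about 2.3e-4, i.e. of the order of 1 in a few thousand
```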

While the willingness of Herr Rahmstorf to jump the shark knows no limits and his endorsement of this paper is a free ticket to a psychiatrist's care (if a warming tripler were said to be hidden inside, Rahmstorf and his soulmates wouldn't hesitate, not even for a second, to eat a hamburger made out of feces), you should also be assured that all these (theoretically) radical changes of the climate record are still inconsequential for any question of practical life.

A change of the trend by 0.5 °C per century is something you can't feel on your skin – not even if you wait for 100 years and carefully remember how you felt before you began to wait. ;-) And I don't have to explain that such temperature changes can't be dangerous – a temperature change may only be dangerous if it is at least 10 times larger than what you can feel. There is no chance that danger is hiding here.


snail feedback (19) :


reader Uncle Al said...

Parameterization obtains any desired conclusion. Economics almost exactly proves Khmer Rouge scientific socialism to ruthless laissez faire capitalism to Federal Reserve frank criminality. 1000 Fourier transform terms ring the bell. We anticipate almost exactly empirical M-theory when heteroskedasticity is sated. The second orbit, below, writes its path.

http://www.youtube.com/watch?v=QVuU2YCwHjw

Reality is what does not go away when you stop believing it.


reader tomwys said...

Brilliant analysis, Luboš, and the mismatching of data sets echoes Mann's normalization errors. Look carefully at how HadCRUT4 differs from HadCRUT3: the addition of almost 200 circumpolar data stations (mostly Russian, some Canadian) and NO EQUIVALENT Southern Hemisphere additions, where it became sharply colder in the last decade, courtesy of record sea-ice albedo.

Conclusion, these amateurs are double counting (twice!), with predictable results.


reader Daneel_Olivaw said...

"After years in which everyone was told that the warming trend is known with certainty etc., "

Actually, what has been said is that linear trends estimated using relatively short periods are inherently uncertain, so this study fits perfectly with that idea.

In fact, may I note that this paper's estimate of 0.12 °C/decade is within the confidence intervals of the original HadCRUT4 dataset according to the SkS trend calculator (though the error bars shown in Table III are much smaller, I wonder why).

"The warming trend from 1901 through 2000, calculated by some linear regression from some hypothetical "exact global temperature data", is 0.7 °C plus minus 0.2 °C, I would say. Changing it by 0.5 °C is 2.5-sigma modification, a rather unlikely event"

The problem is that this didn't happen. The 1901-2000 trend is not really changed, since this paper only analyses data from 1980 onward, according to their Figure 2 and their use of satellite data, which are not available for the 1900s.

Also, Table IV shows the biases in trends starting in different years, and for 1990 it is just -0.02, which is exactly within the error margins you seem to accept.


reader cksvnsk said...

Simpson it is, but the paradox:
http://en.wikipedia.org/wiki/Simpson%27s_paradox


reader Eugene S said...

Gibberish (except for the last sentence, which is a quote from a speech that science-fiction writer Philip K. Dick gave).


reader Werdna said...

I think they are doing something which is existentially wrong, which is trying to interpolate between measures of surface temperature using measurements of atmospheric temperature well above the surface. Personally, I think this is junk, just a wrong thing to do. But interestingly enough I do remember someone else doing a similar statistical procedure who was more careful:

http://noconsensus.wordpress.com/2010/02/13/7686/

At any rate, you can, I personally believe, use the satellite data to interpret the surface data. I did so here:

http://devoidofnulls.wordpress.com/2011/07/16/relating_lower_tropospheric_temperature_variations_to_the_surfac/


And I concluded that if we take the satellite data seriously, it strongly suggests the near-surface warming trend is *overestimated* – although other interpretations are possible, this is the only one consistent with current understanding (aside from implicitly implying a low sensitivity).


reader Brian G Valentine said...

Lysenko had no use for the genetic theory of natural selection, and the author of the referenced paper and his ilk have no use for energy conservation principles.

There is nothing anybody can do. I wonder if Lysenko found it useful to call his detractors "deniers"?


reader Eugene S said...

"The probability that the number (0.7±0.2) °C is negative is equivalent to 3.5 sigma standard deviations – something like 1 in 3,000."

This makes no sense to me; what am I missing? How can 0.7 plus or minus 0.2 ever go into negative territory?

Hilarious to read the comments section on Real Climate where a number of people meekly ask how much credence should be given to a paper by people with few climate science credentials and for their audacity are savaged by the enforcers of the much-vaunted consensus.

Also instructive to see how Gavin Schmidt airily dismisses such concerns. In Schmidtworld, the only "outsiders" going against consensus are "retired dentists", presumably with no hope of getting anything right.


reader Luboš Motl said...

Dear Eugene, 0.7 ± 0.2 does NOT mean "between 0.5 and 0.9 with a uniform distribution". Instead, it implicitly means "a normal distribution" with the standard deviation equal to 0.2. Please read

http://en.wikipedia.org/wiki/Normal_distribution



if you don't know what the normal distribution is. You can't really encounter distributions that are "guaranteed" to stay within an interval, like from 0.5 to 0.9, in Nature. The probability that the deviation is larger is *always* nonzero. The distribution of any quantity that is obtained by complex enough methods, with sufficiently many sources of uncertainty that add up sufficiently linearly, is always extremely close to the normal (Gaussian) distribution.


This is not just semantics or a matter of definitions. If you take a real-world problem, like the quantification of the warming in the 20th century, and there's some error margin, there's always a chance that the figure is hugely wrong. So it's always wrong – childish – to assume things like a sharply truncated uniform distribution. It's always mature to take the normal distribution as the zeroth guess.
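A tiny numerical illustration of this point, using the article's 0.7 ± 0.2 figure and assuming the normal reading described above (a sketch, nothing more):

```python
# What "0.7 ± 0.2" means under the normal (Gaussian) reading: the value is
# NOT guaranteed to stay inside [0.5, 0.9] - there is always a nonzero tail.
from scipy.stats import norm

print(2 * norm.sf(1.0))    # outside [0.5, 0.9] (one sigma): about 32%
print(2 * norm.sf(2.0))    # outside [0.3, 1.1] (two sigma): about 4.6%
print(norm.sf(0.7 / 0.2))  # below zero (3.5 sigma, one-sided): about 0.023%
```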


reader Eugene S said...

O.k. thanks, I am of course familiar with normal (Gaussian) distribution a.k.a. the bell curve but what I did not know was that

0.7 ± 0.2 does NOT mean "between 0.5 and 0.9 with a uniform distribution". Instead, it implicitly means "a normal distribution" with the standard deviation equal to 0.2.


reader Casper said...

Alexander Ac will be happy again. The global warming did not stop after all, it was just a mistake.


reader Bryan William Leyland said...

So satellite temperatures don't count any more? Strange!


reader David A said...

Yes, and the average T is silly, when it takes far less energy to raise T in the dry polar regions. Also, these same climate science yahoos were ripping on the satellite data sets before deciding to extrapolate from the most inaccurate portions of them (the high latitudes, 80-plus degrees), just to get the answers they want.


reader Mervyn said...

I am sick and tired of all these climate change charlatans acting like modern-day snake-oil salesmen trying to flog their "catastrophic man-made global warming potion". If these fraudsters were part of the corporate world, by now they would most certainly have faced criminal charges from the corporate regulators for engaging in misleading and deceptive conduct. But then, we're talking here about global warming … the most corrupt field of science the world has ever experienced.


reader Brian G Valentine said...

There have been worse cases in history, but without so many profoundly uneducated socialist politicians, and others with little else to give their life any meaning, endorsing it.


reader Alexander Ač said...

Luboš,

I always told you that if you know what is so terribly wrong with AGW, go and publish it in a peer-reviewed journal – you would be more famous than Richard Lindzen ;-). You said everything is already published, so you don't have to.

So the real point is NOT whether people are climatologists or not, especially given that what they do is "only" to make the mathematical algorithm for the calculation of the global temperature (and not to explain why it is rising, what causes it, etc.) better/more complete.

BTW, it is not such a big deal what they found, since the HadCRUT dataset was excluding the most rapidly warming areas of the globe. More interesting, I think, is that now HadCRUT shows an even stronger warming trend than the GISS dataset.

BTW, once you properly exclude natural contributions to year-to-year temperature variability, there is no such thing as a "warming pause". There is no reason to attack the "overblown" climate sensitivity estimates, whose mainstream value is around 3 °C per doubling of CO2.


reader Alexander Ač said...

Casper,


No, I am not – since I am not "happy" with a warming atmosphere, which undermines what we have created over the last centuries.

BTW why did you expect that GHGs miraculously "stopped" or "reduced" their well known physical properties?


reader Luboš Motl said...

Dear Alex, the authors of this particular splicing paper aren't excessively bright minds and they won't be famous, either. Their paper won't really influence the thinking of anyone.


Dick is great and somewhat famous but he should be vastly more influential in atmospheric physics than he is. He has been mostly sidelined by tons of lesser minds and he sort of got used to it, but I wouldn't.


The climate sensitivity at or above 3 deg C per doubling is empirically excluded.


reader Fat_Man said...

If you torture data long enough it will confess.