Friday, November 20, 2015

Find global warming, win $100,000

Mmanu_F has pointed out that Douglas Keenan – a climate skeptic who has done statistical calculations as a trader in New York and London, earned lots of money, and written some articles – has announced his own Kaggle-like challenge

Douglas Keenan's Contest 1000
whose winner will receive $100,000 (zero point one million dollars). The winner needs to be the fastest one and has to send his entry before November 2016. An entry fee of $10 has to be paid so that non-serious contestants are discouraged.

What do you have to do? Download this text file. It contains 1,000 temperature records. Each of them is composed of 135 entries – global temperature anomalies for 135 years (emulating the years 1880–2014, if you wish) – written with a precision of 0.001 °C.
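If you prefer Python/NumPy to Mathematica, a loading sketch could look like this. The exact layout of the downloaded file is described in Keenan's instructions; the file name and the one-series-per-column layout below are just my assumptions, so adjust them if the real file differs:

```python
# Loading sketch: the file name and the one-series-per-column layout are
# assumptions here; check Keenan's instructions for the actual format.
import numpy as np

data = np.loadtxt("series.txt")   # assumed shape: 135 rows (years) x 1000 columns (series)
series = data.T                   # one 135-year series per row -> shape (1000, 135)
print(series.shape)               # sanity check
```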

All these 1,000 datasets were produced by realistic enough trendless models – probably slightly better than random walks. In some of them – the precise percentage is unknown but it is comparable to 50%, I think – an extra trend of 1 °C per century, i.e. 0.010 °C per year, was added with a random sign.
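To get a feeling for the setup, you may generate a toy imitation of it yourself. The sketch below is not Keenan's actual generator – he hasn't revealed it – just driftless random walks with an assumed yearly step size, with the ±1 °C per century trend added to roughly one-half of the series:

```python
# Toy stand-in for the setup described above -- NOT Keenan's actual generator,
# which is undisclosed. Driftless random walks with an assumed yearly step of
# 0.1 degC; a +/-1 degC per century trend is added to roughly half of them.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years = 1000, 135
step_sd = 0.1                                   # assumed size of year-to-year jumps

walks = np.cumsum(rng.normal(0.0, step_sd, size=(n_series, n_years)), axis=1)

trend_per_year = 0.010                          # 1 degC per century
has_trend = rng.random(n_series) < 0.5          # roughly half, as guessed above
signs = rng.choice([-1.0, 1.0], size=n_series)
trend_part = np.outer(has_trend * signs * trend_per_year, np.arange(n_years))

toy = np.round(walks + trend_part, 3)           # 0.001 degC precision, as in the real file
```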

You have to send Doug a list of 1,000 bits indicating which of the time series were "modified" by the added trend and which were not.

I've spent almost two hours with the challenge now. I've drawn dozens of histograms and 3D histograms in Mathematica etc., studying the distribution of temperature jumps and all things like that. In the end, I do believe that the overall slope obtained by linear regression gives you the best information about each time series.

The time series are clumped around the slopes 0 °C per century, +1 °C per century, and –1 °C per century, but I think that these three groups overlap so much that you won't reach a 90% success rate by any simple cut. But I am also sure that you may achieve a success rate much higher than 50%.
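Here is the slope-based guess written out as a Python sketch (my own fiddling was done in Mathematica): fit a least-squares slope to each series, declare a series "modified" when the absolute slope exceeds a cut, and write the 1,000 bits in the original order. The cut of 0.005 °C per year – halfway between the trendless and trendful slopes – is the natural first choice, but nothing guarantees it clears the 90% bar:

```python
# Slope-based classification sketch; the file names and the layout of
# series.txt are assumptions, as in the loading sketch above.
import numpy as np

series = np.loadtxt("series.txt").T              # (1000, 135), one series per row (assumed)
years = np.arange(series.shape[1])

# least-squares slope of each series, in degC per year
slopes = np.array([np.polyfit(years, s, 1)[0] for s in series])

cut = 0.005                                      # degC/yr, halfway between 0 and 0.010
guesses = (np.abs(slopes) > cut).astype(int)     # 1 = "trend was added", 0 = "trendless"

# one bit per series, in the original order; the actual submission format is
# whatever Keenan's instructions require
np.savetxt("entry.txt", guesses, fmt="%d")
```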

I've tried to divide the time series according to many other variables, believe me, but there seems to be no discrimination power in those. Correct me if I am wrong. Well, I haven't tried to send my "best guess" yet. ;-)

What does it imply?

Let's assume that no one wins Doug's challenge – and because of his Wall Street and City of London experience, I guess that he's pretty sure that no one will. What it means – also assuming that his "climate model" is sufficiently equivalent to the real-world global mean temperatures – is that no climate scientist can achieve a higher than 90 percent certainty about an "underlying trend" by looking at the measured global temperature data.

This statement is clearly morally equivalent to Keenan's challenge. He has produced 1,000 planets, each with a 135-year-long temperature record. Some of them had the extra underlying trend, others didn't. Because you won't make correct guesses in more than 90% of the cases, you can't be more than 90% certain about a generic planet or dataset.
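If you want this argument as a formula: for the best possible (Bayes-optimal) guessing rule, the expected fraction of correct answers equals the average certainty of the best guess, so a ceiling on the achievable accuracy is also a ceiling on the average attainable certainty:

```latex
% the Bayes-optimal accuracy equals the average certainty of the best guess
\[
  \mathrm{accuracy}_{\rm best}
  \;=\;
  \mathbb{E}_{x}\Bigl[\max\bigl(P(\text{trend}\mid x),\;1-P(\text{trend}\mid x)\bigr)\Bigr]
\]
\[
  \mathrm{accuracy}_{\rm best}\le 0.9
  \;\;\Longrightarrow\;\;
  \text{the average attainable certainty is at most }90\%.
\]
```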

Well, the certainty of your guess actually depends on the temperature trend – or on other quantities that you might find useful. We are not living on a generic planet, you may say. Assuming a big trend relative to the noise, you could be more certain. Except that the actual observed temperature trend is rather close to the trend "in between" the trendless and trendful series in Keenan's challenge.

The noise is simply too strong relative to the purported "signal" and there's no reliable enough way to distinguish them. The non-global, regional data are even noisier and therefore less helpful for the discrimination. To summarize, no statistical analysis of the observed temperature data could have achieved a 95% certainty about an underlying trend – about global warming – and all papers that claim so suffer from mathematical errors.

To justifiably claim the existence of an underlying trend, you actually have to reduce the noise considerably. How can you reduce the noise if its observed magnitude is substantial? Well, you have to explain a big part of these variations – previously considered noise – in terms of some predictable contributions. But because most of the year-to-year jumps are caused by pretty much random El Niño-like oscillations, I think that you won't succeed in that, either.
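Just to illustrate the mechanism with entirely made-up numbers: if a known index – the "ENSO-like" regressor below is a hypothetical stand-in – explains a chunk of the year-to-year scatter, then fitting the trend together with that index shrinks the residual noise, and the standard error of the trend shrinks with it:

```python
# Illustration of the "explain away the noise" route, with synthetic numbers.
# The ENSO-like index is a hypothetical regressor, not real data.
import numpy as np

rng = np.random.default_rng(1)
n = 135
t = np.arange(n)
enso = rng.normal(0.0, 1.0, n)                       # hypothetical explanatory index
temp = 0.005 * t + 0.08 * enso + rng.normal(0.0, 0.05, n)

def trend_stderr(y, X):
    # OLS fit of y on X; return the estimated standard error of the trend (column 1)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    dof = len(y) - X.shape[1]
    sigma2 = np.sum((y - X @ beta) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return np.sqrt(cov[1, 1])

X_plain = np.column_stack([np.ones(n), t])           # intercept + trend
X_enso  = np.column_stack([np.ones(n), t, enso])     # ... plus the explanatory index

print(trend_stderr(temp, X_plain))   # larger: the ENSO-like scatter counts as noise
print(trend_stderr(temp, X_enso))    # smaller: part of the scatter has been explained
```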

If no one wins Keenan's $100,000, it's pretty much a settled fact that the claims about a statistically reliable proof of global warming deduced from the temperature data are flawed. But feel free to try to win the challenge and feel free to identify errors in my reasoning.

