Have you seen this cute video? (Via G.D.)
In the previous posting, we tried a couple of thought experiments. One of them was to imagine that the "global temperature" behaves like Brownian motion. This is roughly as good an approximation as the model of "random noise" around a "long-term average" in which the years are independent: the long-term persistence in the real world exceeds that of independent random noise, but is lower than that of Brownian motion.
By the term "Brownian motion", I always meant a discrete random walk - one whose position after N steps follows a binomial distribution. Sorry for my sloppy terminology.
In our simple toy model, the annual temperature jumps by +dT relative to the previous year with probability 50 percent, and drops by dT with the same probability. What is the probability that all 7 years 1999-2005 will see a strictly lower temperature than 1998?
William Connolley uses the same technique he uses for the "real" climate - namely Monte Carlo simulations. He wrote the following comment:
- ... My calculation (by monte-carlo; I guess I should be able to do it exactly but I've forgotten how to if I ever knew) is that the chances are about 1/4 for equal up-down increments ...
OK, what is the correct result? Let us call the temperature in 1998 "zero" and choose units in which "dT=1". In 1999, "T=+1" with probability 50%, which already violates our goal; only the case "T=-1", also with probability 50%, leaves a chance for 7 cooler years. From now on, we only track the cases that have not yet violated the condition.
In 2000, the temperature will be either "T=0" or "T=-2". Each has probability 50% within the surviving branch, which means 25% of the total; we will quote probabilities as fractions of the total from now on. The original problem was defined so that "T=0" (tying the record) already violates the condition. So only the 25% of cases with "T=-2" in 2000 survive.
Now you can already see that the final result will be below 1/4.
In 2001, it will be either "T=-1" with probability 1/8, or "T=-3" with probability 1/8, too. Both of them work.
In 2002, it will be "T=0" with probability 1/16 - a violation - or "T=-2" with probability 1/8 (two paths of 1/16 each), or "T=-4" with probability 1/16. Clearly, only the latter two cases, with total probability 3/16, survive.
In 2003, we will have "T=-1" with probability 1/16, "T=-3" with probability 3/32, or "T=-5" with probability 1/32. The total probability remains 3/16.
In 2004, the temperature will be "T=0" with probability 1/32, which means a failure, or "T=-2" or less with overall probability
- (3/16-1/32) = 5/32
If the temperature in 2004 is "T=-2" or less, then it will be below zero in 2005, too, and we're done. The final result: assuming the uniform-step Brownian motion model, the probability of getting 7 cooler years after any given year (the one we called 1998) equals 5/32.
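The step-by-step bookkeeping above can be cross-checked by brute force, since there are only 2^7 = 128 equally likely histories. A minimal Python sketch (the function name is mine):

```python
from itertools import product
from fractions import Fraction

def prob_all_cooler(steps=7):
    """Enumerate all 2**steps equally likely +/-1 histories and
    count those whose partial sums stay strictly below zero."""
    wins = 0
    for history in product((1, -1), repeat=steps):
        t = 0
        for step in history:
            t += step
            if t >= 0:      # tied or beat the 1998 record: failure
                break
        else:
            wins += 1       # all 7 years stayed strictly cooler
    return Fraction(wins, 2 ** steps)

print(prob_all_cooler())    # 5/32
```

Exactly 20 of the 128 histories survive, reproducing the 5/32 derived above.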
What about the Monte Carlo models? Our senior computer modeller told us that the right answer is
- 1/4 = 8/32
This is 60 percent above the correct result 5/32 (the correct answer being the 100% baseline, of course). Imagine that. A simple mathematical task involving one integer variable and seven operations "+1" or "-1" - a task that most of you could have solved analytically in kindergarten. Even if neither you nor your nurse had known "probabilities", you could have listed the 128 equally likely histories (sequences of small integers) and counted how many of them satisfy the criterion. (A reader in the fast comments arrived at this idea independently. A programmer could also use a computer to compute the exact result by listing all 128 histories.) William Connolley had to use a computer with a Monte Carlo program (for William: programme), and he still overshoots the correct result by 60 percent.
What probability would you guess after half a minute of thought? 1/6? Your relative error would be about 9 times smaller than William Connolley's with his "scientific approach" involving computers. Or 1/8, by arguing that the warmest year among eight in a more or less random sequence is the first one with probability 1/8? You would still be 3 times closer to the truth than William.
Now imagine that you replace this funny model - in which we add plus or minus one seven times in a row - by a semi-realistic model of the climate with billions of variables, thousands of physical effects (many of them hypothetical, and many more probably missing), and hundreds of mutual relations and feedbacks. These mechanisms involve many non-trivial cancellations that make the individual terms more important than in our case (so that a 60% error in anything is a disaster). You also improve the time resolution by three orders of magnitude and extend the predictions from 7 years to 50 years. Finally, you give the new problem to William Connolley or his friends. What can you expect from their results if they cannot even calculate 5/32 correctly? Complete chaos, of course. Worthless numbers. Junk. Global warming. Methodological rubbish, to use the words of Hans von Storch.
I suspect that they run their unrealistic computer games - which overshoot the global temperature anyway because they assume, among other things, that ice melts 1,000 times faster than it does - approximately three times, completely without any understanding, intuition, or clue about what's going on in the "black box". If the third result is sufficiently politically correct and predicts enough global warming to satisfy their brothers and sisters in the "scientific consensus", they promote the result into a scientific conclusion, and their friends at the scientific journals happily publish this new kind of "science". This is what our society pays billions of dollars for.
Biased Brownian motion
We also mentioned the asymmetric case, which William Connolley surprisingly calculated pretty well - he obtained 1/10 while the correct result is 13/128. Imagine that the temperature increases by +1.5 with probability 50% and decreases by 1.0 with probability 50% (in centikelvins, say - but the choice of unit does not matter, of course). What is the probability that all 7 years after 1998 will be cooler? We already know the method, so let's list the probabilities:
- 1999: T=-1 (1/2), otherwise T non-negative
- 2000: T=-2 (1/4), otherwise T non-negative
- 2001: T=-3 (1/8), T=-0.5 (1/8), otherwise T non-negative
- 2002: T=-4 (1/16), T=-1.5 (1/8), otherwise T non-negative
- 2003: T=-5 (1/32), T=-2.5 (3/32), otherwise T non-negative
- 2004: T=-6 (1/64), T=-3.5 (1/16), T=-1 (3/64), otherwise T non-negative
- 2005: T=-7 (1/128), T=-4.5 (5/128), T=-2 (7/128), otherwise T non-negative
The total of the surviving probabilities in 2005 is 13/128, which is indeed close to 1/10. Once again, assuming Brownian motion with up/down steps in the ratio 1.5 to 1, there is only about a 10% probability that all seven years following a year like 1998 will be cooler.
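The biased case can be verified by the same brute-force enumeration. A sketch (mine, as before); to keep the arithmetic in exact integers, the steps are doubled, so +1.5 becomes +3 and -1.0 becomes -2:

```python
from itertools import product, accumulate
from fractions import Fraction

def prob_all_cooler_biased(steps=7, up=3, down=-2):
    """Enumerate all 2**steps histories of the biased walk.
    Steps are doubled (+1.5 -> +3, -1.0 -> -2) so that every
    partial sum is an integer; the sign test is unaffected."""
    wins = sum(
        all(t < 0 for t in accumulate(history))
        for history in product((up, down), repeat=steps)
    )
    return Fraction(wins, 2 ** steps)

print(prob_all_cooler_biased())   # 13/128
```

Exactly 13 of the 128 histories stay strictly negative, matching the table above.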
You should not be shocked that the number 10% is so small. The same analysis applies to other years, not just 1998, and most of them are indeed not followed by a period of 7 cooler years. Everything agrees with the rules of chance, of course. In the short term, you can't really see any problems with the Brownian motion model, except that it is probably better than the "independent years" model.
William obtained a pretty good numerical result for the second task - and you may ask whether that is good news or bad news. It could be good news because he can calculate at least something. Let me offer an alternative explanation. It is bad news because it suggests that he may have had the correct program but does not know how to use it. More precisely, he does not understand that he must repeat the simulation a sufficient number of times for the average of his results to be close to the actual result. And he should actually try to figure out the variance of the results generated by the Monte Carlo method, as long as we want to call it repeatable science.
William's large numerical error is kind of baffling because one second of computer time should be enough to find the correct result to several digits. Do the math. Repeat the simple history "N" times (each run takes less than a microsecond in this case!), and the relative error goes down like "1/sqrt(N)". You should get 3 valid digits within 1 second (a million microseconds).
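The "1/sqrt(N)" claim is easy to check in a few lines. A minimal Monte Carlo sketch (names and seed are mine), with the binomial standard error attached to each estimate:

```python
import random

def mc_prob(n_trials, steps=7, seed=0):
    """Monte Carlo estimate of the chance that a +/-1 random walk
    stays strictly below its starting value for `steps` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        t = 0
        for _ in range(steps):
            t += rng.choice((1, -1))
            if t >= 0:        # record tied or beaten: this history fails
                break
        else:
            hits += 1         # all seven years stayed strictly cooler
    return hits / n_trials

# The error bar shrinks like 1/sqrt(N); the exact answer is 5/32 = 0.15625.
for n in (100, 10_000, 1_000_000):
    p = mc_prob(n)
    stderr = (p * (1 - p) / n) ** 0.5   # binomial standard error
    print(f"N={n:>9}: p = {p:.5f} +/- {stderr:.5f}")
```

With a million trials the standard error is already below 0.0004, i.e. the third decimal digit is trustworthy - which is the point of the paragraph above.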
This seems to confirm the hypothesis that no one has told them yet that they should estimate the error margin of their climate predictions. It really seems that they imagine that "Monte Carlo" programs should behave just like the casinos in Monte Carlo. Try your luck five times, and when you're lucky and win $100,000 or +8 Celsius degrees of warming per century, go home, turn your computer off, and celebrate! (And publish it, too.)
The CPU time could have been too expensive for William to reduce the error below 60%. But then one can ask: how many times did they actually run the "real" climate models that have a million times as many variables and a time resolution thousands of times finer? Once? Or twice, choosing the "better" answer? And this single run not only predicts the future but also simultaneously validates the thousands of assumptions and parameters in the model, doesn't it? Because the calculation leads to the Big Crunch singularity of global warming, and moreover also shows warming in the present era, the hundreds of assumptions must be correct, right?
Meanwhile, in India, as of today, the recent extreme record cold has killed 200 people. If you help pay for the Kyoto+ protocols designed to cool the planet, you may help to double such numbers. You will also help to further improve the snow record in Japan.