Wednesday, May 26, 2010

Gavin Schmidt on attribution



Gavin Schmidt wrote an essay on the attribution of climate change:
On attribution (Real Climate)
He believes - or he pretends to believe - that the average RC readers are more confused about this topic than he is. However, his text - which is a mixture of correct observations, tautologies, obfuscations, hidden facts, missing basic principles, double standards, and manifestly untrue propositions - shows otherwise.

His "executive summary" makes four basic claims:
  1. You can’t do attribution based only on statistics
  2. Attribution has nothing to do with something being “unprecedented”
  3. You always need a model of some sort
  4. The more distinct the fingerprint of a particular cause is, the easier it is to detect
Now, points 1, 3, and 4 are correct as stated while 2 is incorrect. However, Schmidt later strengthens 1 into something that is no longer correct. And he masks the actual results of 4, the fingerprint comparison (or he only wants to invoke point 4 when it is convenient, but not otherwise). So when these four items are taken with their full context, I only agree with 3 - although the word "model" in 3 is inappropriate and immediately leads Schmidt to additional missteps.




But even if you ignore the wrong word "model" and consider point 3 correct, 3 is just the very beginning of science - and everything else that Schmidt would like to do with 3 is just wrong.

Correct point 3: the "culprit" has to be described by a theory

To attribute doesn't just mean to scream the name of the "perpetrator" - even though this is exactly what 99% of the AGW proponents do. It means to have a theory - or a "model" - that offers a mechanism that allows the "perpetrator" to do whatever is attributed to him (or her or it) and that predicts some detailed, usually quantitative properties of the outcomes of the mechanism.

However, what is strikingly missing in Schmidt's scheme of things is that
a hypothesis about the attribution whose predictions don't agree with the empirical data has to be abandoned.
This is really the basic requirement for the analysis to be scientific.

Now, in the real world, the empirical data used to evaluate the hypotheses are always of statistical character; see Defending statistical methods.

At the very end, the evaluation of pretty much every hypothesis or theory about the world boils down to statistics. Sometimes the statistical results are "so clear" that we don't even have to use the language of statistics: the data may "completely" disagree with the theory.

However, in most cases, especially in complex sciences such as climate science, the results are less clear. Hypotheses rarely agree or disagree with the data "completely sharply". We need statistical methods to quantify the probability that the "pretty good agreement" or "pretty serious disagreement" could have occurred by chance.
If it is very unlikely for the disagreement between the empirical data and a hypothesis to occur by chance, the hypothesis is falsified and has to be abandoned.
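To make this concrete, here is a minimal sketch of how such a quantification looks in practice - the numbers are made up for illustration, not anyone's actual analysis:

```python
# A minimal sketch (illustrative numbers only): quantify whether the mismatch
# between a hypothesis' predicted trend and the observed trend could plausibly
# have arisen by chance.
from scipy import stats

predicted_trend = 0.20   # deg C per decade predicted by the hypothesis (assumed)
observed_trend = 0.12    # deg C per decade estimated from the data (assumed)
sigma_noise = 0.03       # assumed std. dev. of the trend estimate due to noise

# Standardized discrepancy and the two-sided probability of a mismatch at
# least this large arising by chance if the hypothesis were true.
z = (observed_trend - predicted_trend) / sigma_noise
p_value = 2 * stats.norm.sf(abs(z))

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# A tiny p-value means the disagreement is very unlikely to be a chance
# fluctuation, so the hypothesis is falsified or at least strongly disfavored.
```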
Obviously, the very statement about the attribution has to be based on (or extended to) a theory; Schmidt says "model" but I deliberately talk about a "theory" because the "theory" doesn't have to be linked to any software or anything of the sort that is suggested by the word "model".

Moreover, the term "model" leads you to believe that all "qualitative" features of the theories are fixed and the only thing you can modify is a couple of numerical parameters. Gavin Schmidt assumes it's enough to be told how to improve the shape of his wooden earphones. But that's just not the case: the "models" may be - and usually are - qualitatively wrong and a few numerical fixes just can't help. The airplanes just don't land: the IPCC pseudoscientific models are just another great example of such a qualitative failure. That's why it is critical to talk about a "theory" instead of a few narrow-minded "models".

But the theory - or several competing theories - is/are just one side of the coin.

You can't do attribution without the empirical data and statistics

The other side of the coin are the tests of the validity of the theories. In "noisy" disciplines such as climate science, such tests are always statistical in character. (Even though Schmidt thinks otherwise, noise occurs both in Nature and in the lab but it is true that these problems become more uncontrollable in Nature.)

These tests are what actually decides the validity of the individual theories or "models". So the attribution is ultimately done by statistics because statistics decides whether the agreement between the theories and the data could have been so good or so bad by chance.

That's why I can't possibly understand the context in which Schmidt's first dictum, "you can't do attribution just with statistics", could be both valid and nontrivial. You always need statistics when you attribute effects to various causes because statistics is needed to decide about the validity of different competing hypotheses or theories. Well, you also need other things such as a brain, data, and honesty - but I guess that this trivial and vague insight wouldn't deserve one quarter of a technical essay.

In fact, the only reason why he could be writing such a thing is that he would like to entirely avoid the evaluation of the theories - and indeed, now we mean the computer "models" in particular - which is the bulk of the attribution process. It seems that he wants to do the attribution without any comparison with the empirical data. That seems to be the main point of his article.

If effects are unprecedented, the attribution is always easier

Also, Schmidt claims that "attribution has nothing to do with something being unprecedented". However, attribution has almost everything to do with something being unprecedented.

Imagine that you observe an effect, "E", for example the warming by 0.6 °C in one century (but think generally!), that can have various causes, "C1, C2" etc. One of your theories, "C1", is that "E" is only caused by "C1" (the same name won't lead to any confusion) and cannot exist without "C1".

Such a hypothesis is a pretty strong one. It may happen that "E" has occurred in the past - you may have evidence supporting this statement - even though "C1" didn't precede it. If that's the case, your "C1" theory is falsified. It's dead. You have to try a completely different one or at least substantially modify your original one. If "E" could have occurred without "C1", it should be obvious that it can't be the case that "E" can only be caused by "C1".

This side of the argument is very strict. If you make overly bold statements about the uniqueness of the causes, your hypothesis may be - rather easily - falsified. That would surely be the case if a naive scientist conjectured that 0.6 °C of global warming per century can only be caused by an increase of CO2.

The opposite implication can't be equally sharp but it can still provide us with some circumstantial evidence.

If you have two hypotheses, "C2" and "C789", in which either a single cause, "C2", or any of three possible causes, "C7, C8, C9", can sometimes produce an effect, "E2", and if you observe the effect "E2" occurring for the first time (it's unprecedented) at the same time that "C2" is also occurring for the first time, then the "C2" hypothesis has passed a nontrivial test and its probability of being valid has just increased more substantially than the probability of "C789", which was merely "consistent" with the data.

It's because the test passed by "C2" was more nontrivial than the test passed by "C789". "C789" just predicted that because some of the causes "C7, C8, C9" recently occurred, "E2" could have occurred as well. But it offers no explanation of why "E2" hasn't occurred in the past. In this situation, "C2" explains more than "C789" and may be favored because of this advantage.

The probability of a hypothesis that has passed a less trivial test - one that was a priori less likely to be passed - increases by a bigger factor than the probability of a hypothesis that has just passed a trivial test that could have been easily passed. That's called Bayesian inference and it is an important method for a rational convergence towards more likely theories, especially in a noisy context.
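A toy numerical illustration of this update - with invented likelihoods, just to show the mechanics for the "C2" and "C789" hypotheses above:

```python
# A toy Bayesian update (all numbers invented for illustration): a hypothesis
# that made a sharper, a priori less likely prediction gains more from the
# same observation than a vague hypothesis that was merely "consistent" with it.
prior = {"C2": 0.5, "C789": 0.5}

# Assumed likelihoods of seeing the unprecedented effect E2 exactly when it
# was seen, under each hypothesis.
likelihood = {"C2": 0.8,     # C2 sharply predicted E2 now and never before
              "C789": 0.2}   # C789 only said E2 "could" happen at some point

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

for h in prior:
    print(f"{h}: prior {prior[h]:.2f} -> posterior {posterior[h]:.2f}")
# C2 climbs from 0.50 to 0.80 while C789 drops to 0.20 - the probability of
# the hypothesis that passed the less trivial test increases by the bigger factor.
```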

We could be much more quantitative about these relationships: the correlation between the "unprecedentedness" of the hypothetical causes and their hypothetical effects is clearly important positive evidence in favor of a hypothesis. And on the contrary, the lack of a correlation between the timing (and "unprecedentedness") of the hypothetical cause and its hypothetical effect speaks against the hypothesis that attributes the effect to the given cause.

Depending on the statistics (i.e. on how much independent evidence for or against the hypothesis we may collect), such successful correlations may "nearly" establish a hypothesis, and if the correlations don't work, they may rule out a hypothesis.
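Schematically, with purely synthetic data, such a timing/correlation test could look like this:

```python
# A sketch of the timing/correlation argument with synthetic data (purely
# illustrative; no real climate series are used). A hypothesized cause that
# rises and falls together with the effect gains support; a cause that is
# uncorrelated with the effect loses it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years = 100

cause = np.linspace(0.0, 1.0, n_years)                        # hypothetical forcing index
effect_linked = 0.6 * cause + rng.normal(0.0, 0.1, n_years)   # effect tracking the cause
effect_unrelated = rng.normal(0.0, 0.2, n_years)              # effect doing its own thing

for name, effect in [("linked", effect_linked), ("unrelated", effect_unrelated)]:
    r, p = stats.pearsonr(cause, effect)
    print(f"{name}: r = {r:.2f}, p = {p:.3g}")
# (A real analysis would also have to account for autocorrelation in the
# series, which inflates the apparent significance.)
```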

Again, it's pretty clear why Schmidt wrote the bizarre and kind of manifestly wrong statement that "unprecedentedness" doesn't matter. It's because the hockey stick graph by his RealClimate colleague, Michael Mann - which was designed to "prove" global warming by showing that the recent warming was unprecedented - has been shown to be invalid.

Because of this development, the reconstructions of the temperatures during the last 1,000 years suddenly speak against the idea that CO2 has a significant impact on the climate because they demonstrate that the climate has been doing the same thing for many centuries - changing by half a degree per century or so, in relatively random but not quite uncorrelated directions. And because this new argument is inconvenient, Gavin Schmidt tries to sling mud on the whole framework and all the logical arguments that have failed to confirm his predetermined conclusion. That's why he's slinging mud on the importance of "unprecedentedness", too.

But his statements make no sense.

Agreement in fingerprints strengthens the case for a hypothesis

Now, the fourth and last point of Schmidt's "executive summary" is completely valid. The more distinct the fingerprint predicted by a hypothesis about a cause, the easier it is to detect it and to decide the fate of the hypothesis.

However, even in this valid point, you may feel that Schmidt only wants to apply the rule in one direction or in some contexts, when the conclusion is convenient. The right clarification of Schmidt's fourth proposition is that sharper fingerprints predicted by a hypothesis make it easier for the hypothesis to be tested.

Nevertheless, what's important is that it is not and it can't be a priori clear what the result of such a test will be.

So the test may confirm and strengthen the hypothesis; but it may also disfavor it or rule it out. Pretty clearly, it's the latter option that is realized in the case of the hypothesis that "changes of CO2 drive most of the climate change". Why? Simply because all the major fingerprints - distinct predictions of the enhanced greenhouse hypothesis - fail to match the observational data.

First, CO2-induced "global warming" should be global. It is caused by the infrared absorption that almost uniformly takes place everywhere on the Earth, during all seasons, and during all parts of days and nights.

However, the real observations show that the Northern and Southern Hemispheres have seen completely different warming trends. In particular, the Southern Hemisphere hasn't been warming for 50 years - or at least, its warming rate was something like 3 times smaller than the trend of the Northern Hemisphere.

In terms of the spherical harmonic decomposition (the two-dimensional counterpart of Fourier analysis on a sphere), this shows that the "first" (North-South antisymmetric) spherical harmonic of the warming rate has been as strong as the "zeroth" (uniform) spherical harmonic. However, the CO2-greenhouse theory only predicts that the global mean temperature (the zeroth harmonic) should be changing: it doesn't predict any systematic changes between the North and the South (the first harmonic).

However, the real observations show that the North-South difference has been changing as much as the North-South average, proving that "global" effects can't account for the majority of the observed changes. The North-South fingerprint doesn't agree. The greenhouse theory is disfavored by this observation, to say the least.
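For two equal-area hemispheres, the arithmetic behind this comparison is elementary; here is a back-of-the-envelope sketch with assumed, illustrative trend numbers:

```python
# A back-of-the-envelope version of the zeroth vs. first harmonic comparison.
# The hemispheric trend numbers below are rough, assumed values used only to
# show the arithmetic, not an official dataset.
nh_trend = 0.30   # Northern Hemisphere warming trend, deg C per decade (assumed)
sh_trend = 0.10   # Southern Hemisphere warming trend, deg C per decade (assumed)

# With two equal-area hemispheres, the uniform ("zeroth harmonic") part of the
# trend is the hemispheric mean, and the North-South antisymmetric ("first
# harmonic", the projection onto sin(latitude)) part is proportional to the
# half-difference.
uniform_part = (nh_trend + sh_trend) / 2
north_south_part = (nh_trend - sh_trend) / 2

print(f"uniform (l=0) component:     {uniform_part:.2f} deg C/decade")
print(f"north-south (l=1) component: {north_south_part:.2f} deg C/decade")
# If the l=1 component is comparable to the l=0 component, a mechanism that
# only predicts a uniform global change cannot explain most of the pattern.
```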



Second, similar problems occur when we study the fingerprints involving the altitude. The greenhouse warming rate should be maximized about 10 kilometers above the equator, in the hot spot (see the left picture): that's what the greenhouse-dominated theory of climate change predicts. However, the actual observed fingerprint is very different (see the right picture). In particular, there's no hot spot at the expected place.

In fact, the warming rate has been observed to be smaller in higher altitudes. The fingerprints don't agree.

Now, this means that the hypothesis has been disfavored. The hypothesis that "CO2 is responsible for most of the warming rates in the atmosphere during the last 50 years" has become much less likely than it was before this test. This decrease of the likelihood may be quantified.
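One hedged way such a quantification could be set up is a simple likelihood ratio - with placeholder numbers standing in for the real measurements and predictions:

```python
# A likelihood-ratio sketch for the hot-spot fingerprint. Every number here is
# an invented placeholder, not a real measurement or model output.
from scipy import stats

observed_ratio = 0.8        # assumed observed upper-troposphere/surface trend ratio
sigma_obs = 0.3             # assumed observational uncertainty of that ratio

predicted_co2 = 2.0         # assumed amplification if CO2 dominates (hot spot)
predicted_null = 1.0        # assumed value if there is no amplification

L_co2 = stats.norm.pdf(observed_ratio, loc=predicted_co2, scale=sigma_obs)
L_null = stats.norm.pdf(observed_ratio, loc=predicted_null, scale=sigma_obs)

print(f"likelihood ratio (CO2-dominated / no hot spot) = {L_co2 / L_null:.3f}")
# A ratio far below 1 shifts the odds against the CO2-dominated hypothesis;
# multiplying it by the prior odds gives the posterior odds.
```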

But it seems that this testing of hypotheses that can lead to both results is not what Gavin Schmidt wants to allow in science, as he understands it. So instead of testing and refining competing theories, he only wants to allow "models" whose qualitative features have been fixed from the beginning and where you're only allowed to adjust a few parameters by a few percent.



Only the black ancestors could have sent the airplanes from the skies, the models say. The only task is to adjust the shape of the dummy bamboo models and wait for the riches...

But you are only allowed to change the "models" - to change the shapes of the wooden earphones, to use Feynman's Cargo Cult Science terminology - in ways that won't threaten Schmidt's vested interests. However, that's not enough for science to work properly: it's not enough for the airplanes with cargo to land.

Gavin Schmidt must have heard what science actually is when he studied physics in the past. That's why I think that his distortion of the scientific method is obviously deliberate, driven by his financial and ideological interests. And that's why I find it so important for all ethical attorneys general to go after the necks of hundreds of crooks such as Gavin Schmidt.

And that's the memo.
