Andrew Revkin wrote an essay about how the media deal (and should deal) with uncertainties in science: RealClimate and BackReaction are among the blogs that have responded.
Andrew Revkin, a moderate climate alarmist, is worried that the uncertainties and fluctuations in the scientific results reported by the media - for example, the constantly changing statements about the warming/hurricane link and about the melting Greenland ice - reduce the public's confidence in some "basic" propositions that he considers settled, especially his opinion that "we should act to stop climate change".
Well, that's too bad if he wants these "big questions" to be unaffected by the scientific results, because this "irrelevance of scientific results" is the main reason why we say that climate alarmism is a new form of religion. Policymaking based on real science certainly should be affected by new scientific insights, and it is entirely appropriate for the public to be affected, too.
Although Revkin wrote his article in order to defend the highly irrational "action to stop climate change" regardless of any scientific results, and his article will be viewed as tendentious and outdated as soon as the current climate hysteria fades away (many of us know that it is tendentious and outdated already today), his text nevertheless opens an important question:
How should the media report the uncertainties about scientific propositions?

In principle, every well-defined "Yes/No" question about Nature (and sometimes even society) has a sharp answer: either it is "Yes" (the probability that the statement is correct is 100%) or it is "No" (the probability is 0%). In reality, scientists (and people in general) aren't sure about the answer, so their confidence is a number between 0% and 100%. Some questions are close to 0% or 100% - those that have been pretty much settled - while other questions are closer to 50% - especially those that are not settled and that scientists are trying to answer right now.
Let us assume that the "best confidence level" P, the probability that a particular proposition (e.g. "a warmer climate will bring more category 5 hurricanes") is correct, calculated by taking all known evidence "optimally" into account, is a number that objectively exists. Instead of the probability "P", we could also be talking about the "average expected value P" of some continuous quantity. It plays a very similar role in the following text.
What should the media be doing with this number? Well, the ideal newspapers or TV stations are able to pick the right scientists, to organize their own scientific calculation of the confidence level, and to present the true picture of reality, including the correct number "P", reflecting the best up-to-date science, to their readers or viewers. ;-)
As you may guess, these ideal newspapers and TV stations don't exist. They typically report an incorrect "P" that can deviate from the correct one in both directions. The media can make certain statements
- look less certain (closer to 50%) than what science says,
- look more certain (further from 50%) than what science says.
These deviations come in two kinds:
- Statistical errors
- Systematic errors
The most trustworthy media are expected to have a high signal/noise ratio: their coverage should be more accurate than the coverage by others. They should give you the most accurate idea of what the number "P" is, even though the number may be encoded in words and the precise meaning of words such as "very likely" is often misunderstood. However, one must realize that which media are the most trustworthy is a dynamical question whose answer can change with time. The New York Times may be doing relatively well, but it was never guaranteed, and it is still not guaranteed, that the paper will remain one of the most trustworthy sources forever.
It makes no sense to dream about a world where all media are completely accurate. You can't have such a thing in a real and free society. There will always be these statistical errors and noise. Demanding readers will always prefer accurate sources; other readers will look for less accurate ones, either because they don't care about accuracy or because they can't distinguish accurate sources from inaccurate ones.
In some sense, the systematic errors are worse because they don't average out. They correspond to biases that always point in the same direction. If you read hundreds of stories about a topic and take the average "P" from these stories, you can still end up with a very distorted opinion about the true value of "P". For example, a vast majority of left-wing blogs and even the mainstream media will always tell you that the climate phenomena will be more catastrophic than what science actually says. They have all kinds of reasons for doing so: catastrophic and oversimplified stories sell well. Moreover, many journalists are activists who want to reduce human freedom and to increase the regulation of the world. Or at least their bosses and colleagues want their whole teams to help increase the regulation.
Systematic errors are bad because they can systematically "push" the answers in the same direction and lead the readers to a distorted picture of reality. However, there is a sense in which the systematic errors are easier to deal with. Imagine that you have a source with low statistical errors but a nonzero systematic error: for example, it always tells you that the number "P" is 10% higher than it actually is, and everyone knows that the newspaper has a left-wing bias. A wise reader can figure out that this is the case and develop a correction that subtracts 10% from the number "P" that he sees in the source.
Well, in the real world, the bias is usually much larger than 10% (and depends on the particular question in ways that are somewhat hard to predict), so it is harder to subtract the right amount. But I wanted to show you that such an approach is possible in principle. It's like brightening a photograph that was too dark (or darkening one that was too bright).
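The two points above - that averaging many stories removes the statistical noise but not the systematic bias, and that a known bias can simply be subtracted - can be sketched numerically. All the numbers below (the true "P", the size of the bias, the noise level) are of course hypothetical:

```python
import random

random.seed(0)

TRUE_P = 0.30           # hypothetical "best confidence level" of a proposition
SYSTEMATIC_BIAS = 0.10  # the source always inflates "P" by 10 points
NOISE = 0.05            # statistical error of any single story

def reported_p():
    """One story's reported probability: truth + constant bias + random noise."""
    return TRUE_P + SYSTEMATIC_BIAS + random.gauss(0, NOISE)

stories = [reported_p() for _ in range(1000)]
avg = sum(stories) / len(stories)

# Averaging 1000 stories suppresses the statistical noise...
print(f"average of 1000 stories: {avg:.3f}")   # ~0.40, not the true 0.30
# ...but the systematic bias survives; the wise reader subtracts it:
print(f"after subtracting the known bias: {avg - SYSTEMATIC_BIAS:.3f}")  # ~0.30
```

The average lands near 0.40 no matter how many stories you read; only the explicit correction recovers the true 0.30.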
Among other things, this means that if journalists try to change the opinions of their readers by systematically pushing their stories in one direction, they will fail if the readers are rational and unbiased, because the readers will eventually learn how to deal with the systematic bias. That's one of the reasons why your humble correspondent can read the New York Times and treat it as a useful source of information: it has rather low statistical errors, while most of its systematic errors (usually connected with politics) can be anticipated and subtracted.
Of course, if the readers are gullible or if they are biased themselves, they will end up with biased opinions. But it's important to notice that the newspapers can only manipulate readers who are either intellectually limited or who were biased from the very beginning, at least in the long run. You can't really permanently "convert" rational readers by writing biased stories.
And the education systems should try to educate citizens who are not gullible and who can't be manipulated easily: citizens who are able to subtract the bias if they clearly see one (or have solid evidence that it exists).
Changing stories: frequency and amplitude
One of the important points that Revkin addresses is that the stories about the hurricane/warming link, among many other topics, change rather frequently. Stories about the healthy lifestyle are even more dramatic in this sense: is it healthy to drink water even when we are not thirsty? Yes, no, yes, no. In the climate context, Revkin is worried because the readers could start to think that science is uncertain.
Well, the main problem is that science is uncertain and far from settled, indeed. Some topics are more understood and others are less understood. But none of them, especially not those that are discussed by scientists near the cutting edge, are completely settled. As scientists keep doing their research, their best estimate of the probability that warming increases the hurricane rate (or of the coefficient determining how much the hurricanes increase or decrease per degree of warming) keeps changing. Revkin wants to suggest that these changes don't influence the "big picture", namely that we should "act".
But of course they do. The answer to the question "should we regulate CO2?" is a complicated function of other, more elementary and specific answers (including the hypothetical hurricane/warming link). It is not an answer that can be determined before the others. It simply couldn't have been determined yet (because the independent variables such as the hurricane/warming coefficient have not yet been settled), and the people who think that it has already been settled are profoundly unscientific.
About 20% of the explanations of why a slightly warmer climate is supposed to be dangerous have been based on the hypothetically increasing hurricane rate. In 2005, after Katrina, this ratio rose rather close to 50% because some dishonest ideologues and pseudoscientists found the link convenient for exploiting people's immediate emotions. That was a temporary peak, but even 3 years later, it is damn important whether this effect (a causally justifiable correlation) actually exists or not. It decides about 20% of the justification for the "action".
The question whether Greenland is beginning to melt quickly (and to raise the sea level) decides about another 20% contribution or so. Just ask "why should the warming be a bad thing?" and see how many people will start to talk about the hypothetically rising (or accelerating?) sea level. If you combine these two uncertain effects, that's already nearly 1/2 of the motivation to "act". And there are similarly important questions that are uncertain. How the hell can someone say that these individual scientific questions of higher-than-medium importance don't matter for the debate (as soon as the answers start to be inconvenient)? They obviously do matter. For example, if five such answers change sign in the same direction, it may become more rational (though not quite rational) to attempt to warm up, rather than cool down, the climate. And that's a big difference.
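The arithmetic of the last two paragraphs can be made explicit with a toy model. The case for "action" is treated as a weighted sum of individual, uncertain scientific answers; the weights below are the illustrative 20%/20%/60% split from the text, not actual measurements, and the signs encode whether each effect turns out to support or undermine the alarm:

```python
# Hypothetical weights: how much each uncertain effect contributes
# to the overall justification for "action". All numbers are
# illustrative, taken from the rough percentages in the text.
contributions = {
    "hurricane/warming link": 0.20,
    "rapid Greenland melt":   0.20,
    "all other effects":      0.60,
}

def case_for_action(signs):
    """Net justification: sum of weight * sign over all effects.
    sign = +1 means the effect supports alarm, -1 means it undermines it."""
    return sum(contributions[k] * signs[k] for k in contributions)

# If every effect points toward danger, the justification is maximal:
all_bad = {k: +1 for k in contributions}
print(round(case_for_action(all_bad), 2))   # 1.0

# If the two 20% effects turn out to have the opposite sign,
# nearly half of the justification flips against "action":
flipped = dict(all_bad, **{"hurricane/warming link": -1,
                           "rapid Greenland melt": -1})
print(round(case_for_action(flipped), 2))   # 0.2
```

The point of the sketch is only that the "big" conclusion is a function of the individual answers: flip enough of the large contributions and the net sign of the conclusion itself can change.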
But these are the obviously tendentious, cheap, and outdated aspects of Revkin's essay. Let's look at the more general questions that are unaffected by Revkin's irrational climate change quasi-religion.
Once we accept that scientific opinions actually keep changing, what should the media do about these changes? Well, they should report them if they actually exist. And the education systems should teach all students that science may change and how to rationally estimate how frequently and how much it can change. It is not a good idea to systematically teach the public that science (or a particular discipline) is more certain and fixed than it is; it is an equally bad idea to teach the public that science (or a discipline) is more uncertain and variable than it is.
The media should report the changes according to reality. Again, the ideal media don't exist. The media may err in both directions:
- They don't report changes and pretend that science is more "constant" than it is.
- Their reports change more rapidly (or more frequently) than the actual scientific results.
In the second case, the media suffer from a short memory and mood swings. The reasons behind the frequent mood swings are often the increased statistical errors that we discussed at the beginning, or the desire to write diverse stories that differ from the previous ones. Sometimes the culprit is a journalist who only talks to a limited local circle of sources but who frequently changes his or her friends or "friends". Media-made controversies, backlashes, and back-backlashes belong to this category.
So once again, the media can be wrong in both ways. They can present a more dogmatic picture but also a more fluctuating picture than what science actually says. These errors may become systematic: some media may be systematically dogmatic (or conservative, with a specific meaning of this adjective) while others want to catch up with every newest fashionable trend. Both adjectives, "conservative" and "fashionable", are meant to be negative labels in this context: they are different types of biases.
More importantly, the media can have - and, in fact, almost always have, for understandable reasons - another type of systematic error: a uniformly positive (more likely than negative) correlation between the frequency of changing their stories and the desired political, societal, or egotistic impact of the latest changes. What does that mean? What I mean is very simple, so let me use simpler words.
If science changes in the direction they like, they report the changes very quickly and amplify them. If science changes in the direction that they don't like, they hesitate, and even if they report the change after some time, they don't give enough attention to it and they never present it as a clear-cut story.
It is not hard to see that virtually every newspaper tends to behave in this way. This behavior is another kind of systematic error. Once again, good and objective journalists should be more immune to the temptation to act this dishonestly, and good and sensible readers should prefer the newspapers that don't have similar systematic errors.
It is important to note that the newspapers don't have to be explicitly lying in every article (or even in a single article) for their coverage to be dishonest. If you can statistically show that a certain type of story is reported much faster, much more loudly, and with much less uncertainty than another type, that is enough to see that the newspaper is biased.
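A statistical test of that kind could, in a toy form, compare how quickly a paper covers results it likes versus results it dislikes. All the delay data below are invented for the sake of the illustration:

```python
# Hypothetical data: days between a scientific result's publication
# and the newspaper's coverage of it, split by whether the result
# supports the paper's editorial line. All numbers are invented.
convenient_delays   = [1, 2, 1, 3, 2, 1, 2]     # reported almost immediately
inconvenient_delays = [14, 30, 21, 45, 28, 60]  # reported slowly, if at all

def mean(xs):
    return sum(xs) / len(xs)

ratio = mean(inconvenient_delays) / mean(convenient_delays)
print(f"inconvenient results take {ratio:.1f}x longer to be covered")
```

No single article in such a dataset is a lie; the dishonesty only becomes visible in the aggregate asymmetry of the delays (and one could run the same comparison on word counts or page placement).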
Summary: the future
To summarize: the inaccuracies and biases exist, they will always exist, and they can take many forms. Sensible readers should know how to evaluate news stories and eliminate the systematic biases. If the systematic biases are too large, they should prefer sources with smaller biases. The education systems should teach students how to choose trustworthy sources (much as they should prefer accurate clocks or any other accurate instrument) and how to eliminate the residual biases.
If you're an optimist, you can predict the following: the intelligence and rational behavior of the public will increase. Among more intelligent readers, the accurate and unbiased sources will win. Both statistical and systematic errors will drop on average. In the end, the serious media will pretty much report exactly what science actually says. There will still be fluctuations, but these fluctuations can't be eliminated because science is not over (and it will never be over): the fluctuations are real and they always exist, except for the old quasi-settled facts that should be written in textbooks, not in newspapers.
If you're a pessimist, you can predict that the readers will be unable or unwilling to uncover, reject, or subtract biases and errors. The inaccurate media will flourish, the pressure on the media to be accurate will drop, and criteria different from truth and accuracy will become more important. All kinds of non-scientific pressures will start to dominate the material that is actually published. In the end, the media reports will have nothing to do with science and society will return to the Dark Ages. We may still be using advanced technologies developed by profit-driven companies that are interested in science for financial reasons. But everything that is influenced by public opinion will become mostly unscientific.
Am I an optimist? Well, I have some worries, but I am more likely an optimist than not. Natural selection (even in the context of knowledge) naturally keeps improving things, including the media. If the public is at least somewhat intelligent and if there is at least some pressure on the public to think and act sensibly, people will learn how to search for high-quality information, and this demand will influence the supply side, too. Such dynamics won't occur if one or more of the following things becomes true:
- virtually all the people become too inherently stupid,
- the people have no reason (pressure or innate interest) to make their opinions compatible with science: for example, rational reasoning becomes really unpopular and unstrategic,
- whole societies make it impossible for information to propagate: I am talking about extreme plans of Nazis, communists, or environmentalists to ban certain kinds of public speech, which, I hope, will only be realized very locally and very temporarily and won't affect the world globally again.