Why 1850? What happens with altitude and variations in the grid?
In the previous postings, I discussed the local data extracted from the "raw" HadCRUT3 dataset that was just released. Annual mean temperatures range from −55 to +33 °C, July–January differences go from −40 to +60 °C or so, and the warming trend at various places is measured to be 0.75 ± 2.35 °C per century, showing that the shift of the global average is just a very small contribution to the local centennial variations.
You shouldn't take this graph seriously. We will discuss it later.
But people are interested in the global mean temperature, anyway. It has become an obsession of the postmodern era. I don't think that the global mean temperature is too useful or well-defined, but people want to know whether the graphs are valid.
The satellite measurements only go back to 1979. There are some subtleties but we essentially trust them. They show the warming trend to be 0.14 °C per decade or so — even though in the last 10 years alone, this becomes zero or slightly negative. But what about the temperatures before 1979?
People didn't have any weather satellites back then, so we rely on surface weather stations. The typical graphs show the temperatures from 1850 or 1880. For a long time, there was no way to know why the starting point was 1850. Why not 1840, for example?
It's actually a very legitimate question. If you look at the raw data, there's no discontinuity in 1850. This is the decadic (base-10) logarithm of the number of stations that may be used to calculate the temperature change in various years:
Log graphs are not terribly readable. So let me say that the oldest HadCRUT3 data are available from the year 1701 but there is a gap. If you're only interested in data that can be continued up to the present without a gap, you have to start in 1706. The graph above shows that before 1710, we have two stations for a while. Then there's one again. It jumps to 2 after 1720, to 3 in 1730, and it stays at 3 ± 1 until 1750 or so.
It quickly increases to 8 in 1755 and continues to grow to almost 30 in 1800, 85 in 1830, 150 around 1850, 440 in 1880, 1,500 around 1900, and reaches the maximum of over 4,000 in the 1960s. Then civilization — and the number of weather stations — begins to decline, and the current number is just slightly above 1,000.
The graph above is mainly meant to convey the point that the number of stations was changing pretty continuously even before 1850. Let's look at the reconstruction of the 1706–2010 global mean temperature again (whose problems will be discussed momentarily):
Click to zoom in.
You see a possible answer why they didn't try to go to 1840 or 1830: the best reconstruction could produce quite some trends and wiggles! They would show that the recent "trend" is nothing special.
If climatology were controlled by anti-alarmists (I don't mean skeptics like us: I mean zealots just like the alarmists but with the opposite sign), then anti-alarmist Antimike Antimann would use my method to publish the anti-hockey-stick graph above in Nature (or Antinature) and he would "prove" that the industrial society has made the natural, nice, large variations of the climate nearly vanish. Well, that wouldn't help: the anti-alarmists would surely say that this stabilization is dangerous as well and the result would be carbon taxes and environmentalist fascism, anyway. ;)
The graph above is very problematic for various reasons. First, it is a civilization-based average. I will ultimately compute the area-weighted averages but in the graph above, one computes the averages over stations. That would lead to a horribly discontinuous evolution of temperatures whenever stations appear or disappear, so I only compute the arithmetic average of the month-on-month jumps of the temperature anomalies (those that are known).
Obviously, if the number of stations — and the regions they cover — is small, the variations are much bigger because the local temperatures don't accurately reflect the global average. The statistical fluctuations of the global average are much smaller — but still nonzero! That's also the reason why the graph looks much more uniform on the right-hand side.
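The averaging of month-on-month jumps described above can be sketched in a few lines. This is a minimal illustration of the idea, not my actual code; the toy array and shapes are made up, with NaN marking months when a station doesn't report:

```python
import numpy as np

def index_from_jumps(anomalies):
    """Build a continuous temperature index by averaging, for each month,
    the month-on-month jumps over the stations that report in BOTH months,
    then cumulatively summing those average jumps."""
    jumps = anomalies[1:] - anomalies[:-1]    # per-station monthly jumps (NaN if a month is missing)
    mean_jump = np.nanmean(jumps, axis=1)     # average only over stations with known jumps
    mean_jump = np.nan_to_num(mean_jump)      # a month with no overlap contributes zero
    return np.concatenate([[0.0], np.cumsum(mean_jump)])

# Toy data: two stations, the second one starts reporting a month later,
# yet the index stays continuous because only jumps are averaged
a = np.array([[0.0, np.nan],
              [1.0,   5.0],
              [2.0,   6.0]])
print(index_from_jumps(a))   # a smooth index despite the station appearing mid-record
```

The point of averaging increments rather than absolute anomalies is exactly the continuity: when the second station appears, its absolute level (5 °C higher here) never enters the index, only its subsequent changes do.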
Frederick I of Prussia was crowned the new king in Königsberg in 1701. The picture above is from a 1701 celebration of the coronation in Berlin. In the same year, the HadCRUT's oldest weather station was launched in the same city. Already on December 17th, 1700, elector Frederick I was very horny despite the snow and cold.
The gigantic oscillations of temperature on the left side of the graph are of course just an artifact of having just one station — the oldest one, going back to 1701, happens to be Berlin in "West Germany", the database says. They probably mean some kind of West Berlin with an 18th century version of a Berlin Wall. :)
On the other hand, it's likely that a big part of the oscillations would survive even if the number of stations after 1701 were large. The beginning at 1850 was almost certainly cherry-picked to include the rather flat plateau between 1850 and 1900 and eliminate any more intense variations that took place earlier. The HadCRUT3 folks have cherry-picked their own short version of a hockey stick.
Obtaining the optimized area-based averages is tough but my motivation is not too high because it's a lot of work — I know a nice mathematical formulation of the "optimum weights" in terms of orthogonality with a subset of spherical harmonics, but it would be extremely time-consuming to calculate the optimum weights in practice if the number of stations may be as high as 4,000+ for a given month.
(One would need to solve a set of 4,000+ linear equations for each month of the record, and after you're finished, you get a graph that isn't too different from the graphs obtained by simplified methods.)
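To illustrate the "orthogonality with spherical harmonics" idea on a toy scale (the full problem, with harmonics up to high order and 4,000+ stations per month, is what makes it impractical): up to l = 1, the real spherical harmonics are just 1, x, y, z on the unit sphere, so one demands that the weights reproduce the true spherical average of each of them — sum(w) = 1 and the weighted averages of x, y, z vanish. The station positions below are random stand-ins, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                         # toy station count
lat = np.deg2rad(rng.uniform(-90, 90, n))
lon = np.deg2rad(rng.uniform(-180, 180, n))
# Cartesian coordinates of the stations on the unit sphere
x = np.cos(lat) * np.cos(lon)
y = np.cos(lat) * np.sin(lon)
z = np.sin(lat)

A = np.vstack([np.ones(n), x, y, z])           # one row per harmonic constraint
b = np.array([1.0, 0.0, 0.0, 0.0])             # true spherical averages of 1, x, y, z
w, *_ = np.linalg.lstsq(A, b, rcond=None)      # minimum-norm weights satisfying A w = b

print(w.sum(), w @ x, w @ y, w @ z)            # ~1 and three ~0's
```

The real computation would stack many more harmonic rows into `A`, and since a new system must be solved for every month's set of reporting stations, the cost adds up quickly — which is the point made in the parenthesis above.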
So instead, let me discuss the simplified methods and their likely, most serious defects.
Gridding
The Met Office page that offers the data also contains a link to two Perl scripts. One of them separates the stations into a grid and the other computes the global mean temperature out of the grid.
I don't want to doubt that if you run their scripts, you get the usual HadCRUT3 global mean temperature graph. Someone should, however, try to run the scripts before 1850, too! My experience with and current ability to run Perl files are limited and my motivation is low.
What I find more interesting are some inherent and incurable problems with this whole methodology.
People are asking whether the monthly data in the ZIP files have been corrected for the urban heat island effects. My guess is that the answer is No. Nevertheless, I don't see any clear indications of this effect in the maps etc.
Mephisto asked another good question: have the temperatures been corrected for the altitude (temperature decreases with height)? Here, the answer is almost certainly No. Taking a lapse rate around 6 °C per kilometer, ordinary towns like Pilsen — elevation 300 meters — would have to report temperatures seen at the "sea level" beneath the city :) which would be almost 2 °C higher than the thermometer readings. I don't believe that such an ad hoc transformation of the data has ever been done. This would be dramatic, indeed.
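The arithmetic behind the "almost 2 °C", spelled out (both numbers are the round figures quoted above, not measured values):

```python
lapse_rate = 6.0       # °C per kilometer, a textbook round number
elevation_km = 0.3     # Pilsen, ~300 meters above sea level
correction = lapse_rate * elevation_km   # ~1.8 °C: a hypothetical "sea-level"
print(correction)                        # reading would be this much warmer
```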
So my best guess is that the average monthly temperatures in the ZIP file are actual averages of the thermometer readings — perhaps corrected for some "obvious and local" systematic errors of the environment but not for urbanization and not for altitude.
Changing altitudes
If I understand the gridding algorithms properly, they do the following. They simply divide the interval −180°…+180° of longitude into 72 intervals with spacing 5°. And similarly, they divide the −90°…+90° interval of latitude into 36 pieces with the same spacing 5°. In this way, the Earth's surface is separated into 72 × 36 "rectangles". Well, they're curved rectangles and they get very thin near the poles.
As far as I can say, all stations located within the same face of the grid are treated as equivalent. After all, the Perl scripts don't look at the altitude. In fact, they even can't: out of the 5,113 stations, 519 don't have a proper value of the altitude — in those cases, it's reported as −999.
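The cell assignment itself is trivial, which is part of the problem. A minimal sketch of the 5° × 5° gridding just described (not the Met Office's actual Perl code) makes it obvious that only longitude and latitude enter — altitude plays no role at all:

```python
def grid_cell(lat, lon):
    """Map (latitude, longitude) in degrees to (row, col) indices of the
    5-degree grid: rows 0..35 from the south pole, cols 0..71 from -180°."""
    row = min(int((lat + 90.0) // 5), 35)    # clamp lat = +90 into the top row
    col = min(int((lon + 180.0) // 5), 71)   # clamp lon = +180 into the last column
    return row, col

print(grid_cell(51.5, -0.1))   # a London-ish station lands in cell (28, 35)
```

A mountain observatory and a valley town a few kilometers apart get the same `(row, col)` and are averaged together as if they were interchangeable — which is exactly the sloppiness discussed below.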
Can this sloppy identification of everything within the single face of the grid affect the result? You bet.
Imagine that all your stations tended to be in the cooler parts of their grid cells. But these colder ones were either canceled without a replacement or replaced by new stations in the warmer parts of the same cells. How will this change influence your calculation of the global average?
To have an estimate, one must have an idea about the temperature differences within a rectangle of the grid. Those 90° of latitude difference correspond to something like 60 °C of difference in the annual mean temperature. So a 5° latitude shift — from the top of a rectangle to the bottom — corresponds to 3 °C or so. If you closed weather stations near the boundaries of the rectangles that are colder (closer to the poles), and you opened new stations in the same rectangles closer to the equator (and in fact, one of these two steps may be omitted), you could increase the calculated global mean temperature by up to 3 °C.
Clearly, the pattern in which stations are being opened and closed is not this self-evident, so most of the change cancels out. But 3 °C is a lot. If 1/5 of this worst-estimate error fails to cancel, it matches the whole 20th century "global warming". More realistically, if the real effect from this shift within rectangles is 1/10 of the maximum, it would reduce the 20th century warming to 1/2 of its reported value.
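The back-of-the-envelope numbers from the two paragraphs above, spelled out (the 60 °C pole-to-equator span and the surviving fractions are the text's rough guesses, not measured quantities):

```python
span_per_degree = 60.0 / 90.0        # ~0.67 °C per degree of latitude
worst_case = span_per_degree * 5.0   # full top-to-bottom shift within one 5° cell
print(round(worst_case, 1))          # ~3.3 °C, quoted above as "3 °C or so"
print(round(worst_case / 5, 2))      # 1/5 surviving: comparable to the reported
                                     # ~0.7 °C of 20th-century warming
print(round(worst_case / 10, 2))     # 1/10 surviving: ~0.3 °C of spurious trend,
                                     # about half of the reported warming
```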
Similar complications exist for the altitude as well. Imagine that you close weather stations in the hills on the periphery of your cities and replace them with weather stations that are 300 meters lower. Again, if you do something like that with all the nodes of the weather station network, you will get something like 2 °C of spurious global warming. Obviously, this wasn't happening everywhere but it's still true that some portion of this "worst estimate" really contributes to the error.
This may substantially change your ideas about the actual change of the global mean temperature in the 20th century. Again, I want to stress that the Perl scripts make it very clear that the potentially changing altitude of the weather stations isn't taken into account. The scripts don't even try to read the value of the altitude. And after all, 519 of the stations only say −999, anyway.
So there's a lot of inevitable trouble when you try to make any composite global mean temperature graph out of this dataset. The gridding isn't really enough to correctly account for the differences between the weather stations. At the same time, I do think that all those subtleties may be "solved" in a way that is better than the others.
It would be helpful if someone solved these issues in a way that doesn't try to "steal" the ad hoc gridding algorithms from Jones and the Met Office — primarily because those algorithms are truly amateurish. If someone calculates his own global mean temperature, I would like to know what it looks like if you try to push it before 1850. Of course, the error grows — but the error is there after 1850, too. I want to see the optimized mean value, anyway.
Bonus: average elevation
Here is the graph of the (arithmetic) average of the altitudes of all stations that report non-empty data for a given month/year:
You see that between 1850 and 1920, the average elevation grew from 200 to 400+ meters. That decreased the thermometer readings by 1/5 of the lapse rate per kilometer, i.e. about 1 °C. Clearly, if the people working with the datasets believe the absolute shift of the measured temperatures and not just the month-on-month increments, they will produce global mean temperature graphs where 1 °C of warming between 1850 and 1920 is being hidden.
In the same way, the average elevation dropped by about 100 meters sometime in the 1990s. If the absolute magnitude of the temperature were trusted — and not just the month-on-month increments (i.e. if the algorithm isn't able to appreciate that stations with similar longitudes and latitudes may still have very different temperatures because of different altitudes) — then this change of the composition of the stations could have produced a spurious warming episode of 0.5 °C in the 1990s.
This could potentially be responsible for the apparent "jump" of the temperature to a higher level after 1997:
Note that this jump to a new level is much less apparent in the satellite data such as UAH AMSU.
Note that even if the algorithm is immune to the first-order "big effects" of the changing altitudes, there may be second-order effects related to different seasonal normal temperatures as a function of altitude, and so on.
Average mod latitude
For the sake of completeness, here's the graph of the average Mod[Abs[latitude],5] from the reporting stations:
You see that this is not likely to be a source of big problems. The average is not far from 2.5° — just like you would expect — and the change per century isn't much bigger than 0.05° (of angle), which could be responsible for less than 0.05 °C of spurious warming. So the latitude part of the location distribution within the 5° × 5° grid is unlikely to be an important problem. However, you see that there was a rather significant dip around 1823 that could have masked up to 0.5 °C of cooling around that year — if someone were careless enough to draw the graphs of the global mean temperature back to that point.
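For clarity, here is the quantity plotted above — `Mod[Abs[latitude], 5]` in the original Mathematica notation — translated to Python: each station's position within its 5° latitude band. The sample latitudes are made up:

```python
def mod_lat(lat):
    """Position of a station within its 5-degree latitude band, in degrees."""
    return abs(lat) % 5.0

lats = [-52.3, 2.5, 48.1, -0.4]                # hypothetical station latitudes
avg = sum(mod_lat(l) for l in lats) / len(lats)
print([round(mod_lat(l), 1) for l in lats])    # positions within their bands
print(round(avg, 2))                           # ~2.1 here; a uniform distribution
                                               # of stations would give 2.5
```

A persistent drift of this average away from 2.5° would mean the stations are systematically crowding toward one edge of their latitude bands, which is exactly the within-cell bias estimated earlier.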