Have you ever asked how the observed recent temperature trend - i.e. the magnitude of "global warming" - depends on the month? It's simple to divide the data into individual months. For UAH, this is the result of 12 separate linear regressions applied to the monthly global temperature anomalies since December 1979:
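If you have a monthly anomaly series at hand, the 12 regressions take only a few lines. This is a hypothetical sketch with invented random data standing in for the real UAH record:

```python
# A hypothetical sketch of the procedure: take a monthly anomaly series
# (synthetic random numbers below, standing in for the real UAH record
# since December 1979) and fit a separate linear trend to each of the
# 12 calendar months.
import numpy as np

rng = np.random.default_rng(0)
n_years = 30
years = np.arange(n_years, dtype=float)

# Invented data: a weak overall trend of 0.0017 degC/year plus noise.
anomalies = 0.0017 * years[:, None] + rng.normal(0.0, 0.1, (n_years, 12))

trends = {}
for month in range(12):
    # Degree-1 least-squares fit; the slope is in degC/year.
    slope, intercept = np.polyfit(years, anomalies[:, month], 1)
    trends[month + 1] = 100.0 * slope   # convert to degC/century

for month, trend in sorted(trends.items()):
    print(month, round(trend, 2))
```

With the real UAH file in place of the synthetic array, the 12 printed slopes would reproduce the numbers plotted in the chart.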
For partially sociological reasons, I trust the UAH professionals' claim that their product is the most accurate one: they argue that there is no major systematic periodic error related to the satellite cycles. The dependence of the trend on the season is dramatic: the trend goes from 0.64 °C/century in May to 1.80 °C/century in February - and the chart shows pretty much one large annual wave.
The charts obtained from RSS look very different, much like the graphs from GISS and HadCRUT3. While UAH says that the temperature trends for February and May differ by a factor of 3, other teams claim that they're almost equal.
This very inability to determine the temperature trend for a given month to better than a factor of 2 or 3 raises additional doubts about our collective ability to know the "right" temperature trends with the accuracy needed for qualitative conclusions.
Let me assume that the UAH graph above is pretty much real. In that case, the dependence of the trend on the season is another problem for the greenhouse explanations of the warming: it's another wrong fingerprint.
Why? The reason is simple. Carbon dioxide spreads almost instantly throughout the atmosphere, and its concentration is elevated regardless of the location, the day/night cycle, and the seasons. Also, the trace gas helps to absorb the thermal radiation emitted by the Earth - and this radiation is emitted pretty much uniformly by our blue planet, regardless of the location, the day/night cycle, and the seasons.
It is pretty natural to use the graph above to guess that only between 1/3 and 2/3 of the observed warming in the last 30 years (i.e. between 0.1 and 0.2 °C) is due to "season-independent" effects such as the greenhouse effect while the rest is due to other effects that do depend on seasons.
However, we may also try to be more IPCC-friendly. Let's write all the temperature trends on the graph as feedback-free temperature trends plus feedbacks. Just for fun, let's include all the natural effects - the things that one would normally consider the essence of climate science - among the "feedbacks". ;-)
In the last 30 years, we've been increasing the CO2 in the atmosphere by about 1.75 ppm a year. That corresponds to Log[(388 + 1.75)/388]/Log[2] = 0.0065 of a CO2 doubling. One CO2 doubling adds 1.2 °C to the temperature if the feedbacks are omitted, which - multiplied by the previous small figure - gives us 0.0078 °C per year, or 0.78 °C per century.
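The arithmetic can be checked in a few lines; all the inputs are the numbers quoted above:

```python
import math

# Numbers from the text: ~388 ppm CO2, rising ~1.75 ppm/year,
# and a no-feedback sensitivity of 1.2 degC per CO2 doubling.
c_now = 388.0
dc_per_year = 1.75
sensitivity = 1.2

# What fraction of a doubling do we add per year?
doublings_per_year = math.log((c_now + dc_per_year) / c_now) / math.log(2)
trend_per_century = 100.0 * sensitivity * doublings_per_year

print(round(doublings_per_year, 4))   # 0.0065
print(round(trend_per_century, 2))    # 0.78
```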
This is a useful number you may want to remember. The feedback-free greenhouse effect caused by our modern CO2 additions to the atmosphere gives us a modest 0.78 °C of warming per century. This rate is unlikely to change much because while the CO2 emissions may grow "exponentially" (with an e-folding time close to 57 years), the greenhouse effect depends on the logarithm of the concentration, and these two nonlinearities almost cancel against each other (not quite, because the exponentially growing CO2 excess is additively shifted by the 280 ppm baseline).
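The cancellation is easiest to see in the idealized limit: if the total concentration grew exactly exponentially, the logarithmic response would be exactly linear in time. A minimal sketch (the e-folding time below is purely illustrative):

```python
import math

# Idealized limit: the CO2 concentration grows exactly exponentially,
# C(t) = C0 * exp(t / tau). The logarithmic greenhouse response is then
# exactly linear in time. C0 and tau are illustrative, not fitted.
C0, tau = 388.0, 200.0   # ppm, years
sensitivity = 1.2        # degC per doubling, feedbacks omitted

def temperature_anomaly(t):
    concentration = C0 * math.exp(t / tau)
    return sensitivity * math.log2(concentration / C0)

# Warming per decade comes out the same in every decade:
increments = [temperature_anomaly(t + 10) - temperature_anomaly(t)
              for t in (0, 50, 100)]
print(increments)   # all three increments are equal
```

In reality only the excess above the 280 ppm baseline grows roughly exponentially, which is why the cancellation is imperfect, as noted above.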
So you may take the temperature trends in the graph and subtract 0.78 °C per century. If you do so, May and June will end up with negative "feedbacks" - recall that we use this term for all effects except for the bare greenhouse effect. The "feedbacks" will be positive for all other months.
By the way, I think it is kind of natural that the warmest months - and May and June are not far from them - witness the smallest warming trends.
Analogously, it is also true that the warmest locations - the tropics - see the smallest temperature trends. (Well, Antarctica has seen cooling, but the average of the Arctic and Antarctica has seen a much more rapid warming than the tropics.) The ice-albedo feedback is among the amplifiers in the polar regions.
Also, it's a well-known fact that daytime temperatures are increasing less quickly than the nighttime temperatures.
In other words, when something - a place, a season, a part of the day, or their combination - is already warm, Nature will be kind and this "something" won't be warming much. Why? The regulating mechanisms - various paramount negative feedbacks - become really strong when the temperature substantially deviates from the average.
A part of the story is that the thermal radiation isn't proportional to the temperature in Kelvin degrees but to its fourth power. In fact, when we compute average temperatures, it would be much more reasonable to compute the "energetic" averages of the temperatures - the averages adjusted by the Stefan-Boltzmann law that actually remember the total (and average) amount of thermal energy that was emitted. By the "energetic" average of two temperatures T1, T2, I mean:
T_avg = [ ( T1^4 + T2^4 ) / 2 ]^(1/4)

This "energetic" average is somewhat closer to the higher temperature than the arithmetic average. Consequently, the temperatures of places, seasons, and times of day that are already warm influence the "energetically averaged" trends more than those that are cold. But we have said that the "already warm" things generally see a smaller temperature trend.
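A quick numerical check with two illustrative temperatures (in kelvins, invented for the example):

```python
def energetic_average(t1, t2):
    # Stefan-Boltzmann-weighted average: average the fourth powers,
    # then take the fourth root.
    return ((t1**4 + t2**4) / 2) ** 0.25

t_cold, t_warm = 250.0, 300.0
arithmetic = (t_cold + t_warm) / 2
energetic = energetic_average(t_cold, t_warm)

print(arithmetic)           # 275.0
print(round(energetic, 2))  # 278.35, pulled toward the warm 300 K
```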
As a result, the rate of warming computed by the "energetic" averaging would be smaller than the rate of warming computed from arithmetic averages of the temperatures, although the difference is not "spectacular". For example, the relative influence of the Arctic on the global temperature trend decreases by a factor of 4/5 or so. Similarly, the contribution of rapidly warming yet cool months such as February or November to the "average" warming trend decreases by one percent or a few percent. A similar result holds for the nights.
However, it is conceivable that the "realistic" type of averaging should be even more nonlinear than one based on the fourth power from the Stefan-Boltzmann law. Locally, it could even have some discontinuities and positive jumps. Needless to say, if you used some "really rapidly" growing function to compute the average instead of the 4th power (imagine the 30th power), the calculated average trend would be almost entirely determined by the places, seasons, and parts of the day that are already warm. And such a trend would be substantially smaller - and who knows whether it would be positive at all.
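A toy model makes the point concrete. Take a cold region at 250 K warming by 3 °C per century and a warm one at 300 K warming by 0.5 °C per century (both numbers invented), and compute the warming of the power mean for increasing exponents p:

```python
def power_mean(temps, p):
    # Generalized power mean; p=1 is the arithmetic mean,
    # p=4 is the "energetic" Stefan-Boltzmann mean.
    return (sum(t**p for t in temps) / len(temps)) ** (1.0 / p)

now = [250.0, 300.0]      # kelvins today (invented)
later = [253.0, 300.5]    # one hypothetical century later

for p in (1, 4, 30):
    trend = power_mean(later, p) - power_mean(now, p)
    print(p, round(trend, 2))
# p=1 (arithmetic) gives 1.75 degC/century; p=4 already gives less;
# p=30 is dominated by the slowly warming 300 K region.
```

The higher the exponent, the more the already-warm region controls the average, so the computed trend slides from 1.75 toward the warm region's 0.5 °C/century.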
Such a reduced figure wouldn't change anything about the fact that if there is going to be some warming, it will probably be amplified in the polar regions (although Antarctica completely defies all these predictions so far). However, you shouldn't forget that for those regions, seasons, and parts of the day that are currently colder than the average, some warming wouldn't be bad at all.