Wednesday, July 28, 2021

Science Magazine, climatologists: models overstate temperature changes by a factor of two or more



Dr Roy Spencer's 2014 page. Similar graphs showing that almost all the models simply overstate the warming have been published by climate skeptics both before and after 2014. Clearly, the longer the climate modelers follow a flawed methodology that overstates the warming rate, the larger the deviation between the models and reality becomes; at some point, the modelers will no longer have the stomach to defend models, or a modeling methodology, that is clearly flawed. Note that Al Gore and the IPCC got the Nobel Prize in 2007, almost 14 years ago. Over that period, the median model predicted over 0.4 °C of warming but we only observed some 0.2 °C.

Bill Z. sent me a remarkable pair of articles by Paul Voosen, a staff writer at Science Magazine:

U.N. climate panel confronts implausibly hot forecasts of future warming
New climate models forecast a warming surge
It has been eight years and... the IPCC (the Intergovernmental Panel on Climate Change) will release its first new assessment since 2013. What has changed since 2013? Well, nothing detectable has changed about the climate, yet Voosen has the duty to start his first article by writing that the Armageddon has escalated, almost all the ice has disappeared, and the end of the world is very close now. Nice. It is ludicrous but no longer surprising. What is surprising is what he writes after that.



Well, one change that has occurred in the irrational climate hysteria since 2013 is that the IPCC has become pretty much irrelevant. Eight years ago, it still seemed important for the movement to find some people with PhD degrees and scholarly positions to say completely wrong (catastrophic) things about the climate that were convenient for the anti-freedom pseudoscientific movement. It's no longer necessary. The main people driving the movement are no longer people pretending to be great scientists. Instead, it is people like psychiatrically ill, unhinged Scandinavian teenagers. The IPCC has become these teenagers' increasingly irrelevant appendix because everyone in the movement understands very well that proper science has never been the point or the goal.



OK, the political movement is still intersecting with a scientific activity that has some legitimate or even interesting aspects; and that at least resembles scientific research. So people are still simulating the climate by methods that are roughly similar to the physics simulations in computer games. As the article reminds us, in simulations of the atmosphere the representation of clouds remains the most difficult detail, and the one most likely to introduce the main errors.

In the previous generation of models, clouds were mostly made of "ice crystals". In the newest models, expected to be used for the coming assessment report, clouds contain lots of "supercooled water" instead. The modelers consider this change of the rough picture to be progress; whether it really is progress may be debatable. They also praise most of the models as realistic. However, in those articles, as Gavin Schmidt (the post-Hansen boss of a NASA climate body) and others confirm, the current models overpredict the temperature changes, especially the swings caused by CO2, by a factor of two or more.

Needless to say, the point that climate models overstate the sensitivity to CO2 by a factor of two or more has been made by climate skeptics for decades. It's really completely trivial to establish this proposition. Climate models like to predict a warming trend above 3 °C per century; but the trend in recent decades (which has no reason to significantly accelerate in the future: the dependence is roughly logarithmic and the "acceleration" of CO2 emissions has dropped to zero, anyway) indicates something closer to 1.5 °C per century.
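To make the parenthetical remark concrete, here is a minimal sketch of why a roughly logarithmic dependence on CO2, combined with non-accelerating emissions, gives no reason for the warming rate to speed up. All the numbers (the starting CO2 level, the constant 2.5 ppm per year growth, the 2 °C per doubling sensitivity) are illustrative assumptions of mine, not figures from the articles:

```python
import numpy as np

co2_0 = 415.0          # ppm today, roughly (illustrative)
growth = 2.5           # ppm added per year, held constant: no acceleration
sensitivity = 2.0      # °C per CO2 doubling, illustrative

years = np.arange(0, 101)
co2 = co2_0 + growth * years                 # linear CO2 growth
warming = sensitivity * np.log2(co2 / co2_0)  # logarithmic temperature response

# Decadal warming increments mildly decrease over the century; they never accelerate.
for decade_start in (0, 40, 90):
    rate = warming[decade_start + 10] - warming[decade_start]
    print(f"decade starting at year {decade_start}: {rate:.3f} °C")
```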

So it's great that after these decades, folks including Gavin Schmidt finally admit that indeed, the predicted warming is excessive. In the article, the excess is mostly blamed on some positive feedbacks from cumulus clouds (mainly in the tropics) that should act much like powerful extra greenhouse gases according to the models, but seem to have very little impact according to the observations. For decades, climate skeptics have known about the wrong fingerprint: models predicted the fastest warming 10 km above the equator, while the real-world observations see a very small warming rate there. I've had these pages in my presentations for over 15 years. Great, in 2021, Science Magazine finally admits that indeed, the (late) Fred Singer and Luboš Motl (whose names are suppressed) have been right all along and the greenhouse-with-feedback prediction for the tropopause above the equator contradicts the observations. The observations don't show any significantly elevated warming there.

I think that while the models may look like rather good, realistic computer games (I still believe that the physics simulations in commercial PC games serving the gamers are written by better physicists than those in the climate community; the commercial sector attracts better talent in physics simulations!), their usefulness for predictions of the climate change over the next 100 years or so is basically non-existent. That is because the realistic description of some local phenomena and patterns is basically completely separated from long-term parameters such as the climate sensitivity. This separation doesn't just sound like Ken Wilson's "separation of effective theories according to the scale"; the climate case is actually a special example of Wilson's separation!

The reason is that all the cumulative effects of the microscopic or local phenomena "renormalize" the overall fundamental parameters such as the climate sensitivity; here I am comparing the climate sensitivity to a parameter in a renormalizable quantum field theory. So to do this complex physics correctly, you basically have to introduce a fudge factor (or a counterterm) to renormalize any effect such as the CO2 warming effect. A cloud has a complicated shape; the simplest representation of a cloud affects some quantity by X, but you must admit that the right effect is K*X where K is a coefficient of order one. In the end, the value of the fudge factor must be extracted from the observations, because it's not feasible for your model to be so accurate that the complex effects of the turbulent clouds and their relationships predict the correct fudge factors from first principles.
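A toy sketch of this "fudge factor" logic follows. Everything in it (the placeholder cloud function, the states, the numbers) is hypothetical; the only point is that the single coefficient K in K*X ends up being fitted to observations rather than derived from first principles:

```python
import numpy as np

def raw_cloud_feedback(state):
    # Placeholder for a complicated, allegedly first-principles cloud calculation.
    return 0.8 * state["humidity"] - 0.3 * state["ice_fraction"]

def calibrate_fudge_factor(states, observed_feedbacks):
    """Least-squares fit of the single coefficient K in K*X to the observations."""
    X = np.array([raw_cloud_feedback(s) for s in states])
    y = np.array(observed_feedbacks)
    return float(np.dot(X, y) / np.dot(X, X))

# Hypothetical "observed" feedbacks for two atmospheric states:
states = [{"humidity": 0.6, "ice_fraction": 0.2},
          {"humidity": 0.8, "ice_fraction": 0.1}]
K = calibrate_fudge_factor(states, observed_feedbacks=[0.35, 0.55])
print(K)  # the "renormalized" coefficient of order one, set by the data
```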

Because your models can't calculate difficult long-term or long-distance "emergent" quantities from first principles, the observed value of the climate sensitivity ultimately has to enter the calibration of the coefficients, as long as your model is realistic (and admits that a simulation from first principles is not feasible). But if you agree that you are basically adjusting the climate sensitivity (through the fudge factor) by using the observations, there is no reason to use the models for predictions of the global temperature change at all!

You may very well just take the observed trend of the global temperature from the observations; and calculate the future temperature change as a multiple of it. (I am almost sure that I have made this pro-phenomenology point on this blog many times, despite the fact that my character is that of a "pure theorist", not a "phenomenologist".) Because the CO2 emissions will continue to be roughly constant, in the coming decades we will be getting the same roughly 1.5 °C-per-century warming trend that we saw in the thermometer data. Every model that predicts something "totally and obviously different" should be labeled an experimentally falsified model!
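Just to show how trivial that extrapolation is, here is a minimal sketch; it uses the roughly 1.5 °C per century figure quoted above, and the time horizons are arbitrary examples:

```python
# Naive extrapolation of the observed warming trend, with no model in the loop.
observed_trend_per_century = 1.5   # °C per century, roughly, from thermometer data

def projected_warming(years_ahead, trend_per_century=observed_trend_per_century):
    return trend_per_century * years_ahead / 100.0

print(projected_warming(30))   # ~0.45 °C of extra warming after 30 years
print(projected_warming(80))   # ~1.2 °C after 80 years
```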

I think that the whole problem (these models getting 4 °C of warming per century instead of less than 2 °C per century, etc.) occurred simply because many people wanted the result to be higher: more alarming "predictions" are how they get new and larger grants! If this bias and this failure of scientific integrity were completely removed, it seems obvious to me that all the fudge factors, which have to be there, would be adjusted to predict the same warming rate for the coming decades as the rate indicated by the direct observations, namely something like 1.5 °C per century, and the overestimated and underestimated values would be approximately equally represented. The realistic local details of the computer-game-like models are completely useless for the most important quantities that some people claim to "extract" from these models; it is really impossible to extract them.

The extraction of such long-term and macroscopic parameters from a realistically looking computer simulation is as implausible as the following random PC game example. Imagine that you have some great PC game which simulates the movement of many soldiers. There are tons of NPC soldiers (controlled by some artificial intelligence) whose behavior seems realistic in the details, in the short run and locally. But then you decide to use this bunch of NPCs in a computer game to calculate how much time it takes for a real-world army to annihilate the enemy. Well, you just can't, because the fudge factors are still necessary. When a soldier shoots successfully, he may need some time to regain the energy to keep on fighting, or he loses precision as a function of the distance to the enemy's soldiers, and so on (effects that go beyond the "realistic appearance" at the very short time scales or distance scales). All these details add new parameters or fudge factors to your simulation, and you are almost guaranteed to get them wrong in your first simulation. When you realize the hopeless complexity of the fudge factors that should be there but are absent because your PC game is ultimately rather naive, you will agree that it's easier to measure the speed of annihilation of armies in real-world battles, not in computer simulations that only "look" realistic but are actually not realistic in the aspects that haven't been trained to match the observations! And it is only the "appearances of realism", usually some short-distance patterns, that are programmed to "look" realistic!

Again, the time (or distance) scales are separated from each other, which is why the short-distance or short-term realism of a simulation says nothing about the accuracy of its predictions for emergent, long-term (or long-distance) quantities.

This meaningless game of using complex models for predictions of long-term parameters should be completely stopped because it's not feasible at all. The warming apparently caused by 1/2 of a doubling of CO2 in the atmosphere may be approximately extracted from the thermometer data (it is roughly one degree per half of a CO2 doubling); it has some error margin (which also grows with the uncertainty about whether CO2 was dominant for the temperature changes in the 20th century at all, but let me say that a 50% error is OK); and this figure (including the error margin) may be employed to predict the future warming, too. PC-game-like simulations may look sexy and realistic but they don't actually help to calculate the unknown emergent parameters at all, and the increasingly impressive hardware and aesthetics of the simulations are only useful for obfuscating this uselessness in the eyes of people who can't see what is actually going on, what the actual flow of information is, and what can be derived from something else and what cannot.
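A sketch of this phenomenological extraction, assuming a simple linear regression of temperature against log2(CO2); the CO2 concentrations and temperature anomalies below are rough illustrative placeholders rather than a real dataset, and attributing all of the trend to CO2 is itself the big assumption mentioned above:

```python
import numpy as np

years = np.array([1960, 1980, 2000, 2020])
co2_ppm = np.array([317.0, 339.0, 369.0, 414.0])   # illustrative values
temp_anomaly = np.array([0.0, 0.2, 0.4, 0.75])     # illustrative values, °C

# Regress temperature on log2(CO2): the slope is the empirical warming per doubling.
slope, intercept = np.polyfit(np.log2(co2_ppm), temp_anomaly, 1)
print("warming per doubling:", slope)
print("warming per half-doubling:", slope / 2)   # roughly one degree with these numbers

# The same slope (with its error margin) can be extrapolated to a future CO2 level:
future_co2 = 500.0
print("extra warming vs 2020:", slope * np.log2(future_co2 / co2_ppm[-1]))
```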



You may want to watch e.g. this 16-minute comparison of the 2002 Mafia game and its 2020 "Definitive Edition" remake. The point of the remake is that the storyline and places should be equivalent. The original game was incredibly realistic; but those 18 years show damn well. The 2020 edition is strikingly more realistic than the original. But think about the question which is analogous to the case of climate models: Would these improvements of the superficial realism of the scenes allow you to calculate more accurately e.g. "how many people in the city you kill if you drive along some trajectory through the city at some speed"? It is a matter of common sense that the increased realism doesn't allow you to be more accurate about those things because they have nothing to do with that realism. The realism only applies to the "appearances" of the processes at the 1-second and 1-meter scales. The behavior of the pedestrians and their likelihood of being hit by a car is still a parameter that you need to multiply by a fudge factor (extracted from experiments) if you want the game to be realistic in that respect. You just cannot realistically extract these numbers from first principles because the realism of the pedestrians is still being faked and only trained to be realistic at certain time scales. The same thing applies to the complex effects caused by clouds in the climate models.


