Saturday, May 09, 2020

Deborah Cohen, BBC, and models vs theories

Just as a huge fraction of pundits, including former "rightwingers", have completely failed in this viral test of their lifetime, we sometimes see the opposite case: journalists at unexpected places who have passed the test.

Dr Deborah Cohen is an award-winning health journalist who holds a doctorate – which actually seems to be related to the medical sciences – and who now works for BBC Newsnight. I think that the 13-minute-long segment above is an excellent piece of journalism.

It seems to me that she primarily sees that the "models" predicting half a million dead Britons have spectacularly failed, and that is something an honest health journalist simply must be interested in. And she seems to be an accomplished, award-winning journalist. Second, she seems to see through some of the "more internal" defects of bad medical (and not only medical) science. Her PhD almost certainly helps with that. Someone whose background is purely in the humanities or in PR-and-communication gibberish simply shouldn't be expected to be on par with a real PhD.

So she has talked to the folks at Oxford's Centre for Evidence-Based Medicine and others who understand the defects of "computer models" as a basis for science or policymaking. Unsurprisingly, she is more or less led to the conclusion that the lockdown (in the U.K.) was a mistake.

A posteriori, we see that the models producing cataclysmic predictions were wrong. But it wasn't just a matter of bad luck. It is not a matter of bad luck that everything that Neil Ferguson has ever predicted was wrong. Even a priori, there was absolutely no reason to think that computer models such as Ferguson's would be accurate or helpful in predicting complex phenomena in the world – and by complex, at least in this text, I mean "having very many interacting ingredients".

What is such a "computer model" from a scientific viewpoint? It has some aspects that are totally unrelated to science – science as a method to look for the truth about Nature. It could have nice graphics, good PR, whatever. Clearly, a person with an IQ above 80 understands that these aspects shouldn't influence our trust in the predictions at all. They have nothing to do with the science.

The part of a "computer model" that has something to do with science is simply

a numerical solution of a coupled system of differential equations (or their discrete approximation) with very many variables.
So Ferguson's model remembers – in one format or another – the number of people in a given area who have some age and some degree of exposure to the virus. All these numbers – a huge number of functions of time – evolve in time, according to some differential (or discrete difference) equations. These equations have a huge number of parameters which represent how people affect others (or what the probability of one influence or another is: Ferguson's models are stochastic ones and depend on a random number generator, too). All these parameters, or at least all parameters that significantly influence the outcome, must be demonstrably (approximately) correct for us to have a justified faith in the (approximate) correctness of the model's predictions.
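To make this class of equations concrete, here is a minimal hypothetical sketch – the textbook SIR compartmental model, vastly simpler than Ferguson's stochastic agent-based code, but of the same mathematical species: coupled nonlinear equations stepped forward in time, whose output hinges entirely on parameters such as the transmission rate \(\beta\) and the recovery rate \(\gamma\) (all numbers below are illustrative, not anyone's calibration):

```python
# Minimal SIR model (NOT Ferguson's actual model): s, i, r are the fractions
# of susceptible, infected, and recovered people; beta and gamma are the kind
# of parameters that must be approximately right for the output to mean anything.

def sir(beta, gamma, s0=0.999, i0=0.001, days=160, dt=0.1):
    s, i, r = s0, i0, 0.0
    for _ in range(round(days / dt)):
        ds = -beta * s * i             # susceptibles get infected
        di = beta * s * i - gamma * i  # infected grow, then recover
        dr = gamma * i                 # recovered accumulate
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# The final epidemic size is extremely sensitive to beta/gamma (= R0):
print(sir(beta=0.30, gamma=0.10))   # R0 = 3: most of the population infected
print(sir(beta=0.12, gamma=0.10))   # R0 = 1.2: a far smaller epidemic
```

Changing the single ratio \(\beta/\gamma\) changes the predicted final epidemic size dramatically, which is the whole point: the brute force of the integration adds nothing if such parameters are guessed wrongly.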

Ferguson was light years away from obeying this condition.

Let us take some physical system. We may find evidence that the function \(y(t)\) evolves according to the differential equation\[

\frac{dy}{dt} = A(1+y^2), \quad A\in \RR.

\] Well, a computer modeler may solve this equation numerically. He imagines that the function \(y(t)\) is represented by its values \(y_N = y(N\epsilon)\) where \(N\in\ZZ\) and \(\epsilon\) is a small positive constant with the units of time. The differential equation above may be reduced to the difference equation\[

\frac{y_{N+1}-y_N}{\epsilon} = A(1+y_N^2)

\] and a computer may simply be told to calculate \(y_{N+1}\) from \(y_N\) using this equation, recursively. All these steps may be done numerically, with some small error margin. The precision may be huge and the coder may be proud of her ability to write this code to solve the differential equation. She may get an expensive supercomputer for one billion dollars from the government to do this task. She gets some results and interprets them as predictions of fatalities or something else.

It must be great science because she is female, the computer was expensive, and lots of calculations were done by this infallible pile of silicon.

Meanwhile, a really good scientist thinks very differently. First, someone who has at least some basic skills in college mathematics may immediately see that the differential equation above may be solved right away (despite the fact that it is a nonlinear equation):\[

y(t) = \tan (At+B).

\] Here, the parameter \(A\) of the original equation influences the solution. A new parameter \(B\in\RR\) may be chosen – differential equations of the first order have one adjustable parameter (imagine it is equivalent to choosing the initial condition \(y(0)\)). Great. So the first important lesson – which is way too frequent in the real world – is that the government has wasted one billion dollars by buying an expensive computer for an incompetent scientist.

This is a trivial task that may be solved analytically and a good scientist should actually know how to do it. The addition of the computer makes the whole "science based on models" less trustworthy, not more trustworthy.
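As a sanity check – a hypothetical snippet, not anyone's production code – one can verify that the brute-force Euler recursion above lands on the analytic solution \(y(t)=\tan(At+B)\); here with \(A=1\) and \(y(0)=0\), i.e. \(B=0\):

```python
import math

# Euler recursion y_{N+1} = y_N + eps*A*(1 + y_N^2) for dy/dt = A*(1 + y^2),
# compared against the closed-form solution y(t) = tan(A*t) (so B = 0).
A = 1.0
N = 10_000          # number of steps
eps = 1.0 / N       # step size; we integrate up to t = 1, safely below
                    # the pole of tan at t = pi/(2A)
y = 0.0
for _ in range(N):
    y += eps * A * (1.0 + y * y)

print(y, math.tan(A * 1.0))   # the numerical and analytic values nearly agree
```

Ten thousand arithmetic operations to recover, approximately, what one line of calculus gives exactly.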

Second, a good scientist cares about the quality of the assumptions and input. So he asks:

* Do we really know the value of the parameter \(A\)? What is the error margin?
* And more importantly, do we really know that the real-world system we want to predict obeys an equation of this type? Is it a good approximation? Have we chosen good terms on the right-hand sides of the equations? Good functional relationships? Have we at least chosen the relevant variables? Don't the viral doses matter? Was it justified to completely ignore this real variable describing an infection?

If someone is sloppy about these fundamental things, it's just too bad. The product is unlikely to be good because of GIGO: garbage in, garbage out. He can't improve the quality of his science by sleeping with government officials of assorted genders or with married, Soros-funded eco-terrorists. He can't fix the problems by having an expensive computer with a contrived program running on it.

Now, the equation above was elementary. The derivative of the tangent (sine over cosine) of \(x\) is "derivative of sine times cosine minus derivative of cosine times sine" over "cosine squared". Unsurprisingly, these two terms are exactly \(1+\tan^2 x\). You don't even need to know that \(\sin^2 x + \cos^2 x = 1\). If we want to predict the spreading of a virus, we need a huge number of functions of time, \(y_j(t)\), which replace the single function \(y(t)\) from our trivial example.
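Spelled out as a one-line computation (dividing the numerator termwise by \(\cos^2 x\), so the Pythagorean identity is indeed never needed):\[

\frac{d}{dx}\tan x = \frac{\cos x\cdot \cos x - (-\sin x)\cdot \sin x}{\cos^2 x} = \frac{\cos^2 x}{\cos^2 x} + \frac{\sin^2 x}{\cos^2 x} = 1 + \tan^2 x.

\]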

The equations that make them evolve depend on all these variables. The derivative of every variable \(y_j(t)\) is affected by a function of all other variables \(y_k(t)\). And because the equations are almost never linear, this dependence of "all the functions" on "all the functions" cannot be summarized by a "matrix" of parameters \(A_{jk}\) which would generalize the parameter \(A\) above. You need an infinitely greater number of parameters to describe a general equation that actually governs the system.

If any (a single one) of these parameters is unknown or incorrect, then the predictions of your equations – and, equally, the computer model that just calculates these equations – become untrustworthy junk. And of course, if you are predicting how many Britons die, it's easy to find a parameter that was unknown or seriously incorrect – the case fatality rate.

If your equation – or computer model – assumes that 5% of those who contract the virus die (i.e. the probability is 5% that they die within a week of getting the virus), then your predicted fatality count may be inflated by a factor of 25, assuming that the actual case fatality rate is 0.2% – and it is something comparable to that. It should be common sense that if someone makes a factor-of-25 error in the choice of this parameter, his predictions may be wrong by a factor of 25, too. It doesn't matter if the computer program looks like SimCity with 66.666 million Britons represented in a chunk of a supercomputer's giant RAM. This brute force obviously cannot compensate for a fundamental ignorance of, or error in, your choice of the fatality rate.
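The arithmetic really is that trivial; as a hypothetical back-of-the-envelope sketch (the infection count below is made up purely for illustration):

```python
# Predicted deaths scale linearly with the assumed case fatality rate,
# so an error in the CFR propagates straight into the headline number.
infections = 10_000_000              # hypothetical number of infections

deaths_model  = infections * 0.05    # model's assumption: CFR = 5%
deaths_likely = infections * 0.002   # plausible reality:  CFR = 0.2%

print(round(deaths_model / deaths_likely))  # → 25, the inflation factor
```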

I would think that most 3-year-old kids get this simple point, and maybe that opinion is even right. Nevertheless, most adults seem to be completely braindead today and they don't get this point. When they are told that something was calculated by a computer, they worship the predictions. They don't ask whether the program was based on a realistic or scientifically supported theory. Just the brute power of the pile of silicon seems to amaze them.

Even if you happened to use a realistic case fatality rate, the contrived model has lots of other moving parts and parameters that will make your predictions incorrect (and often wildly so) if those moving parts are chosen incorrectly. To "compensate" for this risk, someone increases the resolution or the number of moving parts. Surely it must help, he tells his bosses and the public.

This is not science. It is stupidity. Needless to say, like so many defects of society's relationship to science, this fatal defect was nurtured by the climate hysteria; the coronavirus hysteria has simply used the existing infrastructure of stupidity, the superhighways built to push the stupidity into every little corner and down everyone's throat. The same things were happening with the climate models that almost always predicted complete gibberish – and indeed, the same Neil Ferguson has participated in that atrocious, criminal enterprise, too. A single parameter, like the CO2 sensitivity of the climate (or the feedback coefficient that enhances it), was clearly essential for all the predictions, but almost no proper work was done to find the truly adequate value of this parameter (and others). Instead, the climate modelers were impressing morons by having a big grid of datapoints, an application of the brute force of computers that wasn't helpful for the science at all.

So we have always agreed e.g. with Richard Lindzen that an important part of the degeneration of climate science was the drift away from proper "theory" to "modeling". A scientist may lean towards doing experiments – finding facts and measuring parameters with her hands (and much of experimental climate science remained OK; after all, Spencer and Christy are still measuring the temperature by satellites etc.) – or towards theory, for which the brain is (even) more important than it is for the experimenter. Experimenters sort of continued to do their work. However, it's mainly the "theorists" who hopelessly degenerated in climate science, under the influence of toxic ideology, politics, and corruption.

The real problem is that proper theorists – those who actually understand the science, can solve basic equations off the top of their heads, and are aware of all the intricacies in the process of finding the right equations, the equivalence and inequivalence of equations, universal behavior, statistical effects etc. – were replaced by "modelers", i.e. people who don't really have a clue about science, who write computer-game-like code, worship their silicon, and mindlessly promote whatever comes out of this computer game. It is a catastrophe for the field – and the same was obviously happening to "theoretical epidemiology", too.

"Models" and "good theory" aren't just orthogonal. The culture of "models" is actively antiscientific because it comes with the encouragement to mindlessly trust in what happens in computer games. This isn't just "different and independent from" the genuine scientific method. It just directly contradicts the scientific method. In science, you just can't ever mindlessly trust something just because expensive hardware was used or a high number of operations was made by the CPU. These things are really negative for the trustworthiness and expected accuracy of the science, not positive. In science, you want to make things as simple as possible (because the proliferation of moving parts increases the probability of glitches) but not simpler; and you want to solve a maximum fraction of the issues analytically, not numerically or by a "simulation".

Pseudoscientists such as Neil Ferguson have used their pseudoscience to shut down much of the world for several months. They have caused damages equal to trillions of dollars. I am sure that the execution of Neil Ferguson and every single person on Planet Earth who has ever said something positive about this giant aßhole wouldn't be a sufficient compensation for the gigantic losses that they have caused to mankind.

However, the bastardization of pure science is probably something that I find even worse.

And yes, I find the stupid people's trust in charlatans such as Ferguson to result from the same rudimentary misunderstanding of science as the attraction of the people to "theories of everything" similar to Stephen Wolfram's. Underneath both groups, you see this utterly irrational and stupid idea that "when a computer was used and did lots of operations, the results must be important and/or true". It's just not true at all. It is not true in any approximation, it is not true in any important subclass of problems and situations. A computer is just a slave of a human. The human prescribes what the computer should do. If the human tells the computer "do something stupid in a contrived way", the computer will do exactly that, and that's what is happening most of the time because most of the people who have access to computers are morons.

And Stephen Wolfram (who certainly isn't generally stupid but just behaves that way in the context of "physics") – with his graphs or cellular automata – much like Leo Vuyk – with his pictures of entangled strawberries – are just drawing stupid pictures. They may be "constructing computer representations" of some objects that resemble objects from the real world. But this is not physics – more generally, it is not natural science. Physics and science aren't about drawing pictures. They don't revolve around "creating just some mathematical, let alone discrete, representations of objects from the real world". Instead:
Science is a systematic framework to figure out which statements about Nature are correct and which are incorrect.
And according to quantum mechanics, the truth values of propositions that include "predictions for the real world" must be probabilistic. Quantum mechanics only predicts the "similarity [of propositions] to the truth" which is the translation of the Czech word for probability (pravděpodobnost).

It is the truth values (or probabilities) that matter in science – the separation of statements to right and wrong ones (or likely and unlikely ones). Again, I think that I am saying something totally elementary, something that I understood before I was 3 and so did many of you. But it seems obvious that the people who need to ask whether Leo's or Stephen's pictures are "theories of everything" must totally misunderstand even this basic point – that science is about the truth, not just representation of objects.

In mathematics – which is a good enough metaphor for science here – you may construct objects. The technical details of how you do it seem to be the obsession of axiomatic mathematics and set theory. In set theory, you can really create everything out of braces and commas. {} is the empty set. You may have a set that only has one element, the empty set: {{}}. Then you can have more complicated sets such as {{},{{{}}}}. In fact, by combining braces and commas in this way, you may represent all integers, sets of integers, all general functions, functionals, coherent sheaves, whatever you like (assuming that you allow infinitely long sequences of braces and commas).
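For fun, the brace-and-comma game can even be played on a computer; here is a hypothetical Python sketch using `frozenset` as the braces, building the von Neumann naturals (\(0=\{\}\), \(n+1=n\cup\{n\}\)) – one standard variant of the encoding described above:

```python
# Von Neumann naturals from nothing but the empty set:
# 0 = {}, 1 = {0} = {{}}, 2 = {0, 1} = {{}, {{}}}, ...
def ordinal(n):
    s = frozenset()                 # 0 is the empty set {}
    for _ in range(n):
        s = s | frozenset([s])      # successor: n + 1 = n ∪ {n}
    return s

print(len(ordinal(3)))              # → 3, since 3 = {0, 1, 2}
print(ordinal(2) in ordinal(3))     # → True: each natural contains its predecessors
```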

This minimalistic set of building blocks may be said to be cute but it doesn't help you with most of the mathematics that we need. Nested braces won't help you solve the differential equation we started with. They won't help you with an overwhelming majority of mathematical problems, either. Mathematics isn't primarily about "building objects from simple building blocks such as braces". Mathematics is about proving valid assertions assuming that some initial propositions (called the axioms) are valid. Mathematics is about the truth. So is science. Mathematics and science are about the truth; the fields obsessed with pictures are the visual arts. Arts and sciences aren't the same thing (although they're clumped into the Faculty of Arts and Sciences at Harvard).

My real point is that if you draw pictures or create discrete representations using strawberries, cellular automata, or graphs (not to mention various octopi and other things that have been offered by hundreds of childish crackpots in the past), you aren't segregating the truth from falsity at all. Using strawberries, cellular automata, graphs, or octopi, you can draw things that resemble the truth but you can also draw things that resemble the untruth. Because your work doesn't help us segregate the truth from the untruth, its scientific value is strictly zero. It is simply not science. It is absolutely obvious that Leo or Stephen aren't deriving the Born rule or the Standard Model Lagrangian. They are drawing pictures (often using an artificially restricted toolkit). You simply can't ever find the truth just by drawing pictures. All the details about the strawberries and graphs are totally irrelevant. You don't need to study them; it is guaranteed that there is nothing of scientific value inside.

As a high school student, I read Richard Feynman's "Surely You're Joking, Mr. Feynman!" for the first time. I read most of it during German classes around 1991 – just one reason why my German sucked after that one year LOL (the German teacher confiscated my copy, which was borrowed from a friend, but I cleverly and successfully stole the book back in the dining hall; she was angry LOL) – but I loved it and Feynman sort of replaced Einstein as my most favorite scientist. I thought that I understood "everything" and why it was true. But the reality was a bit different. I did need some years to appreciate some things. For example, Feynman wrote about an early computing boss at the Manhattan Project (only his wife Mary – whom he also hired – is celebrated today due to the omnipresent feminazism):
Well, Mr. Stan Frankel started this program and began to suffer from a disease, the computer disease, that anybody who works with computers now knows about. It’s a very serious disease and it interferes completely with the work. It was a serious problem that we were trying to do. The disease with computers is that you play with them. They are so wonderful. You have these \(x\) switches that determine, if it’s an even number you do this, if it’s an odd number you do that, and pretty soon you can do more and more elaborate things if you are clever enough, on one machine. And so after a while the whole system broke down. He wasn’t paying any attention; he wasn’t supervising anybody. The system was going very, very slowly. The real problem was he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent x, and then it would start and it would print out columns and then bitsi, bitsi, bitsi and calculate the arc-tangents automatically by integrating as it went along and make a whole table in one operation. Absolutely useless. We HAD tables of arc-tangents, but if you’ve ever worked with computers you understand the disease. The DELIGHT to be able to see how much you can do. But he got the disease for the first time, the fellow who invented the thing got the disease.
For years, I didn't quite know where I could see the phenomenon that Feynman discussed. I thought that Feynman just "disliked computers" and I was a bit different, perhaps because it's a different epoch, so I could just ignore Feynman's story about the "computer disease". But today, I understand the "computer disease" perfectly – and the point he was conveying was much more objectively valid and important than his personal sentiments towards computers. It's the disease of becoming addicted to the "playing", producing ever more elaborate constructions in the "virtual world", which completely distracts you from the real world and the problems you should actually be solving. While I used to think that "no one could possibly suffer from a computer disease that prevents one from distinguishing the objects in the RAM from the real world", I now know that my optimism was misplaced.

All the people who believe in the – totally inadequately contrived – models (such as Ferguson's) must be suffering from this kind of "computer disease". And all of them completely misunderstand what the scientific method – or the skills and procedures of good theorists – is all about. A complex enough equation or computer model is only scientifically useful if at least an overwhelming fraction of its traits and parameter values are known with sufficient near-certainty or accuracy. If you haven't measured many properties of an object (like Covid-19), then it is completely inadequate to describe the object (the disease) with equations or models whose general form depends on many parameters! It's just so obvious to me but it is clearly not obvious to most people.

Consider even the elementary question: How does Ferguson's program "know" that it is not emulating the common cold or measles instead? Clearly, the programmer couldn't have "quite" inserted the information for the computer to know what exact disease we are dealing with (because the relevant properties of the disease weren't really known, and for other reasons). If you aren't really sure that Ferguson's program is more applicable to Covid-19 than to the common cold or measles, why would you trust it as a prediction for Covid-19? Most people seem incapable of this and similar basic reasoning.

Instead, when you only know a bunch of things about a virus, you need to use much less contrived theories, not models, as the mental image of what is going on. And another good point is that if you have a sufficiently simple mental image of what is going on, a good enough scientist may simply predict – either qualitatively, approximately, or completely precisely – what the theory predicts "by pure thought", "off the top of his head", or "analytically". I surely can solve a huge fraction of such problems with an adequate degree of certainty or accuracy, and if someone tells you that it is impossible, he is simply lying. If he needs a computer to answer some simple questions (those with a small number of moving parts, like the differential equation at the top), then it proves that he is a bad theorist, not a good one! Yet, a majority seems to get these things totally upside down.

A majority suffers from the computer disease, and the most powerful "scientists" of the world we inhabit who are helping to create policies are imbeciles who can't really solve any problems off the top of their heads, so they spread the totally wrong ideology that good science means a mindless trust in the output of a computer program, regardless of how the program was written. That's another reason why it is so easy for charlatans such as Mr Neil Ferguson to brainwash millions and promote their cataclysmically destructive policies.

Already decades ago, I noticed the widespread, inflated laymen's understanding of the role of computers in science. Some people would tell me that one uses computers all the time while doing particle physics or string theory. It must be the computers that do the thinking. Not really. In a big majority of important papers, it's almost purely the humans who do the thinking. If a computer sometimes calculates something useful (like a quantity in lattice QCD, a rare situation in which the computers may show their muscles), a good theorist simply learns the lesson obtained with the computer's help – and next time, he no longer needs a computer for that! Consequentemente (I have been learning Portuguese: "consequently"), almost all of good science (theory) takes place in good scientists' heads, not in computers!

Theoretical physics is a highly intellectual profession exactly because it is done without "tools"; occupations such as excavator operator, toll-taker at the Golden Gate Bridge, or cashier use excavators, ticket printers, and checkout counters for repeated mechanical operations exactly because those jobs are not terribly intelligent and some brute force matters more. A scientist-theorist has an intelligent job description, so he just can't spend most of his time doing mechanical things repeatedly – and (with some qualifications) computers really can do only mechanical things so far.

Incidentally, the laymen's tendency to favor "models over theory" might be a manifestation of a somewhat more general trait, namely their belief that "physics as a unification of sciences is really impossible". The phrase "theory of everything" may sound like oversimplified PR, but the laymen usually make the opposite mistake. They imagine the "wisdom about the world" as an infinite system of mostly unrelated things that can't ever be understood by any single person. But science, and especially physics, is pretty much doing what they find impossible. A good scientist really can solve "most things", at least in his discipline or subdiscipline, and do so using his head, not a computer. Every question may be mapped or reduced to some insights that he has understood – a finite number of them or their combinations.


