Friday, May 18, 2012

Where and why people's reasoning starts to diverge from the physicist's

Introduction to all the conceptual mistakes that people make when they think about science and Nature

When you look at the whole set of scientific misconceptions that I have been trying to correct and clarify on this blog for years, whether they concern the climate panic, the rejection of quantum mechanics, the denial of the arrow of time, hopeless research projects in quantum gravity, or anything else, you could think that this set depends on a large number of isolated technical details that one should simply learn and that many people haven't.

But I don't actually think that is the case; I think that most of the wrong attitudes, wrong conclusions, and delusions are due to a few more general mistakes in people's thinking, to their revolt against some very universal principles of science. If one learns these principles and starts to think scientifically, he or she may exploit them many times. In other words, I believe that most people's mistakes amount to a rejection of principles that people should probably internalize well before puberty; otherwise it may be too late. And maybe it's not too late.

Let me try to map this tree of the scientific approaches (well, there is only one scientific branch at the end although it may be accessed from several directions) and their "competitors".

Science vs non-science

Near the very root of the tree, let us split off the people who reject the scientific method as a matter of principle. When they face a new or old claim that someone wants to prove or check or dispute, these people just don't believe that the right answers may be found by evaluating empirical evidence, evidence that can be gathered now, in the lab, and repeatedly, in combination with logical and mathematical reasoning.




Of course, now I am talking about folks like my sister, some relatives of yours (I hope), and a large group of people whose number of elements is, I believe, greater than 5 billion. It seems to me that a vast majority of the world population refuses the idea that the truth about most sufficiently general questions referring to the real world may or should be studied by the scientific method. Why do they do so? One way to answer is that most people just haven't tried, so they just don't know that it works.

They haven't ever successfully completed a single important enough exercise of this sort, one that led to a true yet impressive result about a previously uncertain or mysterious question; and if they have completed one, one wasn't enough. They haven't crossed the critical mass of such arguments needed to convince themselves that observations and doable experiments are, together with a mathematically oriented analysis, the superior way to decide which answers are true and which answers are untrue.

One could perhaps argue that the number of people who can actually usefully use science in "new contexts" – i.e. contexts away from some highly specialized situations in which they were trained to behave according to some mechanical ad hoc rules – isn't much larger than the remaining 2 billion, which would mean that the 5 billion people are doing a sensible thing when they refuse science. They couldn't do it well, anyway.

I am not sure whether this counting is right. It seems conceivable to me that the influence of the scientific method could be much higher than it is if the intelligence and skills of the people were the only limiting factor.

In other words, I believe that the deliberate preference for various superstitions, obsolete religious dogmas or, on the contrary, newly created religious superstitions such as a frying Earth and so on is seriously lowering the efficiency with which most people on Earth use their brains. For all of them, the world is full of witches, homeopathic solutions, prophets, dowsing sticks, lucky numbers, geopathogenic zones, miracles, divine truths revealed to shamans, tipping points leading to the Armageddon, and so on. Scientists – people who actually try to use brains, logic, and mathematics applied to the observations – are just some exotic freaks who deserve humiliation and who can't ever reach the glory of the true leaders such as the witches and prophets. Many of these 5+ billion people would be capable of disproving this opinion of theirs if they tried (and if they decided not to fool themselves) but they just don't want to try. They have already made up their minds – either individually, or they were forced to adopt the opinion – and doubting it would mean undermining their own spiritual existence, which is what they don't want to do.

Fine. If someone rejects the scientific approach to the truth in the most general sense, you can't do much against it and there isn't too much to explain. Most people on Earth are just either too silly or too uneducated or too brainwashed or too emotional or unexposed to the ideas underlying the modern science and technology. But this blog entry would be very cheap if they were the primary target. I want to talk about the folks who superficially claim that they want to pursue the scientific method but they don't or they make errors that look rather elementary, at least from some viewpoint. There are still many levels at which one may deviate from the scientific reasoning.

By the scientific reasoning, I mean an appropriately accurate, rigorous, and reliable analysis of the past data and, whenever possible, data that can be repeatedly obtained by experiments. This analysis depends on mathematical logic and mathematics in general. In other words, by the scientific reasoning, I mean the physicist's approach to questions.

It doesn't mean that I am only talking about questions that are traditionally studied by physicists. I mean any sensible questions about the observable world. Sciences different from physics will be considered approximations of the legitimate approach, i.e. physics. As long as these approximations are OK for some purposes, these sciences will be viewed as OK; once they deviate, the widespread attitudes in the other sciences that differ from the physicist's approach will, of course, be identified as errors as well.

Rejection of mathematical logic

The first branch diverging from the scientific one right after the superstitious branch discussed above is a branch that denies the mathematical logic. I am talking about the people who don't think or who don't "agree" that our knowledge or their knowledge about the world may be organized into propositions that are either right or wrong or something in between but whose validity may be, in principle, studied, whose validity matters, which can a priori be right or wrong, and which may be correlated with other propositions by the rules of mathematical logic.

For example, if we know that "A implies B", we also know that "not B implies not A". Also, if we know "A" and "A implies B", we also know "B". And so on. You hopefully know what I mean by mathematical logic.
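These rules can be checked mechanically. Here is a minimal Python sketch (everything in it is textbook logic, not anything specific to this post) that verifies the contrapositive and modus ponens by brute-force enumeration of all truth values:

```python
from itertools import product

def implies(a, b):
    # Material implication: "A implies B" fails only when A holds and B doesn't
    return (not a) or b

for a, b in product([False, True], repeat=2):
    # Contrapositive: (A implies B) is equivalent to (not B implies not A)
    assert implies(a, b) == implies(not b, not a)
    # Modus ponens: whenever A and (A implies B) both hold, B holds too
    if a and implies(a, b):
        assert b

print("contrapositive and modus ponens hold for all truth values")
```

Because propositional logic has finitely many truth assignments, such exhaustive checks settle these equivalences completely.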

If we're looking at a scientific problem, we must first transform its open questions and mysteries into some operationally meaningful propositions whose validity may be studied. When we find lots of evidence that some proposition is true, we have also found evidence that the propositions that contradict it (e.g. its negation) are false. There are important aspects of such propositions: they must have some consequences that aren't "totally obvious" i.e. that depend on further research; and – this is really the same thing as the previous point although it sounds different – we have to be open to both possible answers, Yes/No, at the beginning. If you can logically prove that a proposition is true, then it is a tautology and you shouldn't study it anymore. On the contrary, if you can't prove or disprove a proposition, you must always be open to both possibilities until you collect enough evidence for one of the answers. And if a proposition doesn't make any impact on the things you may observe, not even in principle, then it means that you will never learn anything about it and you shouldn't try to study it because such research is futile. Questions that "don't belong to physics" because they are too philosophical are examples of this category.

You might think that no one who believes in the scientific method can question this basic logical framework. But of course, you may actually find lots of people who do – including those who consider themselves highly "philosophically" sophisticated when it comes to science and its methodology. In fact, I would say that the whole movement attempting to reject the proper quantum mechanics in its Copenhagen form (or its modern, equivalent presentations) is an example of the refusal to follow the mathematical logic as a vital skeleton of science. Why?

It's because these people want to declare certain statements as true and important ones – e.g. that the world has objective accurate properties before the observations – but even though these statements are important in their approach and can't be proved by pure logic, they don't want to discuss whether the evidence supporting these claims is actually stronger than the evidence supporting the negation of these claims.

You may see that I am interpreting these folks as cherishing dogmas that can be proved neither by pure logic or mathematics, nor by the observable evidence, nor by their combination. In this case and many others, these dogmas may actually be proved false. From one perspective, you may say that their desire to believe certain dogmas while disallowing their negations to be even considered makes this branch a subset of the previous one, the purely religious or superstitious one.

Similar comments apply to many other erring "scientists". The global warming fearmongers want to make other "key" statements, e.g. the climate is changing dangerously, but they never want to be sufficiently specific so that the validity of the statement may be compared with the validity of its negation. More generally, they want to do everything possible so that the negation can't even be considered. But according to the logical approach, if the negation can't even be considered, then even the original statement doesn't carry any information, anything that could influence a rational person. Only statements whose validity (or probability) is shown to be different from the expectations may influence a rational person's opinions.

Needless to say, many people who are denying the basic rules of logic actually know that they're doing something wrong but they're addressing their demagogic arguments to consumers who honestly can't find the problem. I am confident that a vast majority of the people hired as climate alarmists by universities and other places would be able to figure out, if they did all the steps honestly and correctly, that the answer to the question whether it is a beneficial idea to dramatically reduce the CO2 emissions within a few years is No. But the mixture of scientific propositions with pop-science propositions (i.e. not-so-scientific ones) and with various mutually inconsistent conventions to gauge the validity of these propositions is explosive and may be abused to find lots of space to change the final conclusions.

Refusal to consider the context and adjacent propositions whose validity is clearly relevant for your question; refusal to isolate questions from completely decoupled ones

Fine, so the first issues that science may want to clarify before its mathematical apparatus gets sufficiently sophisticated for detailed calculations are Yes/No questions. Scientific statements that deserve further research are never tautologies; we don't know whether a proposition or its negation is true; evidence must be collected to decide – or try to decide because we're never guaranteed that a problem may be fully solved within a period of time.

I decided to insert this short section that covers two errors that are opposite to one another. The climate alarmists will be my examples once again but the errors are much more widespread.

The first error is that many people try to answer a question but they ignore other questions that are demonstrably relevant; the second error is that they fail to stop talking about questions that are demonstrably irrelevant. Let us mention examples.

When we talk about the evolution of the Earth's temperature, we are talking about the increase or decrease. The simple question "is the temperature rising?" is too ill-defined because there isn't just one temperature (consider many places on the globe and above the globe) and there isn't just one time scale (and one particular position in time) in which the question may be addressed. All these details have to be added to the question.

Once they're added, it's important to notice that the answer may be both "increasing" or "decreasing" (or, and it is a measure-zero possibility that is however very important in the presence of nonzero error margins which includes many real-world situations, "not changing at all"). And indeed, the probability is 50% vs 50% for increasing vs decreasing temperatures for most well-defined implementations of the question. One has to do lots of operations – averaging temperatures over 20-year or longer periods (to get rid of the short-term "noise" from various sources) and over the whole globe (to get rid of the "regional weather" and "regional climate" which is also a "noise" for this question) – if he wants to see a clear excess of "increasing temperatures" over "decreasing temperatures". And one will only find such an excess in a few recent centuries, since the little ice age or so. For longer periods of time, the signs start to be mixed once again.
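The effect of averaging can be illustrated with a hedged toy sketch in Python; the trend and noise magnitudes below are purely illustrative assumptions, not fitted climate data. Year-to-year comparisons of a noisy series come out roughly 50:50, while block averages over long windows reveal the sign of the underlying trend:

```python
import random

random.seed(3)

# Illustrative toy series: a slow 0.005 C/yr warming "signal" buried in much
# larger year-to-year "noise" (both numbers are assumptions for this sketch).
n_years = 600
temps = [0.005 * t + random.gauss(0.0, 0.2) for t in range(n_years)]

def fraction_increasing(series):
    """Fraction of consecutive pairs in which the later value is higher."""
    ups = sum(1 for a, b in zip(series, series[1:]) if b > a)
    return ups / (len(series) - 1)

# Year-to-year: the noise dominates, so "increasing" vs "decreasing" is
# roughly a coin flip despite the underlying warming signal.
raw = fraction_increasing(temps)

# After averaging over 20-year blocks, the noise is suppressed by sqrt(20)
# and almost every block comes out warmer than the previous one.
blocks = [sum(temps[i:i + 20]) / 20 for i in range(0, n_years, 20)]
smoothed = fraction_increasing(blocks)

print(raw, smoothed)
```

The averaging window has to be long enough for the accumulated signal to exceed the damped noise; that is the quantitative content of the "20-year or longer" prescription above.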

I want to say that all the known sources of temperature variations are clearly important for the question "whether the temperatures are rising". Especially in a strongly interacting system, various phenomena influence others. We clearly can't predict anything important about the evolution of temperatures if we don't evaluate the contributions from changes in the cloud cover, volcanoes, and many many other things that obviously have a sufficient magnitude to matter. And on the contrary, we can't ask a question "just about some change attributed to something" because it's not directly measurable. Thermometers don't show us which fraction of the temperature came from clouds or power plants. They just show a single temperature.

And when the atmosphere is changing, many things may be changing with it. One can't neglect changes in the cloud cover, ocean currents, and many other things. Clouds influence the surface temperatures and vice versa. Here I want to say that it is a fundamentally flawed idea to try to isolate a subdiscipline that clearly can't be isolated. If you have a "chunk" of questions and mechanisms that strongly influence many others in the "chunk", the "chunk" is the minimal entity that may deserve its own scientific discipline or specialization.

Many people violate this obvious derived rule. They miss the forest for the trees. In many cases, it's flagrantly obvious that there is a forest around your question (your tree) and this forest makes an impact on your tree. These questions "adjacent to the object of your obsession" are relevant for the "object of your obsession" and you simply shouldn't ignore them. Your tree usually can't be modeled as a tree in a vacuum if it is a tree in a dense forest. Also, you shouldn't pretend that the "object of your obsession" is very important unless you actually have some evidence for that statement. And you usually won't find such evidence if the object is demonstrably just a single small wheel or gear in a larger, internally interacting conglomerate.

If I use a positive language, an important sequence of steps in the development of a scientific discipline is to find out what actually belongs to the discipline and what doesn't. Many people have said that the true art in physics is about the ability to find out what can be neglected. The things that are interacting with your favorite objects all the time – in both directions – clearly can't be excluded. On the other hand, the entities whose interactions are negligible may be and probably should be neglected. Again, actual evidence (and not unjustified dogmas defended by loud screaming or "authorities") is needed to find out whether something may be neglected or not.

People err on both sides. It's easy to invent an example of the opposite mistake. When some people start to talk about Richard Lindzen's being a smoker who pays a few dollars to tobacco companies by smoking XY cigarettes every day in the context of the analysis of the H2O circulation patterns in the atmosphere, you may be pretty sure that excessively detached issues have been included in the analysis of H2O in the atmosphere, and the people who think of tobacco when they try to analyze the energy flows between the clouds probably won't get too far in the accuracy of their research; they're too distracted by irrelevant things (like the artist who says "it's just like f*cking" when Richard Feynman tries to teach him about solenoids), if I have to avoid the term imbecile.

This example was meant to be a bit comical; the people who try to link the atmospheric physics to conspiracy theories about tobacco industry or the Big Oil are real nuts. But even among people who are not obvious nuts, you will find lots of people who isolate their questions from others although they can't be separated; or people who mix topics – classes of propositions – that have no (or almost no) impact on each other.

Rejection of quantification of claims; failure to appreciate continuity of quantities

Mathematics is a key subject behind the natural science. But it has many subdisciplines and mathematical logic discussed above, while it's paramount, is "more discrete" a discipline than the disciplines of mathematics that are really critical for the actual hard work in physics. The important disciplines deal with continuous numbers – real numbers or other number systems that may be built out of real numbers (although I surely don't like this "constructive" definition of the complex numbers which are ultimately more fundamental than the real ones).

While there are important Yes/No questions everywhere in science that can be sharply answered – for example, "Are the postulates of QM exactly right?" or "Is the information exactly conserved in principle when a black hole evaporates?" (both answers are "Yes", we've learned) – most questions in science are about more continuous things. It means that when we try to make them more meaningful from a scientific viewpoint, we should convert them to the form starting with "How much...". The empirical evidence that is relevant for such questions is about the measurement of a priori continuous, real values of various quantities. It doesn't mean that there are no examples in which we may measure binary things but it's clear that if we're measuring something continuous, we're getting more complete and more accurate information.

Physics – and science – is all about the continuous numbers. The a priori values of pretty much any quantity we encounter in physics are real. That's true for distances, times, voltages, and so on, and so on. It doesn't mean that we can't ever find quantities that are shown to take values in a discrete set – e.g. the angular momentum in quantum mechanics – but such a restriction may only be assumed if there's actually some evidence or proof for that. It's absolutely nonsensical to try to assume discreteness of a quantity without the evidence – or contrary to the evidence. This approach amounts to an additional assumption, an extraordinarily unlikely one, that the quantity can never take values in infinitely many a priori sensible intervals.

Of course, various discrete physics revolutionaries err in this elementary aspect of physics. In fact, their would-be revolution is all about the attempt to deny the continuous character of almost all propositions (and measurements) in physics. They either deny that measuring devices show continuous values; or they deny that there's an a priori nonzero probability for the devices to show any number in an interval; or they make a similar mistake.

Let me emphasize that the importance of the real numbers in physics doesn't "supersede" the importance of mathematical logic. Even when you use real numbers in mathematics, you may still construct propositions involving such numbers that are either true or false (think about the identities such as sin(2x) = 2 sin(x) cos(x)). This is true in physics, too. The only new aspect of physics relatively to mathematics is that some quantities appearing in these propositions represent values that were, are, or will be measured in the real world (think about s = gt^2/2 for an accelerated motion).
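Both kinds of propositions are easy to spot-check numerically. A small Python sketch (the sample points and the rounded value of g are conventional choices, nothing more):

```python
import math

# The identity sin(2x) = 2 sin(x) cos(x) holds for every real x;
# spot-check it at a few sample points.
for x in (0.0, 0.7, 1.9, -3.2):
    assert abs(math.sin(2 * x) - 2 * math.sin(x) * math.cos(x)) < 1e-12

# s = g t^2 / 2: distance fallen from rest after t seconds. Here, unlike in
# pure mathematics, the symbols represent quantities measured in the real world.
g = 9.81  # m/s^2

def distance_fallen(t):
    return 0.5 * g * t ** 2

print(distance_fallen(2.0))  # 19.62 metres after two seconds
```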

I have mentioned "discrete physicists" as the typical villains violating the demonstrably continuous essence of physics. But many others are making the same mistakes. Climate alarmists love to talk about Yes/No questions related to the climate change – probably because they sound more impressive – even though questions starting with "How much" are the only similar ones that have a decent chance to be well-defined and answerable by the scientific method. Many people try to hide this obvious fact. When someone screams (or raps) something like "the climate change is real", it is totally obvious that the very purpose of such screaming is obfuscation and an attempt to prevent one from doing quantitative research. Science can't answer questions as vague and general as "Is climate change real?". Or if it can answer them, the answer is almost certainly "Yes" but this answer has no surprising consequences.

A whole category of suberrors would deal with error margins and statistics. When we talk about the values of quantities in Nature, they're continuous and involve errors coming from many sources. The measuring device doesn't measure "exactly" the quantity we're interested in, either because of its imperfect inner workings or imperfect calibration, or the imperfect way it covers a region, or because of human errors, or because it measures a random quantity similar to the "number of events" which is statistical and inevitably suffers from a statistical error (which becomes less important if we repeat the experiment many times). A scientist must be aware of the existence of error margins, which means that all propositions about real numbers from the real world – typically inequalities involving some functions of the measured quantities – are true or false in the statistical sense only. Many incorrect conclusions occur when people overlook that there are error margins and deduce, for example, far-reaching conclusions from very accurate agreements between pairs of quantities that are clearly coincidental because the compared quantities have larger error margins; many other errors are caused by setting the error margin to zero or by making more subtle errors in the way the error margin is treated.
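The statistical part of the error margin can be illustrated with a short simulation; the numbers below (a true value of 10.0, unit-sized noise) are arbitrary assumptions for the sketch. The scatter of an averaged measurement shrinks like 1/sqrt(N) with the number of repetitions:

```python
import random, statistics

random.seed(1)

# Hypothetical measurement: each trial returns the true value 10.0 plus
# Gaussian noise of unit size (both numbers are arbitrary for this sketch).
def one_measurement():
    return 10.0 + random.gauss(0.0, 1.0)

def averaged_measurement(n):
    """Mean of n repeated measurements; its statistical error ~ 1/sqrt(n)."""
    return sum(one_measurement() for _ in range(n)) / n

# Scatter of the averaged result over many repeats, for two sample sizes;
# a 100x increase in repetitions shrinks the error roughly 10x.
scatter_small = statistics.stdev(averaged_measurement(10) for _ in range(500))
scatter_large = statistics.stdev(averaged_measurement(1000) for _ in range(500))
print(scatter_small, scatter_large)
```

This is why repeating an experiment reduces the statistical error but does nothing against a systematic one: a miscalibrated device shifts every trial by the same amount, and no averaging removes that shift.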

Once again, the quantum deniers may be included in this category of mistakes, too. When they assume that the world has some "objective precise properties" before the measurement, they are doing nothing other than denying the logically tautologous fact that "properties that an object has before the measurement can't be measured" (a failure to eliminate unphysical questions from physics) or the fact that "the results of a measurement are either uncertain or inaccurate or both". If the evidence shows that the electron lands at a rather random place on the photographic plate, and only statistical properties of the results display predictable patterns, you're just not allowed to assume the opposite, especially not if you don't have any theory that would be compatible with this assumption as well as the observations.

Rejection of order-of-magnitude estimates

An important technique used by all good practically oriented physicists – but even many other good physicists (and other scientists) – all the time are order-of-magnitude estimates or dimensional analysis. These are the terms for simplified calculations that obtain the resulting value of a quantity of interest as the product of powers of the input parameters. This resulting value isn't exact but is a reasonable multiple of the right value; the dimensionless coefficient is neither much larger than one nor much smaller than one. When the exponents in the powers are uniquely dictated by the units of the result and the units of the input parameters, we call this approximate calculation "dimensional analysis".

This method works because you may view it as a "shortened, rough version" of the full calculation; the difference between the full calculation and its shortened version only influences the numerical coefficient in front of the result. The multiplicative discrepancy is very unlikely to be a number much larger or much smaller than one because the calculation of the dimensionless coefficient is equivalent to a purely mathematical problem in which numbers of order one are manipulated to get another number. In a majority of situations, perhaps with some additional assumptions that may be seen to be obeyed in many contexts, one may see that the resulting dimensionless number will be of order one, too (a statement that is a bit vague, but you recognize a number very different from one when you see one).
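A standard textbook illustration, sketched in Python: for a pendulum of length L in gravitational acceleration g, the only combination of the inputs with units of time is sqrt(L/g); the full small-angle calculation merely supplies the order-one coefficient, which is 2*pi:

```python
import math

# Dimensional analysis for a pendulum: length L [m] and g [m/s^2] can only
# combine into a time as sqrt(L/g). The full calculation supplies the
# dimensionless coefficient, 2*pi for small swings.
L = 1.0   # metres
g = 9.81  # m/s^2

estimate = math.sqrt(L / g)              # right order of magnitude (~0.3 s)
exact = 2 * math.pi * math.sqrt(L / g)   # the small-angle period (~2 s)

print(estimate, exact, exact / estimate)  # the ratio is 2*pi, "of order one"
```

The estimate is off by a factor of about six, yet it correctly tells you that a metre-long pendulum swings in seconds, not microseconds or hours, which is exactly the kind of first orientation the method is for.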

I don't want to explain the dimensional analysis or order-of-magnitude estimates in detail here. Instead, my goal is to say that they're important techniques whose importance can't be denied by someone who claims to approach questions about Nature in the scientific way. This technique is important not only in the situations in which we don't have enough time and we want to calculate an approximate answer to a question. In fact, this method is providing us with the first step to "get an idea" about a physical system whose details we don't understand yet.

Arguments based on dimensional analysis are inaccurate but they're important and legitimate if there are no errors in them. The people who try to ignore them are not acting rationally; in many cases, people deny such arguments because they're totally unfamiliar with this mode of reasoning. Pure ignorance. But ignorance doesn't mean that the evidence doesn't exist. In other cases, people deny these order-of-magnitude calculations because they find their conclusions inconvenient.

Such estimates are also important for finding out whether some effect may be relevant for a particular observation. If we can calculate that the effect is many orders of magnitude smaller than what is needed for the effect to influence the quantity we have measured, it is a strong argument indicating that the effect – and/or everything that is connected to it – may be ignored or separated when we want to understand the bulk of a question. On the contrary, if the order-of-magnitude estimate says that the effect of something is large enough, comparable to other factors, it is evidence that we shouldn't forget about the effect unless we have some good reason (e.g. evidence that a theory ignoring this effect works much more accurately than what we could expect from a generic sloppy theory that neglects important things).

Refusal to improve the accuracy

Sensible estimates of the order-of-magnitude of some quantities are the first steps in our efforts to understand a conglomerate of questions. However, it is often useful or important to keep on improving the accuracy. We want to learn something about the dimensionless coefficients that we failed to distinguish from one in the step above.

Arguments based on approximate estimates are often vague and have a chance of being qualitatively wrong; more accurate statements allow us to derive more accurate or more reliable statements about other things, too. I will postpone examples for a while but many people are trying to abandon science at this point. They want to hide in the "fog" of the errors and avoid improvements in the precision. Sometimes it's because more accurate measurements or calculations lead to conclusions that exclude their "pet hypotheses". Well, we could mention an example: numerologists. Sometimes they write down a contrived and clearly unmotivated formula to "explain" a constant which works because the formula is awkward enough and some formulae simply have to succeed within the error margins. But in some cases, the numerological formulae don't even work within the known error margins and their proponents want to ignore these facts. They want to preserve a less accurate understanding of a situation because its conclusions seem more convenient for them than the conclusions of a superior, more accurate treatment.

One could also mention the errors in the climate sensitivity. The IPCC still claims it to be between 2.0 and 4.5 °C or so per CO2 doubling. The error margin is of order 100 percent; it's huge. And it's not getting better. If these vague results were the only ones one could find in the literature, this failure would indicate that this particular subdiscipline hasn't gotten beyond the order-of-magnitude estimates. Even if the sensitivity were 3 ± 1 °C, there would still be a non-negligible probability, of order a few percent, that the right number is below 1 °C (assuming a normal distribution – and much of the evidence that the distribution is highly non-Gaussian is really bogus).
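The tail probability in question is straightforward to compute under a Gaussian assumption; the Python sketch below uses only the error function, and the wider sigma in the second call is my own illustrative choice, not a number from the text:

```python
import math

def normal_cdf(x, mu, sigma):
    # Normal cumulative distribution function via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# If the sensitivity were distributed as N(3, 1) degrees C per CO2 doubling,
# the probability that the true value lies below 1 degree C is the 2-sigma tail:
p_below_1 = normal_cdf(1.0, mu=3.0, sigma=1.0)
print(p_below_1)  # about 0.023, i.e. roughly 2 percent

# With a somewhat wider spread, say sigma = 1.3 degrees C, the same tail
# grows above 5 percent:
print(normal_cdf(1.0, mu=3.0, sigma=1.3))
```

The exercise shows how sensitive such "probability of a low sensitivity" statements are to the assumed width of the distribution, which is precisely why the error margin matters so much.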

However, tiny biases are enough to shift these wide distributions in one way or another. There has arguably been a huge bias in the positive direction. More importantly, the cutting-edge science about the climate sensitivity has gone beyond the order-of-magnitude estimates. For example, Lindzen and Choi calculate the sensitivity to be something like 0.9 ± 0.2 °C; the standard deviation is 10 times smaller than in the huge IPCC range. Of course, research concluding with figures that seem much more precise should be paid much more attention. One of the "details" that follow from this result, if true, is that any climate fear contradicts science and any investment attempting to reduce the CO2 emissions or concentrations is a waste of money from the scientific vantage point.

The alarmists' error margin hasn't been decreasing because they're working with the constraint that it is a blasphemy to get the right result around 1 °C which would prove that global warming fears are just stupid. So they must get numbers at least above 2 °C but among these wrong values of the climate sensitivity, there is no "canonical wrong value" that everyone could naturally agree with. Try to solve the mathematical problem: "Find a very large number greater than 2 that is a good approximation of 1." The IPCC researchers wouldn't face an easy problem if they had some scientific integrity. The lower values closer to 1 °C are more consistent with observations; the higher values that are sometimes claimed to be as high as 5 °C are increasingly incompatible with observations but they're very interesting for the granting agencies and politically or financially motivated sponsors because they allow the nonsensical hysteria to continue. Depending on the relative composition of science and "higher interests", the alarmist hired guns may get any number between 2 and 5 °C or so. They haven't converged to a smaller error margin or "consensus" because there can't be any natural consensus about a number if you impose the extra condition that the number must be much higher than the right one.

On the contrary, the people who have actually studied this question scientifically did converge to an answer whose error margin is vastly smaller than 2 °C. Their methods are superior for obvious reasons. If you can measure something with a 5 or 10 times smaller error margin, think about your height plus or minus 1 cm versus 10 cm, the more precise measurement is clearly more relevant and valuable and you should focus on the methods that are capable of telling you something precise.

Of course, the main reason why the skeptics have been converging to more accurate answers – incidentally compatible with the no-feedback value of 1.2 °C, which may be more than just a coincidence – is that they were not constrained by the (wrong) assumption that their value should be high. The IPCC's refusal to narrow the climate sensitivity's error margin is another argument that they're not doing research in the ballpark of the truth. And in this case, much like others, the improved accuracy has impacts. The value around 1 °C is compatible with the IPCC range if interpreted as a normal distribution – within two standard deviations or so – but by getting more certain that the right value is really around 1 °C, competent scientists are also getting increasingly more certain that the climate alarm is irrational.
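As a side sanity check, the "within two standard deviations or so" claim may be quantified with a few lines of Python. The mean and standard deviation I assign to the IPCC range below are my own illustrative assumptions (reading a 2–4.5 °C range as mean ± 1 sigma), not figures taken from any official document:

```python
# Toy calculation: how many sigmas does a ~1 °C climate sensitivity sit
# from a normal-distribution reading of the IPCC range?
# ASSUMED numbers (illustrative only): IPCC range 2.0-4.5 °C interpreted
# as mean ± 1 sigma, i.e. mean = 3.25 °C, sigma = 1.25 °C.
ipcc_mean, ipcc_sigma = 3.25, 1.25
skeptic_value = 1.0

z = (ipcc_mean - skeptic_value) / ipcc_sigma
print(f"{skeptic_value} °C sits {z:.1f} sigma below the assumed IPCC mean")
# With these assumptions, 1 °C lies within two standard deviations, so the
# two estimates are not flatly inconsistent -- but the ~10x narrower error
# bar of the low estimate carries far more information.
```

Under different (equally defensible) readings of the range the z-score changes, but the qualitative point survives: a wide distribution excludes almost nothing.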

Anything goes
Openness to discontinuities and paradigm shifts with no reason
Wrong opinion that sufficiently general claims are immune to falsification

In the last portion of this blog entry which was given three titles – originally meant to be independent sections – I want to discuss one more class of mistakes: excessive extrapolations of insights and the opposite tendency, an insufficient extrapolation of insights and the desire to forget them as soon as possible.

I think that most people understand why the first mistake is a mistake but they often err in the second direction. But it's still true that the errors appear in both directions.

Imagine it's Spring 1998, a warm year, and the trend of the global mean temperature over the preceding two decades was nearly 0.2 °C per decade. Can you conclude that the globe is warming and that the trend calculated from the following 15 years would be 0.2 °C per decade as well? Well, many people have certainly made similar predictions. The actual trend in the following 15 years turned out to be zero, within the error margins (including the differences between the individual methodologies).

The idea that the trend in the next 15 years had to be very close to the trend in the previous 15 years was a hypothesis, a proposition, and it may be wrong. It's obviously nonsense that the same temperature trend continues forever. But one may discuss this question more quantitatively. One may analyze the hypothesis that the temperature is composed of an increasing "signal" and "noise" and estimate the relative importance of both for the 15-year trends. He will find out that the noise is still huge. However, the hypothesis that we're living in this simple "linearly increasing signal" plus "noise around zero" world can be totally wrong and indeed, there is a lot of evidence that it is a totally inadequate description of the global mean temperature as a function of time.
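A minimal sketch of that "signal plus noise" estimate, with made-up noise parameters (the 0.1 °C interannual noise and the white-noise assumption are mine, chosen only for illustration):

```python
import numpy as np

# How much can a 15-year least-squares trend fluctuate purely because of
# noise, even when the underlying trend really is 0.2 °C per decade?
rng = np.random.default_rng(0)
years = np.arange(15)                  # 15 annual means
true_trend = 0.02                      # 0.2 °C per decade, in °C per year
noise_sigma = 0.1                      # ASSUMED interannual noise, °C

trends = []
for _ in range(10_000):
    temps = true_trend * years + rng.normal(0, noise_sigma, size=15)
    slope = np.polyfit(years, temps, 1)[0]
    trends.append(slope * 10)          # convert to °C per decade

print(f"mean 15-yr trend:  {np.mean(trends):.2f} °C/decade")
print(f"spread (1 sigma):  {np.std(trends):.2f} °C/decade")
# Even uncorrelated noise smears the 15-year trend by several hundredths
# of a degree per decade; realistic autocorrelated noise smears it much
# more, so short trends are poor evidence either way.
```

The point of the sketch is only the order of magnitude of the spread, not any particular realization.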

Many sensible people know that trends in the past can't always be extrapolated to trends in the future. They're not obliged to continue. In many cases, they don't continue. And in many cases in which they do continue, they only continue because of chance and the relationship may break in the future. So a correlation isn't a proof of causation, especially not if the correlation contains too little, or too inaccurate, information.

The Standard Model of particle physics works up to hundreds of GeV or a few TeV, depending on how you parameterize the candidate deviations. The LHC has already increased the domain of validity of the Standard Model by a factor of 3-10 or so, depending on the proposed models of new physics. But can this success of the Standard Model continue for several more orders of magnitude? The answer is simply not known. If you see no new physics once you have tripled your energy reach, it doesn't mean that you won't see any new physics if you multiply it by 20. Sensible people know that this implication would be sloppy and unjustified. Some people are still trying to pretend that this wrong "mathematical induction", i.e. unsubstantiated extrapolation, is a legitimate argument. It's not.

I could tell you lots of examples of unjustified extrapolation (to the future or to more extreme values of various physical quantities) from many contexts.

However, my original claim is that people usually fail to apply known insights in situations which are actually included in the well-understood science. They fail to interpolate or extrapolate millimeters away from the known situations. They fail to see that the speed of 20 GeV neutrinos has to be remarkably close to the speed of light because similar neutrinos' speed was measured after the 1987 supernova and a sharp discontinuity of the speed as a function of energy is much less likely than a badly connected cable. Also, many people are ready to expect quantum mechanics to totally break down in rather elementary experiments even though it's been working perfectly in lots of diverse situations, including some very extreme ones. They're ready to take scissors and cut quantum mechanics and replace it, at least locally, with some physical laws that have been excluded for a century – such as classical physics. They don't pay attention to the logical inconsistencies of such unions and they don't pay attention to the direct observations that show that those things can't work.

Many people are excited about hypothetical scientific paradigm shifts that could come in the near future and they often fail to distinguish their wishful thinking from the evidence. When paradigm shifts come, they can't really change the claims about processes where the older theories have been verified, at least not by much. Also, when a new theoretical structure such as relativity or quantum mechanics arrives, it doesn't just "lift" or "relax" the previous constraints such as the Galilean symmetry. Instead, it typically replaces every single constraint and structure by a new one which is qualitatively different but plays the same role and is equally constraining.

It's similar to the replacement of relays by the first transistors in old computers. The newer computers don't just "break away" from the obsolete relay-based architecture and leave a void; they must replace the relays by something else. Analogously, relativity replaces the Galilean symmetry of Newtonian physics by the Lorentz symmetry. Quantum mechanics replaces Newton's equations by the Heisenberg equations. Many previous assumptions such as the conservation of the electric charge remain exactly true in quantum field theory as well as string theory (the probability of violations of the law is zero). Some others, such as the conservation of the baryon charge, cease to be exactly true in physics beyond the Standard Model but the exact conservation law is replaced by an approximate one which comes with some new structure – the explanation of accidental symmetries – and it is a new justification why the law has apparently worked in all the experiments we have made. One simply can't build a renormalizable interaction violating the baryon charge but preserving the other, exact, established symmetries, the explanation says. Such baryon-violating interactions have to be non-renormalizable, which explains why they're weak and produce infrequent processes.

Many people are imagining future paradigm shifts as the statements that "some important principle from the past is wrong", without saying anything positive. But this is not what a paradigm shift in science can realistically look like. Things that have apparently worked may only be replaced by things that work even better and that have a more modern structure which is more compatible with the new concepts and experiments. But previously successful principles simply can't be annihilated without a word of explanation why they were successful. And the paradigm shifts are almost never "going back". If you don't like a revolution that changed the overall character of the scientific description of something about 100 years ago, you may be more or less sure that the future revolutions will move science even further from your preconceptions, not closer. Science isn't really moving "backwards" because the true reason why the old theories were superseded is that they were falsified, and falsification is irreversible (even an incomplete or somewhat fuzzy falsification is almost irreversible – a reversal remains unlikely, depending on the fuzziness).

Some breakthroughs in science are more important than others but it's always possible to look at them as if they were incremental steps. Most of the true revolutionaries – including Einstein (and Newton himself) – interpreted their contributions in this modest way, anyway. The people who imagine themselves as revolutionaries who really "kill" all the previous science are not being sensible; they misinterpret how the actual progress in science works and they don't really resemble the heroes from the history of science. The ideas that previous insights simply "disappear" or that "anything goes" are symptoms of people who can't focus their attention, who have problems with their memory (so they quickly change their minds without a reason), or who haven't successfully tried to understand the previous science.

I have included one more title in this section: the wrong opinion that sufficiently general or unspecified claims are immune to falsification. This is a theme I encounter often and one that affects many people, including those I consider very good physicists.

At the beginning, I mentioned that one shouldn't talk about propositions that can't a priori have both answers at all. They're tautologies or anti-tautologies (whatever the right term is for a proposition that can be proved identically false just by using the rules of logic). But surprisingly many people seem to believe that if they leave "enough wiggle room" in their general statements, these statements must be true or, to say the least, they can't be shown to be false.

However, this is a complete misconception. Whether something may be disproved depends on the actual available evidence, not on your vague feelings about whether your statement is sufficiently vague or whether your collection of candidate explanations looks like a large army.

In the climate debate, people may say vague things about a class of conceivable mechanisms that may drive the Earth's climate out of control. They believe that if the comments are sufficiently fuzzy, the fog must inevitably contain a detailed hypothesis that is plausible. However, it's not true. The strength of mathematics – and science based on mathematics – is that it may often prove statements that would look amazingly strong or general or unexpected at the beginning. After all, we are discovering the laws of Nature (often previously unknown laws) that are valid in every event that has ever taken place or will take place in the Universe. Even narrower theories with a less universal domain of validity may accurately describe millions of observations that often look very diverse. It is often possible to exclude huge classes of candidate hypotheses. In fact, the disproof of a stronger statement is often mathematically easier than the disproof of a weaker, more specialized assertion.

It's easy to look for fundamental unscientific errors in the climate alarmists' reasoning and in their arguments because those folks are extremely far from proper science and its methodologies, as we have seen at many points in this text. However, even people who are trained, if not accomplished, theoretical physicists often fail to appreciate that science has tools to quickly rule out whole classes of hypotheses even if these classes may look "large" if not "formidable". It doesn't matter that a class of hypotheses looks large or is large. All of its elements may still share some lethal characteristics that allow us to instantly kill all of them.

It's obvious what the examples are. Some people believe that the world is a discrete machine or a cellular automaton. Or that quantum mechanics has to be just a manifestation of some hidden variables. Or that gravity is an entropic force in which the entropy may be encoded in many ways. And so on. These people probably impress themselves with the large number of cellular automata they may simulate on their computers; the large number of shapes of the basic simplices in their spin foams or spin networks; the large ensemble of quaternions, octonions, pilot waves, or other things that may play the role of hidden variables hypothetically underlying quantum mechanics; or the large number of codes that may store the entropy differences that hypothetically drive the Earth's orbiting around the Sun.

A special but important example is the anthropic people who believe that the anthropic explanation of the Universe has to be right because the number of the flux vacua they may construct is overwhelming and they "beat" all other possible explanations – just a few modest Cinderellas who are easily killed by the numerous army of the flux vacua anthropic warriors.

In all these cases and many others, those people think that they're creating a resilient theory that can't be disproved because it has many elements or because it has many unspecified details that may still be adjusted. But it doesn't matter! In mathematics and science, the truth simply isn't measured by the number of elements of a set of detailed implementations of your theory. It is not true that the probability of a statement is proportional – in any legitimately usable sense – to the number of "specific specialized versions" of the statement. The validities of uncorrelated statements are uncorrelated and even if you talk about probabilities of propositions that have a statistical character, the probability distribution over diverse sets isn't uniform and can't be uniform in any sense. Uniform distributions are extremely special and therefore unlikely and any claim that a distribution is uniform is extremely strong and must be justified by some evidence (e.g. by a description of thermalization in the case of microstates that may evolve into each other).

This also means that it's completely wrong to clump a statement of interest with some other statements and assume that they are analogous even though no evidence for the analogy exists; to clump a right theory with many wrong theories, claim that they're analogous, and conclude that they should "share" the probability. In communism (or whatever term the far-left "progressive" people would choose for it today), people may share resources. But inequivalent propositions and classes of propositions don't share probability in mathematics and science. Each of them has its own "banking account" and, as long as two such possibilities are distinguishable, one of them may be totally right and the other totally wrong.
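A toy Bayesian sketch of this point, with all the numbers invented for illustration: even if a class of hypotheses has a thousand members and is granted half of the total prior, a single piece of evidence that is nearly impossible under every member kills the whole class at once, and the lone competing hypothesis wins.

```python
# Toy example (ALL numbers invented): probability is not counted by the
# number of sub-hypotheses in a class but by the evidence.
prior_A = 0.5                    # a single hypothesis A
prior_B_each = 0.5 / 1000        # half the prior spread over 1000 B-variants

likelihood_A = 0.9               # P(data | A)
likelihood_B = 1e-6              # P(data | any B variant): the whole class
                                 # shares one lethal flaw

evidence = prior_A * likelihood_A + 1000 * prior_B_each * likelihood_B
posterior_A = prior_A * likelihood_A / evidence
print(f"posterior of A: {posterior_A:.6f}")
# The 1000-member class is ruled out collectively; counting its elements
# never substitutes for evidence.
```

The numerical details are arbitrary; what matters is that the posterior of A is driven to nearly 1 by the shared failure of the class, regardless of how many variants the class contains.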

Logic and mathematics offer us sharp and powerful swords that may – and often do – cut the throats of every single element of a class of hypotheses that share certain properties, properties that turn out to be incompatible with life. That's why we know that hidden-variable "explanations" of quantum mechanics are wrong; discrete models of the Universe are wrong; all forms of the "entropic gravity" theories are wrong; and so on. All these classes are wrong despite their having many elements because the reasons why all of their elements are wrong are independent of the detailed differences between the elements of each class! On the other hand, right theories may be interpreted as elements of larger sets of candidate theories but all the other elements in the sets are still wrong and the right theory doesn't really share the credit with others! You can't argue against a (potentially?) valid theory by pointing out that there exists a similar theory that is demonstrably wrong.

I discussed this category of "flagrantly unscientific ways to think" at the end of the blog entry because I know several if not many people who could be called "the world's top physicists" who often like to impress themselves with the sheer size of the sets of detailed possibilities, with the huge wiggle room in which their seemingly flexible theories may be adjusted, and so on. But the large number of microscopic possibilities, detailed theories, or the large wiggle room may often be – and often is – insufficient to protect hypotheses and claims that are simply wrong.

Summary, limitations of such a general essay

It is obvious that the general observations above can't save us from every wrong claim we make about science or the real world – especially because many of the errors people are making, or we are making, do depend on technical details of a discipline. But if people were kind enough to think about the general "logical" issues underlying science and the errors in science that I described in this text, I guess that we could avoid many if not most of the persistent, constantly repeated errors that keep many people on the wrong track for years, decades, or whole lives.

And that's the memo.



Ryan Long wrote a nice essay about the relevance of these ideas, especially in the social sciences, and I wholeheartedly recommend it to you. He also expresses some of the ideas in better words or uses better examples – but I may certify that he understood what I wanted to say.

