Thursday, October 10, 2019

It's clear why the number of Earth-like planets is so imprecise

...because the term is neither quantitatively well-defined nor useful...

Two weeks ago, Ethan Siegel more or less defined the scientific method as mindless obedience to Greta Thunberg (combined with the mandatory anti-quantum-mechanics crackpottery). I argued that it wasn't the right definition and that virtually everything he wrote was upside down.

One week ago, he wrote a text
Astronomers Debate: How Many Habitable Planets Does Each Sun-Like Star Have?
which highlights his mathematical illiteracy and the mathematical illiteracy of several astronomers – and the wisdom of Michael Crichton. To make a long story short, the astronomers agree about the interpretation of the Kepler data but they still have a very uncertain estimate of the number of Earth-like planets per Sun-like star – the estimates go from 0.013 to 1.24. And that's very surprising, Siegel wrote.

Well, for those of us who always knew that science wasn't a mindless worshiping of someone or something by words from a human language, it's not surprising at all.

In 2003, the late Michael Crichton – an ex-TRF reader, a trained physician, and the main father of Jurassic Park – gave a wonderful Caltech Michelin Lecture,
Aliens Cause Global Warming.
You should definitely read these 12 pages if you have never done so. He argued that the Drake equation – a way of rewriting the estimate of "the number of ETs" as a long product (of the number of planets and various percentages or probabilities) – was the first example of bad science that pretended to be quantitative while, in reality, the underlying statements clearly boiled down to people's prejudices.

And that speech is also full of examples of how consensus has failed in science and why it's sensible to expect it to fail; historical explanations making it clear that it's silly to predict the state of mankind one century in advance; and more.

And this kind of šitty science that wasn't really science was expanding and has led to the climate hysteria – which also abuses some meaningless mathematical masturbation to give credibility to some prejudices or even totally unscientific worries in the eyes of many people who don't really understand the science. In 2019, the hysteria is incomparably worse than it was in 2003. Good for us that we don't have to explain to him how we could allow mankind to deteriorate in this breathtaking way, how we could allow the power to be transferred to whole governments that listen to an unhinged, ill, and illiterate teenager and her brain-dead servants.

OK, as Crichton recalled, the Drake equation said\[

N = N_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot f_L

\] where \(N_*\) is the number of stars in the Milky Way and the other factors are the numbers of planets per star and percentages or probabilities that generally add conditions – to make the planets habitable and capable of producing ETs. Crichton's main point was that by rewriting an unknown variable as a product of many other variables, one doesn't learn anything about the value at all. The uncertainty doesn't decrease at all.
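Crichton's point is easy to demonstrate numerically: plugging equally defensible "optimistic" and "pessimistic" guesses into the product moves the answer by more than eleven orders of magnitude. A minimal sketch in Python – all factor values below are illustrative prejudices, not measurements:

```python
# Drake-style product: the number of stars times a chain of fractions
# and probabilities. All inputs below are made-up illustrative guesses.

def drake(n_star, f_p, n_e, f_l, f_i, f_c, f_L):
    """Multiply the factors; no uncertainty is removed by this rewriting."""
    return n_star * f_p * n_e * f_l * f_i * f_c * f_L

# "Optimistic" prejudices: every probability comparable to one.
optimistic = drake(4e11, 1.0, 1.0, 1.0, 0.5, 0.5, 1e-4)      # ~1e7

# "Pessimistic" prejudices: a few factors tiny.
pessimistic = drake(4e11, 0.5, 0.1, 1e-6, 1e-3, 0.1, 1e-5)   # ~2e-5

print(optimistic, pessimistic, optimistic / pessimistic)
```

The product itself is computed flawlessly in both cases; the wild disagreement comes entirely from the inputs – which is exactly Crichton's complaint.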

In practice, the equation was used as irrational propaganda by those who believe (or who want others to believe!) that there should be many ETs. The number of stars and planets is high and life is almost unavoidable, they think (they want you to believe that all the probabilities or percentages are comparable to one), so we may predict lots of ETs enriching our society. "Where are they?" Fermi famously asked. And I still ask the same question. You know, we haven't seen any – so the only empirical evidence points, however inconclusively, in the direction that "the number of ETs is low" – and there isn't any truly careful, rational argument that the number of ETs should be high. Life and intelligent life may be rare and unlikely for lots of reasons, starting with the conditions on the planet and the complicated RNA/DNA of the first life forms capable of reproduction, and ending with worries about the civilizations' self-destruction, stagnation, or perhaps the galactically unavoidable Gr@tinism. There isn't even any strong enough argument that there is another intelligent civilization in our galaxy. Everyone who claims otherwise is just deluding himself – and others – and promoting something that is pure faith as if it were science.

Why doesn't the uncertainty decrease when we rewrite a variable as a product? Well, if you don't learn anything tangible about a number, like a result of a measurement, and if you don't fully or almost fully reduce the number to some other numbers that were actually measured, you just haven't changed anything about the knowledge of the number at all. Like in quantum mechanics, some measurement is needed for a preferred value of a variable to "exist". There are some special points to be made in the case of a long product. If you have a long product, it may happen that
  • most factors are known relatively well but it's plausible that you forget about (or overlook) one or two factors that are uncertain and actually hide almost all of the huge uncertainty that survives (e.g. one or two critical points in the birth or evolution of life make life or intelligence hard or extinction likely)
  • each factor may be uncertain, by a factor of 2 or 10, and because there are many of them, the uncertainties pile up and the total one may be much higher because there are many contributions to the relative error margin
  • we're dealing with multi-dimensional parameter spaces and the words are usually meant to represent some sharply divided regions in the multi-dimensional parameter spaces; the volumes hugely depend on the precise choice of the shape
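The second bullet can be checked with a toy Monte Carlo: if each of seven factors is only known "up to a factor of 3" (modeled here, as an arbitrary assumption, by a log-uniform distribution), the product ends up uncertain by several orders of magnitude:

```python
import math
import random

random.seed(0)

def uncertain_factor():
    # A factor known only "up to a factor of 3": log-uniform on [1/3, 3].
    return math.exp(random.uniform(-math.log(3), math.log(3)))

# Sample the product of 7 such factors many times.
samples = sorted(
    math.prod(uncertain_factor() for _ in range(7)) for _ in range(100_000)
)
low, high = samples[2_500], samples[97_500]  # central 95% interval
print(f"95% of the products lie between {low:.3g} and {high:.3g}")
```

Each individual factor spans a ratio of at most 9, but the central 95% interval of the product spans a ratio of several hundred – the logarithmic uncertainties simply add up.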
Well, we're already seeing how totally right Crichton was about the Drake equation – because Kepler has already done the measurements relevant for the initial 3 or so factors in the Drake equation. Siegel wrote:
The funny thing is this: we've had the Kepler data for the better part of the past decade, and as of 2019, estimates range from a low of 0.013 Earth-like planets per Sun-like star, to a high of 1.24: a difference of a factor of 100. This is an extreme rarity in science.
The actually funny thing is that a mathematically literate person cannot be surprised by this error margin at all – and he or she knows that comments about "an extreme rarity" are just proofs of the writer's stupidity. It's actually extremely trivial to "define" variables whose uncertainty spans orders of magnitude. In fact, most similar terms – like "the set of brilliant people in the world" (Ten people? Two billion?) – are at best ill-defined terms from soft sciences. Why?

We must look at what "the Earth-like planet" actually means. A terrestrial planet usually refers to "a planet with silicates and metals". An Earth-like planet may mean either a terrestrial planet; or a planet that is about as massive as the Earth; or an Earth analog which is required to be similar to the Earth in many other respects. And Siegel clearly means the last option.

If you want to count such planets, surely you need a somewhat precise definition, right? You may look either at the conditions on the "Earth analog" Wikipedia page; or read this list by Siegel – the details won't matter for my conclusion:

* The planet should have about the right mass
* and about the right radius
* and about the right distance from the star.
* Here at home, we inhabit a rocky planet
* with a thin atmosphere
* that orbits our star while rotating rapidly on its axis,
* with liquid water stably on its surface for billions of years.
* We have the right temperature
* and pressure at our surface for continents and liquid oceans,
* and the right raw ingredients for life to potentially arise.

The Wikipedia page also requires the star to be a solar analog (about two parameters) and the hydrological cycle to be Earth-like (at least two more parameters must be in the right range). Fine. Now, imagine you discover all planets in the Milky Way and count the Earth-like planets. Each discovered planet must be assigned at least the 10 parameters listed above – which decide whether or not the planet is Earth-like and promising for life.

And you check whether some inequalities are obeyed by the parameters and label all the planets (in the Milky Way) Earth-like or non-Earth-like.

How precise will the number of Earth-like planets be? Well, it obviously depends on the size of the region in the 10-dimensional space – and it's at least 10-dimensional. Let me call it a 10-dimensional space to make the discussion superficially look more scientific (because 10 is the dimension of the spacetime in perturbative string theory). Well, I am pretty sure that the number of actual real-valued parameters – in a quantification of the problem – that these people constrain to define an Earth analog is at least 20 – e.g. the "right raw ingredients" is a condition on the concentrations of at least five chemicals. You should also add the star's parameters because the star is required to be Sun-like. But to be generous, let's say it is just 10 parameters.

The 10-dimensional region may be represented as a 10-dimensional hypercube – a product of 10 intervals along each axis. If that's the case, it matters how wide the intervals are, doesn't it? If you make the allowed interval along each axis (the allowed error margin) twice as large, the volume grows by a factor of \(2^{10}=1024\). So the estimated number of "new Earths" may easily change by three orders of magnitude if you just slightly adjust the tolerated error margins – by a factor of two for each of them. (If there were 20 parameters, you could easily add or remove 6 orders of magnitude; you're still with me, right? The actual ratio would be a bit smaller than one million because some of the parameters would be strongly correlated with each other, and therefore "effectively not independent" of each other.)
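The factor of 1024 is trivial to verify in a few lines; the unit tolerance widths below are arbitrary placeholders:

```python
# Volume of a 10-dimensional box of tolerances: the product of the
# interval widths. Doubling every width multiplies the volume by 2**10.

def box_volume(widths):
    v = 1.0
    for w in widths:
        v *= w
    return v

narrow = [1.0] * 10                # arbitrary unit-width tolerance per parameter
wide = [2.0 * w for w in narrow]   # each error margin "slightly" relaxed: doubled

print(box_volume(wide) / box_volume(narrow))  # 1024.0
```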

Now, the 10-dimensional hypercube is an extremely unnatural shape, isn't it? You should better define the allowed region as something smooth, something closer to a ball. The \(N\)-dimensional hypercube whose edge length is \(2\) has the volume \(2^N\), doesn't it? A similar shape, the unit ball which is the maximal ball inside this hypercube, has the volume\[

V_N = \frac{\pi^{N/2}}{\left(\frac N2\right)!}

\] (which I derived when I was ten because I actually cared about such "details") and it is strictly smaller than the volume of the edge-length-two hypercube, of course. What is the ratio of the volumes? Well, just substitute the numbers into the formula for the ball. The ten-dimensional unit ball has the ten-volume around 2.550. That's about 400 times less than the volume of the edge-length-two 10-dimensional hypercube, 1024.

Be sure that the difference between balls and hypercubes matters a great deal. The gap between the volumes obviously increases if you go to even higher dimensions of the parameter spaces (and you surely need to do that if you want to impose extra conditions on the planet for it to be considered "promising for life"). For dimensions much higher than 10, the factorial in the formula for the volume of the ball above wins and makes the volumes of the unit balls tiny relative to one (i.e. even relative to the unit-edge hypercube), thus escalating the gap between the ball and the hypercube.
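Both numbers quoted above – the ten-volume of about 2.550 and the ratio of about 400 – are easy to reproduce, and so is the factorial-driven collapse in higher dimensions (the Gamma function is used below so that odd dimensions work, too):

```python
import math

def unit_ball_volume(n):
    # V_N = pi^(N/2) / Gamma(N/2 + 1); equals pi^(N/2) / (N/2)! for even N.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

v10 = unit_ball_volume(10)     # ~2.550
cube = 2.0 ** 10               # edge-length-two hypercube: 1024
print(v10, cube / v10)         # the ratio is ~401.5

# In higher dimensions the unit ball fills a vanishing fraction of the cube:
for n in (10, 20, 50):
    print(n, unit_ball_volume(n) / 2.0 ** n)
```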

I wrote that the 10-dimensional ball is clearly a more natural region than the 10-dimensional hypercube to work with in science. If you were doing even more meaningful science, you wouldn't have sharp boundaries at all. You would have probability distributions (or functions encoding the "mean number of ETs on planets at a given point of the parameter space") that would be constant along spheres or similarly smooth regions. With such a better way of counting the Earth-like planets, you would find a similar source of uncertainties (like the ball-hypercube difference) that would increase your ignorance about the "number of Earth-like planets" by several orders of magnitude, too. It is totally trivial to introduce an error of 10 orders of magnitude by "modifying the details" of all these procedures – which is enough for the uncertainty to span from "one habitable star/Sun in the galaxy" to "all stars have habitable planets".
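To see how much the choice of the counting rule matters, compare a sharp cutoff (count everything inside the unit ball) with an equally reasonable smooth Gaussian weight in the same 10 dimensions – a toy comparison, with both length scales arbitrarily set to one:

```python
import math

dims = 10

# Sharp rule: total "count" proportional to the unit-ball volume.
sharp = math.pi ** (dims / 2) / math.gamma(dims / 2 + 1)   # ~2.55

# Smooth rule: weight each point by exp(-r^2/2); the total weight is
# the Gaussian integral over all of R^10, i.e. (2*pi)^(dims/2).
smooth = (2.0 * math.pi) ** (dims / 2)                     # ~9792.6

print(smooth / sharp)  # the two counting rules differ by a factor of thousands
```

Neither rule is "wrong"; they are just two different formalizations of the same vague phrase, and they disagree by more than three orders of magnitude.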

I claim that every person who is actually a good and experienced enough mathematician finds it obvious that vaguely defined quantities such as "the number of Earth-like planets" end up being extremely imprecise, by many orders of magnitude. Michael Crichton had the same correct guess – probably not because he was a great mathematician. He just had good common sense and intuition. If you don't actually count the ETs, some representation of their number as a product is bound to be extremely misleading and the value remains extremely imprecise. And I gave you some formula showing why this general expectation he had was totally correct.

The search for ETs cannot really be rigorous – as a man totally implausibly demanded, see the previous blog post – and the ill-definedness of the variables discussed by the "searchers for ETs" is one major reason. Another reason is that we simply don't know what the most promising places and planetary types are where extraterrestrial life should be expected first. And we have no known methods to make progress in answering this question. So every claim that ETs could or should be "searched for rigorously" is just nonsensical wishful thinking. If these people use any mathematics, it's just a demagogic trick to make their pure prejudices look more science-like in the eyes of the gullible laymen (just like in the case of the climate hysteria which is much more familiar to us today). But there's no rigorous science in the search for ETs. It's just a hobby. A better telescope always helps but whether the decision to look into another class of stars and planets is a good one is a matter of pure prejudice. You can't improve the strategy by adding new people to your SETI committees.

The people who think that "Earth-like planets" are very useful phrases that may be used in "rigorous science" are mathematically illiterate. Why do they believe these things? Because they have a naive, anthropomorphic idea about the whole Universe. They use binary categories for questions where all rational arguments make it clear that one shouldn't expect any useful binary categories.

On Earth, there are many organisms and one may pretty much accurately divide them into humans and non-humans and count them. Embryos, literally brain-dead people, and Siamese twins (and a few more things like that) could add some subtleties, uncertainties, and dependence on the details of the definitions but they wouldn't affect the counting too much. And the mathematically illiterate searchers for ETs assume that just like the "number of humans on Earth" is rather well-defined, "the number of Earth-like planets in the Milky Way" must also be rather well-defined. Aren't these two situations analogous?

They are not analogous at all, as I argued. Why can the number of people on Earth be counted so well? Because humans are living objects that require many organs to be alive – and they carry some DNA that is copied in each cell. The space of DNA is multi-dimensional but the actual organisms are always clumped in some "islands" which we call "species". Because these islands are separated from each other, due to the millions of years of evolutionary divergence, the threshold effects are small. So you may count the number of "heads on Earth" that are still alive – and decide whether the "heads" are human by checking whether the DNA carried in the cells of the "head" belongs to the human island in the space of DNAs.

But the parameter space for planets in the Milky Way doesn't have any separated islands in an otherwise empty vacuum. It's almost completely filled. Why do planets differ from animals in this qualitative way? Because planets don't fudge each other. They haven't evolved by sex, reproduction, and Darwin's evolution. ;-) So the situations are completely different. "The Earth-like planet" is just a weasel word. There is no reason to expect any binary categories to be very useful in the classification of planets. A binary classification is an artificial anthropomorphic or bureaucratic procedure. It is an extremely alien procedure within natural science (e.g. in the parameter spaces of celestial bodies).

Don't get me wrong: most planets in the Milky Way are so different from the Earth that almost everyone will agree they are not Earth-like. And there is possibly a very small number of planets that are so similar to Earth in so many respects that a majority of people could agree that they're truly Earth-like. So the term "Earth-like" may be useful in many extreme cases – when the answer is a clear "Yes" or "No". But the planets for which the answer is "something in between Yes and No" are the most important ones for the actual counting of the Earth-like planets. And the dominant influence of these "marginal cases" increases with the dimension of the parameter space. That's why the number is unavoidably so terribly unknown.
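The last claim – that the weight of the "marginal cases" grows with the dimension – can be quantified: the fraction of an \(N\)-dimensional hypercube's volume lying within a thin shell near the boundary approaches one as \(N\) grows. A minimal check, with a 10% shell width chosen arbitrarily:

```python
def boundary_fraction(n, shell=0.1):
    # Fraction of a unit N-cube's volume within `shell` of some face,
    # i.e. outside the inner cube of edge (1 - 2*shell).
    return 1.0 - (1.0 - 2.0 * shell) ** n

for n in (1, 3, 10, 20):
    print(n, round(boundary_fraction(n), 3))
```

Already at \(n=10\), about 89% of the volume is "near the boundary"; at \(n=20\) it is about 99% – almost every planet becomes a borderline case.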

Another point is that the absence of a good definition isn't the only problem, or the last problem, of the efforts to turn SETI into a science. If you defined some clear shapes in the parameter spaces of planets and tried to do this stuff really rigorously, you would still face another problem: your definition is almost completely arbitrary and there is no reason why it should be very useful for counting the ETs. Assuming that the parameters may be quantified for the planets at all, you may impose some inequalities on the planet in the 10-dimensional or higher-dimensional parameter space spanned by the planetary properties. But such an inequality is pretty much arbitrary and it is surely inequivalent to hosting aliens, isn't it? As far as I am concerned, most ETs may very well live on planets that are more similar to Mars or Saturn than to Earth and even if the similarity to Earth were the galactic winning ticket, we just don't know which parameters really matter and how precisely they must be tuned.

It means that even if you tried to add rigor and precision by turning these "weasel words" into well-defined conditions or quantities, the conditions and quantities will be basically useless for the ultimate question we're really interested in – whether there are ETs, how many, and where. To summarize, all progress and all rigor in this kind of activity is completely fictitious and fake. The long products and other complicated procedures don't bring us any actual new knowledge, they are not helpful for the goals we actually care about, and the usage of mathematics is just a trick to fool the sponsors.

This was really the point of Michael Crichton's 2003 speech and the point is completely correct.

Almost 25 years ago, Michel Mayor and Didier Queloz found a neat spectral trick and discovered an exo-Jupiter – and got the Nobel Prize in Physics in 2019. They actually did have a clever idea and solved a particular localized problem, with some good luck. Abraham Loeb's claims that the search for ETs may and should be systematized or made "rigorous" are nothing else than calls to get more funding for procedures that are useless and hopeless because they are irrational from a scientific viewpoint.
