Tuesday, February 07, 2017

Why fear of Boltzmann brains is junk science

One hep-th paper that was posted yesterday was very different from the rest:

Why Boltzmann Brains Are Bad
Sean Carroll posted his "[i]nvited submission to a volume on Current Controversies in Philosophy of Science, eds. Shamik Dasgupta and Brad Weslake".
I am sorry but the moderators made a clear mistake. The hep-th archive should accumulate papers that are only accessible to high-energy theoretical physicists. If a text may be addressed to generic philosophers of science, it simply doesn't belong to hep-th, especially if the paper is completely irrational from the scientific viewpoint. And be sure that this one is.

This blog has discussed many times the stupidity of treating Boltzmann brains as a topic that physicists should spend a lot of time on. Sean Carroll is arguably the world's loudest voice claiming that they're very important and that the whole research program of cosmology should be largely dictated by Boltzmann brain alarmism. Most recently, in August, it seemed that after decades, Carroll had finally understood why one of the most self-evident mistakes in his whole way of thinking is indeed a mistake: Why there can't be any "uniform" distributions on infinite sets (or continuous sets of infinite measure).

In his most recent "invited submission", he quotes several other arguments showing the problems with the Boltzmann brain alarmist thinking, but he is still absolutely incapable of understanding them – of understanding why they imply that he is wrong.




The Boltzmann brain is a hypothetical brain that arises out of complete thermal mess without the proper evolution of the Universe, life on Earth, and individual evolution, as a short-term statistical fluctuation. The molecules in a cesspool which contains feces and some other garbage just randomly arrange themselves to locally physically coincide with the human brain. This brain may be feeling the same things as a similar or identical brain that has evolved in more conventional ways or has a more distinguished pedigree.

Such an evolution of a "brain out of a thermal mess" is extremely unlikely. Some \(10^{27}\) atoms have to do something rather special and sit at rather special places. If we approximate the probability for each atom to "behave" as \(1/e\), then the probability that a Boltzmann brain is born in a cesspool is something like\[

p\sim\frac{1}{\exp(10^{27})}

\] or \(\exp(-10^{27})\). It doesn't matter much that the outer exponential has the base \(e\), you could replace it with \(2\) or \(10\), too. It doesn't matter too much for the qualitative purposes that the inner exponent is \(27\), it's just important that it's "dozens or hundreds". What's important is that this exponent is "exponentiated twice". So the denominator of the fraction above is "vastly greater" than the number of atoms in a human brain. It's close to the number of states (dimension of the Hilbert space) needed to describe the dynamics of the brain. It's "morally" closer to a googolplex than a googol.

When probabilities of some evolution within this Universe are this tiny (morally the inverse googolplex), you may always say that the evolution is impossible and neglect the possibility. The "effective equivalence" of \(p\sim \exp(-10^{27})\) and \(p=0\) is one of the reasons why the concept of the probability is useful. When someone makes this probability of an evolution larger by adding a huge (typically infinite) factor (e.g. one proportional to the volume of the Universe or 4-volume of the spacetime), you may be certain that what he is doing is just totally wrong. Impossible events can't be made possible in this way.
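To get a feeling for why \(p\sim \exp(-10^{27})\) is "effectively zero", here is a minimal numerical sketch working with base-10 logarithms. The enhancement factor of \(10^{240}\) "tries" is an arbitrary illustrative stand-in for a huge number of spacetime regions, not a number derived from any model:

```python
import math

# Number of atoms that must "cooperate", per the estimate above
N_ATOMS = 1e27

# log10 of the probability p ~ exp(-10^27)
log10_p = -N_ATOMS / math.log(10)

# A deliberately generous enhancement factor of 10^240 "tries"
# (an arbitrary illustrative stand-in for a huge number of spacetime
# regions -- not a number derived from any model)
log10_tries = 240.0
log10_p_enhanced = log10_p + log10_tries

print(f"log10(p)         = {log10_p:.4g}")
print(f"log10(p * tries) = {log10_p_enhanced:.4g}")

# At this magnitude (~ -4.3e26), adding 240 is even below the resolution
# of double-precision floats, so the two numbers coincide exactly:
print(log10_p_enhanced == log10_p)   # prints True
```

Multiplying the probability by any "huge" prefactor merely adds a few hundred to an exponent of order \(-10^{26}\): the event stays morally impossible.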




OK, Boltzmann introduced this concept – the Boltzmann brain – in order to show that according to the logic of statistical physics, things that were thought to be "strictly impossible" aren't quite impossible anymore. They're just unlikely. Boltzmann seriously considered the origin of all life on Earth (or something even broader) as a result of an accidental decrease of entropy in a Universe that was close to equilibrium (had nearly maximal entropy) before that fluctuation.

But what's certainly wrong is the idea that a TRF commenter (like you) could be a Boltzmann brain, i.e. that he originated in a thermal noise that existed "right outside" the boundaries of that brain or its lifetime (or even shorter periods of time). Why? As Feynman and others immediately pointed out whenever this topic was discussed, the "Boltzmann brain hypothesis" makes some predictions that are immediately falsified.

In particular, in my representation of the Boltzmann brain, it's a piece of organic matter that has accidentally arranged itself to locally perfectly resemble the human brain. By design, our Boltzmann brain had no skull. So it's still a piece of šit swimming in between other šits in the cesspool, so soon afterwards, it gets mixed up and dissolved to the rest of the šit and ceases to exist. If a Boltzmann brain were protected by a skull, it could exist for a while but the assumed environment is usually much less hospitable than the cesspool – it's either too hot or too cold – so the Boltzmann brain would be quickly destroyed by temperatures that aren't compatible with life.

OK, so the "Boltzmann brain hypothesis" makes the prediction that you die quickly and/or you immediately start to observe that you are living in a cesspool, surrounded by swimming excrements, instead of a nice planet covered by beautiful plants, cute animals, and canyons inside a nearly empty Universe with romantic stars. The conditional probability that given the cesspool has accidentally created a brain, it has created billions of animals and many canyons as well, is still tiny, in fact a bit tinier than the number I mentioned, perhaps \(\exp(-10^{100})\). So the Boltzmann brain theory predicts that with the probability of 99.999...999% – where the number of digits 9 is around \(10^{50}\) which is much much more than just \(50\) digits – you won't see an organized universe with a sensible history, gradually increasing entropy, and evolution. But we see all those things, so the Boltzmann brain theory is ruled out.

We needed an observation to say whether the hypothesis is right or not. Natural science usually needs some experimental or empirical data. Even Newton, Darwin, and Einstein needed some empirical data to seriously propose their famous theories. And quantum mechanics made it clear that any and every prediction about the future of the world needs to be a function of some observations made in the past. Carroll acts as if it were the ultimate sin to use the empirical data. Like typical philosophers, he believes that the most important or qualitative truths about everything may be determined by "pure thought". After all, that's why he is contributing invited submissions to groups of arrogant morons who pompously call themselves "philosophers". But some experimental data, e.g. at least the fact that we don't seem to live in a completely random messy cesspool, are simply needed to correctly answer some questions about the world around us. There's nothing wrong with that. For centuries, scientists were proud of having discovered the importance of empirical data. Carroll must have forgotten everything.

OK, that was some introduction.

Defenders of the Boltzmann brain panic "know" most of the answers at some level. In the end, Carroll "knows" that he is not a Boltzmann brain. The actual problem is that he totally incorrectly believes that theories in physics and cosmology – including some of the most convincingly established ones – "predict" that we are a Boltzmann brain, and that they must be eliminated for that reason. This opinion is utterly and absolutely wrong. No physical or cosmological theory or model "predicts" that we are Boltzmann brains. Only Carroll's "theory" – his totally flawed thinking based on a complete misunderstanding of the probability calculus and rational reasoning in general – may make predictions such as "we are the Boltzmann brain". That's why it's absolutely wrong to eliminate any conventional model by a reference to Boltzmann brains. Like catastrophic global warming, the Boltzmann brains are just a straw man. There is absolutely no credible threat that our Universe or a Universe based on any conventional candidate dynamical laws could be "hijacked" by the Boltzmann brains in the sense that "we would be obliged" to become ones.

At most, what Carroll writes may establish that his own brain may be classified as a Boltzmann brain as I defined it – as a worthless piece of floating excrement. But his argumentation doesn't have any implications for your brain or my brain.

Fine. So let us describe these issues a bit more systematically. The Boltzmann brain alarmism may be seen to be wrong at many levels. The first separation of these blunders is the following:
  1. Regardless of the detailed mistakes in the argumentation, we know that the conclusion is wrong because it forces us to reject the most convincingly established theories in physics and cosmology
  2. The same kind of thinking may be seen to be ludicrous because if it were allowed, it would imply lots of silly conclusions even in everyday situations
  3. The argumentation based on the Boltzmann brain panic violates the rules of logic, probability calculus and Bayesian inference, and causality in many ways
Great. Concerning the first point: Carroll basically wants to eliminate all cosmological models implying that, over the whole history of the Universe that they describe, there's a nonzero chance that the thermal mess accidentally combines into something resembling our brain. Well, the main problem is that virtually "all good theories" agreeing with our observations, including the observation of a positive cosmological constant, would have to be eliminated. Why?

In the late 1990s, cosmologists saw that the dark energy was nonzero. The overwhelmingly most natural "detailed" explanation for dark energy is the cosmological constant, a term in Einstein's equations originally introduced by Einstein for other reasons (to deny the expansion of the Universe). It's been measured to be a small but positive number, a constant. Because of that cosmological constant, the expansion of the Universe is actually accelerating. The Universe is becoming emptier every day – the density goes down at a rate that increasingly accurately resembles an exponential process.

We're already living in a stage of the Universe where most of the total energy density (about 70%) is due to the cosmological constant. So our Universe is already "rather close" (although not yet "overwhelmingly close") to the empty de Sitter space, a cosmological spacetime resembling a hyperboloid and describing what our spacetime with a nonzero cosmological constant will look like in the far future. While we will enjoy life on Earth for a while, the macroscopic data for the Universe indicate an unstoppable "decline" that already started billions of years ago: Every 11 billion years or so, the linear distances between the galaxies will double and the density of baryonic and dark matter will decrease by a factor of 8 or so.
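The 11-billion-year figure can be checked with a back-of-the-envelope computation. This is a rough sketch using assumed round values \(H_0\approx 70\,{\rm km/s/Mpc}\) and \(\Omega_\Lambda\approx 0.7\), not precision cosmology:

```python
import math

# Assumed round-number cosmological inputs (illustrative, not precise)
H0_km_s_Mpc = 70.0     # Hubble constant today, km/s/Mpc
Omega_Lambda = 0.7     # dark-energy fraction of the critical density

# Convert H0 to inverse seconds
km_per_Mpc = 3.0857e19
H0 = H0_km_s_Mpc / km_per_Mpc          # s^-1

# Asymptotic (de Sitter) expansion rate driven by the cosmological
# constant alone: H_Lambda = H0 * sqrt(Omega_Lambda)
H_Lambda = H0 * math.sqrt(Omega_Lambda)

# In the exponential phase, linear distances double every ln(2)/H_Lambda
seconds_per_Gyr = 3.156e16
t_double_Gyr = math.log(2) / H_Lambda / seconds_per_Gyr
print(f"distance-doubling time ~ {t_double_Gyr:.0f} billion years")
```

A doubling of linear distances multiplies volumes, and hence dilutes the matter density, by \(2^3=8\), matching the factor quoted above.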

Within a de Sitter space, there is a cosmic horizon – very similar to the black hole event horizon. It's a sphere that, according to quantum mechanics, apparently emits black-body, thermal radiation. And this cosmic horizon guarantees that the interior of the de Sitter space has a nonzero temperature. It is filled with thermal radiation whose origin is analogous to the Hawking radiation of black holes except that the temperature is much lower.

The temperature \(T\) of a de Sitter space is such that the energy \(kT\) is comparable to the energy of a photon whose wavelength is equal to the radius of the Universe \(R\):\[

kT \sim \frac{\hbar c}{R}.

\] These are extremely low-energy photons. But the probability that you find higher-energy photons or even other particles is, strictly speaking, nonzero. Although the probability that they get arranged into a brain is even smaller, if you have an infinite history in which the temperature stays at the small but positive constant \(T\) above, the radiation in the Universe will arrange itself to coincide with a human brain at some moment and some place, and then at infinitely many more moments and places, again and again.
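Plugging in numbers gives a feeling for how cold this thermal bath is. A minimal sketch, assuming a horizon radius of roughly \(1.6\times 10^{26}\,{\rm m}\) and ignoring numerical factors such as \(2\pi\):

```python
# Rough estimate of the de Sitter temperature from kT ~ hbar*c/R,
# ignoring numerical factors such as 2*pi (order-of-magnitude only)
hbar = 1.0546e-34      # reduced Planck constant, J*s
c = 2.9979e8           # speed of light, m/s
k_B = 1.3807e-23       # Boltzmann constant, J/K

# Assumed horizon radius, roughly the de Sitter radius c/H_Lambda
R = 1.6e26             # m

T = hbar * c / (k_B * R)
print(f"T ~ {T:.1e} K")
```

The result is of order \(10^{-29}\,{\rm K}\): absurdly cold, but strictly nonzero, which is all the argument needs.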

(Some "hardcore quantum gravity" approaches indicate that even de Sitter space must be either unstable, or the "extremely long timescales" such as the Poincaré recurrence time – when events start to repeat themselves – "don't exist" in some sense. I will ignore these conceivable ideas. Even if they're correct, they can be in no way "proven" by arguments revolving around the Boltzmann brains.)

Because the infinite history is predicted to contain infinitely many brains, the total number of these brains predicted over the whole history is infinite and therefore larger than the number of brains that resulted from the ordered evolution at places such as our blue, not green planet. And because we should be typical, Carroll keeps on arguing, we are obliged to be equally likely to be those accidentally arranged pieces of cosmic feces googolplexes of years after the Big Bang. The theory of thermal de Sitter space therefore predicts that we're almost certainly the Boltzmann brains. Because we're apparently not Boltzmann brains, the thermal de Sitter space must be eliminated.

The conclusion is silly because the theory that the Universe is close to a de Sitter space and it contains some low-temperature thermal radiation is the result of the best things that pre-string-theory science has taught us about the Universe, and string theory largely seems to confirm it, anyway.

Let us now jump to the third bullet, the wrong steps used all over Carroll's defective thinking. There are lots of places at which we may see that Carroll's reasoning is incorrect – way before we see that he has been led to wrong conclusions. The most important ones are
  1. The general assumption that the hypothesis "we are one particular member of a set S" has to have the same probability for every element of an arbitrarily defined set S: in most cases, there's no justification for this kind of egalitarian assumption and it may indeed be seen to be wrong
  2. Carroll's and others' inability to see that the previous "law" can't be true because the law would say different, incompatible things depending on how inclusive a set S would be chosen
  3. The worse version of the egalitarian assumption that "postulates" the same probabilities for all members of an infinite set
  4. These folks' general delusion that a higher entropy makes the probability of a hypothesis exponentially more likely
  5. Another worse version of the egalitarian assumption that "postulates" the thermalization between objects that exist now and those in a hypothetical distant future: this worse version heavily violates causality and unavoidably contradicts the conventional dynamical, causal laws of Nature
The first objection above says that the whole reasoning based on the "typicality" or "egalitarian approach" is utterly irrational. Within a cosmological model associated with some field equations of motion, you may say that the Earth is a planet (or your brain is a conglomerate of organic matter) that formed a few billion years after the Big Bang; or you may say that the Earth (or your brain) is an object that exists googolplexes of years after the Big Bang.

Even though Carroll completely fails to realize this fact, these are two totally different, inequivalent hypotheses. You just can't say that they're the same. They're as different as you can get. The idea that you were born out of a process of natural evolution that takes billions of years is completely different from the idea that you were born as a happy statistical fluctuation. These two hypotheses are exactly as different as Darwin's evolution differs from creationism. For this reason, Carroll's assumption that "we are a Boltzmann brain" must have the same probability as "we are a brain that arose from Darwin's evolution" is exactly the same kind of utterly flawed argument as an argument that creationism must be assigned at least 50% probability and at least 50% of the time at schools. It's simply not true. They are different propositions, so they agree or disagree with different pieces of evidence. Consequently, their probabilities of being valid are different – extremely different from each other. You just can't assume that the probabilities are the same. If you make this assumption, it indeed increases the probability that your brain is just a piece of šit that accidentally took the shape to fit into a skull.

I would say that the "insight" that inequivalent hypotheses must be treated separately – their probability must be determined and updated differently – is a cornerstone if not "the cornerstone" of any rational thinking. This different treatment of different hypotheses is also what Bayesian inference is all about – even though extreme leftists could describe this different treatment as a "discrimination". It's flabbergasting when someone fails to get this point – but Carroll obviously does fail to get this point.
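The point that inequivalent hypotheses must be updated separately can be made quantitative with a toy Bayesian calculation. The log-likelihoods below are illustrative assumptions, not numbers computed from any actual model; they only encode the qualitative claim that the observed ordered universe is vastly more probable under the "evolved brain" hypothesis:

```python
import math

# Toy Bayesian comparison of two inequivalent hypotheses, in log10 space.
# The likelihoods are illustrative assumptions, not model outputs: they
# only encode that the evidence E = "we observe an ordered universe with
# a consistent history" is astronomically more probable under evolution.
log10_L_evolved   = -10.0     # log10 P(E | evolved brain), schematic
log10_L_boltzmann = -1e50     # log10 P(E | Boltzmann brain), schematic

# Grant the "egalitarian" assumption its most generous form: equal priors
log10_prior = math.log10(0.5)

# Unnormalized log-posteriors: log prior + log likelihood
log10_post_evolved   = log10_prior + log10_L_evolved
log10_post_boltzmann = log10_prior + log10_L_boltzmann

# Posterior odds in favor of the evolved-brain hypothesis
log10_odds = log10_post_evolved - log10_post_boltzmann
print(f"log10(posterior odds) ~ {log10_odds:.3g}")
```

Even with perfectly equal priors, the posterior odds in favor of the evolved brain come out around \(10^{10^{50}}\): the evidence, not the egalitarian prior, dominates the conclusion.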

OK, my second objection is that Carroll "demands" the probability that you are a brain that has evolved on Earth to be equal to the probability of "any other brain that will have arisen as a Boltzmann brain in some thermal mess in the future". So there's a set S of brains and all elements of it must be equally likely. One reason why this "postulate" can't be a logically consistent law of Nature is that it depends on what the set S exactly is. What exactly do you count as a Boltzmann brain that is "as good as the one on Earth"? Do you demand that it's surrounded by a skull, as I have discussed? Do you demand the whole body so that the brain may survive for decades? Do you demand hair or sexual organs that aren't needed for survival? Is the DNA of the brain cells enough? Maybe the DNA molecule should be considered enough – Richard Dawkins told us that it's only the DNA inside that struggles for its reproduction or survival and it uses the rest of the organism as a tool, just like we generally use other tools such as hammers etc.

And so on and so on. There are trillions of such questions that would need to be answered to define the set S. For each version of S, you would get a vastly different estimate of the number of Boltzmann brains per "googolplex of years" and therefore different probabilities that you live on Earth. The estimated probabilities could easily differ by a factor of a googolplex, too. If a law can produce many "equally good" predicted probabilities that differ from each other by factors of a googolplex, the law is obviously completely ambiguous – or self-contradictory. A rational person can't possibly use or believe such a law.

Note that this particular issue surely does have analogies in political ideology. Extreme leftists often "postulate" some complete equality between people – a piece of šit serving Daesh somewhere in Northern Iraq is "equal" to every big personality in the West etc. However, this attitude is ultimately indefensible. When they promote affirmative action and open borders, they finally figure out that not everyone can be equal, after all. So when they want to flood the universities and the Western countries in general with lots of "equal" scumbags from Daesh etc., they find out that the proper conservative Westerners aren't "equal", after all. So those don't need to be represented in politics or universities etc.

The logical problem is that there can't exist any canonical, completely universal definition of "who is equal". People and animals are unequal and they fill a nearly continuous range of values of many characteristics in a multi-dimensional parameter space. To count someone in a set or not to count him means to choose sharp boundaries in this space. Every particular choice of boundaries is completely arbitrary, none of them is better than the others.

My following objection was about the non-existent uniform measure on infinite sets that I discussed e.g. in August 2016. The probabilities \(p_i\) or the distribution \(\rho(x)\) simply can't be "uniform" if the index \(i\) can take infinitely many values or if \(x\) belongs to a continuous set whose measure is infinite simply because there exists no real number \(y\) such that \(\infty\times y=1\). The probability distribution can't be normalized. This proves that all the reasoning based on the assumption that this probability distribution is the actual one – i.e. that it also has to exist – is unavoidably wrong. Viable theories of physics and cosmology, including the thermal physics of de Sitter space, don't talk about any such ludicrous non-existent uniform distributions which is why they don't have any problem. The claim that they have a problem is a falsehood. It is only a problem of a completely different "theory" or way of thinking that is totally inequivalent to the one that is used by the actual physicists and cosmologists.
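The normalization failure can be made explicit with a few lines of arithmetic: a uniform distribution over \(n\) outcomes assigns \(p=1/n\) to each, which tends to zero as \(n\to\infty\), while any fixed positive probability assigned to infinitely many outcomes makes the total diverge.

```python
# A uniform distribution over n outcomes assigns p = 1/n to each outcome.
# As n grows, p tends to 0, so on an infinite set "uniform" would force
# every probability to vanish -- and the total could never equal one.
for n in [10, 10**6, 10**12]:
    p = 1.0 / n
    print(f"n = {n:.0e}: p = {p:.0e}, total = {p * n}")

# Conversely, assigning the same fixed eps > 0 to infinitely many
# outcomes makes the total diverge: after 1/eps outcomes the partial
# sum already reaches 1, and it keeps growing without bound.
eps = 1e-100
print(f"partial sum reaches 1 after {1.0 / eps:.0e} outcomes")
```

There is no third option: the "uniform probability" on an infinite set is either zero everywhere (and sums to zero) or positive (and sums to infinity), never to one.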

The fourth objection deals with Carroll's general strategy to "favor theories that deal with many microstates" i.e. with high-entropy states. If a theory comes in many, \(\exp(S)\) microscopic versions, where \(S\) is some entropy, then – according to Carroll – it is \(\exp(S)\) times more likely than a theory without this degeneracy. But this assumption is absolute and utter rubbish, too. There's absolutely no law that would say that higher-entropy states are favored according to such a proportionality law.

The only "morally similar" law that holds is the second law of thermodynamics that says that the total entropy (almost) never drops by a macroscopic amount; the total entropy is a non-decreasing function of time. For finite systems, this law implies that after a long enough time, finite systems in a box evolve towards the thermal equilibrium and in the thermal equilibrium, all acceptable microstates are equally likely.

But none of the "details" in the previous paragraph may be ignored. The entropy is only increasing if time is increasing. If time is decreasing, the entropy is not increasing. It is decreasing, just like time. The arrow of time cannot be "dropped" from the second law of thermodynamics. Indeed, the second law of thermodynamics is a physical axiom preferring the correct arrow of time over the incorrect one. Carroll still believes in some bastardized "second law" that doesn't have any arrow of time in it. There's no such law in science. Only complete crackpots believe that something like that exists.

Also, when you assume equilibrium, you must actually have it, and some waiting is necessary to achieve it. In particular, the initial state of the Universe cannot be assumed to be in equilibrium. It almost certainly isn't. In fact, even much later stages of the evolution according to cosmological models may refuse to be equilibrium states. Only if the interactions between "different materials in the Universe" are strong enough, the expansion is slow enough relative to them, and other conditions are obeyed does the Universe approximately reach equilibrium. In many other cases, it doesn't – even after long enough periods of time.

If Carroll actually knew something about the building of cosmological models by the particle physics community, he would know that there exist models where dark matter was at equilibrium as well as those in which it wasn't. He apparently thinks that a physicist is "obliged" to assume the equilibrium everywhere – all microstates are equally likely – which is just absolute rubbish. Also, when physical systems evolve towards an equilibrium, it's simply not true that the "fastest conceivable path" towards the increasing entropy is the one that will take place. The entropy increases when you mix mortar. The ingredients could quickly get mixed up and approach the highest-entropy state of a uniform mortar. If Sean Carroll had ever worked with his hands, he would know that one needs quite some time and work to mix mortar well.

But even if the thermalization (evolution towards equilibrium) is fast enough, the way Carroll uses the entropy to increase the estimates of probabilities is completely wrong. Imagine that you play paintball against a million other players. You shoot blue paintballs; your foes have blue costumes except for one player hiding randomly in the field whose costume is red. When a red costume is hit by a blue paintball, the red and blue colors get mixed to purple, which increases the entropy by a much greater amount – by about \(10^{24}\) in units of Boltzmann's constant – than when a blue paintball hits a blue costume.

Now, shoot randomly at one of the enemies. What will be his color?

Well, you – a person with common sense – know that you will almost certainly hit a blue enemy because 99.9999% of the people have blue costumes. However, Carroll will tell you that the number of final microstates is \(\exp(10^{24})\) times greater in the case when you hit a red costume. This factor is much greater than one million so it wins. According to Carroll, you will almost certainly hit the red soldier. This argument is isomorphic to all his "calculations" that prefer Boltzmann brains or high-entropy states.

Who is right? Just try to play the damn game. Or do anything in the real world that isn't completely detached from reality. You will see that Carroll is wrong in 100% of his propositions about the world that deal with probabilities or entropy. The higher entropy of the red-blue mixture just doesn't matter and cannot matter because this extra entropy increase is a consequence of an event that is exactly as likely as the similar event involving any particular blue soldier. If the extra entropy is a consequence of an event, the probability of the event obviously cannot depend on the magnitude of this extra entropy. Whether events happen only depends on their past, not their future!
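The paintball thought experiment is easy to simulate. A minimal sketch (with arbitrary, made-up parameters such as the position of the red costume) confirming that the hit probability is set by the counting of targets, with the entropy of the outcome playing no role:

```python
import random

# Monte Carlo version of the paintball story: one red costume among a
# million blue ones, and the shooter picks a target uniformly at random.
# The huge entropy released when red is hit does not enter the dynamics,
# so it cannot change the hit probability.
random.seed(0)                # for reproducibility
N_TARGETS = 1_000_000
RED_INDEX = 42                # arbitrary, made-up position of the red costume

trials = 2_000_000
red_hits = sum(1 for _ in range(trials)
               if random.randrange(N_TARGETS) == RED_INDEX)

freq = red_hits / trials
print(f"empirical P(hit red) ~ {freq:.1e}")   # tiny -- nowhere near certainty
```

The empirical frequency hovers around \(10^{-6}\), exactly as common sense dictates, rather than near 1 as the entropy-weighted "calculation" would predict.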

Again, this trivial point is misunderstood by Carroll and similar "philosophers". The appearance of the Boltzmann brains in the infinite future of the thermal de Sitter space is a consequence of this spacetime's infinite lifetime combined with a nonzero temperature bounded from below. These Boltzmann brains will emerge as a consequence of the fact that this Universe was born, took on a nonzero temperature, and survived. Because they're a consequence, the detailed properties of this future world just can't affect the probability that the events leading to these consequences happened.

So I finally got to the final point – that the reasoning of the Boltzmann brain alarmists is absolutely acausal. They believe that the calculation of the probability that something is happening now depends on properties of the world that may arise in the future. That's simply not the case. It's untrue regardless of the precise interpretation of the "future" – it may be the future according to the cosmic time in a particular cosmological model or the future according to someone's subjective perception of time, too. The dynamical laws of Nature – including the most modern, established laws of quantum mechanics (including quantum statistical physics) – calculate the probabilities of outcomes now purely out of the observations made in the past. These are the correct probabilities. Carroll is proposing an alternative way to calculate probabilities of outcomes now that also (in fact, primarily) depends on the details of the world that may arise in the future.

It's damn obvious that these two calculated probabilities don't agree in general – in fact, they almost never agree and their disagreement is often "by factors of a googolplex". Only one of these two laws, either the dynamical laws determined by physics or Carroll's egalitarian acausal probability calculus, may be right. The other is a product of a brain composed out of excrements that randomly filled a skull. I hope that you know which of these two candidate theories is the trustworthy one and which one is the šit.

