Saturday, March 28, 2009

Boltzmann eggs can't exist

Off-topic: Discovery landed smoothly: video

In science, the goal is to deduce correct answers to well-defined questions, using insights that have been directly or indirectly induced from the experiments and observations. To realize this task in a particular context, we

  1. must define the questions accurately enough - we must know the context and some information about the system, such as its initial or final state
  2. want to deduce something else about the system, usually in terms of probabilities of different answers, outcomes, predictions for the future, or retrodictions about the past (because certainty is rare) - propositions that can be tested, at least in principle
  3. must use the correct microscopic laws to calculate the probabilities, the correct identification of the mathematical objects with the observed ones, and the correct laws of mathematical logic to deal with propositions and/or their probabilities.
In the picture above, an egg in a closed box is created out of complete chaos - bypassing the evolution envisioned by Charles Darwin. Such an egg, which is nothing but a random statistical fluctuation, is referred to as a Boltzmann egg, in analogy with the Boltzmann brain and other Boltzmann things, even though this particular egg could also be called an "intelligently designed" egg.

Can the process in the picture above be realized in the world we inhabit? Can existing science answer this question?

The answer is that science can obviously answer this question: the decay of the egg into chaos can be realized while the creation of the egg out of chaos cannot. To see why, we must carefully formulate what we know, what we want to find out, and what the correct formulae to compute the odds are.

Entropy and its difference

The initial and final "chaotic" states have entropy S_0 which is greater than S_1, the entropy of the intermediate state of the egg.

If we use quantum mechanics - or if we divide the classical phase space into cells whose volume is comparable to the appropriate power of Planck's constant "h" in quantum mechanics - there are roughly exp(S_0) configurations or microstates that look like the "chaotic" state.

A much smaller subset of configurations, exp(S_1) of them, macroscopically resembles the egg in the middle of the picture. The difference between S_0 and S_1 will be called dS, and it is comparable to 10^{25} times Boltzmann's constant (which will be set to one, by a natural choice of units) for realistic eggs.

This means that the number of microstates corresponding to the "chaos" is roughly exp(10^{25}) times higher than the number of microstates corresponding to the "egg" because almost every atom gains/loses one unit of entropy or so. It's a huge factor, and you should convince yourself that no details about the counting of the microstates will affect this number "qualitatively": only the logarithm of the logarithm of the number of microstates is easy to imagine, and we don't know whether it is 24 or 26 anyway; it is certainly close to the exponent of Avogadro's constant.
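The insensitivity to the details of the counting can be seen in a toy computation. This is a made-up illustration: the value dS = 10^{25} (in units where Boltzmann's constant is one) is the order-of-magnitude estimate from the text, and the factor 10^{1000} stands for any conceivable "detail" of the microstate counting.

```python
import math

# Assumed entropy drop for a macroscopic egg, in units where k_B = 1.
dS = 1e25

# log10 of the ratio of microstate counts, exp(S_0)/exp(S_1) = exp(dS):
log10_ratio = dS / math.log(10)  # about 4.3e24

# Multiplying the microstate counts by any "detail" factor, even 10^1000,
# barely moves the logarithm of the logarithm:
log10_shifted = log10_ratio + 1000
print(math.log10(log10_ratio), math.log10(log10_shifted))
```

Both printed numbers are about 24.6: at the "log of the log" level, no reasonable refinement of the counting matters.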

Calculating the probabilities

Let us now estimate the probabilities that the evolution takes place as the picture dictates. Among the exp(S_0) high-entropy states, a random initial microstate has pretty much the same odds - which must therefore be comparable to exp(-S_0) - of evolving into any other microstate in the ensemble. For each pair of microstates, we will assume the transition probability to be equal to exp(-S_0).

It follows that the probability that the initial "chaos" evolves into the final "chaos" is pretty much equal to 1. The factors of exp(S_0) cancel.

What happens if the "egg" is the initial state or the final state? Well, the time evolution of an "egg" microstate by a sufficient time "t" (much longer than the time needed for the egg to get spoiled, and for any conceivably emergent chicken to die and decompose) gives you almost certainly a microstate that looks like "chaos".

So what is the probability that an "egg" evolves into "chaos" after time "t"? It is essentially equal to 1.

What about the other (left) process, namely a process whose initial state is "chaos" and whose final state is "egg"? The stupid people without any common sense and without any knowledge of thermodynamics think that the probability will also be equal to 1, by the time-reversal symmetry.

The more sensible people know that the egg can't be born out of chaos. So this probability is essentially equal to 0. Note that we are not talking about any vague philosophical or untestable questions. We are talking about a very physical, observable question whether an egg will be created out of a complete chaos. And the two candidate answers to the probability question - either almost 0 or almost 1 - are as different as you can get. If science couldn't answer this question, it could clearly answer no realistic questions at all.

Who is right? The stupid people or the sensible people?

The sensible people, of course. How do we calculate the probability that a "chaotic" configuration evolves into an "egg"? Well, the initial state is a "chaotic" microstate. We don't know exactly which state it is. So we must assume that it is one of those states that look like "chaos". Each of these microstates is pretty much equally likely - or at least, microstates that differ only by extremely tiny microscopic differences are equally likely as the initial state.

The final state is a state of an "egg". Again, we don't know which microstate it is. What we must be really asking about in our macroscopic question is what is the probability that we obtain any microstate that looks like an "egg" in the future. We sum the probabilities over all final egg-like microstates.

A simple multiplication

As we have mentioned, the probability of evolution from any initial microstate out of the set with exp(S_0) elements to any final microstate from the same set is comparable to exp(-S_0). This is also true if one or both of these states are egg-like. What is the probability of the "chaos to egg" evolution? We must pick one initial microstate but we must sum over all egg-like final microstates because all of them are OK.

The precise choice of the initial microstate doesn't matter because virtually all of them evolve in the same way. The probability of any "microstate to microstate" evolution is exp(-S_0). But because we allow exp(S_1) possible final egg-like microstates, the total probability is
Prob = exp(S_1) exp(-S_0) = exp(S_1 - S_0) = exp(-dS).
For our numbers, the probability is close to exp(-10^{25}). It's zero. It's important that there are two exponentials embedded in each other. Even if you allow the "egg creation" to occur anywhere and if you multiply exp(-dS) by things like the number of particles in the visible Universe, i.e. surely by less than 10^{100}, you will get zero for all practical purposes.
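The arithmetic of "two exponentials embedded in each other" is easiest in log space. A minimal sketch, again assuming dS = 10^{25} and a generous boost of 10^{100} independent places/times where the fluctuation could occur:

```python
import math

dS = 1e25                        # assumed entropy drop, in units k_B = 1
log10_prob = -dS / math.log(10)  # log10 of exp(-dS), about -4.3e24

# Let the "egg creation" be attempted at ~10^100 places/times in the
# visible Universe; this only adds 100 to the base-10 exponent:
log10_boosted = log10_prob + 100
print(log10_boosted)  # still about -4.3e24: zero for all practical purposes
```

The boost of a hundred orders of magnitude is invisible next to an exponent of order 10^{24}.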

The process can't ever occur in the real world. This conclusion has nothing to do with cosmology or anything whatsoever that occurs outside the box or before or after the hypothetical process, for that matter. It is a completely robust conclusion of local physics.

In the 19th century, people didn't know that our Universe had a localized origin - the Big Bang. So they may have been confused about the "infinite past" of our Cosmos and various "infrared" regularizations needed to interpret the past properly. But they knew that the Boltzmann brains and eggs were less likely than anything they could think of.

For example, even if you assume that there is no organizing principle on the landscape of stringy vacua and our Universe had to be finely selected from a seemingly random jungle of 10^{500} equally likely (unrealistic!) vacua, this reduction of the probability of life is still negligible relative to the suppression of probability needed to create an egg by a statistical fluctuation: 10^{500} is much much less than 10^{10^{25}}. Let me write this sentence more uniformly:
10^{500} is much much less than 10^{10,000,000,000,000,000,000,000,000}. Got it? :-)
And we have only required that a single egg is created for a while, before it gets rotten. Imagine that we were more ambitious about the "life". ;-) No, eggs or brains can't be created as statistical fluctuations. According to everything we know, the probability of such a hypothesis is zero for all conceivable practical purposes.

Any hypothesis that offers a different mechanism behind the evolution than a statistical fluctuation can be scientifically shown to be vastly preferred over the hypothesis of a statistical fluctuation - as long as the probability it predicts for the observed phenomena is nonzero. We don't need any fancy science to derive this conclusion and this conclusion doesn't contradict any other well-established insights about science.

Time-reversal asymmetry of logic

Note that the probability of "egg to chaos" was equal to one while the probability of "chaos to egg" was equal to exp(-dS), i.e. essentially to zero. How is it possible that these two probabilities are so vastly different even though the processes seem to be time-reversal images of each other, and should have the same odds according to the time-reversal (or CPT) symmetry?

The answer is, of course, that the actual macroscopic processes we considered are not time-reversal images of each other. They cannot be.

If we considered the evolution of one particular initial microstate to one particular final microstate, it would be possible to revert the evolution and use the time-reversal symmetry to derive the conclusion about equal probabilities. But we are asking macroscopic questions: we must be careful what these questions actually mean and how their odds are computed.

The "egg to chaos" evolution refers to
  • a random initial egg-like state
  • a random final chaos-like state
while the "chaos to egg" evolution refers to
  • a random initial chaos-like state
  • a random final egg-like state.
As you can see, the description above still looks T-symmetric. But the formulae to calculate the probabilities are not images of each other at all! If you try to write down these formulae, you will also be forced to refine the descriptions above and you will see that there's no T-symmetry here. The "egg to chaos" macroscopic probability is computed by
  • averaging over initial egg-like states
  • summing over final chaos-like states
while the "chaos to egg" macroscopic probability is computed by
  • averaging over initial chaos-like states
  • summing over final egg-like states.
Once we tried to be a little bit quantitative, we saw that the past-future symmetry was really broken in all these macroscopic statements. The initial state is "average(d over)" while the final state can be "any" within a certain ensemble. These two adjectives may sound similar - like the word "random" - except that one of them (initial) is determined and assumed when the evolution is computed, while the other (final) is not.

The averaging over initial states is necessary because we said that the initial state was random within a certain ensemble. On the other hand, all final states that agree with a certain macroscopic description are equally good.

So there's no averaging over final states: we sum over them. The dynamical laws actually tell us whether the final state is random or not. And indeed, the final state is almost always random within its macroscopic ensemble. This fact can be derived but it can only be derived for the final states. For the initial states, it must be assumed.
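The averaging-vs-summing rules above can be checked in a minimal toy model. Everything here is a made-up illustration: a uniform microstate-to-microstate transition probability 1/N plays the role of exp(-S_0), and the hypothetical counts N_egg and N_chaos play the roles of exp(S_1) and exp(S_0).

```python
# Toy model: N microstates with a uniform transition probability 1/N between
# any pair (standing in for exp(-S_0)). Egg-like states are a tiny minority.
N_egg, N_chaos = 4, 10_000
N = N_egg + N_chaos

def macro_prob(n_initial, n_final):
    """Average over the initial macro-ensemble, sum over the final one."""
    per_microstate = 1.0 / N           # microstate-to-microstate probability
    summed = n_final * per_microstate  # sum over all allowed final microstates
    # All initial microstates behave identically here, so the average is trivial:
    return sum(summed for _ in range(n_initial)) / n_initial

p_egg_to_chaos = macro_prob(N_egg, N_chaos)  # close to 1
p_chaos_to_egg = macro_prob(N_chaos, N_egg)  # close to N_egg / N, tiny
print(p_egg_to_chaos, p_chaos_to_egg)
```

The ratio of the two macroscopic probabilities comes out as N_chaos/N_egg, the toy analogue of exp(dS), even though every microstate-to-microstate probability was exactly T-symmetric.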

On the other hand, we can never assume anything about the future. The information about the future is always derived - because the future itself has always evolved from the past. Only nutcases who think that they are prophets start with an assumption about the future in order to derive something about the present or the past. All sensible people know that they don't know what the future will be - and the only way to learn something about it is to process the data known from the past (or the present).

You might say that this asymmetry between the past and the future is merely "psychological": we just don't know what the future is but "someone" objectively does know.

The stupid people might think that science should "circumvent" this detail. But science can never circumvent the "detail" that certain things are not known. The future can never be known a priori and this is a fundamentally important fact for anyone who wants to deduce answers to any questions rationally. Feel free to think that it's "just" an issue of psychology but if something is not known and cannot be known, anyone starting with assumptions about it - pretending that they're the primary insights about the world - is a nutcase.

Different denominators

At any rate, it's clear where the factor of exp(-dS) comes from. We are always averaging the probabilities over initial states and summing over final states. These sum/averages can always be written as a double sum over both initial and final states divided by a certain number.

But for the allowed process, "egg to chaos", the double sum is only divided by exp(S_1) while for the forbidden process, "chaos to egg", the double sum has to be divided by a much larger number, exp(S_0). That's why the probability of "chaos to egg" is much smaller - and essentially zero. The ratio of the probabilities is exp(dS) which is a hugely huge number.

The stupid people never give up. They could try to mess with the rules and argue that one should sometimes average over the final microstates - or sum over the initial microstates (without the denominator needed for averaging). But the first error is equivalent to denying that
P(A or B) = P(A) + P(B)
for mutually exclusive A, B. And the second error is equivalent to denying that the total probability of all mutually exclusive initial states must be equal to one (which is why we must distribute the prior probability in between them, i.e. why we must average over them).
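The second error can be exposed with a two-line numerical check. A trivial sketch: three equally likely initial microstates, each of which evolves into some final state with certainty (unitarity of the microscopic evolution).

```python
# Each of 3 equally likely initial microstates evolves somewhere with probability 1.
probs_per_initial = [1.0, 1.0, 1.0]

# Correct rule: average over the initial ensemble - total probability stays 1.
p_correct = sum(probs_per_initial) / len(probs_per_initial)

# Wrong rule: sum over initial states without the denominator - "probability" 3 > 1.
p_wrong = sum(probs_per_initial)
print(p_correct, p_wrong)  # 1.0 3.0
```

Dropping the denominator violates the requirement that the prior probabilities of mutually exclusive initial states add up to one.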

At any rate, only crackpots misunderstand that mathematical logic and the probability calculus inevitably introduce a logical arrow of time into all questions that deal with the incomplete knowledge of the physical system, and with the probabilistic quantification of the word "OR". The word "OR", when inserted in between the probabilities of two microstates, always implies summation if the microstates are final but averaging if they're initial.

There's no way to avoid this elementary asymmetry of mathematical logic: "A implies B" is something different than "B implies A". They follow different rules. The same thing holds for the relationship between the information about the past and the information about the future because "the past A evolves into the future B" is a special case of "A implies B" applied in the context of natural sciences.

And this simple argument is the ultimate reason behind all irreversible processes in physics, including the most general second law of thermodynamics itself, friction, viscosity, diffusion, dissipation, decoherence, and many others.

Appendix: an example of asymmetry in logic

I said that "A implies B" and "B implies A" follow different rules. Let me give you the simplest example from mathematical logic. In logic, the following proposition is tautologically true:
A implies (A or B)
You may verify this tautology. The only case when "X implies Y" is false is when "X" is true but "Y" is false - and that can't happen here because "(A or B)" is true whenever "A" is true. On the other hand, the "T-reverted" proposition
(A or B) implies A
is not tautologically true: if B is true while A is false, the proposition is false. ;-) Now, to get the relevant message for evolution in physics, it's enough to interpret A and B as conditions on a state - as two projectors, if you wish. They act on the initial state if they're written on the left side of "implies" (now: "evolves into") and on the final state if they're written on the right side of "implies" (now: "evolves into"). We obtain the following simple observations:
The initial state satisfies A and the final state satisfies (evolved) A or B
is always true. On the other hand:
The initial state satisfies A or B and the final state satisfies (evolved) A
is not always true. Indeed, the only difference between these two propositions is that the adjectives "initial" and "final" are interchanged (and the subtle word "evolved" is added). But that makes a huge difference!
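The two propositions from the appendix can be checked mechanically by a brute-force truth table - a four-row exercise, with "X implies Y" encoded as "(not X) or Y" as usual:

```python
from itertools import product

def implies(x, y):
    # "x implies y" is false only when x is true and y is false
    return (not x) or y

# "A implies (A or B)" holds in all four cases: a tautology.
tautology = all(implies(a, a or b) for a, b in product([False, True], repeat=2))

# "(A or B) implies A" fails for A = False, B = True: not a tautology.
reverted = all(implies(a or b, a) for a, b in product([False, True], repeat=2))

print(tautology, reverted)  # True False
```

Swapping the two sides of "implies" is exactly the interchange of "initial" and "final" in the physical propositions, and the truth table confirms that it is not a symmetry.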

It's because the assumptions about the initial state, whenever they are a part of the formulation of a physical question, are always "true" throughout the solution of the physical problem, and can be assumed to be true (or their probabilities are determined as a part of the homework): they're supplemented with an exclamation mark. That's also why the sum of prior probabilities of possible initial states must be equal to one.

On the other hand, propositions about the final state are always uncertain, accompanied by a question mark (and their probabilities can never be determined a priori). That's also why the probabilities of possible final microstates must be added without any normalization factor in the denominator.


Some people might understand that there is a one-to-one correspondence between individual initial and final microstates in physics. But if they don't know mathematical logic and its inherent asymmetry between the assumptions and their consequences, they don't know 1/2 of the basic things that are needed to think rationally. Mathematical logic is paramount in science - especially when whole ensembles of states are considered at the same moment - and it doesn't satisfy any generalized T-reversal symmetry between the assumptions and their consequences.

To determine probabilities of macroscopic propositions, one needs to know both logic and the microscopic dynamical laws. There is no contradiction and there cannot be any contradiction between these two parts of the physicist's toolkit.

It's extremely painful that Caltech, a previously prestigious institution, became a place that harbors full-fledged crackpots who try to deny very basic facts about the statistical origin of the thermodynamic phenomena, facts that have been fully understood for 150 years, and who try to expand the ignorance and stupidity of a large portion of the public to bring profit to Caltech, an institution that should actually promote knowledge rather than idiocy.

And that's the memo.


snail feedback (2):

reader SFJP said...

Well, Lubos, as a logician I would just say that part of your argument is just tautological, which means of no informational value. Why? Because it is well known, at least since the writings of Olivier Costa de Beauregard about time in the 70s-80s, that all the arrows of time are equivalent to each other and in fact are just equivalent to the basic empirical law of statistics: the law of the great numbers, which says that after a sufficient number of random tries you'll get results closer and closer to their a priori probabilities. So you cannot explain one by another, because they are the same.

Now the real mystery about time macroscopic irreversibility could be just stated as "why do we always verify the law of the great numbers when making enough tries?". And this is the mystery.

Also, about your analogy with the logical implication, you're making a very usual mistake about the real signification of this usual logical connector: the confusion between the logical expression "A implies B", which just states "(Not A) or B", and the inference record "from A, B has been derived", which is not a simple boolean logical expression but a meta-statement about effective causation that can be captured only as the expression of a more sophisticated modal language.

reader Lumo said...

Dear SFJP, the "logician",

indeed, I agree that most of my statements are tautologies. That's why a logician can have no doubts about them: they are valid and one doesn't need any assumptions to see that they are valid. The only way to dispute them is to dispute logic itself.

This text was about the arrow of time, however. Doubts about the law of large (not "the great"!) numbers are another level of people's lunacy. The law of large numbers is not a speculative topic waiting to be debunked. It is a theorem that can be easily proven.

Of course, if one disputes that it is possible to count events, divide them, or that there can exist any situations where the outcomes have a probabilistic distribution, it will be impossible to talk to him about any science, mathematics, or anything that requires rational reasoning. I am not disputing that.

Indeed, one of my assumptions is that one has to be able to think rationally, count, and check whether a prediction was satisfied by observations. Without these things, science is impossible.

Best wishes