Friday, August 29, 2008

Boltzmann brains: popular misconceptions

Today, there are two anthropic hep-th preprints concerned with the so-called "Boltzmann brains". One was written by Bousso, Freivogel, and Yang, while the other was constructed by De Simone, Guth, Linde, Noorbala, Salem, and Vilenkin.

Many of them are big names, and we like them, yet the papers are obviously wrong. Every individual sentence in the abstracts of these two papers actually seems to be incorrect by itself. The same comment applies to most sentences in the bulk of the papers. They use all kinds of wrong proportionality laws between the "volume" and the "probability" - the kind of laws that young pupils believe when they first hear about proportionality in their science classes. Everything is proportional to everything else, isn't it?

Well... No, it's not. But let me start with some comments about:

Boltzmann brains

The entropy of a system never decreases "macroscopically", at least not "systematically" for long periods of time. This observation is called the "second law of thermodynamics", it underlies the so-called (thermodynamic) arrow of time, and using general methods of logic and statistical physics, it can be proven to hold in any physical system that admits macroscopic changes of the entropy: see the previous article about the second law.

And if the entropy does decrease for a while :-), the probability that such a thing occurs decreases exponentially with the total decrease of the entropy: when other things are "equal", whatever that exactly means, the probability goes like exp(-|entropy_jump|), which can be an impressively tiny number. Whenever the entropy jump is "macroscopic" (i.e. a large multiple of the tiny Boltzmann's constant, a number that is not too far from Planck's constant), it is effectively infinite in the natural microscopic units, and the probability of such an evolution is effectively zero.
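If you want to see how brutal this suppression is, here is a minimal Python sketch; the entropy drop below is an arbitrary illustrative value (a modest macroscopic change), not a measured one:

```python
import math

# Probability of a spontaneous entropy drop goes like exp(-|Delta S|),
# with the entropy measured in units of Boltzmann's constant.
# The value below is an arbitrary illustrative choice.
delta_S = 1e20  # hypothetical entropy drop, in units of k_B

# exp(-1e20) underflows ordinary floats, so track log10 instead:
log10_prob = -delta_S / math.log(10)

print(f"P ~ 10^({log10_prob:.3g})")  # P ~ 10^(-4.34e+19)
```

A probability of order 10^(-10^{19.6}) is "effectively zero" in any practical sense.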

Still, the real entropy changes (locally, the entropy must decrease in some regions when we want to obtain life) needed to create e.g. a brain are finite so the probability is not strictly zero. If you consider a gas with a lot of hydrogen, oxygen, and carbon atoms, it's a messy, chaotic system. But there is some nonzero probability that the atoms in a square foot organize themselves in such a way that their positions and velocities are indistinguishable from the positions and velocities of atoms in a human brain that includes some particular memories.

Such a randomly created brain that is locally indistinguishable from a normal brain is referred to as a Boltzmann brain. Ludwig Boltzmann was the first man to play with these ideas: our life could have resulted from a random, very unlikely fluctuation of a high-entropy system towards a much lower entropy. Such an assumption would, in fact, be somewhat natural in a Universe that has existed since t = -infinity.

Boltzmann, much like everyone else, realized that all the brains we have ever observed around are normal brains, i.e. results of well-known and canonical rules of evolution that takes place on a planet in the Solar System inside a Universe of the type that we know. More precisely, he knew that the entropy has apparently been increasing at least for hundreds of millions of years. Today, we know that it has probably been increasing for billions of years and because of the Big Bang theory, this time interval is likely to be the "whole story". The new biological and cosmological evidence has simply made Boltzmann brains unnatural and unlikely while orderly evolution became natural and likely.

Nevertheless, it is still possible, at least in principle, for brains that are locally indistinguishable from ours to be born in a complete chaos, by an unexpected entropy drop but without all the complicated evolution.

That's a fun game except that many people have recently started to say that such games can actually constrain possible models of cosmology. Of course, they can't, and any framework of ideas that claims to rule out a cosmological model by counting some Boltzmann brains is falsified by itself. Most of the anthropic researchers, including the well-known cosmologists, seem to share a couple of childish, elementary misunderstandings about the character of Boltzmann brains, and all these myths are crucial yet shaky pillars of the anthropic papers. Let's look at them and try to correct them.

Myth: the Boltzmann brains immediately turn into chaos

This myth arises because its advocates probably don't understand quantum mechanics well enough and they inconsistently mix several interpretations of quantum mechanics, to achieve nonsensical conclusions. To see what actually happens with Boltzmann brains in one second, let us begin with classical physics.

In classical physics, we really talk about the positions and velocities of all the atoms inside the Boltzmann brain. If electromagnetic waves inside the brain are important for its identity and memory, their configuration in a Boltzmann brain must reproduce a normal brain, too (by definition of a Boltzmann brain).

But as we know, the laws of physics are pretty much local. So if we know that the normal brain will be thinking at least for a second or so, until the signals from the exterior influences manage to propagate inside, it follows that the corresponding Boltzmann brain will be doing the same thing. The same initial conditions in a region imply the same results!
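If you doubt it, locality is easy to check numerically. Here is a toy sketch under obvious simplifications (a 1D wave equation with finite propagation speed standing in for the local laws of physics; all sizes and seeds are arbitrary): two initial conditions that agree inside a region stay bit-for-bit identical deep inside that region until signals from outside arrive.

```python
import numpy as np

# Toy locality demo: u_tt = u_xx (1D wave equation), leapfrog scheme.
# Two initial conditions agree on cells [80, 120) and are random elsewhere.
rng = np.random.default_rng(1)
N, r2, steps = 200, 0.25, 15          # r2 = (c*dt/dx)^2 = 0.25 satisfies CFL

u_a = rng.normal(size=N)
u_b = rng.normal(size=N)
u_b[80:120] = u_a[80:120]             # same data inside the shared region

def evolve(u, steps):
    prev, cur = u.copy(), u.copy()    # start from rest: u_t(0) = 0
    for _ in range(steps):
        nxt = np.zeros_like(cur)
        nxt[1:-1] = (2 * cur[1:-1] - prev[1:-1]
                     + r2 * (cur[2:] - 2 * cur[1:-1] + cur[:-2]))
        prev, cur = cur, nxt
    return cur

a, b = evolve(u_a, steps), evolve(u_b, steps)

# The stencil reaches one cell per step, so after 15 steps only cells within
# 15 cells of the edges of [80, 120) can differ; the center cannot:
print(np.max(np.abs(a[96:104] - b[96:104])))   # exactly 0.0
```

The central cells agree exactly because their entire domain of dependence lies inside the shared region: the same initial conditions in a region really do imply the same results, for a while.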

Of course, the environment in which a Boltzmann brain lives will influence its future evolution. If a Boltzmann brain is placed in the vacuum or some light gas, the subsequent evolution will bring him or her the same sensations that a normal brain would feel if its owner suddenly appeared under the guillotine at the critical moment. I haven't tried it ;-) but whatever you feel, the evolution of a Boltzmann brain and a normal brain will be qualitatively identical. The positions and velocities of the atoms of the brain evolve in the same way, while the external conditions only begin to influence the internal structure of the brain after much longer intervals than the typical intervals needed to make a few operations in the brain.

If a Boltzmann brain is equipped with a full-fledged Boltzmann body, chances are that it will survive for several minutes, before it or he or she suffocates. ;-) If you begin with a Boltzmann brain supplemented with a Boltzmann body inside a properly isolated Boltzmann hotel equipped with Boltzmann food and Boltzmann toilets (inside a chaotic Universe), it or he or she can happily live for 70 years or more. Such a life will be pretty much indistinguishable from the life of a normal brain.

This conclusion should be obvious in the classical setup because everyone can imagine the differential equations that dictate what happens with the atomic coordinates and momenta: identical initial conditions will lead to the same outcome. But it is a "macroscopic" conclusion that is guaranteed to be shared by quantum mechanics, too (because quantum mechanics reduces to classical physics for similar macroscopic questions). However, the anthropic people often incorrectly say that their Boltzmann brain is instantly transformed into chaos.

Their mistake follows from an erroneous interpretation of the phrase "quantum fluctuation".

They essentially use this phrase in the same way as we normally do in Feynman's path integral approach to quantum mechanics: we sum over all histories and most of them are "chaotic". But the fact that one of these chaotic histories looked like a Boltzmann brain at t=0 isn't enough to say that the system was found in the state of a Boltzmann brain at t=0. These are entirely different things.

If we say that the initial state looked like a Boltzmann brain described by a wave function, psi, at t=0, it means that the path integral should be organized as a sum over all histories with various given initial conditions at t=0, weighted by the wave function (or functional) for each initial condition. When you do so properly, the local predictions will, of course, be independent of the question whether the brain "is" a normal brain or a Boltzmann brain. Physically, it's the same brain, after all.

So it is wrong to expect that Boltzmann brains would behave completely differently than normal brains. Because of the universal, local laws of physics, they behave in the same way. The greater the volume we decide to arrange properly, the longer the time needed to observe any difference between the situation of a normal observer and a freak observer.

Moreover, it doesn't matter whether we think about these issues in terms of classical physics or quantum physics: a full measurement of a complete set of observables - i.e. a full identification of a wave function - leads to equally "specific" initial conditions as in the case of the classical initial conditions. Quantum fluctuations do influence the evolution of physical systems, according to quantum mechanics, but the influence is identical for normal brains as well as Boltzmann brains. Once a Boltzmann brain is defined by initial conditions at t=0 that reproduce a normal brain, the two brains' physical properties will be indistinguishable for positive t, too.

Truth: Boltzmann brains have a genuinely nonzero probability to emerge

Because the volume of spacetime is "infinite", the anthropic people imagine that it is enough to "beat" the smallness of the probability density that a Boltzmann brain occurs in a region. Because the probability density is nonzero, it is guaranteed that infinitely many Boltzmann brains occur in the whole spacetime.

Such games involving indefinite products, such as zero times infinity, are subtle and one shouldn't be as naive as Zeno when he was designing his famous paradoxes. In the real world, there are actual cutoffs that imply that infinite things are not quite infinite. This is true for the infinite volume of spacetime, too. However, we will see that the conclusion - that the large volume wins - is correct, anyway.

To be very specific, let us try to construct an actual realistic brain from the thermal quantum fluctuations in our (asymptotically in the future) de Sitter space.

If the temperature of the space were exactly zero, a brain would never be spontaneously created. This is a special case of my previous comment that "quantum fluctuations" are not able to do certain things that the anthropic people incorrectly imagine. If the Universe is known to be in the vacuum state, i.e. the eigenstate of the Hamiltonian with the minimum eigenvalue, it will be in the same vacuum state forever. With probability one: for sure. The vacuum state itself "includes" many quantum fluctuations (complicated trajectories contributing to the path integral; or non-trivial wave functionals with nonzero chances to have nonzero velocities) but it is exactly stationary.
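This point can be illustrated with a toy numerical check; a random 6-level Hamiltonian stands in for "the Universe" (the size and the seed are arbitrary), and its ground state plays the role of the vacuum:

```python
import numpy as np

# A stationary state only acquires a phase under time evolution, so the
# probability of "fluctuating" into anything else is exactly zero.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2          # a random Hermitian Hamiltonian

E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                    # "vacuum": lowest-eigenvalue eigenstate

t = 123.0
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T   # U = exp(-iHt)
psi_t = U @ psi0

print(abs(np.vdot(psi0, psi_t))**2)   # 1.0 (up to rounding): still the vacuum
```

The overlap stays at one because a Hamiltonian eigenstate only acquires an overall phase: nothing ever "fluctuates out" of it.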

So we really need some "chaotic material" to create the brains randomly. The de Sitter radiation from the horizon can play this role. But its temperature is really tiny: the typical thermal wavelength is proportional to the size of the Universe, about 10^{60} Planck lengths. The temperature is therefore 10^{-60} in Planck units. We need to create an elementary "electroweak" particle whose energy is something like 10^{-30} Planck energies. The Boltzmann factor, exp(-E/kT), adds a huge exponential suppression. In this case, it is exp(-10^{-30}/10^{-60}) = exp(-10^{30}).

Now, you should create 10^{27} different, a priori "independent" elementary particles (the number is related to Avogadro's constant: let's neglect the interactions, hoping that they don't change the result qualitatively), to construct the whole brain. So in order to obtain the combined probability for 10^{27} "independent" events of the same type, you need to take a huge power of the previous tiny probability, exp(-10^{30}): the exponent in this new power must be 10^{27} itself. These are very tiny numbers and we are combining the powers in too many confusing ways so let us write a displayed formula:

exp(-10^{30}) ^ (10^{27}) = exp(-10^{30} x 10^{27}) = exp(-10^{57}).
This is a similar supertiny, expo-exponential probability to the previous one, with the expo-exponent of 30 replaced by 57. ;-)
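These expo-exponentials overflow any floating-point type, but you can verify the bookkeeping by tracking the logarithms of the probabilities directly; here is a sketch with the numbers quoted above:

```python
# All quantities in Planck units. We track log(probability) because
# exp(-10^30) underflows any floating-point representation.

T = 1e-60         # de Sitter temperature (horizon radius ~ 10^60 Planck lengths)
E = 1e-30         # energy of one "electroweak" particle, as quoted above
N = 1e27          # number of particles in a brain (~ Avogadro's number)

log_p_one = -E / T            # Boltzmann factor exp(-E/T): the log is -10^30
log_p_brain = N * log_p_one   # independent events: the logs add up to -10^57

print(f"log P(one particle) = {log_p_one:.3g}")    # -1e+30
print(f"log P(whole brain)  = {log_p_brain:.3g}")  # -1e+57
```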

This probability may look ludicrously small and it indeed is. But the huge volume of spacetime is still sufficient to beat this tiny number. I haven't told you, but the volume of spacetime shouldn't be thought of as an infinite number: instead, you should think that the maximum age of the Universe is the Poincaré recurrence time, so that the relevant spacetime 4-volume is about exp(10^{120}) in Planck units, where 10^{120} is the estimated maximum entropy of our Universe (given by the area of the de Sitter horizon, which goes like the radius, 10^{60}, squared): the actual entropy of the "interior" of our Universe today is only 10^{100} or so. After this very long time, exp(10^{120}), the Universe tends to "repeat itself", if we simplify the situation a bit. You shouldn't believe that longer time intervals are "independent".

So if you multiply the "nearly infinite" spacetime volume, comparable to exp(10^{120}), by the tiny probability density to create a Boltzmann brain, exp(-10^{57}), you still get something like exp(10^{120}-10^{57}) which is still essentially exp(10^{120}). The volume wins. It is no coincidence that the number 57 was smaller than the number 120: whenever the "brain" fits into the Universe, your exponents will be sorted in the same way. So does it mean that even a simple de Sitter space with an infinite future predicts that "you" should be a freak observer inside an incoherent, chaotic Universe? Can this simple and universal counting falsify an ordinary de Sitter cosmology?
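Before we answer, note that the arithmetic of the previous paragraph indeed checks out; the same log-space bookkeeping shows the volume dominating:

```python
# Exponents (base e, Planck units) of the two competing numbers:
log_volume = 1e120    # Poincare recurrence 4-volume ~ exp(10^120)
log_p_brain = -1e57   # probability density for one Boltzmann brain

# Expected number of Boltzmann brains ~ volume * probability density:
log_count = log_volume + log_p_brain   # 10^120 - 10^57

print(f"{log_count:.6g}")   # 1e+120: the 10^57 correction is invisible
```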

The answer is, of course, "No". It was the probability density for the creation of a brain itself, around exp(-10^{57}), that determined the "importance" of the effect of a spontaneous brain creation. If we multiply this small number by a huge volume, we obtain a huge result that has no physical relevance for any place in the Universe in the present, past, or future (because the factor of the "spacetime volume" transformed the quantity into some "statistics of events in the whole spacetime", i.e. a global quantity which can't possibly influence any observable phenomena in a region of spacetime, by locality and causality).

Nevertheless, many people say "Yes, the huge number of Boltzmann brains in the whole spacetime volume rules out even an ordinary de Sitter cosmology (and many others)". Their incorrect "Yes" answer is partly built on another:

Myth: physical initial conditions are defined in terms of brains

To be specific, let us consider a looming collision of two planets. We know that they're going to collide and our task is to predict how many big pieces will be created.

The anthropic people think that the initial conditions of this physics problem involve a brain that was just informed about the coordinates of the colliding planets and physics should be able to predict what the mouth, connected to the brain, will say after new observations following the collision that update the brain's knowledge. ;-) So they formulate the problem in the following way:
Input: Brain, connected to eyes, sees two planets
Desired result: What will it see after the collision?
But that's actually not quite the same problem as physics is ready to solve. It may sound counterintuitive to the anthropic people ;-) but the problem above belongs to psychology, not physics. Physics defines the initial conditions and the questions in a different way:
Input: Two planets actually exist at some points of space
Desired result: What will their material do after the collision?
Now, I will assume that the reader believes me that it is the latter, objective formulation of the problem that physics would actually like to answer. The former, subjective formulation can perhaps be a good proxy: it is a good reformulation of the physical problem as long as the two are equivalent. But are they really equivalent?

Well, the answer obviously depends on the quality of the instruments that measure the state of the planets before and after the collision, including telescopes, eyes, preprint servers that collect the data, and brains. ;-) If at least one of these players fails to faithfully reproduce the reality and transmit the real data, physics of planets will have (almost) nothing to say about the perceptions of a particular brain.

If the telescope is actually directed at a science-fiction movie on TV, if the experimenter is drunk, if the preprint server is hacked, or if one of these important people who transmit the data from the telescope to the theoretical physicist is an imbecile, which is often the case, the theoretical physicist will clearly be unable to make good predictions about the future observations extracted from the telescope. ;-)

Even if someone defines the "psychological" physical problem insufficiently accurately - for example, he is only careful to define the initial state of some neurons (or atoms) but not others - the result is the same as it is for a lens (in the telescope) with too many cracks or a microprocessor (owned by the preprint server) with too many burned transistors: the link between the actual physical phenomena (involving the planets) on one side and the perceptions of the brains (that try to observe the phenomena) gets disconnected. Physics of planets will have nothing to say about the brains. The correct predictions of the "brain problem" will depend on the precise character of the cracks, unknown or missing neurons, or burned transistors.

Now, I am convinced that when the situation is described in these clear terms, every sane person will agree with me. Nevertheless, the anthropic people often seem to implicitly disagree. I guess that they feel uncomfortable about the fact that physics talks directly about the planets even though the perceptions of our brains are the only "real" physical phenomena that actually inform us about the initial and final conditions of physical systems.

Well, I agree with that. Even though we usually imagine that the real world "objectively" exists, we always learn the "objective" information about it in some indirect ways that include the brain cells. But as my presentation indicates, the situation of a Boltzmann brain is pretty much equivalent to the situation with a fake telescope or a very drunk astronomer (for pedagogical reasons, I avoided drugs in this explanation). There's nothing really mysterious about it. But we could still ask:

Why are most of the brains we observe working well?

In other words, why are they normal brains? The first obvious answer is that they are not. ;-) Fine. But it still seems that the percentage of brains that work pretty well is much higher than the tiny percentage in the hypothetical ensemble dominated by Boltzmann brains. Is it a paradox or an observation that deserves a nontrivial dynamical explanation?

It's certainly not a paradox for me. First of all, it's not shocking that all observed brains are normal if at least one of them is. This proposition largely follows from the definition of a normal brain. A normal brain XY is a brain found inside a Universe where large entropy decreases don't appear more often than predicted by the second law of thermodynamics, including the microscopic fluctuations, and where the evolution largely follows the known dynamical laws.

If a brain XY has a Boltzmann brain somewhere in its vicinity - for example, on the same planet - it proves XY cannot be a normal brain either. This conclusion follows from the definition of a normal brain in the previous paragraph. A single Boltzmann brain is enough to prove that the whole setup and the whole planet is a "fraud". Whether a brain is a normal brain or a Boltzmann brain depends on the environment.

So if someone is trying to pretend that the Boltzmann brain paradox becomes more serious whenever we observe another brain, i.e. when the number of normal brains is very large, he is incorrectly assuming that the Boltzmann vs normal characters of two or many brains are independent questions. They're not independent at all. Either the whole planet is a fair evolutionary setup with normal brains only, or the whole planet is a fraud and all of its brains should be called Boltzmann brains.

Because of this simple reason, it is enough to understand why at least one brain on Earth is a normal brain. For me, it's most natural to consider my own brain. I recommend that you use yours if you want to reproduce the deduction below. ;-) Of course, I will find out that my brain is almost certainly a normal brain, which is also why I can trust that it reproduces the data about planets (and other objects) well. Let us show both methods to determine whether my brain is a Boltzmann brain: the sane (a.k.a. scientific) method and the anthropic method.

Is my brain normal? Anthropic method

According to this method, my brain must be a generic brain in the ensemble of objects that look like brains in our de Sitter spacetime (assuming that this is the right cosmology). How many conditions a piece of material has to satisfy in order to be called "my brain" (especially how large a region around the brain must coincide) is never determined but despite this cutoff dependence, the anthropic people believe that the number of Boltzmann brains is physically relevant, anyway. (It's not!)

There are about exp(10^{120}) such brains - within one Poincaré recurrence 4-volume - and virtually all of them are Boltzmann brains. Because there is a full democracy between these brains, the anthropic people think, those Boltzmann brains win. So I must be a Boltzmann brain, too.

Note that this approach is politically correct because each piece of organic junk somewhere in the future chaotic dumping ground of noise is on par with a human being only because it looks similar. These pieces of junk don't have to thermalize, fight, or win the support of the Pentagon or the Kremlin. They're there so they immediately and eternally have all the rights, and if you say that they're just pieces of organic junk, you are being politically incorrect and, according to the anthropic "science", you are even wrong.

Well, that was funny but let us now use a normal brain, and not a freaky one, to determine whether my brain is normal or freaky. ;-)

Is my brain normal? Sane method

First, we must understand what the question actually means. What does it mean for the brain to be normal? It is a question about the whole Universe: does the Universe surrounding my brain satisfy the laws of physics, including the second law of thermodynamics at macroscopic scales?

More precisely, we ask whether it has satisfied them in the past. Why? Because in the future, unless we make insanely unlikely assumptions about the present, the second law of thermodynamics will automatically hold. So we are really asking:
Is/was the history of the Universe surrounding my brain compatible with the (macroscopic) second law of thermodynamics?
This is the only way one can refine the original question so that it is well-defined.

At this moment, it is crucial to notice that this is a question that assumes - and must assume - something about the present (the existence of a brain at t=0) and that wants to deduce something about the past (e.g. validity of the second law). So it is a textbook example of a retrodiction. In quantum mechanics, the future probabilities can be calculated "directly" from the squared complex amplitudes but, as has been explained many times on this blog, e.g. in the article called Bayesian inference, retrodicted probabilities can never be equally canonical or unique and they always depend on priors.

In this particular case, I am trying to decide whether my brain is normal or a Boltzmann brain, as clarified above. These are two hypotheses. At the beginning, I must choose some prior probabilities for these hypotheses. A fair and rational treatment always gives a nonzero chance to every qualitatively distinct possibility. So I choose the priors to be 50% for a normal brain, 50% for a Boltzmann brain.

Now, I can collect some evidence: the data can be used for logical inference, to refine the probabilities that the two competing hypotheses are correct. So I observe a few macroscopic phenomena in the world around me. For example, I observed two eggs getting broken but no egg getting unbroken. The probability that this is what happens with the eggs according to the normal brain hypothesis is essentially 100%. The probability that such a thing occurs in a Universe near equilibrium, without a well-defined arrow of time, i.e. in a Universe where every change of the entropy is just a fluctuation, is something like exp(-EntropyIncrease) which is really tiny, something like exp(-10^{27}). Most likely, different pieces of the eggs should have increased or decreased their entropy pretty much randomly.

So the evidence has brutally disfavored the Boltzmann brain hypothesis. It is obvious that we can easily perform a few macroscopic measurements that can make the posterior probability of the Boltzmann brain hypothesis pretty much as low as we want. The only alternative hypothesis, the normal brain hypothesis, is thus proven beyond doubt. Instead of eggs, it may be more conceptual to observe the standard evidence of evolution and the Big Bang cosmology: this standard evidence implies that it is extraordinarily likely that our brains are results of evolution in a normal Universe with an increasing entropy, i.e. that they are not Boltzmann brains.
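To make the inference completely explicit, here is a minimal Bayesian sketch; the 50:50 priors and the exp(-10^{27}) likelihood are the numbers from the text above:

```python
import math

# Two hypotheses with the flat priors chosen above:
log_prior_normal = math.log(0.5)
log_prior_boltzmann = math.log(0.5)

# Likelihood of the observation (eggs broke, none unbroke) under each one:
log_like_normal = math.log(1.0)   # essentially certain for a normal brain
log_like_boltzmann = -1e27        # exp(-EntropyIncrease) near equilibrium

# Bayes: posterior odds = prior odds * likelihood ratio (sums in log space)
log_posterior_odds = ((log_prior_normal + log_like_normal)
                      - (log_prior_boltzmann + log_like_boltzmann))

print(f"log posterior odds (normal : Boltzmann) = {log_posterior_odds:.3g}")
# ~ 1e+27: a single macroscopic observation settles the question
```

A couple of broken eggs shift the odds by a factor of exp(10^{27}) in favor of the normal brain hypothesis.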

If someone obtains the opposite answer, namely that our brains should almost certainly be Boltzmann brains, he is clearly making a mistake because the scientific evaluation of the available evidence and of the existing theories can't lead to two profoundly different figures for the probability. For example, if Roger Penrose argues that the Big Bang couldn't have been at the beginning because low-entropy states are always unlikely, the evidence simply disagrees with his theory. He may impress the Hard Talk host with huge numbers like exp(10^{120}) but if he uses the numbers incorrectly, the value of his argument is zero, anyway. In a children's game where you win if you say the largest number, he could win. But in science, he can't.

My setup is self-consistent, consistent with data, compatible with all the conventional (as well as slightly less conventional but conceivable) cosmological models, and pretty much complete. It implies that the Big Bang expansion and the evolution of species almost certainly did take place; so it means that every derivation that concludes that the Big Bang expansion and evolution didn't occur and that our brains are Boltzmann brains has to be wrong.

Indeed, my conclusion means that our brains are very far from being "generic" in the (almost infinite) set of all physical objects that look like our brains in our de Sitter spacetime. But this "non-genericity" can be easily shown - and has been easily shown - to hold: one macroscopic observation is enough to settle the question. There is nothing wrong with this non-genericity. The same evidence that supports evolution and the Big Bang theory also falsifies all theories and all "philosophies" that either assume or imply that our brains should be generic in the whole de Sitter spacetime. The genericity hypothesis is simply falsified by extremely robust and universal arguments.

If you don't like that I needed an observation (to see an increasing entropy of eggs or some evidence of evolution or the Big Bang expansion), let me repeat that it is impossible to "predict" the probability that my brain is a normal brain without making any observations, simply because the "prediction" of the Boltzmann/normal character of our brains is actually a retrodiction, and retrodictions always heavily depend on priors (assumptions about the past); and there are no canonical, universal priors in Nature.

One always needs to make some assumptions about the history which was surely affected by some "random" events whose existence is implied by the probabilistic nature of quantum mechanics. Some conclusions (some retrodictions) heavily depend on the history and the priors, others don't, depending on the context. But experimental data are always needed to reduce this dependence.

In this logical framework of inference, which is the only rationally justifiable way to make similar retrodictions about the Universe around us, it was trivial to show that my brain, and therefore all brains on Earth, are almost certainly (mostly) :-) normal, non-Boltzmann brains. This conclusion didn't depend on any detailed features of the cosmological model that governs our Universe.

So I have proven that my brain is "normal" in all cosmological models we have considered. It follows that any argument that allows one to rule out a cosmological model by considerations involving the Boltzmann brain paradox has to be wrong: by the simple but careful argument above, I have falsified all such theoretical frameworks. In other words, if you combine my general physics considerations and facts with an assumption that implies that our brains are almost certainly Boltzmann brains in at least one conceivable cosmological model, you obtain an inconsistent system of axioms. So you had better be careful not to add anything that creates such an inconsistency. The culprit is, of course, the anthropic reasoning itself.

We may be missing some extra "laws" or "axioms" about the initial conditions, vacuum selection, and similar stuff but if you try to add "genericity" or some related anthropic axioms to the dynamical laws we (roughly) know, you are just adding "too much" or a "wrong thing". The combination is inconsistent.

The anthropic methods to deduce that a theory predicts that we should be actually Boltzmann brains, given certain cosmological data, are simply irrational and wrong. They reject basic rules of logical inference and replace them with wrong, unjustified, and unjustifiable proportionality laws between quantities that are not proportional in any sense, namely the probability that "we are something" and "the number of somethings in the spacetime". These incorrect proportionality laws are the infamous "genericity assumptions".

I believe that this "selection fallacy", assuming that objects must be "generic" in some randomly chosen class of similar objects (that share some properties but not necessarily others), is ideologically driven. Those people believe that everyone is "equal", where "everyone" depends on the current fashionable conventions, and this "equality" should even influence the probabilities that different people have different properties. The Boltzmann brains are able to get into their ensembles so they are equal, too: it would be a form of racism to think that Boltzmann brains are just irrelevant pieces of organic junk somewhere in an irrelevant, extremely distant future, wouldn't it?

But such an equality between different elements of a set only holds physically in extremely rare situations, namely when there is a mechanism that "enforces" the equality of the whole class of objects that share some particular properties. For example, different microstates of a gas are (almost) equally likely because of thermalization - that needs all the degrees of freedom of a closed system to interact with each other for a sufficiently long time. Thermalization chaotically probes all possible microstates - or places in the phase space - so all of them are "equally likely", after some time (as the ergodic hypothesis explains).

Analogously, citizens of a democratic country may have "equal rights" in many respects if there are police, army, or courts that enforce such an equality. But if there is no mechanism that enforces or otherwise guarantees the "equality", the equality simply doesn't exist physically (and such an equality often contradicts some of the real laws of Nature that can be determined otherwise). If you imagine that people have God-given human rights or God-given equality with all other people, you are free to imagine that but your imagination will have no physical consequences because it has nothing to do with physical reality, at least until you take over all armies in the world. Until you do so, cannibals can still easily eat you in Oceania. If we knew trillions of aliens, you wouldn't even be sure who can be counted as a human being. These questions clearly depend on social conventions and they can therefore have no immediate impact on physical phenomena or correct physics calculations.

In this most extreme case of Boltzmann brains, a piece of organic junk can emerge in the year exp(10^{120}) somewhere in the middle of de Sitter chaos. This piece of junk can look like my brain and there can be googolplexes of such brains but their existence will influence neither my rights at the present time nor rational calculations of physical phenomena that involve me. In fact, both of these influences would be not only unjustified but even acausal. Everyone who assumes otherwise is deluded and the probabilities in his papers are wrong by a factor of exp(10^{120}) or so which is a pretty bad error: an exponentially worse error than one that leads to the cosmological constant problem that some people wanted to be solved. ;-)

And that's the memo.
