**...but no saviors are needed: their irrational Boltzmann Brain alarmism misunderstands what a hypothesis includes...**

On his blog, the Preposterous Universe, Sean Carroll promoted a paper by himself and Kimberly Boddy.

External discussion: Jacques Distler will write a few critical sentences about the Boddy-Carroll paper tomorrow. I completely agree with Distler – he will reproduce some ideas from the text below (and others). It's not possible for hypothetical future events to influence the present; and there is no particular framework of probability theory into which sentences of the kind "we're likely Boltzmann Brains" may be justifiably embedded. Each of these two bugs is enough to identify the paper as crap and the authors as nuts.

The Higgs Boson vs. Boltzmann Brains (his blog)

Last week, I was giving a popular physics talk in a planetarium in Northern Bohemia. It clearly turned out to be too complicated for the bulk of the audience (philosophers are sometimes annoying but a group of philosophers is more ready to listen to some almost-real physics than a selected 1/1,000 fraction of the general public in a medium-size town!) but we had some fun anyway.

Can the Higgs Boson Save Us From the Menace of the Boltzmann Brains? (arXiv)

One of the longest discussions was dedicated to the phase transition that may destroy the Universe; the Higgs field instability is the most ordinary example of such a scenario. In a "seed of doom", the Higgs field (or, more generally, another, usually scalar, field) may tunnel to a new, lower-energy state that is incompatible with life. This "seed of the new lifeless Universe" starts to expand at almost the speed of light and devours everything. You won't feel any pain because your nerves are slower than the inflating nothingness.

I wanted to calm the public. The Universe won't collapse anytime soon. In the end, however, I just couldn't tell them anything other than the truth. And the truth is that, empirically, we only know that the approximate lifetime of the Universe after which the "seed of doom" starts to grow somewhere is unlikely to be much shorter than the current age of the Universe, 13.8 billion years. It may be comparable, it may be a bit shorter, but it may also be much longer or even infinite. If it is finite, it sounds sort of unlikely that it would be comparable to the current age of the Universe, which means that it's probably much longer. Don't worry. But there's really no "solid" argument that would prove that the Universe won't start to disappear in the next 1 billion years.

You may find the "Higgs decay" scenario frightening. The Universe may die long before the Sun runs out of fuel in 7.5 billion AD and goes red giant. What a waste! It may be tomorrow. We're not able to present any solid enough proof that it won't happen. However, Boddy and Carroll are scared of something else: that the Universe *won't* die soon. So they claim that the unstable Higgs field is our savior from the genuine threat: the Boltzmann Brains. This fear is utterly irrational because the Boltzmann Brains aren't endangering us. They aren't endangering physics, either. They won't ever appear on the Earth (much like Category 6 hurricanes which are nothing else than another proof that Al Gore is a liar without any scruples). There's no reason to sacrifice the world (or billions of dollars).

Similar explanations have repeatedly occurred on this blog but here we go again.

Boddy and Carroll – and others – are scared of a competing theory that may "explain" all the observations we have had. It's a theory that doesn't require the usual prehistory that has led to our life – including the Big Bang, the formation of the galaxies, the Solar System, and the lengthy path of evolution, not to mention many less fundamental parts of our life story.

Instead, one may say that there will be infinitely many opportunities in our soon-to-be-almost-empty de Sitter space for brains locally indistinguishable from ours to be created out of pure thermal fluctuations. Carroll and others believe that because the number of such "freak brains" is infinite (when integrated over the whole infinite future of the Universe), they are predicted to be more likely to be "us" than anything else (i.e. than the well-behaved brains that have evolved from the Big Bang and evolution).

But this just ain't the case.

If there is an infinite number of something, this infinite number doesn't mean that it "has to be us". For example, \(\pi\) has infinitely many digits in its decimal form but that doesn't mean that you are any of them. The probability that you are just a digit of \(\pi\) is zero, which means that not even the infinite number of these digits may force you to become a digit. Even if the probability that you were a given digit of \(\pi\) were nonzero, it might still decrease with the digit's position in \(\pi\) so quickly that the overall probability that you are any digit remains tiny.
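To see in one line why "infinitely many" doesn't imply "probably us", here is a minimal sketch (my own toy numbers, not anyone's actual measure): if the probability of being the \(n\)-th digit falls off geometrically, the total over all infinitely many digits converges to a tiny value.

```python
# Toy model (my own illustration): suppose the probability that "you"
# are the n-th digit of pi falls off geometrically, p_n = c * r**n with
# r < 1.  The total probability sum(p_n) then converges and can be made
# arbitrarily small, despite there being infinitely many digits.

def total_probability(c, r, terms=10_000):
    """Partial sum of the geometric series sum_{n=1..inf} c * r**n."""
    return sum(c * r**n for n in range(1, terms + 1))

c, r = 1e-6, 0.5
approx = total_probability(c, r)
exact = c * r / (1 - r)          # closed form of the infinite sum
print(approx, exact)             # both ~1e-6: tiny despite infinitely many terms
```

The closed form shows the sum is \(cr/(1-r)\), so an infinite number of "opportunities" contributes only a finite, and here minuscule, total probability.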

Let me hope that the previous paragraph sounds trivial to you but be sure that Sean Carroll and others don't understand this simple claim. They confuse the "number of some objects" with "their probability", which are *completely* different quantities (and, in a general situation, *completely* uncorrelated quantities) because the probability that "you are something" is in no way uniform over all these "somethings".

That's one way to describe the fundamental mistakes in his reasoning.

Another, closely related way to describe the fallacy is to point out that a hypothesis in science is something that explains our observations. To do so, it must not only make assumptions about "how the world works" but also about "where and when we are located or living" i.e. essentially "who we are" (I mean primarily who we are relatively to the rest of the world, not who we are internally and structurally).

Different assumptions about "where we are" and "when we are living" obviously lead to different predictions of what we should be seeing (the world looks different if you manage to live inside the Sun, inside the Moon, or millions of light-years from the nearest galaxy). So these assumptions distinguish different scientific hypotheses and they may be tested and falsified separately from each other. As a hardcore Marxist, Sean Carroll clearly wants to confirm or falsify all these different hypotheses simultaneously, as a collective, but science just can't work like that.

This simple point has been discussed using many different words on this blog. Several years ago, Hartle and Srednicki introduced the catchy term "xerographic distribution" to emphasize that the assumptions about our location within the spacetime incorporated in a theory are a part of the hypothesis that is being tested, i.e. validated or falsified.

Imagine that our Universe will converge to an empty de Sitter space – everything indicates it is so (the world is already dominated by the cosmological constant and every 11 billion years of the cosmic time or so, the linear distances will double, which means that the particle-based mass density of the Universe will decrease by a factor of 8 or so). This de Sitter space has a certain Poincaré recurrence time comparable to \(\exp(S_{dS})\,R_{dS}\) after which it has to repeat itself up to arbitrarily small errors. People have said many things about the question whether the repetitions should be viewed as independent episodes (the ER-EPR correspondence is surely another conceptual reason to think that these repeated stories should be thought of as being "in the same region of the spacetime", i.e. not independent).
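For a feeling of the numbers, here is a hedged back-of-envelope sketch (the Planck length and horizon radius below are my own assumed round figures): the de Sitter entropy is of order \(\pi (R/\ell_p)^2 \sim 10^{122}\), so the recurrence time is a double exponential that dwarfs every other timescale in the discussion.

```python
import math

# Back-of-envelope sketch; the input values are assumed round figures,
# not precise measurements.
l_p = 1.6e-35    # Planck length in meters
R_dS = 1.6e26    # de Sitter horizon radius in meters (~17 Gly, assumed)

# Horizon entropy in Planck units (k_B = 1): S ~ pi * (R / l_p)**2
S = math.pi * (R_dS / l_p) ** 2
print(f"S_dS ~ 10^{math.log10(S):.0f}")

# exp(S) overflows any float, so express the recurrence time
# t_rec ~ exp(S) * R_dS / c as a power of ten: exp(S) = 10**(S / ln 10).
log10_exp_S = S / math.log(10)
print(f"t_rec ~ 10^(10^{math.log10(log10_exp_S):.0f}) light-crossing times")
```

The point of the sketch is only the order of magnitude: a recurrence time of roughly \(10^{10^{122}}\) horizon light-crossing times.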

But I don't really think that such questions about the identification influence how science chooses the valid hypotheses.

Fine. The Universe will continue as a nearly empty de Sitter space which is still filled with thermal radiation at a temperature that is tiny (the typical thermal wavelength is comparable to the curvature radius of the de Sitter space) but nonzero. And because it's nonzero, every state of matter has a nonzero probability and, given infinitely many opportunities to be realized, it will be realized. In particular, freaky Boltzmann Brains that perceive the same things as we do even though they haven't evolved through the nice scientific big-bang-plus-evolution path are guaranteed to appear at some very distant moment in the future.
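For concreteness, a hedged order-of-magnitude estimate of that temperature (the constants and the horizon radius below are my assumed round values): the Gibbons-Hawking temperature of a de Sitter horizon is \(T = \hbar c/(2\pi k_B R)\), which is absurdly tiny but nonzero.

```python
import math

# Hedged order-of-magnitude sketch; input values are assumed round figures.
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K
R_dS = 1.6e26      # de Sitter horizon radius in meters (~17 Gly, assumed)

# Gibbons-Hawking temperature: T = hbar * c / (2 * pi * k_B * R)
T = hbar * c / (2 * math.pi * k_B * R_dS)
print(f"T_dS ~ {T:.1e} K")
```

A temperature of order \(10^{-30}\) K is the only input the "every fluctuation eventually happens" argument needs.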

But that doesn't mean that our best theories (assuming that the de Sitter space won't collapse) actually predict that we *are* the Boltzmann Brains. Even though the Boltzmann Brains will be repeated infinitely many times, science can say – and actually does say – that we don't belong to their transtemporal society. Again, the number of objects is a different thing than the probability that you are one of these things, stupid!

The usual physical theories with the Big Bang and an infinitely long-lived empty de Sitter space are compatible with our having evolved along the "almost straightforward history" involving the usual events after the Big Bang and evolution, among others, without any exponentially unlikely events. Why? Because these assumptions are a *part* of the standard physical theory combining cosmology and particle physics! Like most good theories in science, the standard cosmology says that life has evolved without depending on some super-unlikely fluctuations or events. As a good scientific theory, our Big Bang cosmology explicitly says that we are *not* Boltzmann Brains. This claim isn't incompatible with any other assumption of the theory, just like the claim that you are not a digit of \(\pi\) (even though there are infinitely many such digits) is not incompatible with the biology of mammals.

So the standard cosmological theory is a *different* theory than any theory that claims that we are Boltzmann Brains. They are totally incompatible with each other because the standard cosmological theory says that everything we see is a result of a nearly inevitable evolution that was picking the most likely outcomes almost all the time – and that depended on no "super-unlikely" events or fluctuations.

Different hypotheses may be compared with each other. You may compare the standard cosmological theory with the Boltzmann Brain hypothesis of any kind. Needless to say, the standard cosmological theory wins because the Boltzmann Brain hypothesis predicts that whenever you look a bit further than before, you should almost certainly see a disorder that will betray the fact that your brain and its vicinity are just a giant thermal fluctuation. (By the Boltzmann Brain hypothesis, I mean the hypothesis that our brains/civilization etc. appeared from a thermal fluctuation that only began to resemble the usual evolution at times much shorter than the usual age of the Universe or in a region much smaller than the usual size of the visible Universe but is truly thermal elsewhere; if you include large fluctuations that have evolved "ordinarily" in the whole visible Universe for 13.8 billion years, then such a "generalized Boltzmann Brain" hypothesis isn't falsified and may in fact be a good description or philosophical incarnation of our observations.)

The standard cosmological theory predicts that the next galaxy you are going to observe with your next-generation telescopes will be similar to those you already know. And of course, the standard cosmological theory's predictions are pretty much right while the totally different predictions of the Boltzmann Brain hypothesis are falsified.

A scientific hypothesis working with the assumption that we are Boltzmann Brains is empirically falsified – by totally elementary observations, in fact. Simple observations (combined with simple logic) are the easiest ways to falsify a hypothesis in science. But it shouldn't be shocking that one needs at least some empirical data to falsify a hypothesis. That's how science has always worked. Science chooses the right and wrong hypotheses by looking at the empirical data. There's no reason to be ashamed of this fact. It is true and it has to be true, otherwise it wouldn't be science.

So we don't need to assume that our Universe will die in a few billion years if we want to protect our physical theories from the Boltzmann Brains' being us. The empirical evidence is overwhelming that we are *not* Boltzmann Brains. Because we know that we're not Boltzmann Brains, we may immediately eliminate every hypothesis, or part of a hypothesis, that would force us to believe that we *are* Boltzmann Brains. It's that simple. That's why we just don't have to be afraid of "being" Boltzmann Brains or postulate some "liberating doomsday" to protect the good feelings about our identity against some crazy ideas.

In the end, I really think that Carroll's totally wrong reasoning is tightly linked to an ideology that blinds his eyes. As a hardcore leftist (or at least a person pretending to be one in order to improve his social status in a hard left-wing environment), he believes in various forms of egalitarianism. Every "object" has the same probability. Also, much like climate "scientists" (and I am only talking about "scientists" in the quotation marks here, not about genuine scientists), he wants to "collectively test" (and "collectively trust") models (e.g. climate models). But none of these things is scientifically true. Objects, people, and their categories are created unequal; probabilities aren't proportional to the numbers of objects in any reasonable sense; and hypotheses must be validated individually, not "in collectives", because at most one of the inequivalent theories or models may be right at the very end. It's just wrong to lump a right theory together with the wrong ones because the very purpose of science is to disentangle the right ones from the wrong ones.

Let me offer you an analogy that should hopefully clarify why Boddy's and Carroll's way of thinking is totally silly.

Imagine that we discover a stone that looks like a display and it displays one decimal digit every hour. Such a stone looks like a result of Intelligent Design but it doesn't matter whether it's man-made, UFO-made (OK, I meant ET-made), or natural. Assume it's natural but your task is to predict what the object will do. Once people begin to watch the digits and remember them, they record the following sequence:

4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, ...

It looks like a random sequence of digits. However, someone realizes that they look like the digits (starting from the third one) of \(\pi\):\[

\pi\approx 3.1415926535897932384626\dots

\] This person will predict that the next digits will be 3, 2, 3, 8 and the prediction is confirmed. It's great. Note that by now, 17 hours after the records began, people have recorded 17 digits from the stone.

But someone will start to claim that there is no reason why the digits should be taken from the beginning of \(\pi\). The same sequence of 17 digits appears roughly once in a sequence of \(10^{17}\) digits of \(\pi\) and because \(\pi\) has infinitely many digits, the same 17-digit sequence is bound to appear infinitely many times somewhere.
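The "roughly once in \(10^{17}\) digits" estimate can be checked on a small scale with a toy simulation (my own illustrative sketch, not anything from the post): in a stream of uniformly random digits, a fixed \(k\)-digit pattern first shows up, on average, after on the order of \(10^k\) digits.

```python
import random

def first_occurrence(pattern, rng):
    """Index at which `pattern` first appears in a stream of random digits."""
    window = ""
    i = 0
    while True:
        # extend the stream one random digit at a time, keeping a sliding window
        window = (window + str(rng.randrange(10)))[-len(pattern):]
        i += 1
        if window == pattern:
            return i - len(pattern)

rng = random.Random(0)
patterns = {2: "12", 3: "123", 4: "1234"}   # non-self-overlapping patterns
for k, pat in patterns.items():
    mean = sum(first_occurrence(pat, rng) for _ in range(200)) / 200
    print(k, round(mean))   # mean waiting position grows like 10**k
```

Extrapolating the same \(10^k\) scaling to a 17-digit pattern gives the \(10^{17}\) figure quoted above.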

In fact, someone else will change the statement and say that they will appear somewhere in\[

e\approx 2.718281828459045235360\dots\qquad\\

\qquad \dots 28747135266249775724709369995\dots

\] or somewhere in its powers \(e^n\) where \(n\) is a nonzero integer. Because there are infinitely many numbers of the form \(e^n\) and just one number \(\pi\), someone else may even claim that it's more likely that the stone emits random digits from a number of the form \(e^n\) and not from \(\pi\).

Needless to say, such a claim is unjustified because there's no reason why the \(e^n\)-based explanation should be "equally likely" as an explanation based on \(\pi\). And indeed, the empirical evidence will keep on accumulating (new digits are coming every hour!) that the \(\pi\)-based explanation is the right one while the other hypotheses are just wrong.

The guy or babe who invented the \(\pi\) theory of the stone used \(\pi\), not \(e^n\), and he or she did claim that the digits are taken almost from the beginning of \(\pi\), too. He or she isn't "obliged" to consider some faraway sequences in \(\pi\) (or even in other numbers) to be "equally justified" predictions of his or her theory because the theory *includes* the statement that the digits are taken almost from the beginning. The place in \(\pi\) from which the digits are being taken isn't "obliged" to be "typical" – on the contrary, it's a point of the explanation that it is a very special place, the beginning. So every inequivalent statement is a competing hypothesis and it will finally lose. Overly specific theories based on specific enough locations in other numbers will be strictly falsified; theories explicitly or effectively claiming that the digits are random will be "fuzzily" (but increasingly robustly) falsified because they predict that (very/extremely) long \(\pi\)-like patterns are (very/extremely) unlikely. But such patterns are being observed, which makes these random explanations increasingly falsified.
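The "fuzzy but increasingly robust" falsification can be phrased as a small Bayesian sketch (the prior below is my own toy number, not anything from the text): the "pi from the beginning" theory predicts each next digit with certainty while a "random digits" theory assigns it probability 1/10, so every confirmed digit multiplies the odds in favor of the \(\pi\) theory by 10.

```python
def posterior_odds(prior_odds, matching_digits):
    """Odds (pi-theory : random-theory) after `matching_digits` confirmations.

    Each confirmed digit has likelihood 1 under the pi theory and 1/10
    under the random theory, so the Bayes factor is 10 per digit.
    """
    return prior_odds * 10 ** matching_digits

prior = 1e-6   # assumed toy prior: the pi theory starts out very unlikely
print(posterior_odds(prior, 17))   # ~1e11: overwhelming after 17 digits
```

Even a prior that disfavors the \(\pi\) theory by a factor of a million is swamped after 17 matching digits, which is the sense in which the random-digit hypotheses get "increasingly falsified".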

In this analogy, Boddy's and Carroll's claim about the "desirable" Higgs decay of the Universe is analogous to the statement that \(e\) is a rational number. If the digits of \(e\) start to repeat, then the same is true for \(e^n\) as well and the numbers \(e^n\) won't have the sufficient infinite diversity of digit sequences to match the observed digits emitted by the stone. In this way, the analogues of Boddy and Carroll would argue, the \(\pi\) theory is protected against the Boltzmann Brain explanation – the explanation assuming that the digits are being taken from a random faraway place in one of the numbers \(e^n\).

Indeed, academically speaking, the Boltzmann-Brain-like \(e^n\) theory of the stone could be falsified in this way: if \(e\) were rational, it just couldn't generate an aperiodic sequence of digits. But what Boddy and Carroll (and others) don't understand is that it is not *necessary* for \(e\) to be a rational number if we want to scientifically establish that the digits are actually being taken from \(\pi\). The empirical evidence is enough. And indeed, \(e^n\) for \(n\neq 0\) aren't rational numbers, which means that the strategy to disprove the \(e^n\) theory of the stone is hopeless because it depends on propositions that are wrong. Meanwhile, the evidence from the stone keeps on arriving, confirming the \(\pi\) theory while falsifying any other simple enough hypothesis.

Also, I want to mention that we may or may not know a reason why the stone prefers \(\pi\) over \(e^n\). But even if we don't know any such deeper reason, it doesn't mean that the \(\pi\) and \(e^n\) explanations are equally likely. Instead, the empirical data heavily break this symmetry and imply that \(\pi\) is vastly preferred. That observation really means that deeper theories about the inner workings of the stone are either "encouraged" or "totally required" to prefer \(\pi\) over \(e^n\). You may believe that \(\pi\) and \(e^n\) are equally good for the stone but, much like any belief in science that has observable consequences, your belief may be proved wrong – and indeed, it is proved wrong in this case, too. There's nothing "holy" or "infallible" about egalitarianism; in fact, it's one of the crappiest ideologies around.

We know that we aren't Boltzmann Brains and we don't need to assume a "doomsday scenario" – a Higgs vacuum decay or any other doomsday scenario one could talk about (the very fact that Boddy and Carroll single out the Higgs vacuum decay is a piece of demagogy, or a hint that they're unable to localize the actual reasons that lead to certain conclusions) – to be sure that we aren't Boltzmann Brains, because the observations we have already made are enough to be absolutely sure. The state-of-the-art scientific theories claim that we are the result of a nearly inevitable evolution involving a very dense and hot Universe after the Big Bang, structure formation, and the evolution of species. These scientific theories explicitly state that we are not random thermal fluctuations, which would have to be super-exponentially unlikely.

Someone may think he has reasons to believe that we should be Boltzmann Brains, or that it should be likely that we are. But "where we are" represents a part of every scientific hypothesis – the xerographic distribution – that needs to be tested much like any other part of a scientific theory. The tests are very easy, were effectively done long before we became Homo sapiens, and the result is that the Boltzmann Brain xerographic distribution is safely falsified. So why do people keep on talking about it? It's as safely falsified a scientific hypothesis as any other falsified scientific hypothesis. In fact, more so. It's one of the key principles of the scientific method that we gradually cease to discuss scientific hypotheses and paradigms that have been falsified.

So Boddies and Carrolls of the world, please stop emitting this crap and attempting to raise the stakes by incorporating ever more irrational and ever more megalomaniac "requirements" concerning a doomsday. No doomsday is necessary for the science we have learned to keep working.

And that's the memo.

## snail feedback (38):

"feeding the dragon":

http://prd.aps.org/abstract/PRD/v86/i2/e024016

spot the mistake... :)

That paper is complete rubbish, too. It's still about the same misunderstanding of the absence of measurement inside the quantum evolution.

Path integrals in no way "impose" the quantum Zeno effect on us. One may calculate the probability amplitude for a particle going from point A to region C while visiting region B at an intermediate moment "b". But this probability amplitude only becomes physically meaningful if the location at the moment "b" is actually measured – and then the experiment gets modified.

It's nothing else than the usual discussion of the double slit experiment. Some trajectories go through the left slit, some trajectories go through the right slit, but if there's actually no measurement taking place near the slits, the amplitudes from the trajectories through *both slits* have to be summed before we square the absolute value of the amplitude to calculate the probability. If someone is doing something else, he is just making a fundamental mistake that results from his lousy understanding of path integrals and quantum mechanics in general. It is in no way a bug of the path integrals.

These two heavily confused authors talk about "restrictions on path integrals" but it's the very basic fact about the paths in Feynman's approach to quantum mechanics that there can't be any restriction at all, except for the initial state and the final state, which must correspond to actual measurements! Only these complete sums over all trajectories produce probability amplitudes whose absolute values' squares have a direct probabilistic interpretation. Everyone who is trying to restrict the path integral in any other way and offer direct interpretations for such a restricted sum is badly violating the rules of quantum mechanics.

Wouldn't the multiverse ruin Carroll's reasoning completely? If there is an arbitrary number of different vacua that could have Boltzmann Brains, the lifetime of this particular one should be irrelevant.

Right, a good point. This really shows that this reasoning is inconsistent with the basic data whatever you do.

But Carroll would object that he is free to assign the thermal fluctuations "somewhere" with the human rights while the fluctuations in other universes don't enjoy this status. But this discrimination is already against the basic rule he uses that thermal fluctuations should be counted as "equally likely to be us". If it only holds for some thermal fluctuations, e.g. those in this part of the multiverse, then we already discriminate and once we discriminate, it's a good moment to admit that we may discriminate according to any other properties, including - and especially - those that distinguish the objects in our vacuum.

OK, Lubos, but if Boltzmann brains are going to appear, obviously they'll suffer, so we should spend trillions of dollars to find a way to bring an early end to the universe. :-)

"For example, π has infinitely many digits in its decimal form but that doesn't mean that you are either of them."

You mean "any of them."

Thanks, uncritically fixed to please your language sensitivity.

Exactly but this discovery was made by Carroll before you. ;-)

It's always fun to ponder when and how our universe enters an era of uninhabitability :) What a waste of time! You should think about that AFTER you have thought about how to prevent the destruction of Earth in the near future (like 10 years or so). We are about to discover ways to utilize antimatter. With that know-how we have the seeds of the destruction of Earth. Immediate actions should be taken in order to prevent the worst. I really mean it.

If I were using my language sensitivity, I'd post more often. I actually spent long moments looking for two things for "you" to be either of.

"Like most good theories in science, the standard cosmology says that life has evolved without a dependence on some super-unlikely fluctuations or events."

To think otherwise is tantamount to young earth creationism, unless I am mistaken.

Boddy and Carroll's idea sounds like a secular version of Last Thursdayism! LOL!

To me something really stinks (I suspect it does because of 'a deep rot of reasoning' caused by a no less deep and corresponding mathematical mismatch with Reality) about the notion of B's Brains.

If 'amplified' to a notion of "B's ecosystems" it (this stench) becomes extra obvious.

Am not saying the 'multiverse notion' is not muddy only that in comparison to "B's Bs" it seems to be one that is much more realistic and scientifically (by string/M theoretical reasoning) motivated (or justifiable).

LOL except that they're on the opposite side. They have a program to execute all Last Thursdayist heretics who claim that the world was created last Thursday - by detonating the whole Earth quickly enough.

Right, albeit there are differences, too. What's common is that they think that it doesn't matter how unlikely miracles are needed for an explanation to explain the observed data.

Dear Kimmo, I am ready to insure you against the destruction of Earth in the next 10 years. ;-)

Very clever Lubos but why only 10 years? Whatever happens he is not going to get his insurance, is he?

Actually, I don't regard this BB idea scary at all. We are part of an intelligent universe - after all, we think about the universe and in this way we are a part of the universe thinking about itself. Likely, there are millions of planets with intelligent lifeforms doing the same, and so far we believed that intelligence was restricted to certain environmental conditions. The BB-idea teaches us that intelligence will prevail even after all planets have disappeared. It just means: Intelligence has to be a universal ingredient of every (sufficiently large) system that has a finite temperature.

Some of these Boltzmann brains may be far smarter than us. Perhaps, they would even be able to find a technology to overcome their status as a random fluctuation and create a stable environment for themselves to live in. Perhaps, this IS the cause of the big bang :-)

Cheers,

Holger

You might have a lucrative business opportunity here :D

sorry I just saw this paper 3 days ago and I was shocked. I trusted Physical Review would be more careful about what it publishes... one doesn't need much care to sort out this kind of nonsense so I am wondering what the average physics competence of a referee at PR or PRL is?

LOL, calm down. It's called "peer review", not "god review", because it's just looked at by someone who is coming from the same environment and degree of reliability as the authors themselves. So if the authors of a bullshit paper encounter someone who is writing comparably bad papers, and chances are that similar people are getting their siblings' papers to review, chances are that the paper will pass.

I believe that PRL is still more prestigious or reliable than PR.

Just came to my mind: You may skip that brain stuff altogether: If there are these fluctuations, then some of them may as well reproduce the conditions of the big bang, generating an inflating bubble and becoming a new universe. In this sense, every de Sitter universe would eventually turn cyclic and repeating itself. I am sure it would be possible to estimate the average lifetime of such a cycle.

Cheers,

Holger

I agree. It seems that they do not understand that the measure of a finite sequence in an uncountable set is undefined. We can set it to any value we wish with some fancy anthropic arguments.

It is always a relief to know that I am a patron of the most correct blog.

I have no time to read these papers, either Carroll's or any of the ones he refers to, so I can't decide how seriously these people take this stuff. To me it sounds like a "reductio ad absurdum": the conclusion is obviously wrong, so the whole point is to find where the mistake lies.

I think Carroll and physicists like him would be well advised not to create the impression that the threat of "Boltzmann brains" is the kind of thing physicists spend their time worrying about. To the wider public, arguments of this kind will surely seem to belong to the same category as the medieval ones about angels dancing on pins, and publishing papers on this subject in serious physics journals will not improve prospects for obtaining more public money for physics research.

However, once we make it clear that we are only engaging in this scholastic exercise just for fun and to test our ability to reason logically even about patent nonsense, perhaps we will be forgiven if we continue this discussion a little longer ;-)

I completely agree that notion of “us being Boltzmann brains” has been exposed by Lubos as childish nonsense. But is there any physical and mathematical basis for the idea of a “Boltzmann brain” existing at all?

I have at least two problems with this. First of all, the idea of a "brain" does not seem to have ever been defined by anyone writing on this subject. They simply assume that a random configuration of some stuff could form something possessing "consciousness" – although even the meaning of consciousness is not defined (probably can't be). Some people seem to replace "brain" by a Turing machine, but that greatly weakens their case. After all, a very simple cellular automaton can work as a Turing machine, so that something of this kind could appear as a result of some random fluctuations seems reasonable but hardly impressive. We know we are not Turing machines.

Another difficulty is with the probabilistic arguments used. In popular writings two examples are used most often: a monkey that types “Hamlet” by striking random keys on the keyboard and the fact that we can find any natural number among the consecutive digits of pi starting in some place. However, these examples are very different. In fact experiments performed on real monkeys have shown (not surprisingly) that they are very poor generators of the uniform distribution. According to the Wikipedia: “Not only did the monkeys produce nothing but five pages consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the monkeys continued by urinating and defecating on it. ” So much for that.

The example involving pi is quite different and is based on an unproved but empirically strongly supported fact that pi is "normal", that is, its digits form a sample of the uniform distribution. If that is so, then the Borel-Cantelli theorem can be used to prove that every finite sequence of digits will occur somewhere in the expansion of pi. Note, however, that if the distribution of digits in pi is not uniform, the proof is not valid. All we can prove is a zero-one theorem, which says that every finite sequence occurs either with probability 1 or 0. This is hardly helpful.
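The empirical support for normality is easy to reproduce; here is a hedged, stdlib-only sketch (my own code, using Machin's formula, not anything from the discussion) that computes 1,000 digits of pi and counts digit frequencies, which come out near 10% each.

```python
from decimal import Decimal, getcontext

def arctan_inv(x, prec):
    """arctan(1/x) by its Taylor series, to roughly `prec` decimal digits."""
    getcontext().prec = prec + 10
    total = term = Decimal(1) / x
    n, sign = 1, 1
    while abs(term) > Decimal(10) ** (-(prec + 5)):
        n += 2
        sign = -sign
        term = Decimal(1) / (x ** n)
        total += sign * term / n
    return total

def pi_digits(prec=1000):
    """First `prec` digits of pi via Machin's formula."""
    getcontext().prec = prec + 10
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(pi).replace(".", "")[:prec]

digits = pi_digits(1000)
counts = {d: digits.count(d) for d in "0123456789"}
print(counts)   # each count is near 100, consistent with normality
```

Of course, a frequency check on 1,000 digits is only suggestive; normality of pi remains unproven, exactly as the comment says.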

But is there any basis for believing that the physical world is, with respect to the formation of “brains”, like the second case and not like the first? In other words, that whatever is needed to create a “brain” is really uniformly distributed in the universe? Is there any law of physics that guarantees that?

Dear lucretius, I think you are thinking in the right direction -- looking at what actually happened when they gave monkeys a typewriter was a good step, going from the abstract to the concrete: "what would really happen" -- but I think you do not go far enough. I question not only the conclusion but the premise. I suspect that not only are we not BBs, as our Esteemed Host says; I doubt that BBs are even real or ever will be. I will try to produce a reasoned argument tomorrow.

I also don't believe in BBs (does anyone?) but it's hard to disprove an argument if you do not know what it is :-(

Dear Lucretius, the argument of Boltzmann Brain fans is that if a theory allows the Universe to live indefinitely, any Boltzmann Brain fluctuation - however unlikely - will be ultimately repeated infinitely many times.

And because you are obliged to treat these pieces of garbage as your peers who have equal rights to "be" you - equal probability - it's probable that you are this piece of random garbage without history, too.

Of course, the way to disprove it is that the "typicality" assumption is as worthless rubbish as the Boltzmann Brains themselves. The truth - and the right theory - clearly is and says that we are allowed to believe - because of the empirical evidence - that we're not Boltzmann Brains so their probability is zero, regardless of their number. A theory isn't obliged to incorporate the "typicality" or "hardcore communism" because "where we are" is a part of every theory and the theories are supposed to differ in those propositions.

I think that since these people who believe in BBs do not think it is necessary to specify what a "brain" is, their argument equally implies that every figment of our imagination, as long as it does not involve a contradiction (like being round and square at the same time), must also be realised infinitely many times - and not only everything that we can imagine but everything that "could be imagined". Occam would have hated this world.

The key point is whether or not you would accept the idea of being part of a quantum state - whether you, your memories and feelings are eventually described through some kind of state vector. If we take quantum mechanics seriously, then not only us, but any finite subset of the universe (for example, our Hubble sphere) could be described as a quantum mechanical state. Then everything else is easy: The possible number of those states is finite, and if you have a fluctuating system, it could possibly occupy that particular state within a finite timescale. The result is a Hubble sphere that is identical to ours, including us discussing BBs on a blog.

There are no empirical ways to falsify this view - all observations we could possibly make inside the fluctuation would absolutely agree with whatever experiment we might conduct in our world. The Matrix reloaded :-)

The vast amount of freedom inherent in this philosophy is also its undoing: the predictive power of this idea is precisely zero, because these fluctuations could, given enough time, generate any possible state. We thus have to abandon this idea on philosophical grounds: even though it might be possible that we are living inside such a random fluctuation, it is incredibly unlikely, since we have already created theories that lead to the same world without the help of these unlikely fluctuations.

There is a catch, however: We are uncertain about the initial fluctuation that triggered the big bang. Perhaps, it will turn out that this initial fluctuation was as unlikely as the Boltzmann brains, so we are not completely out of the woods yet ....

Holger

Dear Holger, right, if the fluctuation occurs in the early stages of the Universe, it will probably evolve into a huge one - 42 billion light years across - over a long time - 14 billion years - and the laws of physics inside this large fluctuation will say what they have always been saying.

This *may* be us. It's not falsified so it's a viewpoint that a future theory of the early Universe may prefer. Right now, such a scenario looks indistinguishable from others but because of certain other predictions linked to this one in various theories, this question *may* be settled in the future.

What's falsified are just the BB-like fluctuations that look like "the ordered world previously believed by us" in regions that are either spatially smaller than 40 billion light years or at timescales shorter than 14 billion years. Such smaller BB-like fluctuations predict chaos to be vastly more likely outside the boundaries we have already seen, and every new observation falsifies them.

But the limit of these Boltzmann Brains in which the spatial and temporal size of the fluctuation goes to billions of (light) years (or infinity) is the ordinary physics we have known for quite some time, and we are *not* allowed to eliminate this limit because there are no empirical data that would falsify it. So the conclusion is what it has always been - we understand the Universe up to some early moment after the Big Bang, but what happened before that is at or behind the frontier of our present knowledge, and it's just wrong to constrain what we're allowed to believe about such things by some philosophies such as the "cognitive stabilities" etc.

Thanks. I (more or less) understood or guessed that. I think I had doubts on two points (as I tried to describe). One concerns the first thing you say - whether we can be described as "quantum states" without all the environment we are surrounded by. For example, the idea of an "isolated brain" - whether such a thing could exist even for a fraction of a second without any "support system". Of course this problem does not arise if we, together with our environment, are part of a huge fluctuation (as Lubos says).

The other thing I had doubts about is the mathematics behind it (probably because I do not understand the nature of these fluctuations). For example, to say that something will occur with probability 1, or even will occur infinitely often with probability 1, does not mean that the mean waiting time for it to occur is finite (the standard symmetric random walk is an example). Has anybody tried to compute the mean waiting time for a single Boltzmann brain to appear? How could one even do that without defining a "brain"?
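The random-walk caveat is easy to see numerically. A quick sketch (the cutoff of 10^5 steps and the 2000 trials are arbitrary choices): the simple symmetric walk returns to the origin with probability 1, yet the return time has an infinite mean, so the sample average is dominated by a few enormous excursions and keeps growing with the cutoff.

```python
import random

def first_return_time(max_steps, rng):
    """Steps until a simple symmetric random walk first returns to 0,
    or None if it has not returned within max_steps."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return None

rng = random.Random(0)
times = [first_return_time(10**5, rng) for _ in range(2000)]
returned = [t for t in times if t is not None]
# Nearly every walk returns (recurrence), but the sample mean is dragged
# upward by rare huge excursions: the true mean waiting time is infinite.
print(len(returned), max(returned), sum(returned) / len(returned))
```

"Occurs with probability 1" and "occurs after a finite expected wait" are therefore genuinely different statements, which is the point of the objection.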

I am not sure if I believe it but I like it ;-)

As far as I understand, these "brains" are not meant to be taken literally. The idea is far more general: you have a finite system, and its fluctuations may lift it into a state that is complex enough to contain "intelligence" or "consciousness", whatever that may look like.

We have to estimate the number of possible quantum states in a certain spatial volume. Sure, there are ways to do that; for example, we know how much information could be stored in such a volume - it is the entropy of a black hole of the same size. So we take a spatial region of 20 cm diameter (our "brain"), calculate the number of qubits of the corresponding black hole, and this number must somehow be related to the number of possible microstates of this system. Even though you don't know what a brain is, you definitely know that one of these microstates would correspond exactly to the current state of your brain (it has to, unless your brain is non-physical). We then need to know the average waiting time for the system to occupy one selected microstate ...
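That estimate can be made concrete. A rough sketch of the holographic bound for a 20 cm diameter region, treating its maximal information content as the Bekenstein-Hawking entropy of an equal-size black hole (order-of-magnitude only):

```python
import math

# Physical constants in SI units (CODATA values)
hbar = 1.054_571_817e-34   # J s
G    = 6.674_30e-11        # m^3 kg^-1 s^-2
c    = 2.997_924_58e8      # m / s

def holographic_bits(radius_m):
    """Bekenstein-Hawking bound on the information in a sphere:
    S / k_B = A c^3 / (4 G hbar), converted from nats to bits."""
    area = 4.0 * math.pi * radius_m**2        # horizon area, m^2
    s_over_kb = area * c**3 / (4.0 * G * hbar)  # entropy in nats
    return s_over_kb / math.log(2)

bits = holographic_bits(0.1)   # a 20 cm diameter "brain"
print(f"{bits:.2e}")           # ~1.7e68 bits, so ~2**(1.7e68) microstates
```

The number of candidate microstates, ~2^(10^68), is what makes the waiting time for any one selected state so absurdly long.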

Cheers,

Holger

I am postponing indefinitely my plan to write out a reasoned refutation of the "Boltzmann Brains" conceit. One, I am too slow and would not be able to finish it today. Two, lucretius is already covering the points I was going to make, and doing it better than I could. Here are a few loose ends; maybe you can do something with them: (1) Boltzmann never wrote about "Boltzmann brains"; perhaps (possibly even likely) he would have recoiled in horror at such a notion. (2) We know the lifetime of the proton is very long, but I believe no one has shown that it is infinite. If the proton decays after 10^40 years, maybe the crazy idea of BBs fails simply due to lack of time?

There seems to be an obvious problem with this: such a state would last for an extremely short time before fluctuating into something entirely different. I don't know much about the brain, but it seems the brain would vanish before it could even form anything like a "thought". To actually think you would probably need something like a continuous stochastic process corresponding to what goes on in a real brain, and the probability of this arising in this completely uncorrelated way and lasting long enough for any consciousness to be formed seems incredibly small and probably impossible to calculate.

Yes, such a state would probably survive on time scales of the same order as light needs to pass through the system, i.e. about a nanosecond in this case, because the chaotic environment would need that much time to interact with the "brain".

Protons and such are not needed - they are part of the fluctuation itself and are generated just as the corresponding anti-protons are. The energy of that brain-fluctuation would be of the order of its rest mass energy - you may imagine how long you would have to wait for a random fluctuation to gain such a quantity.
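To get a feeling for "how long you would have to wait", one can estimate the naive Boltzmann suppression exp(-E/kT) for a thermal fluctuation of that rest energy. The ~1.4 kg brain mass and the de Sitter horizon temperature of our universe, T ~ 2.7e-30 K, are illustrative assumptions:

```python
k_B = 1.380_649e-23    # Boltzmann constant, J/K
c   = 2.997_924_58e8   # speed of light, m/s

m_brain = 1.4          # assumed mass of the "brain", kg
T_dS    = 2.7e-30      # assumed de Sitter horizon temperature, K

E = m_brain * c**2            # rest energy of the fluctuation, J
exponent = E / (k_B * T_dS)   # suppression factor is exp(-E/kT)
print(f"suppression ~ exp(-{exponent:.1e})")   # exponent ~ 3e69
```

A waiting time of order exp(10^69) (in any units you like) dwarfs every timescale in cosmology, which is the comment's point.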

It's all quite weird, but it seems to me like a rephrasing of the Big Bang problem itself. Once you have solved one problem, you have the answer to the other one.

Cheers,

Holger

Supposition:

If quantum causality and the related connectedness potentially model all of the observable universe, including quantum entanglement (as an observable causal artifact of great systems of connectedness, needed to be a localized phenomenon), then semi-static quantum systems forming the physics constants are in play. Evolving components of quantum causality are also required, or there would be no change.

Any one of those physics constants evolving would have the same effect as Higgs Field decay, except that the entire observable universe would be affected instantaneously (quantum entangled connectedness of vast systems).

So the Big Bang may not have been the macro version of particle physics, where the universe was compressed into the size of a single atom. Rather, the already existing fabric of observable space-time slipped a physics constant, and the expansion is the relativity of all the physics constants stabilizing as a result.

Observable physics moves relative to a physics singularity we have come to throw about, called the speed of light.

So the Big Bang repeats, every time one of the physics singularities slip.

Whether in a finite or infinite pool of causal connectedness, based upon self-organizing chaos, we repeat; eventually. Non-relativistic causality has no time reference.

So based upon self-organizing chaos in a tending toward infinite field of causality, self-referencing intelligence seems to be a trivial consideration.
