Sean Carroll has posted a text, "Maybe We Do Not Live in a Simulation: The Resolution Conundrum", in which he apparently understood a basic problem – well, a disproof by contradiction – with all these delusions. This problem has been pointed out in a large part of the TRF blog posts about the anthropic/simulation/typicality/Boltzmann Brain topics. The defenders of these misconceptions basically assume that there exists a uniform distribution on an infinite set (e.g. a countable set; or a continuous set with an infinite measure).

Because there exists no real number \(x\) such that \(x\cdot \infty =1\), the uniform distribution just cannot exist, and all reasoning based on the assumption that the uniform distribution exists is therefore flawed.
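To make the contradiction explicit (my spelling-out of the argument, not a quote from anyone's text): if a uniform distribution assigned the same probability \(c\) to each of countably many options, the total probability could never equal one,

\[
\sum_{n=1}^{\infty} c = \begin{cases} 0, & c = 0,\\ \infty, & c > 0, \end{cases}
\]

so the normalization condition \(\sum_n p_n = 1\) has no solution with all the \(p_n\) equal.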

If we increase the readability of Carroll's texts by an order of magnitude, the specific paradox he understood is as follows.

"We" may be the natural people who were really born in a physical, biological world, the \(P_0\) people. The symbol \(P_0\) will be used both for the "kind of a civilization" as well as the number of people in such a world. However, it seems that in our world, it's easy to simulate many – let's say \(N\) – Universes and there are folks living in them, \(P_1\). They're the simulations run by the natural people.

But within these simulations, simulations are also running, with people who are simulations of simulations of the natural people, the \(P_2\) people, and so on, indefinitely. As the subscript increases, so does the number of people, because a civilization may simulate a greater number of people than the number of biological people who are alive, so

\[
P_{k+1} \sim P_k \cdot N \quad \Rightarrow \quad P_k \sim N^k P_0.
\]

But the total sum of people of all kinds,

\[
P_0+P_1+P_2+\dots = P_0 (1+N+N^2+\dots) = \infty,
\]

is divergent, so there can't be a uniform measure on this set. It is simply mathematically inconsistent to say that it's "equally likely to be any human in this set of people at all levels". Even Scott Aaronson has been able to notice that such geometric sums may be divergent (although he defended a wrong final answer, too). ;-)

One may show the paradox in other ways, too. If it's \(N\) times more likely for "us" to be the \(P_{k+1}\) people – one more computer layer deep – than the \(P_k\) people, then we're more likely to be the higher-\(k\) people, and ideally the \(P_{\infty}\) people. But this set of simulations-on-simulations-on-simulations, ad infinitum, doesn't really exist.

So the only consistent "variation" of the original theory is one in which the sequence of terms \(P_k\) terminates at some maximum \(n\). There are people of the kind \(P_n\) who are no longer able to create simulations, i.e. to produce people of the \(P_{n+1}\) kind. The last term \(P_n\) is the largest one, so we're most likely to be the \(P_n\) folks. But this conclusion contradicts the basic assumption that it's "easy to make simulations". Because we reached a contradiction, something – at least one assumption – has to be wrong.

It's enough if one assumption is mistaken and the paradox evaporates. In reality, most assumptions made by Carroll are incorrect. But the most severe mistakes are ones that Sean Carroll doesn't even list. They are flaws in what he considers rational or probabilistic reasoning, as I will discuss later. The most important incorrect *listed* assumption of all the misconceptions featuring simulations, anthropic principles, and Boltzmann Brains is his fifth one:

5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.

The typicality assumption is absolutely irrational. There exists no rational justification for such an assumption, and the paradox above – if formulated in a much more general setup – is indeed a proof that the assumption is rubbish. Infinite sets of objects picked according to some condition appear everywhere in mathematics and science; but the existence of infinite sets clearly doesn't mean that the probability calculus is internally inconsistent whenever there is an infinite number of options.

What is flawed is the assumption that in general, the "number of elements in a set" is proportional to the probability that we belong to this set. This assumption holds only in a tiny fraction of situations when there is some mechanism or a reason why it should hold (e.g. ergodic thermalization that makes all microstates in an ensemble equally likely). But without such a mechanism, it just doesn't hold at all, not even approximately.

There are lots of ways to make the probabilities \(p_k\) that we are the \(P_k\) people convergent. For example, we may replace the "sharp cutoff" at the maximum \(n\) that Carroll mentioned by a smoother cutoff and have

\[
p_k \sim \frac{X^k}{k!}.
\]

The sum \(\sum_{k=0}^\infty p_k\) is the Taylor series for the exponential, so it equals \(\exp(X)\), a convergent result (the factorial does the suppression), and the largest term \(p_k\) is the one with \(k\approx X\). So if each civilization creates a 50 times larger number of people in its simulations, then it's most likely that we are the \(P_{50}\) people – the simulation of simulation of... (fifty times) of the natural people, according to Carroll's argument corrected by the inverse factorial.
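A quick check of the factorial-suppressed weights: normalizing \(p_k = X^k/k!\) by \(e^{X}\) gives the Poisson distribution, and the peak indeed sits near \(k = X\) (\(X = 50\) is the value from the text; the truncation at 150 terms is my numerical convenience).

```python
from math import exp, factorial

X = 50  # each civilization simulates 50x more people, the value from the text

# Normalized weights p_k = exp(-X) * X**k / k! form a Poisson distribution.
p = [exp(-X) * X**k / factorial(k) for k in range(150)]

print(abs(sum(p) - 1.0) < 1e-9)            # the factorial tames the sum: True
mode = max(range(150), key=p.__getitem__)  # most likely simulation depth
print(mode in (X - 1, X))                  # the peak sits at k ~ X: True
```

(The Poisson distribution with an integer mean \(X\) has twin modes at \(X-1\) and \(X\), which is why the check accepts either.)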

This result is still wrong, however. If the observations are compatible with our being the natural people, \(P_0\), it's always vastly more likely that we are indeed the natural people, and much less likely that we are the \(P_k\) people with any positive \(k\), i.e. that we are in a simulation. Why? Well, it's easy to prove this assertion by replacing Carroll's totally incorrect caricature of the probability calculus with a valid probabilistic calculation, e.g. one based on Markov chains. The hyperlink in the previous sentence points to an article about the sleeping beauty problem. And indeed, the mistake that Carroll and others are making in all the simulation-like discussions is exactly the same mistake that they are making in the sleeping beauty problem.

The point is that all the competing hypotheses – we are the natural people who evolved directly, physically and biologically, from the Big Bang; we are the first generation of simulations of the natural people; we are the simulations of simulations of someone; and so on – may be and should be imagined as places on some tree (a sketch of the spacetime) that begins with a beginning such as the Big Bang.

I originally wanted to create a particular example but we would waste lots of time with the irrelevant details. The picture of the tree above is meant to be just a sketch – ignore all its detailed properties.

Even if you don't like a single Big Bang and you prefer theories that involve eternal inflation, pocket universes, and other things like that, you should imagine that there's some beginning \(t=0\) – even if you prefer the coordinate to be \(t=-\infty\) for this beginning – because that starting point is needed to define any consistent probability distribution. It's the beginning, the root of the tree, where you may assume that the "total probability is 100%", and this probability gets divided among the branches.

Now, if you assume the hypothesis that you are one of the \(P_{k+1}\) people, you also assume that there are or were or have been some \(P_k\) people. Your Ms Simulator/God was one of them. So according to this hypothesis, the history from the Big Bang or the true beginning to you includes the evolution from the Big Bang up to your Ms Simulator/God, *plus* something else. The addition becomes a multiplication, a *times*, if you want to quantify the probabilities. This later evolution in the "something else" adds extra uncertain assumptions about the subsequent history, so it surely makes the original hypothesis *less likely*.

For this reason, if your observed data are compatible both with being a person \(S\) and with being that particular person's simulation \(T_j(S)\), then you are simply more likely to be \(S\), because the "probabilistic pie" assigned to the \(T_j(S)\) hypothesis is at most a small slice of the pie assigned to \(S\) herself.

You may phrase this qualitative result in terms of Occam's razor: extra features shouldn't be introduced unless it's necessary. Our question – whether we are \(S\) or \(T_j(S)\) – is a perfect specific example of this Occam's razor situation. And the whole evolution from \(S\) to \(T_j(S)\) is exactly the set of "extra things" that should be avoided by Occam's razor. And indeed, we can explain why they should be avoided: these extra structures or assumptions reduce the probability \(P(S)\) of the hypothesis by the extra small factor \(P[S\to T_j(S)]\) if you want to calculate the probability \(P[T_j(S)]\).
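A toy chain model of the pie-splitting (the branching probability \(q\) and its value are my illustrative assumptions, not anyone's measured numbers): start with the whole 100% pie at the root and charge a factor \(q<1\) for each extra layer of simulators a hypothesis postulates.

```python
q = 0.2  # assumed probability that a civilization ever spawns a simulation layer

def weight(k, q):
    """Prior weight of the hypothesis 'we are a depth-k simulation'."""
    # k extra "a simulator arises" steps, each costing a factor q,
    # after which the chain stops with probability (1 - q).
    return q**k * (1 - q)

weights = [weight(k, q) for k in range(60)]
print(weights[0] == max(weights))  # the natural-people hypothesis (k=0) wins: True
print(round(sum(weights), 9))      # the pie stays normalized, about 1.0
```

Whatever value of \(q<1\) you pick, the weights decrease monotonically with the depth \(k\), so "we are \(S\)" always beats "we are \(T_j(S)\)" – and the weights still sum to (essentially) one, unlike the divergent uniform counting.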

There's just no way the more awkward hypothesis that we are simulations \(T_j(S)\) could be more likely than the simpler hypothesis that we are \(S\) itself. By induction, if all the data are compatible with our being natural people, then we are almost certainly natural people. If you want to find evidence that (it's more likely that) we live in the Matrix, you simply *need* to falsify the hypothesis that we are the natural people. You simply *need* to find the déjà vu cats or numerical approximation errors out there in (simulated) Nature or something like that. If you don't do any of these things, solid science still clearly implies that we are the natural people.

Note that my – correct – probability calculus always starts with some 100% pie, which is connected with the possible events during the Big Bang or any other beginning of the Universe. And this pie gets divided. Hypotheses that we have gone through unnecessarily long histories are therefore suppressed relative to hypotheses that we have gone through shorter histories, whenever both are possible.

But more generally, once again, the probability is a piece of an abstract 100% pie, and if you are imagining probabilities as something *entirely different*, then you simply don't have a clue what probability is and you had better avoid a term that is clearly way too abstract for you. The probability is *not* counted as some number of cloned creatures or leftists or Muslims in some multiverse or other things that Carroll wants to count as a "proxy" for probabilities. These two numbers have absolutely nothing to do with each other. In particular, probabilities are never greater than one, while the number of observers etc. is typically much greater than one.

Sean Carroll and all the other pro-anthropic, pro-we-are-simulation, pro-Boltzmann-Brain babblers, and the sleeping beauty thirders are using a mathematically inconsistent caricature of the probability calculus. In their caricature of the probability calculus, the 100% pie is replaced by something that may always be "inflated" by producing many copies of a product, repeating an experiment, asking questions many times, retweeting a bogus tweet, repeating an untrue rumor about Donald Trump, and so on. But such "extra action" added to a hypothesis – prolonging the history that the hypothesis assumes – can never increase the probability of these propositions. The extra history always *reduces* the probability of a given hypothesis.

They could figure out that their reasoning is completely wrong if they tried to think at least a little bit. But their brains are too expensive to be used for thinking – they prefer to use them as excuses for the propagation of dumb, incorrect, ideologically driven, mindless slogans.

If they were ever calculating some probability ratios, they could arguably see that their argumentation that "we're likely to be a simulation" is pure junk. Equivalently, if they were trying to construct a similar argument but leading to a result that supports their political foes, they could see that their argumentation is as wrong as the parody.

For example, one may easily construct a sectarian parody. I am almost certain to be a Mormon, and so are you. Why? Because it's possible for a Mormon to have children and Mormons have many children. And they may spread the belief to other places, and so on. So the number of people in the \(k\)-th generation of Mormons, or Mormon converts after \(k\) links in a chain of conversion, is a geometrically increasing function of \(k\). So for very high \(k\), the numbers grow, so the Mormon great great... grandsons totally dominate. We're generic, and therefore we must be Mormons.

Two things may be said about the childish argument in the previous paragraph. First, it is absolutely idiotic. Second, it is absolutely isomorphic to Carroll's and other crackpots' arguments that we should live in a simulation (more precisely, in a simulation within a simulation within... and so on).

If you can think of methods to generate infinite sets of some "people" by some evolution, it simply doesn't mean and cannot mean that you have proven that "we" or the "people" are members of this set. One is not "obliged" or "guaranteed" to be an element of every infinite set. Infinite sets and infinite anything may be intimidating for people who don't understand maths at all. To pick \(\infty\) as a God that you should worship is a typical attitude of religiously minded people who don't really think rationally (I could list a few set theorists and perhaps philosophers as exceptions, but my statement does apply to most set theorists and philosophers praising the infinity as well – they are just not being rational). But the probability that you belong to a given infinite set may still be small or zero. The set of prime integers is infinite but I am not one of its elements. This simple observation is enough to see that the whole way of thinking repeated by the likes of Carroll is absolutely idiotic.

One way to see that the likes of Carroll are utterly unable to think rationally is the complete "delocalization" of the wrong assumption. Even when Carroll finally admits that there is a contradiction in his assumptions about simulations, he ends up saying that he doesn't know which assumptions are incorrect. So we are left with a chaotic list of 7 statements, and he has no clue which of them are relevant.

This is just a baffling proof of his absolute confusion. For example, his first assumption says that "We can easily imagine creating many simulated civilizations". It may be wrong in our world – we would probably say that an ambitious statement like that is wrong in our world. It may be wrong in other worlds. But a point that totally eludes Carroll is that you may still think of mathematical models or physical theories where this assumption holds. And even in such a world, you may run into the paradox that Carroll sketched. The physical theory or mathematical model may clearly be totally internally consistent and obey all the "technical" assumptions. But the paradox will *still* arise.

This proves that the trivial paradox that Carroll has been able to understand after 20 years has nothing to do with assumption 1. The same paradox arises even if assumption 1 is guaranteed. It's similar with assumption 4 and perhaps others. His is a very mixed bag, with some repetitions, tautologies, and so on. If he carefully analyzed these simple possibilities, Carroll could easily find out what's defective in the reasoning that leads to the contradiction. What's completely wrong is Carroll's whole way of thinking, his usage of the probability calculus etc. It's his *brain* that basically needs a transplant.

You can't fix it by some adjustment of the detailed technical assumptions about which computer programs people in the future will be creating etc. – even though those assumptions are rubbish as well. The fundamental problem is much more universal and affects *every single paper and blog post* that these people have ever written. They just don't know how to calculate or estimate probabilities in any situation, from the sleeping beauty problem to the increasingly long historical explanations of the present.

They are using flawed assumptions about typicality and a constant inflation of the 100% pie to arbitrarily ill-normalized total values of \(\sum P_j\) or \(\int \rho(x)\,dx\). They are constantly using the acausal and therefore logically defective assumption that a later event in a longer history may be just as likely as the earlier event that it depends upon. This "democracy" between all the possibilities – especially possibilities which occur at different moments of time – is simply mathematically inconsistent. Not only is there no positive evidence in favor of such "democracy"; there is a clear proof that this assumption is mathematically inconsistent, as well as a quantitative calculation showing that the "possibilities with strictly longer histories" are simply always less likely than their shorter subhistories – as long as both are equally compatible with the observations.
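The quantitative statement behind this is just the product rule for conjunctions (written here in my notation): a hypothesis that extends a history by extra steps can only lose probability,

\[
P[\text{longer history}] = P[\text{subhistory}]\cdot P[\text{extra steps}\mid \text{subhistory}] \leq P[\text{subhistory}],
\]

because the conditional factor on the right-hand side never exceeds one.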

A Christian could say that the Big Bang was a little marble thrown by Jesus Christ shortly after he was crucified. So both the Big Bang theory and the Bible are right – they are different parts of the full history. But scientifically, the hypothesis with the extra Bible inserted before the Big Bang is guaranteed to be less likely than the simpler Big-Bang-only hypothesis. I leave it as an exercise for you to adapt my previous argumentation to this analogous case. And the insertion of one extra layer of simulators is equally unnatural – and equally suppresses the probability of the hypothesis – as the Bible inserted before nucleosynthesis.

Carroll has needed some 20 years to "rediscover" a trivial point which is about 1% of the obvious conclusions that a competent scientist is able to make in less than an hour, and that a competent scientist has probably been certain about at least from the teenage years, anyway. With this speed of learning combined with the constant proliferation of new idiots and brainwashing of the people by ever stupider pop science, we can't expect that the understanding of mathematics and physics by the broader public is going to improve.

By a linear estimate, Carroll himself will need some 2,000 years on average to understand the things that were clear to me – and many others – an hour after all these anthropic and Boltzmann Brain and simulation problems began to be discussed. If you realize that lots of people are even stupider than Carroll, the picture doesn't look rosy. In the future, a typical/median human being won't be a Boltzmann Brain but he will probably be a deluded, stupid brain.
