## Monday, July 27, 2009

### Don Page: Born Again

I am convinced that the first paper on each day's hep-th listing has, on average, a higher value than a typical paper. Physicists often know when they have found something pretty and they want others to see it at the top of the listing. Because most of the people who submit papers to the arXiv are pretty bright, their selection is usually wise, too.

Unfortunately, on some days, this mechanism works in reverse. Don Page wrote two new papers and I will look at both of them. The first of them is called

Born Again.
The title is clever: it means that the paper is another attack on Max Born's interpretation of quantum mechanics. It also subtly invites you to join Page's church by becoming a born-again Christian - and it may stimulate your thinking about reincarnation. ;-)

Unfortunately, the title is the last clever part of the paper. Pretty much every paragraph of the paper contains fundamental misconceptions about the character of physical law and the very nature of rational thinking. The anthropic considerations are far from being the only misunderstood aspects of reality in this paper.

Parts of quantum theories

The first two paragraphs summarize the paper's flawed program - he plans to show that Born's interpretation of quantum mechanical amplitudes can't apply to cosmology - and we will discuss the points in detail. The second section sketches how the author imagines the inner structure of a physical theory. In his viewpoint, it is made out of a whopping six qualitatively distinct components:
1. Kinematic variables
2. Dynamical laws
3. Boundary conditions
4. Specification of what has probabilities
5. Probability rules
6. Specification of what the probabilities mean
You should be shocked by such a huge number of components because in normal physics, a quantum mechanical theory is said to be composed of just two components:
1. The logical, interpretational framework of quantum mechanics
2. The dynamical laws
How could he have possibly obtained six instead of two? You might think that it's just some philosophy because one can't empirically answer the question "How many parts does a physical theory have?". However, there is a reason why this philosophy matters: the confused, fragmented, redundant picture of physics that Don Page keeps in mind is pretty much the primary source of all of his confusions that are expanded later in the paper. All the aspects of physics simply don't fit together in his mind, which is why he thinks that he can combine them, xerox them, modify them, and recombine them in arbitrary ways.

So why are there two components only and what do they mean? The first component is universal for all quantum mechanical theories: it contains the general postulates of quantum mechanics.

It tells us that the squared absolute values of the complex amplitudes determine the probabilities that can be measured by repetitions of the same experiment, whenever that's possible; it tells us that Yes/No questions are represented by projection operators whose expectation values determine the probabilities; other observables are given by Hermitian linear operators; the evolution is given by a Hermitian Hamiltonian or a unitary evolution operator (or S-matrix); and a few others. These rules are completely general for all quantum mechanical theories and I formulated them in such a way that a sensible physicist should accept them regardless of her favorite interpretation of quantum mechanics. However, for the sake of clarity, I will assume that the reader uses the Consistent Histories interpretation, just like I do, and the rules of this most sensible logical framework for quantum mechanics are fully included in the first category.

On the other hand, the Hermiticity or unitarity rules that I previously included in the same group are not independent assumptions: they may be viewed as constraints on the types of Hamiltonians or evolution operators that are allowed in the second category. So once you define a proper Hermitian Hamiltonian in the following paragraph, you may forget about them.

The second category contains the actual formula for the Hamiltonian (or the S-matrix). This part contains everything that is specific for a given quantum mechanical theory. It also automatically contains all the nontrivial mathematics underlying your quantum theory, all the information about the "kinematics" or sensible degrees of freedom, all the information about the degrees of freedom that like to behave classically (because the Hamiltonian is enough to calculate the rate of decoherence in any situation). In the case of quantum field theories, this second category knows everything about the spacetime dimensionality, the particle species, and their interactions. All these things can be deduced from the definition of the Hamiltonian.

So why did Mr Page obtain six separable components?

It may be useful to describe Page's four redundant points one by one. Let's identify his points (5) and (2) with the correct categories of a quantum theory, the logical framework and the dynamical laws. What about the other four?

First, it is not true that "kinematic variables" have to be the starting point for any quantum theory. And even when they exist, it's not true that they're independent of the dynamical laws, either. What do I mean? Well, in simple quantum mechanical models from the undergraduate textbooks, quantum mechanical theories are obtained by the "quantization" of a classical starting point. But that's not the general way in which quantum mechanical theories arise.

Such a construction of a quantum mechanical theory is only meaningful if the theory has a classical limit and if this limit is useful to reconstruct the whole quantum theory, even outside the limit. But it's not true that every quantum mechanical theory must have such a classical limit. For example, the (2,0) superconformal quantum field theory in 5+1 dimensions has no dimensionless coupling that can be made small. So all the dynamical questions about this theory are inherently quantum mechanical. There is no classical Lagrangian - at least not a manifestly Lorentz-covariant one - whose quantization would generate this quantum theory. The (2,0) theory is surely not the only example.

What is actually needed for a quantum theory is a Hilbert space. But there's no real freedom in choosing an infinite-dimensional Hilbert space because all separable infinite-dimensional Hilbert spaces are unitarily equivalent to each other. The only general constraint is that there is a space of states, and this rule clearly belongs to the logical framework of quantum mechanics: it's a postulate. Claims that quantum mechanical theories have to have "classical degrees of freedom" whose quantization produces the quantum theory are artifacts of a blind acceptance of some features of simple textbook examples that are in no way general. There doesn't have to be any privileged or well-defined way to parameterize the Hilbert space by eigenvalues of a fixed collection of observables. A Hamiltonian (or an evolution operator) acting on a Hilbert space is enough.

The second point is even more important as a source of Page's confusion. Imagine that we deal with a theory that is obtained by the quantization of a classical theory. Page thinks that the definition of some kinematic variables is an independent piece of information that is needed to get the complete theory. But that's wrong, too. And this statement is actually wrong for all quantum mechanical theories.

Choose an arbitrary basis of the Hilbert space and define the dynamical laws using a Hamiltonian written as an infinite matrix. Together with the completely general quantum mechanical postulates, you already have everything you need to make any predictions or retrodictions. Isn't it obvious? You may evolve states and calculate the amplitudes, i.e. the probabilities, for any projection operators. You must also have some good questions - good projection operators - but you're free to choose your favorite ones. However, you will be highly restricted by the logical framework of quantum mechanics - by the consistency of your histories, in our case - and decoherence will de facto forbid most questions that you could a priori ask.
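To make this triviality explicit, here is a toy numpy sketch - every number in it is made up for illustration. A Hermitian matrix on a small Hilbert space plays the role of the Hamiltonian, and together with the general postulates (unitary evolution, expectation values of projectors) it already yields predictions:

```python
import numpy as np

# Toy illustration (my own, not from the paper): a Hamiltonian matrix in
# an arbitrary basis, plus the general postulates, is enough to predict
# probabilities. All numbers below are made up.

# A random Hermitian "Hamiltonian" on a 3-dimensional Hilbert space.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                       # Hermitian by construction

# Unitary evolution U = exp(-iHt) via the spectral decomposition of H.
t = 1.7
eigvals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * eigvals * t)) @ V.conj().T

# Evolve an initial state and ask a Yes/No question, i.e. compute the
# expectation value of a projection operator ("is the system still in
# the first basis state?").
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
psi_t = U @ psi0
P = np.diag([1.0, 0.0, 0.0])                   # projector onto the first state
prob = np.real(psi_t.conj() @ P @ psi_t)       # Born's rule

assert np.allclose(U @ U.conj().T, np.eye(3))  # unitarity from Hermiticity
assert 0.0 <= prob <= 1.0
```

Nothing resembling "kinematic variables" had to be specified separately: the matrix and the postulates were enough.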

But do we need to specifically define the number of dimensions, the fact that particles can have three-dimensional positions, and so on? Not at all. All these things are aspects that can be deduced from the Hamiltonian. For example, the most interesting Hamiltonians - especially those that are consistent with special relativity - enjoy a degree of locality that is often exact. But this locality can be derived from the Hamiltonian, too. Once you derive it, you also know the dimension of the spacetime in which the laws are local. Why? Local Hamiltonians have the property that you can write them (approximately or exactly) as sums of commuting smaller Hamiltonians associated with different, non-overlapping regions.
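A minimal sketch of this locality statement, again with made-up matrices: a Hamiltonian of the form H_A ⊗ I + I ⊗ H_B consists of commuting pieces associated with two non-overlapping "regions", and the time evolution factorizes accordingly.

```python
import numpy as np

# Toy example (my construction): a "local" Hamiltonian on two
# non-overlapping regions A and B is a sum of commuting pieces,
# H = H_A (x) I + I (x) H_B, and the regions evolve independently.

rng = np.random.default_rng(1)

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

HA, HB = random_hermitian(2), random_hermitian(3)
I2, I3 = np.eye(2), np.eye(3)

# Lift each regional Hamiltonian to the full Hilbert space H_A (x) H_B.
HA_full = np.kron(HA, I3)
HB_full = np.kron(I2, HB)
H = HA_full + HB_full

# The two pieces commute...
commutator = HA_full @ HB_full - HB_full @ HA_full
assert np.allclose(commutator, 0)

def expm_hermitian(M, t):
    """exp(-iMt) for a Hermitian matrix M, via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# ...so the evolution factorizes: exp(-iHt) = exp(-iH_A t) (x) exp(-iH_B t).
t = 0.9
U_full = expm_hermitian(H, t)
U_factored = np.kron(expm_hermitian(HA, t), expm_hermitian(HB, t))
assert np.allclose(U_full, U_factored)
```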

The Hamiltonian is enough to calculate the rate of decoherence in all situations, too.

Page's confusion about "regions" - and his bizarre conclusion that the "regions" make Born's interpretation of the amplitudes invalid - arguably stems mainly from his failure to appreciate that position is just another observable in quantum mechanics: its "local properties" emerge from the Hamiltonian, if you wish, and they can't be separately incorporated into the theory according to one's own choices.

Summarizing the redundant points

There are only two parts of a quantum mechanical theory while Page proposed six of them. One of them, the (1) kinematic variables, shouldn't have been included because of the reasons we have discussed in detail.

Another one, the (3) boundary conditions, shouldn't have been included because the boundary conditions are either a part of the dynamical laws and allow us to predict something, or they're not a part of the theory at all.

For example, if the initial conditions are given by the Hartle-Hawking state or its generalization, this state is described by the same path integral that is also used for the evolution. It's just the same path integral applied to a different topology of spacetime (one that has no initial boundary). It makes no sense to discuss it separately from the dynamical laws.

On the other hand, if the boundary conditions are left arbitrary by the physical theory, they're not a part of the physical theory. One always needs to know the boundary and especially initial conditions to get some answers. But the boundary conditions are not a part of the theory but a part of the question that we are answering using the theory. They are parts of the application of the physical theory but not parts of the theory itself. In the same way, we also didn't include the salary for the physicists as a separate category because while it may be a part of the solution to a problem, it is surely not a part of the theory. ;-)

A theory must define its set of concepts and the legitimate rules how and which questions can be formulated and how answers may look like (the set of allowed questions and answers may be determined by a complicated recipe that needs a lot of calculations!). But it doesn't and cannot really tell us which questions should be asked at a given moment.

The remaining two redundant points are (4) and (6) which are just some confused questions that should have been included into (5), the logical framework of quantum mechanics.

Page asks what has probabilities: any proposition (i.e. any projection operator) has probabilities. And he asks what the probabilities mean: well, they mean "N_k/N" where "N" is the number of repetitions of the same experiment and "N_k" is the number of the repetitions which have some property "k" - whose probability we're talking about.

If the same situation is repeated many times, the probabilities can be understood in the frequentist fashion. If the situations can't be repeated many times, probabilities become largely "empirically meaningless" and "unmeasurable" but they can still sometimes be predicted by complete theories. Every scientist or mathematician should know these elementary things and they shouldn't be claimed to be "on par" with the whole Hamiltonian or with the logical framework of quantum mechanics, which are much more nontrivial beasts.
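The frequentist statement can be checked in one's sleep. Here is a toy simulation with an arbitrarily chosen Born probability of 0.36:

```python
import numpy as np

# A sketch of the frequentist meaning of probability: repeat the same
# experiment N times and compare N_k / N with the Born probability.
# The probability 0.36 is an arbitrary number for illustration.

rng = np.random.default_rng(42)
p_k = 0.36                        # Born probability of the outcome "k"
N = 100_000                       # number of repetitions of the experiment
outcomes = rng.random(N) < p_k    # each repetition: "k" happened or not
N_k = int(outcomes.sum())

frequency = N_k / N
# By the law of large numbers, the frequency converges to p_k; the
# typical deviation here is sqrt(p(1-p)/N) ~ 0.0015.
assert abs(frequency - p_k) < 0.02
```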

The second section ends with another paragraph that I find very controversial, to put it mildly.

Page says that "a goal of science" is to produce complete theories labeled by "i" that predict probabilities of different observations "j", namely "P_j (i)". I don't think that a goal of science is to have many theories. A goal of science is to find out which hypothesis is actually correct, by eliminating all the other, wrong (falsified) ones and by refining the viable (temporarily correct) one(s): Page's "goal" is just an intermediate step in a typical scientific process. And it's questionable whether he should be using the word "theory" before science determines which of the mutually exclusive hypotheses (would-be theories) is actually right.

Are there problems with Born's rule?

By Born's rule, we will mean the rule that the probability of "j" is the expectation value of the projection operator "P_j". Both of these objects are associated with a given theory, "i", which is another index identifying these objects.

Page thinks that this rule fails in a large universe that may have many copies of the same observer. Why? Because he thinks that by not knowing his location, the observer either can't define the projection operator or can't trust the rule. Page is very vague about his precise statement. At any rate, it is completely manifest that the following paragraphs don't contain any evidence supporting his statement.

He correctly notices that mutually exclusive projection operators satisfy
P_j(L) P_k(L) = δ_{jk} P_j(L)
i.e. that the product vanishes for two different projection operators and each of them squares to itself. But quite suddenly, he says that the product
P_j(L) P_k(M)
doesn't have to be zero for two different regions L,M. Well, indeed, it doesn't have to be zero. And what?

If you restrict the projection operators "P_j(L)" to act on the smaller Hilbert space associated with the region "L" only (which is enough to discuss all experiments in "L"), and similarly for "M", then the product is not even well-defined because the domain of the left operator doesn't coincide with the range of the right operator. In other words, the indices don't match: Page is summing over a pair of indices that are made equal even though they can't really be equal since their types differ.

This means that a consistency check has failed. Objects such as "delta_{jk}" are only well-defined if "j,k" are indices of the same type. But if "j,k" parameterize basis vectors of two Hilbert spaces describing two different regions "L,M", then no "delta_{jk}" exists because the bases of these two Hilbert spaces are really different. The Kronecker symbol is ill-defined and it cannot appear in any error-free calculation. If such a meaningless object appears, it doesn't prove an error in Born's basic postulates of quantum mechanics but rather an error in the paper where this inconsistent product appeared - e.g. in Page's paper.
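A toy version of this consistency check - the dimensions are of course made up: numpy refuses to multiply the two projectors for exactly the reason described above, namely that their index types don't match.

```python
import numpy as np

# Toy illustration: projectors acting on the Hilbert spaces of two
# different regions carry indices of different types, so their product
# is simply not defined. The dimensions (2 and 3) are made up.

P_j_L = np.diag([1.0, 0.0])        # projector on the 2-dim space of region L
P_k_M = np.diag([1.0, 0.0, 0.0])   # projector on the 3-dim space of region M

mismatch = False
try:
    P_j_L @ P_k_M                  # "contracting" indices of different types
except ValueError:
    mismatch = True                # numpy refuses: shapes (2,2) and (3,3)
assert mismatch
```

(If the dimensions happened to coincide numerically, the matrix product would still be physically meaningless: the bases belong to different Hilbert spaces and no "delta_{jk}" relates them.)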

If the observer has identical friends in other regions, there are several issues to be addressed. First, the very question "which of them he is" is largely unphysical for him. He is one of them, and as long as the other copies are distant and the laws are local, the other copies can't even influence any of his physical measurements (and therefore, they can't influence any of his correct predictions, either). Quantum mechanically, fully identical particles are really identical and it makes no sense to ask which is which, not even in principle: the only way to distinguish them is by a different eigenvalue of an operator (e.g. location).

Second, and this point is even more important, the possible existence of other copies of the observer shouldn't change anything about the observer's will to avoid elementary childish errors in his considerations. When he investigates the causal relationships between any pair of observations or maneuvers in his lab, he must always work with operators that act on the same Hilbert space - the Hilbert space describing his lab. Whenever he multiplies two operators that act on different spaces - e.g. Hilbert spaces associated with different regions - he knows that he has made a mistake.

The observer may try to confuse himself by saying that he may be the observer in region "1945" or "2009" or whatever (such a simple numerical identification of the copies is meaningless and impossible, anyway). But he must never get confused enough to apply Einstein's summation rule to a pair of indices of a different type.

This is such a trivial and sharp point that I am simply flabbergasted that someone can fail to get it - and create fog about it - for a few years. Let me repeat the trivial point again.

The possible existence of additional copies of an observer, whether they exist or not, is completely inconsequential for the correct predictions of probabilities of any outcome of any measurement in his "region" as long as these predictions are made correctly. The basic logical framework of physics combined with (at least approximate) locality guarantees that it is the case, and any hypothesis or a "system of reasoning" that denies locality at this very basic level is instantly ruled out. Why do we have to read this nonsense on hep-th for years?

The mathematical expressions that appear in the last paragraphs of the third section are uniformly wrong, too. Page tries to propose his own Born's rule where the probability is given by
P_j(i) = < I - Prod_L (I - P_j(L)) >.
Jesus Christ, this is formally the probability that "j" occurs in at least one region. That's why he needs the absurd product over the regions. But such a product is completely unphysical. One can't ever make a measurement that correlates all copies of an observer, especially not those who will be born in the future or who live in spatially separated regions.
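To see what Page's expression formally computes, consider a toy product state of two independent regions (my own construction, not taken from the paper): the expectation value of I - Prod(I - P_j(L)) reduces to the inclusion-exclusion probability that "j" occurs in at least one region.

```python
import numpy as np

# Toy check: on a product state of two independent regions, the
# expectation of I - (I - P_L)(I - P_M) equals 1 - (1 - p_L)(1 - p_M),
# i.e. the probability that "j" occurs in at least one region.
# All amplitudes are made-up illustration values.

psi_L = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)
psi_M = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)
psi = np.kron(psi_L, psi_M)           # product state of the two regions

P = np.diag([1.0, 0.0])               # "outcome j" projector within one region
I2, I4 = np.eye(2), np.eye(4)
P_L = np.kron(P, I2)                  # "j happens in region L"
P_M = np.kron(I2, P)                  # "j happens in region M"

page_op = I4 - (I4 - P_L) @ (I4 - P_M)
page_prob = np.real(psi.conj() @ page_op @ psi)

p_L, p_M = 0.3, 0.8                   # Born probabilities of j in L and M
assert np.isclose(page_prob, 1 - (1 - p_L) * (1 - p_M))   # 0.86
```

The formula indeed pools all the regions together, which is exactly the unphysical feature criticized above: no local experiment measures this combination.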

If an observer prepares a neutron and tries to find the probability that the electron emitted in its beta decay has a right-handed helicity, either experimentally or theoretically, he is interested in the final state of particles in the same region where he defined the initial state, too. It doesn't matter a single bit whether there are similar or identical regions somewhere, how many of them there are, and what the other observers are doing. The local character of the laws of physics guarantees that.

No dynamics - or projection operators - from the other regions and copies can influence the observed result in a particular experiment in a particular region, whether it is "known" or "unknown". That's also why all these operators from wrong regions have to drop out of all the correct theories and calculations of the probabilities. It's a trivial consistency check on any calculation.

In the text above, I assumed that the observer had to define something about the initial conditions, for example that he had a neutron. The location where this neutron was assumed to exist automatically "marked" the correct region where the final probabilities are predicted, too. You could think that there could exist answers (e.g. about the probabilities of a spin, or the "probability of life") that don't depend on the question (including some knowledge about the initial conditions) but I assure you that you won't find any such scientific answers that have no questions, perhaps except for 42. ;-)

This statement should be obvious in the case of ordinary "laboratory experiments", such as the measurement of a spin of a decay product. The initial conditions are as important to be specified as the final state whose probability is to be calculated. The initial state is associated with the degrees of freedom in a particular region, and the dynamical laws can only relate this information to the predictions of observables in the same region. The information from other regions can never enter the discussion at all.

But even if you ask the "more grandiose" questions, such as "what is the probability of life?", you will find out that there's no science without questions. First of all, the exact answer to this question is known empirically: the answer is 100%. Life surely exists. Any viable theory must be compatible with the fact that life exists.

But that's it. If a hypothesis predicts that life in a given world emerges with a certain probability between 0% and 100%, you must be extremely careful whether such a prediction is scientifically justified, measurable, testable, and well-defined, and whether the figure changes the likelihood that the hypothesis is valid (and in which way).

Once again, the experimentally measured number is 100%. Life surely exists. But this result may have been an artifact of accidents in the history of the Cosmos and the Earth, not a direct consequence of the laws of physics. That's true, but what is not true is that a greater predicted probability that life exists somewhere is enough to identify the more likely theories.

Such an opinion is just an artifact of a biased attitude of people who want to see that "things exist". But in our Universe and any other world, it is equally important that many other things don't exist. The anthropic and Boltzmann-brain charlatans are often tempted to say that a theory with a lot of stuff - many copies of everything etc. - is more likely because it produces a lot of copies of things that may evolve into life (or your brain) and other effects we observe, and they think that this feature increases the probability that these scenarios are valid.

What they're forgetting is that their frameworks, because of their having a lot of stuff, also include a huge amount of things that we don't observe, which lowers their likelihood to be correct. In fact, a typical theory with Boltzmann brains or supervast multiverses predicts many more things that aren't there, according to observations, than those that are there. And you know, a correct theory must predict not only that a chicken is born out of an egg at least somewhere in the multiverse. It must also predict that a car is not born out of this egg, unless it is a Kinder surprise egg. ;-)

In the discussion about the likelihood of the Standard Model vs the Minimal Supersymmetric Standard Model, I've explained that one must be very careful about philosophical arguments that have a different character in the two models: they could apparently "punish" one of the models hugely, relative to the other, but the resulting probabilities would depend on your choice of the "selection criteria". So these criteria are very dangerous and one should prefer experiments where both models predict statements of the same kind that may be compared (such as the question whether the W boson is heavier than a certain bound). In other words, one needs to choose "fair contests" to compare two models.

The same comment applies to the Boltzmann brain and anthropic considerations, but it becomes even more pressing. The anthropic and the Boltzmann brain people never resist the temptation to believe that a model with a lot of copies of something is more likely because it predicts a lot of copies of the good stuff. But they always forget that it predicts a lot of copies of the bad stuff, too.

In the Boltzmann brain case, the number of the bad (unobserved) things predicted by the crazy hypothesis is exponentially higher than the number of good (observed) things. If we're just a fluctuation, the predicted probability that our neighborhood is chaotic and manifestly follows no "grand set of rules" is at least "exp(S)" times greater than the probability that we would observe an apparent order in another experiment. Any kind of such a Boltzmann brain hypothesis is falsified by pretty much every single observation that we ever make.

The Boltzmann brain hypothesis is ludicrous and can't follow from any serious working theory created by mature physicists. A "fair contest" actually exists: simply use your theory to predict whether your next observation will be consistent with a long-term order of the Cosmos. The sane theories will predict "yes" while the Boltzmann brain theory will predict "almost certainly no": and it is therefore instantly falsified. But even many of the multiverse research directions fall into the same trap. Very huge bubbling multiverses can produce many planets where our life has a chance to appear. And their proponents think that a lot of "volume" or "Lebensraum" for intelligent beings is a good feature. But these models also produce a lot of celestial bodies where wrong particles live and wrong phenomena take place.

The latter objects reduce the probability that the scenario is valid, as long as you are at least a little bit balanced about the "positive" and "negative" observations and their influence on your priors. When you summarize these observations, it is very clear that you can't possibly obtain a more likely theory just by looking at theories with "more stuff" in them. The right theories (and vacua) should have the "right stuff", not "more stuff".

Fourth section

Let's return to Page's article. It gets even crazier. He starts the section with
Probability Symmetry Principle (PSP): If the quantum state is an eigenstate of equal number of observations of two different observations, then the probabilities of these two observations are equal.
The repetition of the word "observation" suggests a typo. But let's analyze this statement as it appears above. Imagine that you measure the spin of the same one electron N times and you obtain "up" N1 times and "down" N2 times.

The assumption of the "principle" is that a quantum state is the eigenstate of N1 and N2. If either N1 or N2 is equal to zero, it can be because the probability to get either "up" or "down" is zero. But let's assume that both N1,N2 are nonzero.

If a quantum state is an eigenstate of N1, the number of repetitions in which the spin will be "up" is predetermined. How is that possible? One possible explanation is that each measurement is separately predicted to be "up" or "down" with a probability of 100%. In that case, everything is known for certain and you shouldn't be talking about probabilities.

Probabilities only refer to the (idealized infinite) repetition of the same situation. And in the (N+1)-th measurement and the following ones, you won't know whether the measurement will be like the "up" measurements or like the "down" measurements, so you can't possibly say anything about the probability.

On the other hand, the results of individual measurements may be undetermined by the quantum state, but the state is such that the total number of "up" results, N1, is determined. This can only be the case if there exist sharp correlations - i.e. entanglement - between the individual measurements in the ensemble of N repetitions. There are "quota" that tell you that if you get "up" twice, you must get "down" for the third time, and so on.

Such correlations between the individual experiments mean that these "repetitions" are not really independent, so you're not allowed to assume that these experiments are repetitions of the "same" experiment. They're not the same experiment because some of them are correlated with some events in the past while others have a correlation with other events in the future. That's why you're not allowed to talk about probabilities of particular things, either.
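The two-qubit sketch below (the amplitudes are arbitrary) makes the point explicit: a state whose individual outcomes are undetermined but which is nevertheless an eigenstate of the total number of "up" results is necessarily correlated - both results being "up" never occurs.

```python
import numpy as np

# Toy example: a two-qubit state that is an eigenstate of the total
# number of "up" results (eigenvalue 1) with undetermined individual
# outcomes is necessarily entangled/anticorrelated. Basis order per
# qubit: (down, up); amplitudes 0.6 and 0.8 are arbitrary.

ket01 = np.array([0, 1, 0, 0], dtype=complex)   # |first down, second up>
ket10 = np.array([0, 0, 1, 0], dtype=complex)   # |first up, second down>
psi = 0.6 * ket01 + 0.8 * ket10                 # normalized: 0.36 + 0.64 = 1

n = np.diag([0.0, 1.0])                         # "up" counter for one qubit
I2 = np.eye(2)
N_up = np.kron(n, I2) + np.kron(I2, n)          # total number of "up" results

# psi is an eigenstate of N_up with eigenvalue 1...
assert np.allclose(N_up @ psi, 1.0 * psi)

# ...but each individual measurement is undetermined (0 < p < 1)...
p_up_first = np.real(psi.conj() @ np.kron(n, I2) @ psi)
assert 0 < p_up_first < 1                       # here 0.64

# ...and the outcomes are perfectly anticorrelated: "both up" never occurs.
P_both_up = np.kron(n, n)                       # projector onto |up, up>
assert np.isclose(np.real(psi.conj() @ P_both_up @ psi), 0.0)
```

The "quota" from the text are exactly these correlations: the repetitions are not independent copies of the same experiment.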

At any rate, the "Probability Symmetry Principle" is complete nonsense because it uses the word "probability" for some random ratios that don't satisfy the basic defining feature of a "probability", namely the repeated existence of the same situation. The statement is meaningless - and of course, whether such a meaningless statement would be true or false would be an entirely different question. If you give it the same meaning as Page does, it's wrong, as we will see.

The second principle offered in the section is the "Prior Rule Principle" which requires that the projection operators used to calculate probabilities not depend on the state. That's a bizarre statement because if you want to find out the appropriate mathematical expression for a projection operator that encodes a given property, you need to know the Hilbert space, its physics, and the links between the observables and the real observations quite well. For example, the property that "an electron is found in the micrometer vicinity of a proton" is expressed by very differently looking operators in non-relativistic quantum mechanics and in the Standard Model.

So both of these "principles" are silly but what Page is doing with them is even more stupid. Indeed, he considers an entangled state psi = b_12 |12> + b_21 |21>. The measurements in regions 1,2 give the results 1,2 or 2,1 - they're anticorrelated. Ordinary entanglement.

But the shocker comes when he uses his "Probability Symmetry Principle" to argue that all theories are always required to predict the same probability for both options allowed above, "12" and "21". Does he really want to claim that the absolute values of both "b_12" and "b_21" must be "1/sqrt(2)"? That would be a really bad joke. A basic postulate of quantum mechanics is the superposition principle which guarantees that all linear combinations of states are allowed. So if a principle forbids states unless they satisfy a nonlinear constraint (involving absolute values of the amplitudes), it's surely a wrong principle.

I've tried hard to determine how he got to his weird conclusions, and my verdict is that he misunderstands very basic things about the way how probabilities are calculated from wave functions and from density matrices. Quantum mechanics surely allows entangled states with different absolute values of the amplitudes. If a physicist doesn't know the wave function, but only knows that it is one of many orthogonal wave functions that have different probabilities, he calculates the result for each of them and takes the weighted average. That's what the density matrix calculus gives him, too. This is no rocket science.

You can see very easily and explicitly why the "Probability Symmetry Principle" is completely wrong. Imagine that a theory predicts that there are two planets that are separated from one another. Each planet contains one observer - otherwise identical to his counterpart. Both of them are going to measure the spin of an electron. Imagine that the theory predicts that the Universe prior to the measurements must be found in the pure state,
psi = 0.6 |12> + 0.8 i |21>
I only added the "i" to remind the readers that the phases can be arbitrary and that quantum amplitudes are complex. Now, this state clearly predicts that there is a 36% probability that the results "12" will be measured by the two observers but 64% probability that it will be "21".

If an observer knows that he is the first one, he has a 36% probability of getting "1". If he knows he is the second one, the probability is 64%. If he doesn't know which one he is, then indeed, he can't say what the probability is. If he has a good reason to think that he is a random one among the two, with a uniform probability distribution, his probability to get "1" is (36+64)/2 = 50%: he uses the "maximum ignorance" density matrix. But there's nothing preferred about this "uniform distribution".
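The numbers above are trivial to verify. Here is a sketch (the "first/second observer" labels follow the convention of the state above):

```python
import numpy as np

# Verifying the numbers in the text: for psi = 0.6|12> + 0.8i|21>, the
# Born probabilities are 36% for "12" and 64% for "21"; an observer who
# assumes (without justification) a uniform chance of being either copy
# gets (36 + 64)/2 = 50%.

ket12 = np.array([1, 0], dtype=complex)   # joint outcome "12"
ket21 = np.array([0, 1], dtype=complex)   # joint outcome "21"
psi = 0.6 * ket12 + 0.8j * ket21          # the arbitrary phase "i" included

p12 = abs(np.vdot(ket12, psi)) ** 2       # Born's rule: |amplitude|^2
p21 = abs(np.vdot(ket21, psi)) ** 2
assert np.isclose(p12, 0.36) and np.isclose(p21, 0.64)

# "First observer gets 1" is the outcome "12"; "second observer gets 1"
# is the outcome "21". The "maximum ignorance" average over the copies:
assert np.isclose((p12 + p21) / 2, 0.5)
```

Note that the 50% figure comes entirely from the extra uniform-distribution assumption, not from the quantum state.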

If he had a dynamical theory that would show that his ancestors randomly chose the planet, or something like that, he would know that the 50%:50% expectation would be the most sensible one. But if he has no such picture of his history (and such a picture can't even exist if the number of copies is infinite because there's no uniform distribution on infinite sets), the 50%:50% assumption is as good or as bad as any other one. He may also have a different theory that implies that his planet is the planet where the results "1" are pretty often. Dividing the odds equally among possibilities can't ever reduce your actual ignorance. You can't get any closer to the right length of the emperor's nose by averaging over the opinions of millions of people who haven't seen him.

Measurements and theories can sometimes strictly tell you what the right answer is. But if you don't have sufficient information about your initial or current state, scientific theories simply can't tell you the "only correct" probabilities that you are obliged to assume - and especially not the "only correct" probabilities that you are someone rather than someone else. If one is ignorant about his past or his identity, he is really ignorant, and tricks and random answers such as "uniform probabilities" don't and can't make him any less ignorant!

At any rate, it's just not true that every theory predicts that the probability of his getting "1" must be 50%. In other words, it's not true that the dumbest egalitarian version of the anthropic principle must be a part of every theory in physics. In fact, it is a part of no sensible theory in physics, except for the theories where the uniform distribution can actually be derived (e.g. for microstates in thermal equilibrium according to statistical physics).
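For the exceptional case mentioned above - microstates in thermal equilibrium - the uniform distribution really is derived, e.g. as the maximizer of the Gibbs entropy. A toy numerical check (a sketch, not a derivation; the helper names are mine):

```python
# Toy check: among distributions over N microstates, the uniform one
# maximizes the Shannon/Gibbs entropy -sum p ln p.
import math
import random

def entropy(p):
    """Shannon/Gibbs entropy of a probability distribution p."""
    return -sum(x * math.log(x) for x in p if x > 0)

N = 4
uniform = [1.0 / N] * N   # equal weights for N microstates

# Sample many random distributions; none should beat the uniform one.
random.seed(0)
best = 0.0
for _ in range(10_000):
    w = [random.random() for _ in range(N)]
    s = sum(w)
    best = max(best, entropy([x / s for x in w]))

print(best <= entropy(uniform) + 1e-9)  # True
print(round(entropy(uniform), 3))       # 1.386, i.e. ln(4)
```

This is the sense in which statistical physics earns its uniform distribution; anthropic reasoning has no analogous derivation.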

The rest of the paper is dedicated to replacing Born's rule with a new "principle" - one that completely denies the local character of physics and leads you to predict things according to various absurd considerations about the unphysical, unmeasurable copies of yourself that can't possibly influence you - and that can't possibly influence any properly calculated probability in your lab or planet, either.

The paper is just complete bullshit, and there are literally hundreds of such nonsensical papers on the arXiv already. Another one is the very next hep-th paper by the same author that predicts universal cosmic doomsday within 20 billion years.

This may have something to do with some unscientific religious craziness that the author believes in. But I just can't understand how someone can seriously believe such things. What happens in 20 billion years will depend on the current state of the Universe and the physical laws that will evolve it.

The world is not obliged to follow some simple predetermined dogmas of evolution. The character of our world and the odds of one fate or another were decided in the past and they were not decided in the future, or according to the future. The future is whatever evolves from the past and the present. Up to the dynamical laws that connect it with the present, the future is free.

The world has no simple purpose. It has no goal of creating the nicest beings, of killing them in the minimal possible time, or of allowing them to have the minimal or maximal number of descendants. Nothing like that (unless a similar mechanism is actually demonstrated by some independent rational arguments and evidence). The world evolves as its pretty complex dynamical laws, combined with the known complex information about the initial or current state, demand. The result is complicated, too.

The basic fallacy is the same one as in the previous paper. Page thinks that predictions of outcomes in physics depend on the copies of yourself, including the copies that will live billions of years in the future. But the reality is very different. It's not only true that the correctly calculated predictions don't have to depend on the other copies: in fact, they are not even allowed to depend on any features of the copies, or their very existence.

For spatially separated copies of yourself, this condition follows from locality. For the copies of yourself that will live in the future, the condition is even stricter and follows from causality. Your current evolution simply can't be affected by the fate of some possible humans who may exist billions of years in the future. Anyone who uses infinite products over his copies, or some averaging over the (unknown) future, to make predictions about the world where we live is simply insane and violates basic rules of physics, science, and logical thinking.

And that's the memo.