Saturday, August 27, 2011

Particle physicists must retake the vacuum selection problem

Vacuum selection is a short-distance problem and cosmologists' buttocks should be kicked out of this serious business

Alexander Vilenkin released a new paper,

The Principle of Mediocrity,
based on a lecture he gave in April. I don't want to spend too much time with this flawed mode of reasoning (see the landscape category of TRF) but because Vilenkin's paper was what made me write this not-so-short text in the first place, I have to spend some time with it.
Off-topic: Edward Witten celebrates 60th birthday today (originally posted on Friday): congratulations! See 70 minutes of strings with Witten and other Witten articles for a recent coverage of this eminent theoretical physicist.
The dominant theme of the anthropic reasoning - or, more precisely, the anthropic absence of reasoning - is that its proponents are mediocre people and they suppose that almost the whole Universe has to be composed of mediocre observers, too. ;-) For example, 97% of corrupt climate scientists on the Earth are climate alarmists. So, the anthropic reasoning goes, most of the extraterrestrial aliens have to be climate alarmists as well for us to be average and mediocre.

It follows that the extraterrestrial aliens will attack our planet because of the rising CO2 concentrations they are observing. (Of course, the inconvenient truth that in reality, environmentalism represents a lethal tumor for every civilization that has ambitious goals in space has to be overlooked.)

Missing justification of the anthropic logic

I chose the environmentalists to make the defective character of their thinking obvious but the anthropic people suffer from defective reasoning in general. The main problem with the assumption that we are "typical observers" (and what "we" means is subjective, anyway) is that there is absolutely no justifiable reason why it should be the case. In fact, there seems to be plenty of evidence that the assumption is wrong.

It could only be true that there exists a statistical reason to believe that we belong to a majority if there existed a "democratic system" or "thermalization" - some actual mechanism that gives every observer in the Universe the same "vote", the same amount of "consciousness", or the same power to realize that it's him.

Such a process exists in the case of thermalization. In statistical physics, dynamical systems randomly visit the whole phase space - or a subspace of it corresponding to fixed values of conserved quantities - which guarantees, via the ergodic logic, that all microstates that satisfy a given selection criterion will be equally likely to occur at some point in the future. But one actually needs time for thermalization - time for all the microstates to become equally likely, time for the entropy to be maximized. This final outcome can't ever occur "a priori". The "democracy between microstates" surely doesn't apply to initial states.
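A toy sketch of this mechanism (my illustration, with made-up parameters, and of course nothing to do with stringy dynamics): a random walk on a finite set of microstates forgets its special initial state only after enough time has passed, and only then do the "egalitarian" occupation frequencies emerge.

```python
import random

def thermalize(n_states=8, n_steps=200_000, seed=1):
    """Random walk on a cycle of n_states microstates, starting from a
    fixed (atypical) initial state.  Ergodicity makes the long-run
    occupation frequencies approach the uniform distribution -- but
    only after enough time has passed."""
    random.seed(seed)
    state = 0                      # the special initial condition
    counts = [0] * n_states
    for _ in range(n_steps):
        state = (state + random.choice((-1, 1))) % n_states
        counts[state] += 1
    return [c / n_steps for c in counts]

freqs = thermalize()
# In the long-time limit every frequency approaches 1/8 = 0.125;
# at early times the walk still "remembers" where it started.
```

The uniformity is an output of the dynamics run for a long time, not an input - which is exactly why it cannot be postulated for initial states.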

In the same way, most citizens of the U.S. may perceive that they belong to a majority. But such a conclusion is only meaningful because we know how to count the U.S. citizens and we have a reason to attribute the same weight to each of them. Such conditions require some "constant exchange of information" between the citizens. So this reasoning can't be applied to the whole Universe whose citizens don't respect a "global democracy" or "global egalitarianism" - and who can't even interact with each other. In fact, it's causally impossible for most pairs of such observers.

So there's no reason to expect that "we" are typical when it comes to any property that describes us. There exist lots of quantities we may define - and according to most of them, we will be atypical. The "typicality" depends on a particular "egalitarian measure" but in reality, all other measures are at least as justifiable as the "egalitarian measure". And because most measures are not "egalitarian", assuming that we obey "mediocrity" with respect to an "egalitarian measure" is almost certainly guaranteed to produce wrong predictions.

The basic fact - that the anthropic reasoning is not only unjustified but also demonstrably wrong - is also manifested in the fact that its formulae are ill-defined whenever you try to refine some details. For example, there don't exist any "egalitarian measures" on infinite sets at all; the normalization factor would have to be \(1/\infty\) which vanishes. So there can't possibly exist any democracy among an infinite number of "voters". All attempts to even "define" what the mediocrity principle could be saying on an infinite space - and a success in the attempt to define such a recipe would still be very far from establishing that such a speculation is actually valid - end up in the mud of mathematically inconsistent and ambiguous expressions. This mud is known under names such as the "problem of the measure" but all this hassle is just the tip of an iceberg - the iceberg is a much more fundamental flaw of this whole reasoning.
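The obstruction can be written in one line: a uniform ("egalitarian") probability distribution on a countably infinite set of voters is mathematically impossible, because
\[ P(n) = c \,\,\,\forall n\in{\mathbb N} \quad\Rightarrow\quad \sum_{n=1}^\infty P(n) = \begin{cases} 0 & \text{if } c=0,\\ \infty & \text{if } c>0, \end{cases} \]
so the total probability can never equal one, whatever constant \(c\) you pick.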

The anthropic people fail to realize that there is nothing special about the "egalitarian measures"; they have no justifications for them; there can't be any mechanism that would have imposed them (it's often causally impossible); they can't really be accurately defined; those folks don't seem to realize that any conceivable theory with a rule explaining "in what sense we should be special" is at least as good as the mediocre explanations.

I've covered all those things many times on this blog and I am already fed up with people's being so incredibly slow that they don't get these simple points - which require minutes to be understood by an intelligent person - even after decades of thinking about this problem.

Instead, let me make this general point:

The anthropic rubbish has spread in fundamental physics because cosmologists, having become excessively self-confident after the 1998 observations of a nonzero cosmological constant, launched an attack to take over the vacuum selection problem.

Sociologically speaking, fundamental physics has been unified to a large extent but you may still distinguish the people who have been largely trained in cosmology and astrophysics - and general relativity that underlies these disciplines - from the people who have been trained in particle physics - and quantum field theories that provide us with a key formalism to approach these issues (which includes most approaches to string theory). The first group is mostly thinking about the long-distance (infrared) phenomena; the second group is mostly thinking about the short-distance (ultraviolet) phenomena.

As recently as 15 years ago, the first group would be associated with cosmology which used to be a quasi-religious low-precision scientific discipline that was full of superstitions, religious sentiments, philosophy, and similar crap. Only the supernova and other observations of the late 1990s promoted cosmology to a standard high-precision science.

But the cosmologists' innermost reasoning hasn't been updated yet. The anthropic junk - going back to empirically unjustified speculations from the 1970s (and in fact, even the Middle Ages) - is a major example of this fact. The cosmologists decided to launch an attack on a holy problem in particle physics and string theory - the vacuum selection problem. Having found a few traitors among the string theorists (greetings, Lenny et al.), they have pretty much controlled the literature on the vacuum selection problem in the early 21st century.

If you think about these issues rationally, it's pretty clear that cosmology - a science of extremely long distances - has nothing to do with the vacuum selection problem at all. So people like Vilenkin should really and kindly shut their mouths when it comes to these important questions.

Why? It's simple. It's because the reason why our Universe has the particle masses, couplings, and/or the shape of extra dimensions it has is linked to what it was doing at the very beginning, when it was small and when it had a tiny or vanishing entropy. So your knowledge about the behavior of the Universe when it's very large can't help you learn anything about the origin of our Cosmos at all.

Indeed, the tiny Universe eventually grew very large - but this growth is something that we actually do understand pretty well and it is a consequence, not a cause, of some properties of the initial state of the Universe.

The actual "key unknown" is an initial state of the Universe - one given either by a preferred pure state; or by a preferred mixed state - right at the beginning. What remains unknown are the precise initial conditions of the Universe. Obviously, this preferred initial state had a de facto vanishing entropy; if it had a nonzero entropy, we could always ask "what was before that". The condition that the entropy can't be negative is the only consideration that stops us at the \(S=0\) point.

Hartle-Hawking states

The mediocre people want to imagine a statistical distribution on the landscape. Each point in the landscape (each vacuum) either gets an equal weight; or each point gets a weight that quantifies "the number of observers that the vacuum produces", in one way or another. Of course, this number may be defined in many ways and there are many reasons to think that it's ill-defined. But once again, the basic problem is that the appearance of observers in a vacuum is a consequence of the properties of the vacuum, so it can't possibly be a cause that decides about the selection of one vacuum or another.

The actual dynamics of string theory may produce - and almost certainly does produce - a completely different probabilistic distribution on the landscape - one that has nothing to do with observers and that takes many properties of the compactification into account. For example, it may prefer vacua with a large or small number of light fields, large number of hierarchies, small or large values of the vacuum energy, and so on.

I am not saying that the capability of a string vacuum to produce observers has no impact on the emergence of observers who realize "I am here" in each of them; of course, such an impact does exist: our Universe can't be one that doesn't allow for life because we wouldn't be here. But the point is that aside from this "anthropogenic" impact, there may be a much more important impact of the "natural selection" - natural processes that make some vacua vastly more likely than others. Moreover, saying that our Universe must be such that life exists is not a theoretical prediction; it is an observation that constrains a possible theory. When we adopt the anthropic arguments, the "theory" that would produce "predictions" to be compared with the observations is completely absent.

The dependence of the privileged Hartle-Hawking state on the "point in the stringy landscape" is something that has not been studied much - in fact, the papers about this topic are nearly non-existent despite the fact that this is the only scientifically legitimate way to find out some theoretical predictions (not just reinterpretations of observed properties of the Universe around us) about the compactifications of string theory that are preferred in Nature.

Some fun technical stuff

Imagine that there exists a preferred state on the stringy Hilbert space and it also tells us which compactifications are far more likely than others. This state will be a refinement of the Hartle-Hawking state - and it may be viewed as a gravitating counterpart of a ground state of a quantum field theory. You should try to answer the following questions, among many others:
  • Are vacua with lots of independent fluxes (big homology) preferred or disfavored?
  • Are vacua with some massless fields or lots of massless fields preferred or disfavored?
  • Are vacua with some light fields or lots of light fields preferred or disfavored?
  • Are simple topologies of the compactifications preferred or suppressed?
  • Does the measure allow or encourage the Universe to sit near the local saddle points (relevant for inflation)?
  • How many excitations does the measure want to add to a vacuum?
The anthropic ideology offers some answers - without any dynamical justification - to some of these questions. It surely prefers complicated topologies with lots of cycles and lots of independent fluxes because those are "numerous" and according to the anthropic logic, this is a good thing.

In reality, it may be a neutral thing or, which I find even more likely, a very bad thing that punishes the candidate vacua.

Imagine that you have a theory with a moduli space such as
\[ SO(2,{\mathbb R}) \backslash SL(2,{\mathbb R}) / SL(2,{\mathbb Z}). \] This is of course the regular moduli space e.g. for the \(N=4\) gauge theory or type IIB string theory in ten dimensions which is relevant whenever there is an \(SL(2,{\mathbb Z})\) duality. There is a measure on this moduli space and it's reasonable to assume that the Hartle-Hawking state is linked to a uniform measure on such a space.
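As a sanity check (my own numerical sketch, not part of the original argument): the natural hyperbolic measure \(dx\,dy/y^2\) gives the standard fundamental domain of \(SL(2,{\mathbb Z})\) on the upper half-plane the finite volume \(\pi/3\), so a normalized uniform measure on this moduli space really does exist.

```python
import math

def modular_domain_area(n=200_000):
    """Integrate the hyperbolic measure dx dy / y^2 over the standard
    fundamental domain {|x| <= 1/2, x^2 + y^2 >= 1} of SL(2,Z) acting
    on the upper half-plane.  The inner y-integral is done exactly:
    int_{sqrt(1-x^2)}^{infinity} dy / y^2 = 1 / sqrt(1 - x^2)."""
    total = 0.0
    dx = 1.0 / n
    for i in range(n):
        x = -0.5 + (i + 0.5) * dx   # midpoint rule on [-1/2, 1/2]
        total += dx / math.sqrt(1.0 - x * x)
    return total

area = modular_domain_area()
# area converges to pi/3, about 1.0472: a finite volume, so the
# uniform probability measure can be normalized.
```

The finiteness is what makes the "uniform measure" assumption consistent here - in contrast with the infinite "voter" sets discussed above.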

However, you must compare this simple moduli space with complicated moduli spaces. There also exists a measure on an \(N\)-dimensional moduli space where \(N\) is large. The uniform one may be preferred once again. But what about the relative odds that Nature decides to sit in a moduli space with a high dimension versus one with a low dimension?

Well, I think that the right answer is that the moduli spaces with a high dimension are hugely suppressed. Note that this is pretty much the opposite logic than the anthropic reasoning. The simplest way to explain why I believe this to be the case is to notice that typical natural manifolds with huge dimensions have very tiny volumes in some natural units. For example, the volume of the \(N\)-dimensional ball of radius \(r\) is
\[ V_N = \frac{1}{(N/2)!} \pi^{N/2} r^N. \] When \(N\) is very large, the factorial in the denominator is the key player - because \(x! \approx \sqrt{2\pi x} (x/e)^x \) - and it suppresses everything else. The volume of the \(SU(M)\) group manifold decreases even more quickly as \(M\) increases.
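To see how quickly the factorial wins, here is a minimal numerical check (my illustration; the formula is the one above, with the half-integer factorial \((N/2)!\) computed as \(\Gamma(N/2+1)\)):

```python
import math

def ball_volume(n, r=1.0):
    """Volume of the n-dimensional ball of radius r,
    V_n = pi^(n/2) r^n / (n/2)!, with the half-integer factorial
    evaluated as Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

# The factorial in the denominator eventually beats the pi^(n/2)
# growth: the unit-ball volume peaks near n = 5 and then falls off
# super-exponentially with the dimension.
volumes = {n: ball_volume(n) for n in (1, 2, 5, 10, 50, 100)}
```

For the unit ball, \(V_{100}\) is already of order \(10^{-40}\), which is the kind of suppression the text has in mind for high-dimensional moduli spaces.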

Of course, the volume is "dimensionful" if the lengths on the moduli spaces are dimensionful, and one should make sure that we're not comparing apples and oranges: how do the natural units depend on the dimension of the moduli space? There should exist controllable ways to determine the relative normalization of the "measure" or the "privileged pure state". One of them is to look at topology-changing transitions.


Imagine that you study the behavior of the preferred pure state near the conifold point. The transition through the conifold may change the number of 2-cycles and 3-cycles of your Calabi-Yau three-fold. Your task is to write down some natural equations for the preferred state and study how it behaves in the vicinity of the conifold point - where it probably satisfies some nice conditions you should clarify.

To do so, you will have to master the question of how the preferred pure state behaves as a function of not only the massless fields but also the very light (and perhaps other) fields. And you may need to know how it depends not only on the zero-momentum, vacuum values of the fields but also on their nonzero-momentum modes.

When you extrapolate the preferred pure state to the whole moduli spaces, you should be able to say whether Calabi-Yau manifolds with higher or lower Hodge numbers have a higher overall probability to be realized in the preferred state. My guess is that Calabi-Yau manifolds with high Hodge numbers will be heavily suppressed relative to those with low Hodge numbers.

A broader point is that there probably exists a completely dynamical - Hamiltonian-dependent or Hartle-Hawking-related, if you wish (HHH: Pilsen plays the Champions League in the H group with Barcelona, Milan, and Borisov - not bad) - analysis that may justify the assumption that Nature wants to be simple. In this case, it could mean that "simple enough compactifications" - which could be defined e.g. by low values of the Hodge numbers and the number of independent fluxes etc. - are heavily preferred by the a priori measure that decided about the initial state of Nature.

Some of these questions are hard and much of the required maths may not be developed at this moment. Other pieces of maths may be known but their relevance for physics hasn't been appreciated. The vacuum selection will always be constrained by our observations that may be getting more accurate. However, to get a fully theoretical prediction about the preferred compactifications, one has to study Hartle-Hawking-like questions such as those above. There is no other way. Speculations about election systems in an intergalactic democracy involving extraterrestrial environmentalists - one that democratically tries to determine what our origins should have been (too late!) - can't replace science.

And that's the memo.


snail feedback (5) :

reader Mike said...

If you flip a fair coin 100 times, what can you say about the outcomes and why? (It seems to me, to be able to say anything, you must make an assumption about typicality. Note also this is why, if you see a coin flipped 100 times and they're all heads, you can assert the coin is probably not fair. It seems to me the typicality assumption underlies testing / confirmation of any probabilistic theory, including quantum mechanics.)

reader Luboš Motl said...

Dear Mike, you have misunderstood everything.

In the case of 100 flipped fair coins, the probability of each of the \(2^{100}\) outcomes is the same, \(2^{-100}\), which is what allows you to do this statistics and say that you will get \(50\pm 5\) heads (the standard deviation is \(\sqrt{100\cdot\tfrac 14}=5\)).
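A minimal numerical sketch of this point (my illustration): the fair-coin ensemble is well-defined precisely because every one of the \(2^{100}\) histories carries the same known weight.

```python
import math

def binomial_pmf(n, k, p=0.5):
    """Probability of exactly k heads in n flips of a coin with
    heads-probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n = 100
sigma = math.sqrt(n * 0.5 * 0.5)    # standard deviation = 5 heads

# Probability that the head count lands within one sigma of the
# mean of 50, i.e. in 45..55 (roughly 73 percent):
p_within = sum(binomial_pmf(n, k) for k in range(45, 56))

# ...while 100 heads in a row has probability 2^-100, about 8e-31,
# which is why observing it lets you reject the fair-coin hypothesis.
p_all_heads = binomial_pmf(n, 100)
```

No counterpart of these well-defined weights exists for the "who am I" question, which is the asymmetry discussed below.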

But the probability that an observer finds himself to be Rick Perry or a particular one among 1 quadrillion people living on the recently found diamond planet is not the same. In fact, this probability ratio is completely unknown and largely meaningless, so one cannot make any predictions analogous to the fair coin when it comes to similar questions.

There are many ways to see that these two situations are not analogous at all. The experiment with a 100-times flipped fair coin may be repeated many times. However, the experiment "who am I" (am I Rick Perry or guy number 314159 from the diamond planet?) cannot be repeated at all because that would require reincarnation. So there's no statistics - and surely no simple statistics - that applies to such questions.

reader John said...

The actual "key unknown" is an initial state of the Universe - one given either by a preferred pure state; or by a preferred mixed state - right at the beginning. What remains unknown are the precise initial conditions of the Universe. Obviously, this preferred initial state had a de facto vanishing entropy; if it had a nonzero entropy, we could always ask "what was before that". The condition that the entropy can't be negative is the only consideration that stops us at the S=0 point.

Why should there be an initial state of the universe or a beginning at all? We know that at some point there was a very low entropy state, but why couldn't that have been a flux from a higher entropy state, such that the entropy was never at zero?

reader Luboš Motl said...

Dear John,

by "flux", do you mean a statistical fluke reducing the entropy?

It's because the second law holds and reductions of entropy are extremely unlikely. Every time the entropy increases by \(\Delta S\), the probability of the history gets reduced by the tiny factor of \(\exp(-\Delta S/k)\).

So the history of our Universe almost certainly had no decreases of entropy that would be much larger than Boltzmann's constant. Even if your version of the cosmology had such points, you wouldn't really explain anything because you would still have to ask "where did the particular higher-entropy state before that come from?".
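To get a feeling for how tiny this suppression factor is, here is an order-of-magnitude sketch (my own example numbers):

```python
import math

# The probability of a fluctuation that lowers the entropy by
# Delta S scales like exp(-Delta S / k).  For macroscopic Delta S
# this underflows any float, so work with the base-10 logarithm.

K_BOLTZMANN = 1.380649e-23   # J/K (exact value in the 2019 SI)

def log10_fluctuation_probability(delta_S):
    """log10 of exp(-delta_S / k) for an entropy drop delta_S in J/K."""
    return -delta_S / (K_BOLTZMANN * math.log(10))

# A modest macroscopic entropy drop of 1 J/K (roughly the entropy
# of melting a gram of ice) gives log10(P) of order -3e22: the
# probability is 10 to the minus thirty sextillion.
lg = log10_fluctuation_probability(1.0)
```

This is why histories containing macroscopic entropy decreases can be discarded with complete confidence.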


reader Luboš Motl said...

"increases" should be "decreases" once in my comment.