Monday, October 03, 2011

Bubble of nothing and other catastrophes

We often hear about hypothetical catastrophes that may influence countries, regions, continents, the climate of our blue planet, or the planet as a whole. But physics has seriously studied more severe hypothetical catastrophes, including catastrophes that destroy the whole Universe.

Chicken Littles are too little to think about such matters; one has to be either a Chicken Great or a physicist to do so.

Either all the matter as we know it may be liquidated at the speed of light, starting from an epicenter; or space itself may cease to exist as the "nothingness" spreads. This blog entry is dedicated to these issues.

The eminent late physicists' physicist Sidney Coleman pioneered this "cosmically catastrophic" subdiscipline of high-energy physics. Of course, he was a very creative yet quantitative man, so this science is not about whining or calls for action ;-); it's primarily about methods to accurately evaluate the probability that a certain kind of catastrophe will occur in a given type of universe.




Sidney began to write papers about the decay of the "false vacuum" in 1977:

Fate of the false vacuum: Semiclassical theory by Sidney Coleman

Fate of the false vacuum. II. First quantum corrections by Sidney Coleman and Curtis Callan
These papers appeared in PRD which means FART in Czech. The first paper is more important because it evaluates the leading approximation of the probability of the catastrophe as \(\exp(-S)\) where \(S\) is given. The second paper computes the correction factor \(A\) in \(A\exp(-S)\) which influences the probability less dramatically than the exponent \(-S\).

Refining the tunneling effect

You may have heard that the decay of the vacuum – a catastrophic deterioration of the empty Universe – has something to do with the tunneling effect in quantum mechanics. But how are they related?

Imagine that you're a sheriff in Nashville, Tennessee and your fellow cops have just caught a dangerous criminal who wanted to launch 24 hours of terror (much like the villains in Michael Crichton's book, State of Fear) in order to earn his first billion dollars. Your group of enforcement officials has placed the criminal inside a prison cell that is represented by the following potential:



[Figure: a potential curve \(V(x)\) with a shallower local minimum labeled "false" and a deeper global minimum labeled "true", separated by a barrier. The original caption: "This picture doesn't show breasts".]

Because he has been inventing local tragedies in order to become powerful, people have been calling your criminal a "false prophet". His current location in the prison cell is therefore represented by the label "false". It is a local minimum. If the prisoner (particle) doesn't have enough kinetic energy, he will only hysterically fluctuate in the vicinity of this local minimum, "false". Classically, he can never jump out of his prison cell, i.e. jump over the potential barrier that separates "false" and "true".

However, as you know, quantum mechanically there is a nonzero probability that if the prisoner keeps on hitting the wall of the prison cell with his head, he will eventually appear on the opposite side of the wall. In the WKB approximation, the probability that this will occur is approximately
\[ {\rm Prob}\approx A \exp(-S) \] where the exponent \(-S\) is given by
\[ S = \frac{2}{\hbar} \int_a^b \,{\rm d}x \sqrt{2m(V(x)-E)} \] where \(a,b\) are the values of \(x\) at which \(V(x)=E\), i.e. the boundaries of the interval \((a,b)\) (strictly in between "false" and "true") that is classically inaccessible with the given limited energy \(E\).
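To get a feeling for the numbers, the WKB exponent is easy to evaluate numerically. Here is a minimal Python sketch for a hypothetical toy barrier (my illustrative choice, not the curve from the picture), in units with \(\hbar=m=1\):

```python
import numpy as np

# Hypothetical toy potential (an illustrative choice, not the figure's curve):
# a double well V(x) = (1 - x^2)^2 with minima at x = +-1 and barrier V(0) = 1.
def V(x):
    return (1.0 - x**2) ** 2

E = 0.2  # particle energy, below the barrier top

# Classically forbidden region between the wells: the set of x with V(x) > E
x = np.linspace(-0.999, 0.999, 200001)
xs = x[V(x) > E]
a, b = xs[0], xs[-1]  # turning points, where V(x) = E

# WKB exponent S = (2/hbar) * integral_a^b dx sqrt(2 m (V(x) - E))
f = np.sqrt(2.0 * (V(xs) - E))
S = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))  # trapezoid rule
prob = np.exp(-S)  # leading tunneling probability, up to the prefactor A

print(f"a = {a:.3f}, b = {b:.3f}, S = {S:.3f}, exp(-S) = {prob:.3g}")
```

Raising \(E\) toward the barrier top shrinks the forbidden interval and pushes the tunneling probability toward one, as the formula suggests.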

You may either interpret the exponent as the result of a calculation of the exponentially decaying value of the wave function (in inaccessible regions, the wave function doesn't oscillate; instead, it exponentially increases or decreases); or you may derive \(S\) as the action of an instanton in ordinary quantum mechanics. The instanton is really a "kink", a solution \(x(t_E)\) where \(x\) depends on the Euclideanized time \(t_E\) in such a way that \(x(\pm\infty)\) converges to \(x_{\rm true}\) and \(x_{\rm false}\), respectively. You may view \(x(t_E)\) as the actual motion of a particle in a potential energy profile that is inverted relative to the picture, \(V_E = -V(x)\).

It sounds good. We deal with ordinary quantum mechanics; if the prisoner keeps on trying, he will eventually emerge on the other side of the wall. But when he does, it doesn't cause the destruction of the Universe, does it? Well, it may.

We want to switch from quantum mechanics – which is really a 0+1-dimensional quantum field theory (whose variables such as \(x\) depend on 0 spatial dimensions and 1 time) – to a quantum field theory, e.g. one in 3+1 dimensions whose variables such as \(\Phi\) are functions of the whole spacetime. The 3-dimensional space has lots of points. We may approximate them by a large number of criminals who are spread all over the space.

So imagine that there's more than one prisoner. He has personally trained 3,000 similar criminals who almost uniformly cover the whole surface of the Earth (it should be the full 3-dimensional space, but you get the point). Each of them was placed in a similar prison cell by the local enforcement officials. Each of them has a chance to escape.

However, in the field theory case, we don't just face 3,000 independent dangerous criminals. They're conspiring. Technically speaking, they're "coupled". I don't want to scare you but the logic is that if a criminal in a state manages to tunnel through the wall of his prison cell, he may liberate his comrades in the adjacent states and countries. They may do the same thing once they get out of the prison, and so on.

You may see that this is a real global threat because if one of them gets out, he may "liberate" the whole organization. Well, as far as I know, this is the first time in Al Gore's life when he did something useful: he served as a tool to explain a point about quantum tunneling.

Why are the members of the organization "coupled"? It's because of the terms \( (\nabla \Phi)^2 \) and their generalizations in the energy formula for local quantum field theories. Such terms encourage the field \(\Phi(x)\) to be constant and not highly variable. So these terms generate forces by which the criminals outside the prison (at the "true" point) may drag their comrades who are still arrested and help them to get out as well.
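A toy model makes the "conspiracy" quantitative. Below is a minimal Python sketch (my own illustrative discretization, not a formula from the papers) of a 1D lattice of coupled "prisoners" \(\phi_i\) whose gradient terms \((\phi_{i+1}-\phi_i)^2\) play the role of \((\nabla\Phi)^2\): a small liberated block costs wall energy, a large one is energetically favored:

```python
import numpy as np

# Hypothetical 1D lattice energy: E = sum_i [ (phi_{i+1}-phi_i)^2/2 + V(phi_i) ]
# with V(phi) = (phi^2-1)^2/4 + eps*(phi-1)/2, so that
# V(+1) = 0 ("false") and V(-1) = -eps ("true"); eps > 0 tilts the well.
eps = 0.3

def V(phi):
    return (phi**2 - 1.0) ** 2 / 4.0 + eps * (phi - 1.0) / 2.0

def energy(phi):
    return np.sum(np.diff(phi) ** 2 / 2.0) + np.sum(V(phi))

N = 100
E_false = energy(np.full(N, 1.0))  # everyone still sits in the false vacuum

def delta_E(n):
    """Energy cost of a 'liberated' block of n sites at phi = -1."""
    phi = np.full(N, 1.0)
    phi[25:25 + n] = -1.0  # sharp block in the middle: two domain walls
    return energy(phi) - E_false

print(delta_E(5), delta_E(50))  # small block costs energy, big one gains
```

With these numbers the two walls cost \(4\) units while each converted site gains \(\epsilon=0.3\), so blocks longer than roughly 13 sites are favored: the lattice analog of a critical bubble.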

How do we calculate the probability (using the tools of a quantum field theory) that someone will get out and will start a "revolution" that will result in the "liberation" of all these dangerous people? Well, the right technical tool is, once again, an instanton.

Now I should explain what an instanton is. It sounds like an "object" but it is not an ordinary object, one that lasts. Instead, the word is clearly related to an "instant of time", so this "object" only exists at one moment. It is localized not only in space (much like normal objects) but also in time. More precisely, it is localized in the Euclidean time.

An instanton is a solution \(\Phi(x,y,z,t_E)\) to the classical equations of motion of a field theory – I am talking about a field theory describing one scalar field with the potential energy given by the curve on the picture above but the generalization to more complex field theories is straightforward. And this solution must approach a "trivial configuration" (a constant, vacuum-like value of \(\Phi\) or its gauge transformation) whenever
\[ x^2 + y^2 + z^2 + c^2 t_E^2 \to \infty. \] However, when the distance from the point \((0,0,0,0)\) is smaller than or comparable to a typical distance scale \(a\), the "radius of the instanton", the solution is nontrivial and \(\Phi(x,y,z,t_E)\) may do complicated things. The most familiar instantons exist in Yang-Mills (gauge) theories and they lead to some interesting but "peaceful" new processes (such as the so-called 't Hooft interaction). However, we're going to look at some lethal instantons now.

Why do we consider such solutions at all?

Feynman's path integral approach to quantum theories offers the most straightforward answer. Feynman tells you to "sum over all histories" of your physical system, over all configurations of its degrees of freedom in the spacetime. In the classical limit, the most important contributions are the contributions of the histories that are very close to the minima (or stationary points) of the action because this is where the complex phases tend to interfere constructively. For all other histories, the factor \(\exp(iS/\hbar)\) is a nearly random, quickly oscillating complex unit and such random numbers average out to nearly zero (destructive interference).
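One can watch the constructive-vs-destructive interference numerically. The following Python sketch (a generic illustration with my own toy "actions", not a calculation from the text) compares \(\int e^{iS(x)/\hbar}\,{\rm d}x\) for an action with a stationary point against one without:

```python
import numpy as np

hbar = 0.01
x = np.linspace(-5.0, 5.0, 400001)  # grid much finer than the phase oscillations
dx = x[1] - x[0]

# Toy "action" with a stationary point at x = 0 ...
S_stat = 0.5 * x**2
# ... and one with no stationary point at all (the phase rotates uniformly)
S_lin = x

I_stat = np.sum(np.exp(1j * S_stat / hbar)) * dx
I_lin = np.sum(np.exp(1j * S_lin / hbar)) * dx

# Stationary phase predicts |I_stat| ~ sqrt(2*pi*hbar) ~ 0.25, while the
# purely oscillating phases of S_lin average out to something of order hbar.
print(abs(I_stat), abs(I_lin))
```

As \(\hbar\to 0\) the gap widens: the stationary-point contribution scales like \(\sqrt{\hbar}\), the non-stationary one like \(\hbar\).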

Do we mean local minima or global minima? Well, that's a subtle question. The most important contributions come from the global minima such as "true" on our only graph because, using the Euclideanized language, \(\exp(-S)\) is really largest if \(S\) is the global minimum, the minimum possible value it may have for the given boundary conditions (that determine the initial state as well as the final state). However, it's still true that even around a local minimum, the phases add constructively, isn't it? So even the other local minima such as "false" should be giving some contributions to the probabilities of various processes.

(The phase is also nearly constant near a maximum or another stationary point of the action. The generalizations of instantons that are maxima of the action with respect to a few variables and minima with respect to other variables are actually known as sphalerons.)

These contributions are often negligible relative to the global minima's contributions because \(\exp[(S_1-S_2)/\hbar]\) is negligible if \(S_1,S_2\) are finite, \(\hbar\to 0\), and \(S_1-S_2\) is negative. However, there exists an exception: if the leading approximation based on the global minimum contributes zero to the probability of a given process, the "subleading" contribution from other local minima may actually become the first factor that makes the total result nonzero, so it's very important for the qualitative behavior of the system! It may change the adjective "impossible" to "possible" for certain critical rare processes.

Feynman and quantum mechanics tell you that the quantum world may do whatever it may do before you measure it. So if your rules to calculate quantum field theory probabilities lead you to sum over all configurations of fields in a Euclidean spacetime, you must also sum over the solutions that happen to have an "instanton" sitting somewhere in the middle of the Euclidean space. Those contributions may be important.

One may show that such nontrivial solutions exist for one-scalar field theories with potentials similar to the potential on our picture. Moreover, one may show that these instantons are spherically symmetric (for reasons similar to those that make soap bubbles spherical). They converge to "false" at infinity but they visit a region near "true" as you probe the central region of the instanton. And they're solutions. So they contribute to the path integral.
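Finding this spherically symmetric instanton (the "bounce") is a classic shooting problem: the O(4)-symmetric field \(\Phi(\rho)\) obeys \(\Phi'' + \frac{3}{\rho}\Phi' = V'(\Phi)\), i.e. it rolls in the inverted potential from near "true" at \(\rho=0\) to "false" at \(\rho\to\infty\). Here is a minimal Python sketch with a hypothetical tilted quartic potential (my illustrative parameters, not Coleman's), using the overshoot/undershoot method:

```python
import numpy as np

# Hypothetical tilted double well: V(phi) = (phi^2-1)^2/4 + eps*(phi-1)/2,
# so V'(phi) = phi^3 - phi + eps/2; "false" vacuum near +1, "true" near -1.
eps = 0.3
def dV(phi):
    return phi**3 - phi + eps / 2.0

# The three roots of V': true vacuum < barrier top < false vacuum
phi_true, phi_top, phi_false = np.sort(np.roots([1.0, 0.0, -1.0, eps / 2.0]).real)

def shoot(phi0, rho_max=40.0, drho=2e-3):
    """Integrate phi'' + (3/rho) phi' = V'(phi) from phi(0)=phi0, phi'(0)=0.
    Return +1 (overshoots past "false") or -1 (undershoots / turns back)."""
    phi, dphi, rho = phi0, 0.0, drho
    while rho < rho_max:
        dphi += (dV(phi) - (3.0 / rho) * dphi) * drho  # semi-implicit Euler
        phi += dphi * drho
        rho += drho
        if phi > phi_false:
            return +1            # rolled past the false vacuum: overshoot
        if dphi < 0.0:
            return -1            # turned around before reaching it: undershoot
    return -1                    # never made it within rho_max: undershoot

# Bisection: releasing near "true" overshoots, releasing near the top undershoots
lo, hi = phi_true + 1e-3, phi_top
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        lo = mid
    else:
        hi = mid

phi0 = 0.5 * (lo + hi)  # the bounce's central value Phi(0)
print(f"true={phi_true:.3f}, false={phi_false:.3f}, bounce center={phi0:.3f}")
```

The \(\frac{3}{\rho}\Phi'\) term acts as a friction that dies off at late "times", which is exactly why a unique release point exists between the overshooting and undershooting regimes.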

One may calculate the action of such an instanton in the Euclidean spacetime, \(S\), and the probability amplitudes of various processes relevant for the instanton will scale like \(\exp(-S)\). Don't forget that one has to square the amplitudes to get the actual probabilities.

Fine. So such exotic solutions where the field mostly sits in the "false" vacuum but visits the vicinity of the "true" vacuum in the middle contribute to the probability amplitudes of some processes. But which processes? The asymptotic structure of the solution for
\[ x^2+ y^2+z^2+c^2 t_E^2 \to \infty \] where the field goes to "false" shows that it's a process happening inside the "false" vacuum. However, for
\[ x^2+ y^2+z^2+c^2 t_E^2 \approx a^2 \approx 0, \] the field \(\Phi(x,y,z,t_E)\) becomes violent and visits the vicinity of the "true" vacuum, too. What does it mean? Well, we must translate the condition above from the Euclidean spacetime to the ordinary Minkowski spacetime to get the right physical interpretation (if you care about the right value and not the interpretation, it's always better to work in the Euclidean spacetime). The translation is nothing else than a removal of an \(i\) from the temporal coordinate. The region corresponds to
\[ x^2+ y^2+z^2 - c^2 t^2 \approx a^2 \approx 0. \] It's nothing else than the condition for the vicinity of a light cone. So near a light cone (and only the future one, as may be figured out by some qualitative thinking), the scalar field actually visits the "true" vacuum. Note that what the field does in the timelike-separated regions (the interiors of the cones) can't quite be directly read from the solution in the Euclidean spacetime because the Euclidean spacetime only contains "spacelike-separated" points: there are no timelike-separated points in the Euclidean spacetime at all.

Let me summarize what happens. With the probability of \(A\exp(-S)\) where \(S\) is the action of the Euclidean instanton from Coleman's first paper and \(A\) is a subleading, "one-loop" multiplicative correction from the second paper by Coleman and Callan (one could systematically calculate higher-loop corrections to \(A\) as well), a "condensation core" occurs at a random place in space. So the exponential determines the probability density per unit volume of the spacetime (in units of \(a^{-4}\), if you need to be more accurate, where \(a\) is the typical size of the instanton).

What happens is that near this point, the field "mostly switches" from "false" to "true", and this revolution starts to spread at (nearly) the speed of light from this "successfully escaping prisoner" to the rest of space.

This is a global catastrophe because the energy released in this process is enormous. Note that the energy \(V(\Phi)\) is higher for "false" than it is for "true", so every unit volume of space that is converted from "false" to "true" releases a certain amount of energy. This energy is mostly invested into the kinetic energy of the (accelerating) domain wall separating the external (old) "false" vacuum from the central (new) "true" vacuum. This domain wall almost immediately approaches the speed of light. It kills everything it hits.
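The energy balance behind the accelerating wall can be captured by Coleman's thin-wall picture: a bubble of "true" vacuum of radius \(R\) pays surface energy but gains volume energy. A short Python sketch with hypothetical numbers for the wall tension \(\sigma\) and the energy-density difference \(\varepsilon\) (illustrative values, not from the papers):

```python
import math

# Hypothetical parameters in natural units (illustrative choices):
sigma = 1.0      # energy per unit area of the domain wall
epsilon = 0.1    # energy density released per unit volume of converted vacuum

def E(R):
    """Energy of a thin-wall bubble of radius R: surface cost minus volume gain."""
    return 4.0 * math.pi * R**2 * sigma - (4.0 / 3.0) * math.pi * R**3 * epsilon

# dE/dR = 0 at the critical radius R_c = 2*sigma/epsilon: smaller bubbles
# shrink back, larger bubbles release net energy and keep expanding.
R_c = 2.0 * sigma / epsilon
print(R_c, E(R_c) > 0.0, E(3.0 * R_c) < 0.0)
```

Once \(R\gg R_c\), the released volume energy vastly exceeds the wall's rest energy, so the wall must be ultrarelativistic: this is the quantitative reason it "almost immediately approaches the speed of light".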

Even if you imagine that you could survive the collision after an unusually efficient prayer, the particle spectrum around the "true" vacuum differs from the particle spectrum around the "false" vacuum – these people want nothing less than a pan-cosmic revolution – so you couldn't exist in the brave new world, anyway. All the structures we know and love will cease to exist.

Adding vacuum energy (and DeLuccia)

I will try to avoid Al Gore in the text below because many readers may have already vomited; if that's the case, I sincerely apologize.

In this short section, I will just mention a rather straightforward generalization of the 1977 Coleman work, one that he published with DeLuccia in 1980. The discussion above was performed in the context of a non-gravitational quantum field theory: quantum fields such as scalar fields propagate on a fixed, usually flat Minkowski background geometry.

One may add gravity. When we do so, the graph of the potential energy \(V(\Phi)\) may also be interpreted as energy density in the vacuum and energy density influences the spacetime curvature: according to Einstein's general theory of relativity, energy (including the vacuum form of energy) gravitates.

How does it affect the discussion? Well, the relevant spacetime isn't a flat Minkowski spacetime anymore. The typical spacetime is a de Sitter space if the vacuum energy density \(V\) is positive; and an anti de Sitter space if it is negative.

The relevant instanton must approach the Euclidean version of de Sitter space (which is a sphere) or anti de Sitter space (which is the "hyperbolic" Lobachevsky plane or Poincaré disk or whatever you call it) associated with the vacuum energy in the "false" vacuum. You learn that it becomes harder to find such solutions if the "false" vacuum is an anti de Sitter space: anti de Sitter space is more stable (which is also linked to the fact that unbroken supersymmetry – a leading guarantor of stability on the market – may exist in anti de Sitter space but not in de Sitter space because only anti de Sitter space offers a globally timelike Killing vector field).

These solutions are much more important if both "false" and "true" vacua are de Sitter spaces, i.e. if \(V(\Phi) > 0\) both for the "false" and "true" \(\Phi\). In this gravitational case, not only is the value of \(\Phi\) variable in the core of the instanton; the metric is variable as well (the scalar fields influence the metric and vice versa). The resulting geometry (the metric tensor part) of the instanton resembles two pieces of two higher-dimensional spheres of different radii that are cut and glued together.

Such quantum tunneling obeying Coleman-DeLuccia's rules is a key component of eternal inflation; the other component is the complicated landscape of string theory – you may approximate it by a complicated configuration space for many scalar fields (with many local minima) in the approximate low-energy effective field theory.

The space and spacetime curvature may change, small regions of space may grow to big ones (or collapse to a Big Crunch), but we are still converting one space to another. The epicenters of such a dramatic revolution in the Universe expand at the speed of light; this is a pretty generic and inevitable feature of all such cosmic catastrophes in field theory and its extensions (such as string theory).

But the first feature, the condition that space still survives in some form, may be circumvented. There are more radical catastrophes in which nothing is left, not even empty space. If you remember Neverending Story, nothingness played an important role although the movie was too sloppy to determine whether the empty space or "really nothing" survived the scary transformation. ;-)

Hořava-Fabinger vs Hořava-Witten: domain walls eating the world

In fact, one may destroy space itself so that "zero volume" is left. For a semi-technical audience, I believe that the M-theoretical 2000 example by two Czechs (or Czech Americans, if you wish), Hořava and Fabinger, makes things as clear as possible.

Heterotic M-theory by Hořava and Witten (1995) describes an 11-dimensional spacetime of M-theory. One of its dimensions, which we may call the 11th dimension, has the shape of a line interval. As a result, the spacetime looks like a thick board with two 9+1-dimensional boundaries. We discussed heterotic string theory and other scenarios placing us in extra dimensions in a recent article.

You may ask a simple question: cannot the two boundaries just approach one another and "completely annihilate" so that the thick board in between them completely disappears? The thickness of the board in a central region would strictly drop to zero and this central region of nothingness could keep on expanding.



Two boundaries in a heterotic M-theory spacetime annihilate with each other.

In the regular Hořava-Witten heterotic M-theory, such an "annihilation" of the two end-of-the-world branes isn't possible. The reason is that both Hořava and Witten are left-wing. Correspondingly, the \(E_8\) gauginos (superpartners of the gauge bosons) that live in both domain walls (the Hořava boundary as well as the Witten boundary) are left-handed. That makes the domain walls mutually supersymmetric and the configuration is stable.

Moreover, the annihilation depicted in the picture above would glue or identify the two boundaries. However, if you look at the blue spacetime above from a fixed location and you continuously move a letter "d" (or "p") from the back boundary to the front boundary, it becomes a letter "b" (or "q"), the left-right-reflected one. For the same reason, a left-handed gaugino would become a right-handed gaugino.

But in the Hořava-Witten spacetime, both boundaries have the same chirality of the gauginos, as guaranteed by the supersymmetry: "ddddd" is printed all over the place. So the boundaries can't be continuously glued as on the picture above.

The only way to make the spacetime unstable is to take one of the left-wingers, either Hořava or Witten, throw him away, and replace him by a right-winger. The history of physics chose to replace the left-wing Witten by a (very moderately) right-wing Fabinger.

In this setup, everything "works". One boundary, the Hořava boundary, has left-handed gauginos while the other boundary, the Fabinger boundary, has right-handed gauginos. You may continuously connect them by the fold. The resulting spacetime obviously breaks supersymmetry: the configuration becomes as unstable as a stack of D-branes together with their anti-D-branes. (A picture similar to the blue picture above also describes the instanton symbolizing the annihilation of branes with antibranes; however, the branes are just the "thin surfaces" in this case and external space exists not only in between them – the thick board – but also on both sides.) The boundaries may "annihilate" with each other for the same reason. You don't want to live in such a Universe but it is very interesting because we are discussing cosmic catastrophes.

Just like in the case of papers co-authored by Coleman, there is an instanton solution responsible for all this fun. In fact, the solution looks pretty much just like the blue diagram on the picture. It converges to the "false" unstable spacetime with two boundaries at infinity; however, near the center of the instanton located in an 11-dimensional Euclidean spacetime, the spacetime already resembles its future fate: in fact, the space is completely eliminated over there.

The interpretation of the instability is analogous to the Coleman setup. With some probability density per unit volume of spacetime – which scales like \(1/L_{\rm Planck}^{11}\), a gigantic probability density, if you wish to know, making the lifetime at most Planckian – a condensation core occurs at some point. The boundaries are merged together by the fold. All objects are pushed away at almost the speed of light and the central bubble of nothing expands at a speed that rapidly approaches the speed of light.

Undoing the \({\mathbb Z}_2\) orbifold and restoring Witten

When I discussed the origin of the Hořava-Fabinger paper, you may have noticed that we eliminated Witten. Some readers might think that it was unfair. And they would be right. The history of science was unfair to have eliminated Witten. But the history of science is also able to retroactively repair its mistake. It finally repaired its 2000 mistake in 1982. ;-)

Let's see how it happened.

In the article about the basic string phenomenology, I mentioned that a line interval, \(I_1\), is usually represented as a \({\mathbb Z}_2\) orbifold (quotient) of a circle, \(S^1/{\mathbb Z}_2\). The discrete symmetry flips the circle from the left to the right. This map has two fixed points, which become the two endpoints of the resulting line interval: the interval arises as one half of the circle, with the other half identified with it.
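The quotient and its two fixed points are easy to make explicit. A tiny Python sketch (a generic illustration, not anything from the papers) parametrizes \(S^1\) by \(\theta\in[0,2\pi)\) and folds it by the \({\mathbb Z}_2\) map \(\theta\mapsto-\theta\):

```python
import math

TWO_PI = 2.0 * math.pi

def orbifold_rep(theta):
    """Canonical representative of theta on S^1 / Z_2: fold [0, 2*pi) onto [0, pi]."""
    theta %= TWO_PI
    return min(theta, TWO_PI - theta)

def is_fixed(theta, tol=1e-12):
    """Fixed iff theta and -theta coincide on the circle, i.e. 2*theta = 0 mod 2*pi."""
    r = (2.0 * theta) % TWO_PI
    return r < tol or TWO_PI - r < tol

# theta and -theta land on the same representative after folding:
assert orbifold_rep(1.0) == orbifold_rep(-1.0)

# Scan a grid of 1000 points on the circle: only theta = 0 and theta = pi survive
grid = [TWO_PI * k / 1000 for k in range(1000)]
fixed = [t for t in grid if is_fixed(t)]
print(fixed)  # the two endpoints of the interval S^1 / Z_2
```

The two fixed points \(\theta=0,\pi\) are exactly where the Hořava-Witten end-of-the-world branes sit.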

If there is an instability of a compactification with a line interval whose geometry is \(M^{10}\times S^1/{\mathbb Z}_2\), you may also think that there exists a similar instability in the unorbifolded "parent" spacetime \(M^{10}\times S^1\) – which would be equivalent to a modified type IIA string theory (called type 0A string theory, because of the fermions' antiperiodicity discussed below). And you would be right.

This instability is known as the instability of the Kaluza-Klein vacuum or the "bubble of nothing" and it was discovered in an 1982 paper. Because we have unfairly eliminated Witten when we \({\mathbb Z}_2\) orbifolded this "bubble of nothing" paper, you may guess who is the author of the 1982 "bubble of nothing" paper itself. Yes, indeed, it is no one else than Edward Witten. :-)

His original instanton solution looks analogous to the Hořava-Fabinger blue picture above except that the extra dimension having the shape of a line interval is replaced by a circular dimension, which is somewhat harder to draw. However, it's still true that the circumference of this circular spacetime dimension behaves much like the thickness of the Hořava-Fabinger "board" and it shrinks to zero in the central region (which is going to grow in the case of a cosmic catastrophe).

In the Hořava-Fabinger case, I discussed the condition that the two boundaries have to be "mirror reflections" of one another; in particular, the gauginos in both domain walls have to have opposite chiralities. Is there a similar "flip" that is needed for Witten's "bubble of nothing" scenario to work as well?

You bet. The "twist" needed in Witten's original 1982 setup is that all the fermions have to be antiperiodic as functions of the circular dimension. If you think about it, the antiperiodicity of the fermions is needed for the instability to exist: compactifications with periodic fermions would preserve the supersymmetry of the uncompactified theory, which means that the instability couldn't exist. Moreover, one may design an argument fully analogous to the "d"-"b" reflection argument in the Hořava-Fabinger case that tells you that the fermions have to be antiperiodic if you want the geometry to be folded just like in the Hořava-Fabinger case (don't forget that Witten's "bubble of nothing" spacetime has no boundaries, however!).

Because we wanted to find various spacetimes that lead to cosmic catastrophes, we were led to things such as antiperiodic fermions, flipped chiralities, and brane-antibrane pairs. These things inevitably violate supersymmetry; in fact, all instabilities require supersymmetry to be violated because supersymmetry guarantees stability.

However, most people and animals don't want to study spacetimes that suffer from cosmic instabilities and that are threatened by cosmic catastrophes. Instead, most people and animals want to survive. For this purpose, pretty much the opposite condition is needed. You have to choose the "like signs" and make other choices that are closer to the supersymmetric spacetime if you want to avoid various cosmic instabilities.

While the association between stability and supersymmetry can't be quite rigorously drawn in both directions – because there exist theories and objects that are stable (or approximately stable) but not supersymmetric – it's still mostly true that supersymmetry at some level is needed for a theory to avoid catastrophic instabilities. Supersymmetry at some level may also be needed for the Higgs to be light, for the cosmological constant to be rather small, and for inflation to work and produce the required potentials.

As the "bubble of nothing" examples show, gravitating quantum theories are threatened by a wider spectrum of possible catastrophes, so the need for their protection – and the need for supersymmetry – may be higher than it is for non-gravitational theories.

Also, there may exist new cosmic catastrophes that invalidate the usual picture of eternal inflation as used by the advocates of the anthropic principle. Various new, so far neglected cosmic catastrophes may work to destroy all complicated universes that look like giant Rube Goldberg machines. It's plausible that because of some new ways in which compactified universes may collapse, only the "simplest" forms of extra dimensions may be long-lived if SUSY is broken.

Cosmological bounds on cosmic catastrophes

We don't know whether our spacetime is exactly stable. It is plausible that it is threatened by a cosmic catastrophe. But because the Universe has lived for \(10^{60} T_{\rm Planck}\) and during much of that time, its radius has been around \(10^{60} L_{\rm Planck}\), we know that the probability of the birth of a deadly nucleation seed shouldn't be much larger than \(10^{-240}\) per Planckian 4-volume of the spacetime. If a theory predicted a (much) larger probability density for a lethal destructive tumor, it would also predict that our Universe should have (certainly) been destroyed by now. But it wasn't, so the theory would have a problem.
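The \(10^{-240}\) bound is just order-of-magnitude bookkeeping in Planck units; one can sketch it in a few lines of Python (the inputs are the rough numbers from the text):

```python
import math

# Rough numbers from the text, in Planck units (order of magnitude only):
age_log10 = 60     # age of the Universe ~ 1e60 Planck times
radius_log10 = 60  # radius ~ 1e60 Planck lengths for much of that time

# Past 4-volume ~ radius^3 * age, so log10(V4) = 3*60 + 60 = 240
four_volume_log10 = 3 * radius_log10 + age_log10

def expected_seeds_log10(rate_log10):
    """log10 of the expected number of nucleation seeds, N = rate * V4."""
    return rate_log10 + four_volume_log10

# The survival probability exp(-N) is non-negligible only for N <~ 1,
# i.e. for a rate not much above 1e-240 per Planckian 4-volume:
print(four_volume_log10)            # 240
print(expected_seeds_log10(-240))   # N ~ 1: marginal survival
print(expected_seeds_log10(-250))   # N ~ 1e-10: survival nearly certain
survival = math.exp(-10.0 ** expected_seeds_log10(-250))
print(survival)                     # extremely close to 1
```

A rate of \(10^{-230}\) per Planck 4-volume, say, would give \(N\sim 10^{10}\) expected seeds and a survival probability of essentially zero: such a theory would be excluded by our existence.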

The state-of-the-art cosmological models typically predict that our Universe ultimately dies by tunneling into a universe with different low-energy laws but the lifetime is much longer than tens of billions of years – it is something between trillions and googols of years. Such instabilities of the Universe are also necessary for the picture of eternal inflation, the cosmological underpinning of the anthropic principle, to be operational.

Summary: cosmic catastrophes may be studied, may tell us something, may produce viable Universes

To summarize, cosmic catastrophes may sound scary but they may be studied by the same quantitative tools as the "peaceful" processes in our spacetime. The absence of certain speedy cosmic catastrophes may tell us something about our Universe (e.g. that it obeys supersymmetry at some level); and the presence of cosmic catastrophes in less viable universes (different places on the stringy landscape) could also be needed for our hospitable Universe to have been selected.

