Thursday, February 10, 2011

The enigmatic cosmological constant

The cosmological constant remains one of the most mysterious players in the current picture of the world.



A building of the Faculty of Natural Sciences of the Charles University in Albertov. In those buildings, where the Velvet Revolution began on November 17th, 1989, Einstein spent a few years but he (and Mileva) didn't like Prague much. Just to be sure, the place had been called Albertov well before Einstein came there: it's named after Prof Eduard Albert MD (1841-1900), a Czech surgeon and poet.

In 1915, after a decade of intellectual self-torture in Prague and elsewhere, Einstein managed to write down the final form of Einstein's equations:



R_{mu nu} - (1/2).R.g_{mu nu} = 8.pi.G.T_{mu nu}

They're pretty. Please imagine that there is a minus sign in front of "8.pi". The convention above is bad.

When I was 15, I would write them about 50 times in my notebooks, in beautiful fonts. ;-) Both sides describe 4 x 5 / 2 = 10 functions of space and time - the independent components of a symmetric tensor field. The left-hand side is the "Einstein tensor", describing some information about the curvature of the spacetime. (The term "R_{mu nu}" itself is called the Ricci tensor.) The right-hand side is proportional to the stress-energy tensor "T_{mu nu}". It encodes the density and flux of mass/energy and momentum - which are the sources of the spacetime curvature.




Einstein quickly realized that there was a price to pay for the dynamically curving spacetime: it wasn't able to sit at rest. Instead, the total size of the Universe - or the typical distance between two galaxies - would behave just like the height of a free apple according to Newton's equations. It can fly up and decelerate; or it can fly down and accelerate. But it can't sit in the middle of the room.

For Einstein, this was unacceptable. Much like everyone else in the (de-Christianized) physics community, he was convinced that the Universe had always existed. It had to be static, he thought. (He could have predicted the expansion of the Universe but he didn't. It had to wait for experimenters such as Edwin Hubble. We will mention this point later.)

In 1917, in order to make the Universe static, he added the term
+ Lambda.g_{mu nu}
to the left hand side of his equations. The value of the positive cosmological constant "Lambda" he needed for his static Universe - whose geometry is "S^3 x R" (three-sphere times time) - was
Lambda = 4.pi.G.rho_{matter}
where "rho_{matter}" was the average mass density of the Universe as envisioned by Einstein. The negative value of the pressure would become more important than the positive energy density, and it would prevent the Universe from collapsing, keeping the curvature radius of the 3-sphere equal to "1/sqrt(Lambda)". (The fixed size would be unstable, much like a pencil standing on its tip, but I won't discuss these extra pathologies of Einstein's "solution" in any detail.)

This value of "Lambda" had the right sign and was actually comparable to the currently believed value of the cosmological constant, as I will clarify below. It's either rather accurate or more accurate, depending on what you substitute for "rho_{matter}". Our Universe is not Einstein's static Universe, so we can't measure the right "rho_{matter}" to substitute to these wrong equations ;-) or more precisely right equations with wrong assumptions about the parameters (driven by desired solutions).

You may also move the cosmological constant term to the right-hand side of Einstein's equations; the right-hand side then becomes
-8.pi.G.(T_{mu nu} + Lambda/(8.pi.G).g_{mu nu}) =
= -8.pi.G.(T_{mu nu} + (rho_{matter}/2).g_{mu nu})
(where I used Lambda = 4.pi.G.rho_{matter} from the static solution above).
Note that in this way of looking at Einstein's equations with the cosmological constant, the cosmological constant is nothing else than a correction to the stress-energy tensor. This correction is proportional to the metric tensor. A positive cosmological constant is nothing else than an addition of a positive energy density and a negative pressure, "p = -rho".

For Einstein's universe to be static, the required vacuum energy density "rho" is equal to "rho_{matter}/2", where "rho_{matter}" is the average density of matter (assumed to be approximately dust) in the static Universe. Note that if Einstein had known dark matter and visible matter, and if we neglected the fact that they're not quite dust, "rho_{matter}" would be 4+23=27% of the critical density, and one half of that is 13.5% of the critical density. That's the vacuum energy density that would be needed for the expansion of our Universe to have zero acceleration right now.

Because the actual value of the energy density stored in the cosmological constant is about 73% of the critical density, well above the 13.5% figure mentioned above, our Universe is actually accelerating its expansion. But you see that the two figures differ by less than an order of magnitude. In fact, it wasn't so long ago that the acceleration was zero - when the Universe was gradually switching from decelerating expansion to accelerating expansion.

But even at that moment, the Universe wasn't static because the "velocity" of the expansion was still positive. Only the acceleration - the derivative of that velocity - was zero. If you wonder when the acceleration was zero, it was approximately six days before God created the Sun and the Earth, about 4.7 billion years ago. ;-)
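
The transition moment is easy to locate in terms of redshift. In flat Lambda-CDM with dust, the acceleration vanishes when rho_{matter}(z) = 2.rho_{Lambda}; here is a two-line sketch, with the density fractions 0.27 and 0.73 taken as the assumed present-day values:

```python
# Redshift at which the expansion switched from deceleration to acceleration:
# rho_matter scales as (1+z)^3 while rho_Lambda stays constant, and the
# acceleration vanishes when rho_matter = 2 * rho_Lambda.
Omega_matter = 0.27
Omega_Lambda = 0.73

z_acc = (2 * Omega_Lambda / Omega_matter) ** (1.0 / 3.0) - 1
print(f"acceleration was zero at redshift z = {z_acc:.2f}")   # roughly z ~ 0.75
```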

I ask the Christian readers not to get carried away. It's a complete coincidence - believe me more than the Bible even though it may be hard :-) - and I admit that the six-day accuracy was a little bit exaggerated. The error margin of the figure 4.7 billion years is below 10 percent, however.

Blunder

Of course, when the expansion of the Universe was found by Hubble in the late 1920s, Einstein was upset that he had failed to predict it. If he had predicted it, he could have become more famous than Isaac Newton if not Juan Maldacena. ;-)

That's why Einstein swiftly identified the addition of the cosmological constant term as the greatest blunder of his life. Of course, this was far from a great blunder. In fact, we currently know that the term is there. The greatest blunder of Einstein's life was that he didn't give a damn about quantum mechanics in particular and empirically based science in general in the last 30 years of his life.

Let's look at the reasoning that led Einstein to think that the addition of the term was the greatest blunder. Of course, Hubble had shown that the term wasn't needed because the Universe was expanding, exactly the possibility that Einstein wanted to avoid rather than to predict. But Einstein surely had an a posteriori theoretical reason to think that the term was bad, didn't he?

Well, the term was "ugly", he thought. It was spoiling the beautiful simplicity of the original Einstein's equations, Einstein would say.

However, we must be extremely careful about such subjective emotional appraisals of beauty. They may easily be wrong. The sense of beauty is only a good guide for you if it is perfectly correlated with Nature's own taste. Even Einstein failed to achieve this perfect correlation at many points of his career.

Do we have a more scientific reason to trust Einstein's equations, instead of saying that they're "pretty"? The "counting of the number of terms on the paper" is surely an obsolete criterion of beauty, especially because the maximally supersymmetric supergravity has many terms despite being the prettiest theory in its class. So why would we rationally think that Einstein's equations are "prettier" than similar equations to which we would add e.g. squared curvature terms, among others?

Well, we have understood the reason at least since the 1970s - when the ideas of the renormalization group in particular, and the organization of the laws of Nature according to the scale in general, became a key pillar of the physics lore. The actual reason why the "squared curvature" and even more complicated terms are "bad" is that these terms only become important at short distances. Each extra factor of the curvature in such equations adds roughly a factor of "1/(curvature_radius)^2", so to keep the dimensions of the terms right, it must be multiplied by a coefficient that scales like "length^2".

What is the "length"? Well, it's some special length scale where the behavior of physics changes - and some new terms become important or negligible, depending on the direction where you go in the length scale. The general experience in particle physics indicates that all such scales "length" are microscopic: the size of the atom, proton, electron. And the Planck length is the shortest one.

So at all distances much longer than these microscopic lengths, all the higher-derivative terms - such as the powers of the Ricci tensor - may be neglected. That's really why Einstein's equations, without those extra terms, are a good approximation for long-distance physics. That's why we shouldn't add the "ugly terms".
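
To see how brutally the higher-curvature terms are suppressed, here is a toy estimate that assumes the coefficient "length^2" is of the order of the squared Planck length (the generic expectation, not a derived fact); the relative size of a curvature-squared correction is then roughly (l_Planck/L)^2, where L is the curvature radius of the situation you study:

```python
# Relative importance of an R^2 term with a Planck-sized coefficient, i.e.
# (l_Planck / L)^2 for a few illustrative curvature radii L.
l_planck = 1.6e-35   # meters

for name, L in [("LHC distances", 1e-19),
                ("atomic scale", 1e-10),
                ("Earth-Sun distance", 1.5e11),
                ("Hubble radius", 1.3e26)]:
    suppression = (l_planck / L) ** 2
    print(f"{name:20s} L = {L:8.1e} m   (l_P/L)^2 = {suppression:.1e}")
```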

What about the cosmological constant term?

Well, it actually has fewer derivatives than the curvature tensor: it is even more important at long distances than the curvature tensor in the original Einstein's equations! So we can't neglect it. According to the modern replacement for the "beauty criterion", you can't eliminate it. Einstein's decision to call the cosmological term "ugly" was an example of his flawed sense of beauty.

Quantum physics

And indeed, as we have known since 1998 or so, the cosmological constant is positive because the expansion of the Universe is actually accelerating (it was a big surprise). In fact, it's bigger than what would be needed for the acceleration of the Universe to vanish - i.e. than what would be needed for Einstein's static Universe. That's why the expansion of the Universe is accelerating. The greater the positive cosmological constant, the more strongly it accelerates the expansion.

(The term "dark energy" is sometimes used instead of "cosmological constant". Dark energy is whatever drives the accelerating expansion and the nature of "dark energy" is deliberately vague. However, accurate observations indicate that "dark energy" has exactly the same properties as a positive cosmological constant, so you may pretty much identify the two terms.)

The energy density carried by the cosmological constant is about 3 times larger than the energy density carried by dark matter and visible matter combined.

Now, you have to realize that the cosmological constant is the energy density of the vacuum. Use the convention in which the cosmological constant term is moved to the right hand side of Einstein's equations. And define the tensor "T_{mu nu}" in such a way that it vanishes in the vacuum. But there's still an extra term added to "T_{mu nu}" which is, therefore, the stress-energy tensor in/of the vacuum.

Can our theories explain that this energy density of the vacuum is nonzero? Well, they can. Too well, in fact. ;-) They explain it so "well" that their predictions are wrong by 60-123 orders of magnitude. :-)

Needless to say, the value of the cosmological constant can't be calculated from "anything" in a classical theory. You need to assume a value and any value is equally legitimate.

In quantum field theory (at least a non-supersymmetric one), you may decide that the classical value of "Lambda" inserted into the equations is zero. But even if the classical value is zero, the total value of the cosmological constant is not zero.

As you should know, quantum field theory produces quantum effects - temporary episodes in which particle-antiparticle pairs emerge from the vacuum and disappear shortly afterwards. These quantum effects modify the masses and energies of all objects (among other properties). They change the mass of the Higgs (that's why there's the hierarchy problem) and everything else; they move energy levels in the atoms (by the Lamb shift and many other shifts); and they do many other things.

They also change the energy density of the vacuum. In particular, if you consider Feynman diagrams without any external lines, they determine the quantum mechanics' contributions to the vacuum energy density. Because of the Lorentz symmetry of the original theory, this quantum-generated energy density is automatically promoted to a stress-energy tensor that has to be proportional to the metric tensor - the only invariant tensor with two indices - so the pressure is always "p=-rho", just like for the cosmological constant.

The simplest diagrams without external lines are simple loops (circles) with a particle running in it. They contribute to the vacuum energy density by something proportional to
+- mass^4
where "mass" is the rest mass of the corresponding particle species. The bosons contribute positively; the fermions contribute negatively.

The observed value of the cosmological constant, if expressed as the energy density, is approximately equal to
mass_{neutrino}^4,
i.e. the fourth power of the mass of the lightest massive particle we know, a neutrino. This relationship is only approximate - as far as I know. It may be a coincidence but it doesn't have to be a coincidence. (In the "c=hbar=1" units I use, the energy density is energy per cubic distance but the distance is an inverse energy, so the units are "energy^4", or "energy^d" in "d" spacetime dimensions.)
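
You may check the numerical coincidence yourself. The sketch below assumes a Hubble constant of 70 km/s/Mpc and uses the measured neutrino mass-squared differences as a proxy for the unknown absolute masses:

```python
# Express rho_Lambda as the fourth power of an energy and compare that energy
# with the neutrino mass scales inferred from oscillations.
import math

G = 6.674e-11; c = 2.998e8; hbar = 1.055e-34; Mpc = 3.086e22
H0 = 70e3 / Mpc                               # assumed Hubble constant, 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)      # kg/m^3
rho_lambda = 0.73 * rho_crit * c**2           # energy density, J/m^3

eV = 1.602e-19
rho_in_eV4 = rho_lambda * (hbar * c)**3 / eV**4   # energy density in eV^4
scale = rho_in_eV4 ** 0.25                        # eV

m_solar = math.sqrt(7.5e-5)   # eV, from the solar mass-squared splitting
m_atm   = math.sqrt(2.4e-3)   # eV, from the atmospheric splitting

print(f"rho_Lambda^(1/4)      = {scale * 1e3:.1f} meV")
print(f"neutrino mass scales ~ {m_solar * 1e3:.1f} meV and {m_atm * 1e3:.0f} meV")
```

The fourth root of the vacuum energy density comes out around 2 meV, within an order of magnitude of the neutrino splittings - which is what the word "approximate" above means.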

To get the finite number above, we may have to choose a regularization scheme, and it seems helpful here to assume dimensional regularization. (But the final result is not too compatible with the observations, anyway, so it's questionable whether the method is really helpful. But we will continue to assume that this basic calculation is valid.)

So if the neutrinos were the only particles that would contribute their quantum loops to the cosmological constant, you would get approximately the right vacuum energy density (with a wrong sign because neutrinos, being fermions, contribute a negative amount to the energy density).

However, there are many particle species that are heavier than the neutrinos. In particular, the top quark is approximately 10^{15} times heavier than the lightest neutrino (or the smallest neutrino mass difference; we can't quite measure the absolute neutrino masses today). Because it's the fourth power of the mass that contributes to the vacuum energy density, the top quark loop contributes a negative term that is about 60 orders of magnitude too large.

And then you have all the conceivable contributions to the cosmological constant from the "intermediate" particles between the neutrinos (which are OK) and top quarks (which are 60 orders of magnitude too high). Every effect you may imagine - including confinement, Higgs mechanism, and other things - modifies the value of the cosmological constant.

For the Higgs field, for example, it's important to distinguish the potential energy at the local maximum from the potential energy at the minimum. We live at the minimum but the cosmological constant would be vastly (60 orders of magnitude) higher at the local maximum. And that's just the Higgs field.

Even particles as mundane as the top quark or the Higgs boson give you contributions that are 60 orders of magnitude too high. In fact, the top quark is almost certainly not the heaviest particle in the world. Particles worth the brand "particle" may exist up to the Planck scale, which is about 17 orders of magnitude heavier than the top quark. (Particles heavier than the Planck scale are black hole microstates.) Clearly, if you include the heaviest, near-Planckian particle species (e.g. the new particles predicted by grand unified theories), you will get contributions to the cosmological constant that are about 120 orders of magnitude too high.

In fact, the observed cosmological constant is equal to "10^{-123}" in the Planck units - a very unnatural pure number.
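
That number is easy to reproduce; the following sketch expresses the observed vacuum energy density in Planck units, again with the assumed values of H0 and the dark-energy fraction:

```python
# Observed vacuum energy density divided by the Planck energy density
# (one Planck energy per Planck volume).
import math

G = 6.674e-11; c = 2.998e8; hbar = 1.055e-34; Mpc = 3.086e22
H0 = 70e3 / Mpc                                            # assumed, 1/s
rho_lambda = 0.73 * 3 * H0**2 / (8 * math.pi * G) * c**2   # J/m^3

E_planck = math.sqrt(hbar * c**5 / G)        # Planck energy, J
l_planck = math.sqrt(hbar * G / c**3)        # Planck length, m
rho_planck = E_planck / l_planck**3          # Planck energy density, J/m^3

print(f"rho_Lambda / rho_Planck = {rho_lambda / rho_planck:.1e}")   # ~ 1e-123
```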

How is it possible that the sum of all the effects and loops of particles of diverse masses ends up being so tiny, comparable to the contribution of the lightest neutrino which is ludicrously lighter than the Planck scale?

Supersymmetry

You may have noticed that the contributions of the bosons are positive while the contributions of the fermions are negative. Can you cancel them?

Well, a problem is that the masses of the bosons are some random numbers, and the masses of the fermions are other random numbers. There's no reason for them to cancel exactly or almost exactly. A combination like 3-6+9-15+48-98 has a chance to be zero, but it almost certainly isn't (this one equals -59).

Things change if you have unbroken supersymmetry. If supersymmetry is unbroken, each boson has a fermionic partner and vice versa. The masses of both partners exactly match. When it's so, it follows that the total quantum correction to the cosmological constant vanishes. (You may still need to be careful about a "classical" term that could have been nonzero to start with. In my opinion, it's rather reasonable to expect or require that the classical term has to be zero in realistic vacua.)

However, supersymmetry in the real world has to be broken. The mass differences between the superpartner pairs - at least some of them - are at least as large as the top quark mass. It follows that you still get an uncanceled contribution that is comparable to the contribution of the top quark we chose as a benchmark. And it's 60 orders of magnitude too high.
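
The pattern - an exact cancellation for degenerate pairs and a residue of order (splitting)^4 for split pairs - can be illustrated with a schematic toy sum; the masses and the 300 GeV splitting below are made-up illustrative numbers, not a supergravity computation:

```python
# Toy boson-fermion sum: with unbroken SUSY every fermion of mass m is paired
# with a boson of the same mass and the naive +-m^4 contributions cancel pair
# by pair; splitting the pairs leaves a remainder of order (splitting)^4.
pairs_gev = [0.000511, 91.2, 173.0]   # illustrative common masses of three pairs, GeV

def vacuum_sum(delta):
    """Sum of boson^4 - fermion^4 when each pair is split by delta (GeV)."""
    return sum((m + delta)**4 - m**4 for m in pairs_gev)

print("unbroken SUSY:", vacuum_sum(0.0), "GeV^4")         # exactly zero
print(f"broken SUSY:   {vacuum_sum(300.0):.1e} GeV^4")    # ~ (a few hundred GeV)^4
```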

At least, you get rid of the larger contributions that could arise from the near-Planckian heavy particles and that would be 120 orders of magnitude too high. With broken supersymmetry, the discrepancy between the measured and "estimated" cosmological constant gets reduced from 120 orders of magnitude to 60 orders of magnitude.

It looks like progress - we have already done 1/2 of the job. While that's true, there is also a sense in which the smaller discrepancy obtained in the supersymmetric context (supergravity, in fact, because we want to include gravity as well) is much more "real" than the one in the non-supersymmetric context.

I still believe that all the physicists are confusing themselves with the "estimates". Particular theories or string vacua - perhaps a bit different than the popular ones; perhaps the same ones with a more correct calculation of the vacuum energy density - could actually lead to a more accurate cancellation than what is suggested by the estimates. The discrepancy of the 60 orders of magnitude could be fake.

There have been many episodes in the history of physics in which people made a very sloppy estimate and they (thought that they) ruled out a correct theory because of this estimate. For example, people would believe that the gauge anomalies in all chiral vacua of string theory (with Yang-Mills fields) had to be nonzero. They believed so until 1984 when Green and Schwarz showed that the complete calculation actually produces an answer equal to zero. That sparked the first superstring revolution.

I find the "generic" estimates of the cosmological constant to be equally sloppy to the arguments in the early 1980s that the gauge anomalies in string theory had to be nonzero. There can also be a contribution from a counterpart of the Green-Schwarz mechanism. The total could be 10^{-120} in Planck units. And much like in the Green-Schwarz anomaly cancellation, there may exist more "profound" and "simpler" ways to show that the cosmological constant cancels much more accurately than people expect.

A big difference is that in the case of the type I string theory's gauge and gravitational (and mixed) anomalies, we know that they cancel. We know the Green-Schwarz mechanism and other things. The "similar" scenario for the cosmological constant remains speculation or wishful thinking, if you wish. So we can't say that science has shown that the two situations are analogous. I still think that this wishful thinking is relatively likely to be true.

Landscape of possibilities

In quantum field theory, you may adjust the classical value of the cosmological constant to any number you want. So you may adjust it so that when the quantum corrections are added, you get exactly the desired value.

String theory is much more rigid and predictive. You can't adjust anything. Much like quantum field theory, string theory allows you to calculate the quantum corrections to the cosmological constant - and all other low-energy parameters - but unlike quantum field theory, it also allows you to calculate the classical pieces, too. There's no dimensionless continuous static parameter in string theory that would be waiting for your adjustments.

There are only discrete choices you can make - the topology of the Calabi-Yau manifold; the integers encoding the number of branes wrapped on various cycles and the magnetic fluxes; and some related discrete data.

If you believe that there's no Green-Schwarz-like miracle waiting for us, then you probably have to agree with the anthropic people who say that the most likely prediction of the cosmological constant by a single semi-realistic vacuum is comparable to the Planck scale, about 120 orders of magnitude too high.

If you believe so, then you face a potential contradiction which is "very likely" if the theory produces a small number of candidate vacua. In such a situation, the existence of more than 10^{120} solutions that string theory offers is saving your life. It's saving you from the discrepancy. The large number of solutions de facto allows you to do the same thing that you could do in quantum field theory. In quantum field theory, you could continuously adjust the cosmological constant (and all other parameters). In string theory, you're not allowed to adjust it continuously but the number of solutions is high enough so that you may adjust it "discretely" and "almost continuously".
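
The "discretuum" logic is just a statement about the spacings of many random numbers. The toy model below uses absurdly small values of N compared to 10^{120}, but the point is the scaling: the vacuum energy closest to zero is typically of order 1/N in the units in which the values are scattered:

```python
# Toy discretuum: scatter N "vacuum energies" uniformly over a Planckian range
# [-1, 1] and look at the one closest to zero.  Its typical size shrinks like
# ~ 1/N, which is why N >> 10^123 vacua can accommodate a 10^-123 value.
import random

random.seed(0)
for N in (10**3, 10**5, 10**7):
    closest = min((random.uniform(-1.0, 1.0) for _ in range(N)), key=abs)
    print(f"N = {N:>8d}: closest vacuum energy = {closest:+.1e}"
          f"   (expected ~ {1.0 / N:.0e})")
```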

I still think that one should try not to rely on such mechanisms that are meant to be able to cure any inconsistency by a universal metaphysical trick. (This nearly religious "cure for everything" is common among low-level physicists such as the loop quantum gravitists who think that they can cure "all UV problems" by their spin network aether, without doing any special work. That's too bad because in a consistent framework with a QFT limit, one can prove that e.g. gauge anomalies cannot be cured. So any framework that allows you to argue that even theories with gauge anomalies can be defined is inevitably internally inconsistent.)

There seems to be a disagreement between the observed and estimated value of the cosmological constant so we should work hard to improve our estimates. We should find previously neglected terms and mechanisms that, when accounted for, make our estimates more compatible with the observations.

The opposite, anthropic attitude could have been used to "solve" any puzzle in the history of science. But in every single puzzle we understand (and consider to be solved) today, we have learned that the anthropic solution was wrong. The neutron is "anomalously" long-lived but we don't need a landscape to explain that (even though the longevity could be argued to be important for life); we can calculate the lifetime and it's the "small phase space" of the decay products that makes the neutron more stable than expected.

And I can tell you thousands of such examples of puzzling features of Nature that could have been explained by the anthropic hand-waving but the right explanation turned out to be different and more robust.

Supergravity seems to be rigid when it comes to the calculation of the vacuum energy density and those 60 orders of magnitude of disagreement seem to be real today. But we may be mistreating the vacuum graphs in supergravity. We may be missing the counterpart of the Green-Schwarz anomalous transformation laws. We may be missing some purely quantum effects that only (or primarily) affect the tree-level graphs. We may be missing alternative ways to prove an almost exact cancellation.

The cosmological constant may be linked to
mass_{neutrino}^4
as I have suggested above and there may exist a good reason why. For example, the value of Lambda could be running in a bizarre way and the running could stop below the neutrino mass (the mass of the lightest massive particle), guaranteeing that the value of "Lambda" stays comparable to the fourth power of the neutrino mass. (In a similar way, the fine-structure constant doesn't run beneath the mass of the lightest charged particle, the electron.) A value of "Lambda" that is vastly higher than "mass^4" could make the effective (gravitating) field theory defined for the "mass" scale inconsistent for a currently unknown reason.
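
The analogy with the fine-structure constant can be made concrete with a one-loop sketch in which only the electron runs in the loop, so the numbers above a GeV or so are not realistic (heavier charged particles are ignored on purpose); the point is only that the running freezes below the mass of the lightest charged particle:

```python
# One-loop QED running of alpha with a single charged lepton (the electron):
# above m_e, 1/alpha decreases logarithmically; below m_e there is nothing
# light and charged left to polarize the vacuum, so alpha stops running.
import math

alpha_0 = 1 / 137.036    # value at (and below) the electron mass
m_e = 0.000511           # GeV

def alpha(mu_gev):
    if mu_gev <= m_e:
        return alpha_0   # frozen below the lightest charged particle
    log = math.log(mu_gev / m_e)
    return alpha_0 / (1 - (2 * alpha_0 / (3 * math.pi)) * log)

for mu in (1e-5, m_e, 1.0, 91.2):
    print(f"mu = {mu:8.2e} GeV   1/alpha = {1 / alpha(mu):.2f}")
```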

And because 60 is one half of 120, the observed cosmological constant may also be related to the ratio
m_{top}^8 / m_{Planck}^4
where I chose the top quark mass - our previous benchmark - to represent the electroweak scale or the superpartner SUSY-breaking scale, which is arguably not far from the electroweak scale. I have about 5 different scenarios for how some of these formulae could be shown correct on a sunny day in the future. Needless to say, I realize that none of those scenarios is fully convincing at this point. But people should keep on trying, anyway.

And that's the memo.

6 comments:

  1. Einstein said you do not really understand something unless you can explain it to your grandmother. And so I told my grandmother the acceleration is due to stuff falling over the edge of the universe, much as the velocity of a river increases as it approaches a waterfall. My grandmother was completely satisfied with this explanation.

  2. What if the broken symmetry mass values had to take on values in a finite group?

  3. Edson Fernando Ferrari - Jul 26, 2013, 11:13:00 PM

    Dear Lubos:

    could you read this and contact me?

    Thanks.

    https://docs.google.com/file/d/0Bxo1hrb-drogNmk1UzNRZXJCbEE/edit?pli=1

  4. Interesting try, Edson, but I don't see how this could move you closer to a solution to the C.C. problem.


    You wrote zero in a convoluted way - as a Cesaro sum - but it's only the C.C. for a single-scalar classical theory. The problem is to get the tiny C.C. 1) for a theory with generic fields like the real world, including Dirac and gauge fields, and 2) in the quantum case. Both properties make the problem much harder. Moreover, 3) we seem to know that the right result should be tiny but not zero.

  5. Edson Fernando Ferrari - Jul 29, 2013, 2:25:00 PM

    Thank you for posting the link to my paper, Lubos.

    It's not only the CC of a single scalar, I should have stated clearly. The mechanism works at the classical level for generic inner symmetries.

    The quantum case is much harder, sure, but no viable classical solution is known.

    We seem to know that DE is a tiny CC, indeed. This sensational experiment will settle the issue in a few years:

    http://www.darkenergysurvey.org/

    I think it will be very successful.

  6. Edson Fernando Ferrari - Aug 9, 2013, 9:37:00 PM

    Dear Lubos:


    I included the proof that the mechanism works for generic classical fields and inner symmetries.

    https://docs.google.com/file/d/0Bxo1hrb-drogWmhiNFlKYjVTNEk/edit?pli=1
