Friday, September 15, 2006

Gibbons and Turok: an unnatural measure

Gibbons and Turok want to determine the probability of different Universes in a multiverse.

Recall that the egalitarian anthropic string theorists want to find all discrete vacua and assign them essentially the same weight in the ensemble, at least in the zeroth approximation. As you know, I find such a treatment unjustifiable scientifically.

Gibbons and Turok reach a very different conclusion which, I am afraid, is even less justifiable scientifically. ;-) The most irritating implication is that

  • the Universes that lead to "N" e-foldings of inflation are suppressed roughly by a factor of "exp(-3N)".

If this were right, inflation wouldn't be a natural mechanism to generate exponentially large numbers such as the size of the Universe after inflation. Their probability measure is more or less inversely proportional to the coefficient that multiplicatively measures how much the volume of the Universe has expanded.
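To see just how punishing that factor is, here is a quick numerical sketch (the normalization is of course arbitrary, and the weight function is simply the exp(-3N) quoted above, not a reproduction of their derivation): the claimed weight is exactly the inverse of the factor by which the volume grows.

```python
import math

def gt_weight(N):
    # Relative weight exp(-3N) that the Gibbons-Turok measure is said
    # to assign to a history with N e-foldings; normalization arbitrary.
    return math.exp(-3 * N)

def volume_growth(N):
    # Multiplicative growth of the spatial volume after N e-foldings:
    # each e-folding stretches lengths by e, hence volumes by e^3.
    return math.exp(3 * N)

N = 60  # a typical number of e-foldings needed for the observable Universe
print(gt_weight(N))          # of order 10**-79: an enormous suppression
print(1 / volume_growth(N))  # the same number: weight ~ 1/volume
```

So a Universe that inflates enough to look like ours would be disfavored by roughly eighty orders of magnitude relative to one that barely inflates at all.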

There is always a possibility that such a devastating conclusion is indeed implied by the full theory of quantum gravity - by the complete equations of string theory that someone will write down in 2020 - and that inflation is completely unnatural in that framework. But I just don't understand why anyone should believe such a conclusion in the absence of a rigorous calculation and in the absence of any advantage - such as agreement with a feature of reality - that such a calculation would naturally imply.

In other words, their picture is quite clearly inferior to the conventional picture, in which inflation is natural, in its ability to describe reality, as explained below. If the conventional picture of any theory happens to require a huge modification, I am afraid that the person who discovers the very different and better picture in the future will have to do most of the job in one step. Quite generally, I don't think that the scientific truth can be systematically looked for (and reached) by creating theories whose justifiability, self-consistency, and agreement with reality are consistently deteriorating in the name of "diversity of ideas" - something we can call the Smolinian way to look for theories. ;-)

In science, we must simply look for better and better theories, and if it is necessary to make the theories look worse than the previous ones for a while, only the final result that is again better should be published. You can't and you shouldn't permanently reveal yourself in the middle of quantum tunneling! The ultimate goal of science is to agree with reality, and Nature imposes both a lower bound and an upper bound on the amount of "diversity of ideas" that can be relevant to describe Her.

Back to Gibbons and Turok.

Technically, they try to define the probability measure as a volume on phase space. Of course, such a volume in general relativity is completely ill-defined because everything is infinite-dimensional and the exact factors in the measure are also unknown. To make it finite, they identify all states of GR whose spatial curvature is smaller than some cutoff "Omega_0". I find this procedure absolutely unphysical. It is a sort of infrared cutoff. The correct way to treat an infrared cutoff in situations such as scattering with light photons is to sum up the probabilities - not the probability amplitudes - including all productions of soft photons below the cutoff. But there is no guarantee that such a sum simply gives a factor of one. On the contrary, it cancels some other divergences.

Inflation is natural and generic

A more direct reason why I am convinced that their calculation can't be right is that many of us are implicitly using a calculation that seems more controllable, less cutoff-dependent, and that simply implies that inflation is natural and the dependence of the probability measure, whatever it is, on the number of e-foldings is at most a power law.

The selection that determines the probability measure occurs in a quantum gravity regime: think about a Universe whose volume is Planckian. The number of e-foldings is determined by the couplings that describe the inflaton potential, among other couplings. And the probability measure will have a relatively innocent - power-law? - dependence on these couplings. That means that the probability measure will depend innocently on the number of e-foldings, too.
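As a concrete illustration - using the textbook quadratic potential in the slow-roll approximation, which is my choice of example and nothing specific to Gibbons and Turok - the number of e-foldings depends only polynomially on the initial field value, and the mass parameter drops out entirely:

```python
import math

def efoldings_quadratic(phi_i, phi_end=math.sqrt(2)):
    # Slow-roll e-foldings for V = (1/2) m^2 phi^2 in reduced Planck
    # units: N = integral of (V/V') dphi = (phi_i^2 - phi_end^2) / 4.
    # Slow roll ends when epsilon = 2/phi^2 = 1, i.e. phi_end = sqrt(2).
    # Note that m cancels: N is a mild, power-law function of the
    # initial condition phi_i, not an exponential one.
    return (phi_i**2 - phi_end**2) / 4.0

print(efoldings_quadratic(15.5))  # roughly 60 e-foldings for phi_i ~ 15.5 M_Pl
```

An order-one change in the Planckian initial data thus changes N by an order-one factor - exactly the kind of innocent dependence described above, with no room for an exp(-3N) penalty to sneak in.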

With these initial conditions and the inflaton near the top, we can naturally obtain "N" e-foldings which will generate a very large and flat Universe with volume scaling like "exp(3N)". This volume "exp(3N)" is a low-energy, late-time consequence of the initial conditions, and this factor simply cannot penalize the a priori probability distribution in the Planckian regime because such an influence would violate causality.

This is a subtlety about the wavefunction of the Universe that many people, in my opinion, see incorrectly. The path-integral prescription for the wavefunction of the Universe automatically satisfies the Wheeler-DeWitt equation, which is why we can also use it to determine the amplitudes of different states at late times when the Universe is already large. But we can only determine these things if we know how to regulate all possible subtleties correctly.

In reality, we don't know how to compute them correctly and thinking about very large late Universes can lead to very different results than thinking about the early Planckian Universes. I am convinced that in the case of any uncertainty or discrepancy like that, the wavefunction of the Universe must be viewed as a tool to calculate the wavefunction in the early, Planckian stage. This is the regime that is fundamental. In this regime, different Universes can have different amplitudes (or probabilities) that are consequently used as initial conditions and evolved to the large Universe we know.

The formulae for the wavefunctions of the Universe should be applied to the smallest possible Universe one can controllably study - as a tool to determine the initial conditions.

Punishing the initial conditions exponentially for their ability to lead to an exponentially large Universe is acausal, anti-fundamental ;-), and strongly disfavored by experiments: observations suggest that inflation works, so inflation should have a non-negligible probability of occurring, which presumably requires that the factor of "exp(-180)" - i.e., "exp(-3N)" for N around 60 - is absent.
