Thursday, February 07, 2008

Aether compactification

One of the undesirable effects of the blogosphere is the self-promotion of dubious papers written by authors who are also bloggers. In the era of classical journalism, journalists were not perfect and they didn't always understand science in depth, but they were usually impartial. Or at least, they understood that they should have been. I think that blogging scientists simply shouldn't introduce this kind of bias and hype into the process of appraisal of scientific work. All responsible people should be cautious about this kind of potentially flagrant conflict of interest.

Because I tend to believe that most TRF readers would agree with me and the statement of mine above is uncontroversial, let me focus on the technical aspects of one particular recent incident, namely the paper about

Aether compactification
by Sean Carroll and Heywood Tam that was promoted on the blog of one of the authors. They more or less explicitly argue that

  1. They have a new scenario for how dimensions can be hidden
  2. The main task of extra-dimensional model builders is to break the rotational symmetry between the ordinary and hidden dimensions
  3. One can construct consistent physical theories by adding arbitrary Lagrange multipliers that impose arbitrary constraints
  4. One can see that in their picture, the masses of scalars, fermions, and gauge fields scale with different powers of a new parameter
  5. The "aether" theory solves some puzzles of current physics or offers some attractive features
  6. The "aether" theory can follow from a consistent theory of quantum gravity as a classical limit

All these statements, assumptions, and beliefs are incorrect, as we will show in detail.

Background

The aether used to be a substance that was thought to fill the space and whose vibrations were identified with the electromagnetic waves. People, including big shots such as Maxwell, simply couldn't imagine that the vacuum itself could have degrees of freedom in it. The aether was breaking the Lorentz symmetry and was ultimately wiped off the map of physics by Albert Einstein in his revolutionary 1905 paper about special relativity.

The whole point of special relativity is to clean up spacetime - to extend the principle of relativity known from mechanics to all other phenomena, including the electromagnetic ones.

A physicist called Ted Jacobson uses the term "aether" for a different class of field theories that also break the Lorentz symmetry, namely theories with an additional vector field whose vacuum expectation value is nonzero - either space-like or time-like. The canonical example of the preferred direction is the 4-vector associated with the rest frame of the 19th century aether. These theories do not solve any problems, every new physical phenomenon in them is inherently incompatible with observations (they are at least as wrong as they are interesting), they effectively return us to the era before 1905, and they show a lack of creativity on the part of their authors.

Carroll and Tam use Jacobson's aether in a five-dimensional spacetime and choose the direction of the aether vector to be in the fifth, space-like dimension. They claim various things, including the list above that I am going to debunk.

The picture is not new in any way

First, Carroll and Tam cite the papers by Arkani-Hamed, Dimopoulos, and Dvali and by Randall and Sundrum that have nothing to do with their naive paper, a paper that is neither about a braneworld nor about a warped geometry. Why do they cite the famous papers? Because they would apparently like to sell their paper as another scenario for extra dimensions. Is it one?

Not really. More precisely, not at all. They consider a normal compactification - it really looks like the circular compactification pioneered by Kaluza and Klein almost 90 years ago. But they add an extra field. The size of the dimension in their picture depends on the probe, and the size seen by the Standard Model particles must be tiny because their Kaluza-Klein modes haven't been observed. So the constraints are exactly what they are in a normal compactification.

There is no new idea here, only an unmotivated addition of some (but not all) Lorentz-violating terms. The constraints must be checked for each particle species separately but it is questionable, to say the least, whether such a specialization is acceptable or worth considering.

Goal of compactification

They seem to believe and even say that the main problem with additional dimensions is the rotational symmetry mixing the dimensions we know with those that we don't. That's, of course, a very small portion of the things we actually care about. The real difficulty of additional dimensions is that we don't see them and we can't move in them. At least we think we can't.

Quantum mechanically, it means that the observations imply that there exist no "cousins" of the known particles that carry an extra momentum or velocity (quantized, in the compact or curved case) along the extra dimensions - the so-called Kaluza-Klein modes. At least, we know that if such particles exist, they must be heavier than the energy scale that has been experimentally tested. Such a condition imposes an upper bound on the size of the extra dimensions. Looking at gravitons, we know that such dimensions must be smaller than 10 microns or so. By analyzing the particles of high-energy physics, we know that the extra dimensions must be smaller than the classical radius of the electron - the scale probed by the cutting-edge accelerators - unless the known particles are stuck on branes.
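
For the record, this is just the standard Kaluza-Klein counting: on a circle of radius R, in natural units,

m_n = \frac{n}{R}, \qquad n = 0, 1, 2, \ldots, \qquad m_1 \gtrsim E_{\rm tested} \;\Rightarrow\; R \lesssim \frac{\hbar c}{E_{\rm tested}},

so, as a purely illustrative estimate, probes at a few hundred GeV translate to radii of order 10^{-18} meters.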

Rotationally symmetric extra dimensions are an extremely special case - the case of infinitely large flat (or uniformly curved) dimensions. If you show that your theory breaks this "mixed" rotational symmetry, you are still very far from showing that the existence of extra dimensions in your theory is compatible with observations.

Compactification always breaks the rotational symmetry mixing the large and compact dimensions and whether you add some additional sources of this breaking - such as the "aether" - is irrelevant. It doesn't help you to achieve anything, it is not necessary, and it leads to additional problems and inconsistencies discussed below. You are really making the picture worse, not better.

Problems with Lagrange multipliers

In the aether theories of the Jacobson type, one deals with a vector field u^m that has a normal Klein-Gordon-like kinetic term in the Lagrangian but also a term with a Lagrange multiplier,

lambda(u^m u_m - a^2),
which imposes the condition that the squared length of the vector equals a^2 at every single point (also in the vacuum). Is it a legitimate formulation of a theory?

The equation of motion derived from varying lambda tells you what the length of u^m should be. Instead of having four "scalar" components, there are effectively three degrees of freedom in the u^m field. All the components must satisfy Klein-Gordon-like equations with an additional term proportional to lambda. Lambda is non-dynamical and must be chosen so that both the modified Klein-Gordon equations and the constraint on the length of u^m give you what they should.
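
To make the counting explicit, here is a minimal sketch with a Klein-Gordon-like kinetic term (the actual paper uses a somewhat different, Maxwell-like kinetic term, and the signs depend on conventions, so the coefficients are only schematic):

\mathcal{L} = -\frac{1}{2}\, \partial_m u_n\, \partial^m u^n + \lambda \left( u_m u^m - a^2 \right), \qquad \frac{\delta S}{\delta \lambda} = 0 \;\Rightarrow\; u_m u^m = a^2, \qquad \frac{\delta S}{\delta u^n} = 0 \;\Rightarrow\; \Box u_n + 2\lambda\, u_n = 0.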

You can solve for lambda - or, using Feynman's quantum jargon, integrate lambda out. Then you will be left with a three-component field and a highly non-linear Lagrangian. It will be, of course, non-renormalizable. Now, there is nothing wrong with classical or effective non-renormalizable theories but there is nothing good about them either. No UV complete quantization of such classical theories is known but even classically, it is questionable whether there exists a rational reason to consider them.
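
In the schematic model above, this is a one-line exercise: contracting the u-equation with u^n and using the constraint fixes the multiplier, and substituting it back leaves a manifestly non-linear equation for the remaining components,

\lambda = -\frac{u^n \Box u_n}{2 a^2}, \qquad \Box u_m - \frac{u^n \Box u_n}{a^2}\, u_m = 0.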

Scaling of masses

The authors argue that in their theory, scalar masses and fermion masses scale very differently with a coupling constant. One can see that this conclusion is an artifact of their incomplete analysis. In other words, they violate the usual rules of field theory: they only write some random terms in the Lagrangian, omit others, and parameterize them in an arbitrary way.

For example, they claim that the fermion masses scale like alpha^2 but the boson masses go like alpha. For the bosons, it is indeed natural to say that the leading contribution to the mass would go like alpha. But for the fermions, it is not alpha^2. Instead, it is alpha even for the fermions. Why do they say it is alpha^2 (or alpha^4 for the squared masses) in the fermionic case? Well, the reason is that they write the coupling
u^a u^b psibar gamma_a partial_b psi
which is quadratic in u^a. However, it is not the leading coupling. Students learn in their QFT I courses that the fermionic terms are, unlike the bosonic ones, linear in the derivatives or the masses. The interaction with u^a is no different. The leading coupling - one that they forget - is
u^a psibar partial_a psi.
If one parameterizes the coefficient of this term naturally, one obtains the same scaling of the mass for the bosons and fermions. That shouldn't be surprising because one can promote u^a to a superfield and supersymmetrize the whole theory. In the supersymmetric theory, the masses of the bosons coincide with those of their fermionic partners.
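
Just to illustrate the counting, take the aether expectation value along the fifth dimension, u^m = (0,0,0,0,a), a circle of radius R, and the simplest linear-in-u bilinear I can write down - a mass-like term rather than the derivative coupling above, and certainly not the authors' parameterization:

\alpha\, u^m \bar\psi\, \Gamma_m \psi \quad\Longrightarrow\quad m_n = \left| \frac{n}{R} + \alpha a \right|

(up to conventions). The deformation of the fermionic spectrum starts at first order in alpha, just like in the bosonic case, and not at order alpha^2.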

On page 3/4, they indicate that they only want quadratic terms in u^m because of a symmetry, u^m goes to -u^m, that they want to impose. But such Z_2 symmetries must always be allowed to be accompanied by a "sign" action on the spinors. The wrong sign from u^m can clearly be compensated by a relative sign in the transformation rule for the two two-component Weyl spinors inside the Dirac spinor: the symmetry is restored.
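
To see why such a sign action is available, work in a 4D-chirality basis where the fifth gamma matrix is the usual gamma^5 and accompany u^m going to -u^m by psi going to gamma^5 psi, i.e. by a relative sign between the two Weyl components, so that psibar goes to -psibar gamma^5. Then

\bar\psi \gamma^\mu \partial_\nu \psi \to +\bar\psi \gamma^\mu \partial_\nu \psi, \qquad \bar\psi\psi,\;\; \bar\psi\gamma^5\psi,\;\; \bar\psi\,\partial_m\psi \to \text{minus themselves},

so a term linear in u^m multiplying one of the odd bilinears is perfectly even under the combined Z_2. Of course, the consistent assignment of signs has to be checked against every term one wants to keep in the action.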

Their statement that their theory predicts different scalings of the masses for bosons and fermions is clearly a result of a sloppy analysis. It is just garbage. One can use the requirement that a Lagrangian must have a symmetry but one can't assume that this symmetry isn't allowed to act on certain fields. They're making random, unjustifiable assumptions, which is why the bulk of their predictions is described by the "garbage in, garbage out" (GIGO) Ansatz.

Decoupling from puzzles of particle physics

It is true that a large portion of phenomenology is about writing unmotivated Lagrangians that don't solve anything but that are just conceivable. It is simply how model building works in most cases - the intense investigation of "unparticle physics" in 2007 is a great example of this phenomenon - and it is a reason why I never considered phenomenology to be as deep and as important as pure theory. After all, most original and meaningful lines of research in contemporary phenomenology have been motivated by pure theory.

But in some cases, this lack of motivation is more blatant than in others. In the case of the aether theories, the lack of motivation is extremely transparent. Why the hell are they obsessed with the breaking of the Lorentz symmetry and why the hell do they do it exactly in the way they do it?

Make no mistake about it, a generic Lorentz-violating theory is just a subject for pre-1905 physics. Unless you have a more sophisticated reason that couldn't have existed in 1905, breaking the Lorentz symmetry is as reactionary ;-) as returning to epicycles or creationism. Moreover, if you break the Lorentz symmetry, you should consider all Lorentz-breaking theories. That's what the rules of field theory dictate to us. The only way that possible terms in the equations of motion can be constrained is to impose a symmetry (or an approximate symmetry with a justifiable quantification of the "proximity"). Once you lose the symmetry, anything goes.

Most of the Lorentz-breaking people constantly violate these rules and they only write some "simple" Lorentz-breaking theories and Ansätze that they like at a given moment but whose structure cannot be determined by any objective or natural criteria. The decision of Carroll and Tam to only write a quadratic, and not linear, coupling of u^m to the Dirac spinors is a textbook example of this misguided approach.

These people are lost somewhere in the infinite-dimensional manifold of worthless theories and they randomly declare infinite-co-dimension submanifolds of this manifold to be more interesting than the rest: they're really guessing all the time. But they never have any rational arguments for these statements. Everything about these random Lorentz-breaking theories is hype and brainwashing. It is about the ability to use politics (or blogs) to convince other physicists or students to work on your rubbish theories. There is no scientific value in this enterprise and taxpayers shouldn't be required to pay for it.

Incompatibility with quantum gravity

I want to say that we kind of know that this rather generic violation of Lorentz symmetry - one that would allow you to choose different speeds of propagation for different particles (and maybe even along different directions of space) - is not allowed in a consistent theory of quantum gravity.

As long as geometry is a good concept, it must play an important role in the Lagrangian. For example, we normally believe that the kinetic terms of all fields - in fact, all terms with derivatives - are controlled by the metric tensor. These terms have lower indices from the derivatives and they are contracted with upper indices from the metric tensor, at least in the crucial terms of the Lagrangian.

If someone claims that these normal terms are already negligible - so that the speed of electrons in the fifth dimension is dominated by their coupling to an "aether" field (as in the Carroll-Tam paper) - he is already breaking the rules of effective field theory because there is no good reason for the "normal" couplings to the metric to be suppressed so much.
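
Schematically, for a single scalar probe the competition is between the two kinetic terms

\mathcal{L} \supset -\frac{1}{2}\, g^{mn}\, \partial_m\phi\, \partial_n\phi - \frac{c}{2}\, u^m u^n\, \partial_m\phi\, \partial_n\phi,

so the propagation effectively feels the combination g^{mn} + c\, u^m u^n (the coefficients and signs are only indicative). Declaring that the aether term dominates the motion along the fifth dimension amounts to postulating that the ordinary g^{55} term is suppressed relative to c a^2 - exactly the kind of unexplained hierarchy that effective field theory tells you not to expect.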

More fatally, this generic breaking of Lorentz invariance seems to be prohibited in consistent theories of quantum gravity. For example, perturbative string theory always has a Lorentz symmetry at short distance scales that acts on all directions of spacetime whose size (and curvature radius) is sufficiently larger than the string length. Only the presence of new objects - such as D-branes - can change this conclusion. Why? It is simply because locally, at distance scales below the curvature radius, the non-linear sigma model action for the string looks like the normal Polyakov action. The latter exhibits Lorentz symmetry: this symmetry is a trivial transformation of the scalars living on the worldsheet. In this sense, string theory really predicts relativity.
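
For reference, the flat-space form of that action is

S = -\frac{1}{4\pi\alpha'} \int d^2\sigma\, \sqrt{-h}\, h^{ab}\, \partial_a X^\mu\, \partial_b X^\nu\, \eta_{\mu\nu},

and a global transformation X^\mu \to \Lambda^\mu{}_\nu X^\nu with \Lambda^T \eta \Lambda = \eta manifestly leaves it invariant because the worldsheet scalars only talk to each other through the constant tensor \eta_{\mu\nu}.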

Closed string fields therefore always respect the Lorentz symmetry in this regime. As far as open string fields are concerned, all the breaking boils down to the existence of Lorentz-breaking D-branes and their worldvolume fields. For example, the D-brane magnetic field is often formulated equivalently as the closed-string B-field and it creates non-commutativity.
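
In that case, the open strings ending on such a brane see effectively non-commuting coordinates along its worldvolume,

[x^\mu, x^\nu] = i\, \theta^{\mu\nu},

with theta determined by the B-field (or, equivalently, the worldvolume magnetic field) and the metric - a very specific, controlled deformation tied to the brane, not a license for generic Lorentz-violating terms.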

There is of course no known string theory that would lead to an "aether" but I am convinced that this is not just an artifact of the particular set of approaches to quantum gravity we can explicitly investigate today. No consistent theory of quantum gravity allows us to break the Lorentz symmetry in the generic way of the "aether" type. In general relativity, the Lorentz symmetry is really incorporated into the local diffeomorphism symmetry, the local symmetry responsible for the decoupling of ghosts (negative-norm states).

The fact that the authors don't care about the renormalization of their Lagrangians effectively means that they don't care about the quantization at all. That also means that they don't want to worry about the likely existence of ghosts and other lethal diseases of possible theories. Not surprisingly, if they ignore such important constraints, the set of "possible" theories becomes much wider. However, the fact that they don't want to look at the ghosts in their theories doesn't mean that no one else will.

A physicist who carefully looks at these things will conclude that these theories cannot arise as a classical limit of a consistent quantum theory. These classical games are unmotivated, they have nothing to do with the progress of physics in the last 90 years, and they should never be presented as a part of the contemporary cutting-edge science because they have nothing to do with it. All "predictions" of such theories always reflect the inability of the authors to include all terms and/or all criteria that are known to be relevant: they reflect the authors' lack of rigor and imagination.

It's too bad that the blogosphere can be used to promote bad science at the expense of good science, and it shouldn't be happening.

And that's the memo.

