Saturday, July 02, 2011

INTEGRAL / IBIS: Lorentz symmetry at the Planck scale holds at 1 part per 100 trillion

Bob has pointed out the following June 2011 preprint:
Constraints on Lorentz Invariance Violation using INTEGRAL/IBIS observations of GRB041219A
They used IBIS, the imager on board the INTEGRAL satellite, to observe the gamma-ray burst GRB 041219A. The polarized gamma rays from this burst allowed them to improve a certain upper bound on the Lorentz violation at the Planck scale by 4 orders of magnitude.

It's a fact that Nature - namely the weak interactions - violates the parity symmetry P, the charge conjugation symmetry C, and their combination CP. So far, the Lorentz symmetry holds perfectly, but if you imagine that the Lorentz symmetry fails at the Planck scale, for whatever rational or religious reasons, there is (almost) no reason why the following higher-dimension term (a dimension-5 operator) in the Lagrangian coming from the Planckian physics shouldn't occur in the effective action:
\mathcal{L} = \frac{\xi}{M_{\rm Pl}}\, n^\mu F_{\mu\nu}\, (n\cdot\partial)\, \left(n_\sigma \tilde{F}^{\sigma\nu}\right)
This term is gauge-invariant and rotationally symmetric, and it is suppressed by the right power of the Planck energy scale from which, by assumption, the Lorentz violation - which picks a preferred unit 4-vector "n" - originates.

INTEGRAL, the satellite

The most obvious consequence of such a term is the chirality-dependent modification of the dispersion relations:
\omega^2 = k^2 \pm \frac{2\xi}{M_{\rm Pl}}\, k^3
where the sign refers to the handedness of the photons. For linearly polarized light, as opposed to circularly polarized light, the polarization plane rotates at a rate determined by the second term in the equation above. This effect, if it existed, would be known as vacuum birefringence.
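To get a feel for the numbers, here is a rough sketch of how the rotation angle accumulated over a cosmological distance translates into a bound on "xi". The inputs (a ~250 keV photon, a burst distance of ~85 Mpc) are illustrative assumptions, not the paper's exact values, and the rotation formula is quoted only up to an O(1) convention-dependent factor: Δθ ≈ ξ k² d / M_Pl in natural units.

```python
import math

# Natural units: hbar = c = 1, energies in eV, lengths in eV^-1.
HBAR_C_EV_M = 1.9732697e-7    # hbar*c in eV*m, so 1 m = 1/HBAR_C_EV_M eV^-1
M_PER_MPC = 3.0857e22         # meters per megaparsec

M_PLANCK = 1.22e28            # Planck energy in eV (1.22e19 GeV)
k = 250e3                     # photon energy ~250 keV (illustrative assumption)
d_mpc = 85.0                  # assumed burst distance ~85 Mpc (illustrative)

d = d_mpc * M_PER_MPC / HBAR_C_EV_M      # distance converted to eV^-1

# Accumulated rotation of the polarization plane between the two helicities:
# delta_theta ~ xi * k^2 * d / M_Pl  (up to an O(1) convention factor)
rotation_per_xi = k**2 * d / M_PLANCK

# If a rotation larger than ~pi/2 would have washed out the observed linear
# polarization, the absence of any rotation implies:
xi_bound = (math.pi / 2) / rotation_per_xi
print(f"xi < {xi_bound:.1e}")   # same ballpark as the quoted 1.1e-14
```

The huge lever arm is the distance: even a tiny per-wavelength effect, suppressed by k/M_Pl ~ 10^{-23}, accumulates over ~10^{31} inverse electronvolts of travel.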

The experimenters detected polarized light in the gamma-ray burst, and the polarization plane wasn't rotating, which allowed them to impose a remarkable upper bound on the dimensionless coefficient "xi" in the term:
\xi < 1.1\times 10^{-14}
In the natural Planck units, the Lorentz violation linked to the chirality-dependence is smaller than one part per 100 trillion. This is an amazing accuracy which, according to the authors, improves the previous bounds by 4 orders of magnitude.

If you imagine any fundamentally Lorentz-breaking theory that is nevertheless compatible with the C-, P-, and CP-violation in the weak interactions, it's almost inevitable that you will produce the term "L" above with a coefficient "xi" of order one. But the actual coefficient has to be smaller than 0.00000000000001 or so. That's quite a problem for any Lorentz-breaking theory.

Amusingly enough, the dimension-5 operator is banned by supersymmetry. Supersymmetry is the only known principle that may "naturally" eliminate the dimension-5 operators such as the term "L" above. In other words, it sets "xi" equal to zero.

Lorentz-breaking dimension-6 operators, which would be compatible with supersymmetry, remain essentially unconstrained by experiments: the corresponding upper bound only says that the coefficient is smaller than 10^{8} or so. Such a coefficient might still naturally be of order one, which is theoretically natural and so far compatible with the experiments.
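One can see why dimension 6 is so much harder to constrain by repeating the same back-of-the-envelope estimate (again with purely illustrative inputs: a ~250 keV photon over ~85 Mpc). A dimension-6 operator's birefringent rotation would carry one extra power of k/M_Pl, i.e. Δθ ≈ ξ₆ k³ d / M_Pl²:

```python
import math

HBAR_C_EV_M = 1.9732697e-7    # hbar*c in eV*m
M_PER_MPC = 3.0857e22         # meters per megaparsec

M_PLANCK = 1.22e28            # Planck energy in eV
k = 250e3                     # photon energy ~250 keV (illustrative assumption)
d = 85.0 * M_PER_MPC / HBAR_C_EV_M    # ~85 Mpc in eV^-1 (illustrative)

# Dimension-5: rotation ~ xi  * (k/M_Pl)   * k * d
# Dimension-6: rotation ~ xi6 * (k/M_Pl)^2 * k * d
# i.e. one extra suppression factor of k/M_Pl ~ 2e-23 for a 250 keV photon.
xi_bound = (math.pi / 2) / (k**2 * d / M_PLANCK)
xi6_bound = (math.pi / 2) / (k**3 * d / M_PLANCK**2)

print(f"dim-5: xi  < {xi_bound:.1e}")
print(f"dim-6: xi6 < {xi6_bound:.1e}")  # ~1e9: no meaningful constraint
```

The extra factor of k/M_Pl ~ 10^{-23} is exactly why a spectacular bound of 10^{-14} at dimension 5 degrades to a vacuous ~10^{9} (the same ballpark as the ~10^{8} quoted above) at dimension 6: order-one supersymmetry-compatible coefficients are far beyond the reach of these photon energies.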

However, it's fair to say that any non-supersymmetric Lorentz-breaking theory of Planck scale physics that's also compatible with the parity violation in the weak interactions has been excluded with 14 orders of magnitude of a "safety margin". It's as if you needed to destroy a Japanese city and, instead of 1 Fat Man, you used 100 trillion Fat Men. :-)

You'd better treat loop quantum gravities, deformed special relativities - and all other "discretizations of spacetime", among other crackpotteries - as toxic garbage that you shouldn't touch.


  1. > and all other "discretizations of spacetime"
  And what about ? I can't understand the idea, so I can't judge for myself.


  2. Oleg,

    Global Lorentz invariance is "obtained" in causal set theory by considering all possible paths between point A (where a particle is created) and some point B (where the particle is annihilated) and then choosing the path between these two points that would have led to the least deviation from a truly straight path, so that the appearance of a constant speed of light in vacuum is maintained. The idea is similar in spirit to quantum field theory, superficially anyway.

    However, consider that there is only a finite number of edges directly connected to point A. This severely restricts the directions along which the newly created particle can begin its propagation, and so local Lorentz invariance is most definitely not a part of the theory.

    Additionally, it is the first point that the newly created particle propagated to that truly matters, not the point where the particle is annihilated. I mean, how else does one define the initial straight path other than by the first edge the particle propagated along?

    So really, a discrete theory is nothing at all like quantum field theory even at the most fundamental of levels.

    This memory of the second point of propagation is tantamount to an extra hidden variable, in my humble and unlearned opinion. In a continuous theory a particle would have two variables: an initial position and a current position. This makes for a single unified local and global direction vector. In a discrete theory a particle would have three variables: an initial position, a position corresponding to the second point propagated to, as well as a current position. This makes for two separate local and global direction vectors. That's just too many variables in the discrete version.

    I'm sure there are many clever arguments to sidestep these arguments of mine, but I'm confident they'd just be more of the same old shell game. I may have also misunderstood some of the basics of the discrete theory, but all of what I said above is beside the point -- I didn't even touch on the much more subtle flaws outlined by Lubos' article.