Monday, September 26, 2011

Superluminal neutrinos from noncommutative geometry

In two previous blog entries, I discussed possible mistakes in the Opera experiment and theoretical reasons why they probably exist.

But it seems pretty likely to me that a Fermilab experiment will confirm the Opera result – either because there is new physics or, more likely, because it will be affected by the same glitch in the GPS system ;-) – and theorists will be increasingly pushed to give an explanation. Imagine that we're really forced to admit that the neutrinos are faster than the photons.

What changes will we apply to our theoretical picture of the world? What's the most sensible setup for rebuilding our understanding of reality? Does string theory offer some semi-natural tools to account for the different speeds? Well, I will mostly promote the famous 3,000-citation 1999 article by Nathan Seiberg and Edward Witten,

String Theory and Noncommutative Geometry
to get a flavor of some semi-realistic attempts to assign different speeds to massless and very light particles. If you want to see what such a Cyber Witten looks like, see the picture above. But let me begin at the beginning:

What to give up: causality or Lorentz invariance?

If you accept that the whole world is (locally) exactly Lorentz-invariant, then faster-than-\(c\) propagation of signals is equivalent to the propagation of signals backwards in time which produces logical paradoxes. This seems absolutely fatal and impossible to me. So if the experiments forced us to revise our picture, we would have to keep causality – one can't modify or rewrite the past – but we would have to give up the exact and universal Lorentz invariance.

By this sacrifice, the word "causality" would return to its pre-relativistic meaning: we can't modify the past but we are allowed to travel faster than photons, at least a little bit. There's no "immediate" contradiction. However, almost all insights of 20th century physics show that special relativity has worked extremely well. So you can't just throw relativity out of the window: you must also explain why it still governs physical phenomena almost exactly even after it has been sacrificed at the fundamental level. Or, if your new physics may be described as a "small correction" to the old physics, you must explain why the correction is so small.

Physics is not about throwing Einstein into the trash bin after a century of similarly exaggerated worshiping of this amazing physicist. Physics is about explaining the events in the real world, and most of them won't change after the Opera experiment even if the big conclusions of the experiment are right. So whatever new theory replaces the exactly Lorentz-invariant theory we were used to believing in, it should still produce pretty much the same, almost exactly Lorentz-invariant predictions for the situations that have been successfully tested in the past.

String theory and Lorentz invariance

String theory guarantees that in a "truly empty space", Lorentz invariance holds exactly. In perturbative string theory, for example, this \(SO(d-1,1)\) symmetry boils down to the same symmetry group locally transforming spacetime coordinates that are represented by fields on the world sheet. You can't really get rid of the symmetry (acting locally in spacetime) between such bosonic fields with kinetic terms in the "true vacuum" of string theory. However, you may obviously break the symmetry by "inserting something into the space". And I don't mean just particles: you may insert fields.

For example, if there's a vertical magnetic field somewhere, it induces some violation of the rotational symmetry. Only the rotations around the axis parallel to the field remain symmetries; the others cease to be symmetries. To say the least, they're not exact symmetries anymore.

String theory has various fields of this kind. I think that the most sensible guess is the stringy \(B\)-field from the papers about noncommutative geometry such as the famous Seiberg-Witten 1999 paper. What is it about?

Brane setup

One studies D-branes floating in a higher-dimensional spacetime equipped with a \(B\)-field with nonzero components \(B_{ij}\). Special gauge transformations for this \(B\)-field allow us to set it to zero in the bulk; however, these gauge transformations create a compensating magnetic field \(F_{ij}\) inside the brane's U(1) gauge group, so that some combination \(B_{ij}+F_{ij}\) is invariant under these gauge transformations. (I will be choosing non-standard normalization conventions to streamline the text and keep only the moral core.)

So inside the D-brane, you may either say that there is a nonzero \(B\)-field or a nonzero \(F\)-field. What are the impacts of this field – and let's talk about the \(B\)-field for the sake of clarity? For example, a D-brane without any such \(B\)-field produces a Yang-Mills theory as the effective field theory description at long distances. What do we get if we deform the D-brane by the \(B\)-field?

(In supersymmetric stringy vacua, the \(B\)-field is massless and all values are equally allowed and may change from one place to another. In realistic vacua, the \(B\)-field gets reinterpreted as the "universal axion" which has to acquire a mass. But I need to assume that the relevant physics is still similar to the first case where the \(B\)-field is nonzero, perhaps stabilized at a nonzero value.)

The answer is that we still obtain a Yang-Mills theory but a noncommutative one. At this point, I have to explain the difference between "non-Abelian" and "noncommutative". In mathematics, these two adjectives may be considered synonyms. But in theoretical physics, they have acquired two distinguishable meanings. Non-Abelian gauge transformations – which define Yang-Mills theory – are given by matrices \(G,H\) that may transform the colored fields at the same spacetime point but that still fail (or refuse) to commute with one another:
\[ GH \neq HG \] On the other hand, "noncommutative" also means that something doesn't commute. But what doesn't commute are spacetime coordinates. In particular, the spatial coordinates satisfy
\[ [x^i,x^j]=i \theta^{ij} \] Here, \(\theta\) represents an antisymmetric matrix with two spatial vectorial indices. If it's nonzero, one obviously breaks the rotational symmetry. For example, if the component \(\theta^{12}\) is nonzero, one preserves the rotations of the directions 1,2 i.e. rotations around the 3rd axis. But the rotations mixing the 3rd axis with the other two are broken, at least a little bit.

(Yes, for those vacua, we must also break the rotational symmetry. Think about preferred directions in space and various astronomical hints that those might exist.)
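The breaking of rotational symmetry can be seen in a minimal numeric sketch (my own illustration; the array and function names are made up). Under a rotation \(R\), the tensor \(\theta^{ij}\) transforms as \(\theta\to R\theta R^T\); with only \(\theta^{12}\) switched on, rotations around the 3rd axis preserve \(\theta\) while rotations mixing the 3rd axis with the others do not:

```python
import numpy as np

# theta^{ij} with only the (1,2) component turned on (units of theta^{12} = 1)
theta = np.array([[ 0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [ 0.0, 0.0, 0.0]])

def rot(axis, angle):
    """Rotation matrix about one of the three coordinate axes (indices 0,1,2)."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = [k for k in range(3) if k != axis]
    R = np.eye(3)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

angle = 0.3
R3 = rot(2, angle)   # rotation around the 3rd axis (mixes directions 1 and 2)
R1 = rot(0, angle)   # rotation around the 1st axis (mixes the 3rd axis in)

print(np.allclose(R3 @ theta @ R3.T, theta))   # True: theta preserved
print(np.allclose(R1 @ theta @ R1.T, theta))   # False: theta changed
```

The second rotation moves part of \(\theta^{12}\) into the \(\theta^{13}\) component, which is exactly the statement that such rotations are (slightly) broken.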

You should notice that the nonzero commutator is fully analogous to the nonzero commutator of the position and momentum in quantum mechanics:
\[ [x,p] = i\hbar \] This defining commutator \([x,p]\) of quantum mechanics is fully mathematically isomorphic or analogous to the nonzero \([x^1,x^2]\) commutator in a noncommutative field theory. This allows us to use all mathematical tools that quantum mechanics has developed in the phase space. However, we should still realize that there is a different interpretation: the plane spanned by \((x^1,x^2)\) is a normal "classical" plane and it doesn't necessarily require us to adopt a quantum, probabilistic interpretation of anything. The coordinates of the noncommutative space on which a noncommutative field theory is defined obey the same rules as the phase space coordinates of quantum mechanics.
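The defining commutator can be verified directly. Here is a minimal sympy sketch (my own check, not from the paper): represent \(\hat x\) as multiplication by \(x\) and \(\hat p\) as \(-i\hbar\,\partial_x\), and apply the commutator to an arbitrary test function \(f(x)\):

```python
import sympy as sp

# Schrödinger-type representation of the canonical pair (my own naming):
# X multiplies by x, P differentiates with the -i*hbar prefactor.
x, hbar = sp.symbols('x hbar', real=True)
f = sp.Function('f')(x)

def X(g):      # position operator: multiply by x
    return x * g

def P(g):      # momentum operator: -i*hbar * d/dx
    return -sp.I * hbar * sp.diff(g, x)

# [x, p] applied to f: the x*f' pieces cancel, leaving i*hbar*f
commutator = sp.simplify(X(P(f)) - P(X(f)))
print(commutator)   # -> I*hbar*f(x), i.e. [x, p] = i*hbar
```

Exactly the same algebra, with \(\hbar\) replaced by \(\theta^{12}\), governs the noncommutative \((x^1,x^2)\) plane.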

What is noncommutative field theory?

So what do we mean by a noncommutative field theory? Imagine that this field theory is defined on spacetime coordinates \(x,p,z,t\): I renamed \(y\) to \(p\) to make the analogy with the phase space manifest. Well, all the fields in such a field theory are functions of these four coordinates but \(x,p\) don't commute with each other.

This means that we have to distinguish e.g. the function \(xp\) of these two coordinates from \(px\). We seem to have a much higher number of functions of two operators than of two ordinary real variables \(x,y\). But this is just an illusion. The number of functions is actually identical because \(xp\) and \(px\), while being different, are closely related: they only differ by \(i\hbar\) or \(i\theta^{12}\) or whatever symbol you use for the commutator.

In fact, it's straightforward to prepare a dictionary between all possible functions of the commuting real variables \(x,p\) and all possible functions of the noncommuting operators \(x,p\). How do you do that? Well, you just identify a function of the commuting variables \(x,p\) e.g. with the fully symmetrized function of the operators. So you have:
\[ x^m p^n \leftrightarrow \frac{1}{(m+n)!} \sum_{i=1}^{(m+n)!} {\rm permutation}_i (\hat x^m \hat p^n) \] The right hand side sums over all \((m+n)!\) permutations of the \(m+n\) operator-valued factors. I have included the hats on the right hand side for the sake of clarity. If you think about it for a little while, you will see that any function of the operators, even a function with a non-symmetrized ordering, can be written in terms of the symmetrized ones. For example,
\[ \hat x \hat p = (\hat x \hat p)_{\rm sym} + \frac {i\hbar}{2}. \] The first term on the right hand side is represented by the simple \(xp=px\) function of the commuting coordinates. The two differently ordered products of \(\hat x,\hat p\) differ by \(i\hbar\) and the symmetrization is their arithmetic average, so the equation above follows. It's not hard to see that this procedure applies to arbitrary polynomials in \(\hat x,\hat p\). So morally all functions (because all well-behaved functions may be imitated and approximated by Taylor expansions) of the operators may be represented by ordinary functions of the real variables \(x,p\).
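The \(i\hbar/2\) shift can be checked in the same Schrödinger-type representation. The sympy fragment below (my own verification, not from the paper) computes \(\hat x\hat p\,f\) and the symmetrized \(\tfrac12(\hat x\hat p+\hat p\hat x)f\) and confirms that they differ by \(\tfrac{i\hbar}{2}f\):

```python
import sympy as sp

# My own check: x-hat multiplies by x, p-hat acts as -i*hbar*d/dx,
# both applied to an arbitrary test function f(x).
x, hbar = sp.symbols('x hbar', real=True)
f = sp.Function('f')(x)

X = lambda g: x * g                         # position operator
P = lambda g: -sp.I * hbar * sp.diff(g, x)  # momentum operator

xp_f  = X(P(f))                    # (x-hat p-hat) f
sym_f = (X(P(f)) + P(X(f))) / 2    # arithmetic average of the two orderings
difference = sp.simplify(xp_f - sym_f)
print(difference)   # -> I*hbar*f(x)/2, the i*hbar/2 shift from the text
```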

What if you want to mimic the noncommutative multiplication of operators by ordinary functions of the real (commuting) variables \(x,p\)? Note that the latter commute with each other so the two multiplications can't be the same. However, there is a nice prescription. The noncommuting product of the operators on the left hand side
\[ \hat F \hat G \leftrightarrow F * G \] may be identified with the so-called star-product (a non-commuting deformation of the ordinary product of two functions) of the corresponding "ordinary functions" of the commuting variables. I can even tell you what the star product is:
\[ \begin{align} & F(x)*G(x) = \\ &=\exp \left(\frac{i}{2} \theta^{ij} \frac{\partial}{\partial\alpha^i} \frac{\partial}{\partial\beta^j} \right ) \left. F(x+\alpha) G(x+\beta) \right |_{\alpha=\beta=0} \end{align} \] The exponential of the differential operator may be evaluated by a Taylor expansion if you can't do it otherwise. The first term in the expansion of the exponential, \(1\), will give you the normal product which is the leading term in the star-product. The other terms may be viewed as "nonlocal corrections" and they're suppressed by powers of \(\theta^{ij}\).

You should check (at least by choosing an example) that if you first take two operators \(\hat F,\hat G\), multiply them as operators, and translate the product into a function of the commuting variables \(x,p\), you will get the same thing as if you first translate the operators to commuting functions \(F,G\), and then star-multiply these two functions of \(x,p\).
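Such a check is easy to automate for polynomials. The sketch below (my own implementation of the truncated Moyal expansion; the function name `star` is made up) implements the star-product on the \((x,p)\) plane with \([x,p]=i\theta\) and confirms that \(x*p-p*x=i\theta\):

```python
import sympy as sp
from math import factorial

x, p, theta = sp.symbols('x p theta')

def star(F, G, order=6):
    """Moyal star product F*G on the (x,p) plane, [x,p] = i*theta.
    Truncated at 'order'; exact for polynomials of low enough degree."""
    total = 0
    for n in range(order + 1):
        term = 0
        for k in range(n + 1):
            # n-th order piece of exp((i*theta/2)(d_x d_p' - d_p d_x'))
            term += (sp.binomial(n, k) * (-1)**k
                     * sp.diff(F, x, n - k, p, k)
                     * sp.diff(G, x, k, p, n - k))
        total += (sp.I * theta / 2)**n / factorial(n) * term
    return sp.expand(total)

# x*p and p*x differ by the expected i*theta/2 shifts,
# and the star-commutator reproduces the defining relation:
print(star(x, p))                              # equals x*p + I*theta/2
print(sp.simplify(star(x, p) - star(p, x)))    # equals I*theta
```

You can also star-multiply higher polynomials and compare with the symmetrized-operator dictionary from the previous section; the two routes agree.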

Now, I am getting to the key point. The Lagrangian of a noncommutative field theory is almost the same thing as the Lagrangian of a commutative field theory you know; the only difference is that all products of fields in your Lagrangian have to be replaced by star products. For example, if you have a \(\lambda \phi^4\) theory, its Lagrangian contains the quartic term. If you replace
\[\phi^4 \quad{\rm by}\quad \phi*\phi*\phi*\phi\] and if you do it with all the terms in the Lagrangian, you will get the Lagrangian of the corresponding noncommutative field theory. There are many things to learn about such theories.

For example, the bilinear terms also have to use the star-product but if you replace the star-product by the normal product in these terms, you won't change the Lagrangian. So the star-product only affects the higher-order terms. And the Feynman rules are almost identical to the normal Feynman diagrams: the vertices just contain an extra phase, \(\exp(ih_{mn}p^m q^n)\), whose exponent (the angle) is bilinear in pairs \(p,q\) of the attached momenta and \(h\) is linear in \(\theta\): the momenta arise simply from the derivatives inside the exponent defining the original star-product in the position representation. Because of this phase, the diagrams actually get more convergent in the UV and things become simpler, not harder. There are lots of discussions of how these theories exhibit the UV/IR mixing: some ultraviolet divergences are traded for infrared ones etc.

Seiberg and Witten – and many others – have derived many interesting properties. An important observation by Seiberg and Witten is that such noncommutative gauge theories – the star-product squeezed into a Yang-Mills Lagrangian – are actually equivalent to an ordinary commutative Yang-Mills theory with some gauge-invariant corrections etc. And I must mention that Gopakumar, Minwalla, and Strominger, in a beautiful paper at the end of the last century, showed that one may find exact solitons of similar theories and that they have some nice new "quantization" properties. In fact, the search for such solutions simplifies in the "large noncommutativity limit", i.e. the opposite limit to the one you get if you imagine that noncommutative field theories are just small deformations of the commutative ones.

I don't want to describe all papers ever written about this topic because just Seiberg and Witten's 1999 paper has about 3,000 citations and 3,000 papers is close to 100,000 pages, so even if one admits that 95% of the stuff in them is redundant, repetitive, overly slow, highly unimportant, and sometimes even wrong, even the rest that doesn't fit into any of these categories would force me to cover 5,000 pages. To make the presentation pedagogic, I would have to expand it to 50,000 pages and some readers could find such a 50-megabyte blog entry boring, not to speak about the fact that my compensation for explaining that would be 10 orders of magnitude below the minimum hourly wage.

Closed string metric and open string metric

Why is this possibly relevant to the fast neutrinos? Well, it turns out that there is a funny thing going on about the speed of light in the noncommutative theories.

The ultimate speed limit may be identified with the speed of gravitons. They're massless particles in the bulk and all the \(B\)-fields may be removed from the bulk by the gauge transformations. So the closed strings such as gravitons can't possibly know about your spontaneous breaking of the Lorentz invariance. You may use a \(B\)-independent "closed string metric" in the bulk and talk about the normal relativity with the normal speed of light.

If we could measure the speed of the gravitons, string theory clearly says that nothing could ever be faster.

However, open strings attached and confined to D-branes with a \(B\)-field in them behave differently. For them, it's more natural to introduce the so-called open string metric. It also defines its own natural "special relativity" but it has a different speed limit. If the deviation between both metrics is small, equation (2.5) of Seiberg-Witten tells you that
\[ g_{\rm open} = g_{\rm closed} + (2\pi\alpha'B)^2 \] See the equation in the Seiberg-Witten paper for details. The metric tensors are not proportional to each other – the redefinition is different for different values of the Lorentz vector indices. Consequently, the speeds of light as seen by the two metrics differ, too.

Because you want the deviation of the D-brane speed of light from the maximum one to be of order \(10^{-5}\), it follows that
\[ \alpha' B \sim 10^{-3} \] so that its square is \(10^{-6}\) and with the \(2\pi\) factors, you may get to the \(10^{-5}\) territory. Approximately. Great, so we need the \(B\)-field in the string units \(1/\alpha'\) to be about \(0.001\).
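As a sanity check of the arithmetic (my own back-of-envelope script, nothing more): with \(\alpha'B\sim 10^{-3}\), the schematic metric shift \((2\pi\alpha'B)^2\) indeed lands in the \(10^{-5}\) territory quoted for the Opera anomaly:

```python
import math

# Rough numeric check of the estimate in the text:
# with alpha'*B ~ 1e-3 (in string units), the schematic deviation of the
# open string metric from the closed string one is (2*pi*alpha'*B)^2.
alphaB = 1.0e-3                       # dimensionless alpha' * B
shift = (2 * math.pi * alphaB) ** 2   # deviation of g_open from g_closed
print(f"{shift:.2e}")                 # -> 3.95e-05, the 1e-5 ballpark
```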

Neutrinos would be the faster particles so they would be like the gravitons. It's very unnatural to imagine that neutrinos are just like gravitons: they seem more different from gravitons (and less closed-string-like) than the photons. Instead, it's more reasonable to expect that photons and neutrinos would mostly live at different D-branes and the neutrino D-brane would allow a slightly higher maximum speed for open strings which would still probably be lower than the limiting speed of gravitons in the bulk. But it could be the same, too.

Dispersion relations, Michelson-Morley

One would have to check many, many things to see whether such a picture would be consistent with all known experiments. One important property is that the speed of photons would be universal, independent of the photons' energy. That's very important because we have clearly measured that there can't be any noticeable dependence of the photon speed on the photon energy. The light from distant supernovae would otherwise arrive as a very, very slow rainbow continuously changing colors.

The photons' speed would be something like \(0.99999c\) where \(c\) is the limiting speed, the speed of gravitons, and it would be independent of the energy. One should realize that the Lorentz transformations become subtle. The gravitons' metric leads us to use different transformations than the photons' open string metric. Which one is more correct?

The gravitons' Lorentz transformation is more universal and more fundamental. However, it's likely that the relevant Lorentz transformation is one connected with the metric tensor as seen by the photons, namely their D-brane's open string metric.

I think that when one looks at the supernova – and (Fermi's...) gamma-ray-burst – data in some detail, she will see that one can't create a sensible theory (and not even a sensible collection of phenomenological relationships and dependencies) that is compatible with everything we know. It's also very likely that the other particles besides photons would have a limiting speed that differs both from the neutrinos' speed and the photons' speed.

However, stringy vacua inspired by noncommutative field theories which produce several different "metrics" that govern different types of physical phenomena are the most realistic direction in which I would try to find a new description of the observed anomalies as well as all the well-known old physical phenomena. Most likely, I would fail at the end but the failure wouldn't be as obvious as the failure of many other transformations of the paradigm that some other people have proposed.


snail feedback (12) :

reader Plato said...

Thank you Lubos,

What is a sterile neutrino?:)

“Many extensions of the Standard Model of particle physics, including string theory, propose a sterile neutrino;”

What is the implication then of contact with Ice if it were discovered that the impact was with proponents of the sterile neutrino's already indicative of the ideas of cerenkov?

Yes to Graviton? We are looking at a bulk space?


reader Luboš Motl said...

Dear Plato, normal neutrinos only interact through gravity (as everything with mass/energy) and the weak force (that's why they appear in beta-decay) etc.; they don't interact electromagnetically (not charged) and they don't interact by strong interactions (not colored like quarks).

Sterile neutrinos don't even interact by the weak force (only by gravity); they are not parts of SU(2) doublets with leptons etc. Right-handed neutrinos and completely new "singlet" types of neutrinos are sterile.

I suppose that normal neutrinos have sex by being in the same "doublet" with the electron and friends so the sterile neutrinos are sterile because they can't have this sex, so they don't participate in similar weak interactions.

I don't believe it's sensible to discuss discovery of string theory, black holes, and other super ambitious things by IceCube at this time. The Wikipedia text is extremely optimistic, to put it mildly.


reader B Yen/Getty Images [ iTunes demo ] said...

Interesting coincidence. After I scrambled to send you an email (signal measurement statistics, Re: OPERA experiment), I sped off to a lecture yesterday (Mt Wilson Observatory lecture series) by Don Nicholson, see this; there was a slide on Albert Michelson & a device to measure the speed of light. Other big-name iconic researchers were brought up: Robert Millikan (Caltech, famous for the oil drop experiment measuring the charge of the electron), George Hale (founder of Mt Wilson, Mt Palomar, etc.), Walter Baade, Fritz Zwicky/Caltech. Definitely, current researchers need a sense of history. The OPERA result is preliminary, & needs time for scrutiny.

reader Hontas said...

This is a very good blog posting, and I will read that paper and strain to understand it.

It is so nice to be able to agree with you about something, for once.

reader Plato said...

Hi Lubos,

In no way would I ever contest LIGO operations :) Nor would I contest muon relativistic effects :) Tomography density factors give an interesting picture of magma flows?

But seeing a consistent feature of gravity through "all phases" is important for sure. Gravity and light in the fifth dimension? What is consistent with Einstein's relativity must be consistent in the bulk space??

Kaluza/Klein theory reveals the difficulties with which you are referring?

SciBooNE and MiniBooNE inspect the neutrino pie

Thanks for your patience.


reader Andrew Palfreyman said...

There's been some talk of neutrinos not coupling to gravitons and thus the shorter transit time is an off-mass-shell path and "more direct" than the curved terrestrial geodesic. My back-of-envelope calculation shows the effect to be a million times too small to account for the experimental result, so it doesn't appear to be fruitful to hypothesise about neutrinos not gravitating. If we are forced to consider alternates, I'm betting on off-shell paths and these having some relationship to the on-the-fly colour changes.

reader Matti Pitkanen said...

Super-luminal neutrinos and many other anomalies not taken seriously by the mainstream hitherto can be understood easily in the TGD framework, where sub-manifold geometry replaces abstract manifold geometry so that one must distinguish between the maximal signal velocity along a given space-time sheet and in the imbedding space M^4xCP_2.

The latter gives absolute upper bound for the former. There is no need to break causality. The effect could be studied also for other relativistic particles, say electrons.

Even electric circuits could be used, and the old strange results by posting and its predecessor. The model can be modified so that it applies in braney M-theory and probably the representatives of the M-theory hegemony will "discover" the explanation within a few days.

reader John said...

If gravitons are the true fastest particle, and the neutrino velocity measurements are accurate, then it seems there's a floor that's been set on how much sooner we'd detect a gravity wave prior to light from an event reaching us.

reader Rosy Mota said...

the non-commutative geometry implies violations of Lorentz invariance and violations of PT, which implies non-causal processes;
that is, there are several ways that imply that photons and neutrinos cannot have a limited speed. some antineutrinos don't exist in our universe-metrics, so the rotational invariance is violated. then causality implies infinitely many ways of "past", "present" and "future"

reader Rosy Mota said...




reader Rosy Mota said...

i think that the split-quaternionic algebra, linked to a non-commutative and semi-associative topological geometry, could explain the mathematical foundations of SPECIAL THEORY OF RELATIVITY THROUGH SUPERLUMINAL SIGNALS (AS NEUTRINOS THAT TRAVEL ALSO WITH SPEEDS SMALLER THAN THE SPEED OF LIGHT, AND TACHYONS) that would help explain Lorentz invariance as well as CAUSALITY.

reader Rosy Mota said...

not is only split -quaternions but yes split-quaternions