Friday, March 20, 2015

LHCb insists on a near-discovery muon penguin excess

None of the seemingly strong anomalies reported by the LHCb collaboration has survived as a genuine discovery so far, but many people believe that events of this kind are not being overlooked by TRF and rely on this blog as a source, so I must give you a short report about a new bold announcement by LHCb.

20 March 2015: \(B^0\to K^*\mu^+\mu^-\): new analysis confirms old puzzle (LHCb CERN website)
In July 2013, TRF readers were told about the 3.7-sigma excess in these muon decays of B-mesons.

The complete 2011-2012 dataset – just 3 inverse femtobarns, because we are talking about LHCb (perhaps I should remind you that it is a "cheaper" LHC detector that focuses on bottom quarks and therefore on CP violation and flavor violation) – has now been analyzed. The absolute strength of the signal has decreased, but so has the noise, so the significance level has remained at 3.7 sigma!
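Just to illustrate the arithmetic of that last sentence (a toy sketch with invented numbers, not LHCb's actual fit values): the significance is a ratio of the deviation to its uncertainty, so if both shrink by a similar factor, the number of sigmas stays put.

```python
# Toy illustration with made-up numbers (not LHCb's actual values):
# a significance is "deviation divided by uncertainty", so a smaller
# deviation with a proportionally smaller error bar gives the same sigma.
def significance(deviation, uncertainty):
    return deviation / uncertainty

print(significance(3.7, 1.0))   # hypothetical earlier dataset: 3.7 sigma
print(significance(2.6, 0.7))   # smaller signal, smaller noise: still ~3.7 sigma
```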




The Quanta Magazine quickly wrote a story with an optimistic title
‘Penguin’ Anomaly Hints at Missing Particles
where a picture of a penguin seems to play a central role.




Why are we talking about these Antarctic birds here? It's because they are actually Feynman diagrams.



The Standard Model calculates the probability of the decay of the B-mesons to the muon pairs via a one-loop diagram – which is just the skeleton of the picture above – and this diagram has been called "penguin" by particle physicists who didn't see that it was really a female with big breasts and a very thin waistline.

But there may have been more legitimate reasons for the "penguin" terminology – for example, because it sounds more concise than a "Dolly Buster diagram". ;-)

The point is that there are particular particles running in the internal lines of the diagram according to the Standard Model and an excess of these decays would probably be generated by a diagram of the same "penguin" topology but with new particle species used for the internal lines. Those hypothetical beasts are indicated by the question marks on the penguin picture.
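For readers who want slightly more detail: the standard way theorists parameterize these \(b\to s\mu^+\mu^-\) penguin decays is an effective Hamiltonian with Wilson coefficients. This is a textbook-style sketch (normalization conventions differ between papers):

\[
\mathcal{H}_{\rm eff} \supset -\frac{4G_F}{\sqrt{2}}\,V_{tb}V_{ts}^*\,\frac{e^2}{16\pi^2}\left(C_9\,\mathcal{O}_9 + C_{10}\,\mathcal{O}_{10}\right) + \text{h.c.}
\]
\[
\mathcal{O}_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\mu}\gamma^\mu\mu), \qquad
\mathcal{O}_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\mu}\gamma^\mu\gamma_5\mu).
\]

The Standard Model loop diagrams fix the values of \(C_9\) and \(C_{10}\); new particles running inside the penguin would shift them, and the published fits of this anomaly typically prefer a negative shift of \(C_9\) of order one.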

Adam Falkowski at Resonaances adds some skeptical words about this deviation. He thinks that the Standard Model prediction is highly uncertain, so there is no good reason to conclude that it must be new physics, even though he also thinks it has become very unlikely that the deviation is just noise.

Perhaps more interestingly, the Quanta Magazine got an answer from Nima, who has talked about his heart being broken by the LHCb numerous times in the past.

Various papers have proposed partially satisfactory models attempting to explain the anomaly. For example, two months ago, I described a two-Higgs-doublet model with a gauged \(L_\mu - L_\tau\) lepton number which claims to explain this anomaly along with two others.

Gordon Kane discussed muon decays of B-mesons in his guest blog in late 2012, before similar anomalies became widely discussed by the experimenters, and he sketched his superstring explanation for these observations.

LHCb is a role model for an experiment that "may see an anomaly" but "doesn't really tell us who the culprit is" – the same unsatisfactory semi-answer that you may get from high-precision colliders etc. That's why brute force and high energy – along with omnipotent detectors such as ATLAS and CMS – seem to be so clearly superior in the end. The LHCb is unlikely to make us certain that it is seeing something new – even if it surpasses 5 sigma – because even if it does see something, it doesn't tell us sufficiently many details for the "story about the new discovery" to make sense.

But it's plausible that these observations will be very useful when a picture of new physics starts to emerge thanks to the major experiments...

The acronym LHCb appears in 27 TRF blog posts.


snail feedback (19) :


reader kashyap vasavada said...

Off topic but relevant
Hi Lubos: Where can I read about the technical changes in the LHC to double the energy? From simple physics, they probably increased the number of turns in the solenoids producing the magnetic field. Is the current of 11000 A a limit on what the wires can stand? Also, the superconductivity may go away for a high enough magnetic field. What about voltages (electric fields)? My basic question is: have they reached the technical limits on electric and magnetic fields for the LHC, so that any increase in energy beyond doubling is out of the question?


reader Luboš Motl said...

Great questions, but I am not sufficiently certain about the answers – how far the limit on the maximum magnetic field is, and what was really the key to increasing it (but yes, it was the magnetic field, not the electric field, that had to be raised to increase the energy).


reader kashyap vasavada said...

Thanks, Lubos. Can you invite some experimental physicist or engineer from CERN to write a blog post on the technology of the LHC?


reader Luboš Motl said...

Maybe but I don't want to. There are lots of other places where you may deal with such things, aren't there?


reader Pavel said...

AFAIK the LHC was designed for an energy of 14 TeV. It was running at half energy because of the damaging quench at start-up. So the major changes improved the wire connectors to prevent quenches and improved the system for handling the situation if a quench happens.


reader OXO said...

"Not to write about things I don't understand"

This is why we love you Lumo.


reader Luboš Motl said...

Thanks, Pavel, you must be right! I've heard it many times, indirectly, but haven't internalized it.


No qualitative change has really taken place – they could have run at those 14 TeV even with the LHC that existed in 2011, if the courage had existed, but the connections could have failed and caused quenches again.


reader W.A. Zajc said...

You are correct, Pavel. 14 TeV is the design energy of the LHC, and it was the interconnects, not the magnets themselves, that were upgraded to allow the machine to operate reliably at 14 TeV.

The LHC magnets use niobium-titanium wires as the superconducting material. The dipoles operate at a little over 8 Tesla. In principle Nb-Ti can remain superconducting up to 15 Tesla, but that is not the issue. Quenches can occur when the mechanical stress from the B-field causes a coil winding to flex ever so slightly. The friction creates a local hot spot; that region stops being a superconductor, and you have a messy situation from the enormous stored energy in the field – about 7 MJ for an LHC dipole. (Recall also that the magnetic forces go as the square of the field.) So I would imagine that the maximum operating field for the LHC dipoles is set by mechanical considerations, and is unlikely to have a lot of headroom to go beyond 14 TeV.

Kashyap, you can find most of this information with a simple Google search. My first hit was this http://home.web.cern.ch/about/updates/2013/11/test-magnet-reaches-135-tesla-new-cern-record which talks about potential upgrades using niobium-tin superconductors; links from that page along with Wikipedia will tell you all you need to know.
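As a back-of-the-envelope cross-check of the stored-energy figure quoted above (a crude uniform-field sketch: the 8.3 T field, 28 mm aperture radius and 14.3 m magnetic length are the usual published LHC dipole numbers, while the "effective field radius" of roughly twice the aperture is just an illustrative fudge for the field outside the beam pipe):

```python
import math

MU_0 = 4 * math.pi * 1e-7          # vacuum permeability [T*m/A]

B = 8.3                            # nominal LHC dipole field [T]
aperture_radius = 0.028            # beam-pipe aperture radius [m]
length = 14.3                      # magnetic length of one dipole [m]

# Energy density of a magnetic field: u = B^2 / (2 mu_0)
energy_density = B**2 / (2 * MU_0)                 # ~27 MJ per cubic metre

# One dipole has two apertures; take an effective field radius of about
# twice the beam-pipe radius as a rough stand-in for the real coil geometry.
effective_volume = 2 * math.pi * (2 * aperture_radius)**2 * length

stored_energy = energy_density * effective_volume  # comes out around 7-8 MJ

print(f"energy density ~ {energy_density/1e6:.0f} MJ/m^3")
print(f"stored energy per dipole ~ {stored_energy/1e6:.1f} MJ")
```

Since both the energy density and the magnetic stresses scale as \(B^2\), doubling the field quadruples them, which is consistent with the mechanical-headroom argument above.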


reader kashyap vasavada said...

Thanks, Zajc, for the detailed comment! Now it is getting closer to what I want to know. I will look at the link sometime. As Lubos says, the magnetic field is the villain, not the electric field!


reader etudiant said...

Got to watch those mathematicians; when they get into politics they are too damn rational. Maybe anyone who has passed first-year calculus should be prohibited from public office. ;)


reader Joe Shipman said...

Vopenka's most famous contribution to set theory was originally a joke – set theorists liked to come up with larger and stranger versions of infinity, and he published a paper about the largest one yet, which was defined in a remarkably simple way. He was hoping that the other set theorists would write papers on it and be embarrassed when he published a sequel proving that it didn't exist.

The joke was on him because his proof of inconsistency was flawed and after a couple of years of non-communication due to the Russian invasion he learned that they had named the number after him!

A Vopenka cardinal is a set so large that any structure of that size must contain an equivalent substructure of smaller size. In other words, it is so big that it can't be described as the smallest example of anything!


reader Luboš Motl said...

Amusing...


reader Rehbock said...

I do wish we could require testing for those seeking office and of the voters


reader Dilaton said...

Phil Gibbs sometimes reports about such technical things ...


reader Dilaton said...

And by no means should Tommaso Dorigo be invited here ;-)

Or maybe it is sane if he talks about experimental things, but about theoretical physics he should really shut up ...


reader FlanObrien said...

“We Honestly Have No Fucking Idea What We’re Doing”, Admits Leading Quantum Physicist
http://waterfordwhispersnews.com/2015/02/10/we-honestly-have-no-fucking-idea-what-were-doing-admits-leading-quantum-physicist/



Lubos please save the day. :-)


reader QsaTheory said...

He is into quantum consciousness and meditation and such.

https://www.facebook.com/quantumactivism


reader Liam said...

Sad news to hear this great mathematician has passed away :(

I'd never heard of these Vopěnka cardinals before. Their characterising property is reminiscent of downward Löwenheim–Skolem (only much stronger!), whereby if a first-order theory in a countable language has a model of infinite cardinality kappa, then it has models at all infinite cardinalities less than kappa, including omega/aleph_0 – cf. such exotic beasts as countable first-order models of the reals, etc.
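For reference, the downward Löwenheim–Skolem theorem alluded to here, in one standard formulation (just a sketch of the textbook statement): for any structure \(\mathcal{M}\) in a language \(L\),

\[
|L| + \aleph_0 \;\le\; \kappa \;\le\; |\mathcal{M}| \quad\Longrightarrow\quad \exists\, \mathcal{N} \preceq \mathcal{M} \text{ with } |\mathcal{N}| = \kappa.
\]

Applied to a first-order theory of the reals in a countable language, this is exactly what yields the countable models mentioned above.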

Generally such models would reside in a set theoretic universe where the internal ways to construct enumerations are weaker than in standard ZFC, so such reals still look "uncountable" from inside the universe.

Also, the standard reals are on the boundary where first-order logic is no longer able to characterise all the interesting/relevant properties of a model – for example, the least upper bound axiom requires second-order logic – so countable models are still distinguishable from the "really reals", and aleph_one/C is still the smallest cardinal capable of hosting the full second-order case.


So it seems a Vopěnka cardinal kappa satisfies a much stronger condition: that any theory with a model of size kappa has an elementary embedding from a substructure of size strictly less than kappa.... er... for all languages L_kappa,kappa? For all languages of any cardinality??

If it's the second then it's equivalent to the very strong statement that there cannot exist a kappa-categorical theory!

I guess this would make it in some sense the smallest "indescribable" cardinal too.

They have bigger ones nowadays; personally, for day-to-day use I like to squish all cardinalities higher than aleph1/C down into one class, so that the counting goes:



"1,2,... n, countable, uncountable, em-big-ocious"


and leave it at that ;-)


reader Liam said...

Hiya, I think the comparison to Gödel-Bernays/Constructible Universe sounds fair – it seems AST is taking the same attitude of pushing down the size-boundary where sets end and proper classes begin... but AST is a more "extreme version" of this philosophy (no "set of all naturals", only a class; a single real is a proper class, etc.).

So loosely speaking it has strong induction but weak composition, such that there's no way to take infinite powersets, and higher cardinalities are consequently squashed down so that there's only finite, countable and uncountable.

Looks like it has internally some quite bizarre properties! Apparently the class of naturals is uncountable in there for starters... :D

I find this kind of set theory/non-standard analysis fun and awesome; wish I had more time to play in there :-)