Sunday, March 02, 2014

Gross vs Strassler: Gross is right

See also: Naturalness is fuzzy...

I was told about Matt Strassler's 50-minute talk at JoeFest (click for different formats of the video/audio/slides) and his verbal exchange with David Gross that begins around 35:00.

Matt's talk is pretty nice, touching on technical topics like the Myers effect, pomerons etc. but also reviewing his work with Joe Polchinski and giving Joe some homework exercises all the time. Matt said various things about the effective field theory's and/or string theory's inability to solve the hierarchy problem even with the anthropic bias taken into account. He kept distinguishing the existence of hierarchies from the lightness of the Higgs in a way that I didn't quite find logical.

They were thought-provoking comments but I just disagree about the basic conclusions. He can't pinpoint any contradiction in these matters because the QFT framework doesn't tell us which QFT is more likely – that question goes beyond the domain of questions that an effective QFT may answer. And even the rules to extract such a probability distribution of the vacua from string theory are unknown. If there are no predictions about a particular question – even if it is a "pressing" question like this one – there can't be contradictions.

But the main conflict arose due to Matt's vague yet sufficiently unusual and combative comments about the value of the 100-TeV collider.




He would say that it could be a bad idea to put all of our eggs into the basket of this collider planned for the longer term. The reasons? Similar to the luminiferous aether. Michelson was trying to find the aether wind, which Matt says was misguided, so there should have been other experiments.

Unfortunately, he didn't say what those were supposed to be.




David Gross' reaction made it very clear that they disagree not only about the 100-TeV collider but also about the right strategy for formulating, interpreting, and testing hypotheses in the late 19th century, i.e. about the Michelson issue. Previously, Matt Strassler would say lots of weird things such as "the prediction of new physics at the LHC is the only prediction one can deduce from string theory".

This is clearly wrong. No one can deduce anything like that. Effective field theories with some extra assumptions about the distribution of parameters could perhaps lead you to guess that particles with masses comparable to the Higgs may exist. But these are not conclusions of QFT itself. And in string theory, such "predictions" are even more impossible because string theory has no adjustable continuous dimensionless parameters. That means that there are no "natural distributions" on the space of parameters that would follow from string theory, at least as long as we interpret string theory as the currently understood body of knowledge.

Even more qualitatively, there is clearly no derivation that would imply that "string theory is progressive" or "string theory is conservative" when it comes to the amount of new low-energy physics. The latter – conservative string theory – is totally compatible with everything we know. After all, your humble correspondent is not the only one who thinks that string theory is a mostly if not very conservative theory. The claim that it inevitably predicts new things – or that it predicts more new things than possible or real alternatives – is just wrong.

Just to be sure, you may remember that your humble correspondent considers an even larger hadron collider to be the single most meaningful way to make progress in experimental particle physics. We march towards the unknown, which means that higher-energy experiments are needed for that. This relationship is probably true up to the Planck scale. The higher the energies we investigate experimentally, the deeper we penetrate into the realm of the unknown. David Gross and others clearly share the same viewpoint.

Is it possible that the Very Large Hadron Collider will find the Standard Model only and nothing else? Absolutely. That will be a disappointment but physicists will still learn something. But if you want to propose alternative experiments, you should know what they are. Some people are searching for dark matter directly. There are experiments trying to detect axions and other things. Physics seems to have enough money for those – they are not as expensive as the large colliders. If you had another important idea about what should be tested, you should say what it is; otherwise the claim that the colliders are overvalued contradicts the evidence you are able to offer. Matt isn't able to justify any true alternative.

But David Gross really disagreed with Matt's suggestion that the Michelson-Morley experiment wasn't naturally the best thing to do at that time. Well, it was very sensible. The era's understanding of the origin of electromagnetic fields within classical physics implied the aether wind, so they were logically and justifiably trying to find it. More generally, these interferometer-based experiments were a "proxy" for the tests of many or all conceivable phenomena whose strength is proportional to \(v/c\) or its powers, i.e. effects that become strong when speeds approach the speed of light. They chose an experimentally smart representative of the "tests of relativistic physics", as we would call them decades later.

But during Michelson's times, there was no relativity. People just didn't know it yet. And Michelson's experiments happened to play just a minor role in Einstein's theoretical investigations. But as far as the people who relied on the experimental data and not Einstein's ingenuity are concerned, it was just completely logical to study the aether wind. It was really a critical experiment that would have kickstarted relativity if Einstein had been paying more attention to the experiments or if another physicist who was well aware of the experiments had had a little bit more of Einstein's ingenuity.

When relativity was discovered, it became clear that the aether wind doesn't exist but lots of other relativistic effects (corrections to Newton's physics suppressed by powers of \(v/c\)) do exist. Theorists like Einstein were crucial to make the progress in 1905 etc. but when it comes to the experiments, Michelson's experiments in the 1880s were really the optimal thing to do. They were even lucky because they revealed a particular situation where the right (now: relativistic) physics not only deviates from Newton's physics but also gives a very simple result (the speed of light is constant, regardless of the speeds of sources or observers).
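
To put a number on the "powers of \(v/c\)": the effect Michelson and Morley were hunting is of the second order, and the standard back-of-the-envelope estimate (using the usually quoted figures for their 1887 setup, so treat the numbers as purely illustrative) is the fringe shift upon rotating the interferometer by 90 degrees,

\[ \Delta N \approx \frac{2L}{\lambda}\left(\frac{v}{c}\right)^2 \approx \frac{2\times 11\,{\rm m}}{5\times 10^{-7}\,{\rm m}}\times \left(10^{-4}\right)^2 \approx 0.4, \]

where \(L\approx 11\,{\rm m}\) is the effective (folded) arm length, \(\lambda\approx 500\,{\rm nm}\) is the wavelength of the light, and \(v/c\approx 10^{-4}\) corresponds to the orbital speed of the Earth. The observed shift was at least an order of magnitude smaller than that, which is why the null result was so striking.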

So just like David Gross and others, I think that Matt is just wrong even in the case of Michelson.

Nati Seiberg would also criticize Matt Strassler. Seiberg says that Matt's doubts are not only in contradiction with the philosophy of high-energy physics; they are in contradiction with reductionism itself, which has worked for centuries. The further you go to shorter distances, the more detailed understanding of the physics you may acquire. Why should it break down now? Matt says that quantum gravity is an example where shorter distances fail to allow you to study more detailed physics. Seiberg correctly replies that this is true but quantum gravity is very far away. Strassler says that this comment by Seiberg might be right or wrong.

Well, I do think that quantum gravity is "probably" at the usual 4D Planck scale or nearby, roughly at the conventional GUT scale or higher. It is also hypothetically possible that quantum gravity kicks in at nearby, accessible energies, near 1 TeV. But this scenario predicts *exactly* what Matt Strassler attributes to string theory in general. This scenario implies significant deviations from the Standard Model at the LHC energies. It is really excluded experimentally. String theory is not excluded but quantum gravity at 1 TeV is excluded. I don't know why Matt is getting these basic things backwards.

Nima Arkani-Hamed addressed a question touched by Matt, "why a light Higgs and not technicolor". This question has an answer. As Nima decided with Savas after some scrutiny, technicolor models with light fermions are inevitably tuned to 1-in-a-million or worse because the light fermion masses require us to introduce some running that easily and generically creates new exponential hierarchies between the electroweak scale and the QCD scale, and related things. So SUSY with some tuning for a scalar is still less fine-tuned than technicolor. And if the electroweak symmetry is broken by a strong force, there are no baryons – just neutrinos.

Nima also defends the 100-TeV collider. No one is really suggesting to put all eggs into one basket; people are thinking about and building many, many experiments. Going to high energies is still a very important thing for many reasons.

Matt replies to Nati that "this time is different" because for the first time, the Standard Model can be the "whole story" (he overlooked gravity but it is a potentially complete theory of non-gravitational physics) so there is no reason to think that the new discoveries will come soon. Despite my expectations that new physics does exist below the scale of a few TeV, I agree with that. Strassler also says – and it is pretty much equivalent to the previous sentence – that naturalness and reductionism are not related in any direct way. I agree with that, too: there may be big hierarchies and deserts but reductionism still holds.

David Gross says that naturalness isn't a prediction; it is a strategy. I completely agree with him (and I have written down this point many times), so let me please try to present this claim in my words, assuming that David Gross would fully subscribe to them, too. Naturalness boils down to a probability distribution on the space of parameters which we can use to argue that certain values or patterns are "puzzling" because they are "unnatural" – which means "unlikely" according to this probability distribution. And that's why we focus on them; they are likely to hide something we don't know yet but should know. In the end, the complete theory makes all these effects in Nature natural (in a more general sense) but because they look unnatural according to an incomplete theory, these effects in Nature are likely to hold a key for a new insight that changes the game, an insight by which the new theory significantly differs from the current incomplete theory. Naturalness cannot be tested quantitatively, however.

Gross said that our inability to calculate something is a problem. I completely agree with that and I add that this problem is worse in the case of parameters that seem parametrically smaller than the expectations because this gap suggests that there is some "big piece of meat" we are missing that changes the story substantially, not just some generic obscure calculation that spits out some numbers of order one. That's where naturalness directs us, I add.
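
To make "parametrically smaller than the expectations" concrete, recall the textbook estimate behind the hierarchy problem (a standard illustration, not something taken from the talk): the top-quark loop shifts the Higgs mass parameter by roughly

\[ \delta m_H^2 \sim \frac{3 y_t^2}{8\pi^2}\,\Lambda^2, \]

so if the cutoff \(\Lambda\) is near the GUT or Planck scale, \(\delta m_H^2\) exceeds the observed \(m_H^2\approx (125\,{\rm GeV})^2\) by some 25 to 35 orders of magnitude (depending on exactly where the cutoff sits), and the "natural" probability distributions mentioned above assign a probability of order \(m_H^2/\delta m_H^2\) to the required cancellation. A number like \(10^{-30}\) is exactly the kind of "unlikely pattern" that naturalness, viewed as a strategy, tells us to stare at.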

Matt is often drowning in a sea of vagueness – this is something we know from the discussions with him on the blogosphere, too. He tries to say something extremely unusual while claiming, at the same time, that he isn't saying anything nontrivial at all. You just can't have it both ways, Matt. In this case, he is saying that we're not spending our time wisely – that it's being focused too narrowly. Except that he never says in which direction one might or should broaden the interest or work.

Someone says he finds it frustrating that the reach for gluinos will only be doubled from 1 TeV to 2 TeV in 2015, not too big a difference. A reason to like a higher-energy machine.

Steve Giddings also points out a bug in Matt's logic concerning reductionism. Even if reductionism (meaning the need to study higher energies) were ending at X TeV, we would clearly need to go slightly above this level to find out that the reductionism fails. Finally, Matt proposes a loophole. Maybe there are extremely light and extremely weakly coupled new effects somewhere, so going to higher energies doesn't help us. Great. So what should we measure instead of the larger collider data?

David Gross says that dark matter is an example of that and Matt says that this makes his (Matt's) case stronger because according to many dark-matter models, one can't discover the new physics by going to higher energies. Well, right, it's plausible. But the difference is that there are "very many" such possible directions. Going to high energy means increasing the value of a quantity that is universal for all of physics – energy (or the inverse distance). Going to study very weakly coupled things means going in the direction of lowering every conceivable coupling constant anywhere, and there are just too many of them. We may try. We should try those cases that are justified by some arguments. But it is simply not true that any single march towards higher sensitivity in some particular coupling constant of some particular interaction is as important as our ability to go to higher energies. There is only one energy and it's the king; there are way too many coupling constants and each of them seems less fundamental and less universal than energy. So I don't really agree with Matt on this change of the bias, either – unless he tells us which particular coupling constant or experiment it makes sense to probe with "much better than considered" sensitivities.

Maybe Matt would propose to build a 1 GeV collider with the luminosity increased 1 billion times? Perhaps it could make sense for some potential scenarios. But he should at least propose such a thing explicitly instead of saying that others are narrow-minded just because they are doing everything that people have conceived so far.

At the very end, Joe Polchinski calmed all the bitterness and said that Matt was one of those young people who come to Joe's office and pretty much solve a problem in confining gauge theory, getting the field-theory side 100% right and the string-theory side 80% right, so Joe added the remaining 20%. Joe improved the flattering joke by saying that this paper was never submitted for publication because it didn't meet Matt's standards LOL. Matt says it's not really true. Joe also says that he didn't really deserve his PhD but with Matt's help, 15 years later, he has finally solved his thesis problem. ;-)


snail feedback (29) :


reader david55 said...

Stockmarkethedron, child of the Amplituhedron: Researcher Envisions A Geometrical Jewel At The Heart Of Finance.

http://goo.gl/1Qvv0S
http://arxiv.org/abs/1402.1281


reader anna v said...

Well, on the subject of the 100 TeV hadron collider, I also have doubts that this is the right way to go now.

1) I think that a lepton collider will be the best for really establishing whether we are seeing one Higgs or many, etc. Hadron colliders have an enormous background, not only of noise due to the great number of collisions in the interaction region but also due to the great number of possible interactions at each collision. This involves a lot of triggering and throwing away a lot of data, with the danger of throwing away the baby with the bathwater.

Leptonic vertices are much cleaner and allow for great accuracy in measurements, viz. LEP. Accuracy allows for glimpses of new physics inaccessible in the gross limits of the LHC.

2) I think that maybe accelerator people should rethink the accelerator technology, which has not changed much in concept since the last century. Some money should be spent to try and get new methods of acceleration, possibly involving nanotechnology and lasers. It might not come to anything revolutionary but it should be tried before going to 100 TeV machines. An ILC or GreaterLEP would give the pause necessary for examining something new in accelerators and still give possibilities for new physics explorations for the next generation of physicists.

And I will repeat that society is getting its money's worth out of the HEP experiments, considering that the whole LHC is costing on the order of magnitude of two large aircraft carriers, shared by many countries.


reader david55 said...

Theorists Witten, Gross, 't Hooft & Arkani-Hamed are in China to encourage officials to invest in building LHC's successor.

http://www.shanghaidaily.com/article/article_xinhua.aspx?id=203146


reader Uncle Al said...

It appears (oh my god, empiricism!) that Dr. Matt Strassler has Pb hands when he stands at a lab bench. Don't trust a programmer carrying a screwdriver. Don't trust a Strassler telling you how to experiment. Nothing says the answer sits under a bright street light except managers' PERT charts.

http://arxiv.org/abs/1102.2837
Promotion within hierarchical management is quantitatively worse than random choice. All the goodies sit in the dark center of the block. All the fun is in the heretical footnotes. Newton was nice. Newton was also wrong. Ya gotta look.


reader Kimmo Rouvari said...

I bet that the Chinese won't invest in a bigger collider. 50 euros, the first one who takes the bet is on.


reader Physics Junkie said...

I agree with using a lepton collider. Has anyone considered a tau/antitau collider? It is a little heavier than a proton so new magnets or even an upgrade to the existing magnets should be doable technology. You could put it in the LHC and get the full range of 28 TeV physics with a much cleaner collision. Taus moving near the speed of light should have long lifetimes. I suppose someone has thought of this and there is probably a technical downside, but I don't know what it is.


reader Luboš Motl said...

LOL, how do you produce taus moving near the speed of light? Have you seen how actually short the lifetime is?
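
Just to quantify it with the PDG numbers (a rough estimate): \(c\tau\approx 87\,\mu{\rm m}\) for the tau versus \(c\tau\approx 659\,{\rm m}\) for the muon. Even with a generous boost of \(\gamma\sim 10^3\) (a tau of almost 2 TeV), the decay length is \(\gamma c\tau\sim 9\,{\rm cm}\), so there is no time to collect, cool, and accelerate a tau beam; the taus are gone essentially at the production vertex. A muon with the same boost flies hundreds of kilometers, which is why a muon collider is at least thinkable.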


reader anna v said...

The tau lifetime makes it out of the question at the energies we can reach in the labs. It decays at the vertex: http://en.wikipedia.org/wiki/Leptons#Table_of_leptons .

A muon collider, though, has been proposed: http://en.wikipedia.org/wiki/Muon_collider , http://www.cap.bnl.gov/mumu/ .


reader W.A. Zajc said...

A tau collider is indeed not something we will see in our lifetimes - I believe that is a safe statement regardless of any reader's current age. However, I would not feel safe making a similar argument about a muon collider; it might be possible to build one without extraordinary advances in technology: http://arxiv.org/abs/arXiv:1308.2143 . I would guess that the first such machines would be more proof-of-principle devices rather than true discovery facilities, this by analogy to the development of the very first e+e- colliders: http://en.wikipedia.org/wiki/Collider#History .


reader kashyap vasavada said...

Hi Lubos: I would like to understand the following statement about predictions a little bit better. “Effective field theories with some extra assumptions about the distribution of parameters could perhaps lead you to guess that particles with masses comparable to the Higgs may exist. But these are not conclusions of QFT itself. And in string theory, such "predictions" are even more impossible because string theory has no adjustable continuous dimensionless parameters. That means that there are no "natural distributions" on the space of parameters that would follow from string theory, at least as long as we interpret string theory as the currently understood body of knowledge.”
If I understand, Gordon Kane has said that the Higgs at mass 125-126 GeV is a prediction of ST. Do other string theorists agree with him? If you have already written about this and I missed it, please give a reference. Thanks.


reader Dilaton said...

Matt Strassler speaking up against the 100 TeV collider is definitely NOT a good thing, in particular considering the fact that there already exist too many trolls who begrudge fundamental physics every cent it gets. The sourballs and trolls usually populating his blog are celebrating his "victory over the establishment" right now.
In addition to what this TRF article says, is it not the case that high-precision experiments surely could reveal hints of new physics as deviations from what is expected from the SM, but to really discover for example the new particles involved, a high-energy collider would be needed anyway?

I remember that Matt Strassler once said in the comments that he would be happy if any new physics could be safely excluded. Looking for BSM physics seems to have become a boring task for him that simply has to be done to fulfill the needs of the scientific method...

Reading this TRF article about his exchange with colleagues, it seems that he drops confusing fog bombs not only when addressing the general public, and he seems to get nearer in spirit to well known sourpusses like Tommaso Dorigo and ..... you know ;-/

So instead of watching his talk I will proceed with the mathematical physics lecture series of Carl Bender that I started to watch some days ago ...;-)


reader Dilaton said...

... 100 ;-)


reader Kimmo Rouvari said...

LOL :-D We have 100 already on the table, so I'll stick with the 50.


reader Stephen Paul King said...

"a probability distribution on the space of parameters which we can use
to think that certain values or patterns are "puzzling" because they are
"unnatural" – which means "unlikely" according to this probability
distribution"

Why is the distribution such that all values and patterns are not equiprobable? How are some more unlikely than others?


reader kashyap vasavada said...

Hi Lubos: I would like to understand a little bit better the predictability of QFT and ST in view of your statement "Effective field theories with some extra assumptions... That means that there are no "natural distributions" on the space of parameters that would follow from string theory." If I remember right, Gordon Kane has stated that ST predicts the mass of the Higgs to be about 125-126 GeV. I do not know enough about ST to criticize this. But I would like to understand the views of the majority of string theorists about this point. Also, has Kane changed his opinion about this? Of course there is nothing wrong in changing one's opinion in science as far as I am concerned. In case you have already commented on this before and I missed it, please give a reference. Thanks.


reader Luboš Motl said...

LOL, I hope you agree it's a fun parody. The permutations of the stocks and the decorated permutations - which the author quite apparently had lots of fun with - made me laugh and decide that it was a parody.


Still, for the permutations of the external gauge bosons etc., the procedure just works marvelously. I guess the author of the financial paper can't understand the reasons why, so he must think it's silly, and so he wrote this equally silly financial parody.


reader Luboš Motl said...

Dear Kashyap. Gordon Kane and collaborators deduce this interval 125-126 - sometimes written much more inclusively like 124-129 etc. - ultimately from some empirical data.


They must check which masses are compatible with the concentrations of dark matter and other things that seem to agree with some observations well enough; and they pick the values of tan(beta), a parameter in supersymmetry, that are preferred elsewhere in the literature.


Still, whether the width of their predicted distribution should be 1 GeV or 5 GeV is really unknown - even if one believes that their scenario is the right incorporation of SUSY (or the right class of string vacua) in our world which is far from certain.


I don't know what it means to ask how "a majority of string theorists" views this question. Science isn't really about majorities; it doesn't work like that, it has never worked like that, and this is an example that shows very clearly that it is utterly ludicrous to think that it could ever work like that. A majority of string theorists doesn't really understand astroparticle physics and/or the technical features of these stringy compactifications well enough for their opinion to matter at all. Most string theorists would probably admit that this is the case - they have no clue. They have heard about totally different phenomenological scenarios and they assume that they could be right, too. On the other hand, they also realize that they could be unaware of many arguments showing that some of those are really wrong etc.


I know that it's fashionable to talk about this unscientific notion of "consensus" in science - and perhaps ask hundreds of thousands of people with PhDs to vote. But even if you only include those 1,000 or so string theorists in the world, a very "exclusive" set, it's still way too high a number. Only a small fraction, perhaps 50, perhaps 100, of string theorists would claim to have a feeling of understanding the issues well enough to push their opinions. I am probably not counting myself into this group. I am not quite convinced that the Kane et al. scenario must work - many other scenarios are very different - but some assumptions that Kane et al. are making could still be much more certain than most of us think.


At any rate, if you made a vote among all 1,000 string theorists, a majority would be unqualified and the result of the vote would be a random gibberish.


reader Giotis said...

Well, at least ‘t Hooft found something useful to do instead of polluting the world with crackpot physics. I was watching the other day a science show on the Discovery channel where I heard him say that QM is not to his taste because “it defies ordinary logic”.

Unbelievable…


reader Luboš Motl said...

Given his amazing past achievements, it really often looks like a brain transplant is needed to explain all the data.


reader Giulio said...

I'm a bit surprised at the reductionist approach of Gross and I've been very happy to find on your blog an example of another Nobel laureate writing about the limits of this approach. Of course your old post was completely against his ideas :-) Now, your definition of reductionism sounds reasonable and pragmatic, but I'd love to fully quote the comment of one of your readers:
http://motls.blogspot.it/2005/10/laughlin-vs-reductionism.html#comment-559508713
I feel that there is something that you, Strassler and Gross understate. In the colloquium of Juan Maldacena that you recommended last month, he writes that spacetime is an *emergent* concept (from entanglement?). The same thing that Joe Polchinski was also saying about an *emergent* space containing D-branes. You should not dismiss the emergent vs reductionist objections so easily.


reader Luboš Motl said...

Sorry, Giulio, I think there is some deep misunderstanding on your side.


If we use the term "emergent", it doesn't mean that we abandon or weaken or reject reductionism. On the contrary, it means that we fully embrace reductionism and we say that the thing just labeled as "emergent" isn't fundamental but instead, it may be *reduced* to some more fundamental entities and concepts.


This is true also for spacetime itself but whenever gravity is weak etc. - e.g. whenever the matter density is much smaller than the black hole density - effective field theory is a good approximation. In effective field theory, we know that fields in the spacetime and the particles they create are the fundamental concepts.


reader Giulio said...

When you label a thing as emergent, you are thinking more in terms of a duality than a reduction. I'm not sure from where you derive the topological properties of space.


reader Luboš Motl said...

Sorry, I am not. No one is. A "duality" is in no way equivalent to "emergence". You are just confusing the meaning of some basic words.


Fundamental concepts on one side of a duality may be composite or non-fundamental on the other side of a duality. You may even use the word "emergent" instead of "composite" here.


But if we pick which description we use, it is always clear that some concepts are fundamental and some concepts are not. The existence of dualities in no way justifies the New-Age demagogy about fundamental physics' not being fundamental (idiocies that some condensed-matter physicists would sometimes join). There are still fundamental laws that may be formulated in various (dual) ways.


reader Giulio said...

LOL, we should agree at least on the basic dictionary... OK, I'll try to reply.
Michelson-Morley's experiment can be viewed from the principle of emergence in this way: in condensed matter you have only one compression mode instead of the two transverse modes of polarization as per Maxwell's equations. It is Einstein's SR that allows the ether to exist ;-) Traditional reductionism may not be the right approach: we cannot study matter and the vacuum separately.


reader Dilaton said...

Do we need the term "emergence" at all in physics...?
I don't like it very much, as too many people too often confuse it with things "magically emerging" out of nothing or thin air etc ...

When I read the term "emergence" in physics context, I always internally replace it roughly speaking with "something being obtained at lower energies/larger scales by (systematically) coarse graining the higher energy degrees of freedom/laws describing the system", you know Wilson etc ...


reader lukelea said...

Hi Eugene, I wrote a long reply to your reply but for some reason it got hung up in the posting process. I will have to rewrite it, which I cannot do at the moment. Meanwhile why don't you Google further on my name and look for all the positive things I say about Jews, Ashkenazis, and the state of Israel before drawing your conclusion. Context is everything as are distinctions.


reader Ovidiu Racorean said...

Well, Lubos, why don’t you explain to us why the permutations are not working in stock markets?


reader lukelea said...

"Try to imagine what would have happened if the West had seriously tried to assist the protestors at Tienanmen Square!"

Gene, you are a cock-eyed optimist.


reader Luboš Motl said...

Come on, Luke, you don't believe it would be a good idea to start a power confrontation with China because of a protest in China, do you?