## Wednesday, August 03, 2016 ... /////

### ICHEP, Strings: nothing scary about overlapping conferences

A conference may influence the face of particle physics for a long time

ICHEP 2016, the largest annual gathering of experimental and phenomenological particle physicists, is getting started in Chicago today. Well, the talks begin tomorrow so there are two days, Thursday and Friday, of a full-fledged overlap with Strings 2016. Because Peking is rather far from Chicago, the effective overlap is really three days.

The ICHEP logo promises us discoveries of the CP-violating angle in fermion mass matrices (done), observation of dark energy (done), the Higgs boson discovery (done), the discovery of a neutralino (cool!), and one more unexpected shocking round discovery in the middle (wow). Let's hope that Chicago's promises may be trusted, that Chicago isn't a city of liars, gangs, and criminals. ;-)

Is it immoral or outrageous that physicists must choose whether they visit Strings or ICHEP?

I don't think so. The reality is that these subfields – high-energy physics theory (and especially string theory) on one side; and high-energy phenomenology (and experiments) on the other – are basically separated from each other. The number of people who publish papers both on hep-th and hep-ph is small and in most cases, it's rather clear what their primary interest is, anyway.

In this potentially overlapping group, most of the "mostly phenomenologists" are basically just "friendly" with string theory, so they know some basics, they know it's right, and they sometimes use it or at least take it into consideration in their work. And most of the "mostly string theorists" in the overlapping group are just excited about the ongoing experiments; they are surely interested in experimental discoveries that would cause a paradigm shift, but they assume that those are unlikely to take place soon. And even if such discoveries emerge, a few days of delay in learning about them wouldn't be the end of the world.

It's simply true that most of the characteristic work in formal theory of particle physics deals with phenomena on energy scales much higher (and distance scales much shorter) than the scales that the ongoing experiments are probing. Or this formal theory is dealing with the organization of ideas that is truly independent of particular future experiments and discoveries, at least the realistic ones.

The anti-pure-science religious sect often tries to demonize particle physics, and especially its more formal theoretical part, for this fact. But every intelligent person knows that this is no reason for worry. The work of the formal theorists looks "almost certainly" independent of the ongoing experiments because the Standard Model is "almost certainly" the right theory describing the results of the ongoing experiments. And even if new physics is discovered, the extension of the Standard Model that may be needed could be said to be "modest" relative to the deeper changes or interconnections that formal theorists often study.

If some far-reaching discovery were made at the LHC (or a cheaper experiment), e.g. supersymmetry or some new particle implying a larger gauge group or grand unification or even extra dimensions, most theorists would be excited. But even at this amazing moment, only a relatively small part of the formal theorists' work would be really affected simply because they are working on more ambitious, deeper, more long-term tasks and questions.

There is nothing wrong about it. The (formal) theorists are simply not spokesmen working for particular ongoing experiments such as the LHC. They are people working on the fundamental theories describing Nature. Some experiments are relevant for a subset of the questions studied by theorists, other experiments are relevant for another subset, but for a third subset, there are no relevant ongoing experiments. This shouldn't be surprising. It's simply not feasible to build an experiment that would test every question that is sufficiently well-defined, sufficiently provoking for the curious theorists, and that may be studied by the theorists. To expect an exact overlap between the interests of the theorists and experimenters is utterly unreasonable. The deeper the questions and layers of reality being studied, the more unreasonable it is.

Fateful ICHEP?

Experimental talks from the LHC may be the most eagerly expected ones at the annual ICHEP conference that starts in Chicago today. For example, ATLAS will show brand new results of an analysis of 12 inverse femtobarns of the 2016 data. Various people on the Internet claiming to be exposed to rumors say that not only the $750\GeV$ "cernette" diphoton excess was a 2015 fluke that wasn't repeated in the 2016 data; but no other interesting deviation from the Standard Model has been seen by ATLAS and CMS, either.

One may adjust his expectations based on this chatter. On the other hand, it would be premature to think that these rumors are the "official final conclusion". A discovery could take place.

I think that if no discovery has appeared in the data collected up to mid-July 2016, in any of the channels that have already been studied, it is rather likely that no discovery will be made for some 5 more years, either. In 2016, the LHC has already collected a huge amount of data – about 19 inverse femtobarns at the $13\TeV$ center-of-mass energy (almost all of it was collected before July 24th or so).

While the modest amount of the $13\TeV$ collisions in 2015 – some 3 inverse femtobarns per major detector to be used in papers – had a discovery potential comparable to that of the 20 inverse femtobarns of the $8\TeV$ data in 2012, the 2016 dataset is already more than 5 times greater. So all the "numbers of sigmas" in excesses are expected to have grown by a factor of $\sqrt{5}\approx 2.2$. Excesses somewhat above two sigma could grow past 5 sigma. And lots of new 3-sigma excesses could literally emerge out of nothing.
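The scaling above is easy to check numerically. A minimal sketch (the helper function name is mine, not from the text), assuming the significance of a fixed excess grows like the square root of the integrated luminosity, $S/\sqrt{B}\propto\sqrt{L}$:

```python
import math

def scaled_significance(sigma_old: float, lumi_old: float, lumi_new: float) -> float:
    """Naive estimate: with a fixed signal-to-background ratio, the
    significance S/sqrt(B) of an excess grows like sqrt(luminosity)."""
    return sigma_old * math.sqrt(lumi_new / lumi_old)

# The ratio discussed in the text: ~3/fb in 2015 vs a >5x larger 2016 dataset.
growth = math.sqrt(5)                              # ~2.24, not 2.5
print(round(growth, 2))
# An excess somewhat above 2 sigma in 2015 could cross 5 sigma:
print(round(scaled_significance(2.5, 3.0, 15.0), 2))
```

The same one-liner shows why a mere doubling of data (growth factor $\sqrt{2}\approx 1.4$) is far less likely to promote a borderline excess to a discovery than the 2015-to-2016 jump was.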

I think that the LHC won't see a similar multiplicative "increase of the number of sigmas" for several years.

If the LHC is really seeing data compatible with the Standard Model in all the numerous channels about which some 2015-data papers have already been written, it means that "another thick layer" of the parameter space has been excluded – i.e. shown not to contain new physics.

Pilsen allows itself to be rebuilt into a paradise of the most violent street gangs. This is thankfully the "Mexican" Pilsen in Chicago, not the original Pilsen, so far. I am sure that it's politically incorrect to even ask whether the criminal face of Pilsen, Chicago could have something to do with the replacement of the Czechs who founded the neighborhood with the Hispanics. ;-)

It has always been possible, of course. Many people paint this picture in catastrophic terms. It's the end of particle physics, and so on. I don't really share these sentiments at all. First, it's clearly not the "last thick layer" that may be probed. A $100\TeV$ collider would have more new exposure – a greater chance to discover new physics relative to the previous null results – than the 2016 increase of the data. There are reasons to think that something could be seen at a $100\TeV$ collider. Just to be sure, even a $100\TeV$ collider may produce results fully compatible with the Standard Model. I am not promising or guaranteeing new physics to you.

And if there's no new physics at the colliders that may be built in the next 50 years, it doesn't mean that there's no new physics – I can guarantee that there is new physics somewhere – and it doesn't even mean that people can't discover it. They have a chance, of course, but the strategies that may find new physics will depend on fine theoretical work (e.g. in string theory) much more intimately.

The LHC or the Mao Collider is (or will be) big and it may even look hard, complex, and expensive, but in the end, it's really a straightforward brute-force strategy to look for new physics. To think about the structure of theories – and about string theory – very cleverly and carefully may be much harder and potentially more successful than brute-force experiments.