Cutely enough, my pure report about 33 mandatory hours of climate hysteria in Italy a year was labeled "hate speech" by Google and demonetized. The attack on free speech keeps on escalating.
First, something about the "interpretations" of quantum mechanics. Under a 2018 blog post about Bohmism, FF asked how I can precisely prove that Bohmian mechanics predicts lots of unseen phenomena in a specific experiment and what the experiment is, something that I also discussed on Quora.
Well, as I wrote in both places, we can't make any specific, quantitative, or precise predictions because there is no well-defined "Bohmian theory". Proper quantum mechanics describes ongoing phenomena as a sequence of two kinds of processes: the unitary evolution of the wave function (or density matrix) that reversibly spreads into superpositions (that evolution is encoded in a unitary operator, often equal to an exponential of the Hamiltonian); and the reduction or collapse of the wave function associated with the observer's learning of an outcome of an experiment (that's given by a projection operator onto a space of eigenstates). These two things alternate – like loading and shooting, loading and shooting.
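The alternation of the two processes may be sketched numerically. This is my own toy illustration (a single qubit with a Pauli-X Hamiltonian, an arbitrary simple choice), not a description of any specific experiment:

```python
import numpy as np

# Toy illustration (my sketch): one qubit undergoing the two alternating
# processes.  Hamiltonian H = Pauli-X, so the unitary evolution operator is
# exp(-i H t) = cos(t) I - i sin(t) X, because X^2 = I.
t = 0.4
X = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X

psi = np.array([1.0, 0.0], dtype=complex)   # initial state |0>

# Process 1: unitary evolution -- the state reversibly spreads into a superposition
psi = U @ psi

# Process 2: measurement -- Born-rule probabilities, then projection (collapse)
probs = np.abs(psi) ** 2
rng = np.random.default_rng(0)
outcome = int(rng.choice([0, 1], p=probs / probs.sum()))
projector = np.zeros((2, 2)); projector[outcome, outcome] = 1.0
psi = projector @ psi
psi /= np.linalg.norm(psi)                  # renormalize the collapsed state

print("P(0), P(1) before collapse:", np.round(probs, 3))
print("observed outcome:", outcome)
```

Repeating the two steps – evolve, collapse, evolve, collapse – is the "loading and shooting" described above.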
For stupid ideological reasons, Bohmists and other "realists" (i.e. anti-quantum zealots) dislike especially the projection (or reduction), the measurement. So they erase it and also totally differently interpret the physical meaning of the objects in the unitary evolution (they promote them to classical degrees of freedom). But they don't have any well-defined replacement for the measurement (projection) part of the quantum mechanical evolution. They remove one-half of the engine of QM, create the other half from wood, but they don't replace the first one-half by anything. So it doesn't work at all. We may only speculate about a hypothetical theory that fills the gap.
The removed part of the engine must do two things (and in quantum mechanics, the collapse does them):

(1) erase all the parts of the wave function that were found not to be realized because they correspond to outcomes of measurements that differ from the outcome that occurred;

(2) simultaneously revise the probability distributions for almost all observables, i.e. the expectations for further measurements.
In Bohmian mechanics, there is no collapse but these steps must still be done in some way if you want a viable theory. When you repeatedly shoot, you need to load-shoot-load-shoot etc. If you remove one of the two verbs, loading or shooting, it won't work, at least not repetitively. In Bohmian mechanics, the part of the pilot wave that was just "experimentally found unreal" must still be erased in some way, but Bohmists don't say how. New pilot waves which are "localized enough" must be produced in some way to start the new evolution (and spreading) from a viable, localized enough new initial state. They don't say how.
Also, the real beable, the position of the real particles etc., has to be abruptly modified so that it's placed at a random place that is nevertheless distributed in agreement with the squared pilot wave \(|\psi|^2\) interpreted as a probabilistic distribution. Again, Bohmists don't say how. They just like to remove the essential one-half of the quantum mechanical engine for stupid ideological reasons but they have no replacement. But it's clear that if you propose any particular replacement, the physical processes that do the "cleaning" of the pilot wave and "reloading" of the particle will depend on both the pilot wave and the location beable. So there must be some objectively real effects that are sensitive to all these "in principle observable" degrees of freedom, and therefore they will become "observable in practice", too, thus contradicting experiments because they haven't been seen.
When someone criticizes my arguments for not being specific or precise or quantitative enough, it's exactly like criticizing critics of the tooth fairy or any supernatural stuff for the same reason. You can't give a precise calculation in a specific experiment that falsifies the tooth fairy because there is no well-defined, quantitative, precise theory of the tooth fairy. It's a vague idea so the criticisms must be rather vague, too. Also, it makes no sense to focus on any specific experiments because the problems with such fundamentally wrong theories – like Bohmism or the tooth fairy – would manifest themselves in almost any experiment. The theories just don't work at all.
It is very clear that the critics of quantum mechanics set a vastly lower bar for themselves than for quantum mechanics – and they blame quantum mechanics or its defenders for something that is actually their fault. The main point is that they don't really have any viable alternative to quantum mechanics at all.
Now, to return to the topic promised in the title, the situation is completely analogous with modern quantum field theory (QFT) and string theory. Critics of these defining topics in modern theoretical physics have acquired the "mainstream" media in recent years. Like critics of the fundamental postulates of quantum mechanics (and in many cases, it is the very same people), they don't have anything at all that resembles an alternative theory that could actually produce anything that resembles the amazing achievements of modern quantum field theory and string theory.
Many of these critics of theoretical physics pretend to be sophisticated and to understand quantum field theory, which they regard as the state-of-the-art foundation of our theories of physics (which isn't really true because physics has irreversibly switched to string theory). But they just don't understand quantum field theory well. They only understand it at the level of a mediocre undergraduate student who has barely gotten through the first semester of quantum field theory.
OK, these superficial followers of physics and quantum field theory typically do understand something like
* there are some particle species in Nature
* particles are excitations of an infinite-dimensional harmonic oscillator (with bosonic and fermionic directions) which may be described as quantum fields
* these quantum fields have hats but they evolve according to equations of motion similar to those in classical field theory
* non-linear terms in these equations (or higher-than-quadratic terms in the action) create interactions that are responsible for scattering, as calculable by Feynman diagrams
That's it. They get these points to a large extent although many of them still misunderstand the actual universal postulates of quantum mechanics and would like to replace them with Bohmism, many worlds, objective collapse or, in the most hopeless cases, superdeterminism and similar pseudoscientific fantasies. But even if they understood these points well, which they usually don't, they are only getting quantum field theory as understood around 1930, some ninety years ago.
They imagine (and try to persuade stupid readers of the popular press) that the only important progress since 1930 was the addition of new particle species (like quarks and gluons) and new Feynman vertices (through the Lagrangian). But that's complete nonsense. In the ninety years since 1930, our understanding of quantum field theory (and its meaning, purpose, breadth, degree of consistency, the actual reasons for some procedures we perform, and the organization of the "options") has profoundly changed and deepened. Physicists have understood that many previously overlooked things (and things still overlooked by the would-be quantum-field-theory-literate pundits in the mainstream media) are really "the point". The addition of quarks and gluons is just a ludicrously tiny part of the progress in our understanding of QFT since 1930, a moronic caricature of the actual progress that has occurred.
Each of these points could be expanded into very long blog posts, chapters of books, or whole books (and many of them exist, though that material is obviously inaccessible to generic readers of newspapers). But I think it's important enough to give a list because so many people are being misled by the increasingly expanding anti-science nonsense in the media and they don't know about the very existence of these points.
Renormalization as the technique to subtract infinities from loop diagrams
The appearance of infinite terms in the Feynman loop diagrams was already obvious in the 1930s but only in the 1940s did people begin to realize that the loop diagrams still look like legitimate terms – and that when the infinite terms are "ignored" while the finite ones are kept, the result is better than when the whole loop diagrams are ignored! The right way(s) to erase the infinities and the justification for the renormalization procedure were being discovered gradually between the 1940s and 1970s.
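A toy numerical sketch of this logic (my own illustration, not a real QFT computation): a log-divergent "loop-like" integral whose cutoff dependence disappears once a subtraction at a reference scale is made.

```python
import numpy as np

def loop_integral(m, cutoff, n=200_000):
    """Trapezoid estimate of I(m) = ∫_0^cutoff k^3 dk / (k^2 + m^2)^2,
    a toy stand-in for a logarithmically divergent one-loop integral."""
    k = np.linspace(0.0, cutoff, n)
    f = k**3 / (k**2 + m**2)**2
    dk = k[1] - k[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dk

m, mu = 1.0, 10.0   # physical mass and an arbitrary reference (subtraction) scale
for cutoff in (1e2, 1e3, 1e4):
    raw = loop_integral(m, cutoff)
    subtracted = raw - loop_integral(mu, cutoff)
    print(f"cutoff={cutoff:>8.0f}  raw={raw:8.4f}  subtracted={subtracted:8.4f}")
# "raw" keeps growing like ln(cutoff); "subtracted" converges to
# (1/2) ln(mu^2/m^2) = ln(10) ≈ 2.3026, independently of the cutoff.
```

The divergent piece is the same for both masses, so it drops out of the difference – the bare-bones version of absorbing an infinity into a renormalized parameter.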
It may be proven that the well-defined process to subtract the infinities leads to finite results and a unitary evolution matrix in renormalizable quantum field theories. Gerard 't Hooft became the world's #1 expert in this kind of proof (of the consistency of renormalization in gauge theories) in the 1970s, when he was called the Ayatollah of physics.
Most of the "superficial QFT pundits in MSM" are incapable of working with renormalization at the technical level. Some are superficially aware of the steps needed to renormalize the theories but almost none of them has ever calculated a two-loop diagram, so they're ignorant about all the "practical knowledge" that really distinguishes experts from laymen in this topic.
Renormalization Group as the explanation why renormalization works; universality; fixed points; running
In the 1970s, starting with the works of Ken Wilson and others that affected both particle physics and condensed matter physics, it started to be understood why these renormalization methods – previously resembling black magic – work. The point is that the infinite terms parameterize "arbitrary terms that could depend on the details of how the theory is regularized at very short distances" while the long-distance behavior is universal. Wilson and others designed methods to deal with effective field theories, i.e. quantum field theories that only claim to work at some distance scale and longer scales while being agnostic about the shorter-distance physical phenomena.
Wilson et al. explained methods to translate effective field theories from one distance scale to a bit longer one and realized that the set of possible behaviors at long distances is highly restricted. Although lots of laws may be envisioned at short distances, the space of possible effective theories at long distances is often finite-dimensional. That's really why various methods of renormalization – including cutoffs, Pauli-Villars, and dimensional regularization, among others – are producing the same physical predictions at the very end. They have to. They're just dirty technical methods to describe something that may be shown to be almost unique but whose parameterization requires some dirt. Our focus has been redirected from the details of the "intermediate dirt" to the final result (e.g. the S-matrix of a theory in particle physics). We really understood that the set of "consistent S-matrices" is somewhat restricted and the "renormalization dirt" (which comes in many flavors) is just a constructive method to get the numbers in the S-matrix that exists (in the sense of mathematics) even without our wish to calculate things.
So people have learned lots of things about the running coupling constants, renormalization group flows, and similar things.
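The "running" of a coupling may be made concrete with a minimal sketch (my own illustration; the one-loop beta-function coefficient and the input value at the Z mass are standard, but the numbers are only indicative):

```python
import numpy as np

# Sketch: one-loop running of the QCD coupling,
#   d(alpha_s)/d(ln mu) = -(b0 / (2 pi)) * alpha_s^2,   b0 = 11 - 2*nf/3.
# Asymptotic freedom: the coupling shrinks as the scale mu grows.
nf = 5                        # active quark flavors (assumed)
b0 = 11.0 - 2.0 * nf / 3.0
alpha0, mu0 = 0.118, 91.2     # roughly alpha_s at the Z mass, used as input

def alpha_one_loop(mu):
    """Closed-form solution of the one-loop RG equation."""
    return alpha0 / (1.0 + b0 / (2.0 * np.pi) * alpha0 * np.log(mu / mu0))

# Crude Euler integration of the same flow, as a cross-check
alpha, t, dt = alpha0, 0.0, 1e-4
t_end = np.log(1000.0 / mu0)
while t < t_end:
    alpha += -b0 / (2.0 * np.pi) * alpha**2 * dt
    t += dt

print("alpha_s(1000 GeV), closed form:", round(alpha_one_loop(1000.0), 5))
print("alpha_s(1000 GeV), Euler flow :", round(alpha, 5))
```

Both methods agree and give a coupling visibly smaller than the input at 91.2 GeV – the one-loop flow in its simplest possible incarnation.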
Fixed points and special treatment of conformal field theories
In the renormalization group paradigm, scale-invariant theories play a very important role; I will be a bit sketchy and use the term conformal field theory (CFT) as a synonym for scale-invariant theories. They may be the short-distance or long-distance limits of general theories. Well, they very often are and must be, under some assumptions. Conformal field theories are even more unique and their spectrum is more restricted than the set of consistent QFTs in general.
And there are lots of special things that can be done with conformal field theories. I discussed conformal field theories in May 2015, as one of the main bodies of knowledge that a sub-Saharan quantum field theory student has to master and become smarter at before she ;-) becomes a string theorist. Clearly, the general purpose of that blog post of mine was similar to this one but here I am more general and talk about new findings in QFTs, not only those that are particularly important for string theorists.
OK, conformal field theories may be placed on spaces that are conformally mapped to cylinders or manifolds of any topologies. One may discuss the OPEs, mutual localities and monodromies, and similar things that are important for the application of CFTs to the string world sheet. They may be nicely defined on spheres, which is done in AdS/CFT. CFTs have the special conformal symmetries that end up being isomorphic to the isometries of the anti de Sitter space, a basic pattern that led Maldacena to suggest that AdS quantum gravity and CFTs could be physically equivalent. AdS/CFT itself is a huge field by now, of course, with over ten thousand papers, and it's the class of discoveries that make QFT and string theory inseparable.
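The symmetry matching that underlies Maldacena's suggestion can be checked at the level of a simple generator count (my sketch; the real statement is of course about the full group structure, \(SO(d,2)\), not just its dimension):

```python
# Sketch: count the generators of the conformal group in d dimensions and
# compare with dim SO(d,2), the isometry group of (d+1)-dimensional AdS.

def conformal_generators(d):
    translations = d                  # P_mu
    rotations = d * (d - 1) // 2      # M_{mu nu} (Lorentz generators)
    dilatation = 1                    # D
    special_conformal = d             # K_mu
    return translations + rotations + dilatation + special_conformal

def so_dim(n):
    return n * (n - 1) // 2           # dimension of SO(n) (or SO(p,q), p+q=n)

for d in (2, 3, 4, 6):
    assert conformal_generators(d) == so_dim(d + 2)
    print(f"d={d}: {conformal_generators(d)} generators = dim SO(d,2)")
```

For d=4, both counts give 15 – the dimension of \(SO(4,2)\), the isometry group of \(AdS_5\).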
So conformal field theories play an important role in the classification of all QFTs because CFTs are even more special and may be studied by extra tools; they are the "fixed points" from which non-scale-invariant QFTs are obtained by deformations; they are important for the whole of perturbative string theory as the definition of the world sheet dynamics; they make holography manifest in AdS/CFT; and there are several extra reasons why CFTs are special and why a good theoretical physicist simply dedicates much more than an "infinitesimal fraction" of his QFT time to them. The idea that CFTs should only occupy a tiny fraction of physicists' time is a naive idea by someone who is just rolling marbles (the building blocks of the "1930 QFT") but who doesn't actually understand anything about the properties and structure of QFTs that were learned by adults.
Anomalies and properties of QFTs that aren't just a classical theory with hats
Anomalies are quantum effects that make it impossible to construct a consistent quantum mechanical theory with a symmetry even though a classical theory with that symmetry may exist. For example, four-dimensional QFTs with chiral (left-right-asymmetric) fermions coupled to gauge fields generically have anomalies calculable from triangular Feynman diagrams (the triangles become hexagons in 10D; in \(2K\) dimensions, the relevant polygon has \(K+1\) sides).
For example, the Standard Model would be inconsistent if you omitted all leptons and only kept quarks, or vice versa. The leptons and quarks actually have to add up their contributions to the anomalies and only when that is done does the theory respect the usual Standard Model gauge symmetries. The breakdown of the gauge symmetries – gauge anomalies – would be an incurable inconsistency of the theories because the (destroyed) gauge symmetries are vital (when the symmetries work) for the removal of the negative-probability, time-like and longitudinal modes of gauge bosons (the photon and its generalizations).
Anomalies are purely quantum and may be interpreted as "UV (short-distance) obstacles for renormalization" in some cases, or as IR (long-distance) properties of a would-be QFT. But anomalies aren't the only "purely quantum" phenomena in QFT that have no counterpart in classical field theory. Seiberg-Witten papers from 1994 or so have shown that the whole topology of the moduli space of scalar fields (and the monodromies in it) is unavoidably different from what a classical look at the scalar fields would suggest is unavoidable.
The superficial people don't really understand any of these things. They think that quantum field theory is always just a "classical field theory with hats", with some new universal phenomena. But it's not. It is heavily outdated in 2020 to imagine that a QFT is "constructed" from a classical field theory. Most classical field theories don't lead to a consistent QFT and most QFTs (in some natural measure, I don't mean the measure seen in the papers) can't even be obtained by "quantizing" a classical field theory. Instead, classical field theories are limits of QFTs that may exist (or not) and a part of the information about the QFT (either everything or not) may be reconstructed from the limit when the limit exists at all.
Non-perturbative objects, phenomena, high-order behavior of amplitudes
One obvious aspect of the "understanding of QFTs from 1930" is that it was almost completely perturbative. But even multiloop diagrams are "not enough" to calculate precise answers in QFTs. Also, the multiloop diagrams have some asymptotic behavior for a high number of loops and the whole sum actually diverges in generic QFTs and string theory. But that's not an inconsistency because an expansion of a totally well-behaved function may "look" like a divergent one.
The minimal inaccuracy in the resummation of the divergent multiloop series is comparable in magnitude to the leading, i.e. largest, nonperturbative terms. Nonperturbative terms such as instantons – and the "analogous" objects like solitons, monopoles, and branes – are also almost completely misunderstood by the "QFT pundits stuck in 1930".
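The interplay between the divergent series and the nonperturbative terms can be seen in a standard toy model (my code; the finite integral \(\int_0^\infty e^{-t}\,dt/(1+gt)\) has the divergent formal expansion \(\sum_n n!\,(-g)^n\)):

```python
import numpy as np
from math import factorial, exp

g = 0.1

# "Exact" value of f(g) = ∫_0^∞ e^{-t} dt / (1 + g t), trapezoid rule on [0, 50]
# (the neglected tail beyond t = 50 is of order e^{-50}, utterly negligible)
t = np.linspace(0.0, 50.0, 500_001)
y = np.exp(-t) / (1.0 + g * t)
dt = t[1] - t[0]
exact = (y.sum() - 0.5 * (y[0] + y[-1])) * dt

# Partial sums of the divergent asymptotic series sum_n n! (-g)^n
terms = [factorial(n) * (-g) ** n for n in range(25)]
partial = np.cumsum(terms)
errors = np.abs(partial - exact)
best = int(np.argmin(errors))           # optimal truncation order

print(f"exact ≈ {exact:.6f}")
print(f"optimal truncation at N ≈ {best}, error ≈ {errors[best]:.1e}")
print(f"e^(-1/g) = {exp(-1.0 / g):.1e}  -- the nonperturbative scale")
print(f"error at N = 24: {errors[24]:.1e}  -- the series is diverging again")
```

The minimal error sits near \(N\approx 1/g\) and is of order \(e^{-1/g}\) up to a prefactor – exactly the size of the leading nonperturbative term, as stated above.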
Analytic behavior in the complex plane
Complex numbers (as totally fundamental mathematical objects in theoretical physics) are largely ignored and misunderstood by all the superficial QFT kibitzers. There is an amazing body of knowledge about QFTs that they're almost completely unaware of. The real point is that all useful QFTs have scattering amplitudes or Green's functions etc. that are meromorphic (almost everywhere locally holomorphic) functions of the energies and momenta. The validity of this analyticity (in energy) in quantum mechanical amplitudes goes back to the late 1920s but for more than 50 years, it's been heavily (and increasingly) appreciated and exploited in QFT.
So the whole field of analytic functions, the theorems about uniqueness of such functions, and the different singularities in the complex plane and their interpretation (e.g. through unitarity and its consequences for the behavior of diagrams that are cut to pieces) is directly relevant for a proper physical interpretation of the functions that appear in QFTs. This conclusion wasn't obvious from the beginning but it's true and essential. It's been proven mathematically and the mathematical proof is at least as strong as e.g. the discovery of DNA. The people who mentally live in 1930 simply don't get any of this stuff. That ignorance has far-reaching implications. They don't understand the degree of uniqueness of QFTs or subclasses of QFTs with some properties. They don't understand or appreciate the importance of the research of many complex functions for physics of QFTs, and many more things.
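A minimal sketch of the simplest case (my illustration, the textbook Breit-Wigner amplitude): a pole of the amplitude at a complex value of \(s\) dictates both the position and the width of a resonance bump seen at real energies.

```python
import numpy as np

# Sketch: A(s) = 1 / (s - m^2 + i m Gamma) has a pole at the complex point
# s = m^2 - i m Gamma.  On the real s-axis, |A|^2 shows a resonance bump
# centered at m^2 whose full width at half maximum is 2 m Gamma.
m, Gamma = 3.0, 0.2

def amplitude(s):
    return 1.0 / (s - m**2 + 1j * m * Gamma)

s = np.linspace(5.0, 13.0, 80_001)
rate = np.abs(amplitude(s)) ** 2

peak = s[np.argmax(rate)]
half = rate.max() / 2.0
above = s[rate >= half]
fwhm = above[-1] - above[0]

print(f"peak at s ≈ {peak:.3f}   (expected m^2 = {m**2})")
print(f"FWHM in s ≈ {fwhm:.3f}  (expected 2 m Gamma = {2 * m * Gamma:.3f})")
```

The imaginary part of the pole's location is literally the decay width – a first, baby example of how singularities in the complex plane carry the physical information.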
There exist very good reasons why deeper analyses of QFTs end up requiring new classes of special functions. For example, Bessel functions become elementary and are sort of connected with "classical field theory". Perturbative string theorists need Gamma-like and zeta-like functions; amplitudes or amplituhedron researchers end up being big experts in polylogarithms, among similar things. This is not just some invented decoration. The importance of these functions in the structure of the theories is an established fact.
The superficial kibitzers generally dismiss all this deep and essential stuff but this stuff has been shown to be deep and essential and their misunderstanding of this importance is the kibitzers' fault, not a defect of the actual ongoing world class research. You can impress moronic laymen among the readers of newspapers by ludicrous statements that "research into complex functions with singularities in QFT is bad, heretical, unscientific" etc. but you can't do actual physics research in 2020 with that assumption.
New symmetries and the "primordial status" of highly supersymmetric theories
If one is stuck in 1930, he also holds naive ideas about the symmetries that are possible in physics. The most important overlooked symmetry (or type of symmetry) is supersymmetry, which only emerged in the 1970s. In some sense, it's the maximally "intrusive" type of symmetry that may exist in QFTs without making the theory completely trivial. There may be several supercharges in every dimension – the number of supercharges may go from the "minimal supersymmetry" (one spinor of the allowed type) to the "maximally extended supersymmetry" (as many spinors as possible that still avoid fields of too high a spin in the multiplets).
Supersymmetry isn't just some ad hoc pairing of bosons and fermions. Like other symmetries, it's a constraint on the possible interactions. In the case of supersymmetry, the constraints are strong (especially for highly extended supersymmetry) and these constraints are "nice" in the sense that the pathological terms are banned first. So supersymmetry often guarantees a perfect cancellation of infinities in some Green's functions (or all Green's functions, like in the maximally supersymmetric gauge theory).
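The way the pairing constrains the spectrum can be seen even in supersymmetric quantum mechanics, the simplest toy with supercharges (my numerical sketch, with superpotential derivative \(w(x)=x\), an arbitrary simple choice):

```python
import numpy as np

# Sketch: SUSY quantum mechanics with w(x) = x.  The operator A ≈ d/dx + w(x)
# is discretized as an (n-1) x n matrix from the grid to the midpoint grid.
# The partner Hamiltonians H_- = A†A and H_+ = AA† then have identical nonzero
# spectra (an exact linear-algebra fact), while H_- keeps one unpaired
# zero-energy ground state -- the boson-fermion pairing in miniature.
n, L = 800, 8.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
xm = 0.5 * (x[:-1] + x[1:])                    # midpoint grid

Dif = (np.eye(n)[1:] - np.eye(n)[:-1]) / dx    # forward difference onto midpoints
Avg = 0.5 * (np.eye(n)[1:] + np.eye(n)[:-1])   # average onto midpoints
A = Dif + np.diag(xm) @ Avg

H_minus = A.T @ A   # "bosonic" partner, ≈ -d²/dx² + x² - 1 (levels 0, 2, 4, ...)
H_plus = A @ A.T    # "fermionic" partner, ≈ -d²/dx² + x² + 1 (levels 2, 4, 6, ...)

e_minus = np.sort(np.linalg.eigvalsh(H_minus))[:6]
e_plus = np.sort(np.linalg.eigvalsh(H_plus))[:6]
print("H_- levels:", np.round(e_minus, 3))   # ≈ [0, 2, 4, 6, 8, 10]
print("H_+ levels:", np.round(e_plus, 3))    # ≈ [2, 4, 6, 8, 10, 12]
```

All nonzero levels come in exactly degenerate pairs between the two sectors; only the zero-energy state of \(H_-\) is unpaired. It is this rigid pairing that, in field theory, forces the "nice" cancellations of infinities mentioned above.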
For this reason, highly supersymmetric QFTs, which have the nicest properties, may be placed at the top of a pyramid of relationships, and other QFTs may be considered uglier "relatives" that are derived from the highly supersymmetric theories. This perspective is "reverse" relative to what people naively expected but it is known by now to be the more physically natural and more physically potent one.
People who imagine that the supersymmetric theories are a "random subset" or a "contrived construction" created out of non-supersymmetric building blocks completely misunderstand the central role that the highly supersymmetric theories have played in our understanding of the "map of the landscape of QFTs" for a few decades, especially in the most recent 26 years. In some sense, this misunderstanding is a special case of their misunderstanding of all of modern physics as started by Einstein. Einstein placed the symmetries (and general principles) at the center of physics. So the absence of any "aether wind" (in the Michelson-Morley interferometry experiment) isn't some contrived, unlikely conspiracy between lots of building blocks. Instead, it may be considered the starting point in our search for the promising theories. Observations have some general patterns – the equivalence of inertial observers, the absence of the aether wind – and these general patterns are some of the most powerful empirical data that we have. So deep physicists are using the most general patterns and principles of this type as the main guides that direct their search for the correct theories. These very general symmetries and principles are more important and more informative for good theoretical physicists than some isolated observations of one building block or another! That's just how it has worked for more than a century and whoever "disagrees" is simply incompetent as a theoretical physicist. There is nothing to disagree with. One may either know this important fact or be ignorant about it.
The highly symmetric (and especially supersymmetric) theories end up being vital beacons on the good physicists' "map of all possible theories". This status of theirs is just another manifestation of the reform of the very logic of physics that was started by Einstein. The people who ignore this new perspective of modern physics are really mentally living before 1905, not just before 1930. They are mindlessly combining randomly invented building blocks according to a "restricted type of the game" that has been obsolete for many decades, in an effort to "win the jackpot" and get a theory that agrees with the observations. But that's not how good physicists have approached the empirical data, the general principles and patterns they seem to obey, and the spectrum of theories since 1905. Since that year, good physicists have placed the symmetries (and principles) and symmetric theories (and the principles they obey) at the center and have tried to think about the most general and inclusive classes of theories that obey certain principles. Even when a symmetry is broken or a principle is violated, it may be violated weakly and it is a good idea to interpret a "theory with a broken symmetry or principle" as a beast that is derived from a more symmetric or more principled one. The perspective taken by the competent theoretical physicists in 2020 is "reverse" relative to the old one – but it is more natural while the old approach looks man-made today. Physicists in the past could have randomly played with building blocks and they were "inventing", like Edison, but physicists of 2020 are much closer to actually "discovering" things, mapping the ideas that mathematically exist independently of humans. That's the superior approach to natural sciences because Nature isn't man-made; it hasn't been built or invented by Edison or another human.
In this essay, I wanted to enumerate some aspects of QFTs that weren't known or weren't appreciated around 1930 when QFT was getting started but that have become very important today, especially in recent decades. The list is in no way comprehensive. I could continue with the details of the twistor and amplituhedron research, relationships between QFTs and matrix models, QFT in curved spaces (including the Hawking radiation etc.), and many more – and of course, if we go beyond QFTs, all of the quantum gravity that is consistently described by string/M-theory.
When good theoretical physicists see someone who apparently doesn't get some, most, or all of the "modern aspects of QFTs", they know that he or she is not a real expert. He or she is at most a fake physics researcher, and every reader of the "mainstream" media who buys the stuff about the possibility, if not desirability, of erasing 30 or 40 years' worth of physics is just a brainwashed sheep.
Sadly, the "mainstream media" are overwhelmingly dominated by fake researchers. Those fake researchers have spread to some (fortunately, almost always low-prestige) research institutions and universities, too. To make things worse, the level of education is dropping at most places of the Western society. The basics of all the "aspects of modern QFTs" may be considered stuff that graduate students should be learning during the bulk of their graduate school years. Some good students obviously started (and perhaps still start) learning these things as undergrads.
But it's becoming increasingly normal for graduate students, or even PhD alumni, to be stuck at the level of "QFT of 1930", in the sense described in this blog post. My criticism often applies not only to the media-savvy physics "pundits" but to some people in the research community, too. So the graduate students' knowledge of QFT is often dropping to the level that is similar to what we considered the "undergraduate knowledge of QFT" just two decades ago. Every single self-confident critic of string theory, supersymmetry, amplituhedrons, and lots of other topics is a frustrating piece of evidence of this rapidly deteriorating education level in our society – and at our universities.
And that's the memo.