Monday, January 04, 2021

UV, dynamical problems are problems with a theory; IR, kinematic problems are just hard work for users

Elias Archer has asked whether and how string theory solves "difficulties and complications" that he sees in quantum field theory on curved spacetime backgrounds. Well, I do understand this kind of question. When I was leaving high school, I still wasn't quite comfortable with quantum mechanics and I did think that string theory would revolutionize the understanding of the measurement so that it would look more materialistic, deterministic, or classical. This idea evaporated within a year; of course the postulates of quantum mechanics are extremely universal and string theory doesn't change anything about these rules and this philosophy that are shared by all quantum mechanical theories. And there are no problems with the general postulates of quantum mechanics.

But this broader theme keeps recurring. We the people often misunderstand something about an "older, approximate theory" and expect the "newer theory" to fix the problems or "difficulties and complications". In some situations, the expectations are justified (and sometimes they are fulfilled); in others, they are wrong because our misunderstanding of the "older theory" is our personal misunderstanding that we should solve by thinking harder – we really don't understand even the "older theory" correctly and this cognitive problem of ours has nothing to do with the changes that are made by the "newer theory".



I think that it is pretty much accurate to say that these two classes of "difficulties" may be defined as the following categories with certain traits:
  • ultraviolet (UV, short distance) problems that indicate a real problem or incompleteness in the dynamical equations of the theory, something that is clearly a part of the "engine" in the theory that cannot be changed by our interpretations or "ways how to use the engine". These genuine problems may be called "inconsistencies" and they need a fix (or an addition) within the engine, within the theory.
  • infrared (IR, long distance) problems that arise because the UV laws imply something about the physics at long distances by pure reductionism combined with the large spacetime geometry; the character of these problems doesn't depend on the precise dynamical laws, so the problems may be considered "signs of our wrong usage of the theory or of our misunderstanding of how it may be used and what qualitatively happens". The solution to these problems occurs in our head; we should learn something universal about the kinematics, an insight that doesn't depend on the detailed UV dynamical equations of the theory, and once we learn it, the "difficulties and complications" don't look so bad.


Let me discuss several important examples. First, as I mentioned, many people have problems with the basic rules of quantum mechanics, with the fact that this theoretical framework doesn't discuss "how things objectively are" (which is a wrong or meaningless proposition) but "what propositions may be made about the system" (and the propositions unavoidably depend on the observer). I have discussed the universal postulates of quantum mechanics in hundreds or a thousand or so blog posts. The expectation that something will be improved in a "better theory" is completely wrong. The theory is right, the problem is on the quantum skeptics' side, they need to fix their dysfunctional brains.

Divergences in quantum field theory are an important class of the "difficulties" that arise when we combine quantum mechanics and special relativity. Loop (Feynman) diagrams lead to integrals over momenta (or positions) and these integrals may diverge, either because of the region \(|\vec p| \to \infty\) (or \(|\vec x_1-\vec x_2| \to 0\)) which is the UV region; or in the opposite extremes of the integration domain, \(|\vec p| \to 0\) (or \(|\vec x_1-\vec x_2| \to \infty\)) which is the IR region.
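
To make the two regions concrete, here is a minimal illustrative integral (my own generic example, not one taken from a specific calculation above): the scalar one-loop "bubble"\[ I(p) = \int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{(k^2-m^2+i\epsilon)\,\big[(k+p)^2-m^2+i\epsilon\big]} \] behaves like \(\int d^4k/k^4\) for \(|k|\to\infty\) and is therefore logarithmically UV-divergent; an IR divergence appears instead in analogous massless integrals with on-shell external momenta, such as the QED vertex correction in which a very soft virtual photon (\(k\to 0\)) connects two on-shell electron lines.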

These two types of divergences are symptoms of "difficulties" of a completely different character and I chose the basic structure of this blog post according to this class of "complications", the divergences. Many (but not all) undergraduate courses on quantum field theory explain that the UV divergences need a true dynamical fix within the theory and they may render a theory inconsistent (or non-renormalizable), which means that in one way or another, it had better be replaced with another theory, at least if we want precise predictions at arbitrarily short distances.

On the contrary, the IR "difficulties" don't mean that the theory is bad. Instead, they mean that we have asked a wrong question, one that implicitly rested on qualitatively wrong assumptions about the physical processes or about the numbers (observables) that describe these processes and their outcomes. Let us elaborate on both phenomena, the UV divergences and the IR divergences, a little bit.

Shortly after the birth of quantum field theory around 1930, people were only dealing with the tree-level diagrams that describe certain processes (and scattering) in the leading approximation. One may show that the tree-level diagrams contain "the same information" as the classical equations governing the evolution (and collisions) of classical waves (electromagnetic waves and other waves). The tree-level Feynman diagrams look very quantum and depend on propagators, external particles from a Fock space, and other things that look totally non-classical. But the information that we get from these diagrams is just a "reprocessed" version of the information that is also encoded in the data about the scattering of classical waves.

It must be so because the loop expansion (where we consider diagrams with an increasing number of loops to be increasingly negligible) is nothing else than the expansion in \(\hbar\); the more loops a diagram has, the higher the exponent \(\ell\) in the \(\hbar^\ell\) prefactor estimating the magnitude of the diagram (a contribution to the total amplitude), and the more "quantum" (and therefore more negligible in the classical limit) the contribution is.
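
One quick way to see why the loop counting is an \(\hbar\)-counting: restore \(\hbar\) in the path-integral weight \(\exp(iS/\hbar)\). Each propagator then carries a factor of \(\hbar\) and each vertex a factor of \(1/\hbar\), so a connected diagram with \(I\) internal lines, \(V\) vertices, and \(L = I - V + 1\) loops scales like\[ \hbar^{I-V} = \hbar^{L-1}, \] i.e. every additional loop costs exactly one additional power of \(\hbar\) relative to the tree diagrams.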

Already the simplest loop diagrams, the 1-loop diagrams in \(D=4\) quantum electrodynamics, were quickly seen to be UV-divergent, receiving contributions from the very high momenta of the virtual particles that simply aren't finite. For a while, physicists thought that the theory was useless for the calculation of the smaller, more precise, more quantum corrections. However, some brave souls ignored the infinite terms and noticed that the most straightforward erasure of the "infinite terms", while extracting the "finite parts" of the loop diagrams, yielded much more precise results than the tree-level diagrams alone. So the loop diagrams should be included and the infinities should be erased in some way.

It was black magic for a little while but in the 1940s, they already had a pretty much universally correct rule for how to do these subtractions. One may assume that the coupling constants such as \(e\) in electromagnetism that are substituted into the QED Lagrangian have both a finite piece and a divergent piece (of the form \(k\cdot \Lambda^m\) for a cutoff \(\Lambda\)). When the tree-level diagrams are added to the one-loop diagrams, the divergence from the one-loop diagram (computed with the finite piece of \(e\)) is cancelled against the infinite part of the tree-level diagram (which involves no divergence in the integral but, as I just said, whose prefactor is a sum of two terms including an infinite one). When this is done right and the \(e\) in the Lagrangian is fine-tuned to cancel the divergences, the truly observable phenomena end up being finite. The infinite terms in \(e\) in the Lagrangian, the "bare coupling constant", aren't an inconsistency because the bare coupling constants can't really be measured using any apparatuses and protocols. So QED has divergent terms in the bare parameters; but all the calculated probabilities of genuine, full-blown processes end up being finite numbers between zero and one.
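
Schematically (a sketch with a generic momentum cutoff \(\Lambda\) and a generic one-loop coefficient \(c\), not the notation of any particular textbook), the amplitude of a process looks like\[ \mathcal{M} \sim e_{\rm bare}^2 + c\,e^4\ln\frac{\Lambda}{\mu} + ({\rm finite}), \qquad e_{\rm bare}^2 \equiv e^2(\mu) - c\,e^4(\mu)\ln\frac{\Lambda}{\mu}, \] so the cutoff-dependent logarithm from the loop integral is eaten by the divergent piece hiding in the bare coupling of the tree-level term, and \(\mathcal{M}\) is expressed purely through the finite, measured coupling \(e(\mu)\).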

Physicists gradually saw that this process, renormalization, worked and produced arbitrarily accurate results at all loop orders, results which kept on agreeing with the experiments. The precise way in which the divergences were parameterized turned out to be ambiguous but all these ambiguities canceled in the final result. Meanwhile, the good theories only depended on a finite number of parameters such as \(e\). In the 1970s, these "miracles" making the black magic usable were explained using the notions of "effective field theory" (EFT) and the "renormalization group" (RG). Whatever the detailed physics at short distances is, we are guaranteed to get some approximate theory at long distances which is an effective field theory, and the number of such long-distance behaviors is rather limited, labeled by finitely many parameters (in the sub-cases we study). Non-renormalizable theories (producing infinitely many types of divergences waiting to be canceled, and therefore requiring the adjustment of infinitely many coupling constants) were "trash" according to the picture of the 1940s; in the modern RG picture of the 1970s, they are OK as effective field theories (EFT) but mustn't be trusted at too high energy scales. Every EFT has a cutoff \(\Lambda\). Non-renormalizable theories are just those whose \(\Lambda\) cannot be chosen to be much higher than the typical masses of particles in the theory; renormalizable theories may be extrapolated to much higher, almost infinite, energy scales.
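
A standard illustration (my example; it isn't discussed elsewhere in this text) is the Fermi theory of the weak interactions with the four-fermion operator \(G_F(\bar\psi\psi)(\bar\psi\psi)\): because \(G_F\) has the dimension of \(1/{\rm mass}^2\), dimensional analysis forces the scattering amplitudes to grow like \(G_F E^2\), so the effective theory inevitably breaks down (violates unitarity) roughly near the cutoff\[ \Lambda \sim \frac{1}{\sqrt{G_F}} \approx 300\,{\rm GeV}, \] which is indeed the scale where it gets completed by the exchange of the W bosons.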

I don't want to focus on renormalization, the RG, and EFTs too much. But the need to subtract the divergences, adjust the bare parameters, and replace the non-renormalizable theories with renormalizable ones may be said to be work for a "repairman" who is genuinely fixing the "engine of the theory" to make it work – or to make it work more accurately and for higher or more general momenta than without the fix. Note that already perturbative string theory eliminates all UV divergences. It's nice but it's not really "existentially needed". What was existentially needed was for string theory to make quantum gravity renormalizable; to remove the infinitely many types of UV divergences that plague the quantized Einstein's equations. And indeed, by removing all UV divergences, string theory also easily makes sure that you won't be crippled by infinitely many types of divergences.

The punch line ends up being very different for the infrared divergences. In quantum electrodynamics, people quickly noticed that aside from the UV divergences, some one-loop (and sometimes also higher-loop) diagrams also produce divergences from the integral over the region of very low momenta (or long distances between the vertices in the spacetime), the infrared divergences. What is the probability that two electrons scatter and change their directions to new ones while emitting exactly one photon in a third direction? The answers to such questions end up leading to IR divergences in the loop diagrams.

It turns out that in this case, there is nothing wrong with the theory. You shouldn't change the formula for the Lagrangian and you shouldn't even modify your assumptions about the values of the parameters (in the UV-divergent case, we said that the bare coupling constants had an infinite piece). Instead, you should look carefully at what you're calculating and what the calculation assumes. By calculating the specific process with just a single photon at the end, and assuming that the probability will be a finite number strictly between 0 and 1, we assume that it's normal for such an electron-electron collision to produce a finite number of photons.

Well, however "obvious" this assumption may look, it is wrong. In reality, the scattering of two electrons produces an infinite number of photons on average. Why? Classically, two electrons repel each other, accelerate, and the accelerating charges emit electromagnetic waves. This conclusion must be incorporated in the full quantum theory, quantum electrodynamics, and you may be sure that it is. QED doesn't really study the precise trajectories of the electrons during the process (only the initial and final states are truly observed) but the expectation value of some acceleration would be nonzero and electromagnetic waves are emitted. And in QED, electromagnetic waves are represented as photons. These are the same kinds of photons that appear as virtual photons in the diagrams responsible for the repulsion of the electrons.
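
Classically, the rate of this emission is quantified by the Larmor formula (a standard classical result, written here in Gaussian units)\[ P = \frac{2}{3}\,\frac{e^2 a^2}{c^3}, \] where \(a\) is the acceleration of the charge; QED has to reproduce this radiated power in the appropriate classical limit, and it does.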

The total energy emitted via the electromagnetic waves during the repulsive encounter of two electrons is finite; but the number of photons is infinite. It's not hard to see that these statements don't contradict each other; the number of particles is an integral of a density over frequencies\[ N_\gamma = \int_0^{\infty} d\omega\,\rho(\omega) \] but the total energy is a similar integral with the extra insertion of the photon energy \(E=\hbar\omega\):\[ E_\gamma = \int_0^{\infty} d\omega\,\rho(\omega)\cdot \hbar\omega. \] In the region of the integral with \(\omega\to 0\), the second integral for \(E_\gamma\) may very well be convergent due to the extra suppression by the factor \(\hbar\omega\); while the first integral for \(N_\gamma\) may be divergent. And indeed, you will find out that in certain processes, exactly the right exponents appear so that both statements are correct. The repulsive, accelerating electrically charged particles emit a finite amount of energy in the form of photons; but the expectation value of the number of these photons is infinite.
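
To be a little bit more specific (the coefficient \(A\) is process-dependent, of order \(\alpha\) times kinematical factors; only the low-frequency behavior matters here): soft bremsstrahlung produces a spectrum behaving like \(\rho(\omega) \approx A/\omega\) for \(\omega\to 0\), so\[ N_\gamma \supset \int_0^{\omega_0} \frac{A\,d\omega}{\omega} = \infty, \qquad E_\gamma \supset \int_0^{\omega_0} \frac{A\,d\omega}{\omega}\,\hbar\omega = A\hbar\omega_0 < \infty, \] i.e. a logarithmically infinite number of photons carrying a finite total energy.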

Once you become comfortable and certain about these facts, you may ask what you should do. If it is unavoidable for the process to produce infinitely many photons and the probability of any process with a finite number of photons is zero, which probabilities should we calculate at all? And a usable answer – and the simplest usable answer (although not necessarily the only usable answer) – is to consider the inclusive cross sections (probabilities per unit flux etc.) where we allow an arbitrary number of photons that can't be detected in practice, e.g. an arbitrary number of photons with the frequency \(\omega\lt \omega_{\rm min}\) where \(\omega_{\rm min}\) is the minimum frequency that can be detected by an apparatus.

When we do so, we will automatically see that such probabilities are finite. The original, "naive" questions were equivalent to the \(\omega_{\rm min}\to 0\) limit of the new question (all photons may be detected by the idealized apparatus in this naive limit). However, some tree-level processes will yield a cross section that seems divergent in the \(\omega_{\rm min}\to 0\) limit and this "divergent" (only when the limit \(\omega_{\rm min}\to 0\) is actually taken!) piece exactly cancels against the divergences in the loop diagrams (where we also truncate and integrate over \(\omega \gt \omega_{\rm min}\) only; why these two cutoff values of the frequency are almost the same is a slightly more subtle question). What is important is that we were not adjusting anything in the Lagrangian, not even the values of the bare parameters. In the refined treatment, the cancellation took place even with the original Lagrangian and the original values of the bare parameters. It was our question that needed to be fixed for the theory to produce finite nonzero answers. The theory was right, we were not!
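
Schematically (a sketch with an auxiliary tiny photon mass \(\lambda\) as the IR regulator and a kinematics-dependent coefficient \(B\)), the virtual and real soft contributions combine as\[ \sigma(\omega_{\rm min}) \sim \sigma_{\rm tree}\left[1 - \frac{\alpha}{\pi}B\ln\frac{Q^2}{\lambda^2} + \frac{\alpha}{\pi}B\ln\frac{\omega_{\rm min}^2}{\lambda^2}\right] = \sigma_{\rm tree}\left[1 - \frac{\alpha}{\pi}B\ln\frac{Q^2}{\omega_{\rm min}^2}\right], \] so the unphysical regulator \(\lambda\) drops out and only the detector resolution \(\omega_{\rm min}\) survives. The leftover logarithms exponentiate to \(\sigma \sim \sigma_{\rm tree}\,(\omega_{\rm min}/Q)^{2\alpha B/\pi}\), which goes to zero for \(\omega_{\rm min}\to 0\): that is just the earlier statement that the probability of emitting no soft photons at all vanishes.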

There exist various other IR effects where seemingly inconsistent (e.g. divergent) results may be calculated from the theory. But in all these cases, the "difficulties" arise because of our incorrect qualitative assumptions about what happens, not because of a pathology or incompleteness in the theory. Once we adjust our assumptions about what qualitatively happens and ask a more careful question (such as one that allows the infinitely many soft photons, beneath the frequency \(\omega_{\rm min}\), to be emitted along with the particles that we do observe and distinguish), the theory starts to cooperate and produces finite, nonzero answers for the probabilities, without any need to fix the theory.

Elias Archer asked about something that turns out to be analogous to these IR problems. He asked whether and how string theory solved the difficulties with the "inequivalent unitary representations" in quantum field theory on curved spaces. In particular, when we discuss the Unruh effect (the simpler, Hawking-like radiation that an accelerating observer sees in the ordinary Minkowski space), Elias was worried about the fact that the right Hilbert space contained different states according to the static observer (Mr Hermann Minkowski) and the accelerating observer (Mr Wolfgang Rindler).

Just like in the case of the universal postulates of quantum mechanics and the IR divergences, it is really wrong to expect any fix. In fact, this only looks like a "difficulty" because the worried person hasn't understood the simpler theory properly. The difference between the two Hilbert spaces is damn real. String theory changes nothing about the difference – and no other more complete theory ever will. Well, string theory only changes the "engine of the theory" which lives in the UV, at short distances, and this is really an IR problem, so it's obvious that string theory doesn't change anything about the "problem". (String theory also changes some long-distance issues, especially when we start to distinguish the black hole microstates, but there are no black holes in the basic discussion of the Rindler coordinates in the Unruh effect.)

Why are the Minkowski and Rindler Hilbert spaces different? It's because of the subtleties concerning "which extreme enough formal combinations of the Fock space states etc. should be included and which shouldn't be". Sometimes, physicists may use the term "the Hilbert space" in a slightly sloppy way. For example, many physicists often include the "Dirac delta-function" (distribution) form of a wave function in "the Hilbert space" of allowed wave functions. Well, the delta-function wave function has an infinite norm and that's why it shouldn't be included in the normal Hilbert space. The broader space that does allow the distributions as wave functions has been given a name by the rigorous mathematicians (who care about the difference) that sounds like a commentary on the 2020 U.S. Presidential Election: "the rigged Hilbert space".
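
Indeed, the squared norm of the position "eigenstate" \(\psi_{x_0}(x) = \delta(x-x_0)\) is\[ \int dx\,|\delta(x-x_0)|^2 = \delta(0) = \infty, \] so it simply isn't an element of the ordinary Hilbert space of square-integrable functions.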

But the finite norm isn't the only extra constraint we normally demand when we build the Hilbert space. We also want the expectation value of the energy to be finite (because no lab ever gets the infinite amount of money to pay for the infinite electricity that would be needed to produce the infinite-energy states). In non-relativistic quantum mechanics, the finite energy means that the wave function must be continuous. A discontinuity (jump) in \(\psi(x)\) would produce a delta-function in \(\hat p\,\psi(x) = -i\hbar\,\partial_x\psi(x)\), which would be squared in the integral for the expectation value of the kinetic energy \(p^2/2M\). The integral of \([\delta(x)]^2\) diverges, and so would the expectation value of the kinetic energy; that's why the finiteness of the kinetic energy prohibits wave functions with jumps.
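
As a minimal check, take a wave function with a jump at the origin, \(\psi(x) = \theta(x)\,f(x)\) with \(f(0)\neq 0\) and \(f\) otherwise smooth and normalizable. Then \(\hat p\,\psi = -i\hbar\,\partial_x\psi\) contains the term \(-i\hbar f(0)\,\delta(x)\) and the kinetic energy picks up the contribution\[ \Big\langle \frac{\hat p^2}{2M}\Big\rangle \supset \frac{\hbar^2 f(0)^2}{2M}\int dx\,[\delta(x)]^2 = \infty. \]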

Analogous constraints demanding a finite energy exist in quantum field theory and they are more nontrivial already in the free quantum field theory with Fock spaces, because the states in these Fock spaces are effectively wave functions defined on an infinite-dimensional space (i.e. wave functionals if the state vector is written in the maximally continuous representation). And now we see a very clear reason why the Minkowski and Rindler Hilbert spaces may be different. We demand a finite energy but these two observers use a different definition of the energy, a different Hamiltonian. The Minkowski observer uses \(H\) as the generator of \(t\to t+\delta t\) while the Rindler observer uses \(H=J_{tz}\), a generator of the boost in the \(z\)-direction (to be specific). That's a simple reason why the two spaces of finite-energy states will differ; they differ because the concept depends on the word "energy" and the two observers have a different definition of energy. Each observer, i.e. each definition, will impose slightly different conditions on the "continuity of the wave functional", i.e. on where and how much it has to be "continuous".
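
For completeness, a standard result that I won't derive here: the restriction of the Minkowski vacuum to the right Rindler wedge is exactly thermal with respect to this boost Hamiltonian,\[ \rho_{\rm Rindler} = \frac{e^{-2\pi J_{tz}}}{{\rm Tr}\,e^{-2\pi J_{tz}}} \] (in units with \(\hbar = c = 1\)), which is why an observer with proper acceleration \(a\) detects the Unruh temperature \(T = \hbar a/(2\pi c k_B)\).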

There exist similar "problems and complications". I won't discuss others although there are several other very important examples. But the general lesson of this blog post should be: sometimes, the problem is on your side, you should improve your understanding of the "old theory". Some things that look "hard or strange" are here to stay because they are basically mathematical facts. You should learn them and you should be prepared for the answer that the "newer theory" as well as an arbitrarily superior hypothetical future theory will exactly reproduce the qualitative conclusions (and often the approximate quantitative conclusions) extracted from an approximate or "older theory".

And that's the memo.


P.S.: When you define the Lagrangian including the values of the coupling constants (in your scheme, which you may choose but which won't matter for the testable predictions) as well as your choice of the Hamiltonian (the generator of some variations of the spacetime coordinates), the Hilbert space becomes unambiguous. This ambiguity still exists in AQFT (the algebraic or axiomatic quantum field theory) but that is because the assumptions of AQFT are wrong, especially its naive, implicitly incorporated denial of the UV divergences and of the need to remove them by letting the bare fields and bare parameters in the simple Lagrangian contain divergent pieces. So this ambiguity predicted by Haag's theorem in AQFT is of the "UV type" because all theories that formally obey the naive axioms of AQFT are physically wrong and unusable.
