Monday, September 17, 2018

How path integrals mirror Feynman's personality traits

The two-day silence is mostly due to the September 19th bike trip, 100 kilometers starting in the mountains (Bohemian Forest), sorry.
Courage, playfulness, analogies, shut up and calculate (calculation instead of words), lots of calculations extending simple rules, numbers instead of philosophy, don't give up easily

The most successful theories of classical physics may be formulated in terms of the principle of least action. We may consider alternative histories \(x_i(t)\) where some observables \(x_i\) depend on time \(t\). The principle says that the action \(S\), which is a functional of the history (a collection of functions) \(x_i(t)\), is stationary – typically minimized – for the history that is actually realized according to the laws of physics:\[

\delta S [x_i(t)] = 0.

\] Paul Dirac was convinced that this elegant formalism of classical physics – based on the concept of the action – should have a correspondingly nice role in quantum mechanics. And he found a good guess: in quantum mechanics, one could perhaps calculate the probability amplitudes for the evolution from \(x_i(t_1)\) to \(x_i(t_2)\) from weights of the form \(\exp(iS/\hbar)\).
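
By the way, the classical principle itself is easy to check numerically. The sketch below (a generic toy, not anyone's canonical code; the step counts and learning rate are arbitrary choices) minimizes the time-sliced free-particle action by plain gradient descent and recovers the straight-line trajectory between the fixed endpoints:

```python
def minimize_free_action(x0=0.0, x1=1.0, T=1.0, n=19, steps=4000, lr=0.01):
    """Minimize the discretized free-particle action (mass m = 1)
    S = sum_k (x_{k+1} - x_k)^2 / (2*dt)
    over the n interior points by gradient descent."""
    dt = T / (n + 1)
    x = [x0] + [0.0] * n + [x1]          # initial guess: flat interior
    for _ in range(steps):
        for k in range(1, n + 1):
            # dS/dx_k = (2*x_k - x_{k-1} - x_{k+1}) / dt  (discrete equation of motion)
            grad = (2 * x[k] - x[k - 1] - x[k + 1]) / dt
            x[k] -= lr * grad
    return x

path = minimize_free_action()
# The minimizer should approach the straight line x(t) = t:
straight = [k / 20 for k in range(21)]
deviation = max(abs(a - b) for a, b in zip(path, straight))
```

The vanishing gradient is exactly the discrete Euler–Lagrange equation, i.e. \(\delta S = 0\).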

That was nice, and Dirac presented a basic argument for why the Lagrangian (whose integral over time gives the action) is related to the Hamiltonian, but he didn't do much with this idea. It looked too heuristic to him.

Richard Feynman did much more than that. He turned the path integrals into a big industry within quantum mechanics, calculated the amplitudes for some basic systems using the path integrals, and derived the Feynman diagrams from the path integrals applied to quantum field theories. He developed many tricks to actually calculate the path integrals – the Feynman parameterization, among many other things – and he later added some clever cherries on the cake such as the Faddeev–Popov ghosts (Feynman introduced them first, while quantizing general relativity using path integrals).

Path integrals became the key engine used by particle physicists to calculate things in quantum field theory as well as string theory.

It's easy to rationalize the history – hindsight makes it trivial – but I can't resist saying that there seem to be very good reasons why Feynman, and not anyone else (not even Dirac), was able to get this far. He had a comparative advantage because the personality traits that are apparently helpful for making progress with path integrals are highly correlated with Feynman's personality traits – traits that may be seen in many other places of his intellectual and even everyday life.

Let me explain this statement a little bit.

Feynman liked analogies – for example, he would explain mechanics using analogous electrical circuits. He realized that the action was a very powerful concept in classical physics, which is why it seemed intriguing to believe that it could be powerful in quantum mechanics, too. Dirac had sketched the only plausible way the action could enter. The particles or other physical systems "sniff" all trajectories or histories at once. This is a generalized version of a particle's going through "both slits" in Feynman's favorite "template" for all quantum mechanical experiments.

Now, "sniffing all possible [classical] answers" is something that is arguably Feynmanesque by itself. All possible answers are given a chance to influence the total result (this positive attitude was perhaps linked to his desire to "look at all things from many perspectives"); and one must only specify the rules of the competition or cooperation between these answers. Complex amplitudes are everywhere in quantum mechanical expressions, we want something that depends on the action, so there has to be an amplitude that depends on the action – and it has to be \(\exp(iS/\hbar)\). Note that Feynman also considered \(\exp(i\pi)+1=0\) to be the most beautiful equation of mathematics, so imaginary exponents were surely something he liked. If all of state-of-the-art physics might be reduced to straightforward calculations of sums of \(\exp(iS/\hbar)\) terms, that sounds pretty amazing and one should spend some energy on it, right?

OK, for these reasons, the path integral for the amplitudes\[

{\mathcal A}_{fi} = \int {\mathcal D}x(t)\,\exp(iS / \hbar)

\] with the appropriate initial and final conditions was probably more natural and attractive for Feynman than for 95% of the theoretical physicists of his time. But Dirac could have figured out this basic rule as well. Dirac stopped too early while Feynman continued. Why? Well, when Feynman picked up the idea, Dirac was around 40 years old. Maybe that's already too old. Maybe it isn't.

But Dirac already had lots of groundbreaking discoveries in the book of his achievements. It's natural to slow down. Moreover, these achievements had made Dirac a crucial theorist. On the other hand, all the extra insights about the path integral that Feynman found were applications of some basic ideas. In this sense, Feynman was an industrialist like Edison.

He liked to fix radios and play sophisticated games with simple enough toys. That's different from some other kids and physicists who prefer to play with complicated toys.

Now, the path integral \(\int {\mathcal D}x(t)\) is infinite-dimensional – which sounds complicated – but from the beginning, especially if the integrand is familiar enough, it's a "repeated application" of something that is simple enough, something that Feynman knew very well from the finite-dimensional integrals.

So although the infinite-dimensional character looks hard, the path integrals seem to be a game that has simple rules in principle. It's like fixing the radios all the time or picking the locks all the time. So the path integrals were simply a game that he simply had to like. You know, the space that is integrated over is flat. It's not curved and it doesn't have a terribly complicated topology. It's good. The first integrand that we insert – for the harmonic oscillator and the free field theory – is a Gaussian function (times a polynomial prefactor if we need to get some correlation functions). That's straightforward, too. Because it's possible to calculate the finite-dimensional Gaussian integrals, it should be possible to calculate the infinite-dimensional ones, too.
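
The finite-dimensional building block Feynman relied on is the rule \(\int e^{-x^T A x/2}\,d^n x = (2\pi)^{n/2}/\sqrt{\det A}\). A minimal brute-force check in two dimensions (the matrix \(A\) below is an arbitrary illustrative choice):

```python
import math

# Finite-dimensional Gaussian rule: ∫ exp(-x^T A x / 2) d²x = 2π / sqrt(det A)
A = [[2.0, 0.6],
     [0.6, 1.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
exact = 2 * math.pi / math.sqrt(det_A)

# Brute-force midpoint sum over a box large enough for the Gaussian tails
h, L = 0.05, 7.0
n = int(2 * L / h)
numeric = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    for j in range(n):
        y = -L + (j + 0.5) * h
        q = A[0][0] * x * x + 2 * A[0][1] * x * y + A[1][1] * y * y
        numeric += math.exp(-0.5 * q) * h * h
```

The path integral for a free field or a harmonic oscillator is morally the \(n\to\infty\) version of this same calculation.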

Some results for the amplitudes in the simple enough quantum mechanical systems already existed in the literature, and the path integral should be formally right, perhaps with some modifications – so there must exist a way to deal with the path integral that produces the correct results.

Now, much of Feynman's activity and excitement while dealing with the path integrals was driven by his "shut up and calculate" approach to thinking. Lots of people could try to dismiss the path integrals for many reasons – all of which were ultimately proven spurious:
  1. The exponential of the imaginary exponent has a constant absolute value – so unlike the Gaussian case, the support is non-compact and the integration over "regions at infinity" is bound to produce problems.
  2. The functional integral has a continuously infinite dimension so it's not even a limit of a finite-dimensional integral.
  3. The functional integral may be different from the integral when the time is sliced, so the slicing could fail to be helpful.
  4. There is no way to define the infinite-dimensional Riemann integral because there's no way to divide the integration region into small pieces.
  5. There is no way to define the Lebesgue integral because the regions of the integration space where the integrand has some value are extremely complicated and can't be assigned a measure.
  6. All the prefactors that appear in the partial calculation are singular – infinite or zero – in the continuum limit which makes the whole calculation indeterminate.
  7. Divergent expressions such as \(1+2+3+4+\dots \) appear even in the factors that are surely important, and those make finite results impossible.
  8. Other generically (UV) divergent integrals (over momenta) appear in the loop integrals and they make finite answers impossible.
  9. Long-distance, IR divergences appear, too.
And I could continue for quite some time. Most of these possible objections would have been raised very early on and most people would have lost the motivation to study the path integrals. If you take these counter-arguments seriously, path integrals look hopelessly ill-defined, useless, or inconsistent.

Well, they clearly didn't stop Feynman. He continued and the amazing success, universal applicability, and precision of the path integrals have proven that in the practical world, all these "lethal" arguments must be absolutely spurious. And be sure that they are.

Now, I would have never cared much about any of these counter-arguments – and more importantly, half a century earlier, Feynman didn't care about them, either. Again, there's something typically Feynmanesque in dismissing all these "lethal" arguments against the path integral. What is it?

All of these arguments against the path integral are "verbal" – and they are expressions of "qualitative, unverified dogmas" and "feelings". None of them is really a positive recipe to do some calculation correctly. And none of them is justified by any quantitative comparison of the theory and the observations. Or a consistency check within a theory, for that matter.

In other words, all these arguments are similar to the pompous question "What is the name of this bird?" Who cares about the name of the bird? Only moronic children do. What matters is how the bird works. Let's go through the arguments above.

First, the imaginary exponent makes the support of the integral non-compact. Well, that looks like a problem because the imaginary Gaussian is seemingly very different from the Gaussian. Except that Feynman immediately saw that it isn't. When the exponent of the imaginary Gaussian is very large and variable, the integrand oscillates quickly and averages out. A small rotation of the contour towards the real Gaussian adds a damping real factor that suppresses the regions at infinity, effectively making the support compact.

It means that the correct intuition is that the imaginary exponent in the Gaussian is actually as good for the well-definedness of the whole picture as the real, decreasing Gaussian factor. If you have trouble, some analytical continuation is bound to make the calculation harmless.
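
A quick numerical check of this intuition (the damping parameter \(\epsilon\) below is an arbitrary illustrative choice): with a small real Gaussian added, \(\int e^{ix^2}\,dx\) becomes perfectly tame and matches the analytically continued Gaussian formula \(\sqrt{\pi/(\epsilon - i)}\), which tends to the Fresnel value \(\sqrt{\pi}\,e^{i\pi/4}\) as \(\epsilon\to 0\):

```python
import cmath
import math

def damped_fresnel(eps, L=25.0, h=0.002):
    """Midpoint-rule estimate of ∫ exp((i - eps) x²) dx over [-L, L]."""
    n = int(2 * L / h)
    total = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * h
        total += cmath.exp((1j - eps) * x * x) * h
    return total

eps = 0.05
numeric = damped_fresnel(eps)
# ∫ exp(-a x²) dx = sqrt(π/a) with a = eps - i, by analytic continuation:
analytic = cmath.sqrt(math.pi / (eps - 1j))
# As eps → 0 this tends to sqrt(π) · exp(iπ/4), the Fresnel value
```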

Second, the continuously infinite dimension is not really a problem because the trajectories \(x_i(t)\) may be expanded e.g. into Fourier series and the path integral's dimension becomes countably infinite. This is an equivalence that Feynman was very aware of from his early encounters with quantum mechanics and Fourier series. Hilbert spaces often have continuous and discrete bases but they're the same Hilbert spaces. So the continuously infinite dimension won't be a bigger problem than the countably infinite dimension, the correct intuition says.
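
The equivalence may be checked directly on the level of the action: the kinetic action of a path computed by slicing the time axis agrees with the sum over countably many Fourier modes (the coefficients below are an arbitrary example):

```python
import math

T = 1.0
c = [0.3, -0.2, 0.1]    # coefficients of sin(nπt/T) for n = 1, 2, 3 (arbitrary example)

def x(t):
    """A path obeying x(0) = x(T) = 0, built from three Fourier modes."""
    return sum(cn * math.sin((n + 1) * math.pi * t / T) for n, cn in enumerate(c))

# Kinetic action ∫ (dx/dt)²/2 dt from the finely sampled "continuous" path ...
N = 20000
dt = T / N
direct = sum(((x((k + 1) * dt) - x(k * dt)) / dt) ** 2 / 2 * dt for k in range(N))

# ... and from the countable mode coefficients alone: Σ c_n² (nπ/T)² T/4
modes = sum(cn ** 2 * ((n + 1) * math.pi / T) ** 2 * T / 4 for n, cn in enumerate(c))
```

Both descriptions carry the same information, just in different bases.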

Third, it was said that the time slicing could fail to give the right thing in the limit. Why don't we just try to do the time slicing instead of giving it up without evidence? Feynman clearly tried to time-slice the path integral and it made some calculations very explicit. To produce finite results, one has to adjust the overall coefficient at the very end, but that's it.

The statement that there is some problem that prevents one from defining the path integral using the time-slicing is really an unverified assumption. Feynman's attitude would be to verify, i.e., to calculate as much as possible, to get as far as possible.
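
Time slicing is easily tested in imaginary time, where the free-particle kernel is a real Gaussian (a Wick-rotated toy chosen here purely for numerical convenience, with \(m=\hbar=1\)): composing two sliced kernels over an intermediate point must reproduce the kernel for the full time interval, and it does:

```python
import math

def heat_kernel(x, y, tau):
    """Euclidean (imaginary-time) free-particle kernel, m = ħ = 1."""
    return math.exp(-(x - y) ** 2 / (2 * tau)) / math.sqrt(2 * math.pi * tau)

# Slicing the time interval: integrating over the intermediate position y
# must reproduce the kernel for the total time t1 + t2.
x0, x2 = 0.0, 0.7
t1, t2 = 0.4, 0.6
h, L = 0.01, 12.0
sliced = sum(heat_kernel(x0, -L + (k + 0.5) * h, t1) *
             heat_kernel(-L + (k + 0.5) * h, x2, t2) * h
             for k in range(int(2 * L / h)))
direct = heat_kernel(x0, x2, t1 + t2)
```

Iterating this composition many times is exactly the time-sliced path integral.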

Fourth and fifth, there is no Riemann or Lebesgue approach to define the integrals. The regions in the integration space are too complex – they're infinite-dimensional manifolds – and the measure theory is absent, too. But does it really matter? It matters for the kids of the type "What is the name of this bird?" who like to parrot things. One statement that is often parroted is that an integral should be a Riemann integral or a Lebesgue integral.

But this assertion is just another physically unjustified – and unjustifiable – verbal dogma. In physics, results are often given by integrals and Mother Nature doesn't tell us that they should be Riemann or Lebesgue integrals – she doesn't say and cannot say! Instead, what matters is that these integrals obey certain mathematical identities. You can write the product of integrals over \(M\) and over \(N\) as an integral of the product of functions over the Cartesian product \(M\times N\). The integrals are additive in the integrand. You may integrate by parts, use substitutions, etc.

You know, the point is that all these rules that are actually helpful to extract some particular numerical answers keep on working when the dimension of the integral becomes infinite. And it's these rules, not philosophical dogmas such as "integrals should be described according to Riemann or Lebesgue", which are physically important. In physics, our goal isn't to worship Riemann or Lebesgue and the axioms they prescribe to everyone. Our goal is to calculate integrals – and that job has much less to do with Riemann and Lebesgue. The claim that Riemann or Lebesgue integrals, as opposed to integrals that just obey the usual rules of integrals, should be "useful for physics" is another unverified, qualitative dogma – and I would argue that given the success of Feynman's path integrals, this dogma has been proven incorrect.

Sixth, the prefactors aren't really a problem. There is clearly a correct way to normalize the path integrals that produces the unitary matrix of evolution (obeying \(UU^\dagger = 1\) with the correct, true, properly normalized identity on the right hand side). If some prefactor is needed to fix the normalization, we may simply add it multiplicatively. At the end, this universal prefactor in a path integral is physically irrelevant. We're interested in the quantities that aren't universal, that depend on the initial and final conditions, \(x_i(t_1),x_i(t_2)\), or other variables related to the results of actual measurements.

So just don't be afraid of the overall normalization at all. The potentially problematic overall prefactor cancels in physically interesting quantities.
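
A concrete example of the cancellation, sketched in imaginary time: on a lattice with \(N\) slices, the determinants of the discretized operators \(-d^2/d\tau^2+\omega^2\) and \(-d^2/d\tau^2\) – the objects that control the harmonic-oscillator and free-particle prefactors – both blow up as \(N\to\infty\), but their ratio converges to the finite Gelfand–Yaglom value \(\sinh(\omega T)/(\omega T)\):

```python
import math

def lattice_det(omega, T, N):
    """Determinant of the N x N discretization of -d²/dτ² + ω² (Dirichlet),
    i.e. the tridiagonal matrix with diagonal 2 + ε²ω² and off-diagonals -1,
    up to a power of the lattice spacing ε = T/(N+1)."""
    eps = T / (N + 1)
    a = 2 + (eps * omega) ** 2
    d_prev, d = 1.0, a               # determinant recursion d_k = a d_{k-1} - d_{k-2}
    for _ in range(N - 1):
        d_prev, d = d, a * d - d_prev
    return d

T, omega = 1.0, 2.0
# Each determinant alone diverges with N (e.g. lattice_det(0, T, N) equals N + 1),
# but the ratio converges to the finite Gelfand-Yaglom value sinh(ωT)/(ωT):
ratio = lattice_det(omega, T, 1000) / lattice_det(0.0, T, 1000)
target = math.sinh(omega * T) / (omega * T)
```

The singular, \(N\)-dependent factors drop out of the ratio, just like the overall normalization drops out of physical amplitudes.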

Seventh, naively divergent sums such as the sum of positive integers. Well, such things may appear in the exponent of some factor that describes the value of the path integral. Are they a problem? They cannot be a problem because for some of the simplest cases such as the harmonic oscillator, the correct amplitudes were computed differently, using the operator formalism. In this sense, if some field theory between metallic plates etc. (the Casimir effect) requires us to calculate the sum which results from the path integral, we may perhaps set\[

1+2+3+4+5 + \dots = -\frac{1}{12}

\] as an identity obtained from "assuming that the path integral works" and "assuming the correct result calculated otherwise". When we substitute this identity into different path integrals, we may verify whether it keeps on working. And it does. All the consistency checks pass. That's evidence that the sum of positive integers should be considered equal to \(-1/12\) when it appears during the evaluation of a path integral.
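
One way to see the finite part hiding inside the divergent sum without heavy machinery is the exponential regulator \(\sum_n n\,e^{-n\epsilon}\) (a standard smoothing trick, not specifically Feynman's): the divergent piece is a clean \(1/\epsilon^2\), and what remains converges to \(-1/12\):

```python
import math

def regulated_sum(eps):
    """Σ n·e^{-nε}, an exponentially smoothed version of 1 + 2 + 3 + ...
    (equal to e^{-ε}/(1 - e^{-ε})² = 1/ε² - 1/12 + O(ε²) in closed form)."""
    nmax = int(60 / eps)             # terms beyond this are utterly negligible
    return sum(n * math.exp(-n * eps) for n in range(1, nmax + 1))

# Subtracting the divergent 1/ε² piece exposes the universal finite part:
for eps in (0.2, 0.1, 0.02):
    finite_part = regulated_sum(eps) - 1 / eps ** 2
    # finite_part approaches -1/12 ≈ -0.0833 as ε → 0
```

The divergent piece depends on the regulator; the \(-1/12\) does not, which is why it's the physically meaningful part.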

The narrow-minded lovers of bird names could protest. It can't be \(-1/12\) because it's a "positive integral infinity", a divergent expression, not a "negative fractional finite constant". And this statement may be justified by defining the sum as a limit\[

1+2+\dots = \sum_{n=1}^{\infty} n = \lim_{K\to\infty} \sum_{n=1}^K n.

\] Great. But if you think about it, this rewriting of the sum as a limit is just another physically unverified dogma. There is no reason why the sums that appear in the path integrals should be defined in terms of these limits. The situation is completely analogous to the discussion of the Riemann and Lebesgue integrals above. In fact, it's not just an epistemic analogy. The present case is a continuation of the previous controversy. You could have defined the infinite-dimensional integrals so that their results are defined as limits of some finite-dimensional integrals over the frequency modes, and that could tell you that the sum of integers should be interpreted as the limit above, too. But it's not really clear that the infinite-dimensional integral should be defined as such a limit.

The right way to approach "seemingly problematic" integrals in natural science doesn't necessarily require us to use the Riemann and Lebesgue integrals and/or particular ways to extend them to an infinite-dimensional integration space. And similarly, the right way to approach "seemingly divergent" sums isn't through the limit of partial sums. Instead, it's the algebraic properties that should dominate. The analytical continuation must be allowed at every step if it is useful. And "infinity" is clearly not the correct result. Saying that an intermediate result of some calculation is "infinity" is equivalent to "E" on a calculator and it kills the whole calculation. You must simply avoid such an assertion because it's equivalent to "let's give up calculating anything".

Feynman never gave up on the calculations. He knew that the physically correct result isn't infinite. So if an infinity appears at some point of a calculation employing the formally correct path integrals, what is wrong is the way we calculate these expressions, not the theory! In particular, the intermediate "infinity" is just a sloppy excuse not to care about the detailed form of the infinity. Some terms or factors within the "infinite number" still matter – you simply can't forget about them. Forgetting about some numbers that clearly do reflect the dependence on the question or the initial or final conditions amounts to killing the calculation. A set theorist may be happy with a final answer "aleph zero" to a complex question but a physicist must never do that. For a physicist, it's clearly the finite parts that must matter – while the infinite parts or factors are spurious and may be eliminated and/or ignored. The infinity is "numerically greater" than a finite number (also in the sense of ordinals and cardinals) but it is much less important in natural sciences because the infinity, like "E", doesn't carry any detailed verifiable information about the physical phenomena!

In physics, the finite David always trumps the infinite Goliath – because that David is smarter and empirically verifiable.

Eighth, the UV divergences and renormalization. Feynman was one of the first people who encountered ultraviolet divergences in loop (Feynman) diagrams. Again, that could be used as an argument that the path integral doesn't work at all. Or that it only works up to the tree level and breaks at the one-loop level. Indeed, Paul Dirac – who just hated renormalization – chose one of these answers. The path integral would really be almost useless if the loop diagrams were forbidden. But Feynman knew better. These one-loop and multi-loop diagrams can't be infinite because all the observations are finite. And they can't be zero because the equation \(UU^\dagger = 1\) is nonlinear in \(\hbar\). So corrections simply have to arise at every order to restore unitarity.

That's why the crippling truly infinite terms in the one-loop and multi-loop Feynman diagrams simply cannot be there. They must cancel, and if they don't cancel immediately, we must cancel them by carefully adjusting the bare couplings (to appropriately infinite values) or, equivalently, by adding counterterms.
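
The logic of the cancellation can be caricatured in a few lines (a deliberately trivial toy model with a log-divergent "loop", not any particular field theory): the bare coupling is sent to minus infinity along with the cutoff, yet the physical prediction stays finite and cutoff-independent:

```python
import math

def loop_integral(m, cutoff):
    """A toy log-divergent 'loop': ∫_m^Λ dk/k = ln(Λ/m), divergent as Λ → ∞."""
    return math.log(cutoff / m)

def bare_coupling(g_phys, mu, cutoff):
    """Counterterm prescription: the bare coupling absorbs the divergence,
    fixed so that the amplitude at the reference scale mu equals g_phys."""
    return g_phys - loop_integral(mu, cutoff)

def amplitude(m, g_phys, mu, cutoff):
    """Bare coupling plus the loop: the Λ-dependence cancels, leaving
    the finite answer g_phys + ln(mu/m)."""
    return bare_coupling(g_phys, mu, cutoff) + loop_integral(m, cutoff)

# The physical prediction is independent of the cutoff:
values = [amplitude(m=1.0, g_phys=0.5, mu=2.0, cutoff=c) for c in (1e3, 1e6, 1e12)]
```

Real renormalization is vastly more intricate, but the bookkeeping – divergent bare parameters chosen to cancel divergent loops – is the same.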

At this level, a man like Feynman was already an experienced "cheater" according to the narrow-minded mathematicians. One routinely works with automatic analytic continuations and singular factors that cancel, so why shouldn't one allow a singular value of the bare coupling constant whose infinite part is chosen to cancel some loop diagrams?

All these things may be called – and have been called – illegal, black magic by narrow-minded mathematicians. But again the point of physics is to find the theories that match the observations of Nature. The point of physics isn't to defend some dogmas that integrals have to be defined according to Riemann or Lebesgue; all infinite sums have to be calculated as a limit of partial sums; all parameters in a theory have to be finite in the physical limit; all intermediate results have to be finite and non-singular.

By looking at the calculations we do in physical theories, we have strong evidence that Nature violates all these assumptions. The relevant integrals and sums are similar objects to those that are well-behaved according to the normal definitions. But what they really share are the allowed algebraic operations, not the precise way to define all the values that normally require some limits.

Ninth, infrared divergences. They cannot be removed by any counterterms. But that's actually a good thing because if we think about the physical phenomena, we realize that it's quite correct for the theory to predict that certain quantities are singular. To get finite answers, we must fix the question, which was incorrect – the theory doesn't need any fixing. In particular, we must realize that the probability of a microscopic process that produces strictly zero photons is naturally zero, precisely zero, because every process with an accelerating electromagnetic charge produces infinitely many very soft photons (which can carry an arbitrarily small total energy). When we fix the question and ask about the probability of an "inclusive" process where some soft photons up to some energy are allowed, the answer becomes finite and nonzero and the path integral calculation matches the observed value.

A key point is that "the right way to proceed in physics" has violated most of the detailed, narrow-minded mathematicians' assumptions about how the objects such as sums and integrals should be defined and evaluated in the situations that look non-trivial or problematic. At the end, when you develop all the machinery to calculate the values of path integrals, you may build a new rigorous axiomatic framework for how to proceed.

The rigorous framework of a smart humanities researcher studying Feynman's behavior as if Feynman were an animal

You know, in this framework, all the motivation using the imagined infinite-dimensional "integrals" may be completely hidden. Instead, the rigorous definition may describe an algorithm that exactly tells you which characters are written on paper (to emulate what Feynman is doing during the calculation) and how to manipulate them. Only the finite results – the amplitudes – have a physical meaning. Mathematical theorems may be proven that e.g. the S-matrix calculated by Feynman's algorithm will be unitary and will have other desired properties, too.

In this rigorous definition of the path integral that is surely possible, the symbol \(\infty\) may exist, e.g. as the upper bound in some objects that Feynman calls sums or integrals. But in this rigorous definition of Feynman's calculations, all the lethal character and "infinite awe" of the symbol \(\infty\) is just removed. It's just another symbol that may appear in Feynman's calculation and the calculation doesn't stop when it's there. Instead, the rigorous rules tell us what Feynman does when he writes the symbol somewhere.

Yes, I am proposing a meta-Feynman rigorous definition of the path integrals that simply looks "how Feynman works" and describes the operations rigorously. When you approach the machinery in this way, the whole infinite-dimensional space over which we integrate becomes "invisible". You know, Feynman was imagining such a space – as being analogous to the finite-dimensional space – and it has allowed him to do the calculation. But you don't really need to imagine such an infinite-dimensional space at all.

The whole infinite-dimensional space and the word "integral" may be dismissed as nothing else than a heuristic motivation that allowed Feynman to calculate the amplitudes efficiently – but it's only the final rules that matter. If someone finds a "functional integral" psychologically irritating, she can replace it with a completely different, more pleasing term, such as wakalixes. It will be a part of Feynman's calculation whose operations are prescribed mechanically. For the well-defined quantum field theories and/or formulations of string theory, it will work.

You may remove all the "heuristic picture" – the analogies between finite-dimensional and infinite-dimensional integrals; the analytically continued results of seemingly divergent sums and integrals; the limits that require you to add divergent terms to the parameters at the same moment, etc. – as some dull parts of the algorithm that no one knows where they came from. You may claim that there is never really any infinite-dimensional integral anywhere, that bare couplings are never infinite, either, and that analytically continued expressions are never said to be equal to each other.

When you remove all these things, you may still be capable of learning a Feynman-inspired calculation of the amplitudes. (So the humanities researcher is pretty good – as a good enough student of physics.) But without such "psychologically irritating" propositions (for a narrow-minded, dogmatic mathematician), you would be extremely far from being actually capable of discovering anything important in modern theoretical physics – and even from truly understanding why it works or believing that such a complex sequence of steps inspired by some heuristic ideas you want to overlook agrees with the data. Such analogies and such irritating names and methods for the objects – sums, integrals, and infinite-dimensional integrals – are absolutely needed to combine the steps in the algorithm in a viable way. And even for a person who isn't the original discoverer, many of these "heuristic analogies" are needed for him to understand the procedures – and to learn them in a way that is more than just a case of parroting.

Because the results agree with the observations, and they do so very accurately, you had better admit that there is a reality behind all these mathematical imaginations. There is an infinite-dimensional integral (although, strictly speaking, not a Riemann, Lebesgue, or similarly defined integral) that calculates all the complex probability amplitudes in quantum mechanical theories (those that have a classical limit). Feynman approached the situation in this way, dismissed the problems – which most others could consider fatal – as mere technicalities that can be cured by minor localized fixes, and decided to believe the theory and apply it as if he were a "practical man" or an "engineer". And that's why he could get this far.

It is very clear that good theoretical physicists prefer to emulate him. And his kind of courage, playfulness, and patience must often be placed on steroids – 21st century theoretical physicists must sometimes be more Feynmanesque than Feynman to make progress.

And that's the memo.


