## Thursday, June 21, 2012

### Why Feynman's path integral doesn't contradict the uncertainty principle

I just decided to write a short (but not too short) text elaborating on the April 2011 Physics Stack Exchange question by Jane.

Are the Lagrangians in the Feynman path integrals operators? And if they're not, why doesn't it conflict with the basic fact of quantum mechanics that observables are represented by noncommuting objects, or $q$-numbers, to use Dirac's terminology, rather than $c$-numbers?

These questions and several related ones are what I want to be clarified by this text although no one can guarantee that it will.

The answer to the first question is that all the variables we integrate over are $c$-numbers: observables are represented by numbers, the same ones as the numbers in classical physics. And the Lagrangian that appears in the exponent is a $c$-number-valued function of these $c$-number variables; it is not a genuine operator here, despite its similarities with the Hamiltonian. This fact is OK because we don't really use these quantities to directly make statements about the world, at least not about the final state. Feynman's approach to quantum mechanics never says that $x$ in the final state is equal to a particular value.

Instead, the values $x(t)$ at all intermediate moments between the initial ($i$) and final ($f$) ones are dummy variables we integrate over. We shut up and integrate over them in order to calculate different quantities, the probability amplitudes, which may be easily converted to probabilities that the final state will have various properties.

So if we consider such a probability amplitude ${\mathcal A}_{i\to f} = \int {\mathcal D} x(t)\cdot \exp(iS[x(t)]/\hbar)$ where the integral goes over all continuous (but not necessarily smooth – this will be our big point later) curves $x(t)$ connecting $x(t_i)=x_i$ and $x(t_f)=x_f$, we are probing all conceivable histories but the result of the calculation is ${\mathcal A}_{i\to f}$ rather than $x(t_f)$.

I decided to preserve the undergraduate reduced Planck constant $\hbar=h/2\pi$ in this entry; all the comments may be translated to the $\hbar=1$ units of mature physicists by simply erasing $\hbar$ everywhere.

It may be proved that the complex probability amplitudes resulting from Feynman's path integral are the same as those you may obtain by solving Schrödinger's equation with the initial wave function at $t=t_i$ given by $\psi_i(x) = \delta(x - x_i)$. The proof is usually explained in introductions to the path integral. Feynman's integral formula for the evolution over a finite period of time may be split into shorter intervals; the integral respects the "transitivity" of the evolution operator. So it's enough to prove the equivalence for the evolution over very short intervals of time, and for infinitesimal intervals the equivalence may be proved rather easily.
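This "transitivity" can be checked numerically. The sketch below is my own illustration, not from the text: it uses the Euclidean (imaginary-time) version of the free-particle kernel, where the integrals converge nicely, and the function name `kernel` and the choice $\hbar=m=1$ are my assumptions. Composing two kernels for a time step $\tau$ reproduces the single kernel for $2\tau$.

```python
import numpy as np

# Euclidean (imaginary-time) free-particle kernel with hbar = m = 1; the
# heat-kernel version avoids the oscillatory integrals of the real-time one.
# K_tau(x', x) = exp(-(x' - x)^2 / (2*tau)) / sqrt(2*pi*tau)
def kernel(xp, x, tau):
    return np.exp(-(xp - x)**2 / (2.0*tau)) / np.sqrt(2.0*np.pi*tau)

tau = 0.3
x0, xf = -0.4, 0.7
y = np.linspace(-25.0, 25.0, 200_001)   # intermediate position integrated over
dy = y[1] - y[0]

# Compose two short-time kernels by integrating over the midpoint y ...
composed = np.sum(kernel(xf, y, tau) * kernel(y, x0, tau)) * dy
# ... and compare with the single kernel for the doubled time interval.
direct = kernel(xf, x0, 2.0*tau)

print(composed, direct)   # the two numbers agree: the evolution is "transitive"
```

The same composition property holds for every short-time slice, which is why proving the equivalence with Schrödinger's equation for one infinitesimal step suffices.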

This point is presented in the basic texts so I won't dedicate too much time to it here. However, what is not often explained is why this path integral approach doesn't contradict the commutator behind the Heisenberg uncertainty principle, $x(t)p(t)-p(t)x(t)=i\hbar$. Imagine that $x(t),\,p(t)$ are the values of the position and momentum somewhere in the middle of the interval whose evolution we study. Because both $x(t)$ and $p(t)$ are represented by ordinary $c$-numbers in Feynman's path integral expression, isn't it inevitable that they commute with one another, i.e. $xp=px$? This would contradict the uncertainty principle.

The answer is that it is not guaranteed but the way in which Feynman's path integral formula avoids the trap is kind of clever and subtle; it has something to do with the short-time (and short-distance, in quantum field theory) behavior of the trajectories that contribute to the path integral. The fact that the short-time behavior is "unusual" and the fact that it may lead to conclusions we wouldn't expect in the classical reasoning may be viewed as a "demo" of tons of similar short-distance subtleties that quantum field theory, a more realistic subset of quantum theories, is literally full of.

OK, so why does Feynman's expression agree with the usual Heisenberg's nonzero commutator?

There are two types of Feynman's path integral for mechanics. One of them integrates over the phase space; one of them integrates over the configuration space. In plain mathematical English, the former is $\int{\mathcal D}x\,{\mathcal D}p$ while the latter is just $\int{\mathcal D}x$. The relationship between them is pretty much straightforward: we may simply integrate over $p(t)$ first and eliminate it: such a step transforms any phase-space path integral to a configuration-space one.

When we discretize the time to "visualize" the functional integral as a finite-dimensional one (and I invite you to interpret this procedure merely as a way to present the smooth objects in Nature – who has no problems with functional integrals etc. – to us humans who prefer finite-dimensional integrals because we're just stupid animals, especially some of us, and because many of us are lazy and want to use computers that really don't want to compute infinite-dimensional integrals "mechanically"), we reduce the trajectory $x(t)$, defined as a function of a continuous $t$, to a finite collection of numbers $x(t_i), \, x(t_i+\epsilon), \, x(t_i+2\epsilon),\,\dots\,, x(t_f)$. We divided the interval from $t_i$ to $t_f$ into many short intervals whose duration is $\epsilon$. Let me just remind you that in this treatment, the natural phase-space path integral uses momenta in the middle of the short intervals, $p(t_i+\frac\epsilon 2), \, p(t_i+\frac{3\epsilon}{2}), \,\dots,\, p(t_f-\frac{\epsilon}{2})$. This pattern is natural because the momentum $p(t_i+\epsilon/2)$ is related to the velocity $v(t_i+\epsilon/2)$ which may be approximated by $(x(t_i+\epsilon)-x(t_i))/\epsilon$, and the latter expression uses the values of $x(t)$ at two nearby moments: one of them naturally comes "before" the argument of $v(t_i+\epsilon/2)$, the other one symmetrically "after" that moment.
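As a small illustration of this bookkeeping (the variable names and the smooth test trajectory are mine, not from the text), here is the discretization of $x(t)$ into position samples $x(t_i+n\epsilon)$ and the midpoint velocities $(x(t+\epsilon)-x(t))/\epsilon$ that live between them:

```python
import numpy as np

# Discretize a trajectory x(t) on [t_i, t_f] into N+1 samples and form the
# midpoint velocities v(t + eps/2) = (x(t+eps) - x(t)) / eps, which sit
# "between" the position samples as described above.
t_i, t_f, N = 0.0, 1.0, 1000
eps = (t_f - t_i) / N
t = np.linspace(t_i, t_f, N + 1)

x = np.sin(2*np.pi*t)            # any smooth test trajectory
v_mid = np.diff(x) / eps         # velocities at t_i + eps/2, t_i + 3*eps/2, ...
t_mid = t[:-1] + eps/2

# For a smooth path, the midpoint difference quotient is very accurate:
print(np.max(np.abs(v_mid - 2*np.pi*np.cos(2*np.pi*t_mid))))   # tiny error
```

For smooth paths this is all unremarkable; the point of the rest of the article is that the paths dominating the path integral are not smooth, so `v_mid` has no finite continuum limit there.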

Although the phase-space path integral could be employed to produce a more general path-integral proof of $xp-px=i\hbar$, one that would work for complicated Lagrangians mixing positions and velocities in pretty general ways, it is probably more pedagogical to explain where the nonzero commutator comes from in the configuration-space path integral. After all, the configuration-space path integral – where we only integrate over the coordinates but not the momenta – is the version of the path integral that becomes natural and preferred in quantum field theory, the framework where Feynman's approach becomes really useful.

A reason to prefer the configuration-space path integral is that the counterparts of $x(t)$, namely $\phi(t,x,y,z)$ if I mention a Klein-Gordon field, transform covariantly under Lorentz transformations (which only transform the arguments $(t,x,y,z)$) while the momenta or the velocity $\partial\phi/\partial t$ would have to pick a preferred reference frame and its preferred coordinate $t$, thus making the Lorentz invariance less obvious. In other words, the phase-space path integrals are close to the Hamiltonian formalism which is less natural for Lorentz-invariant theories in which the Hamiltonian (energy) is just one component or projection of the energy-momentum vector.

Getting started with the calculation

Because of these disclaimers, we finally want to solve the following problem: calculate the nonzero Heisenberg commutator from the configuration-space Feynman path integral. I will restrict the proof to "ordinary" actions having the form $S=\int\dd t\,L(t)$ with $L(t)=\frac{mv(t)^2}{2} - U[x(t)]$, where the energy is separated into the kinetic and potential terms, the kinetic energy has the usual form, and the potential energy only depends on $x$. What we want to compute is the counterpart of the operator $x(t)p(t)-p(t)x(t)$ for some moment $t$ inside the interval whose evolution operator is considered.

The ordering matters for operators but it seems not to matter for the Feynman integration variables. So shouldn't the commutator be zero? No. We must appreciate what it means for the operator $p(t)$ to be on the right side of $x(t)$ in the product $x(t)p(t)$, for example. In the operator language, it means that it acts on the ket vector $\ket\psi$ before $x(t)$ does. Let's only consider ket vectors.

Off-topic, soccer: Czechia is playing the Euro quarterfinal against Portugal tonight. We must show the door to Ronaldo and similar boys. Of course, we can do that. The video above is from the (similar?) Czech-Portuguese quarterfinal in 1996: we won 1-0 thanks to Karel Poborský's astronautic goal (see above). In the end, we played the final where we lost to Germany. A New York Times blog previews the match tonight; the author is so impressed by our guys that he completely forgets about the Portuguese! :-) Between 1930 and 1989, Czechoslovakia won 3 times, lost 3 times, and tied 3 times with Portugal.

This word "before" may look like a mathematical abstraction that has nothing to do with "before" that encodes the ordering in the actual time $t$. The ordering of some mathematicians' steps in time – the mathematician is computing a matrix product – may seem to be completely independent from the chronology of events and positions $x(t)$ that define a trajectory. However, this ordering is actually the very same one. An operator acting on the ket vector is a sort of an event in quantum mechanics.

(This is one of the ways to see that a logical arrow of time is intrinsically imprinted into the basic framework of quantum mechanics. The past and the future don't play the same role in a quantum theory for the same reason why it matters whether one operator/event is closer to a ket vector or further from it. Microscopic laws and/or Schrödinger's equation may have a time-reversal or CPT symmetry but this fact doesn't imply that the actual events and the physical relationships between them will respect the symmetry. They don't as thermodynamics makes flagrantly obvious.)

So the commutator $x(t)p(t)-p(t)x(t)$ in quantum mechanics must really be replaced by $x(t+\frac\epsilon 2)\, p(t) - p(t)\, x(t-\frac\epsilon 2) = \dots$ where the magnitude of the infinitesimal $\epsilon$ shouldn't matter but it should be a positive infinitesimal number. Conveniently enough, we may choose the size of this $\epsilon$ to exactly agree with the spacing between the moments into which we discretized the trajectory. In this way, the calculation will only involve a one-dimensional integral but believe me that if you tried to compute it with a different $\epsilon$, you would obtain the same resulting commutator. The nonzero commutator only boils down to a single discontinuity. Also, I chose the (infinitesimal and therefore irrelevant) shifts of the arguments (time) in such a way that both terms of the commutator included the same $p(t)$.

The most recent displayed formula may obviously be written as $\dots = p(t)\cdot [ x(t+\frac\epsilon 2) - x(t-\frac\epsilon 2)] \approx p(t) \cdot \epsilon v(t) = \dots$ where I approximated the difference of the two values of $x$ by the derivative (the velocity) multiplied by the separation of the times. At any rate, if we use the configuration-space variables, the expression for $[x,p]$ is simply $\dots = \epsilon\cdot mv(t)^2$. We want to calculate what the value of this expression "is" for an infinitesimal positive $\epsilon$ according to the Feynman path integral. The way the path integral answers such questions is that it tells us to insert this expression as a factor into the path integral. If the path integral gets multiplied by a certain resulting factor for every history (one that doesn't try to squeeze additional factors/events/measurements into the moment $t$), the inserted factor should be declared equal to that resulting factor.

Note that naively, $m,v$ are finite so when we multiply it by $\epsilon$ and send the latter to zero, we should get zero. But we won't because Nature isn't naive.

Fine. Our next step has to be as follows: instead of considering the original path integral (which may already have some insertions, hopefully at mostly different moments), we consider a path integral with an extra insertion (the potential energy is called $U(x)$ and not $V(x)$, to reserve the letter $V$ in all forms for velocities – thanks to Bill Zajc for this correction and several others): $\int{\mathcal D}x(t) \,\exp\left(\int \dd t\,[\frac{imv^2}{2\hbar}-\frac{iU(x)}{\hbar}]\right)\,\cdots\, \epsilon mv^2$. In the discretized trajectories, we may consider the integral over $x(t-\epsilon/2)$ and $x(t+\epsilon/2)$: assume that these are among the allowed "lattice sites" in the discretized time. The integral over the "earlier" $x(t-\epsilon/2)$ may be kept and the integral over the later one may be replaced by $\int \dd v(t)$ with the right factors (which cancel and may therefore be ignored because the commutator we want is the ratio of the path integral with the extra insertion to the one without it).

What we're inserting only depends on $v(t)$ so the path-integral version of the value of $x(t)p(t)-p(t)x(t)$, which was converted to $\epsilon mv(t)^2$, is simply $"[x,p]" = \frac{\int \dd v(t) \exp(\frac{i\epsilon mv^2}{2\hbar}) \cdot \epsilon mv(t)^2} {\int \dd v(t) \exp(\frac{i\epsilon mv^2}{2\hbar})}=\dots$ The Gaussian is the exponential of $i\epsilon mv^2/2\hbar$. Complex calculus allows you not to worry about the fact that it is actually a phase. You may see that if the $i$ were absent, the width of the Gaussian, determined by $\epsilon mv^2/\hbar\sim 1$, would be of order $\Delta v\sim \sqrt{\hbar/m\epsilon}$. It's important to realize that if $\epsilon\to 0$, the width goes like $\epsilon^{-1/2}$ and diverges in the limit.

At any rate, you see that up to factors such as $2,i,\hbar$, the insertion is exactly what appears as the exponent in the exponential we inherited from the path integral i.e. from the action. With the simple substitution $V=\sqrt{m\epsilon/\hbar}\cdot v$, the integral above becomes $\frac{\int \dd V \,\exp(iV^2/2)\cdot \hbar V^2}{\int \dd V\, \exp (iV^2/2)}$. Well, it's annoying that the integrand has a constant absolute value – it is a phase – but you may define it by another natural limit which adds a modest true Gaussian suppression at infinity. I don't want to justify all these things because their legitimacy and "rightness" really boil down to one's proper intuition about how physics works – it works in such a way that many things are analytic, and if they seem ill-defined but may be fixed by an analytic continuation, one should definitely do it – and if you don't see this point, you're just missing some innate aptitudes for theoretical physics and I won't be able to convince you about them, anyway.

Instead, let me assume that the reader has no problem with such continuations etc. The right way to compute the ratio – which is mathematically the same task as one with ordinary real Gaussians – is to define another variable $W$ such that $iV^2 = -W^2$, i.e. $V^2 = iW^2$, so that we get a nice Gaussian. The Jacobian $\dd W/\dd V$ from the substitution cancels in the ratio we calculate, so the $xp-px$ commutator we wanted to compute finally boils down to $\frac{\int \dd W \,\exp(-W^2/2)\cdot i\hbar \cdot W^2}{\int \dd W\, \exp (-W^2/2)}$. The integrals go from $-\infty$ to $+\infty$. But up to the universal factor $i\hbar$, this is nothing else than the expectation value of $W^2$ for a normal distribution with a unit standard deviation, i.e. the $\exp(-W^2/2)$ distribution. And this expectation value is one – feel free to calculate the integral by your favorite tools. So we have just calculated in the Feynmanian way that $xp-px = i\hbar$. It may be useful to step back a little and see how it was possible for us to obtain such a "paradoxical" result. The reason is that despite the overall factor of $\epsilon$ in the formula $\Delta x(t) \sim \epsilon v(t)$, we got a nonzero result because the commutator boiled down to the expectation value of $v(t)^2$ and $v(t)$ was normal-distributed with a width that became infinite as the spacing of time, $\epsilon$, was sent to zero.
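The whole computation may also be repeated numerically. The sketch below is my own check, not from the text: the small real regulator `delta` plays the role of the "modest true Gaussian suppression" mentioned above, $\hbar=m=1$, and the function name is mine. The regulated ratio approaches $i\hbar$ regardless of the value of $\epsilon$:

```python
import numpy as np

hbar, m = 1.0, 1.0

def path_integral_commutator(eps, delta=1e-3):
    """Regulated value of <eps*m*v^2> under the weight exp(i*eps*m*v^2/(2*hbar)).

    The small real delta damps the Fresnel oscillations at large v; sending
    delta -> 0 implements the analytic continuation described in the text.
    """
    v = np.linspace(-100.0, 100.0, 200_001)
    weight = np.exp((1j*eps*m/(2.0*hbar) - delta) * v**2)
    num = np.sum(weight * eps*m*v**2)   # path integral with the insertion
    den = np.sum(weight)                # path integral without the insertion
    return num / den                    # grid spacing cancels in the ratio

for eps in (0.1, 0.2):
    print(eps, path_integral_commutator(eps))   # approximately i*hbar for each eps
```

Note that the naive classical expectation would make this quantity vanish like $\epsilon$; it doesn't, because the width of the velocity distribution grows like $1/\sqrt{\epsilon}$, exactly compensating the explicit factor of $\epsilon$.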

Paths must be unsmooth

This "infinite width" of the distribution for the velocity (as it implicitly appears in the path integral) is really the point. This is the place that stores the weapons that make the surprising result (the nonzero commutator even though we superficially deal with $c$-numbers all the time) possible and true. I must repeat this sentence once again because it's important:

The reason why Feynman's path integral agrees with the nonzero commutators i.e. with the uncertainty principle is that the standard deviation of the velocity goes to infinity as we approach the continuum limit $\epsilon=\Delta t \to 0$.

If we were only integrating over differentiable, smooth trajectories, we would be forced to conclude that the commutator vanishes. It is extremely important for the path integral to get a contribution from non-differentiable trajectories in which $v(t)$ is effectively divergent. In fact, "almost all" trajectories that contribute to the path integral are non-differentiable in this sense: the differentiable trajectories form a "subset of measure zero" and may actually be ignored!

To deny that the non-differentiable trajectories (or, in quantum field theory, spacetime configuration/histories of the fields) are paramount contributors to the path integral of any consistent quantum theory means to deny the uncertainty principle! Although quantum mechanics has superficially nothing to do with the requirement that important trajectories in the path integral description must be non-differentiable, the uncertainty principle is actually the same thing!

How non-differentiable are the trajectories? We've seen that the typical value of $v(t)^2$ scales like $1/\epsilon$, i.e. $v(t)$ is proportional to $1/\sqrt{\epsilon}$. You may translate it to $\Delta x$, the change of $x$ over one short interval $\epsilon$ into which we divided the trajectory. We get $\Delta x\sim \epsilon v\sim \epsilon / \sqrt{\epsilon}\sim \sqrt{\epsilon}$. What does it mean?

Well, if the typical distance you move after time $t$ scales like $\sqrt{t}$, it's nothing else than the Brownian motion! So when it comes to the power law that determines the dependence of the velocities (and position changes) on the period of time, the typical trajectories contributing to the Feynman path integral resemble the Brownian motion. They look like random walks! This shouldn't be shocking even at the "linguistic level" because the Feynman path integral does integrate over random walks because quantum mechanics says that particles walk in random ways.
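The random-walk scaling is easy to verify by sampling. In the Euclidean version of the free path integral, each increment $\Delta x$ over a time step $\epsilon$ is Gaussian with variance $\hbar\epsilon/m$; the sketch below (my own illustration, with $\hbar=m=1$ and Euclidean weights assumed) confirms $\Delta x \sim \sqrt{\epsilon}$ and therefore $v \sim 1/\sqrt{\epsilon}$:

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, m = 1.0, 1.0

# In the Euclidean free path integral, the weight exp(-m*dx**2/(2*hbar*eps))
# makes each increment dx over a time step eps a Gaussian of variance
# hbar*eps/m -- a random walk: typical |dx| ~ sqrt(eps), typical |v| ~ 1/sqrt(eps).
for eps in (1e-2, 1e-4):
    dx = rng.normal(0.0, np.sqrt(hbar*eps/m), size=1_000_000)
    rms_dx = np.sqrt(np.mean(dx**2))
    rms_v = rms_dx / eps
    print(eps, rms_dx/np.sqrt(eps), rms_v*np.sqrt(eps))   # both ratios stay ~ 1
```

Shrinking $\epsilon$ makes the steps smaller like $\sqrt{\epsilon}$, not like $\epsilon$: exactly the signature of Brownian motion.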

The unsmoothness of these random walks is actually another way to formulate the uncertainty principle. The precise commutators of the observables are encoded in the precise shape of the "infinitely wide" distributions for the velocities etc.

I must mention that in quantum field theory in $d$ dimensions – we have done quantum mechanics so far which is quantum field theory in $d=1$ (time is the only spacetime variable on which the degrees of freedom depend) – the power laws will be different. If we still use $\epsilon$ for the lattice spacing, the action will be discretized to boxes of volume $\epsilon^d$. This tiny factor will multiply the Lagrangian density at each lattice site so the velocities (derivatives of fields...) of typical trajectories will scale like $1/\sqrt{\epsilon^d}$.

Note that for $\epsilon\to 0$, these velocities diverge even more quickly than they did in $d=1$. The short distance fluctuations of the histories in $d\gt 1$ quantum field theories are not just those of the random walk we found in quantum mechanics; they are even more violently oscillating. This may be heuristically interpreted as the reason why quantum field theories in ever higher numbers of spacetime dimensions suffer from increasingly severe short-distance problems. You may say that it is one of the ways to see why these theories ultimately become non-renormalizable and ill-defined in the ultraviolet.
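The same Gaussian counting can be simulated for a general $d$ (a sketch under the same Euclidean, $\hbar=1$ conventions, with normalization constants dropped; this is my illustration, not from the text): the per-link action $\sim\epsilon^d (\delta\phi/\epsilon)^2$ makes the field difference $\delta\phi$ Gaussian with variance $\epsilon^{2-d}$, so the RMS "velocity" $\delta\phi/\epsilon$ scales like $\epsilon^{-d/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-2   # lattice spacing

# Per lattice link, the discretized action contributes ~ eps**d * (dphi/eps)**2,
# so the field difference dphi across a link is Gaussian with variance
# eps**(2-d), and the derivative v = dphi/eps has RMS value eps**(-d/2).
for d in (1, 2, 3):
    dphi = rng.normal(0.0, eps**((2 - d)/2), size=1_000_000)
    rms_v = np.sqrt(np.mean((dphi/eps)**2))
    print(d, rms_v * eps**(d/2))   # ~ 1 for each d: v scales like eps**(-d/2)
```

For $d=1$ this reproduces the quantum-mechanical random walk; for $d>1$ the fluctuations blow up faster, in line with the increasingly severe ultraviolet problems of higher-dimensional field theories.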

Old Englishmen are not the only ones who could have built Stonehenge. Škoda has built this Citihenge, named after the Citigo, our version of the Volkswagen Up!, out of old cars. The vicinity of the Tower Bridge is immediately prettier than before. :-D

Implications for spin foams and discreteness of time

We have emphasized – or at least I have emphasized – that the divergent values of derivatives in the typical histories were needed for the path integral to agree with the nonzero commutators. This is actually a simple way to see that all would-be path integral theories that want to make the time discrete – e.g. those with a built-in $\epsilon=t_{\rm Planck}$ that is a constant in the quantum gravity realm and therefore cannot be sent to zero – inevitably violate the uncertainty principle, a basic postulate of quantum mechanics.

If the degrees of freedom were discrete in this way, e.g. if the time were divided to Planckian intervals, all observables would have finite-width distributions and you couldn't get the finite, nonzero commutators. In such a theory, there would be no observables that are linked to functions of the dummy variables (we path-integrate over) and that refuse to commute with each other. This is another simple way to exclude all theories of the "spin foam" kind (a path-integral incarnation of loop quantum gravity although the equivalence obviously can't hold because LQG still tries to pretend that some commutators are nonzero). The people who study this garbage don't understand the basic stuff about path integrals because they would otherwise know that divergent standard deviations of the velocities are needed to get nonzero commutators from the path integral. Again, don't mess with the path integral.

And that's the memo.

Bonus: why it's OK that these paths have an infinite action

Jan Reimers made a good point in the comments. Textbooks (correctly!) say that the action computed from a particular random-walk-like trajectory mentioned above is infinite. It is indeed infinite. If $v^2$ has an expectation value going like $1/ \epsilon\to\infty$, the integral of such a kinetic term $mv^2/2$ over time is bound to diverge, too.

But that's how things are. These paths dominate the path integral, anyway. It's because there are many of them. If you consider differentiable trajectories, you may get a smaller action, namely a finite one, but you will integrate over a smaller volume of trajectories in the infinite-dimensional space of paths, and this suppression by the "excessively small volume" in the space of trajectories is at least as important as the exponential suppression due to the divergent action.

One may imagine that all the relevant un-smooth paths are fluctuations away from a classical, smooth one whose action is finite. Quantum mechanics allows one to deviate from such smooth paths and it actually allows enough so that the typical "allowed" paths are non-differentiable.

It's useful to do some maths. Expand the path $x(t)$ into some standing waves (Fourier modes), with terms $a_k\cdot \sin (\pi k t/\Delta t)$ plus some linear term to obey the right condition at the initial and final moments. Now, how do the coefficients $a_k$ of the typical allowed trajectories scale with $k$? If you rewrite the kinetic part of the action which is proportional to $\int v^2\dd t$, you will get terms such as $k^2|a_k|^2$. The extra factor of $k^2$ came from the need to differentiate $x(t)$ to obtain $v(t)$; and this got squared because we had $v^2$.

The path integral contains the factor of $\exp(-S_E/\hbar)$: let us switch to the Euclidean space so that I don't have to apologize for the imaginary unit again. Because $S_E$ is essentially a sum over $k$, we get factors in the path integral of the form $\exp(-C\cdot k^2 |a_k|^2)$. You may see that the distribution of each coefficient $a_k$ is essentially independent of the others and $k^2|a_k|^2$ is of order one, independently of $k$, which means that $|a_k|$ scales like $1/k$ for large $k$. If you have a function with Fourier coefficients scaling in this way and translate it to $x(t)$, a function of a continuous time $t$, you will get a continuous but non-differentiable function of the same random-walk type discussed above.

On the other hand, the action for such a typical trajectory is infinite because it's the sum over $k$ of $k^2|a_k|^2$, up to some overall constants and other details, and because each term is of order one, independently of $k$, and because you have infinitely many terms of this kind (infinitely many Fourier modes), the action becomes infinite. But that's not a problem. Most of the infinity comes from "very large" or "infinite" values of $k$, i.e. very quickly oscillating Fourier modes, and those have a very small impact on the low-frequency observations that can be made with large and clumsy "classical" probes.
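The mode-by-mode statement is also easy to check by sampling (a sketch with all constants set to one; the variable names are mine, not from the text): drawing each $a_k$ from the weight $\exp(-k^2 a_k^2)$ gives per-mode actions of order one for every $k$, so the truncated action grows linearly with the number of modes kept and diverges in the continuum limit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample Fourier coefficients from exp(-k**2 * a_k**2) (constants set to one):
# each a_k is Gaussian with standard deviation ~ 1/k, so the action per mode,
# k**2 * a_k**2, stays O(1) for every k and the total action grows with the
# number of modes K that we keep.
K = 2000
k = np.arange(1, K + 1)
a = rng.normal(0.0, 1.0/(np.sqrt(2.0)*k))        # <k**2 * a_k**2> = 1/2 per mode
per_mode_action = k**2 * a**2

t = np.linspace(0.0, 1.0, 1001)
x = (a[:, None] * np.sin(np.pi*np.outer(k, t))).sum(axis=0)   # one sampled path

print(per_mode_action[:8].round(2))   # O(1) numbers, no decay with k
print(per_mode_action.mean())         # ~ 0.5, so the sum over all k diverges
```

The reconstructed `x` is continuous but ever rougher as more modes are included, while almost all of the (divergent) action sits in the high-$k$ modes that classical, low-frequency probes cannot see.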

If you only have a classical probe, you're back to the classical intuition because $a_k$ modes with too high values of $k$ become invisible while their contribution to the action becomes "universal": every smooth classical action allows pretty much the same un-smooth deviations from it so having the family of nearby un-smooth paths essentially adds a universal factor to the path integral only (if you only compute low-frequency processes etc.). Of course, whenever the quantum fluctuations become so large and important that you can't consistently separate them from the "classical smooth parts" of $x(t)$, the classical limit and the classical intuition become invalid with all the implications.

#### snail feedback (19) :

I'm glad you finally started to use 'Don't mess with' instead of 'Don't mess up with'. The former just sounds much more natural to me.

Wow, I like this very nice step by step (and at my level) proof that the uncertainty principle holds for the path integrals :-). And it was interesting (and somewhat shocking) to see explicitly how badly "messing with the short scale behaviour" spoils even this important and basic pillar of "ordinary" quantum mechanics ...

"Don't mess up" is just a sign of my unauthentic, non-native English, let's not pretend it's anything else. ;-) I know that the idiom has no "up" in it. To mess something up is a slightly different verb.

Fascinating!!! For some reason I had never before heard that non-differentiable paths were included in the integral. I always thought that the Laplacian term in the Lagrangian would give these paths infinite action, thereby excluding them. In fact I think this is even stated in some textbooks. I guess those textbooks are either wrong, or I am misremembering what they said.

Thanks for the enlightenment Lubos, well done.

An excellent point, Jan! It is indeed the case that the action evaluated out of a "typical" trajectory contributing to the path integral - a non-differentiable trajectory - is infinite.

But it's in no contradiction with the fact that these trajectories contribute essentially all of the path integral. It's somewhat unintuitive to explain why it's so. One may "localize" the path integral near classical trajectories for which the action is finite.

However, quantum mechanics allows some fluctuations away from this classical trajectory and the degree of tolerance to such fluctuations is enough to make most of the allowed trajectories non-differentiable.

I thought y'all looked good against Portugal, for 80 minutes. Portugal may go all the way.

:-) I liked the Czech game at the beginning but for much less than 80 minutes. Maybe 40: one could watch the flagrant nerves of Ronaldo, Nani, coach, and remaining players, too. Gradually the game changed. Our guys simply couldn't compete when it came to energy, accuracy, and other things.

Very interesting. One question though, why is it ok for velocities to diverge? Shouldn't they be limited by the speed of light?

Good and important question and no, the speed limit of special relativity doesn't apply and can't apply to these velocities that appear in paths summed in the path integral.

It only applies to actual objects or information but the probability amplitudes aren't actual objects so they may propagate faster than light.

So if you make quantum mechanics relativistic, it surely doesn't mean that one is restricted to histories that respect the speed limit one-by-one. In fact, the propagators in quantum field theory which is fully relativistic may still be obtained as sums over particles' paths - paths that are mostly non-differentiable and thus "superluminal" almost everywhere. One may still prove that the causality holds. A way to do so is to see a cancellation between particles and antiparticles in their contributions to all quantities that are actually measurable.

So quantum mechanics really allows "extreme things" including apparent violations of relativity to happen in between the measurements - those things and extreme wiggles are really essential for the theory to be quantum mechanical and respect the uncertainty principle - but this intermediate engine is still 100% compatible with the fact that the observable phenomena 100% respect the speed limits and other principles.

This is a part of the "anti-quantum zeal" discussions to which dozens of posts on this blog were dedicated. Some people incorrectly imagine that the wave function is real and that what happens with it in between the measurements is a real, objective process. This incorrect assumption leads one to believe that those processes should obey the relativistic speed limits and other things. But this ain't the case. The intermediate stories of the probability amplitudes are just "subjective calculational tricks" and they aren't constrained by constraints that only apply to facts and real events. If a velocity v(t) in a history that contributes to the path integral exceeds c, it is not a "fact" in the physical sense. It's just a property of an abstract term in an abstract calculation that is only turned into facts after many extra steps.

An intriguing feature is that uncharged bosonic fields (scalar particle, photon) behave, in some sense, like physical observables.
More precisely, these are real fields, and two fields at points separated by a space-like interval commute.
So they are not conceptually physical observables, but technically they are very close.

Dear Trimok, it's true for all fields, including the charged ones. After all, a charged bosonic field may still be written in terms of its real/Hermitian and imaginary/antihermitian parts.

Fields in quantum field theory *are* observables. They are given by operators so they're observables. They have the same conceptual properties as all other observables in QM theories. One may measure them and QM predicts the probabilities of different values resulting from the measurement.

But what are *not* observables are the intermediate steps and entities that appear in the calculation of Green's functions or propagators - the wave functions in the middle. And if one describes fields literally as "second quantization" and interprets the classical fields as single-particle wave functions, these wave functions aren't observables.

None of these comments changes anything about any point we have made above and if you wanted to suggest it does, you just added confusion and fog to this thread.

I probably misspoke. I was thinking of physical observables which represent information, such as the energy density or the charge current, and this type of physical observable must respect the principle of causality (commutation at space-like intervals).
If fields in quantum theory are "observables", they do not represent information, even if they can be measured.
Therefore a bosonic field is still a bit strange, because it behaves as an observable related to information, while it is not.

Dear Trimok, again, assuming that a new try will make the key point clearer here: physical observables only represent information if this information is actually measured. Without an actual measurement, there is no information.

The paths we're summing over in the path integral aren't being measured - they correspond to imagined intermediate non-events in between several measurements - so even if these paths contribute to the expectation values of products of quantum fields (correlation functions) which may be measured, the properties of the individual paths aren't "real" and they aren't "information". For this reason, these intermediate mathematical objects don't have to obey - and they don't obey - the relativistic speed limit even though the ultimate results do.

R.F. Streater, an emeritus professor at King's College, has a web page called "Lost Causes in Physics"; many of the entries I agree with, a few I don't. He includes "Rigorous Feynman path integrals". He admits that they have "heuristic value", but I think that in including them in his list, he is using a mathematician's brain and not a physicist's, in that he is looking for rigorous proofs rather than elegance and utility. But maybe not - he did write a QFT text. Personally, I love the elegance and originality of Feynman's approach. Since I am just coming back to this stuff after many years, and learning some of it for the first time, I am not looking for rigorous proofs :)

Dear Gordon, if you click on the gear in the upper right corner, you may Edit settings, including your displayed name, I think.

Your attitude is sensible. I personally tend to agree that trying to make Feynman's path integral rigorous - in a sense similar to the Lebesgue measure - is a lost cause, a wrong goal. The path integrals are templates that should be followed by analogy with finite-dimensional integrals, but whenever we encounter subtleties, we should be ready to study new details and learn new things.

OK finally I think I have Disqus solved :)

"if you try to measure the timing of an event with the precision better than the Planck time, you will fail."

http://physics.stackexchange.com/questions/3098/can-decoherence-time-be-shorter-than-planck-time

Here is something that needs better clarification within this context.

We can imagine that the trajectories of particles are observables and are thus a type of eigenvalue in some matrix representation. So from a multitude of possible trajectories, when an experimentalist looks at his photographs for the first time, he knows that one path will be seen out of the many possible paths that could have been observed. We see this as a classical path, but we know it is in fact a quantum path that has decohered into a preferred path (via the resolution of the Mott problem), or rather some "pointer" path.

Although we know that decoherence is effectively instantaneous, as discussed above, our measurement of its timing must necessarily be imprecise. We also know that if we measure the location along the trajectory, our measurement must necessarily be imprecise.

It seems intuitive to think of the situation as one where, although the center-of-mass trajectory can take on real non-integer coordinate values, there is a necessary error around those values.

"The reason why Feynman's path integral agrees with the nonzero commutators i.e. with the uncertainty principle is that the standard deviation of the velocity goes to infinity as we approach the continuum limit $\epsilon=\Delta t\to 0$."

It is clear from this comment and others above that the uncertainty principle, when applied to QFT, is now related to the differentiability of paths, and not to precise spatial measurements of observables.
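The scaling in the quoted passage is easy to check numerically. For a discretized free (Brownian-like) path, one Gaussian step obeys $\Delta x \sim \sqrt{\epsilon}$ (in units with $\hbar = m = 1$), so the finite-difference velocity $\Delta x/\epsilon$ has a standard deviation scaling like $\epsilon^{-1/2}$, which diverges as $\epsilon\to 0$. A minimal sketch under these assumptions (the sampler and its names are mine, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_std(eps, n_paths=20000):
    """Sample std of the finite-difference velocity over one time step eps."""
    # One Gaussian increment of a discretized free (Euclidean) path:
    # Delta x ~ Normal(0, sqrt(eps)), hence v = Delta x / eps has std eps**-0.5.
    dx = rng.normal(0.0, np.sqrt(eps), size=n_paths)
    return (dx / eps).std()

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"eps = {eps:.0e}  std(v) ~ {velocity_std(eps):8.1f}"
          f"  (eps**-0.5 = {eps**-0.5:8.1f})")
```

The measured standard deviation tracks $\epsilon^{-1/2}$, illustrating why typical paths in the sum are continuous but nowhere differentiable.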

String theory posits a modification of the uncertainty principle to a more generalized form (http://arxiv.org/pdf/hep-th/0608016v2.pdf), where the additional terms are related to properties of the strings themselves (length and tension). From a naive perspective, it seems that this is directly related to the uncertainty associated with the precise spacetime resolution of the path and is independent of the uncertainty associated with the differentiability of center-of-mass trajectories. This appears to be confirmed because in the modified string uncertainty principle, the string terms are additive to the traditional non-string terms.

It would be interesting to see an explanation as to why it seems that imprecision in path measurement is what allows for string theory to have any possibility for existence at all (or if that is an incorrect understanding of the situation).

Apologies, I am not bold enough to try to reply to this comment. Paths are not observables but even if you had some sense in which they are, this has nothing to do with string theory, and so on.

If you asked about this: string theory regulates the short-distance problems and divergences because the objects are extended - and the histories are smooth world sheets without singular vertices.

The uncertainty or chaotic motion of the center of mass (or of the position of point-like particles) isn't enough to smoothen the short-distance problems. In fact, as I sketched above, the huge short-distance variations of the velocity in field theory are the very reason why these theories have short-distance problems - problems that are increasingly severe for increasing spacetime dimension (because the velocities of typical paths diverge more strongly in higher dimensions, with a higher power law).