Friday, January 07, 2011

Meaning and job of the phase spaces

Four authors have submitted a preprint that appeared yesterday:
The principle of relative locality
Amelino-Camelia, Freidel, Kowalski-Glikman, and Smolin apparently think that their paper is as deep as the bottom of the sea. The actual relationship to the bottom of the sea is that their paper is another ship that attempted to float but instead went glub-glub-glub to the bottom within a minute after the first reader began to look at it.

Their paper tries to do some pathological things with the phase space and its symmetries. It would be silly for me to write a whole blog entry about a very dumb paper so instead, this text will try to describe the history, transformations, meaning, and structures that exist on the phase space.

Why the paper is nonsense

But at the very beginning, let me mention that the paper is utter nonsense because the authors have failed to understand that
• phase spaces are very different for different descriptions of physics - mechanics and field theory are two very different examples - and only the phase space of mechanics contains the ordinary space as a submanifold; the different character of the phase spaces also implies highly theory-dependent symmetries
• the position coordinates of the phase space and the canonical momenta have different dimensions (different units), so the symmetry between them can't be understood as a generalized "rotation"
• phase spaces only exist for a quantum system if we find and choose a particular classical limit of the quantum system; there are no unique phase spaces associated with generic quantum systems
• the coordinates on the phase space would only have the same units if one took quantum gravity into account, but in that case, the description of the physical system is highly non-classical and the phase space doesn't work
• the relevant symmetries for the phase space - symplectic symmetries - can't be combined with the symmetries acting on spacetime - such as the Lorentz symmetry - in a nontrivial way
• there is no notion of locality on the phase space
Obviously, I can't enumerate and clarify all the fundamental mistakes in their paper because the paper is a continuous 12-page-long stream of complete crackpottery but you should be able to catch the points from the coherent explanation of the phase space below.

Phase space in classical mechanics

For a deterministic classical (non-quantum) system, the phase space is the space of all possible initial conditions - or conditions valid at any other moment - that contain all the information you need to evolve the system in time.

In Newton's mechanics, the acceleration of a point mass is determined in terms of a force which is a function of the positions of this point and others (and sometimes their velocities). That's why Newton's equations of motion can be written as second-order differential equations for "x(t)", the coordinates of the point masses.

You may calculate the acceleration, i.e. the second derivative of the position, from the equation - and evolve the system a little bit. However, the second derivative of the position is the first derivative of the velocity. If you want to calculate the position of a particle in the "next moment", you need to know both the value of the position as well as its velocity in the "previous moment".

So the initial conditions require you to specify both the position and the velocity; for certain mathematical reasons, it is much more natural to replace the velocities by the "momenta" - which are equal to "p=mv" in the case of mechanics. So the phase space is a space parameterized by the following coordinates:
x_i(t), p_i(t)
I wrote the coordinates as functions of time because they're evolving with time. The standard way to write the equations of motion for "x" and "p" is in terms of the Hamilton equations. Most naturally, the coordinates "x" belong to a manifold identified with the "space around us". A flat 3-dimensional Euclidean space was thought to be the only example worth considering. But it can also be any other curved manifold such as the surface of the Earth.

The momenta "p" are still associated with velocities. So most typically, they transform as the duals of the tangent vectors of the ordinary space and the whole phase space may therefore be represented as a "cotangent bundle" (the convention of "co-" is such that it is not a tangent bundle, sorry). However, in principle, the momenta don't have to be identified with a geometry and the structure of the phase space is more general.

Hamiltonian and Hamilton equations

The most important equations of mechanics - and physics that excludes cosmology - are invariant under the translations in time. It follows from Noether's theorem that there must exist a quantity that is conserved: for the time-translational symmetry, it is called the Hamiltonian. It is just a fancy name for the total energy. It is typically equal to "H(x,p) = p^2/2m + V(x)" where "V" is the potential energy.

The Hamiltonian is a function of the initial conditions - or the conditions at any other moment. Because Noether's theorem has linked the Hamiltonian "H" to translations in time, one may actually use the Hamiltonian to determine the evolution of the degrees of freedom in time. It's given by the Hamilton equations:
dx_i/dt = +∂H/∂p_i,
dp_i/dt = -∂H/∂x_i.
These two equations are enough to determine how everything evolves in time because everything is a function of "x" and "p". In particular, one may write the equations for the evolution of any function "f(x,p)" in terms of the so-called "Poisson brackets" and show that the value of "H(x,p)" remains constant.
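The claim that the Hamilton equations determine everything - and keep "H(x,p)" constant - can be checked with a minimal numerical sketch. The oscillator Hamiltonian, the numbers, and the symplectic Euler integrator below are my illustrative choices, not anything from the paper under discussion:

```python
# A 1D harmonic oscillator, H(x, p) = p^2/(2m) + k x^2/2, evolved by
# Hamilton's equations: dx/dt = +dH/dp = p/m, dp/dt = -dH/dx = -k x.
m, k = 1.0, 1.0

def H(x, p):
    return p * p / (2 * m) + k * x * x / 2

def evolve(x, p, dt, steps):
    # symplectic Euler: update p from the old x, then x from the new p;
    # this scheme respects the phase-space structure and keeps H bounded
    for _ in range(steps):
        p -= k * x * dt      # dp/dt = -dH/dx
        x += p / m * dt      # dx/dt = +dH/dp
    return x, p

x0, p0 = 1.0, 0.0
x1, p1 = evolve(x0, p0, dt=1e-4, steps=100000)   # evolve up to t = 10

energy_drift = abs(H(x1, p1) - H(x0, p0))        # should stay tiny
```

Despite tens of thousands of steps, the value of "H" barely moves, which is the numerical shadow of the conservation law derived above.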

So there are submanifolds of the phase space that correspond to particular values of "H". The physical system never leaves them. If they're compact and the motion is sufficiently random or chaotic, the particle's trajectory eventually "fills" the whole submanifold densely; this set of phenomena is studied under the umbrella of "ergodic theory".

The ergodic principle says that if you wait for a long enough time, the particle spends the same percentage of time in a chosen small region of the phase space as the ratio of the volume of that region to the volume of the whole phase space (restricted to the constant-"H" submanifold).
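A minimal numerical illustration of this principle, assuming the simplest possible system "H = (x^2 + p^2)/2" with "m = k = 1", whose constant-"H" submanifold is a circle; the sector size and evolution time are arbitrary illustrative choices:

```python
import math

# The trajectory should spend a share of its time inside a chosen angular
# sector of the constant-H circle equal to the sector's share of the circle.
def fraction_in_sector(t_total, dt, theta_max):
    x, p = 1.0, 0.0
    hits = steps = 0
    t = 0.0
    while t < t_total:
        p -= x * dt              # dp/dt = -dH/dx
        x += p * dt              # dx/dt = +dH/dp
        t += dt
        steps += 1
        if 0.0 <= math.atan2(p, x) <= theta_max:
            hits += 1
    return hits / steps

theta = 0.5                       # a sector of angular size 0.5 radians
observed = fraction_in_sector(t_total=200.0, dt=1e-3, theta_max=theta)
expected = theta / (2 * math.pi)  # the sector's volume share of the circle
```

The time fraction converges to the volume fraction, exactly as the ergodic principle demands (for this simple system, trivially so, because the angle sweeps the circle uniformly).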

It is useful to note that "x" and "p" have different units.

They are coordinates but there are no natural rotations - and no natural Lorentz transformations - on the phase space. Indeed, rotations and Lorentz transformations require the coordinates to have the same units - after some natural choices - because there must exist Pythagorean invariants of the type "x^2+y^2" or "c^2.t^2-x^2-y^2-z^2".

Such a natural invariant structure doesn't exist on the phase space. Indeed, that would be shocking because "x" is expressed in meters while "p" is expressed in "kilograms times meters over seconds". You don't want to add apples and oranges.

Symplectic structure

However, there is always a natural bilinear invariant of the type
Omega_{ij} dx_i dp_j
where "Omega" is an antisymmetric tensor. Note that we are always multiplying one "x" with one "p" - there are never two "x" or two "p" factors of the kind we saw in the Pythagorean theorem. This antisymmetric tensor may be linked to the Hamilton equations in general and the Poisson brackets in particular.

While quantum mechanics is usually thought of as a more complicated theory than classical physics - surely conceptually speaking, it's harder for most people to understand and believe it - there are many things that are actually simplified by quantum mechanics.

Emergence of the phase space and brackets from quantum mechanics

The Poisson bracket of classical physics may be obtained as a simple commutator in the small "hbar" (classical) limit:
{F,G} = (1/(i.hbar)) [F,G] = (1/(i.hbar)) (FG - GF)
and the antisymmetric tensor on the phase space is simply the commutator of "x" and "p":
Omega^{ij} = (1/(i.hbar)) [x_i, p_j]
The upper indices and lower indices of "Omega" may be converted into each other by inverting the matrix "Omega". The commutator looks so easy: you just multiply the two objects - such as "x" and "p" - in two different ways and take the difference. The Poisson brackets that you obtain in the classical limit look more complicated:
{F,G} = Σ_i (∂F/∂x_i ∂G/∂p_i - ∂G/∂x_i ∂F/∂p_i)
This looks like a mess but it is actually a limit - a "simplified version" of - "(FG-GF)/i.hbar". Funny how limits may become "harder", right? Here, "F" and "G" are two functions of all the "x" and "p" coordinates.
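The formula can be sanity-checked numerically. A minimal sketch with central finite differences (the helper below is my own illustrative construction, not a standard API), verifying "{x,p} = 1" and "{H,H} = 0":

```python
# Numerical Poisson bracket for one degree of freedom:
# {F, G} = dF/dx * dG/dp - dG/dx * dF/dp
def poisson(F, G, x, p, h=1e-5):
    dFdx = (F(x + h, p) - F(x - h, p)) / (2 * h)
    dFdp = (F(x, p + h) - F(x, p - h)) / (2 * h)
    dGdx = (G(x + h, p) - G(x - h, p)) / (2 * h)
    dGdp = (G(x, p + h) - G(x, p - h)) / (2 * h)
    return dFdx * dGdp - dGdx * dFdp

X = lambda x, p: x                       # the coordinate itself
P = lambda x, p: p                       # the momentum itself
H = lambda x, p: p * p / 2 + x * x / 2   # a sample Hamiltonian

canonical = poisson(X, P, 0.3, 0.7)      # the canonical bracket {x, p} = 1
h_with_h  = poisson(H, H, 0.3, 0.7)      # {H, H} = 0, so H is conserved
```

The second line is the classical shadow of "[H,H] = 0" in quantum mechanics: the bracket of any function with itself vanishes, which is why the energy never changes.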

The phase space itself gets extended to the Hilbert space of quantum mechanics. As I said, a point in the phase space must know about "everything you can know about the system" at a given moment. The same is true for the states (complex vectors) in the Hilbert space of quantum mechanics. So they have to be related.

The map is subtle: if you try to find basis vectors of the Hilbert space in quantum mechanics with pretty sharp values of "x" and "p", you will find out that they roughly correspond to points in the phase space. However, the "uncertainty principle" of quantum mechanics tells you that "x" and "p" can't have sharply defined values at the same moment.

Instead, you will be only able to find a basis vector of the Hilbert space that roughly occupies the volume
(2.pi.hbar)^N = h^N
of the phase space. Here, "h" is Planck's constant and "N" is the number of the "x" coordinates (or "p" coordinates) in the phase space. Of course, quantum mechanics allows you to combine the basis vectors in an arbitrary way - into arbitrary complex linear superpositions - so for most vectors in the Hilbert space, the values of many of the "x" and "p" variables are almost completely undetermined.
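The "one basis vector per cell of volume h^N" rule can be verified in the simplest case where the exact quantum spectrum is known - a particle in a 1D box; the box length and momentum cutoff below are arbitrary illustrative numbers:

```python
import math

# For a particle in a 1D box of length L, the standing waves have momentum
# magnitudes p_n = n*pi*hbar/L. The number of states with |p| <= p_max
# should match the phase-space volume 2*p_max*L divided by h = 2*pi*hbar.
hbar = 1.0
h = 2 * math.pi * hbar
L, p_max = 10.0, 50.0

quantum_count = 0
n = 1
while n * math.pi * hbar / L <= p_max:   # count exact quantum states
    quantum_count += 1
    n += 1

semiclassical = (2 * p_max * L) / h      # phase-space volume over h
```

The two counts agree up to a single state, i.e. up to the boundary of the allowed phase-space region.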

Quantum mechanics also simplifies the evolution for the operators. In the Heisenberg picture, the Hamilton equations are replaced by simple Heisenberg equations:
dF/dt = (i/hbar) [H,F]
A simple commutator with the Hamiltonian! Substitute "F=H" to see that the Hamiltonian itself (the energy) doesn't change with time. Again, in the classical limit, one gets the Hamilton equations
dF/dt = -{H,F}
we began with. Because of the relationship between the phase space and the Hilbert space we have just described, it is clear that you may only talk about the phase space if the volumes on the phase space are large enough - relatively to the appropriate power of Planck's constant.

Symplectic symmetry

The rotations of the "x,y" plane around the origin leave the following object invariant:
x^2 + y^2
By the Pythagorean theorem, this is nothing else than the squared distance of the point from the origin. In a similar way, the Lorentz transformations preserve
c^2 t^2 - x^2 - y^2 - z^2
These bilinear expressions may be expressed in terms of the "metric tensor" which is symmetric (and, in these bases, diagonal). This tensor is kept invariant by the rotations or Lorentz transformations. So the transformations that preserve a symmetric tensor are called orthogonal or pseudo-orthogonal transformations (pseudo- if some minus signs are included, as in relativity).

However, on the phase space, we have only found a different kind of invariant, one that depends on an antisymmetric tensor "Omega_{ij}" (whose diagonal entries vanish because of the antisymmetry). If an antisymmetric tensor is conserved, we speak about "symplectic transformations". (Note that if a general asymmetric tensor were required to be constant, it would mean that both its symmetric part and its antisymmetric part would be conserved, and the resulting transformations would belong to the intersection of the two groups and would be far too constrained to be interesting.)

For two coordinates "x" and "p", the simple symplectic group "Sp(2)" is simply made out of linear transformations that preserve the volume; the group is isomorphic to "SL(2,R)". However, this is only true for a 2-dimensional phase space whose "Omega_{ij}" is equivalent to a volume form. For a higher number of coordinates, the conservation of all the components of "Omega_{ij}" is more constraining than the conservation of a single volume (the determinant), so "Sp(2K)" groups for larger "K" are rather small subgroups of "SL(2K,R)". They're comparably large to the (pseudo)orthogonal groups.
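The coincidence of "Sp(2)" with "SL(2,R)" can be verified directly: for any 2x2 matrix "M", the identity "M^T Omega M = det(M) Omega" holds, so preserving "Omega" is exactly the condition "det(M) = 1". A minimal sketch with a random matrix (pure-Python matrices, my own illustrative helpers):

```python
import random

# Check M^T * Omega * M = det(M) * Omega for a random 2x2 matrix M,
# with Omega the standard antisymmetric (symplectic) form.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

Omega = [[0.0, 1.0], [-1.0, 0.0]]

random.seed(0)
M = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

lhs = mat_mul(transpose(M), mat_mul(Omega, M))
rhs = [[det * Omega[i][j] for j in range(2)] for i in range(2)]
max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

So in two phase-space dimensions, "preserving Omega" and "preserving the volume" are literally the same condition - the special feature that fails for "2K > 2" coordinates, as explained above.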

The intersection of "Sp(2K,C)" as defined above - and naturally extended to complex numbers - and "U(2K)" is a compact group that forms one of the four infinite families in the classification of all simple compact Lie groups.

However, the "Sp(2)" symmetry of a simple phase space is broken by almost any Hamiltonian. It is not a symmetry of dynamics in any way; instead, as the quantum mechanical extension shows, the symplectic group is just a classical artifact of the quantum "U(infinity)" symmetry of transformations acting on the Hilbert space that preserve the norm (and the number of basis vectors). However, a general transformation of one state into another state (or the corresponding general conjugation of all operators, if you want to speak in terms of operators) is not a symmetry of the Hamiltonian, of course. So it's not the right thing to imagine that the dynamical laws respect any kind of a symplectic symmetry.

Also, even at the level of symmetries, the symplectic symmetry can't be nontrivially combined with the rotational symmetry of the individual "x" coordinates into something bigger. This statement is somewhat analogous to the Coleman-Mandula theorem.

Path integral

I have explained that quantum mechanics simplifies the Hamilton equations - into the Heisenberg equations. It also simplifies another abstract way to express classical mechanics, the Lagrangian mechanics. The laws of classical mechanics may be expressed by the "principle of least action",
δS = 0
For fixed initial and final values of "x(t)" - but with "p(t)" left arbitrary - we evaluate the action "S" - an integral of the Lagrangian "L" over time - for all conceivable trajectories. And the trajectory with the smallest value of "S" is the one that is actually realized in Nature. This law of physics looks "retrocausal" because we apparently need to know the "final conditions" of "x" in the future to determine the properties of the trajectory "now". But this retrocausality is not real because the principle of least action, when applied to an infinitesimally close initial and final point, may be shown to be equivalent to the standard differential equations.

This principle is a very concise yet mysterious way to write the laws of physics. Why does Nature try to minimize things?

Again, it can be explained by quantum mechanics. Any classical system worth considering - and written in terms of the phase space and the Hamilton equations - was derived as a limit of the commutators and Heisenberg equations in quantum mechanics. However, the quantum mechanical theory may also be rewritten in terms of Feynman's path integral. This approach to quantum mechanics tells us to calculate the amplitudes of all transitions as the sum over all histories
A(initial, final) = ∫ Dx(t) Dp(t) exp(i.S/hbar)
where we integrate over the infinite-dimensional space of all trajectories - and histories - between the initial and final values of "x(t)". Because the integrand is a wildly fluctuating phase, most of the contributions cancel among nearby trajectories. Only the trajectories for which the phase "S/hbar" is nearly constant contribute "collectively" and "constructively", so much of the transition amplitude may be attributed to the histories with the extremal value of "S".
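The mechanism - wild cancellations everywhere except near the stationary phase - can be seen already in a zero-dimensional caricature of the path integral, where the "sum over histories" is an ordinary integral; the action "S(x) = (x-1)^2" and the numbers are my illustrative choices:

```python
import cmath
import math

# For S(x) = (x - 1)^2, the oscillatory integral of exp(i*S/hbar) is
# dominated by the neighborhood of the stationary point x = 1 and tends to
# sqrt(pi*hbar) * exp(i*pi/4) (a Fresnel integral) as hbar -> 0.
hbar = 0.05

def S(x):
    return (x - 1.0) ** 2

dx = 1e-4
n = int(40.0 / dx)   # grid over [-20, 20], fine enough for the oscillations
integral = sum(cmath.exp(1j * S(-20.0 + k * dx) / hbar)
               for k in range(n)) * dx

exact = math.sqrt(math.pi * hbar) * cmath.exp(1j * math.pi / 4)
err = abs(integral - exact)
```

Even though the integrand has modulus one everywhere, almost the entire answer comes from the small region where "S" is stationary - the toy version of why the classical trajectory dominates.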

I would say that the Feynman path integral is much more "inevitable" a form of the laws of quantum mechanics because it follows from the linearity of all the complex amplitudes. There simply has to exist a linear formula for the evolution amplitudes between the initial and final state. The most general form of such a formula is the sum over all histories, and one can show that "exp(i.S/hbar)" is the right integrand.

But by taking the classical limit, one may see that the corresponding law in the classical limit says that the action has to be extremized. (The global minima of "S" are the most important solutions that are relevant for the normal evolution; the other local minima of "S" are also important and give "instanton" contributions to the evolution amplitudes; other stationary points - saddle points and local maxima - also contribute and are known as the "sphalerons"; unstable D-branes are examples of such sphalerons with a string-theoretical twist; there are no global maxima because the action "S" is not bounded from above.)

Field theory and more complicated phase spaces

The phase spaces in mechanics are naturally labeled by coordinates that deserve to be called "x" and "p". However, classical mechanics was largely superseded by classical field theory in the 19th century. Field theory totally changes the physical interpretation of the coordinates and their canonical momenta (and their number); however, it may be expressed in such a way that the structure of the equations of motion is totally unchanged, even though the interpretation of the canonical coordinates and momenta - much like any conceivable symmetries acting on them - is completely different.

In particular, field theory requires each point of space (and time) to remember the real value of some field, "phi(x,t)". This whole "phi(x,t)" plays the very same role in field theory as "x(t)" played in mechanics. The main difference is that the index "i" of "x_i(t)" that we had in mechanics is replaced by a continuous index or argument "x" - the position of the point whose field we're evaluating.

So instead of summing over the indices "i", field theoretical Hamiltonian formulae often integrate over "x" - the ordinary "space", now rebranded as a space of indices. It's important to appreciate that the character of the phase space totally changes. However, the theory still obeys a certain general form.
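A minimal sketch of how the mechanics template carries over, assuming the simplest field theory I can discretize - a free scalar field on a 1D periodic lattice (the lattice size, time step, and initial bump are my illustrative choices):

```python
import math

# The index i of x_i in mechanics becomes the lattice site; the Hamiltonian
# is a sum over sites (the discretized version of the continuum integral).
N, dx, dt = 64, 1.0, 0.02

def energy(phi, pi):
    # H = sum over sites of [ pi^2/2 + (grad phi)^2/2 ] * dx
    e = 0.0
    for i in range(N):
        grad = (phi[(i + 1) % N] - phi[i]) / dx
        e += (pi[i] ** 2 / 2 + grad ** 2 / 2) * dx
    return e

# initial data: a smooth bump in the field, zero momentum everywhere
phi = [math.exp(-((i - N / 2) ** 2) / 20.0) for i in range(N)]
pi = [0.0] * N
E0 = energy(phi, pi)

# symplectic Euler for the field-theory Hamilton equations:
# d phi_i/dt = pi_i,  d pi_i/dt = (lattice laplacian of phi)_i
for _ in range(2500):
    for i in range(N):
        lap = (phi[(i + 1) % N] - 2 * phi[i] + phi[(i - 1) % N]) / dx ** 2
        pi[i] += lap * dt
    for i in range(N):
        phi[i] += pi[i] * dt

E1 = energy(phi, pi)
drift = abs(E1 - E0) / E0    # relative energy drift; should stay small
```

The equations have exactly the Hamiltonian form from mechanics; only the meaning of the "coordinates" has changed - they are now field values at the lattice sites.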

In field theory (or any relativistic theory, for that matter), the identity of the phase space is connected with a particular slice of the spacetime. So once you use a phase space (or a state vector in quantum mechanics at one moment), the Lorentz symmetry becomes obscure. But that doesn't mean that it's broken: in the phase space formalism, the symmetries may just become "harder to see" but they may still be there. All relativistic theories may be expressed in terms of phase spaces and Hilbert spaces; the Lorentz symmetry may become less obvious but it's still there.

Flat phase spaces

In classical mechanics, we could have generalized the phase space to become a curved manifold. In field theory, the phase space has infinitely many coordinates "phi(x,t)" and "d phi(x,t)/dt" and you might think about the extensive industry of designing complicated curved infinite-dimensional manifolds (phase spaces) for field theory.

But you should realize that the progress in the recent centuries has shown that this is not a terribly fruitful way of generalizing the flat spaces; at least, so it looks so far. The interesting dynamics of field theory usually arises from an ordinary flat phase space - but with an interesting prescription for the Hamiltonian.

First, it is still true that the momenta are linked to the velocities (time derivatives of the fields at various points) so they're still essentially "tangent vectors" completing the phase space to the "cotangent bundle". However, even the values of the fields themselves are usually elements of a flat space.

This is rather obvious for fields with spin (or any Lorentz indices). The space of values of a field with spin can't really be usefully "nonlinear" because that would violate the Lorentz or rotational symmetries. However, for scalar fields in the spacetime, it's often useful to appreciate that the scalar field may belong to a curved manifold of possible values - the configuration space or the moduli space.

But restricting the possible values of "phi(x,t)" in the same way for each "x", for a scalar field "phi", turned out to be the only useful generalization of the flat phase space in field theory so far. Chances are that you won't find anything interesting if you try to generalize this concept too much. For example, the dilaton-axion complex field "tau" at each spacetime point belongs to the space "SL(2,Z)\SL(2,R)/SO(2,R)", the fundamental domain. But all useful examples are "direct generalizations" of this template.

No locality on the phase space

One of the numerous points that the authors misunderstand is that in the phase space formalism, the Lorentz symmetry becomes obscure but it should still be present because it is experimentally established. In particular, the Lorentz symmetry is mixing time "t" with the coordinates "x" - that are canonical coordinates in mechanics but they become "indices" in field theory.

What I want to emphasize is that the Lorentz symmetry is not mixing and cannot be mixing time with the momenta.

In relativity, locality requires that things can't move faster than light. It means that "dx/dt" cannot exceed the speed of light. However, there is no condition for "dp/dt". If you play with a ping pong ball, its velocity "v" or momentum "p" may change rather abruptly. There is no speed-of-light-like limit for "dp/dt", at least not one that would follow from any symmetry, because there is no "p^2-t^2" that would be invariant under any symmetries.
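The asymmetry can be made explicit with the standard relativistic dispersion relation: a quick numerical check (with arbitrary illustrative units "m = c = 1") that "dx/dt" saturates at "c" while the momentum may grow without any bound:

```python
import math

# For a relativistic particle, E = sqrt(p^2 c^2 + m^2 c^4) and the velocity
# is v = p c^2 / E. No matter how large p gets, v stays strictly below c:
# dp/dt is unconstrained by relativity, dx/dt is bounded by c.
m, c = 1.0, 1.0
momenta = (0.1, 1.0, 10.0, 1e6)
velocities = [p * c * c / math.sqrt(p * p * c * c + m * m * c ** 4)
              for p in momenta]
all_below_c = all(v < c for v in velocities)
```

Even at a momentum a million times "m.c", the velocity only approaches "c" asymptotically - which is why a bound on "dx/dt" has no counterpart for "dp/dt".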

The four authors' mixing of the momenta into relativity, locality, and relative locality is just a hint that they're on drugs.

Summary

The phase space is an abstract notion to encode all the information we have about a classical system - it's a tool to study classical physics in the "adult" way. The phase space, like many other concepts in "adult" classical mechanics, can be linked to its generalizations and extensions in quantum mechanics. The latter are often simpler and the origin of the rules is often more transparent.

Quantum mechanics sheds new light on the general character of the laws of classical physics, too.

1 comment:

1. Even in the classical cases for which there are nonlinearities that give rise to chaos, phase spaces that diverge wildly as a result of small changes in initial conditions mean nothing except to show that chaotic behavior is possible.

It's an old attempt to assign some transcendent significance to phase spaces that has been known not to be there since the days of Poincaré.