## Tuesday, August 31, 2010 ... /////

### Why complex numbers are fundamental in physics

I have written about similar issues in articles such as Wick rotation, The unbreakable postulates of quantum mechanics, and Zeta-function regularization, among others.

But now I would like to promote the complex numbers themselves to the central players of the story.

#### History of complex numbers in mathematics

Around 1545, Girolamo Cardano was able to find his solution to the cubic equation. He also noticed the quadratic equation "x^2+1=0". But even negative numbers were demonized at that time ;-) so it was impossible to seriously investigate complex numbers.

Cardano was able to additively shift "x" by "a/3" ("a" is the quadratic coefficient of the original equation) to get rid of the quadratic coefficient. Without loss of generality, he was therefore solving equations of the type

x^3 + bx + c = 0
that only depends on two numbers, "b, c". Cardano was aware of one of the three solutions to the equation; it was co-co-communicated to him by Tartaglia (The Stammerer), also known as Niccolo Fontana. It is equal to
x_1 = cbrt[-c/2 + sqrt(c^2/4 + b^3/27)] +
+ cbrt[-c/2 - sqrt(c^2/4 + b^3/27)]
Here, cbrt is the cubic root. You can check it is a solution if you substitute it into the original equation. Now, using the modern technologies, it is possible to divide the cubic polynomial by "(x - x_1)" to obtain a quadratic polynomial which produces the remaining two solutions once it is solved. Let's assume that the cubic polynomial has 3 real solutions.
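Before turning to that case, Cardano's formula can be sanity-checked in the easy regime where the square root's argument is positive. This is only a sketch; the example cubic "x^3 - 6x - 9 = 0" (with root x = 3) is my own choice, not from the text.

```python
# Cardano's formula for the depressed cubic x^3 + b*x + c = 0,
# checked on x^3 - 6x - 9 = 0 where the discriminant term is positive.
import math

b, c = -6.0, -9.0
disc = c**2 / 4 + b**3 / 27   # (-9)^2/4 + (-6)^3/27 = 20.25 - 8 = 12.25
s = math.sqrt(disc)           # 3.5

def cbrt(u):
    # real cube root that also handles negative arguments
    return math.copysign(abs(u) ** (1.0 / 3.0), u)

x1 = cbrt(-c / 2 + s) + cbrt(-c / 2 - s)   # cbrt(8) + cbrt(1) = 2 + 1
print(x1)                                  # -> 3.0 up to rounding
print(x1**3 + b * x1 + c)                  # -> ~0.0
```

The `cbrt` helper is needed because Python's `**` operator would return a complex number for negative bases with fractional exponents.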

The shocking revelation came in 1572 when Rafael Bombelli was able to find real solutions using the complex numbers as tools in the intermediate calculations. This is an event that shows that the new tool was bringing you something useful: it wasn't just a piece of unnecessary garbage whose costs equal its benefits and that should be cut away by Occam's razor: it actually helps you to solve your old problems.

Consider the equation
x^3 - 15x - 4 = 0.
Just to be sure where we're going, compute the three roots by Mathematica or anything else. They're equal to
x_{1,2,3} = {4, -2-sqrt(3), -2+sqrt(3)}
The coefficient "b=-15" is too big and negative, so the square root in Cardano's formula is the square root of "(-15)^3/27 + 4^2/4" which is a square root of "-125+4" or "-121". You can't do anything about that: it is negative. The argument could have been positive for other cubic polynomials if the coefficient "b" were positive or closer to zero, instead of "-15", but with "-15", it's just negative.

Bombelli realized the bombshell that one can simply work with the "sqrt(-121)" as if it were an actual number; we don't have to give up once we encounter the first unusual expression. Note that it is being added to a real number and a cube root is computed out of it. Using the modern language, "sqrt(-121)" is "11i" or "-11i". The cube roots are general complex numbers but if you add two of them, the imaginary parts cancel. Only the real parts survive.

Bombelli was able to indirectly do this calculation and show that
x_1 = cbrt(2+11i) + cbrt(2-11i) = (2+i) + (2-i) = 4
which matches the simplest root. That was fascinating! Please feel free to verify that (2+i)^3 is equal to "8+12i-6-i = 2+11i" and imagine that the historical characters would write "sqrt(-1)" instead of "i". By the way, it is trivial to calculate the other two roots "x_2, x_3" if you simply multiply the two cubic roots, cbrt, which were equal to "(2+-i)", by the two opposite non-real cubic roots of unity, "exp(+-2.pi.i/3) = -1/2+-i.sqrt(3)/2".
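Bombelli's computation can be replayed in modern notation: the principal complex cube roots of 2±11i are exactly 2±i, their sum is the real root 4, and the other two roots follow from the cube roots of unity as described above. A minimal sketch:

```python
# Bombelli's trick for x^3 - 15x - 4 = 0 in modern complex arithmetic.
import cmath

r1 = (2 + 11j) ** (1 / 3)      # principal cube root -> 2+1j up to rounding
r2 = (2 - 11j) ** (1 / 3)      # -> 2-1j
x1 = (r1 + r2).real            # the imaginary parts cancel
print(x1)                      # -> 4.0

# The other two roots: multiply the cube roots by exp(+-2*pi*i/3).
w = cmath.exp(2j * cmath.pi / 3)
x2 = (r1 * w + r2 * w.conjugate()).real    # -> -2 - sqrt(3)
x3 = (r1 * w.conjugate() + r2 * w).real    # -> -2 + sqrt(3)
print(x2, x3)
```

Python's `**` with a complex base uses the principal branch of the logarithm, which here happens to pick out exactly the pair of cube roots whose imaginary parts cancel.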

When additions to these insights were made by John Wallis in 1673 and later by Euler, Cauchy, Gauss, and others, complex numbers took pretty much their modern form and mathematicians already knew more about them than the average TRF reader - sorry. ;-)

#### Fundamental theorem of algebra

Complex numbers have many cool properties. For example, every N-th order algebraic (polynomial) equation with real (or complex) coefficients has exactly "N" complex solutions (some of them may coincide, producing multiple roots).

How do you prove this statement? Using powerful modern TRF techniques, it's trivial. On a sufficiently big circle in the complex plane, the N-th order polynomial qualitatively behaves like a multiple of "x^N". In particular, the complex phase of the value of this polynomial "winds" around zero in the complex plane N times. Or the logarithm of the polynomial jumps by 2.pi.i.N, if you wish.

You may divide the big circle into an arbitrarily fine grid and the N units of winding have to come from some particular "little squares" in the grid: the jump of the logarithm over the circle is the sum of jumps of the logarithm over the round trips around the little squares that constitute the big circle. The little squares around which the winding is nonzero have to have the polynomial equal to zero inside (otherwise the polynomial would be pretty much constant and nonzero inside, which would mean no winding) - so the roots are located in these little squares. If the winding around a small square is greater than one, there is a multiple root over there. In this way, you can easily find the roots and their number is equal to the degree of the polynomial.
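The first step of the argument, that the phase winds N times around a large circle, is easy to check numerically. The degree-5 polynomial below is my own arbitrary example:

```python
# Winding of the phase of a polynomial around a large circle equals its degree.
import numpy as np

coeffs = [1, -2, 0, 5, -1, 7]                     # an arbitrary degree-5 polynomial
z = 100.0 * np.exp(1j * np.linspace(0, 2 * np.pi, 20001))  # big circle, |z| = 100
phase = np.unwrap(np.angle(np.polyval(coeffs, z)))
winding = (phase[-1] - phase[0]) / (2 * np.pi)
print(round(winding))                              # -> 5, the degree
```

`np.unwrap` stitches the principal-value phases into a continuous function of the angle, so the total increase divided by 2.pi is precisely the winding number.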

Fine. People have learned lots of things about the calculus - and functions of complex variables. They were mathematically interesting, to say the least. Complex numbers are really "new" because they can't be reduced to real diagonal matrices. That wouldn't be true e.g. for "U-complex" numbers "a+bU" where "U^2=+1": you could represent "U" by "sigma_3", the Pauli matrix, which is both real and diagonal.

Complex numbers have unified geometry and algebra. The exponential of an imaginary number produces sines and cosines - and knows everything about the angles and rotations (multiplication by a complex constant is a rotation together with magnification). The behavior of many functions in the complex plane - e.g. the Riemann zeta function - has been linked to number theory (distribution of primes) and other previously separate mathematical disciplines. There's no doubt that complex numbers are essential in mathematics.

#### Going to physics

In classical physics, complex numbers would be used as bookkeeping devices to remember the two coordinates of a two-dimensional vector; the complex numbers also knew something about the length of two-dimensional vectors. But this usage of the complex numbers was not really fundamental. In particular, the multiplication of two complex numbers never directly entered physics.

This totally changed when quantum mechanics was born. The waves in quantum mechanics had to be complex, "exp(ikx)", for the waves to remember the momentum as well as the direction of motion. And when you multiply operators or state vectors, you actually have to multiply complex numbers (the matrix elements) according to the rules of complex multiplication.

Now, we need to emphasize that it doesn't matter whether you write the number as "exp(ikx)", "cos(kx)+i.sin(kx)", "cos(kx)+j.sin(kx)", or "(cos kx, sin kx)" with an extra structure defining the product of two 2-component vectors. It doesn't matter whether you call the complex numbers "complex numbers", "Bombelli's spaghetti", "Euler's toilets", or "Feynman's silly arrows". All these things are mathematically equivalent. What matters is that they have two inseparable components and a specific rule for how to multiply them.
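The "two inseparable components plus a multiplication rule" point can be made concrete: a bare pair of reals with the rule (a,b)*(c,d) = (ac-bd, ad+bc) already is the complex numbers, whatever you call it. A minimal sketch (the function name is my own):

```python
# Complex numbers as plain 2-component vectors with the right product rule.
def pair_mul(u, v):
    """(a,b)*(c,d) = (a*c - b*d, a*d + b*c), the complex multiplication rule."""
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

u, v = (1.0, 2.0), (3.0, -1.0)        # i.e. 1+2i and 3-i
print(pair_mul(u, v))                 # -> (5.0, 5.0)
print(complex(*u) * complex(*v))      # -> (5+5j), the same thing
```

Whether the pair is spelled `(a, b)`, `a + b*1j`, or a 2x2 matrix is pure notation; the product rule is the substance.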

The commutator of "x" and "p" equals "xp-px" which is, for two Hermitean (real-eigenvalue-boasting) operators, an anti-Hermitean operator i.e. "i" times a Hermitean operator (because its Hermitean conjugate is "px-xp", the opposite thing). You can't do anything about it: if it is a c-number, it has to be a pure imaginary c-number that we call "i.hbar". The uncertainty principle forces the complex numbers upon us.
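The anti-Hermiticity of the commutator is a one-line identity, (AB-BA)^dagger = BA-AB, but a random numeric check makes it vivid:

```python
# The commutator of two Hermitean matrices is anti-Hermitean,
# i.e. "i" times a Hermitean matrix.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.conj().T          # Hermitean
B = N + N.conj().T          # Hermitean
C = A @ B - B @ A           # the commutator [A, B]

print(np.allclose(C.conj().T, -C))              # -> True: anti-Hermitean
print(np.allclose((C / 1j).conj().T, C / 1j))   # -> True: C/i is Hermitean
```

So whenever a commutator of two real observables is a c-number, that c-number is forced to be purely imaginary, which is exactly the "i.hbar" in the text.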

So the imaginary unit is not a "trick" that randomly appeared in one application of some bizarre quantum mechanics problem - and something that you may humiliate. The imaginary unit is guaranteed to occur in any system that reduces to classical physics in a limit but is not a case of classical physics exactly.

Completely universally, the commutators of Hermitean operators - that are "deduced" from real classical observables - involve an "i". That means that their definitions in any representation that you may find have to include some "i" factors as well. Once "i" enters some fundamental formulae of physics, including Schrödinger's (or Heisenberg's) equation, it's clear that it penetrates pretty much all of physics. In particular:
In quantum mechanics, probabilities are the only thing we can compute about the outcomes of any experiments or phenomena. And the last steps of such calculations always include the squaring of absolute values of complex probability amplitudes. Complex numbers are fundamental for all predictions in modern science.

#### Thermal quantum mechanics

One of the places where imaginary quantities occur is the calculation of thermal physics. In classical (or quantum) physics, you may calculate the probability that a particle occupies an energy-E state at thermal equilibrium. Because the physical system can probe all the states with the same energy (and other conserved quantities), the probability can only depend on the energy (and other conserved quantities).

By maximizing the total number of microstates (and entropy) and by using Stirling's approximation etc., you may derive that the probabilities go like "exp(-E/kT)" for the energy-E states. Here, "T" is called the temperature and Boltzmann's constant "k" is only inserted because people began to use stupidly different units for temperature than they used for energy. This exponential gives rise to the Maxwell-Boltzmann and other distributions in thermodynamics.

The exponential had to occur here because it converts addition to multiplication. If you consider two independent subsystems of a physical system (see Locality and additivity of energy), their total energy "E" is just the sum "E1+E2". And the value of "exp(-E/kT)" is simply the product of "exp(-E1/kT)" and "exp(-E2/kT)".

This product is exactly what you want because the probability of two independent conditions is the product of the two separate probabilities. The exponential has to be everywhere in thermodynamics.
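The addition-to-multiplication property is the whole point, and it is a one-liner to verify (the value of kT below is arbitrary, chosen only for the demo):

```python
# The exponential converts additivity of energy into multiplicativity
# of Boltzmann factors: exp(-(E1+E2)/kT) = exp(-E1/kT) * exp(-E2/kT).
import math

kT = 0.025          # arbitrary temperature scale for the check
E1, E2 = 0.3, 0.7
lhs = math.exp(-(E1 + E2) / kT)
rhs = math.exp(-E1 / kT) * math.exp(-E2 / kT)
print(math.isclose(lhs, rhs))       # -> True
```

Any other function with this property f(x+y) = f(x)f(y) that is continuous is an exponential, which is why the exponential is unavoidable in thermodynamics.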

Fine. When you do the analogous reasoning in quantum thermodynamics, you will still find that the exponential matters. But the classical energy "E" in the exponent will be replaced by the Hamiltonian "H", of course: it's the quantum counterpart of the classical energy. The operator "exp(-H/kT)" will be the right density matrix (after you normalize it) that contains all the information about the temperature-T equilibrium.

There is one more place where the Hamiltonian occurs in the exponent: the evolution operator "exp(H.t/i.hbar)". The evolution operator is also an exponential because you may get it as a composition of the evolution by infinitesimal intervals of time. Each of these infinitesimal evolutions may be calculated from Schrödinger's equation and
[1 + H.t/(i.hbar.N)]^N = exp(H.t/i.hbar)
in the large "N" limit: we divided the interval "t" into "N" equal parts. If you don't want to use any infinitesimal numbers, note that the derivative of the exponential is an exponential again, so it is the right operator that solves the Schrödinger-like equation. So fine, the exponentials of multiples of the Hamiltonian appear both in the thermal density matrix as well as in the evolution operator. The main "qualitative" difference is that there is an "i" in the evolution operator. In the evolution operator, the coefficient in front of "H" is imaginary while it is real in the thermal density matrix.
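The large-N limit above can be tested on a concrete two-level Hamiltonian. Taking H = sigma_x and hbar = 1 (my own choice), the exact evolution operator is known in closed form, exp(-iHt) = cos(t) 1 - i sin(t) sigma_x, since sigma_x squares to the identity:

```python
# N compositions of the infinitesimal evolution approach exp(H*t/(i*hbar)).
import numpy as np

H = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x, with hbar = 1
t, Nsteps = 1.0, 100000
step = np.eye(2) + H * t / (1j * Nsteps)         # 1 + H.t/(i.hbar.N)
approx = np.linalg.matrix_power(step, Nsteps)    # [1 + H.t/(i.hbar.N)]^N

exact = np.cos(t) * np.eye(2) - 1j * np.sin(t) * H
print(np.max(np.abs(approx - exact)))            # -> small, of order 1/N
```

The error shrinks like 1/N, exactly as the compound-interest formula for the ordinary exponential would suggest.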

But you may erase this difference if you consider an imaginary temperature or, on the contrary, you consider the evolution operator by an imaginary time "t = -i.hbar/k.T". Because the evolution may be calculated in many other ways and additional tools are available, it's the latter perspective that is more useful. The evolution by an imaginary time calculates thermal properties of the system.

Now, is it a trick that you should dismiss as an irrelevant curiosity? Again, it's not. This map between thermal properties and imaginary evolution applies to the thermodynamics of all quantum systems. And because everything in our world is quantum at the fundamental level, this evolution by imaginary time is directly relevant for the thermodynamics of anything and everything in this world. Any trash talk about this map is a sign of ignorance.

Can we actually wait for an imaginary time? As Gordon asked, can such imaginary waiting be helpful to explain why we're late for a date with a woman (or a man, to be really politically correct if a bit disgusting)?

Well, when people were just animals, Nature told us to behave and to live our lives in the real time only. However, theoretical physicists have no problem living their lives in the imaginary or complex time, too. At least they can calculate what will happen in their lives. The results satisfy most of the physical consistency conditions you expect except for the reality conditions and the preservation of the total probabilities. ;-)

Frankly speaking, you don't want to live in the imaginary time but you should certainly be keen on calculating with the imaginary time!

#### Analytic continuation

The thermal-evolution map was an example showing that it is damn useful to extrapolate real arguments into complex values if you want to learn important things. However, thermodynamics is not the only application where this powerful weapon shows its muscles. More precisely, you surely don't have to be at equilibrium to see that the continuations of quantities to complex values will bring you important insights that can't be obtained by inequivalent yet equally general methods.

The continuation into imaginary values of time is linked to thermodynamics, the Wick rotation, or the Hartle-Hawking wave function. Each of these three applications - and a few others - would deserve a similar discussion to the case of the "thermodynamics as imaginary evolution in time". I don't want to describe all of conceptual physics in this text, so let me keep the thermodynamic comments as the only representative.

#### Continuation in energy and momentum

However, it's equally if not more important to analytically continue in quantities such as the energy. Let us immediately say that special relativity downgrades energy to the time component of a more comprehensive vector in spacetime, the energy-momentum vector. So once we realize that it's important to analytically continue various objects to complex energies, relativity makes it equally important to continue analogous objects to complex values of the momentum - and various functions of momenta such as "k^2".

Fine. So we are left with the question: Why should we ever analytically continue things into the complex values of the energy?

A typical layman who doesn't like maths too much thinks that this is a contrived, unnatural operation. Why would he do it? A person who likes to compute things with complex numbers asks whether we can calculate it. The answer is Yes, we can. ;-) And when we do it, we inevitably obtain some crucial information about the physical system.

A way to see why such things are useful is to imagine that the Fourier transform of a step function, "theta(t)" (zero for negative "t", one for positive "t"), is something like "1/(E-i.epsilon)". If you add some decreasing "exp(-ct)" factor to the step function, you may replace the infinitesimal "epsilon" by a finite constant.

Anyway, if you perturb the system at "t=0", various responses will only exist for positive values of "t". Many of them may exponentially decrease - like in oscillators with friction. All the information about the response at a finite time can be obtained by continuing the Fourier transform of various functions into complex values of the energy.
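The step-function claim can be checked directly. With a decay factor "exp(-c.t)" added, the Fourier transform of the damped step is exactly "1/(c - iE)", a pole sitting just below the real-E axis; sign and normalization conventions vary, so this is one common choice:

```python
# Fourier transform of theta(t)*exp(-c*t): integral over t > 0 of
# exp(-c*t)*exp(i*E*t) dt equals 1/(c - i*E).
import numpy as np

c, E = 0.5, 2.0
t = np.linspace(0, 80, 400001)          # effectively t in [0, infinity)
f = np.exp(-c * t) * np.exp(1j * E * t)
dt = t[1] - t[0]
numeric = dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoid rule
analytic = 1 / (c - 1j * E)
print(abs(numeric - analytic))          # -> ~0 up to discretization error
```

Sending c to zero pushes the pole onto the real axis and reproduces the "1/(E - i.epsilon)" structure mentioned above.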

Because many physical processes will depend "nicely" or "analytically" on the energy, the continuation will nicely work. You will find out that in the complex plane, there can be non-analyticities - such as poles - and one can show that these singular points or cuts always have a physical meaning. For example, they are identified with possible bound states, their continua, or resonances (metastable states).
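The standard textbook illustration of such a pole, which I add here as my own example rather than one from the text, is the damped oscillator response function "1/(w0^2 - w^2 - i.gamma.w)": its poles sit at complex frequencies whose imaginary part is the decay rate of the metastable state.

```python
# Poles of the damped-oscillator response 1/(w0^2 - w^2 - i*gamma*w):
# they solve w^2 + i*gamma*w - w0^2 = 0.
import numpy as np

w0, gamma = 2.0, 0.3
poles = np.roots([1, 1j * gamma, -w0**2])
print(poles)                     # w = -i*gamma/2 +- sqrt(w0^2 - gamma^2/4)
print(np.all(poles.imag < 0))    # -> True: both poles in the lower half plane
```

Both poles lying in the lower half of the complex frequency plane is the analytic statement of causality: the response decays rather than grows after the perturbation.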

The information about all possible resonances etc. is encoded in the continuation of various "spectral functions" - calculable from the evolution - to complex values of the energy. Unitarity (preservation of the total probabilities) can be shown to restrict the character of discontinuities at the poles and branch cuts. Some properties of these non-analyticities are also related to the locality and other things.

There are many links over here for many chapters of a book.

However, I want to emphasize the universal, "philosophical" message. These are not just "tricks" that happen to work as a solution to one particular, contrived problem. These are absolutely universal - and therefore fundamental - roles that the complex values of time or energy play in quantum physics.

Regardless of the physical system you consider (and its Hamiltonian), its thermal behavior will be encoded in its evolution over an imaginary time. If Hartle and Hawking are right, then regardless of the physical system, as long as it includes quantum gravity, the initial conditions of its cosmological evolution are encoded in the dynamics of the Euclidean spacetime (which contains an imaginary time instead of the real time from the Minkowski spacetime). Regardless of the physical system, the poles of various scattering amplitudes etc. (as functions of complexified energy-momentum vectors) tell you about the spectrum of states - including bound states and resonances.

Before we study physics, we don't have any intuition for such things. That's why it's so important to develop an intuition for them. These things are very real and very important. Everyone who thinks it should be taboo - and should be ridiculed - to extrapolate quantities into complex values of the (originally real) physical arguments is mistaken and is automatically avoiding a proper understanding of a big portion of the wisdom about the real world.

Most complex numbers are not "real" numbers in the technical sense. ;-) But their importance for the functioning of the "real" world and for the unified explanation of various features of the reality is damn "real".

And that's the memo.

#### snail feedback (23) :

Learn Geometric Algebra and then you won't need complex numbers anymore (for physics)

Complex numbers are nothing more than a subalgebra of GA/Clifford algebra.

Nothing special about them at all.

Holy cow, gezinorgiva.

There is everything fundamental and special about the complex numbers as you would know if you have read at least my modest essay about them.

The complex numbers may be a subset of many other sets but the complex numbers are much more fundamental than any of these sets.

The nearest college or high school is recommended.

There is an interesting article related to the topic of this post by C. N. Yang in the book "Schrödinger, Centenary Celebration of a Polymath", ed. C. W. Kilmister, entitled "Square root of minus one, complex phases and Erwin Schrödinger". There Yang quotes Dirac as saying that as a young man he thought that noncommutativity was the most revolutionary and essentially new feature of quantum mechanics, but as he got older he came to think that it was the entrance of complex numbers into physics in a fundamental way (as opposed to as auxiliary tools, as in circuit theory). He describes Schrödinger's struggles to come to terms with that, after unsuccessfully trying to get rid of "i". Also included is the role that Schrödinger's earlier work on Weyl's seminal gauge theory ideas played in his discovery of quantum mechanics.


Noncommutativity implies complex numbers; the Pauli spin matrices sigma_x sigma_y sigma_z multiply to give i.

Carl, please... Since you're gonna be a student again, you will have to learn how to think properly again.

Your statement is illogical at every conceivable level.

First, the "product" of the three Pauli matrices has nothing directly to do with noncommutativity. The latter is a property of two matrices, not three matrices. The product is not a commutator (although it's related to it).

Second, the fact that the product includes an "i" is clearly a consequence of the fact that in the conventional basis, one of the Pauli matrices - namely sigma_{y} - is pure imaginary. This imaginary value of sigma_{y} is the reason, not a consequence, of the product's being imaginary.

Third, it's easy to see that noncommutativity doesn't imply any complex numbers in general. The generic real - non-complex - matrices (e.g. the non-diagonal ones) are noncommutative but their commutator is always a real matrix.

Noncommutativity by itself is completely independent of complexity of the numbers. And indeed, complex numbers themselves are commutative, not non-commutative. The only way to link noncommutativity and complex numbers is to compute the eigenvalues of the commutator of two Hermitean operators. Because their commutator is anti-Hermitean, its eigenvalues are pure imaginary. For example, the commutator can be an imaginary c-number, e.g. in xp-px.

For more general operators, the eigenvalues are typically computed from a characteristic equation that will contain (x^2+r^2) factors, producing ir and -ir as eigenvalues.
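The simplest instance of this point: a real antisymmetric 2x2 matrix (an anti-Hermitean operator with real entries) has the characteristic polynomial x^2 + r^2 and therefore purely imaginary eigenvalues, which a quick numeric check confirms:

```python
# A real antisymmetric matrix has purely imaginary eigenvalues +-i*r.
import numpy as np

r = 3.0
K = np.array([[0.0, r], [-r, 0.0]])   # characteristic polynomial: x^2 + r^2
eig = np.linalg.eigvals(K)
print(np.sort(eig.imag))              # -> [-3, 3] up to rounding
print(np.allclose(eig.real, 0))       # -> True
```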

It's actually impossible to avoid the existence of complex numbers even in real analysis—or at least to avoid their effects.

Consider the Taylor series of the function f(x)=1/(1-x^2) centered around x=0. The series is given by f(x)=1+x^2+x^4+x^6+... . It can be seen that this Taylor series is divergent for |x|>1 and so the Taylor series will fail for large x. This isn't very surprising as it can be seen that f(x) has obvious singularities at x=-1,+1 and so the Taylor series could not possibly extend beyond these points.

However, more interesting is the same approach to the function g(x)=1/(1+x^2). This function is perfectly well behaved, having no singularities of any order on the real line. Yet its Taylor series g(x)=1-x^2+x^4-x^6+... is divergent for |x|>1, despite there seemingly being no corresponding singularity as in the previous case.

Analysis in the reals leads to the idea of a radius of convergence, but gives no clear idea where this comes from. In fact using complex numbers the reason becomes clear. g(x) has singularities at x=-i,+i. Despite these existing only in the complex plane, their effects can be felt for the real function. In fact the radius of convergence of a Taylor series is the distance from the central point to the nearest singularity—be it in the real or complex plane (see the book "Visual Complex Analysis" for more).
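A quick numeric check of the commenter's point: the partial sums of the series for 1/(1+x^2) converge inside |x| < 1 and blow up outside, even though the function itself is smooth everywhere on the real line.

```python
# Taylor series of 1/(1+x^2) around 0: converges for |x| < 1, diverges beyond,
# because of the poles at x = +-i.
def partial_sum(x, n_terms):
    """Sum of 1 - x^2 + x^4 - ... up to n_terms terms."""
    return sum((-1) ** k * x ** (2 * k) for k in range(n_terms))

x_in, x_out = 0.5, 1.5
print(partial_sum(x_in, 50), 1 / (1 + x_in**2))   # both ~0.8: converged
print(abs(partial_sum(x_out, 50)))                # astronomically large
```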

Complex numbers become fundamental and indeed in some sense unavoidable the moment we introduce multiplication and division into our algebra. This is because these operations—and most (all?) elementary operations—hold for complex numbers in general and not just for the real numbers.

Once we write expressions like (x^2+7)/(x^4-3), while we may mean for x to be a purely real number, the complex numbers will work in this equation just as well, and indeed more importantly, will continue to work as we perform all elementary algebraic operations on the expression; "BOMDAS" operations, radicals, even taking exponents and logs.

This should not come as too much of a surprise, and we could have started—like the Pythagoreans—by meaning for the expression to be restricted to rational numbers and even disregarding irrational numbers entirely. But irrational numbers will work in the original expression and through all our rational manipulations. What we claim holds for a subset of numbers holds for the larger set too. And the existence of this larger set has concrete implications for expressions on the subset.

So, when we write down any equation at all, we must be careful. We may mean for it to hold for some restricted class of numbers, but there may be much wider implications. While I am not a physicist, I suspect a similar situation arises. We may want or expect the quantities we measure to be expressible in purely real numbers; but the universe may have other ideas.

Sorry, getting old. I meant "Clifford or geometric algebra" rather than "noncommutative". I was continuing the comment by gezinoriva.

And the "i" is not "clearly a consequence" of a basis choice. The product sigma_x sigma_y sigma_z is an element of the Clifford algebra that commutes with everything in the algebra and squares to -1. That's what makes its interpretation as "i" natural, and this does not depend on the basis choice.

Dear Carl, it's completely unclear to me why you think that you have "explained" complex numbers.

A number that squares to minus one is the *defining property* of the imaginary unit "i". You just "found" one (complicated) application - among thousands - where the imaginary unit and/or complex numbers emerge.

The condition that a quantity squares to a negative number appears at thousands of other places, too. For example, it's the coefficient in the exponent of oscillating functions - that are eigenvectors under differentiation. Why do you think that Clifford algebras are special?

Dear Lumos; The Clifford algebras are special as they are related to the geometry of space-time. For example, (ignoring the choice of basis and only looking at algebraic relations) Dirac's gamma matrices are a Clifford algebra. Generalizing to higher dimension people expect that the generalization of the gamma matrices will also be a Clifford algebra.

On this subject I sort of follow David Hestenes; his work geometrizes the quantum wave functions, but I prefer to geometrize the pure density matrices. But other than that, his work explains some of the justification. See his papers at geocalc.clas.asu.edu

My concentration on this subject is due to my belief that geometry is more fundamental than symmetry. (This belief is a "working belief" only, that is, what I really believe is that it's more useful for me to assume this belief than to assume the default which almost everyone else assumes.)

A beautiful example of putting geometry ahead of symmetry are Hestenes' description of point groups in geometric / Clifford algebra. I'm sure you'll enjoy these: Point Groups and Space Groups in GA and Crystallographic Space Groups

Apologies, Carl, but what you write is a crackpottery that makes no sense. Clifford algebras are related to the geometry of spacetime?

So is the Hartle-Hawking wave function, black holes, wormholes, the quintic hypersurface, the conifold, the flop transition, and thousands of other things I can enumerate. In one half of them, complex numbers play an important role.

Also, what the hell do you misunderstand about the generalization of gamma matrices to higher dimensions - which are still just ordinary gamma matrices - that you describe them in this mysterious way?

You just don't know what you're talking about.

Mathematics is an infinite subject and uses complex numbers in an infinite number of ways. Who cares. What's important is when they appear in the definition of space itself, before QM or SR or GR.

In the traditional physics approach, the Pauli spin matrices are just useful matrices for describing spin-1/2. But they can be given a completely geometric meaning and i falls out as the product. See equation (1.8) of Vectors, Spinors, and Complex Numbers in Classical and Quantum Physics

Regarding the relationship between higher dimensions and gamma matrices, see the Wikipedia article Higher dimensional gamma matrices. It defines the higher-dimensional gamma matrices as matrices that satisfy the Clifford algebra relations. But this is well known to string theorists, so why are you asking? I must be misunderstanding you.

Dear Carl,
your comment is a constant stream of nonsense.

First, in physics, one can't define space without relativity or whatever replaces it. You either have a space of relativistic physics, or space of non-relativistic physics, but you need *some* space and its detailed physical properties always matter because they define mathematically inequivalent structures.

So it's not possible to define "space before anything else" such as relativity: space is inseparably linked to its physical properties. In particular, space of Newtonian physics is simply incorrect for physics when looked at with some precision - e.g. in the presence of gravity or high speeds.

Second, the examples I wrote were also linked to space - and they were arguably linked to space much more tightly than your Clifford algebra example. So it is nonsensical for you to return to the thesis that your example is more "space-related" or more fundamental than mine. I have irrevocably shown you that it's not.

Third, it's just one problem with your statements that the Clifford algebra is not "the most essential thing" for space. Another problem is the fact that space itself is not more fundamental than many other notions in physics. Space itself is just one important concept in physics - and there are many others, equally important ones, and they're also linked to complex numbers. All of them can be fundamental in some descriptions, all of them - including space - may be emergent. It's just irrational to worship the concept of space as something special.

So even your broader assumption that what is more tightly linked to space has to be more fundamental is a symptom of your naivety - or a quasi-religious bias.

Fourth, it was you, not me, who claimed that he has some problems with totally elementary things such as Dirac matrices in higher dimensions. So why the fuck are you now reverting your statement? Previously, you wrote "Generalizing to higher dimension people expect that the generalization of the gamma matrices will also be a Clifford algebra."

I have personally learned Dirac matrices for all possible dimensions at the very first moments when I encountered the Dirac matrices, I have always taught them in this way as well, and that's how it should be because the basic properties and algorithms of Dirac matrices naturally work in any dimension - and only by doing the algorithm in an arbitrary dimension, one really understands what the algorithm and the properties of the matrices are. So why are you creating a non-existent controversy about the Dirac matrices in higher dimensions? It's a rudimentary piece of maths. Moreover, in your newest comment, you directly contradicted your previous comment when you claimed that it was me, and not you, who claimed that there was a mystery with higher-dimensional matrices.

There are about 5 completely fundamental gaps in your logic. One of them would be enough for me to think that the author of a comment isn't able to go beyond a sloppy thinking. But your reasoning is just defective at every conceivable level. I just don't know how to interact with this garbage. You're just flooding this blog with complete junk.

Cheers
LM

Some of your readers should look at Gauss on biquadratic residues. The simple fact is that Professor Hawking should return to the black hole that god made for him since he advances no argument beyond those offered many years ago by the fakers Laplace and Lagrange. For the uninformed mathematical physicists, those who don't know up from down (and these are the vast majority), "god" is the nickname among mathematicians for one Kurt Gödel.
(See the discussion "Is it possible that black holes do not exist?" on Physics Forums.)
In any case all rational scientific discourse has been effectively banned since the illegal shutdown of the first international scientific association and journal in 1837 by the Duke of Clarence, Ernest Augustus. See Percy Bysshe Shelley's Mask of Anarchy for a pertinent depiction of the Duke of Clarence, the face behind Castlereagh. A simple google search for "("magnetic union" OR "Magnetischer Verein") AND ("Göttingen Seven" OR "Göttinger Sieben") gauss weber" shows that there has been no serious discussion of the effect of that action on the subsequent development of scientific practice.
We must assume therefore that the concurrent and congruent Augustin-Louis Cauchy scientific method of theft, assassination, plagiarize at leisure remains hegemonic. Chuck Stevens 571-252-0451 stevens_c@yahoo.com

Dear Lubos,

I don't agree that i has to be represented as a c-number. In fact, I think many of the posters have been trying to say (poorly) the following:

i can definitely be defined algebraically as a c-number

OR

it can be written in a representation by a commutative algebra of 2×2 real matrices (the one containing the rotation group SO(2)), defined by the isomorphism:

a + ib <=> ( ( a , b) , ( -b , a) )

(sorry, I had to write the matrix as a list of rows; I hope it's clear)
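For what it's worth, the isomorphism is easy to verify numerically; a minimal sketch (function names are my own) using NumPy:

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Represent a + ib as the 2x2 real matrix ((a, b), (-b, a))."""
    a, b = z.real, z.imag
    return np.array([[a, b], [-b, a]])

# The map is a ring homomorphism: the product of two complex numbers
# corresponds to the product of their matrix representatives.
z, w = 1 + 2j, 3 - 1j
assert np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w))

# The matrix representing i squares to minus the identity, i.e. it
# plays the role of the "number whose square is minus one".
i_mat = to_matrix(1j)
assert np.allclose(i_mat @ i_mat, -np.eye(2))
```

Whether one calls these matrices "complex numbers" or writes a new letter i with the rule i^2 = -1 is, as discussed below, purely a matter of notation.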

Dear señor karra,

of course, I realize this isomorphism with the matrices. But it's just a convention whether you express the "number that squares to minus one" as a matrix or as a new letter. It's mathematically the same thing.

The important thing is that you introduce a new object with new rules. In particular, in "your" case, you must guarantee that the matrices you call "complex numbers" are not general matrices but just combinations of the "1" and "i" matrices. In the case of a letter "i", you must introduce its multiplication rules.

Cheers
LM

@Lumo:
Clifford algebra is the generalization of complex numbers and quaternions to arbitrary dimensions. Just google it. Therefore there should be no controversy here. Clifford algebra (or geometric algebra) has been very successful in reformulating every theory of physics into the same mathematical language. That has, among other things, emphasized the similarities and differences between the theories of physics in a totally new way. One elegant feature of this reformulation is that it reduces Maxwell's equations to one single equation.

The reason why Clifford algebra has lately been renamed "geometric algebra" is that the quantities of the algebra are given geometric interpretations, and the Clifford product is effective in manipulating these geometric quantities directly. Together with the extension of the algebra to a calculus, this formalism has the power to effectively model many geometries, such as projective, conformal, and differential geometry.

In the geometric algebra over three dimensions most quantities are interpreted as lines, planes and volumes. Plane and volume segments of unit size are represented with algebraic objects that square to minus one.
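A small numerical check of that last claim (my own sketch; it assumes the standard representation of the 3D geometric algebra by Pauli matrices): the unit bivector e1e2 (a plane segment) and the unit pseudoscalar e1e2e3 (a volume) both square to minus one, just like the imaginary unit.

```python
import numpy as np

# Basis vectors e1, e2, e3 of 3D space, represented by Pauli matrices
# (the standard matrix representation of the Clifford algebra Cl(3)).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Unit plane segment (bivector) and unit volume (pseudoscalar).
bivector = e1 @ e2
pseudoscalar = e1 @ e2 @ e3

# Both square to minus the identity, like the imaginary unit.
assert np.allclose(bivector @ bivector, -np.eye(2))
assert np.allclose(pseudoscalar @ pseudoscalar, -np.eye(2))
```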

In the reformulation of quantum mechanics with geometric algebra (which describes the geometry of the three dimensions of physical space), the unit imaginary from the standard treatment is identified with several different quantities in the algebra. In some situations, as in the Schrödinger equation, the unit imaginary times h-bar is identified with the spin of the particle by the geometric algebra reformulation. This, together with other results of the reformulation, suggests that spin is an intrinsic part of every aspect of quantum mechanics, and that spin may be the only cause of quantum effects.

If you have the time and interest, I strongly suggest reading a little about geometric algebra. Geometric algebra is not on a collision course with complex numbers. In fact, geometric algebra embraces, generalizes and deploys them to a much larger extent than before.

Dear Hugo, the very assertion that "the Clifford algebra is the generalization of complex numbers to any dimension" is largely vacuous. Complex numbers play lots of roles and they're unique in the most important roles.

One may hide his head in the sand and forget about some important properties of the complex numbers - e.g. the fact that every algebraic equation of the N-th degree has N solutions, not necessarily different, in the complex realm (something that makes C really unique) - but if he does forget them, he's really throwing the baby out with the bath water.
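That uniqueness is easy to see in action; a minimal sketch (my own example) using NumPy's polynomial root finder on a real polynomial with no real roots at all:

```python
import numpy as np

# Fundamental theorem of algebra: a degree-N polynomial has exactly
# N roots (counted with multiplicity) over C - here x^4 + 1, which
# has real coefficients but not a single real root.
coeffs = [1, 0, 0, 0, 1]          # x^4 + 1
roots = np.roots(coeffs)

assert len(roots) == 4            # all four roots exist, in C
# Each root really solves the equation (up to numerical error).
assert all(abs(np.polyval(coeffs, r)) < 1e-9 for r in roots)
# None of them is real.
assert all(abs(r.imag) > 0.1 for r in roots)
```

Over the reals (or over most "generalizations"), such an equation simply has no solutions; C is the place where counting roots by degree always works.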

Of course, if you forget about some conditions, you may take the remaining conditions (the subset) and find new solutions besides C, "generalizations". But it's surely morally invalid to say that the Clifford algebra is "the" generalization. It's at most "a" generalization in some particular direction - one that isn't extremely important.

Ok, that's a semi-important point for the physicist; Clifford algebra is _a_ generalization of complex numbers and quaternions. It is puzzling that all you managed to extract from my comment was that I should have written "a" instead of "the". My comment was about the role of Clifford algebra in physics. When you state that Clifford algebra is not important, you should consider explaining why, if you don't want to be regarded as ignorant and "not important" yourself.

Dear Huge, your "the" instead of "a" was a very important mistake, one that summarizes your whole misunderstanding of the importance of complex numbers.

This article was about the importance of complex numbers in physics and in the branches of mathematics that are used in physics. This importance can't be overstated. Clifford algebras simply come nowhere close to it. They're many orders of magnitude less important than complex numbers.

There may exist mathematical fundamentalists and masturbators who would *like* it if physics were all about Clifford algebras, but the fact still is that physics is not about them. They're a generalization of complex numbers that isn't too natural from a physics viewpoint. After all, even quaternions themselves have an extremely limited role in physics.

The relative unimportance of Clifford algebras in physics may be interpreted in many different ways. For example, it is pretty much guaranteed that a big portion of top physicists don't even know what a Clifford algebra actually is. Mostly those who were trained as mathematicians do know it.

Others who "vaguely know" will tell you that it's an algebra of gamma matrices for spinors, or something like that, but they won't tell you why you would talk about them with such a religious fervor because the relevant maths behind gamma matrices is about representations of Lie groups and Lie algebras, not new kinds of algebras.

Moreover, many of them will rightfully tell you that the overemphasis of Clifford algebras means an irrational preference for spinor representations (and pseudo/orthogonal groups) over other reps and other groups (including exceptional ones). It's just a wrong way of thinking to consider the concept of Clifford algebras fundamental. Physicists don't do it because it's just not terribly useful to talk in this way but even sensible mathematicians shouldn't be thinking in this way.

"Huge" should have been "Hugo".

One more comment. People who believe that Clifford algebras are important and start to study physics are often distracted by superficial similarities that hide big physics differences.

For example, Lie superalgebras are very important in physics (although less than complex numbers, of course), generalizing ordinary Lie algebras in a way that must be allowed in physics and is used in Nature.

However, people with the idea that Clifford algebras are fundamental often try to imagine that superalgebras are just a special case etc. See e.g. this question on Physics Stack Exchange.

The answer is, of course, that superalgebras don't have to be Clifford algebras. They may be more complicated etc. Moreover, the analogy between the algebra of Dirac matrices on one hand and Grassmann numbers on the other hand is just superficial. In physics, it's pretty important we distinguish them. The gamma matrices may anticommute but they're still matrices of Grassmann-even numbers which are different objects than Grassmann-odd numbers.

When we associate fields to points in spacetime, the difference between Grassmann-odd and Grassmann-even objects is just huge, despite the same "anticommutator".

When talking about objects such as spinors, the fundamental math terms are groups, Lie groups, Lie algebras, and their representations. For fields, one perhaps also adds bundles, fibers, and so on, although that language is only used by "mathematical" physicists. But "Clifford algebra" is at most a name given to one particular anticommutator that appears once when we learn about spinors etc. and never appears again. It doesn't bring a big branch of maths that should be studied for a long time. It's just a name for one equation among thousands of equations. It's not manipulated in numerous ways like we manipulate complex numbers or Lie algebras.

The Clifford algebras are the kind of object invented by mathematicians who predetermined that a particular generalization should become ever more important - except that the subsequent research showed the assumption to be invalid, and some people are unwilling to see this fact.