Sunday, November 22, 2015

Pi found in the hydrogen atom

...and in every other hole and round corner of the Universe...

Less than two weeks ago, dozens of media outlets brought us wonderful news. The number \(\pi\approx 3.14159265358979\) has been found in the hydrogen atom. Enthusiastic, magic reports of this kind have appeared at Science Alert, Rochester News Service, Science20, Forbes, Christian Science Monitor, Science Magazine, Space Daily, Physics World, and other places.

The most layman-oriented sources made their readers believe that "pi" had been lost for a few centuries and that Carl Hagen and Tamar Friedmann have suddenly encountered it again, in a haystack. If you search the arXiv for a few seconds, you will find out that the story boils down to the preprint

Quantum Mechanical Derivation of the Wallis Formula for \(\pi\)
by Friedmann and Hagen. Hagen is one of the co-discoverers of the Higgs mechanism, of course.

A funny thing about the paper is that it is actually shorter than many of the "excerpts" in the mass media. The preprint has 2.5 pages of mathematics plus 1 page of references. Now, I assume that the dear TRF reader isn't a complete idiot and she realizes that \(\pi\) is everywhere in mathematics and physics. It's the ratio of the circumference and the diameter of a circle.

This is equivalent to Feynman's #1 most favorite identity in all of mathematics\[

{\Large e^{i\pi}+1 = 0}

\] which connects algebra with geometry. If you put $1 in a bank that offers a 100% annual interest rate, compounded continuously (so the actual yield is higher), then after you wait for \(\pi\sqrt{-1}\) years, where \(\pi\) is the circumference-to-diameter ratio of a circle, your dollar will become exactly minus one dollar.

Needless to say, "the circumference of a circle" isn't the only place where \(\pi\) emerges in mathematics. It emerges literally everywhere. I don't want to mention the dozens of important examples that I know – which are loosely related but not "immediately equivalent". Instead, let's look at the paper about "\(\pi\) and the hydrogen atom". It's time to introduce the two main actors.

The quantum hydrogen atom, a famous problem

"The hydrogen atom" is a canonical simple problem in non-relativistic quantum mechanics – the quantum counterpart of the Kepler problem in classical mechanics. The spectrum etc. may be completely determined by algebra involving the \(SO(4)\) symmetry generators.

As I plan to explain in detail sometime in the future, the hydrogen spectrum and wave functions may also be calculated by Feynman's path integral. There actually exist several different clever procedures for solving the hydrogen path integral. The oldest, from 1979, was presented by Duru and Kleinert. These two physicists (to make things confusing, Kleinert's first name is Hagen) developed a clever transformation – one that maps the hydrogen atom to a harmonic oscillator and is helpful for solving other similar problems, too.

Another method was presented by Ho and Inomata and, as far as I know, the simplest path integral calculation of the hydrogen atom was found by Steiner.

The relevant identities involving \(\pi\), the other actor

But here we are talking about the "undergraduate" method of solving the hydrogen atom using differential equations. And Friedmann and Hagen are able to prove the Wallis formula of 1655\[

\frac{\pi}{2} = \frac{2\cdot 2}{1\cdot 3} \frac{4\cdot 4}{3\cdot 5}\frac{6\cdot 6}{5\cdot 7} \dots

\] using physical considerations involving the hydrogen atom. You're invited to check the identity numerically. The fractions that we multiply are numbers slightly greater than one – but increasingly indistinguishable from one – and their product simply converges to \(\pi/2\approx 1.57\).
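If you want to do the numerical check in a few lines of Python, a minimal sketch (the function name `wallis_partial` is just illustrative) looks like this:

```python
from math import pi

def wallis_partial(n):
    """Product of the first n Wallis fractions (2k*2k)/((2k-1)(2k+1))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
    return p

for n in (10, 1000, 100000):
    print(n, wallis_partial(n))   # slowly climbs towards pi/2
print(pi / 2)                      # 1.5707963...
```

The convergence is slow – the partial product with \(n\) fractions is still off by a relative error of order \(1/n\) – which is why the Wallis formula is pretty but useless for actually computing \(\pi\).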

How do they do it? First of all, it's easy to realize (and many of us could have proved it as kids) that the Wallis formula is equivalent to a cool identity\[

{\Large \zav{-\frac 12} ! = \sqrt{\pi}. }

\] The factorial of minus one-half – also known as \(\Gamma(+1/2)\) – is equal to the square root of pi! You might ask two good questions at this point: Why is this claim equivalent to the Wallis formula? And why is it true?

It is equivalent to the Wallis formula because you may "increasingly accurately" calculate the factorial of very large, even non-integer, numbers. For example, \(1001!=1001\cdot 1000!\) is approximately \(1000\cdot 1000!\). Similarly, \[

1000.5! \approx 1000^{1/2} \cdot 1000!

\] This holds with the accuracy of 0.1% or so. Clearly, if you send 1,000 to infinity, you can get any precision you want in the limit. But if you know \(1000.5!\) pretty well, you may calculate \((-1/2)!\) by "climbing down" (because \((x-1)!=x!/x\) holds for any \(x\) – by the definition of the generalized factorial, we want it to hold for any complex \(x\))\[

\zav{-\frac 12}! = \frac{1000.5!}{1000.5\times 999.5\times \dots \times 1.5\times 0.5}

\] But we had a nice formula for \(1000.5!\) so\[

\zav{-\frac 12}! \approx \sqrt{1000}\frac{1000\times 999\times \dots \times 2\times 1}{1000.5\!\times\! 999.5\!\times \dots \times\! 1.5\!\times\! 0.5}

\] To compute the factorials of half-integers, we need to figure out the ratio of a product of integers and a product of half-integers or, equivalently, of the product of even numbers and the product of odd numbers. If you square the latest displayed formula, you indeed get \(\pi\) on the left-hand side, because the right-hand side becomes the Wallis formula (where all the even factors as well as all the odd factors are written twice). You have to account for the "unpaired" \(\sqrt{1000}\) and check the powers of two (from going from half-integers to odd numbers) etc.
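Both steps of this "climbing down" argument are easy to check numerically. A sketch (using the standard `math.lgamma`, the log of the gamma function, to avoid overflow; recall \(\Gamma(x+1)=x!\)):

```python
from math import lgamma, exp, sqrt, pi

# Step 1: check 1000.5! ~ sqrt(1000) * 1000!  (accurate to ~0.1%)
N = 1000
ratio = exp(lgamma(N + 1.5) - lgamma(N + 1))   # 1000.5! / 1000!
print(ratio / sqrt(N))                          # close to 1

# Step 2: climb down from (N+1/2)! ~ sqrt(N) * N! to (-1/2)!
# by dividing by (N+1/2)(N-1/2)...(1/2).
def half_factorial(N):
    value = sqrt(N)
    for k in range(1, N + 1):
        value *= k / (k - 0.5)      # N! / ((N-1/2)(N-3/2)...(1/2))
    return value / (N + 0.5)        # the remaining factor N+1/2

print(half_factorial(100000))   # approaches sqrt(pi)
print(sqrt(pi))                 # 1.7724538...
```

The errors of both approximations are of order \(1/N\), so sending \(N\to\infty\) gives \((-1/2)!=\sqrt\pi\) exactly.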

Second, how do we know that \((-1/2)!=\sqrt{\pi}\)? There is a short but elegant proof of it, too. The factorials may also be calculated via the Euler integral\[

z! = \int_0^\infty dt\,t^z \,\exp(-t)

\] which you may prove by checking that it gives you \(0!=1\) and \(x!=x\cdot (x-1)!\). The latter arises if you try to integrate by parts. But something funny happens if you calculate \((-1/2)!\). You may see that the integral becomes equivalent to the integral of the Gaussian if you use the substitution \(t=x^2\).
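The Euler integral itself can be sanity-checked with a crude quadrature – here a midpoint rule, truncated at \(t=80\) where the tail \(\sim e^{-80}\) is negligible (the function name is just illustrative):

```python
from math import exp, factorial

def euler_integral(z, a=80.0, n=400000):
    """Midpoint rule for integral_0^a t^z exp(-t) dt."""
    h = a / n
    return sum(((i + 0.5) * h) ** z * exp(-(i + 0.5) * h) for i in range(n)) * h

for z in (0, 1, 5):
    print(z, euler_integral(z), factorial(z))   # integral matches z!
```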

And if you didn't know that \[

\int_{-\infty}^{+\infty} dx\,\exp(-x^2) = \sqrt{\pi},

\] you may prove it by another clever trick. Write the product of two integrals like that. Call the integration variable \(x\) in one of the integrals; and \(y\) in the other. The integral goes over the \((x,y)\) plane when you do it. You may switch to polar coordinates. The integral is rotationally symmetric i.e. \(\varphi\)-independent which gives you \(2\pi\) from the integral over \(\varphi\), while the remaining integral of \(r\cdot\exp(-r^2)\) over the radial coordinate is again easily solvable because the indefinite integral is the Gaussian again – or because it may be transformed back to the Euler integral for \(0!=1\).
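Again, the Gaussian integral can be confirmed numerically before you trust the polar-coordinate trick – a midpoint rule on \([-8,8]\) suffices because the tails beyond \(|x|=8\) contribute only \(\sim 10^{-28}\):

```python
from math import exp, pi, sqrt

def gaussian_integral(a=8.0, n=200000):
    """Midpoint rule for integral_{-a}^{a} exp(-x^2) dx."""
    h = 2 * a / n
    return sum(exp(-(-a + (i + 0.5) * h) ** 2) for i in range(n)) * h

print(gaussian_integral(), sqrt(pi))   # both 1.7724538...
```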

I think that this verbal description "what you should do to prove these statements" is more pedagogic than a sequence of perfectly fine-tuned equations that the readers are supposed to passively devour and memorize. Why? Because the people who can't construct the proofs of all the statements according to the "sketch of the proof" above won't be able to use this kind of mathematics productively, anyway, and their memorization of some derivations is no better than their memorization of a poem.

Back to Hagen and Friedmann

OK, the validity of the Wallis formula and the value of \((-1/2)!\) are well-known basic things in mathematics and Hagen and Friedmann obviously don't repeat them in the paper. Their short paper is about the hydrogen atom. How do they find the Wallis formula for \(\pi\) in the mathematics of the hydrogen atom?

They consider a "candidate wave function" for the hydrogen atom in the form\[

\psi_{\alpha \ell m} = r^\ell e^{-\alpha r^2} Y_{\ell m}(\theta,\phi).

\] This is similar to the Ansatz for the hydrogen atom's ground state. However, the ground state actually contains the factor \(\exp(-\beta r)\) for some coefficient \(\beta\) instead of \(\exp(-\alpha r^2)\). All excited states contain an exponential \(\exp(-\beta r)\) (times a polynomial) instead of the Gaussian factor, too.

So the Friedmann-Hagen "test function" is not an energy eigenstate of the hydrogen atom. But we may still learn a lot of things from this "test function". How? Well, this test function is still orthogonal to all hydrogen's eigenstates with different values of \(\ell\). It is orthogonal to the energy eigenstate with \((n,\ell,m)=(\ell+1,\ell,0)\) – well, the value of \(m\) doesn't matter – which is the lowest-energy eigenstate with a given value of \(\ell\) because it minimizes \(n\) given the well-known condition \(\ell\in\{0,1,\dots,n-1\}\).

Fine. So the test function cannot have a lower energy than this "lowest energy eigenstate" with a given value of \(\ell\). However, in the limit \(\ell\to\infty\), Bohr's 1920 correspondence principle holds and all the wave functions must basically mimic the classical limit. Indeed, in that limit, they find out that the energy of their "test function" is approximately and increasingly precisely equal to the lowest energy eigenvalue given the same \(\ell\).

But they may minimize the energy of their test function with respect to \(\alpha\), the coefficient in the Gaussian exponent, and the minimum of the expectation value of the energy in their test function yields an expression that includes the factor of\[

\zav{ \frac{\ell!}{(\ell+1/2)!} }^2 \approx \frac{1}{\ell}

You see that this ratio of the factorials of an integer and a nearby half-integer is exactly what we discussed at the beginning. While for a large \(\ell\) the squared ratio is just \(1/\ell\), for small values of \(\ell\) the factorial of the half-integer brings in a factor of \(\sqrt{\pi}\) – so after the squaring, one recovers a \(\pi\) whose origin is the same as in the Wallis formula.
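The behavior of this squared ratio of nearby factorials is easy to tabulate (via `math.lgamma`, with \(\Gamma(x+1)=x!\); the function name is mine):

```python
from math import lgamma, exp, pi

def ratio_sq(l):
    """( l! / (l+1/2)! )^2, computed via log-gamma for numerical safety."""
    return exp(2 * (lgamma(l + 1) - lgamma(l + 1.5)))

print(ratio_sq(0) * pi)          # at l = 0 the squared ratio is 4/pi
for l in (10, 100, 10000):
    print(l, ratio_sq(l) * l)    # approaches 1 for large l
```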

The only part of the calculation that isn't "rather obvious" is that the minimal energy of their test function is the "square of the ratio of the nearby factorials". I have personally not verified it but I trust them. Otherwise everything has to work and the Wallis formula has to emerge in this reasoning about the hydrogen atom.
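If you don't want to trust them either, the variational step is straightforward to check numerically. A sketch in atomic units (\(H=-\frac12\nabla^2-1/r\)), assuming the trial function above; all expectation values reduce to Gaussian moments via the standard identity \(\int_0^\infty r^n e^{-2\alpha r^2}\,dr = \Gamma(\frac{n+1}{2})/(2(2\alpha)^{(n+1)/2})\), and the function names are mine:

```python
from math import gamma, sqrt

def energy(l, alpha):
    """<H> in the trial state psi = r^l exp(-alpha r^2) Y_lm, atomic units."""
    def I(n):  # integral_0^inf r^n exp(-2 alpha r^2) dr
        return gamma((n + 1) / 2) / (2 * (2 * alpha) ** ((n + 1) / 2))
    norm = I(2 * l + 2)
    # kinetic energy of the radial function u = r^(l+1) exp(-alpha r^2):
    # (1/2) int [(u')^2 + l(l+1) u^2 / r^2] dr, divided by int u^2 dr
    T = 0.5 * ((l + 1) * (2 * l + 1) * I(2 * l)
               - 4 * alpha * (l + 1) * I(2 * l + 2)
               + 4 * alpha ** 2 * I(2 * l + 4)) / norm
    V = -I(2 * l + 1) / norm    # the Coulomb term <-1/r>
    return T + V

def min_energy(l, a=1e-6, b=10.0):
    """Golden-section search for the minimum of energy(l, alpha) over alpha."""
    phi = (sqrt(5) - 1) / 2
    for _ in range(200):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if energy(l, c) < energy(l, d):
            b = d
        else:
            a = c
    return energy(l, (a + b) / 2)

for l in (0, 1, 5, 20):
    exact = -1 / (2 * (l + 1) ** 2)   # lowest eigenvalue with this l (n = l+1)
    print(l, min_energy(l) / exact)   # ratio approaches 1 as l grows
```

The minimized energy stays above the exact eigenvalue, as the variational principle demands, and the ratio indeed tends to one in the large-\(\ell\) limit.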

Well, I still can't get rid of the feeling that they have inserted the Wallis formula into their calculation totally unnecessarily because they only need the factorials of large half-integers and those may be calculated without any \(\pi\) – so it's exactly like finding a formula\[

\pi = 16\,{\rm arctan}(1/5) - 4\,{\rm arctan}(1/239)

\] in a calculation by inserting this expression instead of \(\pi\) somewhere and rewriting it as \(\pi\) again in the following step. ;-) But even if the Wallis formula weren't essential in their calculation, it would probably be possible to find another calculation where it would be essential.
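(That displayed identity is Machin's formula of 1706, and it does hold to machine precision:)

```python
from math import atan, pi

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239)
machin = 16 * atan(1 / 5) - 4 * atan(1 / 239)
print(machin, pi)   # agree to machine precision
```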


It's a cute calculation: the Wallis formula, which is known to a large enough fraction of the educated public, emerges, and people may say that it is cool. However, at the end, I would say that this situation is no different from basically any mathematical calculation (in physics or any mathematics that looks similar to physics) that may be done analytically. In every calculation that may be done analytically – and the hydrogen atom is analytically solvable – one has to use some clever tricks and identities. In this case, it was basically the Wallis formula. In other cases, it is a different identity.

(Incidentally, one could argue that e.g. my rather well-known first calculation of the quasinormal modes has used the very same identities valid for the generalized factorial as Hagen and Friedmann. Our later paper with Neitzke used some cool monodromy identities describing the behavior of the Bessel functions, and so on.)

So this calculation is only special to the extent to which you consider the Wallis identity to be qualitatively neater or deeper or more fundamental than other insights in mathematics. Well, I would probably count myself in the "No" camp. The Wallis identity is just one among hundreds of comparably important identities in mathematics – and one among dozens of identities that basically include just simple functions and \(\pi\).

In other words, I think that this hype has been largely misplaced, too. It would be great if this paper had been used as an opportunity for many people to learn something deep and important – for example, the possibility to solve certain systems exactly; or the omnipresence of constants like \(\pi\) in mathematics and theoretical physics – but I think that the actual message that these popular articles try to convey is very different.

It's a message that "\(\pi\) is weird" – just like when the journalists write that "quantum mechanics is weird" – and whenever it occurs somewhere, you should be shocked. It's just like magic. Well, that's complete nonsense. Much like quantum mechanics, \(\pi\) is not weird at all. Both of them are completely normal and omnipresent. Mathematics and/or Nature are built out of them. Quantum mechanics and \(\pi\) – and perhaps other things that are often presented as weird magic exceptions – are the roots of being and its beauty. Too bad that these basic features of the beauty and the architecture of Nature are not being communicated to the broader public at all.

