A few days ago, a LIGO skeptic named Jason C-65 added a comment under a text of mine explaining why LIGO skeptics are idiots:

> The detector is essentially an interferometer. It uses mirrors made of atoms, and yet it is supposed to resolve distances to less than a proton. I know that's bullshit.

Well, like many others, Jason is totally wrong. LIGO indeed does measure the length of the L-shaped tunnel with a precision much better than the size of an atom. Well, an atom is \(10^{-10}\) meters in size, a proton is \(10^{-15}\) meters. But LIGO goes further, to \(10^{-20}\) meters or so. There is no contradiction with anything and Jason's expectation that "it must be impossible" is based on his completely different "theory" with which he looks at distances and their precision.

You know, LIGO has an interferometer inside. A LASER beam goes back and forth many times through the 4-kilometer tunnel. The wavelength of the light is 1 micrometer – infrared – but the distances may clearly be measured with a much better precision than the wavelength. You know, it's because

- the beam goes back and forth many times, \(N\) times, and the distance that may be resolved is improved, i.e. shortened, \(N\)-fold
- on top of that, the \(N\)-fold-folded length is measured with a much better accuracy than \(\lambda\) because the phase is measured with a much better accuracy than \(2\pi\): a sufficiently strong laser beam helps, and the experimenters may find a very accurate fit that pins down the precise position of the interference pattern in the interferometer
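The two improvements above can be combined into a back-of-the-envelope estimate. The numbers below are illustrative assumptions (LIGO's actual number of round trips and phase sensitivity differ), but they show how a micrometer wavelength can resolve sub-proton displacements:

```python
# Rough order-of-magnitude sketch of interferometric resolution.
# All numbers are illustrative assumptions, not LIGO's actual specs.

wavelength = 1e-6         # laser wavelength in meters (infrared)
N = 300                   # assumed number of round trips of the beam
phase_fraction = 1e-10    # assumed fraction of 2*pi resolvable in the phase fit

# Folding the beam N times divides the resolvable length N-fold;
# resolving a tiny fraction of the phase improves it much further.
resolution = wavelength / N * phase_fraction
print(f"resolvable displacement ~ {resolution:.1e} m")  # ~3e-19 m
```

Even with these made-up inputs, the resolvable displacement lands many orders of magnitude below the proton radius.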

This LIGO page explains the basic sources of noise: seismic noise, thermal noise, quantum noise, gas noise, charging noise, laser noise, auxiliary degree-of-freedom noise, oscillator noise, beam jitter, scattered light, and electronic noise. We may discuss each of them, they are interesting, some of them are unsurprisingly greater than others, others may often be neglected, and so on.

We could say that the number of "noises" is large and each of them makes the precision worse. Enumerating many sources of noise cannot be enough to show why Jason is wrong. Many sources of noise make it harder (or "more impossible") for LIGO to work, right? Instead, Jason – like lots of the laymen – makes a mistake at a different place, the very early expectations. In fact, we could say that Jason and other laymen basically believe that

> if we find some error margin of the location, then all locations and distances – every quantity that may be naturally expressed in the units of meters – have at least this error margin.

In particular, Jason believes – and he's not the only one even in this particular belief – that all positions, locations, and distances have the error margin of one atomic (Bohr) radius or larger and it simply can't be better. Well, this is completely wrong. What's wrong is that Jason starts with some totally unjustified disbelief in the statement that

> the laws of physics work.

But the laws of physics do work and the error margins are very different from each other, depending on

> *which precise distance, length, location, or position*

we consider. Some of these quantities with the units of one meter are known much, much more precisely than one Bohr radius – and even much, much more precisely than one Planck length, \(10^{-35}\) meters – otherwise direct inconsistencies would immediately follow from that error. You had better avoid throwing random errors that you believe "not to hurt" at random places! If you don't start with the laws of physics applied very precisely, with an accuracy better than one Planck length, you may easily create errors that cannot be fixed later. Physics often needs this accuracy and one may verify it with that accuracy.

Let's try to be a bit more accurate and ask: What kind of an error margin is the atomic radius? Well, the atomic radius describes the relative position of the electron in comparison with the nucleus in an atom. Quantum mechanics tells us that the position isn't sharply determined, it has an uncertainty, and the uncertainty is the Bohr radius or so:

\[
\vec R = \vec R_{e^-} - \vec R_{\rm nucleus}, \quad |\Delta \vec R| \sim 10^{-10}\,{\rm m}
\]

Quantum mechanics implies that the Bohr radius is approximately the right one because the momentum has to have the inversely related uncertainty, by the uncertainty principle. But "too large an atom" reduces the binding energy while "too fast an electron" adds too much kinetic energy. If we minimize the total energy, we find out that the minimum is achieved approximately for \(|\Delta \vec R|\) comparable to the Bohr radius, as written above.
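The energy-minimization argument can be checked numerically: the kinetic cost of localization, \(\hbar^2/(2m r^2)\), competes against the Coulomb binding, \(-e^2/(4\pi\varepsilon_0 r)\), and the minimum sits right at the Bohr radius. A brute-force sketch:

```python
import numpy as np

# Minimize E(r) = hbar^2/(2 m r^2) - e^2/(4 pi eps0 r):
# the kinetic cost of localization plus the Coulomb binding energy.
hbar = 1.0545718e-34    # J*s
m_e  = 9.1093837e-31    # kg
e    = 1.60217663e-19   # C
eps0 = 8.8541878e-12    # F/m

r = np.linspace(1e-12, 5e-10, 200000)   # trial localization radii in meters
E = hbar**2 / (2 * m_e * r**2) - e**2 / (4 * np.pi * eps0 * r)
r_min = r[np.argmin(E)]

print(f"optimal radius ~ {r_min:.2e} m")  # ~5.3e-11 m, the Bohr radius
```

The crude grid search lands on \(\approx 5.3\times 10^{-11}\) meters, reproducing the Bohr radius without solving any differential equation.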

But it's mostly the electron that is frantically flying around the nucleus – at a rather unspecified place. The nucleus itself is moving less. Because the proton is 1836 times heavier than an electron, the distances traveled during its motion are shorter by a factor of 1836 or so, taking the light hydrogen atom. And the relative positions of nucleons or quarks inside the nucleus are given much more precisely than that. After all, the wave function for the quarks is analogous to the wave function for the electrons in the atom – except that the radius of the cloud is \(10^{-15}\) meters, some 100,000 times shorter than the atomic "cloud" describing the electron in the atom.

It doesn't matter that the nucleus itself is vibrating – in the opposite direction from the electron. The electron is moving plus minus \(10^{-10}\) meters and the nucleus is therefore moving in the opposite way, by almost \(10^{-13}\) meters, because the center of mass remains at rest (we may assume so). So the center-of-mass position of the nucleus is vibrating by \(10^{-13}\) meters while the relative positions of the quarks inside the nucleus are surely known with a precision better than \(10^{-15}\) meters – QCD needs this precision to explain how the nucleons work inside.
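The recoil estimate above is just momentum conservation in the hydrogen atom: the nucleus's displacement is the electron's excursion divided by the mass ratio.

```python
# Momentum conservation: m_e * dx_e = m_p * dx_p, so the nucleus's
# recoil displacement is the electron's excursion over the mass ratio.
dx_electron = 1e-10    # electron's positional spread, meters (Bohr radius)
mass_ratio = 1836      # proton-to-electron mass ratio (light hydrogen)

dx_nucleus = dx_electron / mass_ratio
print(f"nuclear recoil spread ~ {dx_nucleus:.1e} m")  # ~5.4e-14 m, almost 1e-13
```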

There is absolutely no contradiction here.

> **Composite particles such as nucleons, nuclei, and atoms have some (1) center-of-mass positions, and (2) relative positions between the parts.**

I decided to write this sentence in the bold face and turn it into a block quote because I believe that this is a central yet rudimentary assertion that is counterintuitive for the likes of Jason. And we might even link this difficulty to another general difficulty that the laymen usually have – the anti-quantum zeal. The quote above is counterintuitive exactly because such a situation seems impossible in classical physics, at least in the conventional way of thinking about classical physics. And the funny fact is that in quantum mechanics, the center-of-mass position often has a much smaller uncertainty than the relative positions between the parts!

You know, in classical physics, if you are a kid and you are building a composite state, e.g. a model of the atom, you add many plastic balls. And if the inaccuracy of the shape of one plastic ball – e.g. for an electron or a quark – is one millimeter, then it's tempting to conclude that the "error of the distances of the whole atom, the composite structure" is greater than or equal to the "error of the distances on the plastic electron or quark". In other words, this intuition seems to imply that when we're adding components together, the error margins of the "whole" always go up – the whole just accumulates all the errors from the parts. Why do the laymen expect such a thing? Because they imagine the "error margin of the location" to be some objectively real error that simply "affects" any other location or length that seems to "depend" on the parts.

But that's not how it works in quantum mechanics. The center-of-mass position of an atom (or a nucleus, or a molecule) and the relative positions of parts are independent observables and they may have independent distributions: the first one may have a greater error margin than the second one; or the second one may have a greater error margin than the first one. An atom is "made out of" the smaller elementary particles; but the center-of-mass isn't made out of the relative positions of these elementary particles. It's perhaps made of the "absolute" positions but when those are very uncertain, it just doesn't mean that their differences and/or averages are very uncertain.
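The independence of the two observables is easy to illustrate with a toy numerical model (the widths below are made up for illustration): choose the distributions of the center-of-mass and relative coordinates independently, reconstruct the individual positions, and note that both individual positions are very uncertain while the center of mass stays razor sharp.

```python
import numpy as np

rng = np.random.default_rng(0)
m_e, m_N = 1.0, 1836.0        # masses in units of the electron mass
M = m_e + m_N

# Choose the *independent* distributions directly: a huge spread for the
# relative coordinate, a tiny one for the center of mass (made-up widths).
rel = rng.normal(0.0, 1e-10, 100000)   # relative coordinate, ~ Bohr radius
com = rng.normal(0.0, 1e-20, 100000)   # center of mass, vastly sharper

# Reconstruct the individual ("absolute") positions from (com, rel):
x_e = com + (m_N / M) * rel            # electron position
x_N = com - (m_e / M) * rel            # nucleus position

print(f"electron spread: {x_e.std():.1e}")   # ~1e-10: very uncertain
print(f"nucleus spread:  {x_N.std():.1e}")   # ~5e-14: uncertain, too
print(f"c.o.m. spread:   {((m_e*x_e + m_N*x_N)/M).std():.1e}")  # ~1e-20
```

The weighted average of two very uncertain positions recombines into the sharp center of mass exactly, because the large fluctuations of the parts are perfectly correlated and cancel.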

OK, the internal relative separations between the quarks in a nucleus are distances much shorter than atomic ones. LIGO may measure distances that are far shorter than atoms and even than protons. These much shorter distances are real and the fact that the laws of physics work with this better-than-proton-radius precision may be experimentally verified in many ways.

Aside from the total laymen, these rudimentary confusions about the error margins – the misunderstanding of the fact that "some locations" have vastly smaller error margins than others – affect the professional physics amateurs, too. All the people who believe in loop quantum gravities and causal dynamical triangulations and spin foams and similar pseudoscientific superstitions are basically victims of the same basic misunderstandings. They believe that

> *any quantity with the units of meters* has an error margin comparable to one Planck length, around \(10^{-35}\) meters, and it's just impossible to resolve shorter distances.

One very explicit example is the wavelength of the electromagnetic light. These people believe that "no distances shorter than the Planck length exist", so the photons can't have shorter-than-Planckian wavelengths, either. But that's completely wrong, too. Special relativity guarantees that the wavelength may become arbitrarily short. Just boost the photon a little bit more – by switching to an inertial system whose speed is a little bit closer to the speed of light – and the wavelength will keep on dropping.
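The boost argument is pure relativistic Doppler: switching to a frame moving toward the photon multiplies the wavelength by \(\sqrt{(1-\beta)/(1+\beta)}\), which can be made arbitrarily small. A sketch (the amount of the boost is an arbitrary illustrative choice):

```python
import math

# Relativistic Doppler: boosting toward the photon shrinks its wavelength
# by sqrt((1-beta)/(1+beta)); nothing stops this below the Planck length.
planck_length = 1.6e-35     # meters
wavelength = 1e-6           # start with an infrared photon, meters

one_minus_beta = 1e-60      # assumed: how close to the speed of light we boost
doppler = math.sqrt(one_minus_beta / (2 - one_minus_beta))
boosted = wavelength * doppler

print(f"boosted wavelength: {boosted:.1e} m")   # ~7e-37 m
print(boosted < planck_length)                  # True: sub-Planckian
```

Working with `one_minus_beta` directly avoids the floating-point trap of writing `beta = 1 - 1e-60`, which would round to exactly 1.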

The actual statement in quantum gravity is that *some effects* that are incompatible with "simple objects located at precise positions in the flat space" start to appear once you demand the precision to be better than one Planck length. But it's only *some effects*. Consistency guarantees that many quantities with the units of one meter may still be damn real even though they are much shorter than one Planck length.

There is a really elementary proof of the fact that "wavelengths may be far shorter than the Planck length": just consider the de Broglie wavelength of a heavier object. The de Broglie wavelength is about \(h / p\), the ratio of Planck's constant and the momentum. Use the schoolkids' SI units for a while: \(h\) is of order \(10^{-34}\) joule-seconds so if \(p\) is of order \(10\,{\rm kg\cdot m/s}\), then the wavelength will be of order one Planck length. It may easily be made shorter because there is no problem in getting objects whose momentum is much greater than \(10\) in the SI units. Just kick the soccer ball a little bit stronger. ;-)
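The arithmetic of the soccer-ball proof in two lines:

```python
h = 6.626e-34            # Planck's constant, J*s
planck_length = 1.6e-35  # meters

p = 10.0                 # momentum of a kicked ball, kg*m/s
print(f"de Broglie wavelength: {h / p:.1e} m")  # ~6.6e-35 m, a few Planck lengths

p = 100.0                # kick it a bit harder
print(h / p < planck_length)  # True: comfortably sub-Planckian
```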

I suspect that the likes of Jason – and maybe even the loop quantum gravity-style pseudoscientists – would still object. They would probably claim that the de Broglie wave of a composite object is "unreal". Only the wave functions of elementary particles such as electrons are "real", i.e. parts of the correct physical description. But this anti-quantum zeal is completely incorrect once again. The wave functions work for composite and heavier objects, too. Many such situations may be experimentally tested. There's nothing so hard about the very short wavelength of a heavy object. Even if you have the wave function for a union of unentangled particles described by plane waves, the union will have the wave function that is simply the product of the individual building blocks' wave functions – and the momenta (i.e. inverse wavelengths) get added in a simple way. The more building blocks you have, the shorter the wavelength of the product-type wave function will be.
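The product-of-plane-waves statement is a one-line identity, \(\prod_k e^{ip_k x/\hbar} = e^{i(\sum_k p_k)x/\hbar}\), which we can verify numerically (with illustrative momenta and \(\hbar = 1\)):

```python
import numpy as np

hbar = 1.0                        # work in units with hbar = 1
x = np.linspace(0, 1, 1000)

# Three unentangled building blocks, each a plane wave with its own momentum:
momenta = [2.0, 3.0, 5.0]
product = np.ones_like(x, dtype=complex)
for p in momenta:
    product *= np.exp(1j * p * x / hbar)

# The product is itself a plane wave whose momentum is the sum of the
# parts' momenta -- hence a shorter wavelength than any single factor.
total = np.exp(1j * sum(momenta) * x / hbar)
print(np.allclose(product, total))  # True
```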

We started with the misconception that "the composite systems' locations have error margins that are at least as large as the error margins of the building blocks' positions". This is a special case of a slightly more general anti-quantum misconception – the idea that "some observables are universally elementary" and "others are made out of them", so these "others" must have greater error margins. But quantum mechanics doesn't respect any such "hierarchy" in the importance of observables. Observables may be written as some "operator-valued functions" of other observables in many ways – and these relationships may generically be inverted. All observables – all Hermitian linear operators – are conceptually equally good in quantum mechanics. \(L\) may have a smaller error margin than \(M\) or vice versa.

This is a fact that is hard for the anti-quantum zealots and prevents them from properly understanding the quantum entanglement. In the quantum entanglement – think about the singlet state of two spins – some "collective properties of the pair" may be completely determined (such as the total spin of the two particles which is exactly zero in the singlet state) while the parts' properties (like the \(z\)-components of the two individual spins) may be much more uncertain or completely uncertain. This is what is perfectly possible in quantum mechanics and it doesn't violate any valid rule or law whatsoever. Indeed, this situation describing the entanglement is completely analogous to the situation we started with, namely the situation in which the center-of-mass position of an atom is known much more accurately than the relative positions of electrons and the nucleus of that atom. "The whole" may be almost certain and error-free even though the parts have large errors! The electron may be 1 Bohr radius to the left or 1 Bohr radius to the right – but the nucleus is then located at the appropriate "opposite" place so that the center-of-mass weighted average is still precisely at the origin.
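The singlet example can be checked directly with a few matrices (spin operators in units of \(\hbar = 1\)): the collective \(S_z\) has exactly zero variance while each individual \(S_z\) is maximally uncertain.

```python
import numpy as np

# Spin z-component of one spin-1/2 particle, in units with hbar = 1.
sz = 0.5 * np.diag([1.0, -1.0])
I = np.eye(2)

# Singlet state of two spins: (|up,down> - |down,up>) / sqrt(2).
singlet = np.zeros(4)
singlet[1], singlet[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

Sz1 = np.kron(sz, I)       # z-spin of the first particle
Sz2 = np.kron(I, sz)       # z-spin of the second particle
Sz_total = Sz1 + Sz2       # collective property of the pair

def mean(op): return singlet @ op @ singlet
def var(op):  return mean(op @ op) - mean(op) ** 2

print(var(Sz_total))  # 0.0: the collective quantity is perfectly sharp
print(var(Sz1))       # 0.25: each individual spin is maximally uncertain
```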

Needless to say, this is not just an analogy. The situation involving the error margins of electrons' and atoms' positions is a *special example* of the situation involving the quantum entanglement because "a pretty localized atom" is a particular entangled state of an electron and of the nucleus. The error margin of the electron's position – and even the nucleus' position – may be \(10^{-10}\) or \(10^{-13}\) meters, respectively, but the error margin of the center-of-mass location of the atom may be much smaller than \(10^{-20}\) meters and actually measured by LIGO with this precision.

The lessons of this blog post should be taught and explained by popular articles – because a huge fraction of the laymen, and probably an overwhelming majority, is thinking incorrectly about these matters whenever they start to be exposed to quantum mechanics (and/or claims about the LIGO measurements). To summarize, the uncertainties of quantities describing "composite objects" are often much smaller than the error margins of "relative positions" inside the objects etc. and there is absolutely nothing wrong about such a situation.

And that's the memo.
