In June, a Western European physicist with a blog is organizing a workshop on nonlocality:

Workshop on Nonlocality, June 27-29 (B.R. blogspot blog)

So far, 10 people have shared their intent to attend. Some of them are just a Mr or a Ms. George Musser belongs to the best that Scientific American has to offer these days but it's still true that "Scientific" American has sucked for a decade or so. And there are a few people who have something to say but no one will listen to them, anyway.

Current registrants (Workshop's website)

"Nonlocality" is a cool buzzword but the actual research that uses this buzzword is split into several categories that have nothing to do with each other and the research in most of these categories is pseudoscientific in character.

*An experimental physicist training telekinesis, a version of nonlocality*

What do I mean? Let us spend some additional time with locality and nonlocality.

As this blog has discussed many times, locality is an inevitable consequence of the special theory of relativity. The Lorentz symmetry underlying this theoretical framework implies that an action that would be faster than light in the vacuum would be equivalent, by a Lorentz transformation (a change of the reference frame), to an action that influences your past.

This can't happen because one would deal with the paradox of your prematurely castrated grandfather (you could do it to him before he had sex with your grandma). Consequently, locality has to hold. Signals are not only forbidden from propagating into the past; they're not allowed to propagate faster than light, either. One of the corollaries is that if you want to influence a distant place in the Universe, something will have to happen at (or pass through) any plane that sits between you and the distant place.

Voodoo, telepathy, telekinesis, and similar things are banned by the laws of physics. Now, there exists a lot of confusion about this point. People believe in many kinds of "loopholes" that actually don't exist. The research that loves to use the term "nonlocality" may be divided into these basic groups:

- Attempts to introduce nonlocality to **"get rid of" dark matter**, dark energy, or other things
- Nonlocality as a consequence of otherwise unmotivated and unjustifiable "alternative" theories of physics such as the **Hořava-Lifshitz gravity**
- Effects in theories that would love to be "somewhat relativistic" and "somewhat non-relativistic", especially the so-called **Doubly (or Deformed) Special Relativity**
- Misconceptions about **nonlocality in quantum mechanics**, motivated by people's fundamental misunderstandings of the meaning of quantum mechanics in general and entanglement in particular
- Appearance of weak nonlocal effects in the process of the preservation of the **black hole information** and related weak nonlocal effects in string theory (in limited environments) in general

**Special relativity**

For thousands of years, people understood that we can't change the past – although totalitarian ideologies often try to rewrite history (and e.g. HadCRUT4 was just released to make the past look cooler than it did according to HadCRUT3, the self-evident main goal being to raise the apparent warming). We may only influence the future. In other words, if we use the usual convention for the sign of time \(t\) that increases into the future, the coordinates of the cause and its effect have to obey\[

t_\text{cause} \lt t_\text{effect}.

\] That's a very simple assertion ("the cause precedes its effect") that holds in Newtonian physics – and that was known long before Isaac Newton was born. We may call this inequality "causality" (and reserve the term "determinism" for the idea that everything is determined by the initial state). It's logically necessary because we know that we may affect the future. If it were possible for influences to propagate in the opposite direction as well, we could easily identify "closed loops" or generalized "closed time-like curves" that would lead to logical paradoxes.

However, special relativity says that all inertial reference frames related by Lorentz transformations are equally good as frameworks to describe the physical phenomena. Imagining that the only other spacetime coordinate besides \(t\) is \(z\), the Lorentz transformation takes the form\[

t = \frac{t'-vz'/c^2}{\sqrt{1-v^2/c^2}}, \qquad z = \frac{z'-vt'}{\sqrt{1-v^2/c^2}}

\] where \(v\) is the relative velocity between the two reference frames. We may translate the inequality above to the reference frame that uses the primed coordinates as follows:\[

t'_\text{cause} - \frac{vz'_\text{cause}}{c^2} \lt t'_\text{effect} - \frac{vz'_\text{effect}}{c^2}

\] The common square root in the denominators has been cancelled. It's important that the inequality above – which is just the original "causality" condition – has to hold in all reference frames, i.e. for all values of \(v\) that are smaller than \(c\). The velocity may approach \(v\approx \pm c\) arbitrarily closely. Write the inequality above in these two limiting cases and you will see that it also implies\[

t'_\text{effect} - t'_\text{cause} \geq \frac{1}{c} \abs{ z'_\text{effect} - z'_\text{cause} }

\] If we dealt with all the spatial coordinates, the absolute value would simply become the distance between the places where the "cause" and "effect" (two events in spacetime) occur. At any rate, the formula above says that the effect can't come earlier than the time that light needs to fly from one place to another. Light in the vacuum – or anything with the same speed (e.g. gravitons) – is the fastest "messenger" or a "tool to influence" that Nature allows.
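The frame-independence argument above is easy to check numerically. Here is a minimal Python sketch of my own (assuming only numpy, with units where \(c=1\)): it applies the 1+1-dimensional Lorentz transformation quoted earlier to a timelike pair of events and to a spacelike pair, scanning over boost velocities.

```python
import numpy as np

def lorentz(t, z, v, c=1.0):
    """Transform event (t, z) to a frame boosted by velocity v."""
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * z / c**2), g * (z - v * t)

cause, effect = (0.0, 0.0), (1.0, 0.5)   # timelike pair: |dz| < c*dt

# Timelike-separated events: the time ordering is the same in every frame.
velocities = np.linspace(-0.99, 0.99, 199)
orderings = [lorentz(*cause, v)[0] < lorentz(*effect, v)[0] for v in velocities]
print(all(orderings))   # True

# Spacelike pair (|dz| > c*dt): the ordering flips in some frames, so it
# could not consistently encode a cause-effect relation.
effect2 = (1.0, 2.0)
orderings2 = [lorentz(*cause, v)[0] < lorentz(*effect2, v)[0] for v in velocities]
print(all(orderings2), any(orderings2))  # False True
```

The flip for the spacelike pair is exactly why a superluminal signal would amount to influencing the past in some reference frame.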

It doesn't matter whether your theory is classical, quantum, male, female, or produced out of sockpuppets. The derivation above didn't make any assumptions about these matters so it has to hold generally. And be sure that it does hold. Still, I've listed "five flavors" of nonlocality that people like to talk about. Let's look at them, starting from the last one.

**Black hole information puzzle**

When Stephen Hawking made his calculation of the black hole radiation that is named after him today, he was able to compute the temperature and many other details about the radiation. But one property of the radiation was easy to see without the full-fledged formalism of quantum field theory on curved backgrounds: the Hawking radiation couldn't have depended on the initial state of matter that collapsed into the black hole.
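For concreteness, the temperature Hawking computed is \(T_H = \hbar c^3 / (8\pi G M k_B)\) for a Schwarzschild black hole of mass \(M\). A quick numerical sanity check in Python (a sketch of mine, not part of the text above) shows how absurdly cold a stellar-mass black hole is:

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J/K
M_sun = 1.989e30         # kg

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

T = hawking_temperature(M_sun)
print(f"{T:.2e} K")   # ~6e-8 K, far below the CMB temperature of 2.7 K
```

The number makes it clear why the tiny imprints of the initial state discussed below are unobservable in practice.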

The initial deviations of the newborn black hole from its ultimate static shape disappear quickly, exponentially. This stabilization of the newborn black hole may be (at least approximately) described by the ringing or quasinormal modes. Very quickly, the black hole becomes nearly static. Whatever happens with the collapsed matter inside the black hole can't influence the Hawking radiation because of locality: the region where the Hawking radiation propagates is spacelike-separated (i.e. reachable only by superluminal signals) from the region where the collapsed matter approaches the black hole singularity. The locality arguments sketched above, when generalized to the case of curved spacetimes, strictly prohibit such an influence. That's why Hawking deduced that the radiation had to be exactly thermal and independent of the initial state.

In the mid 1990s, string theory made it clear that while this conclusion is correct for all practical purposes, the information is fundamentally not lost. Subtle nonlocal influences imprint the properties of the initial state into the Hawking radiation. The latter still looks almost exactly thermal but it's not quite thermal. One may say that the information may "tunnel" from the black hole interior to the exterior. It may violate the cosmic speed limit "for a while" and produce exponentially tiny imprints in the Hawking radiation that remember the initial state although the information can't be extracted in practice. In particular, Matrix theory and AdS/CFT make it totally indisputable that even though string/M-theory contains black holes and they behave exactly as they should in all the approximations we may consider, it's still a theory that fully conserves the information.

A few years ago, Stephen Hawking surrendered in his black hole war against Lenny Susskind, gave up his bet against John Preskill, and declared himself as a co-discoverer of the preservation of the information in the black hole, too. ;-) Of course, he wasn't really a discoverer of that but he developed a psychological framework that allowed him to understand why the actual theory of quantum gravity may find a loophole and produce a different qualitative conclusion than the one he had believed for 30 years. His semiclassical calculations are extremely accurate but they're expansions around a point that is "qualitatively misleading": the expansion forces you to destroy the information to all orders.

But the exact answer doesn't destroy it because the black hole interior isn't really "sharply well-defined". A particle located in the black hole interior is similar to an alpha-particle inside a nucleus that may alpha-decay. There's no real "permanent state" of the alpha-particle in which it would be strictly isolated from the exterior of the nucleus. The particle may simply tunnel out and escape. The same is true for the matter or mass confined by the black hole. It may also get out. Just to be sure, if I talk about the tunneling, it doesn't allow you to send "small amounts of information" superluminally in any context. It only works in the presence of black holes. I don't want to go into details.

Michele Arzano, the first participant of the Nonlocality Workshop, knows quite a bit about the interpretation of the Hawking radiation in terms of quantum tunneling but I am afraid that it will be above the other participants' heads.

String/M-theory is based on elementary objects that are extended in space – e.g. strings. So in some sense, it allows "slightly nonlocal effects" to operate. However, this intuition is still misleading. Physical phenomena (including the splitting and merging of the stringy pieces) are still local on the string or the world sheet. And even if you describe string theory as a string field theory in which the whole string is represented by a field at one point, the string's center of mass, you still reproduce some conditions that you expect from a fully local theory. In particular, perturbative string theory seems to saturate some conditions for high-energy scattering that may be derived for local quantum field theories.

One must be very careful about these things. There are "some effects" that might be called "nonlocal" that string theory allows but it still vehemently forbids many others. String theory isn't a garden-variety nonlocal "rebel" theory that violates everything. It is highly conservative and when you study it using the right language, it's actually as local as local quantum field theory.

I don't want to focus on this topic here so let's switch to another category I mentioned above:

**Misconceptions about nonlocality in quantum mechanics**

This is the most frequently cited "nonlocality" in popular physics books. In fact, the popular physics books seem to present a near "consensus" that there's something nonlocal about the way quantum mechanics operates. Despite this consensus and the fact that some otherwise pretty good physicists are participating in it, all these claims are complete rubbish.

The "quantum nonlocality" claim revolves around the EPR phenomena and entanglement. Readers of popular and sometimes even not-so-popular books and articles are led to believe that an object in an entangled pair may remotely control its cousin. But nothing like that occurs.

Entanglement is nothing other than the quantum version of the concept of correlation. It either represents *any* correlation between two subsystems that is properly described and understood in the language of quantum mechanics; or it refers to those correlations that make the subsystems behave differently than anything in classical physics. Technically, an entangled state is an element of a tensor product Hilbert space that can't be written as a tensor product of two states:\[

\ket\psi\in \HH_\text{here}\otimes \HH_\text{there},\qquad

\ket\psi\neq \ket\psi_\text{here} \otimes \ket\psi_\text{there}

\] Those states represent immense psychological problems for people who can't reconcile themselves with the fact that the world doesn't obey the laws of classical physics. It doesn't even obey the "bare realist logic" of classical physics. Everyone who tries to "squeeze" quantum physics into a classical straitjacket ultimately starts to talk about nonlocality and similar nonsense.

However, entangled states are nothing "extraordinary" that would force us to revise the rules of relativity, e.g. the proof that there can't be any nonlocality that was presented at the beginning of this blog entry. Entangled states aren't miraculous or special or supernatural. They're generic states in the Hilbert spaces. Almost all states in a tensor product Hilbert space are entangled. If you don't learn how to properly predict situations involving entangled states, then you misunderstand at least 99.99999 percent of quantum mechanics. There isn't any quantum mechanics without entangled states.
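The claim that almost all states are entangled can be seen in a short numpy sketch of my own. A two-qubit state \(\ket\psi = \sum_{ij} C_{ij}\ket i\ket j\) is encoded as a \(2\times 2\) coefficient matrix; it is a tensor product if and only if the matrix has rank one, i.e. a single nonzero Schmidt (singular) coefficient.

```python
import numpy as np

def schmidt_coefficients(C):
    """Singular values of the normalized coefficient matrix C.
    One nonzero value <=> product state; more than one <=> entangled."""
    C = C / np.linalg.norm(C)
    return np.linalg.svd(C, compute_uv=False)

product = np.outer([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])  # |0>(|0>+|1>)/sqrt2
bell    = np.array([[1, 0], [0, 1]]) / np.sqrt(2)              # (|00>+|11>)/sqrt2

print(np.round(schmidt_coefficients(product), 6))  # [1. 0.]      product state
print(np.round(schmidt_coefficients(bell), 6))     # [0.707107 0.707107] entangled

# A randomly chosen state is entangled with probability one.
rng = np.random.default_rng(0)
C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
print(schmidt_coefficients(C)[1] > 1e-12)          # True
```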

Quantum mechanics is a theoretical framework that makes probabilistic predictions of the outcomes of measurements. Everyone who tries to pretend that quantum mechanics is something else, e.g. a version of classical physics, inevitably drowns in the quantum nonlocal new-age crackpottery at some point.

If the physical system is composed of two subsystems, various quantities such as \(x_1\) and \(y_2\) may be measured on these subsystems \(1,2\). Quantum mechanics gives us some probabilistic predictions – the probability density \(\rho(x_1,y_2)\) for any pair of values of the two quantities that may be measured, whatever they are – according to its totally well-defined rules. Quantum mechanics is a toolkit to deal with the actual observations. It surely isn't a propagandistic tool designed for you to confirm your medieval prejudices about the world as a classical place or something like that. Quite on the contrary, it says that these prejudices are wrong. It says many other things, too.

In almost all cases (at least when the systems \(1,2\) were in contact sometime in the past), the probability distribution for the two quantities will reveal some correlations:\[

\rho(x_1,y_2) \neq \rho_1(x_1) \rho_2(y_2)

\] In the typical case, you will simply be unable to find the distributions for the individual quantities so that the probabilistic distribution for both would be a simple product. This simply says that the two systems have correlated properties. It's not shocking that two objects that met in the past exhibit some correlated properties. Bertlmann's socks do the same thing. Quantum mechanics allows these correlations to affect many more pairs of possibly measured quantities etc. than classical physics could. But that shouldn't be shocking, either. Quantum mechanics isn't classical physics, stupid. So it predicts different statements about things. But they're really the same things as the things in classical physics – observables, their values, and correlations between those values.
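Bertlmann's socks fit in a few lines of numpy. In this sketch of mine (the two-color joint distribution is made up for illustration), the socks are perfectly anti-correlated purely because of their common origin, yet the joint distribution of colors is not the product of the marginals:

```python
import numpy as np

# "Bertlmann's socks": the two socks always have different colors.
# Rows index the first sock's color, columns the second sock's color.
rho = np.array([[0.0, 0.5],
                [0.5, 0.0]])

rho1 = rho.sum(axis=1)          # marginal distribution for the first sock
rho2 = rho.sum(axis=0)          # marginal distribution for the second sock
print(rho1, rho2)               # [0.5 0.5] [0.5 0.5]

# The joint is NOT the product of the marginals: the colors are correlated,
# with no influence at a distance needed to explain it.
print(np.allclose(rho, np.outer(rho1, rho2)))  # False
```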

Now, when quantum mechanics predicts the probability distribution \(\rho(x_1,y_2)\), then Nature – and She knows quantum mechanics very well – will have no problem determining what will happen. If She has to produce a result for \(x_1\) that the first experimenter measures, She may simply start by calculating the probabilistic distribution for \(x_1\) itself. If you were given the two-variable distribution, the one-variable distribution is simply\[

\rho_1(x_1) = \int \dd y_2~\rho(x_1,y_2)

\] The funny thing is that the maths of quantum mechanics guarantees that if the second experimenter decides to measure \(z_2\) instead of \(y_2\), whatever these variables are, the integral over \(z_2\) will yield the same \(\rho_1(x_1)\). This natural result isn't an independent dogma that is inserted into quantum mechanics; it is a result of equations. You may check that it works and the world is "well-behaved" in this sense. But it works for a different reason than the wrong reason that the world is classical and these facts are a priori incorporated into a classical world! The world isn't classical. Our quantum world shares some consistency conditions with classical physics but it isn't equivalent to any classical or realist theory. It's a quantum world, stupid. Some things you have believed to be true may be untrue in our quantum world; other things you have believed to be true may still be true but their proof may require some nontrivial linear algebra instead of "self-evident dogmas"; another class of things you have believed may be just "approximately true".

The fact that the integral above is independent of the choice of the variable \(y_2\) or \(z_2\) ultimately boils down to the completeness relation\[

{\bf 1} = \int \dd y_2~\ket{y_2}\bra{y_2} = \int \dd z_2~\ket{z_2}\bra{z_2}

\] for two orthonormal bases (in this example, continuous bases normalized to the Dirac delta-function). This simple piece of linear algebra with a crisp physical interpretation is the reason why the second experimenter's decision about what to measure – whether \(y_2\) or \(z_2\) – has no impact whatsoever on the probabilistic distribution for \(x_1\). That's a part of the reason why there are no nonlocal influences running between the two subsystems.
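This no-signaling property can be verified directly for a Bell pair. In the numpy sketch below (my own; the two-qubit state is again encoded as a coefficient matrix), the marginal distribution for the first subsystem comes out exactly the same whether the distant experimenter measures in the \(z\) basis or in the \(x\) basis:

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) as a 2x2 coefficient matrix.
C = np.array([[1, 0], [0, 1]]) / np.sqrt(2)

def marginal_1(C, basis_2):
    """Outcome distribution on subsystem 1 when subsystem 2 is measured
    in the orthonormal basis whose columns are basis_2."""
    amps = C @ basis_2.conj()      # amplitudes <i, b_k | psi>
    joint = np.abs(amps) ** 2      # joint distribution over (i, k)
    return joint.sum(axis=1)       # sum over the distant outcome k

z_basis = np.eye(2)
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(marginal_1(C, z_basis))  # [0.5 0.5]
print(marginal_1(C, x_basis))  # [0.5 0.5] -- identical: no signal is sent
```

The sum over the distant outcome is the discrete counterpart of the integral over \(y_2\) above, and the basis-independence is the completeness relation at work.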

So Nature knows the distribution for \(x_1\). It gives the subsystem some freedom but the resulting \(x_1\) must look random, selected according to the calculable distribution. The same thing holds for \(y_2\) or, if you decide to measure it, \(z_2\). The role of \(x_1\) and \(y_2\) is obviously totally symmetric. The two measurements may be spatially separated and very distant and relativity doesn't allow you to say which of them was done first. The time ordering of these two events may depend on the reference frame. But Nature doesn't need to know it. Quantum mechanics only predicts a correlation between \(x_1\) and \(y_2\); it doesn't predict or require any causation. Correlation isn't causation. Whenever there's some correlation in the world – in our quantum world – it's a consequence of the two subsystems' interactions (or common origin) in the past.

You may imagine that Nature first calculates the distribution for \(x_1\), throws dice, collapses the wave function for \(y_2\) so that the freshly measured value of \(x_1\) is already assumed, and then throws dice again according to the distribution \(\rho(x_1^\text{measured},y_2)\). But the "dice" and "collapses" is just a stupid fairy-tale for babies. No "visual mechanism" like that exists in Nature. What exists are the measurable results of experiments and they follow the same distributions whether you imagine that \(x_1\) was decided before \(y_2\) or vice versa.
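If you insist on the collapse fairy-tale, you can at least check that the two orderings predict identical statistics. This numpy sketch of mine (the function names and the sample state are made up) runs the collapse story in both orders for an entangled two-qubit state and compares the predicted joint distributions:

```python
import numpy as np

# A generic entangled two-qubit state, |psi> = sum C[i,j] |i>|j>, norm 1.
C = np.array([[0.6, 0.2],
              [0.1, np.sqrt(0.59)]])

def joint_measuring_1_first(C):
    """Collapse story A: measure subsystem 1, collapse, then measure 2."""
    joint = np.zeros((2, 2))
    for i in range(2):
        p_i = np.sum(np.abs(C[i]) ** 2)          # P(x1 = i)
        if p_i == 0:
            continue
        collapsed = C[i] / np.sqrt(p_i)          # state of subsystem 2 afterwards
        joint[i] = p_i * np.abs(collapsed) ** 2  # P(x1=i) * P(y2=j | x1=i)
    return joint

def joint_measuring_2_first(C):
    """Collapse story B: the same with the roles of 1 and 2 swapped."""
    return joint_measuring_1_first(C.T).T

print(np.allclose(joint_measuring_1_first(C),
                  joint_measuring_2_first(C)))   # True -- ordering unphysical
```

Both stories reproduce the same distribution \(|C_{ij}|^2\), which is why the "who collapsed first" question carries no physical information.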

If you imagine that Nature is a hot babe who throws dice because it makes you more excited or pleased, be my guest. But even in principle, it's impossible to distinguish between the two orderings in which the babe throws the dice so this ordering is totally unphysical. If you don't understand that things that are even in principle indistinguishable by any measurements are fundamentally indistinguishable in physics, and that all the crutches that distinguish them are unphysical superconstructions that mustn't ever be treated as physics, then you're simply not thinking about the world scientifically. You may be thinking about the world in some hardcore materialist (and implicitly classical) way but it isn't the same thing as the scientific approach to the natural phenomena! The dice touching Nature's breasts (or His penis, so that I can't be accused of sexism or homophobia) after She makes a guy called \(x_1\) collapse after an intense intercourse are just your sexual fantasies, stupid. The actual physics doesn't contain anything of the sort.

The paragraphs above were written to make you think twice or thrice or 1,000 times if you still fail to understand that there's no "nonlocal influence" needed to explain the results of measurements of entangled particles. If you still feel that there's some influence here, it's because you're making additional, invalid assumptions about the world.

**Quantum field theory**

Now, the moment of the measurement is just one part of the events that happen to the two subsystems. I've explained that there's no influence here: quantum mechanics only predicts correlations, not causation between the two measurements. The other part of the events according to quantum mechanics is the evolution in time. (That's how the Copenhagen interpretation of quantum mechanics divides "processes" in physics. This division only makes sense "psychologically": the measurement is the process in which we're learning some actual data. But the future properties of a system are unaffected by the question whether or not we consider some process a measurement; these properties are calculable independently of these colorful interpretations.)

I haven't discussed it yet. In general, the evolution in time may include actions at a distance. In nonrelativistic physics, the Sun instantly influences the planets: if it suddenly exploded, the planetary orbits would be affected immediately. You know that according to relativity, such an influence would actually take 8 minutes to get to the Earth.

The simplest class of theories that obey the general postulates of quantum mechanics as well as the rules of special relativity is the class of (relativistic or local) quantum field theories. They may be "constructed" by adding hats above the observables in relativistic classical field theories such as electrodynamics with its Maxwell's equations. The hats don't change anything about the locality of these theories.

In the operator approach to quantum mechanics, the evolution in time is governed by the Hamiltonian, the operator of energy. In quantum field theory, it's essentially an integral of the energy density over space:\[

H = \int \dd^3 x~\rho_{00}(\vec x)

\] That's great because if we study the evolution of two subsystems in two regions, here and there, the relevant part of the Hamiltonian becomes a simple sum of the two Hamiltonians for the separate regions:\[

H = H_1+H_2

\] This additivity or extensiveness of energy is equivalent to locality of the evolution in time. The funny thing is that the Hamiltonians \(H_1,H_2\) commute with the quantities describing the second and first particle, respectively, i.e.\[

[H_1,y_2]=0, \qquad [H_2,x_1] = 0

\] and so on. This follows from the fact that the Hamiltonian density is a function of the local fields and their derivatives and the fields commute (or anticommute) if they're spatially separated. But if you use the Heisenberg picture, the vanishing commutators above make it very clear that the Hamiltonian \(H_2\) has no impact on the probabilistic distributions for \(x_1\) or any other quantity describing the first subsystem or vice versa.

So if you want to predict the probabilistic distributions such as \(\rho(x_1)\) only, you only need to deal with \(H_1\). The evolution of the second, distant subsystem and the measurements that could occur there (as I have already discussed) make absolutely no impact on the probabilities that the first system will experience something or something else. And vice versa.
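Both claims – the vanishing commutators and the irrelevance of \(H_2\) for the first subsystem – can be checked by brute force for a pair of qubits. In this numpy sketch of mine (with random Hermitian matrices standing in for the local Hamiltonians), the reduced state of subsystem 1 is identical no matter which distant \(H_2\) drives the second subsystem:

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)

def rand_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def evolve(H, t=1.3):
    """U = exp(-i H t) for a Hermitian H, via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

H1, H2 = rand_hermitian(2), rand_hermitian(2)
H = np.kron(H1, I2) + np.kron(I2, H2)      # additive, local Hamiltonian

# [H1 x 1, 1 x y2] = 0 for any observable y2 of the second subsystem.
y2 = rand_hermitian(2)
comm = np.kron(H1, I2) @ np.kron(I2, y2) - np.kron(I2, y2) @ np.kron(H1, I2)
print(np.allclose(comm, 0))                # True

# The reduced density matrix of subsystem 1 doesn't depend on H2 at all.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

def reduced_1(psi):
    M = psi.reshape(2, 2)
    return M @ M.conj().T                  # partial trace over subsystem 2

rho_a = reduced_1(evolve(H) @ psi)
H2_other = rand_hermitian(2)               # a completely different distant H2
rho_b = reduced_1(evolve(np.kron(H1, I2) + np.kron(I2, H2_other)) @ psi)
print(np.allclose(rho_a, rho_b))           # True
```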

This doesn't mean that the results of the distant measurements won't be correlated. They will be correlated but the Hamiltonian argument makes it clear that such correlations between the two subsystems can't result from the separate evolution of these subsystems in time; they must always be a consequence of the subsystems' contact in the past. Indeed, if your two subsystems are unentangled i.e. uncorrelated in the quantum sense, their state vector has to be a tensor product\[

\ket\psi_{12} = \ket\psi_1 \otimes \ket\psi_2.

\] But if that's the case, this factorized form of the state vector will be true forever. The state above will evolve to\[

U_{12}\ket\psi_{12} = U_1\ket\psi_1 \otimes U_2\ket\psi_2.

\] where the evolution operators \(U_1,U_2\) may be calculated by simply exponentiating multiples of the Hamiltonians \(H_1,H_2\), respectively. The evolution of the composite system according to the local laws of quantum field theory won't change the degree of its entanglement at all! It may only change the pairs of quantities in which the subsystems are highly correlated or less correlated but for any pair of strongly correlated initial quantities, there will be a pair of quantities describing the two subsystems at any later time that will be exactly equally correlated!
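The invariance of the degree of entanglement under \(U_1\otimes U_2\) is also easy to verify numerically. In this sketch of mine, the entanglement entropy of a Bell state stays exactly \(\ln 2\) after random local unitaries are applied to the two subsystems:

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_unitary(n):
    """A random unitary from the QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def entanglement_entropy(C):
    """Von Neumann entropy of the reduced state, from Schmidt coefficients."""
    s = np.linalg.svd(C / np.linalg.norm(C), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

C = np.array([[1, 0], [0, 1]]) / np.sqrt(2)   # maximally entangled Bell state
U1, U2 = rand_unitary(2), rand_unitary(2)
C_evolved = U1 @ C @ U2.T                     # |psi> -> (U1 x U2)|psi>

print(np.isclose(entanglement_entropy(C), np.log(2)))           # True
print(np.isclose(entanglement_entropy(C_evolved), np.log(2)))   # True
```

Separate local evolutions only rotate the bases in which the correlations show up; they never create or destroy the correlations themselves.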

There are just no nonlocal influences running in quantum field theory. All predictions respect the independent existence of spatially separated regions and objects. This assertion may be verified by explicit calculations in quantum mechanics and these calculations are indeed needed. It's wrong to assume that this thing has been obvious to you since you were in the kindergarten. The actual reasons why these statements hold are derived and can only be verified by a person who has learned quantum mechanics. You can't verify these features of the real world by some arguments based on your classical intuition – where these properties could be trivial – because our world isn't classical, stupid.

Entanglement isn't a sign of any nonlocality. Bell's inequalities guarantee that the experimentally observed correlations can't be explained by a local realist theory. But in a striking contrast with the popular scientific literature, the wrong assumption isn't locality; it's realism. Locality is just a property of relativistic and similar theories, whether they're quantum or classical. And indeed, it holds. The validity of locality was one of the key results of Einstein's special relativistic revolution of 1905, a revolution that can't be undone anymore.

On the contrary, realism is an assumption behind all classical theories, whether they're relativistic or not. And it's been shown invalid in the 1920s because classical physics has been shown wrong. Only probabilities of actual measurements may be predicted by physics. This is what the quantum revolution of the mid 1920s is all about. The new picture of the world is "local, non-realist". Everyone who suggests that it's "nonlocal, realist" apparently misunderstands both major revolutions of the 20th century physics, quantum mechanics and relativity.
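For completeness, here are the standard CHSH numbers in a sketch of mine, using the textbook singlet correlation \(E(a,b) = -\cos(a-b)\) for spin measurements along angles \(a, b\):

```python
import numpy as np

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# The standard CHSH measurement angles.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), exceeding the local realist bound of 2
```

The violation of the bound \(|S| \le 2\) rules out local realism; as argued above, it is the realism, not the locality, that fails in Nature.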

**DSR**

It isn't surprising I spent so much time with the foundations of quantum mechanics. It was my original goal. Let me say a few words on the status of the first three "kinds of nonlocality".

Doubly special relativity isn't a viable framework to produce realistic theories in physics, at least not in spacetimes of dimension 4 and higher, such as ours. The Western European physicist with a blog actually became one of the people who understood this point so it's unfortunate that she's still organizing workshops for the people who remain in the dark about this elementary point.

Doubly Special Relativity may be connected with a well-defined "quantum group", a quantum deformation of the Lorentz group. But if you actually try to construct any theory that has this symmetry, you will fail. You will either get theories that are fully equivalent to local, Lorentz-symmetric theories by a field redefinition; or you will end up with the class of totally nonrelativistic, and therefore nonlocal, theories that are totally ruled out experimentally because they prevent you from objectively saying whether a lethally poisonous snake is enclosed in a box. Such important questions become observer-dependent in any scheme of DSR that would be inequivalent to proper special relativity.

The Western European physicist knows the simple proof of this assertion but it's probably still helpful for her to invite Jerzy Kowalski-Glikman who will surely offer his delusions about DSR's not being dead yet.

**Lifshitz and other random Lorentz-violating theories**

The second category is about random ideas how to build fundamentally Lorentz-violating theories. The Hořava-Lifshitz theory is an example from recent years. It's a theory one can write down. It's inconsistent with basic properties of black hole physics and many other things but the Lagrangians superficially have some nice properties that lead dozens of people to study such things.

Fine. Those theories still disagree with the observations. The accuracy with which special relativity works in the real world around us is amazing. For some particular terms in the effective equations, people have been able to show that the coefficients of the Lorentz-violating terms are much smaller than expected from dimensional analysis, even at the Planck scale. So they almost certainly don't exist in Nature. To say the least, they're not responsible for any large effects. Also, their existence would create a new hierarchy problem (lots of them) because people would have to explain why many terms that can a priori be of order one are so incredibly tiny that we haven't seen them yet.

I have no clue what motivates people to write papers about this stuff. In my opinion, it's the money. People get paid for that even though they know that the papers are pretty much worthless for physics. We're not learning any physics because those theories aren't correct descriptions of the Universe; we're not learning any maths because these theories don't have any deep, unifying, pretty, or profound maths in them.

**Replacing dark matter, dark energy etc.**

Various nonrelativistic and/or nonlocal theories were also proposed to get rid of a "problem" – a word that the proponents may use for the need for dark matter or dark energy. Well, I have nothing against such attempts to "simplify" cosmology a priori but the resulting proposed theories must still be fairly compared with the "alternative" – i.e. with general relativity including dark energy and dark matter. When one looks at the ability of both competitors to naturally explain the observed phenomena and the general lessons extracted from them, including the Lorentz symmetry, the equivalence principle etc., I find it obvious that proper general relativity – regardless of the "need" for dark matter and dark energy – is more convincing by many orders of magnitude.

This is not a proof that e.g. a particular nonrelativistic MOND theory has to be wrong but it is surely a sufficient explanation why I consider it a huge waste of money and fossil fuels when people fly to conferences about these physically unlikely and mathematically unappealing alternative theories.

So most of the nonlocality research is pure rubbish. I believe that the actual reason why this stuff hasn't gone extinct in the Academia yet is the support from the laymen who just find it cool if someone does something that looks like telekinesis or telepathy etc. That's why I really think that the picture at the top of this blog entry is a great visual representation of the nonlocality research. It's a P.R. game building on ordinary people's ignorance, ordinary people's thirst for magic and mystery, and ordinary people's misconceptions that these people consider particularly "cool".

And that's the memo.

## snail feedback (4) :

Wicked.

George Musser on Twitter is awesome.

I can't turn back the clock, but if I could I couldn't kill my grandfather because I wouldn't be there.

I personally encountered strange objections from 't Hooft to my letter: he says that my "theory" is nonrelativistic and nonlocal, so he is not interested in such things. The nonlocality he meant was about the smearing of the electron charge due to its interaction with the vacuum electromagnetic field.

"Non locality" is a regular thing even in classical mechanics when you describe a system of two particles in terms of the center-of-mass coordinate R and the relative coordinate r = r1 - r2 and an external force acts only on one particle (on particle 1, for example). Then the CM equation for R contains the external force that depends on a "shifted" argument: $M \ddot{R}=F_{ext} = F(R + \epsilon\cdot r)$. This "shifted" argument is nothing but r_1, so no nonlocality is implied, but in terms of R it looks like a nonlocal potential or force. Hence the objections.

Vladimir, Gerard 't Hooft is obviously excessively polite...
