## Saturday, August 31, 2013 ... /////

### Argumentation about de Broglie-Bohm pilot wave theory

Guest blog by Ilja Schmelzer, a right-wing anarchist and independent scientist

A nice summary of standard arguments against de Broglie-Bohm theory can be found at R. F. Streater's "Lost Causes in Theoretical Physics" website. Ulrich Mohrhoff [broken link, sorry] also combines the presentation of his position with an interesting rejection of pilot wave theory. I consider those arguments in a different text. Here, I consider the arguments proposed in several articles on Luboš Motl's blog "The Reference Frame": "David Bohm born 90 years ago", "Bohmists & segregation of primitive and contextual observables", "Anti-quantum zeal", and in off-topic responses to "Nonsense of the day: click the ball to change its color". Below, I refer to Luboš Motl simply as lumo (his nick on his blog).

I have considered another argument (also with lumo's participation), related to Lorentz invariance, in another place.

If you know other interesting pages critical of de Broglie-Bohm pilot wave theory, Nelsonian stochastics, non-local hidden variable theories in general, or ether theories, please tell me about them.

The most important thing: Measurement theory

The most important part of physics is, of course, experiment. Moreover, this is also the point where lumo is simply wrong, so it is worth starting with it:

> ... it is not true that the de Broglie-Bohm theory gives the same predictions in general. It can be arranged to do so in the case of one spinless particle. But in the real quantum theories we find relevant today, such as quantum field theory, de Broglie-Bohm theory cannot be constructed to match probabilistic QFT exactly, and one can see that its very framework contradicts observable facts.
At another place, we find some hint where his misunderstanding is located:
> Your equations about $X$ are completely irrelevant for the measurement of the spin. The problem is not when one wants to measure $X$. Indeed, the measurement of $X$ might occur analogously to its measurement in the spinless case. The problem occurs when one actually wants to measure the spin itself.
>
> The projection of the spin $j_z$ is an observable that can have two values, in the spin $1/2$ case, either $+1/2$ or $-1/2$. It is a basic and completely well-established feature of QM that one of these values must be measured if we measure it.
>
> How is your 17th century deterministic theory supposed to predict this discrete value? Like with $X$, it must already have a classical value for this quantity. Except that in this case, it has to be discrete, so it can't be described by any continuous equation. ...
>
> Preemptively: you might also argue that any actual measurement of the spin reduces to a measurement of $X$. But it's not true. I can design gadgets that either absorb or not absorb the electron depending on its $j_z$. So they measure $j_z$ directly. deBB theories of all kinds will inevitably fail, not being able to predict that with some probability, the electron is absorbed, and with others, they're not. This has nothing to do with $X$ or some driving ways. It is about the probability of having the spin itself.
The last paragraph gives the hint: lumo has interpreted the claim that all measurements reduce to position measurements as "all measurements of the electron reduce to position measurements of the electron". If that were true, I would concede that lumo's polemics against pilot wave theorists are justified. This was, by the way, the state of the art before Bohm's measurement theory appeared in 1952. Thus, lumo's arguments nicely illustrate why de Broglie had given up pilot wave theory.

Once the question has been asked how the 17th century deterministic theory manages to predict discrete values, let's tell this story. As a 17th century theory, with truly aristocratic origins, it leaves the hard work to servants (quantum operators), reserving for itself the final (and most important) decisions ;-).

First, there is some interaction of the wave function of the electron with the wave function of the measurement device. (There is of course also an equation for the position of the electron $q_e$ – the $X$ in lumo's text – but it is completely irrelevant, not only at this stage, but in the whole process.) The result of the measurement is, as usual, a wave function of the type

$$|\psi\rangle = \alpha_1|{\rm up}\rangle|q_1\rangle + \alpha_2|{\rm down}\rangle|q_2\rangle$$

This exploitation of standard QT is not enough – now decoherence will be exploited in an equally shameless way. We leave it to decoherence considerations to decide which observables of the measurement device become amplified or macroscopic. Assume the quantum states $|q_1\rangle, |q_2\rangle$ are decoherence-preferred. In this case, decoherence amplifies the microscopic measurement results $|q_1\rangle, |q_2\rangle$ into classical, macroscopically different states $|c_1\rangle, |c_2\rangle$. After finishing this hard job, it presents the following state:

$$|\psi\rangle = \alpha_1|{\rm up}\rangle|c_1\rangle + \alpha_2|{\rm down}\rangle|c_2\rangle$$

Now everything is prepared; it remains only to make the really important decision which of the wave packets is the best one ;-). At this moment a hidden variable enters the scene. But, surprise, it is not the hidden variable of the electron $q_e$ (lumo's $X$), but that of the classical measurement device $q_c$.

The job of $q_c$ is not a really hard one. After driving around (no, being driven around by quantum guides) in an almost unpredictable way, it simply takes the wave packet prepared for it by the quantum operators at the point of arrival ;-). In other words, we simply have to put the actual value of $q_c(t)$ into the full wave function $|\Psi\rangle$ to obtain the (unnormalized) effective wave function:

$$\psi(q_e) = \Psi(q_e, q_c(t))$$

What we need for this scheme to work as an ideal quantum measurement is not much. We need that the two states of the macroscopic device $|c_1\rangle, |c_2\rangle$ do not (significantly) overlap as functions of the hidden variable $q_c$. In this case, whatever the value of $q_c$, the result $\psi(q_e)$ will be a unique choice between two effective wave functions: $|{\rm up}\rangle$ if $q_c$ is in the support of $|c_1\rangle$, and $|{\rm down}\rangle$ otherwise. And we need the quantum equilibrium assumption for $q_c$ to obtain the probabilities of these two choices as $|\alpha_1|^2$ and $|\alpha_2|^2$ respectively.
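This selection mechanism is easy to simulate numerically. The sketch below is my own illustration (the 1D Gaussian pointer packets and all names are assumptions, not part of any actual pilot wave code): we take two well-separated packets $|c_1\rangle, |c_2\rangle$ on the $q_c$ axis, draw $q_c$ from the quantum-equilibrium density $|\Psi|^2$, and read off the branch whose support contains $q_c$. The outcome frequencies then reproduce the Born probabilities $|\alpha_1|^2, |\alpha_2|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Branch amplitudes of the post-measurement state
alpha1, alpha2 = 0.6, 0.8          # |alpha1|^2 = 0.36, |alpha2|^2 = 0.64

# Pointer packets |c1>, |c2>: well-separated Gaussians on the q_c axis,
# so their overlap is negligible (the key assumption in the text).
mu1, mu2, sigma = -5.0, +5.0, 1.0

def sample_qc(n):
    """Draw q_c from the quantum-equilibrium marginal density |Psi(q_c)|^2,
    i.e. the mixture |alpha1|^2 |c1|^2 + |alpha2|^2 |c2|^2."""
    branch = rng.random(n) < alpha1**2
    return np.where(branch, rng.normal(mu1, sigma, n), rng.normal(mu2, sigma, n))

def effective_branch(qc):
    """The effective wave function psi(q_e) = Psi(q_e, q_c): 'up' if q_c lies
    in the support of |c1> (left of the gap), 'down' otherwise."""
    return np.where(qc < 0.0, "up", "down")

qc = sample_qc(200_000)
outcomes = effective_branch(qc)
p_up = np.mean(outcomes == "up")
print(f"P(up) = {p_up:.3f}  (Born rule: {alpha1**2:.3f})")
```

Nothing here uses the electron's hidden variable $q_e$; only the device variable $q_c$ and the equilibrium distribution do the work, exactly as in the text above.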

Thus, everything works as in quantum theory – the Born rule as well as state preparation by measurement (only without any ill-defined wave function collapse or subdivision of the world into a classical and a quantum part, and without the equally ill-defined "subdivision of the world into systems" used in many worlds and other decoherence-based approaches).

But maybe one of the two assumptions we have used is wrong? Given Valentini's subquantum H-theorem, together with the numerical results of Valentini and Westman, which show a remarkable relaxation to equilibrium within quite a short time already in the two-dimensional case (arXiv:quant-ph/0403034), there is not much hope of observing non-equilibrium in our universe.

One can, of course, also doubt that macroscopically different states have no significant overlap in the hidden variables. Such doubts have been expressed, for example, by Wallace and by Struyve for pilot wave field theories. See my paper "Overlaps in pilot wave field theories", arXiv:0904.0764, for the solution of this problem.

About the zeros of the wave function

There is a second point where experiment is involved, with an easy solution:
> How do we know that $m=l_z/\hbar$ must be an integer? Well, it is because the wave function $\psi(x,y,z)$ of the m-eigenstates depends on $\phi$, the longitude (one of the spherical or axial coordinates), via the factor $\exp(i\cdot m\cdot\phi)$ which must be single-valued. Only in terms of the whole $\psi$, we have an argument.
>
> However, when you rewrite the complex function $\psi(r,\theta,\phi)$ in the polar form, as $R\exp(iS)$, the condition for the single-valuedness of $\psi$ becomes another condition for the single-valuedness of S up to integer multiples of $2\pi$. If you write the exponential as $\exp(iS/\hbar)$, the "action" called S here must be well-defined everywhere up to jumps that are multiples of $h = 2\pi\hbar$.
That's a nice argument, and because of this argument the original form of de Broglie's "pilot wave theory" is today preferable to the "Bohmian mechanics" version proposed by Bohm in 1952. In pilot wave theory, the pilot wave is really a wave, and you can apply the original argument to show that these observables are quantized. In Bohm's second order version, this is different, and the quantization of certain observables becomes, indeed, problematic. This has been another reason for me (beyond history, see arXiv:quant-ph/0609184) to prefer the name "pilot wave theory" over "Bohmian mechanics".
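The single-valuedness argument can be checked in a few lines. A minimal sketch (my own illustration; `winding_mismatch` is a hypothetical helper name): the factor $\exp(im\phi)$ closes up on itself after a full turn $\phi \to \phi + 2\pi$ exactly when $m$ is an integer.

```python
import numpy as np

def winding_mismatch(m):
    """How badly exp(i*m*phi) fails to be single-valued on the circle:
    |psi(phi = 2*pi) - psi(phi = 0)| for psi = exp(i*m*phi)."""
    return abs(np.exp(1j * m * 2 * np.pi) - 1.0)

# Integer m: the factor closes up on itself, psi is single-valued.
for m in (-2, -1, 0, 1, 2):
    assert winding_mismatch(m) < 1e-12

# Non-integer m: psi would be multi-valued -- this is what rules out
# fractional l_z/hbar for the whole wave function psi = R*exp(iS/hbar).
print(winding_mismatch(0.5))   # ≈ 2: psi picks up a sign after one turn
```

The same closure condition, restated for the phase alone, is precisely the requirement that $S$ be single-valued up to jumps by multiples of $h = 2\pi\hbar$.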
> More generally, something very singular seems to be happening near the $R=0$ strings in the Bohmian model of space.
The "model of space" in pilot wave theory is a trivial one, and nothing strange happens there if $R = 0$. The singularity of the velocity at these points is harmless – a simple vortex localized on a string – and moreover there is nothing at the place where the velocity becomes undefined.
> So even though the Bohmian mechanics stole the Schrödinger equation from quantum mechanics, the superficially innocent step of rewriting it in the polar form was enough to destroy a key consequence of quantum mechanics - the discreteness of many physical observables.
If there were property rights for equations or functions, one could argue as well that Schrödinger stole the wave function from de Broglie's pilot wave theory. Fortunately, such nonsense does not exist in science. But there is a point worth mentioning: without pilot wave theory, there would be no Schrödinger picture, and we would have to use the Heisenberg formalism all the time. And if some Bohm had found the Schrödinger equation later, it too would have been called an unnecessary superconstruction and banned from physics, for almost the same reasons.

About relativistic symmetry and the preferred frame

Last but not least, there are some claims that pilot wave theories will be unable to recover QFT predictions in the relativistic domain. Unfortunately for this argumentation, the equivalence theorem remains a theorem even in the relativistic domain – nothing used in it has any connection to the particular choice of spacetime symmetry. Thus, if the quantum theory has relativistic symmetry for its observable predictions, the same holds for the observable predictions of pilot wave theory.
> More concretely, it is inconsistent with modern physics in many ways, as we will see.
>
> Special relativity combined with the entanglement experiments is the most obvious example. Bell's theorem proves that if a similar deterministic theory reproduces the high correlations observed in Nature (and predicted by conventional quantum mechanics), namely the correlations that violate the so-called Bell's inequalities, the objects in the theory must actually send physical superluminal signals.
>
> But superluminal signals would look like signals sent backward in time in other inertial frames. It follows that at most one reference frame is able to give us a causal description of reality where causes precede their effects. At the fundamental level, basic rules of special relativity are inevitably violated with such a preferred inertial frame.
I was already afraid that lumo does not even understand that in a preferred frame everything is fine with causality. The introduction was, at least, highly dramatic, as is typical for such crank cases.

I like the formulation "at most". It sounds as if we would really like to have more reference frames and are now very disturbed that at most one preferred frame is available ;-).
> You might think that the experiments that have been made to check relativity simply rule out a fundamentally privileged reference frame. Well, the Bohmists still try to wave their hands and argue that they can avoid the contradictions with the verified consequences of relativity.
Who is waving hands here? Lumo might, of course, think that experiments rule out a hidden preferred frame. But in this case it is his job to point out which observations rule out such a preferred frame. As long as he fails to do so, I don't even have contradictions with any verified consequence of relativity to wave my hands about.
> I wonder whether they actually believe that there always exists a preferred reference frame, at least in principle, because such a belief sounds crazy to me (what is the hypothetical preferred slicing near a black hole, for example?).
I'm happy to answer this question: the preferred coordinates are harmonic. Given, additionally, the global CMBR frame, with the time after the big bang as the time coordinate, this prescription is already unique. For a corresponding theory of gravity – mathematically almost exactly GR on a flat background in harmonic gauge, physically with a preferred frame and an ether interpretation – see my generalization of the Lorentz ether to gravity.
> But it is possible to see that one can't get relativistic predictions of a Bohmian framework for all statistically measurable quantities at the same moment, not even in principle. If a theory violates the invariance under boosts "in principle", it is always possible to "amplify" the violation and see it macroscopically, in a statistically significant ensemble. If such a violation existed, we would have already seen it: almost certainly.
I would be interested to learn more about this mystical way of amplifying high energy violations of Lorentz symmetry into the low energy domain, without access to the necessary high energies. So far, it is lumo who is waving his hands.

I know that there are some nice observations which use the extremely large distances light has to travel in some astronomical observations to obtain bounds on a frequency dependence of the velocity of light. Some of the bounds obtained in this and other ways even suggest that such Lorentz-violating effects are absent at distances below the Planck length. But the Planck length is merely the distance where quantum gravity becomes important. The fundamental distance where our continuous field theories start to fail may be different.
> In proper quantum mechanics, locality holds. If one considers a Hamiltonian that respects the Lorentz symmetry - such as a Hamiltonian of a relativistic quantum field theory - the Lorentz symmetry is simply exact and it guarantees that signals never propagate faster than light.
>
> In proper quantum mechanics, one can define the operators that generate the Poincaré group and rigorously derive their expected commutators. Also, it is exactly true that operators in space-like-separated regions exactly commute with each other. This fact is sufficient to show that the outcome of a measurement in spacetime point B is never correlated with a decision made at a space-like-separated spacetime point A.
>
> These facts allow us to say that quantum field theory respects relativity and locality. The actual measurements can never reveal a correlation that would contradict these principles. And it is the actual measurements that decide whether a statement in physics is true or not. Bohmian mechanics is different because these principles are directly violated. You may try to construct your mechanistic model in such a way that it will approximately look like a local relativistic theory but it won't be one. Consequently, you won't be able to use these principles to constrain the possible form of your theory. Moreover, tension with tests of Lorentz invariance may arise at some moment.
First, there is no reason not to use, for one part of the theory, symmetry principles which do not hold for another part of it. For example, the symplectic structure in the classical Hamilton formalism has a different symmetry group – the group of all canonical transformations – than the whole theory including the Hamiltonian.

Then, to postulate a fundamental Poincaré symmetry is, of course, a technically easy way to obtain a theory with Poincaré symmetry. But what is the purpose of a postulated global Poincaré symmetry in a situation where the observable symmetry is different and depends on the physics, as in general relativity? Whatever the representation of the $g_{\mu\nu}(x)$ on the Minkowski background, it will (except for simple conformally trivial cases) have a different light cone almost everywhere. If the Minkowski background light cone is the smaller one, the background Poincaré symmetry has to be violated somewhere. It may always be the other way around. But in this case, the axioms of the theory give restrictions only for the background Minkowski light cone, not for the physical light cone. Thus, tensions with physical Lorentz invariance may arise in the same way, because the theory only looks like one which, at the particular point $x$, has Lorentz invariance for the metric $g_{\mu\nu}(x)$. Really it is a theory with Lorentz invariance for a different metric $\eta_{\mu\nu}$, with a larger light cone, and thus allows superluminal information transfer relative to $g_{\mu\nu}(x)$.

String theory, as far as I understand, obtains gravity as a spin two field on a Minkowski background. This requires, as far as I understand, that this problem is solved in string theory. Fine. That means it is a solvable one.
> The contradiction between relativity and semi-viable Bohmian models (that violate Bell's inequalities, and they have to in order not to be ruled out by experiments) is a very profound problem of these models. It can't really be fixed.
Again, a nice formulation. It sounds as if the poor Bohmians had tried hard not to violate Bell's inequalities and finally given up. "Semi-viable" is also a nice word. But the "very profound problem" remains hidden. (A nice place for problems in a hidden variable theory. ;-))

Instead, I prefer to follow the weak suggestions one can obtain from mathematical equivalence proofs. When I construct a pilot wave theory based on a relativistic QFT, it seems really hard to escape the consequences of the equivalence theorem and obtain a violation of Lorentz invariance. At least, I don't know how to manage this. We obtain a pilot wave theory which does not violate observable relativistic symmetries, simply because there is an equivalence proof for the observables.
> Today, we have some more concrete reasons to know that the hidden-variable theories are misguided. Via Bell's theorem, hidden-variable theories would have to be dramatically non-local and the apparent occurrence of nearly exact locality and Lorentz invariance in the world we observe would have to be explained as an infinite collection of shocking coincidences.
I'm impressed by the verbal power of "dramatically non-local", and even more by the "infinite collection of shocking coincidences". Sounds really impressive. But I would not call dramatic a non-locality which, because of an equivalence theorem, cannot be used even for information transfer, and can be observed only indirectly, via violations of Bell's inequality. Instead, it seems to me the most non-dramatic non-locality possible. Likewise, I would distinguish the simple and straightforward consequences of an equivalence theorem from an "infinite collection of shocking coincidences". Instead, I would be more surprised if a quantum-equilibrium, large-distance, low-energy limit did not change anything in the symmetry group of a theory.

Last but not least, the Lorentz group is simply the invariance group of a quite prosaic wave equation, an equation we find almost everywhere in nature. Moreover, a wave equation (or its linearization) usually also defines an effective (and in general curved) Lorentz metric, such that the wave equation becomes the harmonic equation of this Lorentz metric. As a consequence, for everything which follows such a wave equation we obtain local Lorentz symmetry. (See arXiv:0711.4416, arXiv:gr-qc/0505065 for overviews.)
To assume that a symmetry which appears so often, and for very different materials, as an effective symmetry in condensed matter theory is fundamental is a hypothesis which seems quite unnatural to me.
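To make the point about effective metrics concrete, here is the standard scalar example (my notation; a special case of the effective-metric construction discussed in the overviews cited above). A wave equation with a variable propagation speed $c(x)$,

$$\partial_t^2\phi - c(x)^2\,\Delta\phi = 0,$$

coincides, up to lower-order terms and an overall conformal factor, with the harmonic (d'Alembert) equation of an effective Lorentzian metric,

$$\square_g\phi = \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\phi\right) = 0, \qquad g_{\mu\nu}\,dx^\mu dx^\nu \propto -c(x)^2\,dt^2 + \delta_{ij}\,dx^i\,dx^j.$$

Everything propagating according to such an equation then sees the local Lorentz symmetry of $g_{\mu\nu}$, whatever the underlying medium.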

... and the ether ...
> The similarity with the luminiferous aether seems manifest. ...
>
> I just don't think that this is a rationally sustainable belief. It's just another repetition of the old story of the luminiferous aether.
About the similarity with the aether I fully agree with lumo ;-)))). But what is irrational in the belief that there is an ether? I would like to hear some details. It would be really interesting to hear which of the beliefs expressed in my ether model for particle physics are not rationally sustainable.

Now, it seems, we are finished with the claims of empirical inadequacy. It's time to consider the metaphysical arguments.

> It is not surprising in any way that the new, Bohmian equation for $X(t)$ can be written down: it is clearly always possible to rewrite the Schrödinger equation as one real equation for the squared absolute value (probability density) and one for the phase (resembling the classical Hamilton-Jacobi equation). And it is always possible to interpret the first equation as a Liouville equation and derive the equation for $X(t)$ that it would follow from. There's no "sign of the heavens" here.
I think there are "signs of the heavens" here. First, the guiding equation for the velocity is a nice, simple, and local (in configuration space) equation. The derivation mentioned by lumo could as well have led to a dirty non-local one.

Then, the equation for the phase resembles the classical Hamilton-Jacobi equation, and for constant density it becomes simply identical with it. Now, the same guiding equation is also part of classical Hamilton-Jacobi theory – a theory which was in no way related to the conservation law of the first derivation.

Now, Hamilton-Jacobi theory is really beautiful mathematics; it has all the properties of a "sign of the heavens" even taken alone. See arXiv:quant-ph/0210140 for an introduction. That one and the same simple law for the velocity gives, on the one hand, Hamilton-Jacobi theory in the classical limit and, on the other hand, a Liouville equation is, at least for me, a sufficiently strong hint from the mathematical heaven. In many worlds I have not seen any comparable signs of beauty.
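For the reader's convenience, the computation behind both "signs" (a standard derivation, written here for a single particle of mass $m$ in a potential $V$): inserting $\psi = R\,e^{iS/\hbar}$ into the Schrödinger equation $i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\Delta\psi + V\psi$ and separating real and imaginary parts gives

$$\partial_t S + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\Delta R}{R} = 0,$$

the classical Hamilton-Jacobi equation plus the quantum potential term (which vanishes for constant density), and

$$\partial_t R^2 + \nabla\cdot\left(R^2\,\frac{\nabla S}{m}\right) = 0,$$

a continuity (Liouville) equation for $\rho = R^2$ whose velocity field is exactly the guiding equation $\dot q = \nabla S/m$.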

And there is, of course, the really beautiful derivation of the whole quantum measurement formalism.

How to distinguish useful improvements from unnecessary superconstructions
> The mechanistic models add a new layer of quantities, concepts, and assumptions.
Indeed, every new, more fundamental theory adds a new layer of quantities, concepts, and assumptions. So what?
> [Einstein] called the picture an unnecessary superconstruction.
Appeal to authority does not count. And there is no reason to expect that the father of relativity would like a theory which violates his child. But how does one distinguish unnecessary superconstructions from interesting more fundamental theories? Both add something to the old theory. But useful more fundamental theories also allow one to explain something in the old theory: some postulates of the old theory can now be derived. So one has to compare what has to be added with what can now be derived.

This relation is quite nice for pilot wave theory: the new layer is, essentially, the configuration together with a single additional equation – the guiding equation for the configuration. What can be derived from this equation is, in turn, the whole measurement theory of quantum mechanics, including the Born rule and state preparation by measurement. Compared with the Copenhagen interpretation, the additional layer also replaces the "classical part" of that interpretation and removes the collapse from the theory.

These last two points have been a major motivation of other reinterpretations as well. In particular, for many worlds this seems to be the only aim. The interpretation I prefer to call "inconsistent histories" is focussed on this aim too. Thus, two things which were obtained first in pilot wave theory are widely recognized today as important contributions to the foundations of quantum theory. One can object that pilot wave theory does not get rid of the classical part, but even extends it into the quantum domain. This depends on what one considers problematic about the classical part: if the problem is the imprecision of this notion, the absence of well-defined rules for this part, then it is clearly solved in pilot wave theory. In any case, pilot wave theory was the first interpretation with completely unitary dynamics for the wave function, without a collapse.
> One can perhaps create classical mechanistic models that mimic the internal workings of quantum mechanics in many situations. For example, one can write a computer simulation. But you can't say that the details of such a program or Bohmian picture is justified as soon as you confirm the predictions of conventional quantum mechanics.
There is no necessity to justify every detail. The important point of the pilot wave interpretation is that, to explain the observable facts, there is no necessity to reject classical logic or realism, or to introduce many worlds, inconsistent histories, correlations without correlata, or other quantum strangeness and mysticism. We have at least one simple, realistic, even deterministic explanation of all observable facts. That's enough to reject quantum mystery. Why should we justify every detail of some particular realistic model? There may be several realistic models compatible with observation. I would expect this anyway, given large distance universality.
> The mechanistic models add a new layer of quantities, concepts, and assumptions. They are not unique and they are not inevitable. The similarity with the luminiferous aether seems manifest. If they only reproduce the statistical predictions of quantum mechanics, you could never know which mechanistic model is the right one: it could be a computer simulation written by Oracle for Windows Vista, after all.
But what's the problem with this? Is Nature obliged to work with theories which can inevitably be reconstructed by internal creatures? You could never know? Big problem. Anyway, our theories are only guesses about Nature, and we can never know if they are really true. If you doubt this, I recommend reading Popper. (I ignore here, for simplicity, the modern ways of recognizing the truth of theories, like counting the number of papers written about them, or getting inspirations about the language in which God wrote the world.)

Moreover, science has developed lots of criteria which allow one to compare theories which do not make different predictions: internal consistency, simplicity, explanatory power, symmetry, mathematical beauty. Lumo uses such arguments himself; thus, he is aware of their power. They are usually sufficient to rule out most of the competing models. And if a few different theories remain, all in agreement with observation, this is not problematic at all – it is even useful: it allows one to see the difference between the empirically established parts of these theories – those parts will be shared by all viable theories – and the remaining, metaphysical parts, which may be very different in the different theories. Thus, they serve as a useful tool to show the boundaries of what science can tell at a given moment.

For example, today the existence of pilot wave theory shows that almost all of the quantum strangeness – in particular the rejection of realism, "quantum logic", and the esoterics of many worlds – is in no way forced on us by any empirical evidence, but consists of purely metaphysical choices of some particular interpretations.

What are the fundamental beables?
> I could make things even harder for the Bohmian framework by looking into quantum field theory. What are the real, "primitive" properties in that case?
In the simplest case of a scalar field, the natural candidate for the "primitive property" or the "beable" is simply the field $\phi(x)$. This is a very old idea, proposed already by Bohm. But the effective fields of the standard model are bad candidates for really fundamental beables. They are, after all, only effective fields, not fundamental ones. In my opinion, one needs a more fundamental theory to find the true beables.

My proposal for such more fundamental beables can be found in my paper about the cell lattice model, arXiv:0908.0591. Even though pilot wave theory is not mentioned at all in this paper, it is quite obvious that the canonical quantization proposal for fermion fields I have made there allows one to apply the standard formalism of pilot wave theory to obtain a pilot wave version of this theory.

Problems with spin and with particle ontology in quantum field theories

A large part of lumo's arguments is directed against two particular versions of pilot wave theory – which, strangely enough, I don't like either. The first is the idea of describing particles with spin using wave functions with spin, but leaving the configuration without spin. In this case, the wave function is no longer a complex function on configuration space, but a function with values in some higher-dimensional Hilbert space. As a consequence, the very nice pilot wave way of obtaining the classical limit via Hamilton-Jacobi theory no longer works, and one would have to use the dirty old way based on wave packets to obtain some classical limit.

There are other examples of such pilot wave theories. This trick was first used by Bell, who proposed a pilot-wave-like field theory with beables for fermions, but not for bosons. One can argue that this is already sufficient, and leave the bosons without beables. The reverse situation occurs in a theory by Struyve and Westman for the electromagnetic field. Again, it has been argued that this is sufficient. And for the purpose of obtaining a realistic theory which is able to recover QFT predictions, it is. But I think such pilot wave theories are sufficient only for one purpose: to be used as a quick and dirty existence proof for realistic theories in situations where some parts of the theory cause problems. For this purpose, they are indeed sufficient, provided the part of the theory represented in the beables is large enough to distinguish all macroscopic states – a quite weak requirement. If one doubts that a theory without fermion beables, or without boson beables, is sufficient for this, one should think about renormalization: if these incomplete theories describe one type of the bare fields (at some energy), then all types of the dressed fields already depend on this single type.

The second type of theory I don't want to defend are theories with a particle ontology in the domain of field theory. One reason is that semiclassical gravity shows nicely that fields are more fundamental, and the pilot wave beables have to be, of course, fundamental. Moreover, handling variable particle numbers is a dirty job; there should be something more beautiful. Particles which aspire to the status of beables should at least be conserved.

Therefore, I can leave unanswered the parts of the argumentation where lumo attacks particle theories. Let's note only that a short look at the particle-based approach to field theory in arXiv:quant-ph/0303156 suggests that lumo's arguments do not hit this target either. This version introduces stochastic jumps into the theory (showing, by the way, that pilot wave theorists are not preoccupied with determinism). But I leave the comparison to the reader.

Because experiments eventually measure some well-defined quantities, the likes of Bohm think that there must exist preferred observables - and operators - that also exist classically. They are classical to start with, they think. Positions of objects are an important example.

But the quantum mechanical founding fathers have known from the very beginning that this was a misconception. All Hermitean operators acting on a Hilbert space may be identified with some real classical observables and none of them is preferred.
I think it is a misconception to interpret pilot wave theory as preferring some observables. It is not an accident that Bell even proposed another word, beables, for the configuration space variables in pilot wave theory. In particular, measurements of the beables play no special role at all, neither in the classical limit nor anywhere else in pilot wave theory. To derive the measurement theory, we don't need them (this would be circular anyway). What we need are the actual values of the beables, not some results of observations. Indeed, let's assume for simplicity that we consist of atoms, which are the beables of some simplified pilot wave theory. Then, a theory about our observations does not need anything about our observations of atoms – if we "observe" them at all, then only in a quite indirect way, and most people do not observe atoms at all. Therefore, observations of atoms cannot play any role in an explanation of our everyday observations. Of course, in these explanations atoms have to play a role, at least indirectly – as constituent parts of our brain cells. But these atoms inside our brain cells are nothing we observe, if we observe something in everyday life. Thus, we use only the atoms themselves, not observations of atoms, in such explanations of our observations.

Thus, as observables the beables play no special role – in particular, the theory of their measurement can be derived in the same way, without danger of circularity. Their measurements, too, have to be described by self-adjoint operators or POVMs, just like those of every other observable. In this sense, there are no preferred observables in pilot wave theory.
And this construction is actually very unnatural because it picks $X$ as a preferred observable in whose basis the wave vector should be (artificially) separated into the probability densities and phases
Configurations (I prefer "q" instead of "X", because "X" is associated with usual space, while "q" is associated with configuration space) indeed play a special role. But this is the same special role they play in the Lagrange formalism as well as in Hamilton-Jacobi theory. Both are very beautiful, useful approaches. I don't remember ever having heard the objection that the Lagrange formalism is unnatural because it picks "q" as a preferred observable. Instead, the Lagrange formalism is an extremely important tool in modern physics, in quantum field theory as well as in general relativity. Moreover, this "segregation" is a very natural one: if nothing changes, the configuration remains the same, while the velocities have to be zero. I have, instead, always found the symmetry between such different things as position and momentum in the Hamilton equations (and, similarly, in the canonical approach to quantum theory) strange and unnatural (even if, because of its symmetry, beautiful).

So why does lumo not fight against segregation in the Lagrange formalism? The segregation is the same: the poor momentum variables are degraded to the role of "derivatives". (Or maybe he does? I have not checked. Anyway, the important role of the Lagrange formalism in modern science, which is based on exactly the same "segregation", is a fact which shows that there is nothing wrong with this particular segregation.)
In order to celebrate the Martin Luther King Jr Day, I will dedicate the rest of the text to a fight against the segregation of observables. :-) So my statement is very modest – that observables can't be segregated into the "real" primitive ones and the "fictitious" contextual ones – a fact that trivially rules out all theories (such as the Bohmian ones) that are forced to do so.

... I guess that you must agree that the "philosophical democracy" between all observables is pleasing and natural.
I see no reason at all to find such a "democracy" pleasing. You can observe an honest guy telling us the truth. You can just as well observe a liar telling us lies. Both are observable. There may be even more symmetry between them: they may even make the same claims: "I have seen that he has stolen the money". That means, without segregation among observables, without destroying observable symmetry, we would have to give them equal status. I don't plan to follow this idea, and will always prefer a segregation between truth and lies, even if this destroys some observable symmetries.

In all these cases, the same "formalism" is used to obtain the results – communication in human language. Thus, that the same formalism – that of self-adjoint operators, or, more generally, of POVMs – is used to describe the results of interactions in quantum theory is in no way an argument against this particular segregation.
Clearly, some quantities in the real world look more classical than others. But what are the rules of the game that separates them? The Bohmists assume that everything that "smells" like $X$ or $P$ is classical while other things are not. ...

Clearly, they want some quantities that often behave classically in classical limits.
Clearly not. The "segregation" in pilot wave theory is between configuration and momentum variables, and it is in no way related to one of them being "more classical". In classical situations, both behave classically, and the same segregation exists in classical theory too, in the Lagrange formalism as well as in Hamilton-Jacobi theory. There is no place in pilot wave theory where one has to make sure that something in the behaviour of the configuration is "classical": in the classical limit, it follows automatically, from the classical Hamilton-Jacobi equation, that everything behaves classically. For other questions this is simply irrelevant.
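This automatism is worth making explicit (a standard textbook computation, summarized here for convenience): writing the wave function in polar form, $\psi = R\,e^{iS/\hbar}$, the Schrödinger equation splits into

$$\partial_t S + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} = 0, \qquad \partial_t R^2 + \nabla\cdot\left(R^2\,\frac{\nabla S}{m}\right) = 0.$$

In the limit $\hbar \to 0$ the quantum potential term $-\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$ drops out, the first equation becomes the classical Hamilton-Jacobi equation, and the guiding equation $\dot{q} = \nabla S/m$ then generates exactly the classical trajectories.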

It is the many worlds community which is focused on the classical limit. That's reasonable – they have a very hard job to construct something which at least sounds plausible (at least if one uses words like "contains" for a linear relation between some points in a Hilbert space, talks about "evolution" of branches without defining any evolution law, and applies decoherence techniques without explaining how to obtain the decomposition into systems one needs to apply them).
In order to simplify their imagination, the Bohmists imagined the existence of additional classical objects – the classical positions.
Simplification has, it seems, been removed from the aims of science. Ockham's razor is out, simple theories have to be rejected. The higher the dimension, the better.

But the objects are in no way additional. They have been part of the Copenhagen interpretation: its classical part contains, in particular, all the measurement results. And Schrödinger's cat proves that a unitary wave function alone is not sufficient, that we need something else: either some non-unitary collapse, or some particular configuration as in pilot wave theory. Something – be it the collapsed wave function, or some different entity – has to describe the reality we see: either the dead or the living cat. Many worlds claims something different, but introduces, for this purpose, the "branches" – some sort of collapsed wave functions without collapse, or configurations without a guiding equation, which are claimed to be "contained" in the wave function. (How a decomposition of some vector into a linear combination of others defines a containment relation remains unclear. A concept where a function like $\psi(q) = 42$ "contains" all possible universes has its appropriate place in the Hitchhiker's Guide to the Galaxy, not in scientific journals.) The approach named "consistent histories" leaves us with many inconsistent histories, subdivided into families.

Theories with physical collapse need dirty and artificial non-unitary modifications of the Schrödinger equation. The branches of many worlds are, it seems, left today without any equations at all. (A very scientific approach, indeed. Time to rename it "many words".) Only pilot wave theory gives us a nice, simple, and beautiful equation for this "additional" entity. Moreover, it allows us, essentially for free, to derive the whole measurement formalism of quantum theory.
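To make the guiding equation concrete, here is a minimal numerical sketch (my own illustration, not part of the original debate; the packet width, time span and step size are arbitrary choices) for the simplest case, a free 1D Gaussian wave packet with $\hbar = m = 1$:

```python
# Bohmian trajectory for the free packet psi(x,0) = exp(-x^2/2), hbar = m = 1.
# The guiding equation dx/dt = Im( (d psi/dx) / psi ) gives, for this packet,
# the velocity field v(x,t) = x*t/(1+t^2), with exact solution
# x(t) = x0*sqrt(1+t^2): the trajectory simply rides the spreading packet.

def velocity(x, t):
    """Bohmian velocity field of the free Gaussian packet."""
    return x * t / (1.0 + t * t)

def trajectory(x0, t_end=2.0, dt=1e-4):
    """Integrate the guiding equation with a simple Euler scheme."""
    x, t = x0, 0.0
    while t < t_end:
        x += velocity(x, t) * dt
        t += dt
    return x

print(trajectory(0.7))   # close to the exact value 0.7*sqrt(5) ≈ 1.565
```

The point of the sketch is only that the "additional" entity obeys a first-order, perfectly well-defined evolution law, nothing more.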

Imagination is completely irrelevant for these questions. I see, of course, no reason to object if a theory allows us to simplify our imagination too. Instead, I would count it as one additional advantage of a theory. But I recognize that this attitude is not shared by other scientists. And there are, indeed, good reasons to prefer theories which are complex and mystical. Imagine you are in the company of nice girls (or boys, whatever you prefer), and they ask you what you are doing. Isn't it much more impressive if you can tell them about curved spacetimes, large dimensions, a strange new quantum realism, or even quantum logic, many worlds and other strange quantum things? Compare this with the poor 17th century scientist, the fighter against any form of mystery, the classical loser in every popular mystery film. The choice is quite obvious.

Louis de Broglie wrote these equations for the position of one particle, David Bohm generalized them to N particles.
Not correct: the configuration-space version of pilot wave theory was presented by de Broglie already at the Solvay conference. See de Broglie, L., in "Electrons et Photons: Rapports et Discussions du Cinquieme Conseil de Physique", ed. J. Bordet, Gauthier-Villars, Paris, 105 (1928); English translation: G. Bacciagaluppi and A. Valentini, "Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference", Cambridge University Press, and arXiv:quant-ph/0609184 (2006).
I think that in analogous cases, we wouldn't be using the name of the "updater" for the final discovery.
After having read something about the history of this theory (I do not care that much about history), I use "pilot wave theory" instead of "Bohmian mechanics". But Bohm has a point too: de Broglie had abandoned his theory as not viable, being unable to develop the general measurement theory. This was done by Bohm. Therefore, when I use names, I now use the combination "de Broglie-Bohm".
Of course that I have always known that Bell constructed his inequalities because he wanted to prove exactly the opposite than what he proved at the end. He was unhappy until the end of his life. Bad luck. Nature doesn't care if some people can't abandon their prejudices.
This sounds as if lumo thinks that Bell tried to prove, with his inequalities, that quantum mechanics is wrong. This does not sound very plausible. It is quite clear that he liked Bohmian mechanics, that he saw its nonlocality as an argument against it, and that he tried to remove this argument by showing that this nonlocality is a necessary property of all hidden variable theories. About his bets before the experiments were performed, there is the following quote: "In view of the general success of quantum mechanics, it is very hard for me to doubt the outcome of such experiments. However, I would prefer these experiments, in which the crucial concepts are very directly tested, to have been done and the results on record. Moreover, there is always the slim chance of an unexpected result, which would shake the world." (Freire, arXiv:quant-ph/0508180, p.20)
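For reference, the quantitative content of Bell's result can be sketched in a few lines (my illustration; the analyzer angles below are the standard maximally violating choice):

```python
import numpy as np

# Quantum correlation for spin measurements on the singlet state along
# analyzer angles a and b: E(a,b) = -cos(a-b). The CHSH combination reaches
# |S| = 2*sqrt(2) > 2, violating the bound that holds for all local hidden
# variable theories - exactly the nonlocality Bell showed to be unavoidable.

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b."""
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ≈ 2.828, beyond the classical bound of 2
```

The nonlocal guiding equation of pilot wave theory reproduces exactly this quantum value, which is why the violation is no evidence against it.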
[arguing against "I've read that the Broglie-Bohm theory makes the same predictions that the normal quantum randomness theory makes but the latter was chosen because it was conceived first.":]

Concerning the first point, people can have various theories in the first run. But once they have all possible alternative theories, they can compare them.

Second, it is not true that the probabilistic interpretation was conceived "first". Quite on the contrary. Technically, it's true that de Broglie wrote his pilot wave theory in 1927, one year after Max Born proposed the probabilistic interpretation, but the very idea that the wave connected with the particle was "real" was studied for many years that preceded it. Both de Broglie (1924) and Schrödinger (1925) explicitly believed that the wave was real which is incorrect.
Given that de Broglie gave up pilot wave theory shortly after 1927, unable to find a viable measurement theory for observables other than position, one can say that pilot wave theory appeared in a viable form only in 1952, with Bohm's measurement theory. At that time, the Copenhagen interpretation was already well-established (even if the label "Copenhagen interpretation" was coined only later). So there was an advantage of historical accident for the standard interpretation.
In 1952, Bohm wrote down a very straightforward multi-particle generalization of de Broglie's equations and added a very controversial version of "measurement theory". Is it a substantial improvement you expect from 25 years of progress?
That depends on how many people worked on it during this time. In this case, for most of these 25 years nobody worked on it. In particular, de Broglie himself had abandoned it, because he was unable to find the "very controversial" measurement theory found later by Bohm. Bohm, who in 1927 was only 10 years old, had not worked in this domain for most of this time either. Thus, very few man-years were sufficient to transform a theory abandoned by its creator as not viable into a viable theory. I would call this a sufficiently efficient and substantial improvement.

The next important defender of this theory – again almost alone for a long time – was Bell. The results of his work in the foundations of quantum theory are also well-known. Despite their foundational character, they have caused a large experimental activity. Thus, also a quite efficient relation between man-years and results.

(Given that lumo has not understood the main point of Bohm's measurement theory, we can ignore the characterization of this theory as "very controversial").

About decoherence and the classical limit
Moreover, the question which of them will emerge as natural quantities in a classical limit cannot be answered a priori. Which observables like to behave classically? Well, it is those whose eigenstates decohere from each other.
The role of decoherence in the classical limit is largely exaggerated; see the Hyperion discussion about this (Ballentine, "Classicality without Decoherence: A Reply to Schlosshauer", Found. Phys. 38, 916-922 (2008), DOI 10.1007/s10701-008-9242-0; Schlosshauer, "Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute", Found. Phys. 38, 796-803 (2008), DOI 10.1007/s10701-008-9237-x, arXiv:quant-ph/0605249; Wiebe and Ballentine, Phys. Rev. A 72, 022109 (2005), also arXiv:quant-ph/0503170).

Essentially, you can measure every operator together with every other one, as long as the accuracy of the joint measurement stays within the bounds set by the uncertainty relations. And in the classical $\hbar \to 0$ limit they all like to behave classically.
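One standard way to model such a joint unsharp measurement of $X$ and $P$ is the Husimi $Q$ function, which is a genuine POVM (my illustration, not part of the original exchange; units $\hbar = m = \omega = 1$, state = oscillator ground state):

```python
import numpy as np

# For the oscillator ground state, the Husimi joint-outcome distribution is
# Q(x,p) = exp(-(x^2+p^2)/2)/(2*pi): a genuine probability distribution on
# phase space. Its x-variance is 1 = intrinsic 1/2 of the state plus 1/2 of
# unavoidable measurement noise, saturating the joint-measurement bound.

x = np.linspace(-8, 8, 801)
p = np.linspace(-8, 8, 801)
X, P = np.meshgrid(x, p)
Q = np.exp(-(X**2 + P**2) / 2) / (2 * np.pi)

dA = (x[1] - x[0]) * (p[1] - p[0])   # phase-space area element of the grid
norm = Q.sum() * dA                  # ≈ 1: Q is normalized
var_x = (X**2 * Q).sum() * dA / norm
print(norm, var_x)                   # ≈ 1.0, ≈ 1.0
```

So "measuring $X$ and $P$ together" is perfectly meaningful once the extra, uncertainty-limited noise is accepted.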
Everything in this real world is quantum while the classical intuition can only be an approximation, and it is a good approximation only if decoherence is fast enough i.e. if the interference between the different eigenstates is eliminated. If it is so, the quantum probabilities may be imagined to be ordinary classical probabilities and Bell's inequalities are restored.

So if you want to know whether a particular quantity may be imagined to be classical, you need to know how quickly its eigenvectors decohere from each other. And the answer depends on the dynamics. Decoherence is fast if the different eigenvectors are quickly able to leave their distinct fingerprints in the environment with which they must interact.
A nice description of the decoherence paradigm. The dirty little secret of decoherence is that it depends on some decomposition of the world into systems. Such a decomposition can be found without problems if we have some classical context, as in the Copenhagen interpretation, or some well-defined configuration of the universe, as in pilot wave theory, by considering an environment of the actual state of the universe. But without such a background structure you have nothing to start these decoherence considerations with. The different systems we see around us – cats, for example – cannot be used for this purpose, at least not if we want to avoid circular reasoning (see arXiv:0901.3262). The Hamilton operator, taken alone, is not enough to derive a decoherence-preferred basis uniquely.
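The dependence on the decomposition into systems can be illustrated directly (my own toy example, not from the cited paper): the same global state looks maximally entangled in one tensor factorization of the Hilbert space and like a pure product state in another, so without a preferred decomposition nothing fixes which subsystems "decohere".

```python
import numpy as np

# One global state in C^4, two ways to split C^4 into qubit x qubit.
# Refactoring via a global unitary (here a CNOT) changes the reduced state
# of "the system" from maximally mixed (purity 1/2) to pure (purity 1).

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |00> + |11>

def subsystem_purity(psi, U):
    """Purity Tr(rho^2) of the first factor after refactoring C^4 via U."""
    phi = U @ psi
    m = phi.reshape(2, 2)        # row = system index, column = environment
    rho = m @ m.conj().T         # reduced density matrix, environment traced out
    return float(np.real(np.trace(rho @ rho)))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(subsystem_purity(psi, np.eye(4)))  # 0.5: maximally mixed subsystem
print(subsystem_purity(psi, CNOT))       # 1.0: a pure product state
```

Which factorization is "the right one" is exactly the background structure that decoherence arguments presuppose.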
Mechanistic models of state-of-the-art quantum theories are not available: it is partly because it's not really possible and it's not natural but it is also partly because the champions of Bohmian mechanics are simply not good enough physicists to be able to study state-of-the-art quantum theories. They're typically people with philosophical preconceptions who simply believe that the world has to respect their rules of "realism" or even "determinism".
I have a quite nice "mechanistic model" for the standard model of particle physics – one which essentially allows one to compute the SM gauge group (as a maximal group which fulfills a few simple "mechanistic" axioms). How many more years (and how many more man-years) does string theory need to reach something comparable?

The idea of "philosophical preconceptions" is quite funny. My approach is quite pragmatic: if there is a simple way to do things, use it. Simplicity is a good thing, independent of the age or the popularity of the particular concept. About determinism I don't care even today; in particular, I have certain sympathies for Nelson's stochastics. And I have also looked at non-realistic interpretations of quantum theory, like the concept I prefer to name "inconsistent histories". But I think there should be really good evidence to justify the rejection of such simple, general, fundamental and beautiful principles like realism. But pilot wave theory would be preferable even without it, simply for the beauty of the guiding equation.

Last but not least, some funny but unimportant polemics
The attempts to return physics to the 17th century deterministic picture of the Universe are archaic traces of bigotry of some people who will simply never be persuaded by any overwhelming evidence – both of experimental and theoretical character – if the evidence contradicts their predetermined beliefs how the world should work.
Well formulated. I like such polemics. Replacing the standard 19th century of such flames with the 17th century is an especially nice touch. But there is room for improvement. In philosophy of science, I follow Popper, who likes to identify the origin of some of his ideas in Ancient Greece. I also prefer the economic system based on the ideas of Adam Smith to much more modern ones developed by Lenin and Mao, so one can identify this sympathy for old ideas as deeply rooted in my personality. Indeed, I think there is nothing wrong with old ideas.

To describe pilot wavers as "predetermined" sounds really nice, but is, unfortunately, wrong. There are, of course, people who follow predetermined ideas. But these are the ideas they learned in their youth. Where are the proponents of pilot wave ideas supposed to have learned them? What I was taught was quantum theory and Marxism-Leninism, not pilot wave theory and Adam Smith. And I remember, in particular, some uncritical fascination when learning von Neumann's proof of the impossibility of a classical picture. I had neither a prejudice for 17th century determinism nor any of the "bourgeois prejudices" the communists liked to argue against.

It was not predetermination but the power of arguments (in particular, of Bell's "Speakable and Unspeakable in Quantum Mechanics") that persuaded me to switch to pilot wave theory. And an important part of this argumentative power was the simple proof of equivalence between pilot wave theory and quantum theory. There simply is no experimental evidence against pilot wave theory.

And, indeed, the "experimental evidence" presented by lumo was (in his polarizer argument, and similar ones about spins) based on the common error of not taking the measurement device into account, or (in his quantization argument) not applicable to de Broglie's version of pilot wave theory. About the theoretical evidence, judge yourself.
But the very fact that the Bohmists actually don't work on the cutting-edge physics of spins, fields, quarks, renormalization, dualities, and strings is enough to lead us to a very different conclusion: they're just playing with fundamentally wrong toy models and by keeping their focus on the 1-particle spinless case, they want to hide the fact that their obsolete theory contradicts pretty much everything we know about the real world.
It is always fun to compare the "very facts" of such claims with reality. The one-particle spinless case has never been the focus of my interest, except where it appears sufficient to show some serious problems of other interpretations (arXiv:0901.3262, arXiv:0903.4657). The results of my work with spins, fields, and quarks I have already mentioned. And even renormalization is on my todo list, even if some other problems currently have higher priority for me.

I'm not sure that naming strings and dualities "cutting-edge physics" is justified. This is clearly a domain of research I leave to lumo – it may have a value as a nice exercise in mathematics, which is an important part of human culture, even if it has nothing to do with physics. Of course, one never knows – results of pure mathematicians, who have been proud of doing things which will never find an application, are applied today in cryptography. It would be a really nice joke if some result found by lumo would find a physical application in some hidden variable ether theory ;-).

#### snail feedback (44) :

Dear Ilja, thanks for this almost professionally constructed reply - with a nice formatting, formulae etc.

Unfortunately, almost no part of the content of this blog entry is correct. ;-)

Perhaps, a valid point is that the "pilot wave theory" is more accurate than "Bohmian mechanics". However, when you said that the original de Broglie theory is preferred to solve the non-single-valuedness problem of mine, I had to laugh out loud because a few paragraphs earlier, you wrote that this theory was abandoned by de Broglie because of another argument of mine, more or less.

Concerning some other points, it's amusing that you say that "decoherence solves everything" because decoherence only works in proper quantum mechanics. The pilot wave theory isn't quantum mechanics and indeed, the very main point of this theory is that it replaces the genuine dynamical quantum mechanism selecting the "preferred observable and bases" - decoherence - by something totally different, namely predetermined observables that also have classical values aside from the pilot wave that guides them.

So if you need decoherence in the pilot wave theory, it won't work and it will become yet another crushing argument against the pilot wave theory because decoherence is incompatible with the actual pilot-wave-based mechanisms that select what will be observed. Do you agree with that?

When you say it's just a "rotor", you don't actually show that the theory gives the right prediction - S is single-valued up to additive integer multiples of $2\pi$. You don't show that because you can't - this correct constraint doesn't really follow from the pilot wave theory. Incidentally, the divergent velocity isn't harmless, either. It's experimentally more or less demonstrable that there's nothing special happening near the places where $\psi=0$. In particular, the relativistic corrections don't get any stronger because of these points. In the pilot wave theory, as you admit, the "Bohmian trajectory's" velocity goes to infinity which does indicate that relativity should play an increased role there. But it doesn't.

Second part. I used the term "dramatically nonlocal" because the ability to influence remote regions belongs to the very basic built-in properties of the objects in the pilot wave theory. I mean that there doesn't exist any glimpse of an argument that these effects should be small - so they won't be small unless one tries to fine-tune everything. The pilot wave theory contains classical waves that are functions of several position vectors and the evolution equation directly guides the positions of particles depending on the immediate values of these multilocal objects anywhere in the configuration space. Those guiding waves are affected by other particles, e.g. those freshly created ones if you assume that the theory *is* able to produce new particles, which it's not, so there is a heavily, dramatically, lethally nonlocal action in both directions. The result must look like a completely generic nonlocal evolution, in contrast with all observations of the 20th century physics.

with the risk of sounding like LM's echo: there is *nothing* valid about "Bohmian" pilot wave theory...

Lubos, well done for offering this guest blog. I'm broadly in agreement with Ilja, and I think you should look more closely into this subject and try to set aside your hostility. Einstein reintroduced an aether for GR, the optical Fourier transform is an analogy for wavefunction-wavefunction interaction, see work by Aephraim Steinberg et al and Jeff Lundeen et al re "wavefunction is real", check out Percy Hammond re electromagnetic geometry, look at The Other Meaning of Special Relativity by Robert Close, see http://www.cybsoc.org/electron.pdf , and http://www.antiprism.com/album/860_tori/imagelist.html , to think of the electron as a Dirac's-belt standing-wave photon-field structure. Etc etc. There are elements of TQFT and even an underlying "stringiness" to this. Don't dismiss it all because somebody can't get the maths right.

I personally like Bohm. He was a nice person with great strength of character, a political victim of the McCarthy era. But from what I know about Bohmian mechanics, I do not believe it to be true.

Arguments against it

1) Bohm theorists believe that the quantum wave is real. It is easy in 1 particle case. But if you have N particles, you need a wave function in 3N+1 dimensions. Are these 3N+1 dimensional wave functions also real?

2) Spin. Lumo made the point: "If de Broglie and Bohm claim that a particle should also have a well-defined position and velocity, it should naturally have a well-defined z-projection of spin, too. But once you adopt such an assumption, you clearly break the rotational symmetry. Particles would only have classical projections of spin with respect to the z axis so the z axis is preferred and you can measure its direction, at least in principle, uncovering anisotropy of space. The rotational symmetry of a theory including spinors heavily depends on the probabilistic nature of quantum mechanics. If you give up the equal treatment of position and spin and decide to treat spin differently and give an electron well-defined binary-valued projections of spin with respect to all axes, you will also encounter problems. Bell's inequality will show you very sharply that the required dynamics is completely non-local but you will also have problems with the Lorentz invariance and the precise rules for the evolution of the discrete function of the direction. The probabilistic meaning of the spinorial wave functions is completely essential for us to be able to translate a physical arrangement to any convention, including an arbitrary choice of the z-axis."
Spin needs to be understood within the framework of relativistic quantum field theory. In QFT, every particle species is associated with a quantum field and the quantum field Lorentz-transforms in a particular way - we have spin 0,1,2 and spin 1/2,3/2 etc fields. It turns out that all these fields are related to representations of the Poincare group (Wigner classification). There is a deep connection between the relativistic symmetry of spacetime (Poincare group) and the spin of quantum fields (representations of the group). This connection is imho very elegant and powerful and the Bohmian mechanics is an ugly mess in comparison.

3) My personal issue with QM. I agree that the wave function is not real. Collapse of the wave function is just a change in our knowledge. Most misunderstandings of quantum theory come from incorrect use of language and the use of vaguely defined concepts like "local reality". Lumo says that in an entangled pair, the particles do not communicate in any way. I agree. It is the only meaningful way to avoid terrible paradoxes with space-like separated entangled particles. But I have issues with the following claim: "The moon is not there if nobody is looking". Where and how does nature store information about the correlation of the particles (how does nature remember the correlation), if the particles DO NOT EXIST prior to measurement? Only the quantum fields of bubbling probabilities exist before the measurement. If the particle and its spin is created (comes into existence) by the act of measurement at detector A, how does nature know that the other particle at detector B should be created in such a way that it is correlated? This in my opinion seems to invalidate the claim that nothing exists prior to measurement (the position of the Copenhagen school).

And when discussing someone's theory, it is always best to go to the source

David Bohm - The de Broglie Pilot Wave Theory

Well, if and when the original theory doesn't work, it doesn't help one much to go to the original source.

1.) In dBB, yes. I don't like this either, and I think it is possible to get rid of this, using dBB theory as a starting point. See arXiv:1103.3506

2.) As explained, I also prefer field-theoretic variants.

3.) To reject realism is of course consistent, as consistent as "God moves in mysterious ways". If you accept realism, you have to accept its nonlocal variant, given the violation of Bell's inequality. So if you want causality without causal loops, you need a hidden preferred frame.

Bohm can be called a "victim of the McCarthy era" but he can hardly be called "an innocent victim". I am no more sympathetic to communist victims of McCarthy during the Stalin period than I am to the 740 members of the British Union of Fascists who were interned in Britain from 1940 till the end of the war.

I would like to add that I find this discussion fascinating (thank you Lubos) although I don't want at this point in time to take clear sides. I agree, however, that most people have psychological difficulties with the Copenhagen interpretation and that this is only natural. Unfortunately the Bohmian approach does not seem to me to be significantly better in this respect (although I need to think more about it, when I find the time). Personally I still prefer to think of QM as a computing tool. In this sense the key issue would seem to me to be: does Bohmian mechanics really enhance computation? It seems unlikely.

As for Mephisto's question about "where the information is stored" - clearly the information about the correlation needs to be "remembered" by Nature. It seems indeed strange that the information about the correlation could be "remembered" if the correlated particles do not exist, but one could also ask: where are the "laws of nature" themselves stored? It seems to me we can't expect intuitive ideas acquired from our daily experience to apply to these sorts of matters.

Interesting piece. But the premise:

"... I think there should be really good evidence to justify the rejection of such simple, general, fundamental and beautiful principles like realism. "

seems flawed.
Are not the experimental outcomes that have confirmed QM throughout the last 100 years "good evidence", if needed, that nature is not required to share our primitive view of reality? Also, our personal subjective constructions of reality do not establish realism as any of those glowing adjectives. Why should realism be so fundamental?

Experimental outcomes of QM are in agreement with dBB theory, which is realistic, so they are not a problem for realism. And one should, of course, distinguish our particular primitive realistic models from realism itself, that is, the general hypothesis that such a model (however complicated) exists in principle.

One can, in principle, adopt rigorous positivism: we observe correlations, and we have formulas to compute them, that's all, no idea why the formulas work. I don't think it is a good idea.

Dear Ilja, the probabilistic distribution of X for non-relativistic QM models for one or several spinless particles may be "emulated" in this "realistic" dBB picture, but that's far from enough to do physics today, and all opinions that the theory agrees with more than that are flawed ideas based on wishful thinking, neverending promises, and lies.

The pilot wave theory can never deal with quantum field theory or any other relativistic theory. It's not just the absence of the Lorentz symmetry. It's also the existence of observables with discrete spectra that appear everywhere and that can't be given dBB "actual value supplements".

Moreover, dBB is inevitably incompatible with the particle production - creation and annihilation of pairs in QFT. This is also easy to see.

dBB also fails to account for the actual macroscopic quantum behavior of large systems, contradicts decoherence, and I am not even discussing the aesthetic flaws that show, to a person with a good physics intuition, that it is just a completely fabricated attempt to deny the important insights that the quantum revolution has made.

It's experimentally more or less demonstrable that there's nothing special happening near the places where psi=0. In particular, the relativistic corrections don't get any stronger because of these points. In the pilot wave theory, as you admit, the "Bohmian trajectory's" velocity goes to infinity which does indicate that relativity should play an increased role there. But it doesn't.

It indicates no such thing. The Bohmian trajectory is unobservable, but relativistic symmetry is about observables only. (For Bohmian field theories, which are preferable in the relativistic case, it is irrelevant anyway, because it is $\dot{\phi}$ and not a velocity in space which becomes infinite.)

When you say it's just a "rotor", you don't actually show that the theory gives the right prediction - that S is single-valued up to additive integer multiples of $2\pi$. You don't show that because you can't - this correct constraint doesn't really follow from the pilot wave theory.

Again, if one starts with the wave function as being fundamental - as modern dBB or "pilot wave" theory does - this is as unproblematic as in quantum theory.

It becomes problematic only if one goes beyond standard dBB theory and prefers, instead, to consider R and S, or $R^2$ and v, as fundamental. This is what I prefer. But I also have a way to solve this problem, see arXiv:1101.5774. This approach also regularizes the infinity of the velocity.
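The single-valuedness constraint under discussion is easy to check numerically. As a minimal sketch (my own toy illustration, not part of the exchange above, with $\hbar = 1$): for a single-valued wave function $\psi = e^{im\varphi}$, the total phase accumulated around a closed loop - which is the change of S - is automatically an integer multiple of $2\pi$:

```python
import numpy as np

def winding(m, n_steps=1000):
    """Total phase accumulated by psi = exp(i*m*phi) around a closed
    loop, in units of 2*pi. Because psi is single-valued, the result
    must be an integer - the constraint on S discussed above."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    psi = np.exp(1j * m * phi)
    # phase differences between neighbouring points on the loop
    dphase = np.angle(np.roll(psi, -1) / psi)
    return np.sum(dphase) / (2.0 * np.pi)

print(winding(3))  # close to 3.0
```

The point at issue in the debate is whether this integer constraint is an input or an output of the pilot wave formulation; the sketch merely shows what the constraint says.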

It's ugly? Ok, I think it is a good idea to look for more beautiful interpretations. It seems the difference is about the criteria for comparison. I think giving up realism is stupid, equivalent to "Nature moves in mysterious ways". With realism and loop-free causality we need a preferred frame. That's my starting point. The next unacceptable thing is infinities. I don't have anything against hidden variables. Ok, it's not nice that they are hidden, so let's try to find them, for example by looking where they become very large or infinite - this may be the place where the theory is wrong and they become visible. Symmetry is something very important, but not as important as realism and finiteness. As simple and as symmetric as possible.

The dBB scheme works for arbitrary configuration spaces Q; there is no need to restrict it to particles. The first example of a relativistic quantum field theory (EM) is already part of Bohm's paper. For all you need regarding observables with discrete spectra, see Bohm's original paper or the text here in the blog. The classical limit in dBB theory is much easier.

Ilja,

What about having a non-local interaction without a preferred frame? e.g. preserving Einsteinian relativity.

A realistic Einstein-causal theory cannot give the violations of Bell's inequality predicted by quantum theory. So this is rather hopeless.

(I also don't like that "local" is used instead of "Einstein-causal", but this is how "local" is used today.)

And if you read what I wrote, I did not say the theory should be local, or "Einstein-causal" as you like to write. I wrote that it should 1. be non-local
2. preserve Einstein causality

Also, by Einstein causality I mean not only no preferred frame, but also Lorentz invariance.

Before you say this is impossible, let me remind you that before 1905 everybody in the world thought that
1. Inertial frames are equivalent
2. speed of light is frame independent

were incompatible.

Sounds like I misunderstood you, but, whatever, I see no reasonable chance to make realism compatible with the preservation of Einstein causality.

What's wrong about a hidden preferred frame?

The most horrible point: "It's been really known not to exist since 1887, when its inevitable prediction of the aether wind was falsified by Morley and Michelson. That's the end of the story. A theory without it had to be designed. Einstein showed that Lorentz invariance was needed by every theory that avoids the pathological, because falsified, prediction of the aether wind."

This sounds like the type of argument from "relativists" who have not even heard of the Lorentz interpretation, which has a hidden preferred frame. So, relativity 101: there have been two interpretations of the Lorentz-Einstein theory - the Minkowski interpretation, without an ether but with a spacetime, and the Lorentz interpretation, with absolute time and an ether which distorts rulers and clocks in such a way that one cannot measure absolute time, so that the preferred frame remains hidden. Both variants predict Lorentz symmetry for all observables, and the same result for Michelson-Morley. So the MMX does not falsify the Lorentz interpretation.

Ok, maybe lumo used a polemical way to point out that the Lorentz interpretation has a problem explaining why the preferred frame is hidden? That would be fine, because this is really an interesting problem. How to solve it? The next failure in lumo's answer: "Without an infinite amount of fine-tuning, you just can't get it." Really no other way? Just an idea: It is quite typical that the symmetry groups of a fundamental theory and of its approximation are different. Fine, lumo even has a nice theory about this: "Quite generally, the recipe for 'partially valid' symmetries in particle physics goes in the opposite direction. They're preserved at short distances, in the fundamental equations, and broken at long distances where symmetry-breaking mechanisms become important."

Oh, really, only in this direction? Ever heard of lattice theory? The fundamental theory has a discrete symmetry; its large-distance approximation, instead, has a continuous symmetry group. A nice example is the silicon lattice. It, of course, has some preferred planes. But if one considers its mechanical properties at large distances and at lowest order, these preferred planes become unobservable and we obtain rotational symmetry. So, the other direction exists too. Approximation means loss of information, and the result of a loss of distinguishing information may be an increase in symmetry.
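This emergence of symmetry in the long-wavelength limit can be checked in a few lines. As a minimal sketch (my own toy example, not from the discussion above): for a square lattice of unit masses coupled by nearest-neighbour unit springs, the phonon dispersion $\omega^2(k) = 4\sin^2(k_x/2) + 4\sin^2(k_y/2)$ is strongly direction-dependent at short wavelengths but approaches the rotationally symmetric $\omega^2 \approx k^2$ as $k \to 0$:

```python
import numpy as np

def omega2(kx, ky):
    """Squared phonon frequency of a 2D square lattice of unit masses
    and unit nearest-neighbour springs (lattice constant 1)."""
    return 4.0 * np.sin(kx / 2.0)**2 + 4.0 * np.sin(ky / 2.0)**2

def anisotropy(kmag):
    """Relative difference between propagation along a lattice axis
    and along the diagonal, at fixed |k| = kmag."""
    axis = omega2(kmag, 0.0)
    diag = omega2(kmag / np.sqrt(2.0), kmag / np.sqrt(2.0))
    return abs(axis - diag) / axis

print(anisotropy(2.0))   # short wavelength: direction clearly matters
print(anisotropy(0.01))  # long wavelength: nearly rotationally symmetric
```

The discrete 90-degree symmetry of the lattice is still exact at all scales; what the large-distance limit adds is the full rotational symmetry, which is invisible in the fundamental equations.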

Let's clarify: These are only simple common sense arguments, appropriate for a blog, to show where lumo's arguments fail. The problem remains: To explain why we have Lorentz symmetry for the observable effects. Fortunately, it has been solved in arXiv:gr-qc/0205035. In this paper, I have derived the Lagrangian of my theory from some simple first principles, and this gives, as a side effect, the Einstein equivalence principle, thus, local Lorentz symmetry. So, yes, there is a problem, but it is not unsolvable, as lumo claims with obviously weak arguments, but already solved.

Another rather trivial example of a higher symmetry obtained by approximation is equilibrium. In the simplest example of global thermodynamic equilibrium we obtain, instead of a lot of inhomogeneous non-equilibrium solutions, only homogeneous equilibrium solutions, thus, we obtain translational symmetry not present in non-equilibrium theory. Something similar happens in dBB theory. We start from a nonlocal theory, and consider quantum equilibrium. And the theory reduces effectively to quantum theory, with quite different symmetries - those of the Hamiltonian. In particular, if the Hamiltonian has the appropriate relativistic symmetry, the predictions about observables will show relativistic symmetry, and it becomes impossible to use the nonlocal fundamental features for information transfer.

Sorry, but No! I tried again but I still think LM is right... You just cannot change the place of the "hidden variable" and think you cured everything. You remain with the same problem explained for the spin 1/2 electron...

Do you mean the choice of the beable, particles vs. fields? This changes a lot because you don't have to handle particle creation.

Do you think about how to handle Dirac particles in dBB field theory? A completely different and nontrivial question, there are various ideas about this. My own approach gives only pairs of Dirac fermions together with a massive scalar field, to be interpreted as electroweak doublet together with some dark matter. See arXiv:0908.0591 for how to reduce this to a simple scalar field with strange potential. How to handle a scalar field in dBB is well-known and simple.

OK, let me clarify. I agree with what you just wrote. You can't have traditional notions of causality. But, relativity doesn't say that we can't modify notions of causality. It only says C is constant in all frames, and inertial frames are equivalent. Now, if we modify our traditional ideas about causality, then we can save Einsteinian relativity and also preserve realism. No preferred frame is necessary.

As an experimentalist in particle physics, I hope I am a student of reality. Quantum mechanics is a beautiful, self-contained mathematical framework that works for all the known data, and at the same time an intuition can be developed about how nature behaves in the microcosm, which helps in looking for new, unexpected effects.

For an experimentalist, a new microcosm mathematical framework which gives the exact same measurable predictions is not interesting or relevant to reality, it is a mathematical game. Are there any predictions of this new mathematical framework, supposing that all of Lumo's objections are met, which diverge from the predictions of the standard QM mathematical framework? Is there an experiment that can show it up?

If not, the adjective "real" cannot really be applied to mathematics, except if one is talking about the form of written formulae. In my books, "real" in physics means "measurable".

It sounds kind of boring to be an experimentalist - no matter how it works, why it works, what it means, I am happy if I can feed it with numbers and get predictions for my experiments. Fortunately, not all experimentalists have this attitude. I read a book by Zeilinger (Einsteins Schleier). He is an experimentalist and he is interested in what it means. The quest for the meaning of quantum mechanics was probably the driving motive for his career choice and his work.

There are various formulations and various interpretations of QM, and every formulation gives you a unique perspective. Through every formulation you understand the underlying theory better. I remember Feynman talking about the same thing in one of his lectures (The Character of Physical Law).

Quantum mechanics is very interesting for philosophers. The questions of reality, ontology, knowledge etc. were always the traditional domain of philosophy, and QM can tell us much about these things. Unfortunately, not many philosophers understand QM, since to understand it you have to spend years studying physics.

The various formulations of QM are all within the same framework/postulates. This proposal adds another, meta level of complexity without giving any physics results different from those of the simpler levels, except philosophical preferences. I am interested in the physics, not the philosophy.

Fine. But this is simply a direction of research which I would not follow. I think there are a lot of other people following these directions, while I'm almost alone in the other direction.

Of course lumo is right if he argues that giving up some symmetry is not nice. I argue only that giving up realism or causality is even worse. But the point is not even what is worse, because it is clearly reasonable to look in different directions. I have found a quite nice one, with no competition, because to research in this direction is anathema. Quite comfortable, if one does not need a job. Interesting problems with reasonably simple solutions abound, because nobody looks for them.

“Every formulation offers a unique perspective” sounds like a truism. Of course for philosophers such truisms can be interesting, especially if you agree with Wittgenstein that “philosophy leaves everything as it is”. But physics does not leave everything as it is: the point of physics is not to describe the same thing again in a new way but to discover new phenomena, explain things that have not been previously explained, suggest new experiments etc.

If you have two formulations of a theory that are formally equivalent (in the sense that they can be used to derive the same mathematical formulas in all areas of applicability of the theory) they may still differ in their convenience and effectiveness. Things that take simple form in one formulation may become complex and convoluted in the other. I think this is much more important to a physicist than the purely psychological comfort of being able to retain “realism”.

Philosophers, who generally don't compute things or apply mathematics to resolve confusing physical puzzles (like the recent discussion of black-hole "firewalls"), have different priorities, but for most physicists the key issue should be: how good is Bohmian mechanics compared with the standard Copenhagen approach as a computational tool? Even if Lubos's objections can be overcome, the record suggests that very few new phenomena have been discovered by means of Bohmian mechanics, and most of the work done within this formalism is "parasitic" on the standard QM formalism. The only area in which this is not true is, I think, quantum chemistry. It would be interesting to hear someone suggest an explanation of this fact.

Dear Anna, I would personally not endorse the algorithm of theory selection that you propose - it's Occam's razor ad absurdum.

Of course, in the development of science there are often moments in which the newer theory *is*, or at least *looks*, more complicated than the older one, but it must still be accepted, and this necessity becomes more manifest later when further unification or the addition of new sectors or applications arrives.

The only legitimate way to rule out a theory in science is falsification - a proof of incompatibility of the theory's predictions either with themselves or with the empirical data. The pilot wave theory may be falsified in this way, but if it couldn't be, your vague philosophical observations about complexity wouldn't be a solid enough proof to abandon the framework.

Dear Ilja, this favorite verb of yours, "giving up", just doesn't belong to science. Your usage of it proves that you are not thinking about these things rationally, scientifically, impartially.

Science is not about "preserving" or "giving up" something. These labels mean nothing else than some bias, an emotional attachment to some belief. Science is about finding the truth about Nature.

Darwinism has to "give up" God, at least some previously believed essential parts of this construct. Heliocentrism has to give up the "natural" (blah blah, propaganda) assumption that the body we inhabit is the center of the Universe. A kinetic theory of heat "gives up" the idea (of phlogiston) that everything we can feel by our skin is a material with a particular atomic composition. And so on, and so on.

But it's right to "give up" these assumptions because they are simply wrong. The case of realism behind foundations of classical/quantum mechanics is *totally* analogous. One must "give up" - without any crying - the assumptions behind classical physics (and the pilot wave theory) because science has demonstrated them to be wrong. If you cry or whine, you're just not an honest scientist.

The real problem (one of very numerous problems) of the de Broglie theory isn't that it "gives up" a symmetry. It's that the theory gives wrong predictions for experiments that show that the symmetry is actually there - in some cases, an absolute contradiction that can't be fixed by any improvement; in other cases, a soft contradiction which means that the pilot wave theory has to be unacceptably fine-tuned or fudged to account for the observations. The first situation is a straight and immediate falsification of the theory; the latter is a gradual disfavoring of the theory that may become arbitrarily strong and urgent.

For me, "possibly measurable in 500 years" also means "real".

The problem of infinite velocities in dBB theory near the zeros of the wave function suggests (if one assumes that there are no infinities in Nature) that there has to be some regularization. As a consequence, in the regularized subquantum theory there would be no point with exactly zero probability. See arXiv:1101.5774.
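The divergence in question is easy to exhibit. As a minimal numerical sketch (my own toy example with $\hbar = m = k = 1$, not taken from the paper), consider the superposition $\psi = e^{ikx} + a\,e^{-ikx}$: as $a \to 1$ a node forms at $kx = \pi/2$, and the guidance velocity $v = (\hbar/m)\,\mathrm{Im}(\psi'/\psi)$ there blows up like $(1+a)/(1-a)$:

```python
import numpy as np

hbar = m = k = 1.0

def bohm_velocity(x, a):
    """Guidance velocity v = (hbar/m) * Im(psi'/psi) for the
    superposition psi = exp(ikx) + a*exp(-ikx); a node forms
    at kx = pi/2 in the limit a -> 1."""
    psi = np.exp(1j * k * x) + a * np.exp(-1j * k * x)
    dpsi = 1j * k * np.exp(1j * k * x) - 1j * k * a * np.exp(-1j * k * x)
    return (hbar / m) * np.imag(dpsi / psi)

x0 = np.pi / 2 / k              # location of the would-be node
for a in (0.9, 0.99, 0.999):    # v = (1+a)/(1-a) there: 19, 199, 1999
    print(a, bohm_velocity(x0, a))
```

A regularization of the kind proposed above would keep $|\psi|^2$ strictly positive and hence bound this velocity everywhere.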

I think this is a general scheme, and a reason to consider different interpretations. Different interpretations may have different weak points, which suggest modifications (regularizations) of these interpretations, which are then already different theories making different predictions. Atomic theory was, at the start, only an interpretation. Predictions came later.

From this point of view interpretations which propose hidden variables seem especially good ideas, because "hidden" in a normal situation does not mean "without problems". A preferred frame, even if hidden in the Solar system, becomes problematic if one considers solutions with causal loops, but possibly already in more harmless situations. Which? These may be the places where new physics appear.

So my theory of gravity, arXiv:gr-qc/0205035, identifies such places as the big bang (replaced by a very rigid inflation with a big bounce, and an additional dark energy term which would shift the expansion toward $a'=0$) and black holes near the horizon, a place where according to GR nothing strange happens. Now there are discussions about firewalls at the same place.

By the way, I would not name QM (at least in the Copenhagen interpretation) self-contained.

You have forgotten Bell's inequality. Bell was at that time almost the only proponent of dBB, so this suggests good output per man-year.

Technically, the classical limit is much easier in dBB - you don't have to consider wave packets; instead, already in a wide packet ($\rho$ close to constant) the Bohmian trajectories are almost classical. This may explain why it may be useful in chemistry.
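This can be illustrated with the textbook free Gaussian packet, where the Bohmian trajectories are known in closed form (a minimal sketch with $\hbar = m = 1$; my own illustration, not part of the discussion above). For a packet at rest with initial width $\sigma_0$, the trajectory starting at $x_0$ is $x(t) = x_0 \sqrt{1 + (\hbar t / 2m\sigma_0^2)^2}$, which stays close to the classical $x(t) = x_0$ precisely when the packet is wide:

```python
import numpy as np

hbar = m = 1.0

def bohm_traj(x0, t, sigma0):
    """Exact Bohmian trajectory in a free Gaussian packet centred at
    rest with initial width sigma0. The classical (zero-momentum)
    trajectory would simply be x(t) = x0."""
    return x0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

# narrow packet: the trajectory is driven far from the classical one
print(bohm_traj(1.0, 1.0, 0.1))
# wide packet (rho nearly constant): the trajectory stays almost classical
print(bohm_traj(1.0, 1.0, 10.0))
```

No wave-packet construction or stationary-phase argument is needed here: the classicality of the wide-packet trajectories can be read off directly.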

Looking at things from different perspectives can give you better understanding of the problem, can help you train your intuition and this can help you later to look for future research directions and advance physics.

Why study the Hamilton-Jacobi theory of classical mechanics? It gives you nothing new except a better understanding of classical mechanics. Later it unexpectedly helped Schrödinger to invent his equation.

And the same applies to string theory. First, various formulations were discovered. Later they were partly unified into M-theory. By studying the various perspectives (versions) of string theory, you gain a better understanding of the whole, of the structure underlying all of the versions. So even in physics, it is always a good idea to study a problem from all available perspectives, because it helps you to understand the problem better, and if you understand a problem better, you have a better chance of coming up with new solutions.

Scientists are human beings with human errors and emotions, me too, not a problem as long as other scientists follow other emotions and make different errors. Giving up or preserving some principles are strategies for the search of new theories, and I insist that it is useful if different scientists follow different strategies. Most of them will fail, that's the risk.

Wrong predictions for experiments are not (yet) a problem for an interpretation with an equivalence theorem with QM.

Not having an explanation for an observable symmetry is, without doubt, a serious problem. See my other reply for how I propose to solve it.

How many man-years have been spent on versions of string theory unable to handle fermions? I doubt you think this was wrong. dBB theory is in a better situation now if we count open problems.

I completely agree that it is worth studying a problem or a phenomenon from all available perspectives - I don't think many people would disagree with such a general statement. If (and that is a big if) Bohmian mechanics is really capable of offering different (and correct) insights into quantum mechanics, then by all means people should study it (I don't think even Lubos would disagree with this conditional statement). However, I don't think that "preserving reality" alone provides sufficient justification - and that seemed to be a key element of Ilja's original argument.

Dear Ilja, nope, your opinion that a researcher's bias is "compensated" by other emotions of someone else is completely and fundamentally wrong. There is absolutely no reason why the "average emotions" of all the researchers should be close to the truth, why the errors caused by the emotions should "cancel".

The opinion that they cancel is precisely the idiotic meme that e.g. Feynman beautifully attacked in his Judging Books by Their Covers:

http://www.textbookleague.org/103feyn.htm

Search for Emperor of China's nose.

A string theory without fermions was never argued to be a right description of phenomena that obviously do contain fermions - it would indeed be as preposterous as what you're doing.

I haven't studied the Bohmian mechanics enough to be able to make strict judgements.

From what I gathered, in the interpretation of quantum mechanics we either need to give up locality or reality. Some interpretations give up reality (Copenhagen), some locality (Bohm), some both. I personally believe that it is probably necessary to modify the concept of reality. Bohmian mechanics is very non-local (the quantum potential spreads instantly, FTL). But these FTL influences lead to time paradoxes. The preference for various interpretations is a problem of psychology - what you find more tolerable to give up.

It is clearly nonsense to combine the various contributions - of course they don't average out. Research directions which fail contribute nothing to the final results. But we don't know in advance which research directions will be successful (with you as an exception for string theory, of course) and which will fail. If all scientists followed the same strategy, there would be a much larger probability that all would fail. If different scientists follow different strategies, most of them will fail, but there will be, with higher probability, some who make the correct choice.

The advantage of science is that it has a method to evaluate the final results of the work of different people in very different directions, following different strategies.

Not by nonsensical averaging, counting papers and taxpayers' money spent on them (here string theory wins), but by identifying the single one which was not a complete failure.

No, dBB does not lead to time paradoxes, because it assumes a preferred frame. A hidden one, so no problem with relativistic predictions for observables.

Time paradoxes are a problem of GR, not of quantum theory or dBB.

Dear Ilja, the only problem is that some theories have already failed - theories containing any Lorentz-violating aether failed in 1887, for example.

Correct. So what? That's what I have said - most theories have failed and will fail in the future too. Nobody proposes to go back to the pre-relativistic ether falsified in 1887. What I propose in the direction of ether theory is a generalization of the Lorentz ether to gravity, arXiv:gr-qc/0205035, which gives a metric theory of gravity with the GR equations in a limit, and an ether model which gives the fermions and gauge fields of the SM, arXiv:0908.0591. It's something very different from the old ether theory, which tried to explain only the EM field. What it shares with the old ether is the preferred frame of Lorentz and the attempt to use condensed matter models to explain the observable fields. I don't see a reason to reject these ideas in general, forever, only because the old ether failed to explain the EM field.

In the cited paper you say "Giving up realism means giving up the search for realistic explanations of observable phenomena."

One can instead accept the experimental evidence - 'observable phenomena' beautifully described by QM (leaving aside the aether) - as better revealing true reality.