Thursday, November 17, 2011

Tom Banks and anti-quantum zealots

When my (former) PhD adviser Tom Banks wrote his first guest blog for Cosmic Variance, I couldn't quite agree with his views on eternal inflation and/or understand his groundbreaking 22nd century conceptual ideas based on holography, even though I've spent a nonzero fraction of the past 12.5 years on attempts to internalize his proposals and their logic.

However, Tom decided that he must like guest blogging after all ;-) and the situation is very different in his second posting:

Guest Post: Tom Banks on Probability and Quantum Mechanics (CV, by Tom Banks, PDF)
Of course, I agree with (almost or completely) everything he says about the logic of quantum mechanics.

And I am kind of pleased and amused that Tom has finally managed to jump from the Rutgers and UCSC ivory towers to the Earth's surface where the mortals live. Despite Tom's apparent expectations, this surface is a realm filled with anti-quantum zealots and other morons (some of whom are routinely called philosophers, as we're going to see).

Even though some of the most critical comments beneath Tom's article have been erased, it still seems that a majority of the survivors hysterically dislikes what Tom has to say about 20th century physics. While many readers surely think that I must enjoy fights against tons of assorted idiots and crackpots, I actually hugely prefer watching someone else who gets beaten for saying the truth, at least if it is someone who has been denying that one may get beaten for saying the truth. ;-)




Let me start with a systematic review of Tom's text. Once Tom Banks is re-introduced by Sean Carroll, his first paragraph says:
Rabbi Eliezer ben Yaakov of Nahariya said in the 6th century, “He who has not said three things to his students, has not conveyed the true essence of quantum mechanics. And these are Probability, Intrinsic Probability, and Peculiar Probability”.
Well, this is a part of the "bring culture and humor to the physicists" paradigm. Unless Tom Banks can't distinguish the 6th century from the 20th century, and I think it's very unlikely (but not strictly impossible) that he can't :-), he surely realizes that 6th century rabbis didn't know quantum mechanics. In fact, I would interpret the Old Testament as circumstantial evidence that unlike their descendants, the old Jews sucked even in astronomy and geology. :-) If Tom were serious, he failed to convince me that 6th century rabbis discussed quantum mechanics.

Moreover, there have only been two famous people who were called Eliezer ben Jacob: the first one who lived in the 1st century and the second one who lived in the 2nd century after Jesus Christ, a man whom Tom has called just another Jew. ;-)

Well, did this attempt to bring culture to the physics audience work? In the comment #9 which had to be later removed, a reader nicknamed OXO wrote something like this:
I tried hard to Google search for Rabbi Eliezer ben Yaakov of Nahariya but I failed: did Mr Banks simply mean Eliezer [some surname of a guy born in 1979]? But why Nahariya would be there? Well, I am not going to read the rest of the junk!
Too bad, OXO, but you managed to skip 99.9% of Tom's article and 100% of its detailed physics content. That would be a pretty bad starting point for someone who would want to rationally judge Tom's article and its validity – as well as for all those who have decided to learn something from it.

The third sentence of Tom's essay is already all about physics. He explains what probability is; it is assumed that there are just finitely many possible outcomes. It's been explained that the frequentist approach to probability understands it as \(N/N_{\rm total}\) obtained from a large number of repetitions of the same situation; the Bayesian version of the probability concept tells you about your expectations, "how you should bet".

There are controversies about the meaning or superiority of the frequentist and Bayesian notions of probability, too, but I don't want to open them because they're relatively peaceful and they don't have much to do with the disagreements concerning the foundations of quantum mechanics. Tom says that most scientists prefer the Bayesian interpretation of probability, "how you should bet". Well, I think that it's true that most scientists prefer it and I also admit that in many cases, I don't avoid using the Bayesian attitude, either. Still, I consider the frequentist definition to be more well-defined and less subjective, if you wish. When comments about "confidence" and "probability" become truly quantitative, and especially when some probabilities are meant to have a lasting value, a frequentist interpretation of probabilities must become applicable.

Of course, I guess that Tom won't deny that for the concept of probabilities to be useful, one has to know that whenever they can be accurately measured, they're measured from the \(N/N_{\rm total}\) frequentist formula. Just to be sure, I would agree with Tom and others that probabilities make sense even if the situation only occurs once or a few times; it makes sense to talk about them even if we're going to observe something just once. The only comment to add is that if we want to measure the probabilities of particular statements or outcomes accurately, we have to repeat the same situation many times.
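Just to illustrate the \(N/N_{\rm total}\) formula in action, here is a minimal Python sketch; the function name and the underlying probability 0.3 are my illustrative choices, not anything from Tom's text:

```python
import random

def frequentist_estimate(p_true, n_trials, seed=0):
    """Estimate a probability as N/N_total over many repetitions of the same situation."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials) if rng.random() < p_true)
    return hits / n_trials

# The relative frequency approaches the underlying probability as the
# number of repetitions grows (law of large numbers).
for n in (10, 1000, 100000):
    print(n, frequentist_estimate(0.3, n))
```

With a handful of trials the estimate is noisy; with 100,000 trials it sits within a percent of 0.3, which is the whole point of repeating the same situation many times.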

Going quantum mechanical

The first thing that the quantum picture of physics forces us to do is to adopt an intrinsically probabilistic description of the world, Tom says. Probabilities are fundamental.

At some point, Tom starts to discuss one of Feynman's textbook examples of two-dimensional Hilbert spaces, the ammonia molecule. I hadn't mentioned it for years but it just happens that I used it as an example of oscillations in yesterday's discussions with Cynthia. Tom's usage of the same setup is either a coincidence, a result of telepathy or another ESP phenomenon that both of us are going to deny ;-), or perhaps a sign that Tom read my exchange with Cynthia. Another explanation of the coincidence is that it is simply a good and seemingly classical example of oscillations in a system with two states. The \(NH_3\) molecule resembles a tetrahedron with three hydrogen atoms arranged into a triangular base and a single nitrogen atom which is either above or below the base. When it's above it, there's actually a nonzero probability amplitude for the nitrogen atom to tunnel through the hydrogen triangle and to appear at the bottom. For this reason, the ammonia molecule is oscillating between the upward and downward pyramid states with a characteristic frequency.
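A reader who wants to see the oscillation explicitly may play with this little Python sketch; the energy \(E_0\) and the tunneling amplitude \(A\) are made-up illustrative numbers (with \(\hbar=1\)):

```python
import numpy as np

# Two-state model of the NH3 molecule: "up"/"down" = nitrogen above/below
# the hydrogen triangle; the off-diagonal -A is the tunneling amplitude.
E0, A = 1.0, 0.25
H = np.array([[E0, -A],
              [-A, E0]])

def prob_up(t, psi0=np.array([1.0, 0.0], dtype=complex)):
    """Probability that the nitrogen is still 'up' after time t."""
    # Unitary evolution psi(t) = exp(-iHt) psi(0) via diagonalization.
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    psi_t = U @ psi0
    return abs(psi_t[0])**2

# The molecule oscillates between the two pyramid states:
# P_up(t) = cos^2(A t), fully inverted at t = pi/(2A).
print(prob_up(0.0))            # 1.0
print(prob_up(np.pi / (2*A)))  # ~0.0: the pyramid has flipped
```

The characteristic frequency is set entirely by the tunneling amplitude \(A\); that is the famous 24 GHz ammonia inversion line in the real molecule.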

Later in the text, Tom argues that because of the interference or, equivalently, the failure of the quantum projection operators to commute with each other, there can be properties of a quantum system, \(A\) and \(B\), such that they're neither strictly mutually exclusive nor completely compatible and the common rules such as \(P(A{\,\rm or\,} B) = P(A)+P(B)\) no longer hold; the double-slit experiment is a commonly used setup to present this idea although the wording is usually different. You should really admit that the word "or" is meaningless when the projection operators corresponding to \(A\) and \(B\) are non-commuting.
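Here is a tiny numerical sketch of both facts – non-commuting projectors and the failure of \(P(A{\,\rm or\,} B)=P(A)+P(B)\) in a double-slit-like setting; the two states and the amplitudes 0.6 and 0.8 are arbitrary numbers chosen for illustration:

```python
import numpy as np

# Projectors onto two non-orthogonal states of a two-state system:
# A = "spin up along z", B = "spin up along x".
up_z = np.array([1.0, 0.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2)
P_A = np.outer(up_z, up_z)
P_B = np.outer(up_x, up_x)

# The projectors do not commute, so the property "A or B" is meaningless:
# there is no projector representing the union of the two properties.
commutator = P_A @ P_B - P_B @ P_A
print(np.allclose(commutator, 0))   # False

# Double-slit flavor of the same fact: amplitudes add, probabilities don't.
psi1, psi2 = 0.6, 0.8               # amplitudes through slit 1 and slit 2
p_naive = abs(psi1)**2 + abs(psi2)**2   # P(1) + P(2) = 1.0
p_actual = abs(psi1 + psi2)**2          # 1.96: constructive interference
print(p_naive, p_actual)
```

The cross term \(2\,{\rm Re}(\psi_1^*\psi_2)\) is exactly the piece that the classical "or" rule forgets.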

Hurricanes

But returning to the main line of thought for a while, Tom correctly says that the "apparent nonlocality" of probabilities could be seen even in classical physics. If we're observing a tropical cyclone, we may make probabilistic predictions on whether the cyclone is going to destroy New Orleans or Galveston. When we already know that it was New Orleans that was demolished (because we happened to talk about Katrina), we may immediately shrink the probability distribution and erase its pieces that covered Galveston. (We are assuming that the cyclone doesn't have enough energy to hit both cities.) One could also protest that such a shrinking appears "instantly" or "faster than light" except that no one does so: everyone understands that we're only talking about probabilities, i.e. expectations whether one or another city will experience a tragedy, and probabilities are "living inside our minds" and they may change instantly.

The same logic applies to predictions of polarizations of photons in an EPR experiment as well. When the first photon is measured, we may instantly refine our expectations about the other photon. We replace the overall probabilities by the conditional probabilities where the known outcomes of partial measurements are already taken into account. I discussed this analogy of the quantum mechanical wave functions with the classical probabilistic distributions in Density matrix and its classical counterpart, written in June 2011, and I may assure you that Tom is just using slightly different words to say the same thing.
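The classical conditioning that both the hurricane and the EPR discussions rely upon is literally a two-line operation; the joint probabilities below are invented for illustration:

```python
# Classical analogue of the "instantaneous collapse": conditioning.
# A made-up joint distribution over (city hit, severity of the damage).
joint = {
    ("New Orleans", "severe"): 0.54,
    ("New Orleans", "mild"):   0.36,
    ("Galveston",   "severe"): 0.04,
    ("Galveston",   "mild"):   0.06,
}

def condition(dist, predicate):
    """Keep the outcomes compatible with what we learned; renormalize."""
    kept = {k: v for k, v in dist.items() if predicate(k)}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

# The moment we learn that New Orleans was hit, the Galveston part of the
# distribution is erased — instantly, because the distribution lives in
# our description of the world, not in spacetime.
posterior = condition(joint, lambda outcome: outcome[0] == "New Orleans")
print(posterior)   # severe: ~0.6, mild: ~0.4
```

Replacing overall probabilities by conditional ones is all that "happens" to the distribution, in the hurricane case and in the EPR case alike.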

Of course, the reason why some people reject this explanation of the absence of any problems in quantum mechanics is that they refuse to accept that the wave function is just a tool to predict probabilities and plays an analogous role to the phase space distribution functions in classical mechanics. They always want to imagine that the quantum wave function is a classical object, a classical wave, something that one may in principle touch or measure. This is a bloody dogma, an incorrect assumption they're simply not able to abandon regardless of the staggering amount of evidence that they're just wrong, which is why those people can never understand quantum physics; this is why they can never understand modern science.

Tom says that the anti-quantum zealots (he doesn't use any fixed term for this group of people) have a problem with "determinism". Some of the zealots living in the comment thread protest: it's the realism they need to preserve. Are these two assumptions really different? Well, they are different. You could construct classical laws evolving a classical configuration but the equations of motion could contain unpredictable terms that depend on "random generators". Well, yes, quantum mechanics rejects the claim that the world may be described in this way, too.

I need to say the following: if the classical equations of motion had some extra random generators, the possible predictions would be inherently probabilistic, anyway. But once we know that only probabilistic descriptions are possible, we must search for the most general system of laws to predict the probabilities (as long as we want to claim that we are not prejudiced). And it's simply not true that a classical system with random generators is the most general framework for such laws. Quantum mechanics is more general: it predicts the probabilities of all allowed and well-defined outcomes despite its refusal to produce some "objective truth" about the state of the system prior to the observation (which verifies the probabilistic predictions). There's really no reason why such a "real objective state of the system" should exist before the measurement. We don't have to talk about it because it can't be given an operational definition; it can't be measured. Only the outcomes of the experiments may be measured and quantum mechanics gives us all the probabilistic predictions you need.

To summarize, while realism and determinism are different things, it is rationally unjustifiable to refuse to give up realism once you admit that you have to give up determinism – because in the presence of random generators in the equations of motion, realism doesn't allow you any "more accurate" or "more complete" predictions than the general probabilistic predictive scheme such as quantum mechanics. And indeed, if you collect evidence, you may see that it vastly prefers the non-realist description of quantum mechanics.

Ammonia and the spin-1/2 system

Well, this is the point where Tom introduces the ammonia molecule so his text is a bit more organized than my summary seems to suggest. Perhaps sections with headlines could be enough to make the inner structure of Tom's text more comprehensible. ;-) Tom uses the spin-1/2 notation for the two-state ammonia molecule Hilbert space and discusses that (if I can use the spin-1/2 language even here) the component of the spin with respect to any axis \(a\) not parallel to another axis \(b\) is unknown even if the spin with respect to \(b\) is fully and exactly known with certainty. However, one may always calculate the probability distributions.

Tom mentions that one may define questions that have an uncertain answer – but calculable probabilities – even in classical physics. They may be awkward but they may be clearly articulated in the context of classical physics. In this sense, quantum mechanics doesn't really force you to generalize your conceptual canon beyond the toolkit of probability distributions on the classical phase spaces. Once again, all the problems that people may have with the quantum setup stem from their inability to accept that the fundamental description of reality is probabilistic in nature.

For these reasons, the truly new aspect of quantum mechanics relative to probabilistic classical physics is that observables – physical quantities, even the important and natural ones such as positions and momenta – don't commute with each other. That's what guarantees that even if some quantities are well-defined and completely unambiguously determined in the initial state, other quantities – or even the same quantities at a later time – remain ambiguous and only probabilities of different outcomes may be predicted. The nonvanishing commutators are what makes the usage of uncertain properties and probabilities inevitable.
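In the spin-1/2 language, the nonvanishing commutator and its consequence – certainty about one observable forces uncertainty about a non-commuting one – can be checked in a few lines; the Pauli matrices are standard, the rest is my illustrative setup:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Nonvanishing commutator: [sigma_x, sigma_z] = -2i sigma_y != 0.
comm = sigma_x @ sigma_z - sigma_z @ sigma_x
print(np.allclose(comm, 0))   # False

# A state in which sigma_z is sharply determined (eigenvalue +1) ...
psi = np.array([1.0, 0.0], dtype=complex)
mean_z = (psi.conj() @ sigma_z @ psi).real

# ... necessarily has maximal uncertainty in sigma_x:
mean_x = (psi.conj() @ sigma_x @ psi).real
var_x = (psi.conj() @ (sigma_x @ sigma_x) @ psi).real - mean_x**2
print(mean_z, var_x)   # 1.0 and 1.0: sharp z, maximally fuzzy x
```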

Measurement theory

All these things seem obvious, Tom stresses, so it is surprising why it took so much time for people to realize (and, later, to viscerally accept) that important observables (and therefore the corresponding projection operators) are non-commuting in the world around us. OK, what's the reason why people have been assuming that things commuted and could be determined simultaneously and forever?

The answer is related to the existence of macroscopic objects and decoherence. Tom says that quantum theory textbooks are doing a bad job: not only do they fail to fully explain decoherence; they fail to fully explain almost any central idea that is critical for the derivation of the classical limit from quantum mechanics. Usually, textbooks only say that for large objects, the position and momentum are so large that the Heisenberg lower bound for the product of their uncertainties ceases to be a constraint in practice. However, wave functions in realistic situations still spread and most beginners (and sometimes not only beginners) don't understand "what it is" that ultimately makes particles localized and makes their positions obey the classical probability calculus.

In other words, the textbooks don't really explain why the coherence goes away: it is because of the large number of internal states of the macroscopic objects and due to their chaotic evolution that depends on the quantities that become classical (e.g. the center-of-mass positions and total momenta). For this reason, the corrections to the classical logic are as small as \(\exp(-10^{26})\), not far from the inverse googolplex. This number is expo-exponentially small, i.e. far tinier than any numbers we can normally "see" in the Universe (or their inverse values).
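A cartoon version of the mechanism, assuming each environmental degree of freedom imperfectly records the which-branch information with an overlap of 0.9 per "qubit" (an invented number), shows where the expo-exponential suppression comes from:

```python
# Toy decoherence model: a system in the superposition (|0> + |1>)/sqrt(2)
# kicks each of n_env environment qubits differently, so the two branches
# get recorded in the environment. The off-diagonal element of the reduced
# density matrix is multiplied by the overlap <E_0|E_1> of the two
# environment states, which factorizes as overlap_per_qubit**n_env.
def off_diagonal(n_env, overlap_per_qubit=0.9):
    return 0.5 * overlap_per_qubit**n_env

for n in (1, 10, 100, 1000):
    print(n, off_diagonal(n))

# Already at n = 1000 the interference term is below 1e-40; for a
# macroscopic environment with n ~ 10**26 degrees of freedom, it is of the
# expo-exponentially small order discussed in the text.
```

The only inputs are "many degrees of freedom" and "imperfect per-degree overlap"; the exponential killing of the off-diagonal terms then takes care of itself.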

The smallness of these errors is why we may approximate all the quantum-predicted probabilities for (even mildly) macroscopic objects by classical conditional probabilities. Without running into inconsistencies, we may imagine that the observables describing the center-of-mass properties of (even mildly) macroscopic objects follow classical (but probabilistic) physics. As I have repeatedly explained on this blog, this is the only "real content" of the mysterious (bogus) process known as the "collapse of the wave function". Nothing is really physically collapsing. The only thing that happens is that the correlations between various (unobserved) degrees of freedom become so complex that any potential for a future interference evaporates.

That's why we may just assume that the quantum probabilities for the macroscopic questions have become ordinary classical probabilities. One of the outcomes is ultimately observed in the real world; but that doesn't need a special process. It's just a direct consequence of a careful interpretation of the probabilities. We're saying that the probability is 90% at some point that Katrina will hit New Orleans and 10% that it will hit Galveston. This statement doesn't say that someone may see a "real cloud in the skies" whose 90% is in one city and 10% is in another city. It says that one of the cities will be demolished and we don't know quite exactly at the moment which one it is even though it will probably be New Orleans. It's still a probability distribution, not a real cloud.

Tom's partial confinement in the ivory tower is reflected in innocent statements such as
The general line of reasoning outlined above is called the theory of decoherence. All physicists find it acceptable as an explanation of the reason for the practical success of classical mechanics for macroscopic objects.
What the second sentence really wants to say is that anyone who misunderstands why decoherence explains why classical logic emerges as a limit of quantum mechanics is an incompetent imbecile because the tools needed to verify that this statement is correct are just a little bit more advanced than those needed for \(2\times 2=4\). However, if you use a sociological definition of the term "physicist", the second sentence Tom wrote is clearly incorrect. Many people among the incompetent imbeciles call themselves physicists. Many of them are also called physicists by many people in their environments, some of them submit papers to the arXiv, and some of these incompetent imbeciles even present themselves as world's experts in the foundations of quantum mechanics!

Welcome to the real world, Tom. ;-)

Anti-quantum zealots are eager to refuse any "positively sounding" sentence about the foundations of quantum mechanics, regardless of arbitrarily rigorous proofs that they're wrong. The refusal of quantum mechanics is a religious dogma.

Tom's summary

Tom says that quantum mechanics may have been surprising for mammals that have been trained by millions of years of experience for which classical concepts were good enough. But quantum mechanics is not like Jabberwocky, a fictitious language used by Mother Nature according to Freeman Dyson and designed so that humans could never understand it. Quantum mechanics actually can be understood.

The text is summarized by comments that all apparent "non-locality-related" and other mysteries of quantum mechanics are shared by all descriptions of physics that use the concept of probabilities. Quantum mechanics is new because the probabilities are "intrinsic" and one has to accept that they're not derivable from any "more fundamental" deterministic and/or realist (i.e. classical, whichever of the first two adjectives is identified with "classical") starting point. The failure of the observables to commute is linked to the non-existence of such a hypothetical deeper classical explanation.

Einstein's expectation that God doesn't play dice may be classified as nothing more than prejudice and hubris. The need to listen to what Nature is telling us is stressed. All the evidence seems to be pointing exactly in the direction that the anti-quantum zealots don't like, namely the conclusion that quantum mechanics is exactly correct, inevitable, and shouldn't be modified in any way. Many other things in the Heaven and on the Earth that were unexpected by the classical philosophy are awaiting us.

While Tom's text began with a 6th century quantum mechanical rabbi, it ends with Hamlet, who talked about ghosts. You may see the careful doses of "culture brought to the physicists": the first two sentences and the last one take care of it. This intelligent design in which the technical stuff is sandwiched in between three cultural sentences is completely spontaneous and carefully prepared, too. ;-)

Looking at some comments

The first comment is by an anti-quantum zealot called Matt Leifer who says that the lack of determinism isn't the issue: the lack of realism is. Tom avoids the term "realism" altogether; he just replaces the "unrealist" character of quantum mechanics by the adjective "intrinsically probabilistic". If something is intrinsically probabilistic, it really means that you can't or shouldn't look for a picture where the probability is derived from something else. One only predicts probabilities, not the "reality". That's the point of "intrinsically probabilistic". So Matt Leifer is wrong that Tom didn't address the complaints of those who like "realism". He did although he may have used different words. By the way, Matt Leifer refers to a new crackpot paper on the arXiv that absurdly claims that the probabilistic interpretation of the wave function is internally inconsistent.

(As I mention on Physics Stack Exchange, this paper is just breathtakingly idiotic. It claims that there is a problem or inconsistency with the calculation of probabilities, i.e. the squared inner product of two vectors in the Hilbert space. It's the kind of blunder that, in the commercial sector, the authors could make just once in a lifetime. They would be fired and could never get a similar job. In academia, this parasitic idiotic junk is thriving and it even gets hyped by equally idiotic writers in Nature. Sure, a revolution debunking quantum mechanics, be my guest.)

The second comment by P. Hayes offers a lukewarm compliment and promotes a text about the Gibbs paradox, claiming that it wasn't a paradox in classical physics. In my opinion, this has nothing to do with the topic of Tom's text. The Gibbs paradox is the fact that the entropy of very many distinguishable classical particles fails to be extensive in classical physics – essentially because the logarithm of \(N!\) grows faster than a multiple of \(N\): it's like \(N\log N\). The problem is fixed once the particles become indistinguishable i.e. in quantum mechanics. The discrepancy may also be "softly removed" by realizing that the overall additive shift to the entropy is undetermined in classical physics (because classical physics doesn't have any \(2\pi\hbar\) that determines the natural unit of volume of phase spaces in quantum mechanics). If you ban particle production or annihilation and admit that the uncertain additive shift may depend on the number of particles, this additive shift may of course be chosen so that the entropy agrees with that of the indistinguishable particles and the paradox goes away, too. I can't understand how some people may write whole long articles about trivial issues like this one today.
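If you want to see the non-extensivity and its \(1/N!\) fix numerically, here is a sketch that keeps only the volume term of the classical entropy (the strictly extensive momentum-space terms drop out of the comparison anyway; the values of \(N\) and \(V\) are arbitrary):

```python
from math import lgamma, log

# Ideal-gas entropy in units of k_B, volume term only.
def S_distinguishable(N, V):
    return N * log(V)

def S_indistinguishable(N, V):
    # Subtract ln N! (computed via lgamma) — the 1/N! counting fix
    # that quantum indistinguishability supplies.
    return N * log(V) - lgamma(N + 1)

N, V = 10**6, 1.0e6

# Doubling the system should double the entropy. Without the 1/N! it
# misses by 2N ln 2 ~ 1.4e6 — that's the Gibbs paradox, because
# ln N! grows like N ln N, faster than any multiple of N.
gap_dist = S_distinguishable(2*N, 2*V) - 2*S_distinguishable(N, V)

# With the 1/N!, the mismatch shrinks to a tiny Stirling correction.
gap_ind = S_indistinguishable(2*N, 2*V) - 2*S_indistinguishable(N, V)

print(gap_dist, gap_ind)
```

The huge first gap versus the order-one second gap is the entire content of the paradox and of its resolution by indistinguishability.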

The third comment offers a better compliment and asks whether Einstein's description of EPR correlation as a "spooky action at a distance" was legitimate. Well, the commenter obviously didn't understand Tom's text too well. The answer is No: it's not really spooky. The correlation is guaranteed because it's predicted in advance and no physical propagation of the information is needed as long as one admits that the wave function (or density matrix) is just a set of numbers encoding the probabilities, not the "classical reality".

In the fourth comment, g said: "Rabbi Eliezer ben Yaakov = Eliezer Yudkowsky? (In which case: why Nahariya?) Or is there some actual 6th-century rabbi who said something about three things students ought to be taught? (In which case: who and what? Google doesn’t seem to know.)" – Well, I don't know what's the exact basis of this mutated history at the beginning, either. At least, it succeeded in attracting the readers' attention. ;-)

The fifth comment was written by Arun who sometimes reads this blog. He's clearly an anti-quantum zealot, too. He claims that two non-commuting observables \(Q,B\) may be measured simultaneously. Well, be sure that they cannot, Arun, and your opposite opinion indicates that you couldn't possibly have been listening to a single undergrad lecture on quantum mechanics. \(Q\) may be known with certainty but if you measure \(B\) first, you change the state and the predictions for \(Q\) will inevitably get modified (and become uncertain), too. Are you sure you have never heard about this elementary fact about quantum mechanics, Arun?
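Arun's mistake can even be simulated. In this Python sketch (my toy model, not anything from Tom's text), \(Q=\sigma_z\) starts out sharp; a measurement of \(B=\sigma_x\) collapses the state and the subsequent \(\sigma_z\) statistics become 50:50:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(psi, basis):
    """Projective measurement: pick an outcome with Born probabilities
    and collapse the state onto the chosen basis vector."""
    probs = [abs(b.conj() @ psi)**2 for b in basis]
    k = rng.choice(len(basis), p=np.array(probs) / sum(probs))
    return k, basis[k]

z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

# Start with Q = sigma_z known with certainty (eigenstate "up").
psi = z_basis[0]

# Measure B = sigma_x first: the state collapses onto an x eigenstate ...
_, psi = measure(psi, x_basis)

# ... and Q is no longer certain: repeating the final measurement on
# identically prepared states, sigma_z gives "up" only about half the time.
ups = sum(measure(psi, z_basis)[0] == 0 for _ in range(10000))
print(ups / 10000)   # ~0.5, not 1.0
```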

The sixth comment, written by another, even more aggressive anti-quantum zealot named Tim Maudlin, starts in a really combative way: "Unfortunately, the understanding of the interpretive difficulties of quantum mechanics in this article is incorrect, so the discussion does not touch the important issues." It's kind of amazing to see that for most of those people, the more stupid they are, the more aggressively they behave on the Internet – and quite likely, away from the Internet, too. Just to be sure, don't forget about the possibility that your polite and humble correspondent may be an exception that confirms the rule. ;-) This moron claims that the measurement problem has nothing to do with the interference of the states of macroscopic objects. Except that as the decoherence theory makes indisputable, it has everything to do with it. The whole emergence of the classical probabilities from the quantum ones is a result of the suppression of the interference (and potential future interference) between the microstates and it occurs if there are many microstates with a chaotic enough evolution and strong enough interactions with the observables that become classical.

The moronic mode of thinking – or the absence thereof – preferred by the likes of Tim Maudlin is beautifully exemplified by this sentence he wrote: "Anyone reading Schrödinger’s original “cat” paper can see that possible interference is not among the issues he discusses." – This "argument" is like referring to the verses of the Bible. What Mr Maudlin is doing is not science or a search for the truth: it's the production of marketing tricks designed to spread randomly chosen misconceptions held by various men in history. His activity is all about the preservation and strengthening of myths. The reason why the correct explanation of the measurement theory isn't described in Schrödinger's paper is that Schrödinger didn't understand it. In fact, he didn't understand anything about the foundations of quantum mechanics, much like Tim Maudlin doesn't understand anything about the foundations of quantum mechanics today. But that didn't prevent other physicists – such as Niels Bohr and his Copenhagen school – from understanding this issue and it surely doesn't stop physicists in 2011 such as Tom Banks from fully grasping these issues.

Maudlin also says: "Banks apparently agrees with the contention of the EPR paper that the wavefunction does not provide a complete physical description." – Obviously, Tom Banks doesn't agree with this misconception. Quite on the contrary, he stresses that according to all the evidence we possess, the wave function is the exact description of the real world and the correct description of the real world must be intrinsically probabilistic. Has Mr Maudlin really failed to notice that this is clearly written in the text? Why are such people so violently unable to accept the truth – namely that quantum mechanics is exact – at least as a possibility? Their treatment of this "possibility", if I use this modest word, really shows that they still view the truth as a heresy.

Aggressive idiot Tim Maudlin also says all the wrong things one may say about non-locality: "The discussion of non-locality and Bell is completely off-target. Einstein was not worried about indeterminism or “God playing dice”: he was worried about the evident non-locality of the standard theory with collapse if one takes the wavefunction to be complete." – Blah blah blah. The theory is self-evidently local. Quantum field theory is exactly local; that's why it's sometimes called "local quantum field theory". And the dynamical laws describing the evolution of the EPR photon pair don't even contain any interaction Hamiltonian, as Sidney Coleman liked to emphasize, so not only are non-local influences absent: all interactions are absent. The "non-locality" is just an artifact of a misinterpretation of what probability distributions affecting outcomes connected with different loci mean. Enough Maudlin.

The seventh commenter, anti-quantum zealot named Chris, complains about the previously discussed sentence that "all physicists accept that decoherence explains the success of classical physics in the world governed by quantum mechanics". He's sure he's seen objections. Again, right, some of the stupid people who are not capable of verifying that Tom's sentence is demonstrably correct call themselves physicists. But that doesn't change the fact that their claims are scientifically indefensible. We hear that "the unsatisfying philosophical aspect is that you leave macroscopic states still in a superposition, with no way of ever resolving it, whether or not they can interfere with each other." – Sigh. Decoherence is the set of processes that make any future interference between various values of certain properties of the system impossible in practice. So once we've seen enough decoherence, it's exactly the point when we can say that no interference will occur in the future (the information about the relative phases is irreversibly lost) and when the probabilities may be treated classically. This outcome is never 100% accurate: classical physics never emerges quite 100% exactly out of quantum physics but the deviation from 100% may quickly become as small as \(\exp(-10^{26})\).

In the eighth comment, the same Chris also complains about the ammonia example:
I don’t get what the ammonia example is supposed to demonstrate at all:
So, we take the system either definitely up or down (\(Q\)). Then we do some junk with matrices, which are all still completely defined, and then we define \(B\) in terms of coefficients \(b_x\) and \(b_z\) that we just made up. Then we act surprised that the value we get for \(B\) depends on the coefficients we put in?
Right, we're not surprised. It's completely obvious that this dependence on the coefficients we introduced will be there. The point of this argument by Tom isn't to make someone surprised. The point is to show that what Tom says about quantum mechanics is totally inevitable, obvious, and it shouldn't be surprising. The only real complaint one could raise is the following: In classical physics, the questions may look "contrived". However, questions about totally natural and essential properties of the physical systems are the counterparts of these classical questions in quantum mechanics.

So yes, the main difference between classical physics and quantum physics is that in classical physics, one may "consistently" ban all such "contrived questions": in quantum mechanics, it is not possible to ban them because questions about pretty much every quantum observation have this form and have these properties. It is no longer possible to consistently keep "the complete information about the whole physical system" that would be composed of mutually commuting operators (i.e. that could avoid the probabilistic description of all properties) simply because a generic pair of operators fails to commute. However, Tom is still right that one may construct fully analogously behaving questions in probabilistic classical physics, too. The fact that in classical physics, one may also avoid the discussion of them is a different question.

The author of the ninth comment, the anti-quantum zealot Marty, posts a very long incoherent diatribe in which he insists that the wave function must surely "collapse" according to the "usual" interpretation of quantum mechanics. He also equally absurdly demands that there must exist a sharp factorization of physical systems to the measured particles and the measurement apparatuses. One of Tom's points is precisely that such a sharp separation doesn't exist. Quantum mechanics allows one to compute probabilities of any properties of any quantities describing anything in the physical system, whether or not these quantities are viewed as properties of "measured particles" or "apparatuses". There is no sharp separation and there doesn't have to be any sharp separation. The only thing decoherence needs is to identify degrees of freedom that will behave "classically" after the decoherent processes. But that doesn't mean that one must be able to sharply separate quantities to classical and quantum ones.

What I find remarkable is that point 1 as written by Marty self-evidently fails to be a question. It's just a sequence of wrong statements and incoherent would-be arguments, and the purpose of this gibberish is clearly to make Tom say Yes. But he won't say Yes because it isn't correct. I wonder whether Marty and others have ever understood what "learning" means. You can't possibly learn anything new if all your questions are really non-questions, if you're not listening to anything that others (and Nature) are telling you, and if you never fix the fundamental errors underlying your current thinking about the problem.

Marty also claims that decoherence can't work because the interaction of the particle with the apparatus is "localized". Well, if one only looks at one or two electrons surrounding the measured particle, there's indeed no decoherence (yet); that's why one must always use quantum mechanics without classical simplifications to understand one pair of particles. Decoherence only arises once many degrees of freedom are entangled with those that ultimately behave classically. After many years of trying to communicate these trivial matters, I am pretty sure that most people such as Marty can't possibly get it in their lifetime. They don't like decoherence simply because it still "fails" to return quantum physics to classical physics. They don't want decoherence. They want their f*cking old wave function collapse that says, right after the first interaction of two particles in the apparatus, "quantum mechanics has collapsed, welcome back to classical physics". But that won't ever happen in physics. In this sense, the process of learning physics is irreversible.
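The quantitative point here, that one or two environmental particles barely decohere anything while many of them do, can be seen in a toy model. In the sketch below (my own illustration, with a hypothetical per-particle overlap value), a qubit entangles with N environment qubits; each environment qubit ends up in one of two states whose overlap is c, so the off-diagonal ("interference") term of the qubit's reduced density matrix is suppressed by the factor c**N:

```python
# Toy decoherence model: a qubit in (|0> + |1>)/sqrt(2) entangles with
# N environment qubits. If each environment qubit's two final states
# have overlap <e0|e1> = c, the off-diagonal element of the system's
# reduced density matrix is multiplied by c**N. A single interaction
# (N = 1) leaves the coherence almost intact; many interactions kill it.
c = 0.9  # hypothetical overlap per environment qubit

suppression = {N: abs(c) ** N for N in (1, 2, 10, 100)}
for N, factor in suppression.items():
    print(N, factor)
```

The exponential decay with N is the whole story: "localized" interactions with a couple of particles leave the superposition essentially untouched, while entanglement with macroscopically many degrees of freedom makes the interference terms unobservably small, with no "collapse" needed anywhere.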

The tenth commenter signed as "J" offers some sociological arguments why anti-quantum zealots shouldn't criticize veteran researcher Tom and why they (e.g. OXO) shouldn't use the word "junk".

In the eleventh comment, Scott Aaronson tells Tim Maudlin that "The view that I take Banks to be defending here is actually one I’ve found extremely common among physicists, so maybe it would be worth philosophers trying to understand it sympathetically and seeing how much sense they can make of it." – Right. It's not just extremely common: it's the only view that is supported by the actual scientific evidence. Aaronson's comment is helpful for another reason, too: we learn that Tim Maudlin calls himself a "philosopher" (and so do many tortured students at NYU). From the observable and empirical viewpoint, the term "philosopher" still seems indistinguishable from an "aggressive imbecile".

In a recent text on all-male conferences, Sean Carroll argued that philosophy is another discipline in which women have almost no chance to match men. In the comment section, where people confirmed that they hadn't read any coherent female philosophy yet, libertarian hero Ayn Rand was identified as the only well-known female philosopher in history whose essays and books had depth and internal logic. So much like physics, philosophy largely depends on the wider male distribution of abilities. However, unlike physics, philosophy of physics may crucially depend on what's going on at the left tail of the curve. ;-)

Aaronson presents Tom's view as "many worlds without many worlds". Well, in normal mathematics, the result of that subtraction is pretty much zero, so I don't like this name (and I don't really like many worlds), and Tom hasn't used it (in a later comment, he says he doesn't like this description that depends on many worlds, either). At any rate, Aaronson correctly summarizes the main ideas of Tom's description of quantum mechanics, such as the fact that the "act of the measurement" influences the wave function in the same way as the switch from overall probabilities to conditional ones in any probabilistic calculus.
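The switch from overall probabilities to conditional ones has a purely classical analogue that needs nothing quantum at all. In the sketch below (my illustration, with made-up numbers), "measuring" one variable of a classical joint distribution means discarding the branches inconsistent with the outcome and renormalizing, exactly the update that anti-quantum zealots insist on calling a physical "collapse":

```python
# Classical conditioning: a joint distribution over (weather, ground).
joint = {("rain", "wet"): 0.30, ("rain", "dry"): 0.05,
         ("sun", "wet"): 0.05, ("sun", "dry"): 0.60}

# "Measure" the ground and find it wet: keep only the consistent
# branches and renormalize them. Nothing physical happened to the
# weather; only our information changed.
wet = {k: p for k, p in joint.items() if k[1] == "wet"}
total = sum(wet.values())
conditional = {k[0]: p / total for k, p in wet.items()}

print(conditional)  # rain is now much more likely than sun
```

The measurement update in quantum mechanics plays exactly this role for the wave function, with the only novelty being that the "distribution" also encodes probabilities for mutually non-commuting questions.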

To assure us that Aaronson was an exception, the twelfth commenter, named Michael Bacon, endorses Tim Maudlin, the most idiotic commenter so far among those who were not deleted. ;-) Michael Bacon actually only agrees with Tim Maudlin's statement that he didn't agree with the post, but he offers a few other delusions of his own, including the fresh crackpot paper claiming that orthodox probabilistic quantum mechanics is inconsistent (it was mentioned in the first comment).

The thirteenth commenter, "J", who previously wanted some civility when he replied to OXO, says that his comment may also be deleted because OXO's scream has experienced the same fate.

In the fourteenth comment, Jeff is polite and pleasant but he clearly missed everything, too. In a comment where the invalid word "collapse" appears more than once per sentence on average, he claims that decoherence can't deal with entanglement. But the main point of this part of Tom's text was that predictions such as those of the properties of an entangled EPR photon pair are analogous to classical predictions of hurricanes in New Orleans and Galveston. In the hurricane case, there's also a correlation between the hurricanes in the two cities: the storm won't hit both cities simultaneously. The entanglement in quantum mechanics is analogous: entanglement is nothing else than correlation (as expressed in the broader and more general quantum scheme with all of its non-commuting observables etc.). Well, you may find the sentence saying that "entanglement is just correlation in QM" in dozens of TRF posts, too.
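For measurements in a single fixed basis, the two situations really are statistically identical, which is all the hurricane analogy claims. Here is a minimal sketch of both samplers (my own illustration; the quantum case is just the same-basis statistics of the entangled state (|01⟩ + |10⟩)/√2, where the genuinely quantum features would only appear for rotated, non-commuting measurement choices):

```python
import random

random.seed(0)

# Classical "hurricane" correlation: the storm hits exactly one of the
# two cities, so learning about New Orleans determines Galveston.
def classical_pair():
    hit = random.choice(["New Orleans", "Galveston"])
    return (hit == "New Orleans", hit == "Galveston")

# Entangled pair (|01> + |10>)/sqrt(2) measured in the SAME basis on
# both sides: the outcomes are perfectly anticorrelated, with the same
# statistics as the classical sampler above.
def entangled_pair_same_basis():
    return random.choice([(0, 1), (1, 0)])

samples = [entangled_pair_same_basis() for _ in range(1000)]
assert all(a != b for a, b in samples)  # perfect anticorrelation
```

No signal travels between the two wings in either case; the correlation was encoded when the pair (or the storm) was created, and the "update" upon seeing one outcome is just conditioning.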

The fifteenth comment, by David Brown, wants to hear about vacuum catastrophes, except that he probably doesn't mean a real physical discussion of vacuum catastrophes because a part of his question is whether it's been decided who is right: Stephen Wolfram's model of the world as a cellular automaton, or quantum physics? What do you think the answer is, Mr Brown? ;-) To make his comment useful for others, it also features a URL pointing to the Wikipedia article on A New Kind of Science. ;-)

Tom Banks appears in the sixteenth comment for the first time, and the discussion has kind of exploded, so I am going to stop the one-by-one commentary on the Cosmic Variance comments at this point. Rest assured that the newer 10 comments (not discussed above) contain lots of Tim Maudlin, too, and I am not sure whether my guts are ready for such a generous inflow of adrenaline. ;-) Of course, I was just waiting to see how many comments would be posted before we learned that Sean Carroll is a hard-core anti-quantum bigot as well. The answer turned out to be 40.


snail feedback (1) :


reader Dimension10 (Abhimanyu PS) said...

Wavefunctions aren't real, but they're absolutely real : ) .