Off-topic, Nobel: the physics Nobel prize went exactly to the three men whom I recommended, which is great. Now, Martin Rees wrote a tirade saying that teams (more than 3) should be rewarded instead. And an Arab inkspiller says that Einstein couldn't or shouldn't get a Nobel prize now (even though isolated theorists are still getting prizes in analogy with Einstein). Can you really read the damn Nobel's will? It originally insisted on one winner per year per field – which was already expanded to three – and there are extremely good reasons not to dilute the prizes further, reasons which simply can't change after 100 years. These prizes reward folks who have done way more than what they were compensated for by salaries. Generic workers and spokeswomen of LIGO etc. are just technicians and secretaries who were already compensated, at least approximately, by their salaries for their business-as-usual. The LIGO Nobel prize went to 3 particular men and all the talk about "whole teams that win it" is just politically correct lies that all the important people are forced to parrot by the organized mediocre ones. They're bullšit and it's just absolutely terrible when this politically correct garbage is treated by someone as reality. I urge the Arab and Rees jerks to memorize the actual winners' biographies, shut up, and calculate.

*If you don't know, "muddled or Maudlin" is a puzzle and the solution is "beery"! It's impossible not to mock a guy whose surname's first three consonants are MDL. ;-)*

A reader sent me a few URLs to recent texts by the anti-quantum zealots. You can be sure that they haven't disappeared, either. A certain Don Weingarten has proposed a new, 51,682nd interpretation of quantum mechanics by rearranging the words "hidden variable", "theory", "single world", "many worlds" in a new way. Jess Riedel helpfully summarizes the new important idea of the paper by pointing out that there's none. But according to Riedel, the new aspect of the paper is that it shows that some people find it appealing to use the words from another paper that has no ideas.

Last month, Nobel prize winner Gerard 't Hooft, who became a full-time warrior against quantum mechanics some 20 years ago, published

*Free Will in the Theory of Everything*

"Philosopher" Tim Maudlin has responded via Facebook – on September 22nd and October 3rd – and some people, including 't Hooft, have joined the discussion under these Facebook posts. On this blog, Maudlin's fake science has been discussed at least since 2011, when Maudlin displayed his anti-quantum exhibitionism under a guest blog by my former PhD adviser.

Now, Maudlin doesn't understand quantum mechanics at all – the new way in which it makes predictions, how those predictions are actually made, and why the transition from classical physics to quantum mechanics is forced upon us by the evidence. He's one of the millions of idiots who just don't get it, who insist on classical physics, and who try to build a classical model or a simulator that could explain the phenomena that are *actually* only explained by quantum mechanics, a completely different framework.

Like the millions of his fellow dimwits, Maudlin is obsessed with Bell and his theorem even though they have no implications within quantum mechanics. Indeed, Bell's inequality starts by assuming that the laws of physics are *classical* and *local* and derives an inequality for a function of certain correlations. But our world is *not* classical, so the conclusion of Bell's proof is inapplicable to our world – and indeed, unsurprisingly, it's invalid in our world. What a big deal. The people who are obsessed with Bell's theorem haven't made the mental transformation past the year 1925 yet. They haven't even begun to think about *actual* quantum mechanics. They're still in the stage of denying that a new theory is needed at all.
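To make the logic concrete, here is a minimal Python sketch – my own illustration, not anything taken from Bell's or 't Hooft's papers. It compares the singlet-state quantum correlation \(E(a,b)=-\cos(a-b)\) with an assumed toy local hidden-variable model (a shared hidden angle with sign outputs): the quantum correlations violate the CHSH form of Bell's inequality, while the local model sits exactly at the classical bound of 2.

```python
import math

# Singlet-state quantum correlation for spin measurements along
# coplanar directions at angles a and b: E(a, b) = -cos(a - b).
def E_quantum(a, b):
    return -math.cos(a - b)

# A toy local hidden-variable model: a hidden angle lam is shared,
# each side outputs sign(cos(axis - lam)), with B's sign flipped so
# that equal axes are perfectly anticorrelated.  Averaging over a
# uniform lam gives the well-known linear ("sawtooth") correlation.
def E_classical(a, b):
    theta = abs(a - b) % (2 * math.pi)
    theta = min(theta, 2 * math.pi - theta)  # angle between axes, in [0, pi]
    return 2 * theta / math.pi - 1

# CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|.
def chsh(E, a, ap, b, bp):
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# The standard angle choice that maximizes the quantum violation.
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

print(chsh(E_quantum, a, ap, b, bp))    # 2*sqrt(2) ≈ 2.83: violates the bound 2
print(chsh(E_classical, a, ap, b, bp))  # exactly 2: at the Bell bound
```

Any local realist model, however rigged, stays at or below 2 for this combination – that is the content of the theorem 't Hooft would like to talk his way around.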

Maudlin is unquestionably a mediocre pseudointellectual but sadly, in his interaction with Gerard 't Hooft, he ends up as the more intelligent one. 't Hooft has been dismissing quantum mechanics for some 20 years. He has been saying that "some cellular automata, hydrodynamic laws, or something like that" would surely render quantum mechanics and its novelties unnecessary, and all this stuff. It's been more than 16 years since I won a bet (it was on 9/11/2001, the day of my PhD defense) – I had bet that 't Hooft's papers wouldn't be considered a breakthrough in 2001 by a majority of a selected quintuplet of "five top physics judges". My statement is much firmer today than it was 16 years ago.

't Hooft's vague wording has changed a little over the two decades. The change hasn't been fast, but it is detectable. Some kind of superdeterminism has become his pet in the recent decade.

There are lots of isolated details in 't Hooft's recent paper that are enough for an expert reader to immediately see that 't Hooft has just lost it and has absolutely no idea what he is talking about. While his paper is clearly a paper about some rudimentary assaults on quantum mechanics and childish proposals for a new interpretation of quantum mechanics, his wording makes it sound as if he were discussing the special and general theory of relativity, quantum field theory, and the Standard Model, not to mention a theory of everything, of course. Needless to say, there isn't a single sentence or equation in his paper that would have anything to do with general relativity or the Standard Model. No tensors, no gauge fields, no quarks, no leptons, no differential equations – well, no equations at all, except for \([x,p]=i\hbar\), which he wants to overcome.

At the same moment, it's clear that he has abandoned so much that with his new, reduced "axiomatic system", he can't possibly understand even the simplest quantum harmonic oscillator, potential well, or any other undergraduate textbook problem in quantum mechanics. So why does he invoke big words such as general relativity or the Standard Model? He isn't misunderstanding just isolated important insights – he is confusing whole *subfields of physics*. He no longer seems capable of seeing that the general meaning of the formalism of quantum mechanics is a *separate issue* from the choice of the appropriate Lagrangian for the Standard Model – and many similar tasks.

But those confusions aren't the purpose of the paper, I guess. The purpose is to *deny Bell's theorem*. While neither 't Hooft nor Maudlin really understands quantum mechanics, and neither of the two men is even willing to *consider* the possibility that quantum mechanics is fundamentally correct, Maudlin is the *more sensible man* of the two because he at least understands the basics of this dumb industry of "creating classical models crazily claimed to be capable of replacing quantum mechanics". In particular, Maudlin understands Bell's theorem.

In his paper and conversations with Maudlin, 't Hooft *denies Bell's theorem*. It's incredible that this celebrated Nobel prize winner has dropped to the level of the crackpot Joy Christian, but it's unquestionably true. 't Hooft combines various vague new buzzwords in bizarre ways to fool himself into thinking that a local realist (classical) theory may be consistent with all the observations we know.

While Joy Christian used "mathematically looking" tricks – like constructions based on quaternions – to claim that he could circumvent Bell's theorem, 't Hooft uses a new hypothetical law, the *conservation of ontology*, to argue that a viable theory may be classical and local. What does this new conservation law say and how does it work? Obviously, you can't learn the answer from 't Hooft's paper. The "conservation of ontology" is just a vague qualitative piece of junk that the dimwits among the readers may be impressed by.

But if he were able to argue coherently, the explanation could look like this.

What's new about quantum entanglement – and what's often incorrectly presented as a manifestation of non-locality – is the fact that the entangled composite system is ready for the measurement of various pairs of observables, but in these "alternate histories", the corresponding operators don't commute with the operators from other alternate histories. Classical physics could explain the correlation in one kind of measurement, e.g. the measurement of \(j_{z,A}\) and \(j_{z,B}\), but if it did so, it would seem unavoidable in a local classical theory that the correlation predicted for another possible future measurement, e.g. the measurement of \(j_{x,A},j_{x,B}\), would be zero or much lower than quantum mechanics predicts (and experiments confirm).

However, if you knew in advance which pair of observables describing the composite system would be measured – e.g. if you knew in advance that the projections \(j_{y,A},j_{y,B}\) of the two entangled spins will be measured – a classical theory could try to "focus" on the goal of getting these particular predictions right, i.e. equivalent to the predictions of quantum mechanics. If you could know in advance that \(j_{y,A},j_{y,B}\) will be the first observables describing the two subsystems that will be measured, you wouldn't need the predictions for other polarizations to come out correct – they won't be tested anyway, so there can be no experimental falsification of the theory based on them.

Consequently, you could assume that the observables \(j_{y,A},j_{y,B}\) have some classical values – two classical bits in this case – even before the measurement is made. These two bits could be viewed as some "classical ontology".
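The tension can be illustrated numerically. The following sketch is my own illustration (with the measurement axes restricted to the x-z plane for simplicity, an assumption I'm making, not one from the paper): quantum mechanics gives perfect anticorrelation of the singlet along *every* common axis at once, which is exactly what a single pair of pre-assigned classical bits for one fixed axis cannot extend to all the other axes.

```python
import numpy as np

# Pauli matrices; the spin operator along a direction at angle theta
# in the x-z plane is sigma(theta) = cos(theta)*sz + sin(theta)*sx.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def sigma(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>)/sqrt(2) in the basis |00>,|01>,|10>,|11>.
psi = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

# Quantum correlation E(a, b) = <psi| sigma(a) x sigma(b) |psi> = -cos(a - b).
def E(a, b):
    return psi @ np.kron(sigma(a), sigma(b)) @ psi

print(E(0.0, 0.0))              # ≈ -1: perfect anticorrelation along the z-axis
print(E(np.pi / 3, np.pi / 3))  # ≈ -1 again, for a rotated common axis
print(E(0.0, np.pi / 3))        # ≈ -cos(pi/3) = -0.5 for unequal axes
```

Two pre-assigned bits can reproduce the first line, but the same state simultaneously fixes the correlation for every other axis choice, and Bell's theorem shows that no local assignment of bits for all axes can match all of these numbers.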

As far as I know, 't Hooft has never described things this clearly, but I find it obvious that this is the fact he is so attracted by. So he wants to believe in some secret law that actually *predicts in advance what will be measured*. Well, there's a problem, and it's exactly the problem that makes superdeterminism indefensible: in practice, and probably even in principle, it's impossible to predict what will be measured.

Why is it impossible? Because a tiny moment before the measurement, the experimenter may change his mind and press a button that reshuffles the measurement apparatus so that it measures \(j_{z,A},j_{z,B}\) instead of \(j_{y,A},j_{y,B}\). Why did the experimenter change his mind? Because of some complicated processes in his brain. Equivalently, because of the experimenter's *free will*. As I discussed in a January 2016 blog post about free will, the meaning of "free will" isn't that things must be mysterious like religions or "divine interventions" ('t Hooft tries to mock the authors of the "free will theorem" just because the phrase "free will" looks religious to him – but that's 't Hooft's mistake, and his attack along this line is wrong and childish).

Free will (e.g. the free will of a human brain) has a very clear technical, rational meaning: when it exists, it means that the behavior affected by the human brain cannot be determined even with the perfect or maximum knowledge of everything that exists outside this brain. So the human brain does something that isn't dictated by the external data. As an example of this definition: if a human brain has been *brainwashed* – or equivalently *washed* – by the external environment, its behavior in a given situation may become completely predictable, and that's the point at which the human loses his free will.

With this definition, free will simply *exists*, at least at the practical level. According to quantum mechanics, it exists even at the fundamental level, in principle, because the brain's decisions are partly constructed from "random numbers" created as the random outcomes of quantum mechanical measurements.

What 't Hooft wants to imagine is that the whole world evolves as a whole in some way which is capable of determining the buttons that will be pressed in the future. To explain the observed data and to keep the assumption that the specific values of \(j_{y,A},j_{y,B}\) exist even prior to the measurement, the behavior of the two electrons (two spins) has to depend on the insight – extracted in some way – that it will be the polarizations \(j_{y,A},j_{y,B}\) that will be measured first (and not the polarizations along other axes or some more complicated correlated observables).

Even if you imagined that it's possible to calculate whether the experimenter will decide to press the button, it would still be true that the behavior of the two electrons has to be *adapted* to some of these future complicated properties of the *experimenters' brains*, and that means – pretty much by definition – that the laws of physics would be non-local. Now, and at every moment, the electrons' spins would be affected by some complicated (well, predicted-for-the-future) observables describing the brains. So the brains directly influence the spins. It's an action at a distance. Even if this law with its incredible predictions of the human brains could exist, it would be a non-local law and therefore also a law incompatible with relativity etc.

I am not quite certain, because both men are inarticulate, but my reading of Maudlin suggests that he understands this basic point. He understands that even if some *conservation law of ontology* were the principle that could save the logic of classical physics while allowing predictions equal to those of quantum mechanics, the required laws would be *non-local* in the sense imposed by Bell's theorem. So 't Hooft has only proposed a new *buzzword*. He hasn't changed the situation. He couldn't have invalidated a theorem by inventing a new buzzword. And he didn't.

These attempts don't have the slightest chance to succeed. But the motivation that has driven people to discuss these attempts for 90 years must be some kind of amazing bigotry. Quantum mechanics is so coherent, crisp, and simple once you learn the few pages of new rules that suffice. It doesn't need tens of kilobytes of incoherent philosophical tirades similar to Maudlin's or 't Hooft's fog.

In quantum mechanics, you may describe the evolution of the electrons' spins via a wave function or a density matrix. That wave function or density matrix evolves independently of the wave functions of spatially separated (and therefore non-interacting) brains, humans, and apparatuses. But the electrons' wave function is *ready* for any kind of measurement that the experimenters may do. A simple application of Born's rule can produce the probabilities of any outcome of any measurement, and those agree with the observations. What is so repulsive to these people that they're willing to propose that each electron is capable of guessing whether a human will press a button? And to fool themselves into thinking that this amazing divine super-intelligence of every electron can moreover operate locally – that observing and guessing a human's reaction and behaving accordingly isn't an action at a distance? The irrationality of the anti-quantum zealots is huge, indeed.
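To illustrate how unproblematic this readiness is, here is a short sketch of Born's rule for the singlet – again my own toy example, with the measurement axes assumed to lie in the x-z plane. The same state vector yields normalized probabilities for *any* pair of axes that the experimenters may choose at the last moment; nothing about the state has to "know" the choice in advance.

```python
import numpy as np

# Eigenstate of sigma(theta) = cos(theta)*sz + sin(theta)*sx with
# eigenvalue s = +1 or -1 (measurement axis in the x-z plane).
def eigvec(theta, s):
    if s == +1:
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)])

# Singlet state |psi> = (|01> - |10>)/sqrt(2) in the basis |00>,|01>,|10>,|11>.
psi = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

# Born's rule: P(s, t) = |(<s, a| x <t, b|) |psi>|^2 for axis a on the
# first spin and axis b on the second spin.
def born(a, b, s, t):
    bra = np.kron(eigvec(a, s), eigvec(b, t))
    return float(bra @ psi) ** 2

# Pick the axes at the "last moment" - any choice works equally well.
a, b = 0.0, np.pi / 3
probs = {(s, t): born(a, b, s, t) for s in (+1, -1) for t in (+1, -1)}
print(probs)                # P(++) = P(--) = sin^2(pi/6)/2 = 0.125, etc.
print(sum(probs.values()))  # ≈ 1.0: the probabilities are normalized
```

For equal axes \(a=b\), the same four lines give \(P(++)=P(--)=0\): the perfect anticorrelation drops out of Born's rule with no pre-assigned bits anywhere.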

In quantum mechanics, it's indeed sensible to think that "the observables that are actually measured at the end" are more "ontologically real" than the observables that don't commute with them – and that were not measured. It's always more helpful to decompose the state vector in the basis of eigenstates of the observables that are going to be measured soon. And this basis may be used to "retroactively interpret" some of the past, before the measurement. But this ability to reinterpret the past doesn't *imply* that we should believe that the choice of the type of someone's measurement is clear in advance. It just cannot be known in advance. Instead, this ability to reinterpret the past using the eigenstates of the measured observables *proves that quantum mechanics is as consistent as classical statistical physics*. Quantum mechanics doesn't introduce any new kind of probabilities that didn't exist in classical physics, where you predicted that the dice have probability \(p=1/6\) for every outcome. Instead, quantum mechanics only dictates new rules to calculate these probabilities. When we focus only on the relevant measurement, the quantum probabilities are predicted "in a completely analogous way" to probabilities in classical statistical physics. The quantum phases and their interference become just a *technical detail* in the way quantum mechanics calculates the probabilities.

Maudlin ends up being more intelligent in these exchanges than the Nobel prize winner. But much of their discussion is a lame pissing contest in the kindergarten, anyway. There are no discussions of the actual *quantum mechanics* with its complex (non-real) numbers used as probability amplitudes etc. Most of these men's exchanges could be locally predicted by appreciating that 't Hooft wants to promote the illusion that some kind of superdeterminism is a promising replacement for quantum mechanics while Maudlin is a worshiper of John Bell. Neither of these two "strategies to kill quantum mechanics" is viable or defensible, and neither man gets modern (quantum) physics, but Maudlin at least understands that there are some mathematically demonstrable consequences of the axioms of "not so modern physics", i.e. local realism. 't Hooft doesn't even get this point anymore. Sad.
