On his blog Shtetl-Optimized, MIT complexity theorist Scott Aaronson announced an essay he wrote for PBS,

Can Quantum Computing Reveal the True Meaning of Quantum Mechanics? It starts by sketching the description of Nature in terms of probability amplitudes before switching to the "interpretations". He mentions three – many worlds, pilot wave theory, and quantum mechanics as discovered by its discoverers, in that order. Bullšitters about "interpretations" never even agree on how many interpretations they take seriously. They only agree that the only correct one – the one serious physicists have used since 1925 – mustn't be at the top. This negative attitude is the only glue that holds that "community" together.

(By the way, "štatl" is a local name for Czechia's second-largest city, Brno. The local dialect, "hantec", is a mixture of Czech, Romani, German, and Yiddish, along with the local argots. "Štatl" is almost certainly a Yiddish contribution and the word is essentially "the same" as the "shtetl" in Aaronson's blog name.)

Aaronson's main point is that the most important consequence of having quantum computers – if and when they are built – wouldn't be a practical one (simulation or code-breaking). Instead, the top consequence would be that a quantum computer would be "the most dramatic demonstration imaginable that our world needs to be described by a gigantic amplitude wave". Well, I don't think so.

After all, if quantum computers are ever built, they will do exactly the things that are expected from them (plus other things that people will invent later). For all reasonable purposes, those who work in "quantum computation" have already "seen" what will happen. And it wasn't enough for them to get rid of the brutal confusion about quantum mechanics.

Aaronson's PBS text is just another example of this persistent confusion. He has placed two crackpot "interpretations" above quantum mechanics, tried to mock Niels Bohr, and promoted the completely wrong "explanations" of quantum computers' power offered in a popular book by David Deutsch.

Richard Feynman is rightly cited as a man who pointed out that quantum computers may secretly manipulate a "greater amount of information" than their classical counterparts with the same number of bits. To describe \(N\) qubits, we need a \(2^N\)-dimensional Hilbert space, i.e. \(2^N\) different probability amplitudes.
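This exponential counting is easy to make concrete. The following is my own minimal sketch (not anything from Aaronson's essay or Feynman's papers), showing how fast the number of amplitudes grows:

```python
import numpy as np

# A classical register of N bits occupies one of 2**N basis states; a
# quantum register of N qubits is described by 2**N complex amplitudes.
def amplitude_count(n_qubits):
    return 2 ** n_qubits

# The state |00...0>: a single amplitude equal to 1, all others 0.
def zero_state(n_qubits):
    psi = np.zeros(amplitude_count(n_qubits), dtype=complex)
    psi[0] = 1.0
    return psi

for n in (1, 10, 20, 30):
    print(n, amplitude_count(n))  # 2, 1024, 1048576, 1073741824
```

This growth is also why brute-force classical simulation of a generic state of more than a few dozen qubits is hopeless: the state vector alone no longer fits in any realistic memory.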

(I still think it's more physically correct to say that \(N\) qubits "are" just \(N\) bits whose behavior is predicted by a different theory – than to say that \(N\) qubits "are" \(2^N\) complex numbers. But one may mean different things by these sentences – they are not operationally testable sentences in this form and one shouldn't think that arguments about similar claims are meaningful.)

Note that Feynman has carefully avoided some "catchy" far-reaching interpretations that wouldn't be justified by the actual formalism. However, David Deutsch didn't avoid them and said lots of totally wrong and stupid things about quantum computers. According to Deutsch, the very existence of quantum computers which are so much faster in certain tasks "proves" that there have to be "many worlds".

**Quantum computers rigorously prove that "many worlds" fans are idiots**

This claim by Deutsch is bullšit from all conceivable viewpoints. No one has ever defined a viable (or remotely viable) theory that would actually incorporate the "many worlds" paradigm. The "many worlds interpretation" is just a meaningless banner covering a subgroup of anti-quantum crackpots and their wishful thinking. But even though no technical details about their "theory" have been released yet, they invariably assume that the worlds have to split (in one way or another, according to some rules that haven't been and can't be specified) at the moment of the observation.

That's why there are "many worlds" at all and that's the MWI babblers' "explanation" of why the measurement itself doesn't need to modify quantum mechanics (even though this very point is a proof that they *need* a new process at the moment of the measurement).

How do quantum computers fit into this picture? They don't fit at all. The very point of a calculation by a quantum computer is that *there is not a single measurement being done* during the whole process of the calculation. Only at the very end does one measure the quantum computer, and this measurement is likely or certain to produce the result!

This rule, "no measurements during the calculation", is absolutely essential for the functionality of quantum computers. The coherence has to be preserved during the calculation; decoherence is the enemy. The remarkable ability of quantum computers comes from interference effects – even Aaronson seems to realize that, but only at *some places* of his inconsistent essay.
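The point can be made concrete with the smallest possible toy circuit (my own sketch, simulated with NumPy): a Hadamard gate followed by another Hadamard returns \(\ket 0\) with certainty thanks to destructive interference, while decohering the qubit between the two gates kills the interference and leaves a useless 50/50 coin.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0], dtype=complex)

# Coherent run: H followed by H. The two amplitudes for |1> interfere
# destructively and the qubit ends in |0> with probability 1.
psi = H @ (H @ ket0)
p_coherent = np.abs(psi) ** 2  # ~ [1, 0]

# Decohered run: "measuring" between the two Hadamards replaces the pure
# state by a diagonal mixed state; the second H then splits each branch
# again, leaving 50/50 probabilities -- the interference is gone.
rho = np.diag(np.abs(H @ ket0) ** 2).astype(complex)
rho = H @ rho @ H.conj().T
p_measured = np.real(np.diag(rho))  # [0.5, 0.5]

print(p_coherent, p_measured)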

Because the number of worlds in the "many worlds interpretation" only increases during measurements – whatever the measurements exactly are and whatever the mechanism is supposed to be – and because there can't be any measurement during a single quantum computation (the quantum computer would be basically reduced to a classical one), it follows that the number of "many worlds" isn't increasing during the calculation at all. So this number can't be "large enough" to explain the exponential speedup of quantum computers!

Deutsch's claims are plain delusions for several other, related reasons, too. We don't know what the "many worlds interpretation" exactly is, but another feature that all advocates of this non-existent theory will agree on is that the splitting into many worlds – where different outcomes are observed – is irreversible, at least in practice. Once split, two (or many) worlds can no longer talk to each other. So even if the number of other worlds were exponentially large (a large number meant to explain the huge speed of quantum computers for certain problems), this high value couldn't affect the observations made in a typical world at the end of the computation – due to the irreversibility.

It's really the almost complete and clever "synchronization of the phases" of the different probability amplitudes (which are the only things that are exponentially numerous, so they must have something to do with the "many worlds") that deserves the credit for the correct result that the quantum computer ultimately spits out. This "co-operation" between the amplitudes is really the point – the new flagship ability – of quantum mechanics. The splitting into "many worlds" is a fairy-tale that is, according to the champions of this would-be "interpretation", needed for a quantum world to resemble the classical world. The splitting is what makes the theory look more classical, in their opinion. (Nothing like that is needed but they believe it is.)
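A minimal, self-contained illustration of this "co-operation of amplitudes" is, ironically, Deutsch's own one-qubit algorithm. The sketch below (plain NumPy, my own toy simulation) decides with a *single* oracle query whether a function \(f:\{0,1\}\to\{0,1\}\) is constant or balanced – a task that classically requires two queries – purely through arranged interference, with no measurement until the very end:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    # U_f |x, y> = |x, y XOR f(x)>, built as a 4x4 permutation matrix
    # on the basis ordered as index = 2*x + y.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    # Start in |0>|1>, spread into superpositions, query the oracle once,
    # interfere with a final Hadamard, then read the first qubit.
    psi = np.kron([1, 0], [0, 1]).astype(complex)
    psi = np.kron(H, H) @ psi
    psi = oracle(f) @ psi
    psi = np.kron(H, I2) @ psi
    p_first_is_1 = np.abs(psi[2]) ** 2 + np.abs(psi[3]) ** 2
    return "balanced" if p_first_is_1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant function -> "constant"
print(deutsch(lambda x: x))  # balanced function -> "balanced"
```

Note that the answer is encoded in the relative phases accumulated during the coherent evolution; the single measurement at the end merely reads it off.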

However, quantum computers are fast due to the features of quantum mechanics that are *not classical*. If you look at it rationally, everything that Mr Deutsch has said is upside down. Quantum computers are another wonderful proof of the unquestionable fact that "many worlds" is just a pile of crap that is utterly incompatible with the observations that force us to describe the world quantum mechanically.

**Quantum computers rigorously prove that "Bohmians" are idiots, too**

Analogous comments apply to the de Broglie-Bohm pilot wave theory. Quantum computers are a sophisticated example that shows that this "interpretation" has nothing to do with the basic character of quantum mechanics and its consequences, either. Why?

In the pilot wave theory, there exists a "pilot wave" – the wave function, but interpreted as a classical wave – along with the actual classical positions \(\vec x(t)\) of the particle(s). If the positions are randomly distributed according to the wave function at the beginning, the laws of motion for the particles (including terms encoding the effect of the "pilot wave") will guarantee that the same holds at the end. The distribution of \(\vec x(t)\) will agree with \(|\psi(x,y,z)|^2\) at all times. It's easy to arrange the forces so that this works.

(What it ironically doesn't solve at all is everything linked to the measurements themselves – at the beginning and the end. To have a complete theory of this kind, one would have to explain how the initial state of the pilot wave and the position is prepared during the initial measurement; and what happens with them after the final measurement. No consistent theory based on the Bohmian paradigm that would answer these questions exists at this point – and none can exist, even in principle. The low heat capacity of all materials is at the core of one completely general proof that no theory of this kind may ever be viable: any extra classical degrees of freedom would store thermal energy and push the predicted heat capacities above the measured ones.)

However, you may see that this Bohmian "unnecessary superstructure", as Einstein called it, only works for variables such as \(\vec x(t)\) which are continuous. In quantum computers, all the relevant observables – the qubits – are discrete. The Hilbert space is finite-dimensional! So a quantum computer may be represented by the operator algebra generated by \(N\) copies of the Pauli matrices \(\sigma_i^A\) (and polynomials in them) which act on the \(N\) qubits. The spectrum of any such operator is discrete, of course. So there can't be any laws dictating the evolution of such observables in time that would be analogous to the influence of the pilot wave on \(\vec x(t)\) in the usual example of Bohmian mechanics, the spinless particle.
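The discreteness claim can be checked in a few lines of NumPy (my own illustration, not part of the original argument): every Pauli observable, and every tensor product of a Pauli with identities on a multi-qubit register, has the two-point spectrum \(\{-1, +1\}\) – there is simply no continuum for a Bohmian \(\vec x(t)\)-like "actual value" to move through.

```python
import numpy as np

# Pauli matrices: the elementary observables of a single qubit.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spectrum(op):
    # Real eigenvalues of a Hermitian observable, rounded for display.
    return sorted(np.round(np.linalg.eigvalsh(op), 10))

# Each single-qubit Pauli has the purely discrete spectrum {-1, +1}.
for s in (sigma_x, sigma_y, sigma_z):
    assert spectrum(s) == [-1.0, 1.0]

# The same for a Pauli acting on one qubit of a 2-qubit register:
# still only the eigenvalues -1 and +1, each doubly degenerate.
op = np.kron(sigma_z, np.eye(2))
print(spectrum(op))  # [-1.0, -1.0, 1.0, 1.0]
```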

It means that the whole "added value" of Bohmian mechanics – the actual classical value of \(\vec x(t)\) added on top of the wave function – has to be completely omitted when we describe discrete observables such as *all* the observables that are relevant for a quantum computer! When we try to "adjust" Bohmian mechanics to describe a quantum computer, we are only left with the wave function, and it spreads just like it does in quantum mechanics. At the end, Bohmian mechanics must invoke some *totally independent* trick to "explain" the measurement. But the qubits themselves cannot have any classical values, like \(\vec x(t)\) has in the promotional example of Bohmian mechanics, because they have discrete spectra.

The point is that during the quantum computation itself, the whole "unnecessary superstructure" of Bohmian mechanics is completely inapplicable, has to be removed, and doesn't help with anything. At the end, the reason why Bohmian mechanics so brutally fails for quantum computers is the same reason why the many worlds have totally failed: a quantum computation is a sensitive quantum process where you simply can't afford to make anything classical or imagine that it is classical because that would totally destroy the quantum computation!

**Summary: the "interpretation" babblers will remain deluded**

That's why *all* approaches that try to invent some "more classical visualization" of what is going on collapse lethally if one attempts to use them to describe quantum computers. A quantum computation is a process that depends on the non-classical character of quantum mechanics so finely that any attempt to make physics look more classical unavoidably destroys the whole charm of the quantum computer.

I think that whenever he is rational, Scott Aaronson must realize this self-evident fact, too. But most of the time, he doesn't. He spreads fog about crackpot "interpretations" that absolutely clearly have nothing consistent to offer as explanations "why" quantum computers may be so much stronger for certain tasks than their classical cousins.

He finds it appropriate to place two nonsensical fairy-tales *above* the correct description of quantum mechanics. Aaronson has quoted Bohr's correct comments about these matters – it is just physically illegitimate to ask questions that can't be settled by measurements, especially questions about "the state of things prior to the measurement", and whoever isn't capable of accepting this fact is a bigot who wants to insist on classical dogma. Aaronson reproduced the previous – important – sentence pretty much faithfully somewhere in the middle of the article. But he did so in a way that suggests he was trying to mock Bohr. That's too bad, because these insights are among the most important foundations of modern physics.

Let me offer you a silly analogy for Bohr's complementarity: Assume that Nature is Liberty. It is "something" and it may be looked at from various (complementary) perspectives. From Manhattan. From New Jersey. And so on. One may build a theory describing it – the Statue of Liberty. But the people who want a description with objective answers prior to the measurement are like those who insist that Liberty also contains the scaffolding and that some particular scaffolding must be a part of the "reality". It doesn't and it isn't. Liberty, or Nature, is the "invariant" object that is left after you remove all the scaffolding – all the detailed answers that depend on the kind of questions that may be asked and on the perspectives from which they may be asked.

The resistance to embracing quantum mechanics is a pathological, chronic desire to place some "scaffolding" everywhere. But Nature doesn't have it. To understand Her properties, you must admit that all the scaffolding that you add in order to "imagine" things has to be removed at the end (or as soon as possible) and declared completely unphysical. In relativity, the scaffolding that you throw out is the dependence on a particular inertial system. In quantum mechanics, one must throw away much more of this scaffolding – all values of all quantities that are independent of an actual measurement. What's left in Liberty after you remove the scaffolding are the results of the observations themselves – which always depend on an observer and his understanding of the word "observation" – and the purely spiritual rules probabilistically linking them. Everything else is scaffolding that has to be thrown away!

Quantum mechanics may be understood as a kind of "black box", like a computer that spits out the right result (the probability of one observation or another). And we may learn how to perform the calculations that exactly reproduce how the black box works. This is a description that Feynman used to offer, too. Some people aren't satisfied with that – they want to see something "inside" the black box. But there is nothing inside. The black box – a set of rules that produce probabilistic predictions for measurements out of past measurements – is the most fundamental description of Nature that may exist. Everything else is scaffolding that people add but shouldn't, because it's not a part of Nature.

Quantum computers won't change anything about the desire of laymen to see something inside the black box. On the contrary, a quantum computer will be an even blacker box! You press a button, it does something that is completely incomprehensible to a layman, and it announces a correct result very quickly – in a short time that experts say is impossible to achieve with classical computers.

Sorry, but this is clearly useless as a way for a layman – or an anti-quantum zealot who loves to suggest that he is more than a layman – to acquire an intuitive understanding of quantum mechanics. A quantum computation is even more black-box-like (and therefore "mysterious" for those who only like easily visualizable things) than other processes in quantum mechanics!

In the end, the double slit experiment is a much more transparent setup in which you may understand everything about the character of quantum mechanics. Feynman said that you may understand everything about the workings of quantum mechanics if you think about the double slit experiment carefully enough. A quantum computer is just a more contrived and less intuitive combination of many "double slit experiments".
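The arithmetic behind the double slit fits in a few lines (a toy sketch; the \(1/\sqrt 2\) normalization and the phase values are my illustrative choices): the detection probability comes from the squared modulus of the *sum* of the two path amplitudes, so it oscillates between zero and twice the classical value \(|a_1|^2 + |a_2|^2 = 1\) as the relative phase varies.

```python
import numpy as np

def double_slit_intensity(phase1, phase2):
    # Each slit contributes a complex amplitude of modulus 1/sqrt(2);
    # only the squared modulus of their SUM is observable.
    a1 = np.exp(1j * phase1) / np.sqrt(2)
    a2 = np.exp(1j * phase2) / np.sqrt(2)
    return np.abs(a1 + a2) ** 2

classical = 0.5 + 0.5                             # |a1|^2 + |a2|^2, no cross term
constructive = double_slit_intensity(0.0, 0.0)    # 2.0: bright fringe
destructive = double_slit_intensity(0.0, np.pi)   # 0.0: dark fringe

print(classical, constructive, destructive)
```

The cross term \(2\,\mathrm{Re}(a_1^* a_2)\) that makes these numbers differ from the classical 1 is exactly the interference a quantum computer exploits at scale.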

If you can't accept that the experiments *force us* to accept quantum mechanics – i.e. a totally new theoretical framework that places Bohr's "positivist" disclaimers at center stage – while thinking about the double slit experiment, you will be unable to achieve the same thing once you see a quantum computer, either. Despite years of talking about quantum computers, Aaronson himself has clearly been unable to understand why the actual quantum mechanics – the remarkable, intrinsically positivist theory discovered by the physicists, at some important point affiliated with the University of Copenhagen, whom Aaronson tries to mock – is needed, and why the de Broglie/Bohm/Deutsch/Everett/thousands-of-others fairy-tales just can't be enough. So when he actually *sees* a computer that does exactly what was expected, it won't make him any smarter, either.

Don't get me wrong. I think that Aaronson is smart from *some* perspective. Perhaps it's the collective stupidity – the groupthink – of the very stupid people in the community that loves to talk about "interpretations" that makes him behave just like another imbecile when he talks about the foundations of quantum mechanics.

**Group think and collective stupidity in the "interpretations" community**

I am reminded of the shocking stupidity of those people every day. The Romanian crank whom I previously mentioned has impressed his 4 readers with another set of cool claims. Quantum mechanics violates the laws of Boolean logic, he says. Oh, really? No. Quantum mechanics is a formalism that probabilistically predicts the values of propositions about future observables from the information extracted from past measurements of the observables. All these propositions are the same kind of propositions that also existed in classical physics and they obviously obey the same laws of Boolean logic. (Von Neumann also defined "some new logic" but it is in no way needed, and the Copenhagen folks have always preferred to use the same Boolean logic that scientists always did – which is OK.)

This Romanian chap also claims that socks in Vienna must violate the laws of quantum mechanics – do you really believe that? Socks in the real world are just other bound states of electrons and nuclei, and all of their correlations must be described quantum mechanically, too. And in order to increase the tolerance towards plainly idiotic claims such as his, he quotes a notorious philosopher who was "trashing Heisenberg's uncertainty relations" in 1959. The Romanian chap suggests that he disagrees with the notorious philosopher, but he still finds it helpful to spread the fog – to defend the idea that quantum mechanics isn't well-defined and consistent and that it is just OK to shout *arbitrarily stinky šit*. Why does it matter what a clearly deluded "philosopher" crackpot said about quantum mechanics – a theory he didn't understand – in 1959? Yes, the notorious philosopher was named Karl Popper. Loudly defending philosophers' dogma is simply not how science may be done.

Unlike Scott Aaronson, the Romanian crackpot *isn't* smart from any viewpoint. He is just a moron who can't do any physics or mathematics correctly. But it's the collective influence of people similar to this Romanian chap that must also "infect" folks like Aaronson and make them write all the stupid things about the "interpretations".

People will be saying stupid things about quantum mechanics after quantum computers are built (if they are built), just like they continued saying stupid things after the atoms were found to be stable, after electrons were found to produce interference patterns in gratings and double slit experiments, after entanglement was experimentally proven, after local hidden variable theories and dozens of classes of "competing theories" were experimentally excluded, and after quantum mechanics passed thousands of new tests. They will continue to say stupid things because the actual reasons why they're saying stupid things aren't empirical or scientific in character; they are mental or psychological in character. Enough evidence to understand the need for the new framework we call quantum mechanics – and to eliminate conceivable alternatives – had been accumulated by June 26th, 1925, exactly 90 years ago, by Werner Heisenberg on Heligoland.

People say stupid things about quantum mechanics because they are stupid. It's all about them, not about quantum mechanics or its particular experimental demonstrations. Quantum computers, if built, will have some remarkable practical applications but they won't cure human stupidity. They won't change the foundations of quantum mechanics, either. The more real quantum computers become, the easier it will be to see that they're applied science or engineering – and that they never had anything new to say about the foundational questions of physics.
