Friday, December 04, 2020

Boson sampling vs silly quantum computer skeptics

You've heard about quantum computers, right? Last year, Google achieved quantum supremacy for the first time – performing a calculation on their quantum computer that would take an insanely long time on any classical supercomputer. Meanwhile, the SJWs have demanded that the phrase "quantum supremacy" be abandoned and replaced with "quantum black lives matter" or something like that. ;-)



Now, as Scott Aaronson pointed out, Science has published the second quantum supremacy result, produced by a Chinese group,

Quantum computational advantage using photons,
which performed boson sampling with a transformation of "50 to 100" photons. Now, boson sampling is a computational task that was specifically designed by Aaronson and Arkhipov (note that Aaronson cleverly chose his name to start with Aa... to be ahead of everyone else in alphabetical lists), building on the work of Tishby and Troyansky, to prove quantum supremacy.

The leader of the Chinese team is Pan Jianwei (a brother of Pan Tau) who is also the leader of the Jiusan Society, a minor political party in China. It is as independent and cool a political party in China as you can get; like many of Scott Aaronson's comrades, it works under the direct leadership of the Chinese Communist Party. :-) The Jiusan Society dares to say that the Chinese people should be prosperous and powerful; try to appreciate the courage.



Boson sampling is a calculation of the expectation value of the permanent of a matrix that describes some transformation of the Hilbert space of a certain number of initial bosons into a certain number \(n\) of final bosons.
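(To fix the notation: for an \(n\times n\) matrix \(A\), the permanent is defined just like the determinant, except that all the minus signs are dropped.)

\[ \operatorname{per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}A_{i,\sigma(i)},\qquad \det(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^{n}A_{i,\sigma(i)}. \]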

The transformation involves mirrors, beam splitters etc. OK, imagine that you follow the wave function of a single photon. It is easy to simulate the wave function of a "photon in some interferometer-like engine" (like the delayed choice experiment) on a classical computer. However, when you expand the number of photons, the Hilbert space becomes a symmetric tensor product whose dimension grows exponentially, and so does the size of the matrix. To make things worse, because the bosonic Hilbert space is "symmetrized", the permanent replaces the fermions' determinant. The permanent has no minus signs, and that makes all the difference: the determinant of a matrix can be computed in polynomial time (you may subtract one row from another without changing the determinant, reduce the matrix to a diagonal form, and read off the determinant as the simple product of the diagonal entries), but no analogous trick works for the permanent. The calculation of a permanent is hard. The expectation value here doesn't dramatically change the difficulty either way, it seems.
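To see the asymmetry concretely, here is a minimal Python sketch (the function names and the random test matrix are mine, purely illustrative): the determinant comes from NumPy's polynomial-time routine, while the permanent is computed with Ryser's inclusion-exclusion formula, which still has to visit exponentially many column subsets.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Permanent via Ryser's inclusion-exclusion formula; loops over all 2^n column subsets."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)   # sum of the chosen columns in each row
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.random.rand(8, 8)                    # tiny example, both calls finish instantly
print("determinant:", np.linalg.det(A))     # polynomial time (Gaussian elimination)
print("permanent:  ", permanent_ryser(A))   # exponential time; hopeless around n ~ 50
```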



Fine, boson sampling is a task that is specifically designed to be hard for simulations on classical computers and easy for quantum computers, especially quantum computers that actually store the qubits in photons (and not in some superconductors, for example). The latter holds because the calculation is pretty much done directly: we perform the same manipulation of the photons that we are supposed to "calculate".

A boson sampling computation had previously been done with 10 photons or so, where the classical simulation is still feasible, and they have just extended it to 50-100 photons. The output state space dimension is \(10^{30}\) – you surely don't want to pay for solid state drives of this capacity – and the quantum device is estimated to be 100 trillion times faster than the fastest available supercomputers running the best classical algorithms and strategies – you don't want to wait that long. That's the kind of safe supremacy where the doubts are erased (recall that I correctly "predicted" that the debates about the definition of supremacy – whether a marginal or a devastating mocking of classical computers is needed for quantum supremacy – would disappear almost immediately, because once you achieve the marginal quantum supremacy, you may get the devastating one soon afterwards). You just can't do it on a classical computer in the next 1 million years, period. The terms "advantage" and "black lives matter" are such incredible understatements that their usage is utterly ludicrous. You don't even need to be a white supremacist to agree that "quantum supremacy" is the only appropriate phrase here.
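If you want to see how quickly the bosonic state space blows up, here is a toy calculation of the number of ways to distribute \(n\) indistinguishable photons among \(m\) output modes; the photon and mode counts below are illustrative placeholders, not the exact parameters of the Chinese experiment.

```python
from math import comb

def fock_dimension(n_photons, m_modes):
    """Number of ways to distribute n indistinguishable bosons among m modes."""
    return comb(n_photons + m_modes - 1, n_photons)

# Illustrative numbers only (assumed, not the exact experimental parameters):
for n, m in [(10, 20), (50, 100)]:
    print(f"n={n:3d} photons, m={m:3d} modes -> dimension ~ {fock_dimension(n, m):.3e}")
```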

Virtually no practical usage

There is really no good reason to calculate what happens to 50 or 100 photons in such circuits. The practical benefits of such a thing are even smaller than the practical benefits of the thousands of Higgs bosons that you produce at the LHC. I will return to this point at the very end.

Silliness of the quantum computation skeptics

But what it does show is that the quantum computer skeptics were totally wrong. It's a group that largely overlaps with the anti-quantum zealots. And they have said that you couldn't really achieve a quantum computer that boasts quantum supremacy. You find people like Gil Kalai – who is predictably clueless about physics so he doesn't matter – or, more embarrassingly, Gerard 't Hooft, a Nobel prize winner in physics. (Already before 2000, I won a bet against B.F. when he argued that 't Hooft's ludicrous anti-quantum papers would be celebrated within five years, LOL.)

You know, I think that these people were always incredibly unreasonable because their opinion was pretty much obviously false. For several photons, the quantum computation was already possible years ago. Even by the late 1920s, it was guaranteed that such small quantum computers could be built because their correct operation was guaranteed by the pretty much elementary and standard tests of the basic laws of quantum mechanics. Why should the laws of quantum mechanics suddenly break down if you increase the number of components just by an order of magnitude (or two)? Surely if this were so, we would have observed a similar breakdown in some experiments by now, wouldn't we?

Now, these "skeptics" would argue that the quantum computers couldn't be scaled to 50 or 100 photons. Why? Well, they were imagining that Mother Nature was as dumb as they are and She is actually "reducing" (the correct verbs are "insanely overcomplicating and crippling") the quantum calculation to a classical one. Some dumb brute force classical computers are working underneath the apparently quantum world. And because it becomes so hard for us to build the classical computers that simulate \(n=100\) photons going through the circuit, it must be difficult for Mother Nature, too, they argued (less coherently than I do, I have idealized their propositions which were mostly much less intelligent ramblings).

Well, Mother Nature was obviously never running any classical computer to fake the quantum computations in the world. One way to see why the "faking quantum computation" conspiracy theory is utterly idiotic is to notice that the quantum computers and the classical simulations work totally differently. The quantum computation dealing with 100 photons simply "directly does something with those photons" and the difficulty sort of scales linearly with the number of photons. Even kids in kindergarten are capable of seeing that 100 photons is 50 photons PLUS 50 photons. ;-) But the quantum computer skeptics can't see that. They imagine that Mother Nature is obliged to politically correctly deal with some exponentially growing classical matrices. No, Mother Nature agrees with me that the quantum computer skeptics are just morons and the scaling is very easy to achieve – using tools that clearly work in Nature, whether they find these (non-classical) tools PC enough or not.

How could She convert the quantum calculation with 100 photons into a completely different, very resource-intensive classical computation? How could She even find the right classical algorithm to "fake" something as straightforward as a transformation of 100 photons? How? And why? Why would you believe that Nature is doing something prohibitively difficult (and very different) to achieve a task that is self-evidently well-defined and straightforward? The quantum calculations and the quantum behavior of the Universe are obviously "real" and the idea that something totally different and classical must be taking place underneath quantum mechanics is a conspiracy theory on par with the theory of JFK living on the fake Moon in Nevada.

The discussion under Aaronson's report shows very clearly what the fundamental stupidity driving the quantum computer skeptics is. The first two comments say:
Sniffnoy: Wow! How on earth did they get all the way up from 14 to 40…?
Scott: Sniffnoy #1: I mean, it’s less than a factor of 3… :-)
These two simple sentences really summarize the difference between those who just completely fail to get quantum mechanics, in this case Sniffnoy, and those who sort of get it, in this case the far-left nutcase. The same theme gets repeated several times. Another quantum computer skeptic is surprised and calls 50 photons "macroscopic". So Aaronson told him that "50 photons aren't macroscopic". It's common sense: 50 photons are clearly a microscopic composite object; if you are imagining some Earth-sized gadget as being "inside 50 photons", the problem is in your skull and you need to see a psychiatrist.

Right, the key point here is that 40 or 50 or 100 photons is still a very small number, just several times larger than 5 or 10 photons. The gadgets are just being expanded by a factor of ten (or a small positive power of 10), the required accuracy must be improved at most by a factor of 10 (or its small power), the required rate of decoherence must be reduced by a factor of 10 (or its small power), and so on.

The quantum computer skeptics and anti-quantum zealots just don't get the simple points written in the previous sentence – which imply that the switch from 10 to 100 photons is obviously doable. Why? They are still imagining that there has to be a classical computer running beneath everything. And the number of operations on that computer may scale as \(2^n\) where \(n\) is the number of photons (in the final state). \(2^{10}\) operations are doable but \(2^{100}\) operations are not. But Nature isn't doing \(2^{100}\) operations at all. Nature (plus the Chinese comrades in this case) is only performing hundreds or thousands of operations. Only their ludicrous caricature of the "fake Nature" needs to do the prohibitively huge number of operations.
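The contrast is trivial to spell out. Here is a toy comparison (the roughly linear count of physical components is my rough assumption about the hardware, while the \(2^n\) count is the skeptics' imagined classical bookkeeping):

```python
# Hardware grows roughly like n, the imagined brute-force classical simulation like 2^n.
for n in (10, 50, 100):
    print(f"n={n:3d}: ~{n} physical components vs 2^n = {2**n:.2e} imagined classical operations")
```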

How do you get 42 photons if you only have 14 photons? Well, you take 14. And then 14. And then you add 14. And you have 42. That's it! The size of the gadgets has just tripled, which is a minor change. How could someone fail to get the simple point that 42 is 3 times 14? In your mental image of a classical computer that fakes Nature, the transition from 14 to 42 may be a transition from the realm of the possible to the realm of the impossible. But it is you who is trying to fake Nature. Nature is obviously not faking Herself. The evolution matrices for \(n=100\) photons may be uniquely constructed from the one for \(n=1\) photon by a straightforward "symmetrized tensor product" procedure, which is clearly what Nature does automatically. Whether it's easy to simulate it using a computer that uses different, more primitive, sometimes useful (and approximately correct), 17th-century laws of physics is irrelevant for the validity of the laws of quantum mechanics.
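For the record, the rule by which the single-photon matrix determines the multi-photon amplitudes is exactly the "permanents of submatrices" prescription. Here is a minimal Python sketch, restricted to collision-free inputs and outputs (the helper names are mine; SciPy's unitary_group just supplies a random single-photon unitary):

```python
import numpy as np
from itertools import combinations, permutations
from scipy.stats import unitary_group

def permanent(A):
    """Naive permanent, fine for tiny matrices: sum over permutations, no minus signs."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def amplitude(U, in_modes, out_modes):
    """Collision-free boson-sampling amplitude: the permanent of the submatrix of the
    single-photon unitary U with rows = output modes and columns = input modes."""
    return permanent(U[np.ix_(out_modes, in_modes)])

m, n = 6, 3                               # toy sizes: 6 modes, 3 photons
U = unitary_group.rvs(m, random_state=0)  # the "easy" one-photon evolution matrix
in_modes = [0, 1, 2]                      # one photon enters each of the first three modes
probs = {S: abs(amplitude(U, in_modes, list(S))) ** 2
         for S in combinations(range(m), n)}
# The sum is below 1 because bunched outcomes (two photons in one mode) are omitted here.
print("total probability of collision-free outputs:", sum(probs.values()))
```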

How much money could the tests swallow?

Let me return to the question of the practical usability of these calculations. Well, there are almost no applications. But because the classical simulation has become impossible, the quantum computer skeptics could claim that Nature is just spitting out some wrong result of the \(n=100\) task, gibberish that would be contradicted by a "proper" calculation of the permanent (and its expectation value) on a classical computer. So they could actually "demand" much larger supercomputers to be built and/or reserved for the verification of the boson sampling calculations.

Aaronson mentioned that he added a comment about the verification to a referee report. As a result, $400,000 was burned in supercomputer time. And nothing (that is good for you) came out of it! Just imagine how much money is being burned by the likes of Aaronson these days, how easy it has become for them, and how totally useless these expenditures are. And the $400,000 payment was made to verify an \(n=40\)-photon calculation. The cost may scale as \(2^n\). So for \(n=41\), you could pay $800,000, for \(n=50\) about $400 million and for \(n=60\) about $400 billion. Before you reach \(n=60\) – which is just a modest 50% increase of the quantum computer – sentences in Aaronson's referee reports will become more expensive than the new \(100\TeV\) collider that either CERN or similar minor parties directly led by the Chinese Communist Party will hopefully build.
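For the record, here is the toy extrapolation spelled out; both the $400,000 baseline and the \(2^n\) scaling are just the rough estimates quoted above, not measured data.

```python
# Toy extrapolation: verification cost assuming $400,000 at n = 40 and a 2^n scaling.
base_n, base_cost = 40, 400_000
for n in (41, 50, 60):
    print(f"n={n}: ~${base_cost * 2 ** (n - base_n):,.0f}")
```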

The difference is that the new colliders will surely answer some "previously clearly unanswered questions" posed by the smartest people on Earth (e.g. whether there are new elementary particles with masses below \(20\TeV\)). On the other hand, the verification of a quantum computer calculation only answers questions that are already answered – even though the totally misguided quantum computer skeptics don't understand the simple answer (probably because they don't want to understand it: they just won't accept "No" as the answer to their question "please, Nature, can you return to classical physics so that I may stop crying?").

I have said that these boson sampling experiments (and their simulations) are really pointless because every competent and unbiased person (who has followed 20th century physics) has known the main lessons in advance and the detailed results of the expectation values aren't terribly interesting. But there is a sense in which these experiments aren't even relevant for "quantum computation". Boson sampling is really just some operation with \(n\) or so photons which follows the laws of quantum mechanics (for bosons) and may count as a special example of "quantum computation" if "quantum computation" is defined sufficiently broadly. But this experiment clearly doesn't involve a universal quantum computer that can run Shor's algorithm or any other "somewhat useful" quantum algorithm (it doesn't allow all the basic types of "manipulations with qubits" that are needed to calculate something; in particular, the correlated operations on several qubits are pretty much absent). So even the inclusion of this boson sampling work in the "quantum computer industry" is largely deceitful (and obviously done for a simple reason: lots of money gets poured into things that are called "quantum computation" these days). What is really going on is just some elementary tests of quantum mechanics (they're only tested as long as the classical simulation may be done and has been done) which just happen to be mentally difficult for an irrelevant group of people, many of them computer scientists!

What is happening is the following: a group of children who were left behind is getting tens of billions of dollars to learn the basics of quantum mechanics, which a normal intelligent kid has learned for tens of dollars (the price of a textbook).
