At Quanta Magazine, Philip Ball has written a cautious article about the dawn of the quantum computing era:

The Era of Quantum Computing Is Here. Outlook: Cloudy

In the past year or two, we have been bombarded by news about seemingly decisive advances in quantum computation. IBM lets you use its 5-qubit gadget through the "IBM Q Experience" cloud service (info). IBM and Intel have announced gadgets with 50 and 49 qubits, respectively. Google has joined the battle, too. Various physical representations of the qubit are being probed. Harvard's Misha Lukin seems to be close as well.

All of it may sound as if we're guaranteed to see a useful quantum computer this year or very soon, because there is apparently brutal competition among many players who all seem very close.

I think it may still happen but it doesn't have to. Pragmatically, one might say that numerous computer companies decided they didn't want to be left behind, so they simply threw millions of dollars at quantum computing research. They hired some competent enough folks, did something similar to the things that have been tried for years, and issued press releases indicating that they're on the verge of a breakthrough.

Note that the critical breakthrough is the beginning of "quantum supremacy". That term simply denotes the first moment when a quantum computer is built that can solve a problem infeasible for the classical supercomputers that actually exist. OK, so when will quantum supremacy arrive?

Dealing with errors in the calculation is probably going to be the greatest obstacle delaying the practical usage of quantum computers. Hardware is imperfect, but quantum computers need cleverer ideas to fix errors than classical computers do. In a classical computer, every bit of your memory may be represented by \(N\) copies. You manipulate them and get some results. Hopefully most of them are correct, and the majority may vote. The probability that the majority's opinion is wrong is tiny. So classical computers almost never make a mistake.
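To make the classical repetition idea concrete, here is a minimal Python sketch; the function names and the 10% per-copy error rate are my illustrative choices, not a model of real hardware:

```python
import random

def noisy_copy(bit, error_rate):
    """Flip the bit with probability error_rate, modeling imperfect hardware."""
    return bit ^ (random.random() < error_rate)

def majority_vote(bit, n_copies=101, error_rate=0.1):
    """Store n_copies noisy copies of a classical bit and let the majority vote."""
    copies = [noisy_copy(bit, error_rate) for _ in range(n_copies)]
    return int(sum(copies) > n_copies // 2)

# With 101 copies and a 10% per-copy error rate, the majority is essentially
# always right: the failure probability falls exponentially with n_copies.
random.seed(0)
results = [majority_vote(1) for _ in range(1000)]
print(sum(results))  # expect 1000, or extremely close to it
```

The point of the sketch is the exponential suppression: even a sloppy 10% channel becomes astronomically reliable once a hundred copies vote, which is why classical bits can be treated as error-free in practice.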

When you're guaranteed that the voltage somewhere is very close to 0 volts or 5 volts, you are basically implementing this majority vote in a different way: the huge number of electrons carrying the voltage effectively vote on the value of the bit.

However, quantum bits can't be "cloned" in this way due to the quantum xeroxing no-go theorem (or simply due to the linearity of quantum mechanics: cloning is a quadratic, not linear, map on the Hilbert space). The methods of quantum error correction are more sophisticated, and some of them are really beautiful and linked to exceptional structures in mathematics (which are also relevant in string theory). Each error correction scheme requires you to replace one qubit with many qubits and makes every calculation much longer (that's the "overhead"); the majority of the error-corrected calculation is all about the error correction itself.
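The linearity argument fits in two lines; this is the standard textbook no-cloning derivation, nothing specific to the schemes above. Suppose a unitary \(U\) could clone an arbitrary state, \(U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle\). Linearity then forces

\[
U\big(\alpha|0\rangle+\beta|1\rangle\big)|0\rangle
= \alpha\,|0\rangle|0\rangle + \beta\,|1\rangle|1\rangle,
\]

while genuine cloning of the superposition would instead demand

\[
\big(\alpha|0\rangle+\beta|1\rangle\big)\big(\alpha|0\rangle+\beta|1\rangle\big)
= \alpha^2|0\rangle|0\rangle + \alpha\beta\big(|0\rangle|1\rangle+|1\rangle|0\rangle\big) + \beta^2|1\rangle|1\rangle.
\]

The two expressions differ whenever \(\alpha\beta\neq 0\): the cloned state is quadratic in the amplitudes, while a linear evolution only produces first powers of them.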

I am confident that the underlying proposals have a rational basis and work at some level. But I am not enough of an expert to be sure that these sophisticated error-correcting strategies actually lead to the desired outcome. Even a system with error-correcting mechanisms may make errors, right? Does the probability of an error of the "larger system" actually decrease despite its growing size? By adding the overhead, don't you increase the chance of fatal errors? And so on.

So I propose the BEToL approach instead. It's the Brutal Error Tolerance of Lumo's. (With my knowledge, I am not actually able to prove that the sophisticated methods of quantum error correction give better outcomes or lower brute-force requirements than BEToL.) It simply means running the quantum calculation and hoping that it works fine at least once. Most of the time, it doesn't. But if you repeat the calculation many times, the correct result appears among the candidate results. And you may use some verification algorithm to check which of these candidates is the correct one.
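A minimal sketch of the BEToL loop in Python. The quantum computer is replaced here by a toy stand-in, a random guesser for a factor of \(N=2021=43\times 47\); the names `betol`, `quantum_run`, and `verify` are my illustrative choices. The essential structure is: run, verify classically, repeat.

```python
import random

def betol(quantum_run, verify, max_runs=10**6):
    """Brutal Error Tolerance: rerun the (mostly failing) calculation and
    keep the first candidate that passes the cheap classical verification."""
    for _ in range(max_runs):
        candidate = quantum_run()
        if verify(candidate):
            return candidate
    return None  # every run failed

# Toy stand-in for the quantum computer: factoring N = 43 * 47.
# A real device would sample from its output distribution instead.
N = 2021
quantum_run = lambda: random.randrange(2, N)  # a noisy guess, rarely correct
verify = lambda d: N % d == 0                 # verification is classically cheap

random.seed(1)
print(betol(quantum_run, verify))  # a nontrivial factor of 2021: 43 or 47
```

The asymmetry that makes the scheme viable is visible in the toy: each run is almost always wrong, but checking a candidate costs one division, so you can afford to sift through an enormous number of failures.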

In particular, I propose to work on a quantum computer design that can complete a calculation, with a one-in-a-trillion chance of producing the correct result, within a microsecond (before decoherence kicks in substantially). You create one million copies of this small quantum computer of yours, and you run the calculation on each of them one million times per second. Then there is a probability comparable to 50% that, within one second, at least one of the candidate results will be the correct one. And it may be a password or a factor of a large integer that classical computers would need googols of years to find.

For this setup to work, you need the probability of the correct result to be at least "one trillionth". With sufficiently refined hardware, this could be done. Note that if the probability of the correct (discrete) result is supposed to be "one trillionth", it's equivalent to having the absolute value of the complex probability amplitude equal to or greater than "one millionth".
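The arithmetic behind the claim can be checked in a few lines. With \(10^6\) machines each running \(10^6\) times per second, the chance of at least one success in one second is \(1-(1-p)^{10^{12}}\); the sketch below computes it with Python's numerically stable `log1p`/`expm1`:

```python
import math

p = 1e-12             # per-run probability of the correct result
runs = 10**6 * 10**6  # 10^6 machines x 10^6 runs per second = 10^12 runs

# P(at least one success) = 1 - (1-p)^runs, computed stably for tiny p:
p_success = -math.expm1(runs * math.log1p(-p))
print(round(p_success, 3))  # 0.632, i.e. 1 - 1/e: indeed "comparable to 50%"
```

Since the expected number of successes is exactly \(p\cdot 10^{12}=1\), the probability of at least one success is \(1-1/e\approx 63\%\), consistent with the estimate "comparable to 50%" above.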

It's my feeling that this brutally simple approach to errors in quantum computation will actually be the approach taken by those who demonstrate quantum supremacy for the first time.

Also, I would urge the FBI and others to quickly capture and/or assassinate everyone who could be working on such farms of quantum computers in order to perform criminal acts or undermine the system, because quantum computers really have the capacity to destroy lots and lots of things. I do think that every organization that relies on computers or offers computer services should create its own department that looks into the dangers that quantum computing may pose and designs strategies to render those threats harmless. Some of the solutions might reverse years of progress or even require computer-free, old-fashioned alternative solutions to certain tasks.

A safer system with new cryptographic habits should emerge, but the transition era could be chaotic, dramatic, and dangerous. The dangers may be similar to those of nuclear proliferation, except that the relevant technology could be available to much smaller players.
