**First, news on beyond-the-Standard-Model dark bosons:** In an MIT search (PRL) involving transitions in ytterbium nuclei, a 3-sigma deviation from expectations was found, while a Danish search using calcium (PRL) saw no deviation. The MIT excess may be due to some higher-order effects, however. See a popular summary.

Some equations in this blog post are long and I don't want to break them into too many short lines. Please switch to the mobile template in this case, thanks. Also, right-click an equation and set "Zoom trigger" to "click": a left mouse click on the formulae will then magnify them.

Classical computers operate with classical information, e.g. bits encoded in the voltage of a capacitor: \(5 = 00000101\). Their operation should be deterministic.
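The classical encoding above can be illustrated in a couple of lines of Python (just the deterministic bookkeeping of bits, nothing quantum yet):

```python
# A classical register is just an integer stored as a string of bits.
# The number 5 in an 8-bit register:
bits = format(5, "08b")   # zero-padded binary representation
print(bits)               # -> 00000101

# The bits determine the integer deterministically, and vice versa:
value = int(bits, 2)
print(value)              # -> 5
```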

**Additional reading if you wish: books such as "The Physics of Quantum Information" by Bouwmeester, Ekert, Zeilinger (a hyperlink appears later)**

Richard Feynman was the first to propose that there could also be computers following the laws of quantum mechanics instead of classical physics. While tremendous progress has taken place in the last year or two, a practical construction of a useful universal quantum computer remains a dream for the future (although the leading teams promise it may be done within a few years; some limited algorithms and simulations of quantum chemistry have already been achieved). Nevertheless, a lot has been learned about the theory and the algorithms, and some preliminary steps to overcome the technical difficulties and realize the idea have been made.

**Bit and Qubit**

One bit can either be "zero" or "one". Your computer's, tablet's, or smartphone's memory chips contain a lot of bits like that, and their state should evolve deterministically: the present state is strictly determined by the state in the "previous moment". If your computer is not deterministic, you may want to buy a new one.

Quantum computers would be based on quantum bits, or qubits. While a bit must be either "zero" or "one", a qubit may be found in a quantum state that is an arbitrary linear superposition of the two states: \[ \ket\psi = \alpha \ket 0 + \beta \ket 1 \] where the complex amplitudes are normalized, \(|\alpha|^2+|\beta|^2=1\). The states \(\ket 0\) and \(\ket 1\) may be represented by spin-up and spin-down or any other two-level system that we have discussed – or have not discussed – on this website or elsewhere.

Instead of one bit, we seem to have two complex numbers. They can't be directly measured; these numbers only determine the probabilities. If you consider \(N\) qubits, there are \(2^N\) different states, and each of them is associated with a complex number. For example, for \(N=2\) we have a state \[ \ket\psi = \alpha_{00} \ket{00} +\alpha_{01}\ket{01} +\alpha_{10}\ket{10} + \alpha_{11}\ket{11} \] where \(\ket{10}\) is the same thing as \(\ket 1 \ket 0\) which is the same as \(\ket 1 \otimes \ket 0\) and so on. The classical computer only allows the states where all the constants \(\alpha\) in the standard basis are essentially equal to zero except for one that is equal to one, and this property must be preserved by all the operations. What are these operations in the case of the classical computer? For example, the \(n\)-th bit can change to the logical sum of the \(p\)-th bit and the \(q\)-th bit.
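A minimal numpy sketch of the \(N=2\) case above – the four amplitudes here are arbitrarily chosen for illustration:

```python
import numpy as np

# A 2-qubit state is a vector of 2**2 = 4 complex amplitudes,
# ordered as |00>, |01>, |10>, |11>:
alpha = np.array([0.5, 0.5j, -0.5, 0.5j], dtype=complex)

# The amplitudes can't be measured directly; they only determine
# the probabilities of the four possible outcomes:
probs = np.abs(alpha) ** 2
print(probs)                   # -> [0.25 0.25 0.25 0.25]

# A classical state is the special case with one amplitude equal
# to one and all the others equal to zero, e.g. |10>:
classical = np.zeros(4, dtype=complex)
classical[0b10] = 1.0
print(np.abs(classical) ** 2)  # -> [0. 0. 1. 0.]
```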

Quantum computers are not constrained by these classical constraints, and all coefficients \(\alpha\) may be nonzero. This allows much more general operations. Consider the operations that only act on one bit. In classical physics, you can either do nothing; or change the bit to 0 regardless of its value; or change it to one; or negate it.

In quantum physics, there are many more operations that you can do with a single bit and especially with several bits (but, as I won't discuss in much detail in this text, quantum mechanics also imposes some extra restrictions, which means that many operations allowed classically are prohibited on quantum computers: unlike the classical computer operations, the quantum ones are always reversible!). Any unitary \(2\times 2\) transformation will do the job. It is very useful to consider a very special transformation, the so-called Hadamard transformation: \[ U_H = \frac{1}{\sqrt{2}}\left( \begin{array}{rr} 1&1\\ 1&-1 \end{array} \right) \qquad \Rightarrow\qquad U_H\ket 0 = \frac{1}{\sqrt{2}} \left[ \ket 0 + \ket 1 \right],\qquad U_H\ket 1 = \frac{1}{\sqrt{2}} \left[ \ket 0 - \ket 1 \right]. \] In some algorithms we will sketch later, this operation is particularly useful if it acts (in the same way) on many qubits simultaneously. For example, \[ U_H \ket{00} = \frac 12 \left[ (\ket 0 + \ket 1) (\ket 0 + \ket 1) \right] = \frac 12 \left( \ket{00} + \ket{01} + \ket{10} + \ket{11} \right) =\frac12\left( \ket 0 + \ket 1 + \ket 2 + \ket 3 \right) \] More generally, for an \(n\)-qubit register, we obtain \[ U_H \ket{0}_n = \frac{1}{\sqrt{2^n}}\sum_{i=0}^{2^n-1} \ket i \] which is a "maximally non-classical state" that is often envisioned to be the initial condition of the quantum computer for many calculations.
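The Hadamard matrix and the maximally non-classical state can be checked directly with numpy (a small sketch; the \(n\)-qubit version is built as the Kronecker product \(U_H\otimes\dots\otimes U_H\)):

```python
import numpy as np

U_H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # the Hadamard matrix

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
print(U_H @ ket0)   # -> [0.707..., 0.707...], i.e. (|0> + |1>)/sqrt(2)
print(U_H @ ket1)   # -> [0.707..., -0.707...], i.e. (|0> - |1>)/sqrt(2)

# Acting with U_H on each qubit of |0>_n gives the uniform
# superposition of all 2**n basis states:
n = 3
U = U_H
for _ in range(n - 1):
    U = np.kron(U, U_H)
state = U @ np.eye(2**n)[0]   # apply to |0...0>
print(state)                  # all 8 amplitudes equal 1/sqrt(8)
```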

As we mentioned, classical computers can change the value \(i=0\dots 255\) of a register to an arbitrary function \(f(i)\). In quantum computing, all the operations done with the state \(\psi\) must be unitary – because any evolution operator in pure quantum mechanics is unitary. This feature does not allow us to use functions \(i\mapsto f(i)\) to change the register directly if the function \(f(i)\) is not one-to-one. It's because functions that are not one-to-one are not invertible – i.e. the process would not be reversible – but as we have said, all operations with the quantum computer must be reversible because they are unitary (\(U^{-1} = U^\dagger\)). The constraint of reversibility usually forces us to preserve the input \(\ket{i}_n\): \[ \ket{i}_n \ket{0}_m \mapsto \ket{i}_n \ket {f(i)}_m \] Of course, the information about the second register is only preserved if \(f(i)\) is a unitary action on the second register for every \(i\). In our case, we only consider the action on \(\ket{0}_m\).

For example, start with \(\ket{0}_n\ket{0}_m\), and apply \(U_H\) to the first register to obtain \[ \left( U_H \ket{0}_n \right) \ket{0}_m = \frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} \ket{i}_n\ket{0}_m \] Now apply \(f\): \[\frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} \ket{i}_n\ket{0}_m \mapsto \frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} \ket{i}_n\ket{f(i)}_m = \frac{1}{\sqrt{2^n}}\left( \ket{0} \ket{f(0)} + \ket{1} \ket{f(1)} + \dots\right) \] Note that the state has simultaneously evaluated the values \(f(i)\) for all possible configurations of the first register. In other words, we have simultaneously calculated \(2^n\) values of the function \(f(i)\) in one step. Indeed, this is what quantum computers could do in one operation. Such observations are behind the statement that quantum computers could be exponentially faster than the classical computers.
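Here is a small numpy simulation of this step; the function \(f(i)=i^2 \bmod 2^m\) is just an arbitrary example of a function that is not one-to-one:

```python
import numpy as np

n, m = 3, 3                       # sizes of the two registers
f = lambda i: (i * i) % 2**m      # an example function (not one-to-one)

# amp[i, j] is the amplitude of the basis state |i>_n |j>_m.
# After the Hadamards, the first register is a uniform superposition
# while the second register is still |0>_m:
amp = np.zeros((2**n, 2**m), dtype=complex)
amp[:, 0] = 1 / np.sqrt(2**n)

# The reversible evaluation |i>_n |0>_m -> |i>_n |f(i)>_m is just a
# permutation of basis states, so it only moves amplitudes around:
evaluated = np.zeros_like(amp)
for i in range(2**n):
    evaluated[i, f(i)] = amp[i, 0]

# A single "operation" has recorded all 2**n values f(i) at once:
for i in range(2**n):
    print(i, "->", int(np.argmax(np.abs(evaluated[i]))))
```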

However, you should notice that most of the information is inaccessible because the wavefunction can't be directly measured. For example, try to measure the first register. You will obtain, with equal probability, any number \(\ket k\) between \(0\) and \(2^n-1\). Because of the measurement, the wavefunction collapses to \(\ket k \ket{f(k)}\). This allows you to learn the value of \(f(k)\) for a single value \(k\) – no improvement over the classical computer – and, moreover, for a random value of \(k\) which you may not have cared about in the first place.

**Exploiting the entanglement**

In order to see some progress, we must try to extract the exact information we want. This can be achieved by using some special properties of entangled states.

For example, assume that \(f(i)\) is defined by some specific obscure algorithm but is also periodic with period \(r\), \[ f(i) = f(i+r) \] and that we are only interested in the value of \(r\), which may not be obvious from the definition of \(f\). If \(r\) is comparable to \(2^n\), classical computers would need roughly \({\cal O} (2^n)\) operations to find the period: an exponentially long time. A quantum computer would only require polynomial time, \({\cal O} (n^p)\) for some power \(p\). In computer science, people discuss NP-completeness and related issues, and it is widely believed that a classical computer cannot solve certain problems in polynomial time. Quantum computers could break these limitations for certain (or "many", depending on the measure) computational tasks.

How do they find the period? Consider the last state we mentioned\[ \frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} \ket{i}_n\ket{f(i)}_m \] and start with measuring the second register. You find a value \(f(k)\). This collapses the wavefunction of the first register to\[ \ket{\psi}_n = A\left[\,\, \ket k + \ket{k+r} + \ket{k+2r} + \dots \right] \] If we now measured the first register, we would obtain just one example of \(\ket{k+jr}\) that yields the value \(f(k)\). But one example is not enough, and if you repeat the whole experiment to get another example of a \(k+jr\) to figure out the period, you will fail: next time, you will get a different value of \(f\), and hence a different offset \(k\). Again, you would need to repeat the experiment \({\cal O}(2^n)\) times to obtain the same \(f(k)\) again, and no improvement over the classical computer is found.
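A toy simulation of this collapse; the function \(f(i)=i \bmod r\) is a stand-in for the "obscure algorithm", and \(n=5\), \(r=4\) are arbitrary choices:

```python
import numpy as np

n, r = 5, 4                 # 2**5 = 32 basis states; (hidden) period r
f = lambda i: i % r         # a toy periodic function with f(i) = f(i + r)

# The state sum_i |i>_n |f(i)>_m, stored as the list of pairs (i, f(i)):
pairs = [(i, f(i)) for i in range(2**n)]

# Measure the second register; suppose the observed value is f(k) = 1.
observed = 1
survivors = [i for (i, fi) in pairs if fi == observed]
print(survivors)            # -> [1, 5, 9, ..., 29]: the comb k, k+r, k+2r, ...

# The collapsed first register, A * (|k> + |k+r> + ...):
A = 1 / np.sqrt(len(survivors))
psi = np.zeros(2**n)
psi[survivors] = A
print(np.sum(psi**2))       # -> 1.0 up to rounding (normalized)
```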

**Using the Fourier transform**

If you change the value of \(k\) by an additive constant, the Fourier transformation will only change by a phase and this phase will not influence the measurements. In this particular case, we want to use the discrete Fourier transform (DFT): \[ U_{DFT}\ket{x} = \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n-1} \exp(2\pi i xy / 2^n) \ket y \] In the limit \(2^n\to\infty\), the variables \(x,y\) behave just like the canonical coordinate and the momentum. It is important to notice that \(U_{DFT}\) is a unitary transformation on the Hilbert space: the norm of the vectors is preserved because you know that the norm of a wavefunction and the norm of its Fourier transform are equal if a proper normalization is chosen. The DFT can be realized in a pretty small number of operations of a quantum computer.

How does it help us find the period \(r\)? Assume, for a while, that \(2^n\) is a multiple of \(r\). This is a very strong condition because it implies that \(r=2^\mu\) itself. We won't strictly need this condition in the end, but it makes the reasoning clearer.

If \(r\) divides \(2^n\), then the state of the first register, after we measured \(f(k)\) on the second register, will be \[ \ket{\psi}_n = \sqrt{\frac{r}{2^n}}\sum_{j=0}^{2^n/r - 1} \ket{k+jr} \] which we normalized to unity. What is the Fourier transform of this state? \[ U_{DFT}\ket{\psi}_n = \frac{\sqrt{r}}{2^n} \sum_{y=0}^{2^n-1} \sum_{j=0}^{2^n / r -1} \exp(2\pi i (k+jr) y / 2^n) \ket y = \dots = \frac{1}{\sqrt{r}}\sum_{l=0}^{r-1} e^{2\pi i kl / r} \ket{\frac{2^n l}r} \] The dots indicate that we have summed the phases over \(j\) to find out that the sum is only nonzero for \(y\) being a multiple of \(2^n/r\), i.e. \(y=2^n l / r\), which replaced the summation over \(y\) by a sum over \(l\). You see that only some basis vectors contribute to the state of the first register – the states \(\ket{2^n l / r}\) – and by repeating the "calculation" and the measurement several times, we can determine \(r\).
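You can verify these sharp "resonances" numerically; a sketch with the arbitrary choices \(n=6\), \(r=4\), \(k=3\), building \(U_{DFT}\) as an explicit matrix:

```python
import numpy as np

n, r, k = 6, 4, 3
N = 2**n                            # 64; r divides N, so the peaks are sharp

# The comb state sqrt(r/N) * (|k> + |k+r> + ...):
psi = np.zeros(N, dtype=complex)
psi[k::r] = np.sqrt(r / N)

# U_DFT with matrix elements exp(2*pi*i*x*y/N)/sqrt(N):
x = np.arange(N)
U_DFT = np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)
phi = U_DFT @ psi

# Only the states |N*l/r>, i.e. multiples of 16, survive,
# each carrying probability 1/r = 0.25:
peaks = np.flatnonzero(np.abs(phi)**2 > 1e-9)
print(peaks)                                 # -> [ 0 16 32 48]
print(np.round(np.abs(phi[peaks])**2, 12))   # -> [0.25 0.25 0.25 0.25]
```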

If \(r\) does not divide \(2^n\) exactly, the "resonances" won't be sharp; instead, they will have a nonzero width. In other words, \(|c(y)|^2\) will be nonzero in a vicinity of the fractional values of \(y=2^n l / r\). But we can still determine \(r\) by a few measurements.

**Draw graphs**: \(|c(x)|^2\) is nonzero for \(x=l,\quad l+r, \quad l+2r, \dots\) after the measurement of the second register, and the Fourier transform's \(|c(y)|^2\) is nonzero for \(y=2^n l/r\pm\epsilon\) where \(l\) is integer and \(\epsilon\) is nonzero but small if \(2^n\) is not a multiple of \(r\).

**Breaking the codes of the Internet**

Why is it useful? Because finding the period of a periodic function allows one to factorize numbers into prime factors efficiently. That's important because many encryption systems used in the computer world are based on the belief – well-established but not rigorously proved – that it is impossible, e.g., to factorize the product \(n=pq\) of two huge but unknown prime integers in a reasonable time. This belief is what makes the codes hard to invert. It is easy to find prime integers \(p,q\) with hundreds of digits or more and calculate their product \(n=pq\), but it is virtually impossible to determine \(p,q\) given the known value of \(n\). Quantum computers would make such a task doable in a reasonable time (well, even the previous one was finite in principle but infinite in practice). Quantum computers would not be helpful as text editors or gadgets to run Pokémon Go, but they would allow their owner to break the currently existing codes on the Internet and steal money from many bank accounts.

In 1994, Peter Shor (MIT) discovered this new possible powerful weapon for international crime. His algorithm for quantum computers is based on a basic theorem from number theory which states that \(f(x) = a^x\) mod \(n\) is a periodic function of \(x\) if \(a,n\) are relatively prime. Our goal is to find factors of \(n\). If \(r\) is the period, then \[ a^r\text{ mod } n \text{ must be equal to } a^0\text{ mod } n = 1 \] In other words, \((a^r-1)\) mod \(n=0\). Assume that \(r\) is even; if you randomly choose \(a\), you will be lucky in roughly 50 percent of the cases and \(r\) comes out even. In that case, the last formula may be written as \[ (a^{r/2} -1) (a^{r/2}+1) \text{ mod }n = 0 \] which means that at least one of the numbers \(\alpha,\beta = (a^{r/2}\mp 1)\) must fail to be co-prime with \(n\). That's pretty clear: if both \(\alpha\) and \(\beta\) were co-prime with \(n\), their product could not be a multiple of \(n\). We also need that neither of them is itself a multiple of \(n\): for example, if \(\alpha=(a^{r/2}-1)\) were a multiple of \(n\), the period would be \(r/2\) instead of \(r\).
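The classical bookkeeping around Shor's algorithm can be sketched in a few lines of Python; the period \(r\) is found here by brute force, which is exactly the step a quantum computer would replace:

```python
from math import gcd

def shor_classical(n, a):
    """Classical part of Shor's algorithm: given the period r of
    a**x mod n, extract divisors of n. (The quantum computer would
    supply r; here we find it by brute force.)"""
    assert gcd(a, n) == 1
    r = 1
    while pow(a, r, n) != 1:      # find the period of a**x mod n
        r += 1
    if r % 2 == 1:
        return None               # unlucky choice of a; pick another
    alpha, beta = pow(a, r // 2) - 1, pow(a, r // 2) + 1
    # The gcds are nontrivial as long as neither alpha nor beta
    # happens to be a multiple of n:
    return gcd(alpha, n), gcd(beta, n)

print(shor_classical(21, 11))     # -> (7, 3)
```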

In other words, because at least one of \(\alpha,\beta\) is not co-prime with \(n\), you may compute the greatest common divisor of \(n\) and \(\alpha\), or the greatest common divisor of \(n\) and \(\beta\), and you are guaranteed to find a nontrivial factor of \(n\). The greatest common divisor can be found using the recursive (classical) algorithm due to Euclid. The greatest common divisor of \(n,\alpha\), assuming \(n < \alpha\), is the same number as the greatest common divisor of \(n\) and \(\alpha-n\). Substitute these numbers into the previous sentence instead of \(n,\alpha\) (and order them), and repeat the same step until the "new" values of \(n,\alpha\) are equal. Once they're equal, both of them are equal to the greatest common divisor of all the pairs you encountered.
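The subtraction-based form of Euclid's algorithm described in this paragraph, as a short sketch:

```python
def gcd_subtraction(a, b):
    # Replace the larger of the pair by (larger - smaller),
    # reorder, and repeat until both numbers coincide; the
    # common value is the greatest common divisor.
    while a != b:
        a, b = max(a, b) - min(a, b), min(a, b)
    return a

print(gcd_subtraction(1330, 21))   # -> 7
print(gcd_subtraction(1332, 21))   # -> 3
```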

**Example: Factors of 21**

Quantum computers, if ever constructed, would be able to factorize huge numbers, but for our understanding, it is better to start with a small example such as \(n=21\). Choose a random value of \(a\) comparable to \(n/2\), for example \(a=11\); about one half of the choices will give you a helpful result. Calculate the powers of \(a=11\) modulo 21. You will get: \[ 11^0 = 1,\quad 11^1 = 11, \quad 11^2 = 16, \quad 11^3 = 8, \quad 11^4 = 4, \quad 11^5 = 2, \quad 11^6 = 1 \] For large \(a,n\), finding the period of this sequence is the step that a quantum computer would do efficiently. Without the computer, we see that \(r=6\) because the sixth power agrees with the zeroth power.

Now define \[ \alpha \equiv a^{r/2} - 1 = 11^3-1 = 1330 , \qquad \beta \equiv a^{r/2} + 1 = 11^3 + 1 = 1332, \] and use Euclid's algorithm to find the greatest common divisors: \[ \begin{array}{rclrcl} 1330 &=& 63\cdot 21 + 7 & \qquad 1332 &=& 63\cdot 21 + 9\\ 21 &=& 3\cdot 7 & 21 &=& 2\cdot 9 + 3\\ \Rightarrow 7 &=& \gcd(1330,21)& 9 &=& 3\cdot 3\\ \quad & \,&\,& \Rightarrow 3 &=& \gcd (1332,21) \end{array} \] This was probably not the simplest way to figure out that 3 and 7 divide 21 but, for large numbers, rest assured that this would be faster than the other methods you may imagine.
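The table of powers and the two division chains can be verified mechanically (a sketch that just reproduces the numbers above):

```python
from math import gcd

# The table of powers of a = 11 modulo n = 21:
powers = [pow(11, x, 21) for x in range(7)]
print(powers)              # -> [1, 11, 16, 8, 4, 2, 1]

# The period is the smallest positive x with 11**x = 1 (mod 21):
r = powers[1:].index(1) + 1
print(r)                   # -> 6

alpha = 11**(r // 2) - 1   # 1330
beta = 11**(r // 2) + 1    # 1332

# The two division chains from the display above:
print(divmod(alpha, 21))   # -> (63, 7)
print(divmod(beta, 21))    # -> (63, 9)
print(gcd(alpha, 21), gcd(beta, 21))   # -> 7 3
```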

**Grover's algorithm**

The second most famous algorithm – and I don't know any third algorithm named after a person (surely there are many) – in which the quantum computer would surpass the classical computer was discovered by Lov K. Grover in 1996. He figured out how one can find the correct entry in an unordered database with \(n\) items very quickly. Classically, you need \({\cal O}(n)\) steps: such a computer would have to ask "Is this the right entry?" roughly \(n\) times. Quantum computers with Grover's algorithm could do the same job in \({\cal O}(\sqrt{n})\) steps. Each step is built from the Hadamard transformation discussed previously, and it systematically "rotates" the initial uniform superposition \(n^{-1/2}\sum_i\ket i\) into the right state.
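A small numpy simulation of Grover's algorithm for an (arbitrarily chosen) database of \(N=16\) items with the marked entry at position 11; the oracle and the "inversion about the mean" diffusion step are written as explicit matrices:

```python
import numpy as np

N = 16                    # database size (a small example)
target = 11               # the marked entry we are searching for

# Oracle: flips the sign of the amplitude of the marked basis state.
oracle = np.eye(N)
oracle[target, target] = -1

# Diffusion step ("inversion about the mean"), built from Hadamards:
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

# Start in the uniform superposition and iterate ~ (pi/4)*sqrt(N) times:
state = np.full(N, 1 / np.sqrt(N))
steps = int(np.round(np.pi / 4 * np.sqrt(N)))   # 3 steps for N = 16
for _ in range(steps):
    state = diffusion @ (oracle @ state)

print(int(np.argmax(np.abs(state)**2)))   # -> 11
print(np.abs(state[target])**2)           # probability ~ 0.96
```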

Unfortunately there is not enough room here to explain it in detail; the "Grover's algorithm" article at Wikipedia may be a simple starting point for you to learn it. The practical importance of such an algorithm, even if large quantum computers are ever constructed, is uncertain. Modern classical algorithms – such as Google's software – are based on the idea that databases should be ordered (and indexed) after all, and an unordered database may always look like a flawed starting point.

Feynman also proposed to use quantum computers for simulations of quantum objects and wavefunctions themselves – which seems a natural thing to do – and just two weeks ago, the Google experts announced that they can use their state-of-the-art computers to simulate the quantum chemistry of a molecule and its electronic structure.

**How to build a quantum computer**

Google – along with Amazon, IBM, and Microsoft, which also want to make their quantum hardware accessible – is probably leading the world's efforts to construct a realistic quantum computer. The best quantum computers constructed so far allow one to manipulate roughly 50 qubits of "memory", which is not enough to break the codes (especially because some extra increase of the number of qubits is needed for quantum error correction). One certainly needs thousands of qubits; IBM has just promised 1,000 qubits by 2023. Decoherence, which we discussed elsewhere, is the primary enemy.

The technologies behind the possible quantum computers carry fancy names – such as "quantum dots" (certain clusters of electrons). Other technologies try to operate with nuclear spins, and so forth. We will look at "ion traps" – or "ion arrays" – which is basically the technology used by the company named Honeywell, another contestant that I omitted in my list above. Google's and IBM's quantum computers consist of microscopic circuits of superconducting metal that entangle 53 qubits in a complex superposition state.

Four rods and a radio-frequency voltage are able to confine ions in two dimensions. Draw a rectangle and 5 small circles inside it (on a line). The spacing between them arises naturally, due to electrostatic forces. These ions are still free to move in the third dimension. The ground state of this additional motion is \(\ket 0\) while the first excited, metastable state plays the role of \(\ket 1\). The lifetime of this excited state can be about one minute.

We have qubits. What about the operations? One uses lasers at the resonant frequency \(\omega_{01}\) directed at each ion. Such radiation can induce the transitions \[ \ket 0\quad \leftrightarrow \quad \ket 1 \] and we have discussed the two-level systems earlier in the course. Recall that if the pulse lasts for \[ t = \frac{\pi}{2\omega_R} \] where \(\omega_R\) is the Rabi frequency, it will change \(\ket 0\) to \(\ket 1\) and \(\ket 1\) to \(\ket 0\). A pulse that lasts for \(\pi/(4\omega_R)\), one half of the previous time, will act much like the Hadamard transformation. One can also change the phase of the state if the laser frequency is detuned away from \(\omega_{01}\). These two operations cover all manipulations with individual qubits.
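A sketch of these pulses in the convention of the text, where a pulse of duration \(\pi/(2\omega_R)\) exchanges \(\ket 0\) and \(\ket 1\); the overall phases, which don't affect probabilities, are kept:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rabi_pulse(omega_R, t):
    # Resonant Rabi evolution: a rotation generated by sigma_x, chosen
    # so that t = pi/(2*omega_R) exchanges |0> and |1>:
    return np.cos(omega_R * t) * I2 - 1j * np.sin(omega_R * t) * sigma_x

omega_R = 1.0
ket0 = np.array([1, 0], dtype=complex)

flip = rabi_pulse(omega_R, np.pi / (2 * omega_R)) @ ket0
print(np.abs(flip)**2)    # -> [0. 1.]: the qubit is now |1>

half = rabi_pulse(omega_R, np.pi / (4 * omega_R)) @ ket0
print(np.abs(half)**2)    # -> [0.5 0.5]: a Hadamard-like superposition
```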

**Vibrational modes and coupled qubits**

In order to create a useful quantum computer, we also need operations where the fate of one qubit depends on the others – where the qubits are coupled. This can be achieved via the vibrational modes. Consider a pair of ions. They can vibrate in two energy eigenstates\[ (\rightarrow\rightarrow);\qquad (\rightarrow\leftarrow) \] The first one is the center-of-mass vibrational mode and it is the vibrational mode with the lowest energy. Recall that the vibrational modes contribute much less energy than \(\hbar\omega_{01}\). By exciting these modes, however, we can entangle the qubits.

We have qubits and methods to evolve them depending on their state. People have actually designed a system of ideas and tricks showing that these basic operations are sufficient to combine into the nontrivial algorithms discussed previously. The final thing we need is the ability to measure the state. This can be done by stimulating the qubits with the frequency \(\omega_{0m}\) between \(\ket 0\) and a short-lived state \(\ket m\). Such a frequency will drive \(\ket 0\) into \(\ket m\) but it won't affect \(\ket 1\). When we see a photon, it proves a decay \(\ket m\to \ket 0 + \gamma\), which means that the qubit was in the state \(\ket 0\) to start with. Don't forget that once the photon is emitted and observed, the wavefunction has collapsed and the quantum calculation must be finished. The decay is irreversible.

**Unhappy end – so far – may become happy rather soon**

Errors in the calculations and decoherence make all existing prototypes of a quantum computer unusable for Shor's, Grover's, and perhaps similar characteristic and famous algorithms – so far. Maybe some TRF readers will still join the industry – which is consuming a huge amount of money right now – and will help to solve the technological obstacles. But when you succeed, be careful about breaking the secret codes of the banks. Even if you have the most advanced technology, the CIA may be able to find you with some low-tech methods and force you to confess using a mostly classical gun pointed at your head.
