Monday, February 25, 2019

Good quantum error correction vs Lie symmetries: a trade-off

Quantum Frontiers, a Caltech-based blog written by the folks around John Preskill, published a review

Symmetries and quantum error correction
by Philippe Faist of a fresh, 72-page-long quant-ph preprint written by Faist and 6 co-authors including Preskill (a Caltech-Stanford-Berlin collaboration)
Continuous symmetries and approximate quantum error correction.
It looks like a rather neat paper about quantum information. I normally don't watch the quant-ph archive on a regular basis, silently assuming that place to be dominated by various Maudlin-like crackpots who have trouble with the postulates of quantum mechanics, or by the likes of Renner and his pregnant girls who play with some 2-qubit exercises as if they were in kindergarten.



But this preprint seems smarter than that – material that might belong to hep-th as the primary archive, regardless of the authors' default cultural habits. (Well, Preskill was trained as a hep-th person, so this proximity to hep-th should not be considered surprising.) After all, the paper discusses some tricks that are relevant in AdS/CFT, surely a topic owned by the physicists. But there's something more elementary that would make it natural for this paper to be a hep-th paper: the continuous symmetries acting on quantum systems.

If you're a hardcore "quantum information" person, you aren't really dealing with physics-like continuous symmetries – except for the trivial \(U(N)\) group acting on your whole Hilbert space. Why? Because quantum computers use qubits instead of bits – but those are still "digital" and "discrete". The set of operations that may be performed on the qubits by an actual computer is supposed to be finite – or at most countable – just like on a classical computer. These operations do something different than the operations on a classical computer do, but the set of the allowed operations is still discrete, if you get my point.
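To illustrate the "digital" character of the allowed operations, here is a minimal sketch – my illustration, not anything from the preprint. A quantum computer composes a finite gate set such as \(\{H, T\}\); by Solovay–Kitaev-type arguments, words built from these discrete gates can approximate a generic continuous rotation arbitrarily well, but only approximate it:

```python
import numpy as np

# A finite, "digital" gate set: just Hadamard and T. A real quantum computer
# composes discrete gates like these; a generic continuous rotation can only
# be approximated, never hit exactly (cf. the Solovay-Kitaev theorem).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def best_approximation(target, max_len=10):
    """Brute-force search over all words in {H, T} of length <= max_len
    for the closest match to a 2x2 target unitary, up to a global phase."""
    best = np.inf
    words = [np.eye(2, dtype=complex)]
    for _ in range(max_len):
        words = [g @ w for w in words for g in (H, T)]
        for w in words:
            # distance is 0 iff w equals the target up to a global phase
            dist = 1 - abs(np.trace(w.conj().T @ target)) / 2
            best = min(best, dist)
    return best

theta = 0.3  # an arbitrary "continuous" rotation angle
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
print(best_approximation(Rz))  # small, but generically nonzero
```

The continuous rotation by the angle \(\theta\) is never reproduced exactly by any finite word – the discrete gate set only generates a countable (if dense) subset of the continuous group.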



This is the reason why I would count a paper about "continuous symmetries within a quantum mechanical system" as "physics", not any kind of "computer science". Great.

Now, symmetries are beautiful and have been crucial in the top physicists' thinking about the state-of-the-art laws of physics – I think that Albert Einstein and the main symmetry he introduced, the Lorentz ;-) symmetry, should be credited for this new perspective. (Hendrik Lorentz was the first man who wrote down the Lorentz transformations – but sold them as some boring "coordinate transformations", failed to realize they made up a group, and of course failed to see any relevance of the group to the choice of the inertial system.)

Symmetries are great; they may be local or global, and only the latter are real physical symmetries. However, there's still some relationship between global and local symmetries in most cases – the global symmetries we allow seem to be leftovers from local symmetries that allow some non-singlet or asymmetric behavior in the asymptotic regions. But there's something about our conventional thinking about the presence of a symmetry in a physical system: it still seems to be a discrete, Yes/No question. QCD either has the \(SU(3)\) symmetry or it doesn't (Yes!) and the Standard Model either has or doesn't have the \(G_2\) automorphism symmetry of the octonion algebra (No!).

But for years, I have been convinced that this treatment of symmetries as discrete, Yes/No questions is lame. It is an example of thinking "inside the box". When you interpret the symmetries as qualitative traits, you are really building a cardboard wall between all the theories whose symmetries differ. You can't really combine "several classes of theories" associated with different symmetries as elements of an ensemble, as parts of a bigger picture. But string theory does allow vacua with lots of different symmetries that the effective theories make manifest. And those are connected on the stringy landscape.

As I discussed e.g. in The Monstrous Beer Conjecture and many other texts, a more conceptual physical treatment should allow all symmetries and include lots of facts about symmetries that seem to apply in general – such as identities and inequalities that involve the dimensions of the symmetry groups, the orders of finite groups, and other things.

In many older texts, I have conjectured that there's some new "complementarity" between the amount of symmetries on one side, and the amount of smooth geometric dimensions on the other side. So in the landscape of string/M-theory, we have e.g. the 11-dimensional M-theory vacuum which seems to have the minimal amount of extra gauge (continuous or finite) symmetries, but the maximum number of decompactified dimensions. On the other hand, the pure 2+1-dimensional gravity has to be realized by a quantum mechanical theory with the monster group symmetry, the largest sporadic finite simple group. It looks like the numerous dimensions of M-theory may be "morphed into" or "traded for" the huge, monster-like symmetries in the lower-dimensional vacua.

The paper by Faist et al. is surely going in a similar direction and proves some inequalities. They're primarily the inequalities (2) and (3) in their paper. The equations (1) and (2) – the latter isn't really an equation because it's an inequality, but you surely understand me – say\[

\sqrt{1-f_{\rm worst}^2} \equiv
\epsilon_{\rm worst} \geq \frac{\Delta T_L}{2n\max_i \Delta T_i}

\] where the \(\epsilon\) symbol represents some minimal error or "infidelity" of the quantum error correction scheme. The numerator \(\Delta T_L\) encodes the strength with which the symmetries act on the logical subsystem, while the denominator is maximized over the \(n\) physical subsystems. Another inequality, equation (3), says\[

\epsilon_{\rm worst} \geq \frac{1}{2n \max_i \ln d_i} + O\left( \frac{1}{nd_L} \right)

\] The error of the error correction scheme may be made very small by including a subsystem of a very large dimension – if the dimension is infinite, the lower bound on the infidelity may be zero.

So you can see that \(\Delta T_{L/i}\) are some intensive quantities and this "theory" deals with the logarithms of dimensions of representations of a symmetry as if they were a new extensive "commodity". There seems to be some trade-off that might be, if you're very optimistic, analogous to Heisenberg's uncertainty principle.
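To make the scaling tangible, here is a toy numerical sketch – the values of \(\Delta T\) and the local dimensions \(d_i\) below are invented for illustration, not taken from the paper – showing how both lower bounds shrink as the number \(n\) of physical subsystems grows:

```python
import numpy as np

# Toy evaluation of the two lower bounds on the worst-case infidelity
# epsilon_worst. All numbers are invented; they are NOT data from the paper.

def charge_bound(dT_logical, dT_physical):
    """Bound (2): epsilon_worst >= Delta T_L / (2 n max_i Delta T_i)."""
    n = len(dT_physical)
    return dT_logical / (2 * n * max(dT_physical))

def dimension_bound(dims_physical):
    """Leading term of bound (3): epsilon_worst >= 1 / (2 n max_i ln d_i);
    the O(1/(n d_L)) correction is dropped here."""
    n = len(dims_physical)
    return 1.0 / (2 * n * max(np.log(d) for d in dims_physical))

for n in (5, 50, 500):
    dT_phys = [2.0] * n  # hypothetical charge spread on each physical subsystem
    dims = [4] * n       # hypothetical local dimensions
    print(f"n={n:4d}  charge bound={charge_bound(3.0, dT_phys):.5f}  "
          f"dimension bound={dimension_bound(dims):.5f}")
```

Both bounds decay like \(1/n\): spreading the logical information over more physical subsystems, or using subsystems of larger dimension, weakens the obstruction – exactly the trade-off between the symmetry and the fidelity of the error correction.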

As in the case of all inequalities, I always add that a truly fundamental inequality is one that is just a promotion for an identity. So the uncertainty principle\[

\Delta x \cdot \Delta p \geq \frac{\hbar}{2}

\] is just a corollary of – and a popular promotion of – an identity, namely\[

xp - px = i\hbar.

\] I will only be "comparably excited" to how excited I am about quantum mechanics once someone finds a new surprising identity – analogous to the commutator above – that directly "explains" the inequality or inequalities by Faist et al. But even now, when they haven't gotten anywhere close to the Heisenberg category, their results look intriguing and they might even have something to do with my "spacetime-dimension vs huge finite symmetry group" tradeoff.
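For completeness, let me recall the standard textbook step that turns the identity into the inequality – Robertson's bound, which holds for any pair of observables \(A,B\) in any state:\[

\Delta A \cdot \Delta B \geq \frac{1}{2} \left| \langle [A,B] \rangle \right|.

\] Substituting \(A=x\), \(B=p\), and \([x,p]=i\hbar\) immediately yields \(\Delta x \cdot \Delta p \geq \hbar/2\). An analogous identity standing behind the Faist et al. inequalities is exactly what hasn't been found yet.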

