Comments on some extra chapters were added at the end
This reading report will be much more favorable than the previous one, which covered minds and machines.
Chapters 5, 6, 7, etc. are dedicated to the classes of problems that may be solved in a reasonable time. They are full of arguments showing that if one type of problem may be solved [at all AND/OR with some limitations on time AND/OR with some limitations on memory AND/OR with some probability of success AND/OR with some secret hints etc.], then another type of problem may be solved at all.
What is left is a network of classes of algorithmic problems that are not known to coincide. This network already incorporates the simplifications that follow from known proofs – proofs demonstrating the equivalence of some classes etc. There are many mathematical arguments that I have only partially verified, but it seems clear that they have no good reason to be wrong: the researchers behind them aren't stupid (they're rather rigorous mathematicians), and there aren't any real subtleties that would require some "special kind of intelligence" beyond conventional math skills and rigor.
As an introduction to complexity, it's a wonderful resource. But I don't plan to memorize all the things that are known about the classes and particular problems and algorithms; instead, I know a good source where some basic things may be found if needed. ;)
The last pre-quantum chapter is dedicated to cryptography. You learn about various simple and hard codes; functions that are easy to compute but hard to invert; and various proofs that various coding systems may be cracked in one way or another, with or without some extra information, with or without a quantum computer.
One of the holy grails of the chapter is the explanation of the RSA algorithm. You want to send some information – e.g. your credit card number – somewhere – e.g. to amazon.com (by the way, the company just decided to build its 1st–2nd largest European logistical center in Czechia, not Poland; the other one is in Germany) – so that no one except for amazon.com may decode it. How is it done? Amazon.com picks random large prime integers \(p,q\), which may be done quickly, and computes \(N=pq\). It sends just \(N\) to you. You don't know \(p,q\) yourself. Your computer computes \(x^3\mod N\) where \(x\leq N-1\) is an integer representing your secret message (or credit card number).
It's probably impossible to efficiently invert this exponentiation, i.e. to compute the cube root modulo \(N\) directly. It's also impossible to quickly find the factors \(p,q\) of the composite number \(N\). It's probably impossible to compute \(x\) out of \(N\) and \(x^3\mod N\) even indirectly – such a task probably can't be done without solving the impossible tasks in the previous two sentences. However, amazon.com knows \(p,q\) as well and, with this extra knowledge, it may calculate \(x\). For technical reasons, \(p-1\) and \(q-1\) shouldn't be multiples of three. Safe. Probably. Only you and amazon.com can know \(x\): you know it because you used it to calculate \(x^3\mod N\); amazon.com knows it because it's the only server that knows \(p,q\), which is probably needed to compute the cube root.
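The whole scheme fits in a few lines of Python. This is a toy sketch of the cube-and-reduce protocol described above – the tiny primes and variable names are my own illustration (real RSA uses huge random primes and message padding):

```python
# Toy sketch of the e=3 RSA-style scheme described above.
# The small primes are for illustration only -- never use such numbers in practice.

p, q = 11, 17          # amazon.com's secret primes; note 3 does not divide p-1 or q-1
N = p * q              # the public modulus, the only thing sent to the customer
phi = (p - 1) * (q - 1)

x = 42                 # the secret message, an integer x <= N-1
c = pow(x, 3, N)       # the customer computes and transmits x^3 mod N

# Only someone who knows p and q (hence phi) can take the cube root:
d = pow(3, -1, phi)    # modular inverse of 3 mod phi (Python 3.8+)
assert pow(c, d, N) == x   # amazon.com recovers the secret message
```

The condition that \(p-1\) and \(q-1\) avoid multiples of three is exactly what makes the modular inverse of 3 exist, so the decryption exponent \(d\) is well defined.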
There is a lot of stuff of this kind in the book and if you're interested in cryptography, the book is recommended to you. However, I want to say a few words about
The quantum mechanics operating system
There is a lot of "discrete bias" in the book; I will mention it again below. But I was impressed by Scott Aaronson's first approach to quantum mechanics because it largely agrees with mine – and some detailed uniqueness claims and the arguments supporting them are almost exactly equivalent to my text arguing that quantum mechanics can't be any different.
So Aaronson says that the conventional undergraduate approach to teaching quantum mechanics deals with lots of technicalities like differential equations and ends up viewing the new conceptual framework as an ultimate mystery. He takes a different approach – quantum mechanics is presented as a new generalization of probability theory.
It is correctly said that there are just three frameworks that are worth considering: classical deterministic systems; classical probabilistic systems; and quantum mechanics. Arguments are given why quantum mechanics has to be based on squaring the (absolute values of the) amplitudes; why the complex numbers are the only right, "Goldilocks", number system for quantum mechanics (their being algebraically closed is also mentioned, much like the natural counting linked to the fact that complex representations of groups are the default ones); why operators have to be linear, and so on.
Some of the arguments are literally the same as mine. For example, the reason why the probabilities are given by the second power is explained by the fact that only for the exponents \(p=1\) and \(p=2\) may one find some unitary-like transformations of the probability amplitude vectors that preserve the norm and that are nontrivial, i.e. more complicated than permutations with sign flips.
The unitary matrices are cleverly presented as the quantum answer to the stochastic matrices in classical physics. There isn't any other comparably interesting and nontrivial class of matrices with similar properties which is a part of the reason why classical and quantum physics are the only two frameworks to deal with probabilities in physics.
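The parallel between the two classes of matrices can be checked numerically. A minimal sketch (the particular matrices and vectors are arbitrary choices of mine): a stochastic matrix preserves the 1-norm of a probability vector, while a unitary (here a real rotation) preserves the 2-norm of an amplitude vector.

```python
import math

# Classical: a stochastic matrix (each column sums to 1) maps probability
# vectors to probability vectors -- the 1-norm (sum) is preserved.
S = [[0.9, 0.2],
     [0.1, 0.8]]
prob = [0.3, 0.7]
Sp = [S[0][0]*prob[0] + S[0][1]*prob[1],
      S[1][0]*prob[0] + S[1][1]*prob[1]]
assert abs(sum(Sp) - 1.0) < 1e-12

# Quantum: a unitary matrix (here a rotation by an arbitrary angle t)
# preserves the 2-norm -- the sum of squared amplitudes.
t = 0.7
U = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
amp = [0.6, 0.8]                       # |amp|^2 = 0.36 + 0.64 = 1
Ua = [U[0][0]*amp[0] + U[0][1]*amp[1],
      U[1][0]*amp[0] + U[1][1]*amp[1]]
assert abs(Ua[0]**2 + Ua[1]**2 - 1.0) < 1e-12
```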
Of course, this introductory quantum chapter also discusses the cloning no-go theorem, tensor products, entangled states, and similarly basic notions, along with quantum teleportation and even quantum money and other less rudimentary concepts in applied quantum physics (including references to some papers by Aaronson himself).
I don't really believe that there will be a single person who doesn't know quantum mechanics in advance but who will learn it from the book – the introduction to quantum mechanics in this chapter is arguably too concise – but I do think that it could be helpful to reorganize the teaching of quantum mechanics along these lines.
At the beginning, the author also says that quantum mechanics (the general postulates etc.) is "somewhere in between maths and physics in the hierarchy of scientific disciplines that continue with chemistry and biology". It's more physical than just maths and the ordinary probability theory in maths; but it's less physical than particular physical theories. It's an operating system on which particular physical models run as applications. I couldn't agree more.
Aaronson's computer-science bias only begins to emerge – at least I hope so – when he discusses what the actual applications are. So he apparently believes that all the applications, including those defining the fundamental laws of physics, should be of a discrete nature, which ain't the case. The quantum mechanical operating system perfectly allows observables with continuous spectra, and some operators of this kind are always needed to define the physical laws for the adults.
His flawed perception that bits, and not nats, are the physically natural units of information and entropy is also imprinted in at least one characteristic error. The book doesn't avoid mathematical formulae and they're largely accurate. However, he applies the prime number theorem incorrectly to compute how many prime numbers with at most \(N\) bits there are. What do you think the result asymptotically is, including the right prefactors of order one? ;)
Such annoying prefactors have to occur everywhere simply because \(2\) is an unnatural base for exponentials and logarithms much like a bit (and qubit) is an unnatural unit of information and entropy. But I am learning how to sharply separate these misconceptions of folks like Aaronson from the "operating system"level of knowledge (including the quantum one) that they apparently understand correctly.
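The prefactor at stake can be seen numerically. The prime number theorem says \(\pi(x)\sim x/\ln x\); substituting \(x=2^n\) for "primes with at most \(n\) bits" gives \(2^n/(n\ln 2)\), and the \(\ln 2\) in the denominator is precisely what gets dropped if one pretends logarithms are base-2. A small sketch (the sieve and the choice \(n=16\) are mine):

```python
import math

def count_primes_below(limit):
    """Count primes < limit with a simple sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

n = 16
actual   = count_primes_below(2 ** n)     # pi(2^16) = 6542
with_ln2 = 2 ** n / (n * math.log(2))     # PNT estimate 2^n / (n ln 2)
without  = 2 ** n / n                     # the base-2 "estimate", missing ln 2

# The estimate that keeps the ln 2 prefactor is much closer to the truth.
assert abs(actual - with_ln2) < abs(actual - without)
```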
Quantum computers
Chapter 10 finally jumps at the inevitable union of computation and quantum mechanics: quantum computation. My intro to quantum computation is here. Sadly, one of the several promising quantum computers that exist on Earth, one inside a diamond, was eaten by an 80-year-old grandma yesterday.
The author marvels that only in the 1990s did people start to ask which problems may be feasibly solved by the computers allowed by Nature's being quantum mechanical. I also find it amazing that this branch of computer science wasn't born earlier.
He talks about the Hadamard gate etc. (replacing the universal NAND gate of classical computers), reasons why quantum computers may do everything that classical (even classical probabilistic) computers can do, and so on.
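The Hadamard gate mentioned above is simple enough to write down explicitly. A minimal sketch with plain Python lists (the helper function is mine): applied to \(|0\rangle\) it produces the uniform superposition with measurement probabilities 1/2 each, and applied twice it acts as the identity.

```python
import math

h = 1 / math.sqrt(2)
H = [[h,  h],
     [h, -h]]                         # the Hadamard gate as a 2x2 matrix

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-component state vector."""
    return [gate[0][0]*state[0] + gate[0][1]*state[1],
            gate[1][0]*state[0] + gate[1][1]*state[1]]

ket0 = [1.0, 0.0]                     # the basis state |0>
plus = apply(H, ket0)                 # (|0> + |1>) / sqrt(2)

probs = [a ** 2 for a in plus]        # Born rule: squared amplitudes
assert all(abs(pr - 0.5) < 1e-12 for pr in probs)

# Applying H twice returns |0>: the two paths to |1> interfere destructively.
back = apply(H, plus)
assert all(abs(x - y) < 1e-12 for x, y in zip(back, ket0))
```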
There are two interesting sections at the end of the chapter in which Aaronson debunks the super-naive "quantum computer as exponentially parallel classical computer" meme and the related misconceptions, promoted largely by David Deutsch, that quantum computation "proves the many worlds interpretation". Scott doesn't say as much as I did e.g. in this criticism of the many worlds but he does say certain things, for example that MWI can't say what the preferred basis is into which the world "splits" (it also can't say when and how etc., and the very claim that it irreversibly splits is wrong because, at least in principle, there's always a nonzero chance of "re-interference").
The related idea that the "quantum computation is a massively parallel classical computation" is flawed and this big blunder may be illuminated in many ways. First of all, it's not true that the streams in the "massively parallel set of classical calculations within a quantum computer" are independent. Quite on the contrary, the result announced by a quantum computer depends on interference i.e. "collaboration" between these streams or "different histories".
Moreover, the textbook example of a problem you may want to massively speed up – the search for something in a database – can't be exponentially sped up by a quantum computer. Instead, Grover's algorithm only provides us with a power-law speedup. In fact, it had been proved, before the algorithm was even found, that a quantum algorithm faster than the power ultimately realized by Grover's algorithm can't exist! As always, any attempt to sell quantum mechanics as some "kind of a classical theory" (whether it's parallel or not, whether it has hidden variables or not, whatever its preferred observables are, etc.) may ultimately be proven deeply misguided. Quantum mechanics isn't classical, stupid.
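Grover's algorithm is small enough to simulate for a 4-element "database". The sketch below (the marked index is an arbitrary choice of mine) runs one oracle-plus-diffusion iteration on the uniform superposition; for \(N=4\), a single iteration already concentrates all the probability on the marked item, while in general about \((\pi/4)\sqrt{N}\) iterations are needed – the power-law, not exponential, speedup over the roughly \(N/2\) classical queries.

```python
# One Grover iteration on N = 4 items (2 qubits), state vector as a plain list.
N = 4
marked = 2                             # index of the item we are searching for
state = [1 / N ** 0.5] * N             # uniform superposition over all items

# Oracle: flip the sign of the marked item's amplitude.
state[marked] = -state[marked]

# Diffusion operator ("inversion about the mean"): a -> 2*mean - a.
mean = sum(state) / N
state = [2 * mean - a for a in state]

probs = [a * a for a in state]
assert abs(probs[marked] - 1.0) < 1e-12   # found with certainty when N = 4
```

Note how the answer emerges from interference: the diffusion step makes the unmarked amplitudes cancel, which is exactly the "collaboration between histories" that the parallel-classical-computer picture misses.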
Aaronson also extends his collection of classes of problems waiting for algorithms with classes that allow some/complete solution using a quantum computer. He reinterprets Feynman's path integral as the proof that one class belongs to another. That's what Feynman got his Nobel prize for, we learn. It's an amusing attempt by a computer scientist to "devour physics" but at least historically, it's completely wrong because Feynman shared his Nobel prize for QED, an application of his computational methods (including the path integral), not for the path integral itself (although his Nobel lecture essentially was mostly about the path integral).
Moreover, Aaronson only presented one particular application of Feynman's path integrals. Any physically meaningful question about a quantum system may be answered using Feynman's path integral which makes it kind of silly to associate this method with some "very special problems".
Chapter 11 is about Penrose, namely his opinions that Gödel-theorem-based reasoning can show that artificial intelligence is impossible and that the human brain has some super-remarkable abilities (apparently exceeding any omniscient quantum computer or God you may think of) that can't be simulated. I agree with 100% of what Aaronson says here (why Penrose's arguments are ultimately irrational, illogical, unstable, not solving the original "problems", and wrong) – and there's a lot he says.
Initially I thought that I would fully endorse every sentence of Chapter 12, the one on decoherence and hidden variables. In the early passages, it says all the important right things. Decoherence is the right phenomenon that contains all the genuine processes people sometimes call "collapse", and its right description is the hopeless entanglement with the environment which converts quantum bits to effectively classical bits. It's irreversible and, from some viewpoint, decoherence is just a new, nearly equivalent way to describe the second law of thermodynamics. The reversal is as difficult as Pamela Anderson's attempts to regain privacy by targeting all the hard disks in the world that contain her photos – I liked this joke, indeed. ;)
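The mechanism fits in a few lines. A toy sketch (the state and basis labels are mine): a qubit starts in \((|0\rangle+|1\rangle)/\sqrt2\); the environment "records" which branch it sees, ending in states \(e_0, e_1\) with overlap \(c=\langle e_0|e_1\rangle\). Tracing the environment out, the qubit's off-diagonal density-matrix element – its "quantumness" – shrinks in proportion to \(c\), from 1/2 (fully coherent) to 0 (effectively classical).

```python
import math

def reduced_offdiagonal(c):
    """Off-diagonal element rho_01 of the qubit's reduced density matrix
    when the environment's record states have real overlap c, 0 <= c <= 1."""
    s = math.sqrt(1 - c * c)
    # Joint state (|0>|e0> + |1>|e1>)/sqrt(2), with |e1> = c|e0> + s|e_perp>,
    # written in the product basis {|0>,|1>} x {|e0>,|e_perp>}:
    amp = {('0', 'e0'): 1 / math.sqrt(2), ('0', 'ep'): 0.0,
           ('1', 'e0'): c / math.sqrt(2), ('1', 'ep'): s / math.sqrt(2)}
    # Partial trace over the environment: rho_01 = sum_k amp[0,k] * amp[1,k]
    return sum(amp[('0', k)] * amp[('1', k)] for k in ('e0', 'ep'))

assert abs(reduced_offdiagonal(1.0) - 0.5) < 1e-12  # no record: still coherent
assert abs(reduced_offdiagonal(0.0) - 0.0) < 1e-12  # perfect record: decohered
```

A perfect record (\(c=0\)) is exactly the "hopeless entanglement" case: the qubit's reduced state becomes a classical coin flip.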
In the following sections about hidden variables, it lists most of the reasons why they're wrong – they only allow one to discuss measurements of particular observables (positions) but we need to predict others, and it's not possible to have classical "beable" states for all of them (the Kochen-Specker theorem). Hidden variables fundamentally violate relativity because they need superluminal communication to explain the correlations, whether or not it may be used in practice. Some important criticisms are omitted – like what happens with the insanely propagating, diluting pilot wave after the measurement or absorption of the particles etc.
However, the final "positive" sections on the hidden-variable theories have dramatically lowered my opinion about the chapter. He discusses the "flow of oil", Schrödinger's, and Bohm's theories. Aaronson fails to notice that (and why) some of them are really equivalent to each other. He seems ignorant about the history of the Bohmian and equivalent theories – it was started by Louis de Broglie in 1927. Most importantly, he suddenly seems to forget all the right criticisms he previously raised: violations of relativity; the need for privileged observables and bases, etc. So at the end, I think that the chapter is a rather eclectic mixture of "mutually contradicting but not confronted" right and wrong statements that suggest that Scott hasn't thought about these matters too carefully (despite his having published papers on hidden variables). I was also irritated by the criticism of the Bohmian hidden variables (which are really equivalent to the "flow of oil" when looked at properly) for the infinite dimension of their Hilbert space. Every realistic theory of the "real world" has an infinite-dimensional Hilbert space; this is surely not something one could use as a criticism (although the existence of discrete degrees of freedom such as the spin is another way to see the inadequacy of the hidden-variable theories). Again, we see the limitations of the "discrete computer scientist" Aaronson here. Also, I don't believe that a reader who hasn't studied decoherence (or hidden variables) previously will learn what it is (or what they are).
Sunday, April 28, 2013
Democritus on the QM operating system
Posted by Luboš Motl at 8:55 AM



snail feedback (22):
I think even computer scientists can draw the correct conclusions when they see a formula with ln(x) ;)
Doing the actual maths would obviously be cheating, so I'll guess the prefactor is e/2.
Glad you finally found a chapter you liked, Lubos!
As it happens, I have a new blog post up today about my contribution to MIT's Annual Latke-Hamentaschen Debate ("Superiority of the Latke: The Unexpected Convergence of Quantum Mechanics and Common Sense"), where I make very much the same points about quantum mechanics that you liked in Chapter 9 of QCSD.
Now, regarding my alleged math error: are you referring to page 76? If so, then yes, I'm perfectly aware that there's a prefactor of 1/ln(2) that I omitted. I should clarify that, when theoretical computer scientists use the word "about" (as in, "about 2^n^2 / n^2"), we generally MEAN "ignoring irrelevant constant prefactors." To us, constant prefactors are just "obnoxious, bureaucratic details" that we barely even notice. But you're right: we should probably put them back in sometimes, to facilitate communication with our more plodding, hair-splitting friends in physics. ;D
You could always hit back with "At least I don't omit factors of c^2 without mentioning it"
hmm... sounds like you were using nats after all! :) Looking forward to getting my hands on the book
I am severely spoiled by enjoying too much learning more about HEP as best as I can since some time ago now :):
At my defence back on 11.11.2011 ;), I omitted some powers of c in Planck's law but was not witty enough to tell the committee that I am just using natural units (c=h=G=1) ... :P
No, the missing prefactor in such situations is of course ln(2) = log(2)/log(e), where "log" may be the logarithm with any base (2, 10, e, anything: the same base must be used both in the numerator and denominator).
log2(n) / ln(n) looked like e/2 at a glance ;) For some reason I always forget my browser has the Wolfram Alpha plugin that could save me from some embarrassment.
Am only starting chapter 6, so I am happy that Lubos is finding your earlier promises about what lay ahead rewarded. I am out of sync, only at the end of chapter five, and will read on before remarking... am heartened to know that the answer to the meaning of everything in the Hitchhiker's Guide should be 42, except for a constant prefactor addressing obnoxious bureaucratic details.
Nice to learn about nats. I will have to get back to the book to catch up with Lubos.
BTW, Andrew Ng is offering a good AI and Machine learning course on Coursera.
Anyone using the Kindle version of this book? If so, is it readable? I have been holding off getting it and considering getting the dead-tree version.
I got it on Kindle and find that just fine. Prefer paper but that one touch instant gratification proved too tempting.
The Kindle edition looks OK, unfortunately only when it comes to the typographical issues.
Lubos, I'm glad you liked the material on Penrose and decoherence. I didn't expect you to like my treatment of hidden-variable theories: we have a fundamental difference in attitude. You see the problems with such HVTs, and are satisfied that the only people who would study them further are morons who never understood QM. I know the problems just as well as you do (and explain them in the book), but then I still find questions that demand to be answered about how well you can construct HVTs in the teeth of those problems ... if for no better reason than that they're extremely nice mathematical questions (and their answers are sometimes highly non-obvious ... e.g., can you satisfy the indifference, product commutativity, and robustness axioms simultaneously, or can't you?).
However, I think the following claim of yours is straightforwardly in error:
"Aaronson fails to notice that (and why) some of them are really equivalent to each other."
Of course all the HVTs discussed predict the same single-time probabilities as ordinary QM, but there's no other sense in which they're "equivalent": Bohm / de Broglie, the Schrodinger theory, the flow theory, and the product theory can all give rise to completely different multiple-time probabilities. What on earth were you talking about?
Dear Scott, I didn't write that you failed to notice the equivalence of HVTs with QM. Of course I don't think they're equivalent and I don't even think that you think this incorrect thing.
What I wrote is that you failed to notice that the "flow of oil" HVTs are the same theories as the Bohmian ones – the "flow of oil" is just one way to motivate/derive the equations for the Bohmian HVT.
I know the problems just as well as you do (and explain them in the book), but then I still find questions that demand to be answered about how well you can construct HVTs in the teeth of those problems ... if for no better reason than that they're extremely nice mathematical questions (and their answers are sometimes highly non-obvious ... e.g., can you satisfy the indifference, product commutativity, and robustness axioms simultaneously, or can't you?).
Are you equally fascinated by the question how well one may design an Intelligent Design theory as an explanation for the origin of species? And if you aren't, why do you apply these double standards to these two totally equivalent issues?
Sorry, I was trying to be a little cute. I meant only to point out that all HVT theories are wrong. I did not mean to suggest or think that they reach the same wrong predictions. After the experimental confirmations of Bell from Aspect forward, it is hard to think any HVT will explain what is more easily explained by accepting QM as correct, without adding something to please our classical senses.
Lubos, the flow theory is defined only for finite-dimensional Hilbert spaces and discrete time, and the very calculation of the probabilities requires running the Ford-Fulkerson algorithm. Also, there's no particular notion of "locality" (the n basis states can be ordered randomly, or however else you like). By contrast, Bohm/de Broglie is (as the chapter points out) defined only for the infinite-dimensional Hilbert space of particle positions and for continuous time, and the transition probability density can be computed immediately if you know the wavefunction and the Hamiltonian. Furthermore, the very requirements of continuity and differentiability (which have no analogue for the flow theory) imply that you do have a notion of which basis states are in the "neighborhood" of which other ones. So I really don't see that these are the "same thing." If you want to claim that the latter is in some sense a "unique continuous limit" of the former, that's interesting, but I'd need to see more details!
Now, as for your rhetorical question: if there were nontrivial mathematical open problems about how well you could construct an Intelligent Design theory that mimicked the successes of evolution, then you better believe I'd want to study those problems! Even if I were never inclined to take ID seriously, I'd still consider studying those problems an important part of understanding evolutionary theory itself. The difference is that I've never seen any such problems. As Richard Dawkins and others have pointed out, the hypothesis of an "intelligent designer" is so nebulous that the claim of being able to explain some biological adaptation using that hypothesis, even if true, doesn't rise to the level of being scientifically interesting. By contrast, the hypothesis that Nature is classical is false, but it's interestingly, falsifiably false! And unlike in the ID case, questions about exactly where the hypothesis breaks down can get mathematically interesting.
Also, rather than calling me "schizophrenic," I'd simply say: I have a certain worldview and aesthetic, and in some particulars it agrees with yours, and in other particulars it obviously doesn't agree. But it's not like I consciously switch between my "pro-Lubosian" and "anti-Lubosian" personalities. ;D
Dear Lubos,
I can't understand how you cope with all these people who just fail to understand quantum mechanics. Respect!
Of course, we can have different views or ideas on new physics or the future of physics but to not understand the basics is just incredible. Just to mention one thing: as you know, in QM a particle (now called virtual particles) can exceed the speed of light, due to the uncertainty relation, threatening to contradict relativity. To solve this apparent paradox the only possible fix is that particles have antiparticles, therefore a particle like the electron needs a positively charged counterpart. Later this particle (the positron) was discovered!
Need I say more ...
Dear Scott, it's silly. Every mechanism of this sort may be generalized to infinite-dimensional Hilbert spaces and continuous time. Continuous space and time allow you to do more things, not less. You will get Bohmian-like laws.
It's pretty obvious that you are switching between rational, pro-Lubosian modes of thinking and anti-Lubosian, irrational modes of thinking – although you may fail to realize these switches.
I finished the proofs, size-of-quantum-states, and skepticism-of-QC chapters, too. Lots of fun, applied maths, somewhat contrived questions about all these oracles, advice, statistical hints, successes etc. that are interesting to investigate as recreational mathematics but I think you must agree they're not fundamental.
I agree with the key claim of the "size" chapter that "n qubits" are more like "n bits" and not "2^n bits". They're still the "same" n bits that just follow different, quantum laws, but it's just wrong to try to map quantum mechanics to a classical system, especially stupidly to a classical system with a different number of degrees of freedom. Concerning the QC skepticism, I agree – all these arguments are silly. The only nontrivial one is about quantum fault tolerance but by checking the proof, one may convince oneself that this complaint is wrong as well. It's interesting stuff – I wonder whether some more complete explanation of why quantum fault tolerance is enough to fix the faults will be presented. It would arguably be more appropriate for a serious book on QC than the numerous arguments with the (often university-affiliated) crackpots who invent all these silly "QC can't work" claims.
Thanks for your kind words, Marcel!
And absolutely: the possibility to exceed the speed-of-light limit is an inevitable consequence of the uncertainty principle. It can't be avoided. For example, in the path integral approach to non-relativistic QM, most trajectories contributing to the path integral are superluminal and non-differentiable for almost all times "t". In fact, this simply has to be the case in any relativistic version working with trajectories, too – the path integral evaluation of the propagators, for example. Still, the result must perfectly obey the Lorentz symmetry, and the particle-antiparticle cancellation in all observable quantities is actually the only mechanism to avoid the Lorentz violation.
This argument may be equivalently phrased in the operator formalism, too. The Dirac sea arguments are a special version of these arguments for fermions. Relativity combined with the Heisenberg principle really does imply that antimatter has to exist. And it does.
You always state it much better than I can! Indeed, I learned to distinguish between continuity and differentiability already at high school and it all looked pretty unimportant to me in the real world, and I thought "who cares except for some crazy mathematicians". But with your explanation of how non-commutation (which is at the basis of QM) can be explained within the path-integral method (that I read), non-differentiability has been shown to be really important also in the real world. So the mathematicians were right after all!
Of course the good question is: when will we end the tyranny of digital entropy? Continuous variables can have entropy defined upon them as well, it just isn't as convenient. Our preference for digital, and the subsequent desire to explain the universe digitally, is just a crude approach to anthropomorphism. We want a universe we can conceivably compute, so we have a subgroup of people who want to put the universe in that box. Experience at a practical level tells us that not every question can be broken down into yes and no statements, much to Wheeler's chagrin. The answer to the question "Is person X a genius?" is context dependent; its ultimate assessment depends on the fundamental definition of the normal distribution across some set of solutions to some set of potential problems and some population. How can you accept a binary answer to the question barring any of the information regarding its context? The whole notion of the digital universe fails simply because the derivation of the question is not digital, nor can it be. We are forced into digitization because of our limited capacity; projecting that limitation onto the whole universe is clearly an exercise in hubris.
http://en.wikipedia.org/wiki/Differential_entropy#Differential_entropies_for_various_distributions
http://hornacek.coa.edu/dave/Tutorial/notes.pdf