Tuesday, April 23, 2013

Minds and machines

Let me say a few words on "Minds and Machines", the fourth chapter of Scott Aaronson's book.

It's about complexity of problems as well as the differences (or equivalence) between human brains and artificial thinking machines.

When one reads this fun stuff, one sees how these complexity folks are thinking and – even if one knows the principles – one realizes which of the principles they consider important (and, in many cases, more important than they deserve).

I would say that not even Aaronson would claim that this chapter has anything to do with physics but I have been surprised many times in my life so I am not certain.

Mathematical problems may be easy to solve or harder; we may partially order them by difficulty. Two problems may be equally (qualitatively) difficult. If one is harder than another, it means that an algorithm solving the former implies the existence of an algorithm solving the latter, but not the other way around. And so on. Are there problems of intermediate difficulty between two known classes? You can imagine what he writes about.
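To see concretely what "an algorithm for one problem implies an algorithm for another" means, here is a toy sketch (my own illustration, not an example from the book): any sorting algorithm immediately yields an algorithm for the "element distinctness" problem, so detecting duplicates can't be harder than sorting.

```python
# A toy reduction (my own example): "element distinctness" is no harder
# than sorting, because any correct sorting algorithm yields a duplicate
# detector -- sort first, then scan adjacent pairs.
def has_duplicate_via_sorting(xs, sort=sorted):
    ys = sort(xs)  # plug in *any* sorting algorithm here
    return any(a == b for a, b in zip(ys, ys[1:]))

print(has_duplicate_via_sorting([3, 1, 4, 1, 5]))  # True
print(has_duplicate_via_sorting([3, 1, 4, 5]))     # False
```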

A central thesis worshiped by these complexity theorists is the Church-Turing thesis: algorithmically solvable/calculable problems are exactly those that may be solved/calculated by a Turing machine – a particular structurally simplified model of a universal enough computer.

The complexity theorists including Aaronson are clearly impressed by this thesis. Your humble correspondent – and, I believe, most theoretical physicists – would conclude that the thesis is mostly ill-defined, or a vacuous tautology, or potentially wrong, depending on the definition of "algorithmically solvable problems". The complexity theorists' obsession with this thesis and similar theses may be a part of the reason why they haven't made much genuine progress in artificial intelligence and other things, as discussed below.

Their excitement isn't too difficult to understand. The key insight is that all intelligent enough thinking is in principle the same activity. Even if you have a different microprocessor architecture (80386, 6510, ARM...), you may still employ the chip to interpret a programming language. An interpreter of another programming language may be written in your preferred programming language, which runs on any computer deserving the name; the task of writing these interpreters is more or less straightforward. So all computers and languages may simulate each other, which means they're equally potent. To some extent, this must hold for differently thinking human brains, too.
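To make the universality talk tangible, here is a minimal sketch of a Turing-machine interpreter (the code and the sample machine are my own illustrative choices, not anything from the book). Writing such an interpreter is exactly the "more or less straightforward" task mentioned above:

```python
# A minimal Turing-machine interpreter (an illustrative sketch, assuming
# a one-tape machine with the blank symbol "_").
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), +1 (right) or 0. Returns the tape contents
    when the machine enters "halt" (or when the step budget runs out)."""
    tape = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: increment a binary number (scan right, then carry left).
INCREMENT = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}

print(run_turing_machine(INCREMENT, "1011"))  # 11 + 1 = 12, i.e. "1100"
```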

It's great to realize this fact about the universality of computation but once we realize it, we should also realize it's vacuous.

(The Church-Turing thesis may be more than a vacuous statement but one would first have to give an exact definition of the "algorithmically calculable functions/problems" that doesn't refer to a particular machine. Clearly, this required definition will refer to some particular machine or machines anyway, and for different choices, the Church-Turing thesis may be either right or wrong. If you build the definition on Turing-equivalent machines, it will be right and vacuous. If you insert some other hypothetical machines, probably physically impossible ones, it may be wrong. Garbage in, garbage out. You don't learn anything.)

Physicists are also enthusiastic about similar equivalences of something – namely theories – but they only become elevated in the case when the equivalence is more surprising and nontrivial than a straightforward translation of a program from one programming language to another. When the equivalence between these theories is surprising and hard to prove, physicists call it a "duality". Exactly because it's not trivial, a duality is very potent. It seems to me that the complexity theorists never switch to the mode of thinking in which they call a spade a spade – in which they decide that a trivial insight is trivial – and that's why they sometimes remain stuck in the quagmire of trivial and vacuous insights.

After all, the chapter also mentions that Turing correctly predicted that the capacity and performance of the computers would grow by a huge (nearly realistic, it turned out) factor within 50 years. But he incorrectly predicted the rise of artificially intelligent machines that would become indistinguishable from humans.

Another idea that Aaronson criticizes is "meat chauvinism", namely the assumption that thinking machines composed of proteins (a set that includes a majority of commenters on this blog) are more conscious or intelligent than artificial machines. If a computer program simulates a human, it or he or she must be given the same status, "approved consciousness", and human rights, Aaronson thinks.

Well, I have mixed feelings about that. More precisely, I agree that 1) the inner functioning is a more essential part of an intelligent device and its "identity" than the precise nature of the transistors or cells or technical architecture, but 2) the clever computers clearly don't have to be given human rights. It's up to people or a society – at least as long as the people are in charge – whether they respect the human rights of such computers. As I have argued many times, science cannot answer moral questions.

The very usage of the word "chauvinism" shows that Aaronson is prejudiced in these matters – and his opinion differs from mine. This word "chauvinism" is arguably meant to sound negative – at least that's my conclusion after years in which extreme leftists were surprised when I felt proud after they identified me as a chauvinist of any sort. But let me tell you something: chauvinism is often a great and essential policy. For example, computers may be equivalent to humans in many respects but they're not equivalent in others and it's legitimate for me and every other voter or politician to consider the replacement of humans by computers undesirable. So from my viewpoint, it's right to "discriminate" against computers.

But I don't want to talk about these "political questions" only – even though the number of essays criticizing PC, egalitarianism, and similar junk ideologies is never high enough. The very idea that a smart enough computer could be "really equivalent to a human mind with all of its consciousness etc." could be wrong. In particular, if you create a classical simulation with a random generator that "behaves" indistinguishably from a quantum system, it could be fundamentally a different thing when it comes to its consciousness etc.

I am referring to the Conway-Kochen free will theorem. The randomness implied by quantum mechanics may be shown to arise locally in the very region where the value of an observable is decided for the first time – that's what is meant by the "free will" here. However, this property is violated if you supplement an artificial engine with an external random generator. In that case, the decisions are really made by this generator where a big part of the "identity" lies. And you don't want to give human rights to a random generator because it's just too dull. Again, science doesn't imply what the right policies are but because there is a fundamental feature in which the actual quantum object and its classical simulations are inequivalent, it's surely legitimate to treat them differently. (Every ideology claiming that A and B should be treated equally in situations C and D is based on the fact/claim that A and B are equivalent in some respects E, F while totally overlooking the fact that they're inequivalent in other respects G, H: it is always an example of an ideological cherry-picking.) I don't say it's desirable or necessary. It's just a sensible possibility that some people may prefer.
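To show what "a classical simulation with a random generator" means in the simplest case, here is a sketch (my own toy example, not from the book or the theorem): a pseudorandom generator reproduces the outcome statistics of a single measured qubit exactly, even though the underlying mechanism is completely different. The content of the Conway-Kochen argument only becomes visible for correlated measurements on entangled pairs, which such a local generator cannot mimic.

```python
# A classical fake of single-qubit measurement statistics (a toy sketch).
import math
import random

def measure_qubit_classically(theta, rng=random.random):
    """Pretend to measure cos(theta)|0> + sin(theta)|1> in the computational
    basis: return 1 with probability sin^2(theta). The *statistics* match
    quantum mechanics; the *mechanism* (a pseudorandom generator) does not."""
    return 1 if rng() < math.sin(theta) ** 2 else 0

theta = math.pi / 4  # an equal superposition
samples = [measure_qubit_classically(theta) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~0.5, just like a real qubit
```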

At the end of the chapter, the author solves a problem from the previous chapter. Spoilers follow. The \(n\)-th Busy Beaver number \(BB(n)\) may be proved to grow faster than any computable function. This number is defined as the maximum number of steps after which an \(n\)-state Turing machine started on a blank tape terminates, the maximum being taken over the machines that terminate at all. As a consequence, the sum \(\sum_{n=1}^\infty 1/BB(n)\) is an uncomputable real number.
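To play with the definition, here is a sketch (my own code; the machine is the standard 2-state, 2-symbol "champion") that counts the steps of a Turing machine started on a blank tape. It confirms that \(BB(2)=6\). No analogous program can compute \(BB(n)\) in general, because deciding which machines ever halt is the halting problem:

```python
# Counting the steps of a 2-symbol Turing machine on a blank tape
# (an illustrative sketch of the Busy Beaver definition).
from collections import defaultdict

def count_steps(transitions, max_steps=1_000):
    """Return the number of steps before halting, or None if the machine
    is still running after max_steps (it may well run forever)."""
    tape, head, state, steps = defaultdict(int), 0, "A", 0
    while state != "HALT":
        if steps >= max_steps:
            return None
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
        steps += 1
    return steps

# The known 2-state champion: no halting 2-state machine runs longer.
CHAMPION_2 = {
    ("A", 0): ("B", 1, +1),
    ("A", 1): ("B", 1, -1),
    ("B", 0): ("A", 1, -1),
    ("B", 1): ("HALT", 1, +1),
}
print(count_steps(CHAMPION_2))  # 6, i.e. BB(2) = 6
```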

To partially summarize: these are interesting topics although – I believe – most of us have been exposed to computational complexity and artificial intelligence at various points, and the most general impressions we get after a few hours arguably agree with what the experts believe (in this sense, complexity science is a much less esoteric science than theoretical physics). It seems to me that these topics have very little to do with physics.

There's one point at which Aaronson may have tried to suggest that physics is relevant. He wrote that black holes (and their singular spacetime geometry) could be used to construct computers whose speed doubles after every step, so that they could quickly solve problems requiring infinitely many steps etc. (The black holes were added as a "hope" to "sex his argument up".)

However, this is not the case. If such gadgets were possible in principle, the world could probably be shown to be inconsistent. Quite generally, the laws of physics tend to avoid such traps. So if you ask how black holes may affect the speed of computation – I mean practical computations that the observers outside may want to perform – their influence is exactly the opposite: they slow things down. Objects in a gravitational field display redshift, not blueshift, much like objects that are drifting away from us. This prevalence of redshift over blueshift in physics is exactly what prevents moving computers from becoming too fast – from becoming "NP-completeness oracles" or, in Aaronson's terminology from his student years, "fairies". In most cases, physics imposes limitations that make the maximally efficient physical computer less efficient than the fastest design you could think of if you knew no physics.
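To quantify the slowdown, recall the standard Schwarzschild formula (a textbook fact, not anything from Aaronson's chapter): a clock – or a computer – hovering at radius \(r\) outside a black hole with Schwarzschild radius \(r_s\) ticks at the fraction \(\sqrt{1-r_s/r}\) of the rate of a distant clock. Parking your computer near the horizon therefore makes it slower, not faster, as seen from outside:

```python
# Gravitational time dilation for a static observer outside a
# Schwarzschild black hole (a small numerical illustration).
import math

def clock_rate(r_over_rs):
    """Rate of a clock hovering at radius r (in units of the Schwarzschild
    radius r_s), relative to a clock far away: sqrt(1 - r_s/r)."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (1.001, 1.1, 2.0, 10.0):
    print(f"r = {r:6.3f} r_s -> runs at {clock_rate(r):.3f} of the far-away rate")
```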

This chapter also touches on the question of whether computer-assisted proofs may be counted as proofs. Well, I agree that people want to "feel" that they understand the reasons why something is true. A very large number of steps or tests that can only be done by a computer limits this "perceived direct contact with the truth".

At the same time, I need to emphasize that this perception is subjective. And if someone doesn't "feel" a proof, it doesn't mean that the proof doesn't exist. After all, someone else may "feel" it because he or she is a better mathematician. It's extremely important for people to understand that if they don't understand something, it may be just due to their personal limitations and not because there is something wrong with the thing they don't understand! Someone else may be smarter and understand it, and even if there's no one like that in the world today, someone may emerge in the future who will understand it! The truth isn't the same thing as the subjective feelings of the truth; the latter are less eternal and less accurate, and the former – the actual truth that is independent of individuals – should be considered superior.

Personally, if I can write down a program that verifies some steps needed to complete a proof (think of the Four Color Theorem as an example), I will believe the proof. A more direct proof that doesn't need a computer may be preferable. But let me admit, my motivation to search for such a computer-free proof diminishes rapidly once I know a proof that the statement is true – and I do count computer-assisted proofs among proofs. After all, I realize that computers are less likely than my brain to make well-defined particular errors, because sharply cut silicon is more reliable than fuzzily connected proteins. In this sense, I trust the steps that were done by a computer more than I trust the steps that could have been plagued by a human error!
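As a toy version of such a computer-verified step (my own example – the actual Four Color Theorem proof checks hundreds of cleverly chosen configurations, not one little graph), a short program can exhaustively verify that a small graph is 4-colorable, a computation I'd trust more than the same check done by hand:

```python
# Exhaustively checking 4-colorability of a tiny graph (a toy stand-in
# for the computer-verified steps in proofs like the Four Color Theorem).
from itertools import product

def four_colorable(n_vertices, edges):
    """Brute force over all 4^n colorings -- fine for tiny graphs,
    hopeless in general."""
    for coloring in product(range(4), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

# K4, the complete planar graph on 4 vertices, needs all four colors.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(four_colorable(4, k4_edges))  # True
```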

But anyway, the knowledge of the truth is far more important for me than the efficiency with which the truth may be reached or proved. Some important truths simply are hard to find and hard to understand – and this is what actually makes them more important, on average, than the low-lying fruit among truths.

It seems to me that Scott Aaronson and others disagree about these points, too. In my opinion, they're victims of irrational emotions. Instead of looking for the cold hard truth, they may be looking for some emotions. As they're doing that, they may completely miss rather important meta-facts such as the fact that their brains are more likely to make errors in executing an algorithm than a silicon-based computer! And they may miss many other facts, too.

Complexity and P vs NP

Chapters 5 and 6 are dedicated to "Paleocomplexity" and "P probably isn't NP". These pieces of text are wonderful discussions of the scaling of the time and memory needed to solve various algorithmic problems. I have never been a professional expert, of course, but I got educated in this background many years ago. Aaronson wittily discusses many algorithms, their clever conversions to each other, and so on.

The problem promising you $1 million is whether "P is or is not NP". Here, P is the class of problems that may be solved in polynomial time; NP contains those whose solutions may be verified in polynomial time. If the classes are equal, "P = NP", it means that every problem that can be "checked quickly" may also be "solved quickly". Because finding solutions looks "more creative" than verifying them, people generally believe that "P is not NP", but a proof doesn't exist yet.
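The asymmetry between verifying and finding may be shown in a few lines (a sketch of mine built on the NP-complete subset sum problem, not an example from the book): the verifier runs in polynomial time, while the obvious solver tries exponentially many subsets.

```python
# Subset sum: checking a certificate is fast, finding one naively is not.
from itertools import combinations

def verify(numbers, target, indices):
    """The NP side: given a certificate (indices of a candidate subset),
    the check takes time polynomial in the input size."""
    return sum(numbers[i] for i in indices) == target

def solve(numbers, target):
    """The search side: with no better idea, try all 2^n subsets.
    P = NP would mean a polynomial-time method always exists."""
    for r in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), r):
            if verify(numbers, target, indices):
                return indices
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # (2, 4), since 4 + 5 = 9 -- after a long search
print(verify(nums, 9, (2, 4)))  # True -- and the check is instant
```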

I am not so sure that "P is not NP", although I find it a bit more likely that the classes are not equal. However, if "P is NP" according to the definitions, it still doesn't imply that the procedures needed to solve a problem are really practical. The exponents in the power laws may be very large, and so on.

As a guy who's had some successes in math olympiads etc., I could have – like many others – some probability of solving similar problems, e.g. "P is or is not NP". However, the fame and perhaps the money are the only major motivations here for me, and that's too little. The reason why those things don't really excite me at the visceral level is that they're not really unique. They're elements in an infinitely tall tower of ever more refined and complex mathematical claims that may wait for their resolution.

Imagine you prove that "P is not NP". But new problems are immediately waiting. One may divide the problems into finer classes and new sets, and lots of new questions about their identity arise. Aaronson discusses many examples. In principle, this is a neverending sequence of questions. However, one can't really say that the more complex problems are negligibly important, because they're qualitatively the same thing as the "P is not NP" question you decided to solve.

In physics, the search for a theory of everything is a much more unique and "terminal" process. And even some more modest goals in physics seem more unique and "terminal" than the problems in the complexity theory.


snail feedback (36):

reader Scott Aaronson said...

Just a quick point of information, for anyone debating whether to read my book. As far as I can tell, ALL of the interesting points that Lubos implies in his post that I "completely missed" are points that the book itself chews over in detail! Examples: the idea that amplified quantum fluctuations might somehow be necessary for consciousness, so that a computer wouldn't be conscious if it used a mere pseudorandom generator (but what if it, too, used genuine quantum effects?) -- see Chapters 11 and 19. What breaks down when you try to use a black hole to solve the halting problem? That one's right in Chapter 4, which Lubos supposedly read (for more, see my survey article "NP-complete problems and physical reality"). Aren't humans at least as likely as computers to make mistakes when verifying proofs? See Chapter 13.

Regarding the triviality of the stuff introduced by Chapter 4: I politely suggest that Lubos withhold his judgments about the triviality of CS theory until he's read a bit further! Then, hopefully, he can add "great complexity theorist" to his list of distinctions by telling us his trivial answers to all the trivial open problems mentioned. :-) (Alternatively, he can provide his own solutions to all the already-solved problems, without consulting the references.)

reader Clayton said...

How do you decide when one "equivalence is more surprising and nontrivial than a straightforward translation of a program from one programming language to another"?

reader cynholt said...

It seems to me, though, that to evolve a machine to mimic humans is much, much harder than to just engineer a machine to mimic humans. Engineering involves understanding; evolution doesn't. In fact, the longer it takes for something to evolve, the harder it is to understand, which may well lie behind Moravec's paradox (see link below). Of course, for us as humans, it's hard to admit we may not be able to understand something -- even though all of us use things on a daily basis that we don't understand at all, and likely no single person will ever understand them in their entirety, but at least we have the warm knowledge that potentially someone might be able to in the near to distant future.

Of course, the other drawback is that evolution takes time, so trying to evolve a machine to mimic a human (MTMH) is not something that will ever happen overnight. But saying it's fundamentally impossible is way, way too strong a claim. If nothing else, it begs a definition of machine. Will we accept only silicon as machine? Or will other compounds, up to and including organic ones, be prohibited?

Ultimately, the only argument as to why we shouldn't be able to evolve MTMH is a religious one, i.e., there's something out there that's much more than just matter and energy, which makes it fundamentally unknowable/un-manipulable by us. And then it comes down to an ...

reader Luboš Motl said...

Well, I decide unambiguously because the statement is *literally* true: the surprising dualities in string theory (or some QFTs) are more nontrivial than a dictionary translating between two programming languages because a duality doesn't admit any dictionary that would translate every known and useful concept on one side to a known and useful concept on the other side!

The theories on the two sides of dualities use completely different concepts that aren't quite mapped to each other and only the final observable predictions can be mapped and they happen to exactly agree!

That's why dualities are very different from the relationship between the Schrodinger and Heisenberg picture because to relate these two pictures, one may simply transform the state vector and operators via a totally explicitly known unitary operator. So the relationship between the two pictures is as trivial as the relationship between two programming languages because it's straightforward to design the procedure that translates programs in one to programs in the other. One knows that it will work.

On the contrary, if you consider e.g. S-dualities, the map between the variables at strong and weak coupling isn't explicitly known and it probably isn't simple even if one could write it down. So to the very last moment, whether such a duality will pass all tests is an open question. But it does pass the tests. And one may also offer some kind of more or less rigorous and more or less heuristic proofs that the duality holds.

Well, T-duality is more "trivial" because it holds order by order in the string perturbative expansion. The dictionary between the two or more T-dual descriptions is known explicitly in the world sheet CFTs. However, the relationship still at least *looks* totally shocking from the spacetime physics viewpoint.

But even if you decided that the gap between surprising and non-surprising relationships isn't qualitative and depends on subjective choices, the subjective difference is just huge.

One must understand what the Church-Turing thesis is being used for in the computer science mathematical literature. It's used to replace totally boring and straightforward segments of proofs saying "one could rewrite this algorithm to another language, thus establishing the equivalence of the complexities". The idea behind this step of the argument is totally simple and comprehensible, even to a schoolkid. The only reason why some people may call it "powerful" is that there is a protocol in mathematics that forces people to write down every step explicitly, so in principle, the precise translation between the languages should be a part of all proofs of this type in the complexity literature.

But the Church-Turing thesis has been accepted as an allowed "meta-step" that allows authors to omit this totally straightforward and obvious part of the proof. So the thesis is just a loophole that allows one to circumvent some totally unnecessary, obnoxious, useless bureaucracy.

Now, in physics, this bureaucracy - ultimately justified by rigor - doesn't exist at all. If a physicist can use *any* rational argument to prove something, it's accepted. So physicists are using things analogous to the Church-Turing thesis all the time and they don't need to apologize for that and they don't need to give these "meta-arguments" special names. In fact, they use much more convoluted chains of reasoning with a much longer hierarchy. In fact, the proofs in physics don't really have to be that rigorous, anyway.

So physicists are allowed to ignore the bureaucratic requirements which gives them much more creativity and freedom. Correspondingly to that, their cherished principles are often much more nontrivial and hard-to-see than some celebrated relationships in disciplines such as computer science.

reader Luboš Motl said...

Dear Scott, I appreciate your explanations of the computer science concepts etc. What I disagree with is your suggestion that those things decide questions related to the foundations of physics. They don't. If you can't hear a clear explanation why they don't from Ori, Juan, Steve, Lenny, or others, it's just because they haven't thought about these matters independently themselves and they allow you to abuse your self-anointed status of a person who is proficient in both fields and their relationships, which you aren't.

reader Luke Lea said...

"As I have argued many times, science cannot answer moral questions."

True. But even so it makes sense to limit questions about moral rights to entities that can feel pleasure and pain.

reader Scott Aaronson said...

LOL, that's one of the most dishonest comparisons I've ever heard! Yes, despite the Church-Turing Thesis's historical importance at the beginning of the computer age, and despite the number of doofuses who still refuse to accept it, Lubos is entirely right that it's simple and obvious once you understand it. For that reason, it does indeed seem fair to compare the CT Thesis to, say, the equivalence of the Schrodinger and Heisenberg pictures, or even (say) the equivalence of inertial frames, or other equivalence principles that were recognized earlier in the history of physics.

String-theoretic dualities, by contrast, are extremely modern. So if one wants to have a "CS/physics pissing contest" at all -- notice how Lubos said just hours ago that he lost interest in such things in kindergarten, but is now energetically pissing! :-) -- then the only intellectually honest thing to do would be to consider some modern dualities in theoretical computer science. What are some examples? Well, how about dualities between the existence of one kind of algorithm, and the nonexistence of another kind -- as you see in the Razborov-Rudich natural proofs framework, or in Mulmuley's Geometric Complexity Theory program (which explains how you could someday prove things like P!=NP, if you could *find algorithms* for certain problems in algebraic geometry and representation theory), or most recently and spectacularly, in Ryan Williams's use of a faster satisfiability algorithm to settle the NEXP versus ACC question? Or how about the duality between derandomization (i.e., making randomized algorithms deterministic) and proving lower bounds on circuit size? Or the duality between quantum query algorithms and span programs (and various other types of semidefinite programs)?

Like the string dualities, these are all examples of totally unexpected connections that emerged in the 1990s and 2000s, that don't completely 100% work (but do work with some loss of parameters, under plausible conjectures, etc. etc.), and that aren't yet fully understood and exploited. I suppose that, in order to enlighten us about why these sorts of dualities are just totally uninteresting "loopholes for circumventing obnoxious bureaucracy," Lubos will first need to learn what they are! :-D

reader Scott Aaronson said...

(More generally: I find it bizarre how Lubos imagines that the **first few chapters of my book** should represent the most modern, state-of-the-art thinking in theoretical computer science. IIRC, the first few chapters of Brian Greene's "The Elegant Universe" were about Newton, Maxwell, and Einstein. Is it fair to assume that today's theoretical physicists haven't progressed too much beyond them?)

reader cynholt said...

This mind vs machine debate reminds me of a very costly and ill-conceived thing that is now being done at the teaching hospital I work at. Six or so nurse practitioners there are being paid between $100,000 and $130,000 a year JUST to monitor and treat the glucose levels of diabetes patients throughout the hospital. That's all they do all day.

I can see paying these advanced-practice nurses six figures as long as they are having to use their assessment skills on diabetic patients and teach them something about their diabetes, but that's NOT the case. They have zero contact with patients. Their job consists only of plugging in glycemic orders based on a small handful of simple algorithms. It's literally the Betty Crocker of cookbook medicine -- something which would be ideal for a mindless robo-nurse to do.

Think about this the next time you pay your monthly health insurance premium and you will know one reason why it has trended up!

Also, if you happen to have diabetes, think about this the next time you visit a newly-minted primary care doc. Since probably none of them received any training in diabetes management (unless they happened to spend some of their residency time with an endocrinologist, which I doubt), they won't have the slightest clue how to monitor and treat diabetes. I suppose they can always get their hands on these idiot-proof algorithms from the teaching hospital where they received their residency training; then their job of monitoring and treating their diabetic patients will be a piece of cake!

reader victor said...

Hi Scott,

You wrote:
"amplified quantum fluctuations might somehow be necessary for consciousness"

I have three quick naive questions:
a) What is consciousness?
b) Is quantum mechanics involved in the computational processes in a computer?
c) Could 'quantum fluctuations' explain IT scientists' belief that they know physics or neuroscience better than physicists and neuroscientists do?

reader Jiri Moudry said...

Is "intelligence" hardware or software? Of course you need both. There is no reason why today's hardware could not emulate (extremely slowly) the work of a massively parallel system known as an ant's brain. In principle we don't have the software. Are ants intelligent? Where do we draw a line?

The notion of complexity - or even an existence of a solution - depends entirely on a framework we work in. A squaring of a circle or a trisection of an angle are impossible with Euclid's tools, but in practice they pose no problems. In both physics and mathematics breakthroughs are usually achieved when a genius establishes a new framework. There is no reason why a computer could not accomplish it, but fortunately we are not there yet.

reader Bob Felts said...

1) Consciousness is the flow of electrons in certain patterns through the neurons in the brain.
2) Yes and no. Transistors use quantum effects, but you don't need transistors to do computations. You could use logic gates that use water.

reader Bob Felts said...

All software is hardware.

reader Scott Aaronson said...

a) If there exists a thing that the philosophers have been referring to for millennia by words like "consciousness," then consciousness is that thing. I can't define consciousness much better than that, and neither can you, and neither can Lubos. But despite the lack of a satisfactory definition, it can sometimes be fun to speculate about what sorts of physical conditions might be necessary or sufficient for consciousness, as I do a bit in my book, and as Lubos also does in the above post.

b) Quantum mechanics is certainly relevant to the individual transistors, but no, in existing computers it plays essentially no role at higher levels of organization. Of course, many people hope to build computers for which QM *would* be relevant at higher levels of organization; such machines would be known as "quantum computers."

c) Sure, for all I know quantum fluctuations might explain such a thing if it were true. They might also explain Lubos's belief that he knows *everything* better than *everyone*, or your own annoying habit of asking questions for which you're not actually interested in the answers.

reader victor said...

Thank you!
And I'm really interested in your answers! I wanted to hear how you smash and nullify all my knowledge of 'classical' molecular biology, biochemistry and neurobiology. Unfortunately...

Glad you know my habits so well.

reader Shannon said...

Scott, do you always communicate using subliminal messages?...

reader Scott Aaronson said...

I wasn't aware that I ever do!

reader Robert Rehbock said...

Am reading it. You cannot and do not define consciousness or intelligence rigorously. It's hard to fault LM or me for forming opinions before chapter 11 or 19... Of course, I would also fault Penrose for the opposite extreme in his "Maze (sic) to Reality". His style sends one to and fro with forward, backward, external, and indirect references to concepts addressed elsewhere. Your book can be read front to back. I think that will make it more likely that more buyers will actually read your book in major part.
I noticed that LM has now posted regarding a new book by Lee Smolin. Just a guess before I look at his post: that book, unlike yours, will not be worth the effort to read even to chapter four. I do agree one should keep an open mind but that gets back to defining intelligence. I will be surprised if that book demonstrates any evidence of intelligent life... But with an open mind I will turn to that review :-)

reader Scott Aaronson said...

Robert, I'm glad you've found my book worth reading, at least so far! I agree that I don't rigorously define consciousness or intelligence, but I also made no claim to have done such a thing.

reader Giotis said...

Regarding PW, I checked his post today and it seems funny to me that he praises QFT and the SM while on the other hand he rants against M-theory, the only existing natural extension of QFT/SM that could include gravity and respect all their principles by taking into account all the lessons learned from QFT.

And you have to include gravity whether you like it or not.

You want a proof? Besides string theory and related work, nothing much is happening on hep-th.

reader Anthony said...

LOL. Thanks to all for an entertaining conversation. I did not read the book yet but it's coming. Hope I'll get it with a different cover.

reader Scott Aaronson said...

Lubos, I just saw your added material on P vs. NP. I'm sorry to hear that, even though you'd have "some probability" of solving the P vs. NP problem if you tried, you choose not to work on it since it doesn't really excite you at the visceral level. I'm tempted to reply that I, too, have "some probability" of making a breakthrough in string theory, but I choose not to work on *that,* partly because I find the stuff I DO work on to be more interesting! :-) Seriously, the world is a big place, bursting with enough fascinating open problems for many lifetimes, and in my opinion, no one should have to justify their choice to focus on only some of the problems by explaining what sucks about all the other ones.

But to respond briefly to Lubos's arguments:

Yes, there are many other unsolved problems in complexity theory besides P vs. NP -- for example, P vs. PSPACE, NP vs. P/poly, BPP vs. BQP, permanent vs. determinant, the exponent of matrix multiplication, etc. etc. But contrary to what Lubos says, these problems don't really seem like an "infinitely tall tower": it feels much more like we're struggling toward more-or-less the same conceptual breakthrough for all of them. And while we're still extremely far from that breakthrough, my personal guess is that, once someone manages to solve any one of the "strong" complexity lower bound questions, the others will fall like dominoes. (Also, of course, the whole point of NP-completeness is that problems like Traveling Salesman, Max Clique, 3SAT, Subset Sum, Ising spin glass minimization, and tens of thousands of others can't be hard for "accidental" reasons specific to each problem. If these problems are hard at all, then all of them must be hard for exactly the same reason.)

Having said that: yes, it's true that theoretical particle physics is inherently a "bounded" subject (there's only one universe, so discover its laws and you're done), whereas math is inherently "unbounded." And yes, some people have seen that as a reason to go into physics rather than math, and I don't hold it against them. On the other hand, others have seen the same fact as a reason to go into math rather than physics! ("Why concentrate so much brainpower on such a narrow, already-picked-over set of questions, when there are so many other beautiful questions?")

An anecdote: Einstein himself said that he went into physics rather than math for basically the reason Lubos gives, that there are too many *different* beautiful problems in math and he wouldn't know which one to pick. (Not surprisingly, Einstein was more diplomatic than Lubos! :-) ) However, late in his life, Einstein also said that his friendship with Godel had taught him that in math, just like in physics, there are *central* conceptual questions, ones with little or nothing arbitrary about them. In my opinion, P vs. NP (which, as it happens, was first posed in a letter from Godel to von Neumann) is the example par excellence of such a question.

reader James Gallagher said...

Jesus Christ

I've been away a few days and come back to this.

Scott, first of all you are too YOUNG to be writing such an ambitious overview of the history of maths and science, even if restricted to your area of turtle step algorithms.

Write this book in your fifties, when you understand the world better.

But my main point is you are simply wrong to think that these "basic" "foundational" questions in computer science are worthwhile to anyone but people in your field and other "couldn't do proper science and maths" people, like philosophers and the like. (Sorry, not being rude, just being factual; your book is good as entertainment, like a ballet or symphony - but not much beyond that.)

You rope in some famous science names like they have also studied this shit to the same extent as you deluded people have. Do you claim they have published papers on it? NO THEY HAVEN'T. Name one person considered a great recent physicist who has published a paper in the same area as you computer science guys. They are humouring you nerds.

The reason they haven't posted papers is the same reason Galileo, Newton, Maxwell, Boltzmann, Einstein, Heisenberg, Schrodinger, Dirac, Feynman etc etc haven't - YOUR ABSTRACT ARGUMENTS ARE BOLLOCKS (Don't even try to recruit Feynman as a computer nerd)

You are maybe too young to understand this Scott, which is not so bad for you, but a lot of people telling you something is cool doesn't make it cool.

reader Scott Aaronson said...

ROTFL! John Preskill, Ed Farhi, and Ray Laflamme are extremely well-known former high-energy physicists who not only have "published papers" in quantum information, but have been working full time in this field for 15-20 years, because they personally found it more exciting than HEP. Was Alan Turing's work also "ABSTRACT," and therefore "BOLLOCKS"? Also, maybe you should *learn* a bit more about this subject (say, by reading my book...) before you mouth off about it so doofusly? I suppose I wouldn't know; I'm too young.

reader James Gallagher said...

You mentioned completely different names previously when trying to "big up" your subject; those names are hardly in the same hard-core physics class as the likes of Susskind, Bousso, Maldacena et al.

In any case, they're all over 50, and by then, as you now know, you should just be writing historical surveys of stuff, not attempting original research.

I should know, I'm in my 40s, and can feel my capabilities failing daily, and I'm probably one of the greatest intellects ever to walk the face of the Earth.

The likelihood of me buying your book is about the same as you getting 100+ entangled qubits to fail to do anything nearly as interesting as you hope they can by Christmas (I may peruse it in a library).

reader Luboš Motl said...

Dear James, Maldacena is 44.5 and Bousso is 41. WTF?

reader Scott Aaronson said...

I know Lenny Susskind well; not only is he a huge fan of quantum information (having worked with QI researchers like Patrick Hayden), but right now he's energetically working to help make Stanford more of a presence in that area. As for Raphael Bousso, a few years ago I spent an extremely enjoyable week with him at Berkeley; he invited me there specifically because he wanted to learn more about complexity theory and considered it relevant to what he was working on. You're probably right that Maldacena's interest is more limited (I only know that he evinced curiosity when I met with him, while I was a postdoc in Avi Wigderson's group at IAS).

In case you're wondering: yes, it does feel strange to "name-drop" like this, and no, I don't acknowledge string theorists as the sole arbiters of what's interesting or important in all other parts of science! :-) However, *because* Lubos has been denigrating the entire area I work in as boring, bureaucratic, parochial, etc. compared to HIS favorite areas, it did seem relevant to point out that many of his colleagues not only sharply reject that assessment, but (in some cases, like Susskind's) are now emphatically "putting their money where their mouth is."

reader James Gallagher said...

I was referring to Scott's meagre list of "star" names

reader James Gallagher said...

I was kinda joking, but the point is that history tells us that the breakthroughs will come from hotshots in their 20s and 30s and maybe early 40s (Planck, Born ...).

Quantum computing is a sexy subject that really should attract the young'uns, and it's easier than QFT and string theory. But, after many years, there aren't any breakthroughs vis-à-vis nature. The most likely reason being that nature doesn't do quantum computing...

QC is a fun branch of pure maths.

reader Luboš Motl said...

Dear James, it's a misconception. If you go through a sufficient number of top scientists' biographies, e.g. those on this blog, you will find out that about a half of them did their most important work after 40 years of age, often in their 50s and sometimes their 60s.

In physics, one needs some knowledge and experience - arguably an increasing amount of it as time goes on - and the optimum age is well above 20 and probably well above 30, too.

reader Luboš Motl said...

Dear Scott, obviously, Lenny has been keen on quantum information - partly because of the black hole quantum information puzzles - at least since the 1970s.

I wasn't denigrating a whole field when talking about the bureaucratic character of proofs. I was just pointing out that certain results aren't really deep - they're trivial - and their importance boils down to the green card allowing people to circumvent the most nitpicking criteria for a rigorous proof. This sometimes overconstraining rigor is pretty much a feature of the whole field of mathematics - surely relative to physics - and it has both advantages and disadvantages. An advantage is that one may become "really sure" about proofs in this strict rigorous way. The disadvantage is that the rigor sometimes prevents one from getting too deep or too far from the status quo.

My point was really more special, namely that dualities in string theory or QFT are much deeper than some translation of algorithms from one programming language or one type of Turing machine to another, because those things are pretty much straightforward - a bit of not-too-difficult thinking may quickly convince one that it's possible, and in some time, it can indeed be done. Dualities in string theory are surprises that remain, from most viewpoints, shocking.

reader Scott Aaronson said...

I think that to a large extent, the level of rigor in a given subject is determined by the subject matter itself. Physicists generally haven't *needed* as much rigor as mathematicians, both because (at least until recently... :-) ) they've had constant guidance from experiment to tell them if they're on the right track, and because they have the "luxury" of making simplifying approximations left and right, with the confidence that Nature won't maliciously thwart their assumptions. In algorithms, complexity, and computability theory, we typically don't have these luxuries. For example, if you're trying to prove that no polynomial-time algorithm exists to solve a given problem, then your experience with "typical" algorithms might be almost useless, since an algorithm discovered 500 years from now might look nothing like the "typical" algorithms of today.

Having said that, it's a very interesting question whether theoretical computer science could be done in a less rigorous, more "physics-like" way, and whether it could move faster if it did. Ketan Mulmuley's GCT program (which I mentioned in a previous comment) could be seen as a super-ambitious attempt to do complexity theory in just such a "pre-rigorous physics style," and it's a program that many of us "rigorous" folks follow with respect and interest! On the other hand, I think it's fair to say that so far, the "rigorous" approach (which includes conjectures, but always clearly labelled as such) has had overwhelmingly more success at moving the field forward.

reader Luboš Motl said...

Dear Scott, I agree with you that the level of rigor dynamically depends on the discipline's needs.

The Church-Turing thesis is just an example of a simplifying metastep, and many more such metasteps could be used in computer science papers without any genuine risk that the proofs would be invalidated. So I am probably suggesting the same thing as the "physics of computer science" approach you talk about.

reader Luboš Motl said...

Good to hear, Scott. Good for your field. I was giving a talk today but finished the Proofs chapter - fun but lots of technicalities. I will probably only comment on the next, quantum things...