Scott Aaronson has posted at least two new comments here in which he demands to be considered a "possible co-author" of a paper by Ewin Tang, who has found a recommendation algorithm that Aaronson had insisted was impossible.

I think that given Aaronson's frantic efforts to discourage this kind of research, it would be insane if he were a co-author of Tang's paper. Aaronson is testing the waters – could he get away with stealing one-half of the credit, after all? I sincerely hope that Tang already has enough freedom not to allow such a shocking development.

But Aaronson also discussed our main dispute: whether complexity theorists should be open-minded about the truth value of \(P=NP\). Aaronson claims that they shouldn't be. Even without a proof, the non-existence of polynomial algorithms for \(NP\) problems should be considered an "established wisdom" – the phrase that Aaronson *still* uses even for the now demonstrably untrue, unsubstantiated, irrational belief that Tang's algorithm cannot exist.

In an effort to sling some mud on Tang's work, Aaronson pointed out that Tang's algorithm could be slower than you think – because of some polynomial factors. But such factors are *exactly* the possibility that's been used to argue that \(P=NP\) could be true – without changing the world. The time needed to solve an \(NP\) problem could scale like \(720 N^{196,883}\). It's polynomial in principle but very large for any \(N\geq 2\). So even if \(P=NP\) were true, you couldn't really do practical things with this fact.
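Just to quantify how "galactic" such a polynomial bound would be, here is a minimal Python sketch (the exponent 196,883 and the prefactor 720 are the illustrative numbers from the text, not a real complexity bound of any known algorithm):

```python
import math

# Illustrative "galactic" polynomial bound from the text: T(N) = 720 * N**196883.
# Even for N = 2 the number of steps is astronomically large, so we count
# its decimal digits via logarithms instead of materializing the integer.
log10_steps = math.log10(720) + 196883 * math.log10(2)
digits = math.floor(log10_steps) + 1

print(f"720 * 2**196883 has {digits} decimal digits")
# For comparison, the number of atoms in the observable universe has only
# about 80 decimal digits: an algorithm with this "polynomial" bound would
# be utterly impractical already for N = 2.
```

So "polynomial" and "practically usable" are logically independent properties, which is the whole point of the paragraph above.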

Tang's paper may be considered a *toy model* showing that exactly this scenario seems to be happening in some clever algorithms – at least in the special case of the Netflix recommendation problem. Aaronson illogically spun the polynomial factor as an argument for *his position*. But there's simply no valid logic (not even "fuzzy logic") by which this polynomial factor could be used to *strengthen* the belief that \(P\neq NP\).

The only sensible interpretation is one that makes \(P=NP\) more plausible than before. (Well, if you think it makes sense to quantify probabilities of unproven mathematical statements at all. I don't think it makes sense in discrete mathematics because all arguments "for" and "against" should be considered cherry-picked – by appropriate cherry-picking, you can fool yourself into thinking that the probability is whatever you like.) Tang has shown that the "dequantization" of a previously written quantum algorithm may be carried out while you simply pay some polynomial (or polylog, depending on how things are counted) factors that slow down the calculation. In the same way, the "de-NP-ization" (the conversion of an algorithm that verifies a solution into an algorithm that finds it) could be done if you increase the computation time by an analogous (large but) polynomial factor!

The punch line is that Tang's algorithm could be considered a toy model for a proof of \(P=NP\). One just needs to be a bit more general and solve a grander, more universal task than the Netflix task. Can one extend Tang's construction to a proof of \(P=NP\)? I don't know. I don't know how exactly it should be done. (However, if you read this line and then find such a proof, I do think that my leadership will have been crucial for you.) But it's *plausible*, so we have at least a very rough sketch of a proof of \(P=NP\) – an example of the fact that arguments pointing towards the answer "\(P=NP\) is true and will be proven" exist, too.

Aaronson made his new comment more intriguing because he brought Edward Witten to the discussion:

> Finally, regarding the “50:50” view of P vs NP: a couple weeks ago I had the pleasure of teaching at an IAS summer school on quantum information and quantum gravity, co-organized by Maldacena and Witten. It was my first serious interaction with Witten. At some point during dinner, I brought up your views about P vs NP. Witten asked me what reasons you could possibly have for thinking that the best possible guess was 50:50. So I told him, as faithfully as I could: because discrete math, unlike continuous math or physics, is just a mess of disconnected bureaucratic propositions, none of which gives much meaningful insight about any other. And also, because computer scientists are not very smart, so their intuitions should be entirely discounted. On hearing this, Witten laughed and shook his head.
>
> I mention this not to invoke Witten’s authority, but simply to illustrate why I reject your “CS vs physics” or “discrete vs continuous” framing of the issue. As far as I can tell, even among your fellow string theorists, and even among the great “continuous” mathematicians, your “50:50” view regarding the discrete math questions that haven’t already been answered puts you into a crazed minority of approximately one.

Let us discuss these two paragraphs in the rest of this essay. First, the correct reaction to the last paragraph is obvious: Indeed, it was rather important in one of my arguments that I divided mathematics into the "continuous mathematics" that is close to physics, where "partial certainty" – and probabilities like 99.9999% – may legitimately arise (e.g. as 5-sigma evidence in favor of some assertion about a real number); and the "discrete mathematics" where such partial certainty is impossible.

Witten is a top scientist in theoretical and mathematical physics and "continuous mathematics", but he has little understanding of the "discrete mathematics" that complexity theory belongs to, which is why it's unsurprising if he holds the flawed view that one may become 99.9999% certain about unproven propositions in "discrete mathematics". Complexity theory has almost nothing to do with string theory, which is indeed a reason to think that Witten's arguments about complexity theory aren't terribly important, especially if the arguments are "laughter and the shaking of his head". Sorry, but laughter and shaking heads aren't arguments, certainly not deep ones, and nothing changes about that fact when it's Witten's lips or Witten's skull that are involved. It's very bad if *a professional scientist* believes otherwise.

(Update: Incidentally, I got almost equivalent messages from 3 independent people, telling me that I might be misinterpreting Witten's laughter and head-shaking – which could easily have meant his disagreement with Aaronson, among other things. Yup, I admit it's possible and I may have been sloppy in buying Aaronson's "obvious" interpretation.)

Now, let us turn our attention to the first paragraph.

Aaronson summarized my arguments why one should be open-minded about \(P=NP\) "as faithfully as he could". He apparently reproduced one sentence – which is basically my sentence, with some catchy words that are guaranteed to entertain physicists at a dinner table. In that sentence, you may see that the phrases "bureaucratic propositions" and "not very smart computer scientists" are heavily overrepresented.

They're my phrases but Aaronson's suggestion that this is a fair summary of my arguments is a plain lie because in my actual blog posts about \(P=NP\), such as this 2014 text and some others, the technical and rather solid arguments vastly outnumber the witticisms that may entertain people at a dinner table.

It reminds me of elementary school: Some teachers sometimes inserted a funny story that was loosely related to the topic they were teaching. During history classes, we learned about the nude ancient Greek Olympic runners whose unpaired limbs were swinging back and forth, left and right, thus proving the beauty of the human body. (The number of such examples was much higher, of course. Teachers, like bloggers, *should* entertain, at least sometimes.) Needless to say, many schoolkids *primarily* remembered these cute insertions. But a schoolkid who is supposed to get a good grade should "also" (well, "primarily") know the remainder of the material – all the "relatively boring" things. During his dinner with Witten and others, Aaronson behaved as a schoolkid who can only remember the sentences about swinging penises.

So yes, I think that it's fair to describe \(P=NP\) as a bureaucratic proposition – it's a complicated, unnatural, artificial proposition with many arbitrary choices (like a tax form) that can't be identified with practical implications, e.g. because the polynomial times may still be extremely long. And I think that the likes of Witten are generally smarter than computer scientists. (Incidentally, I think that Witten agrees but wanted to be diplomatic, which may have contributed to the laughter.) But those comments are *not* the core of my arguments why one should be open-minded about \(P=NP\), and Aaronson's suggestion to the contrary makes him a dishonest demagogue.

If you want to have a fair idea about my arguments, you should at least quickly read my 2014 text about open-mindedness towards \(P=NP\). And if you want to know about other computer scientists' arguments leaning towards \(P=NP\), or at least towards agnosticism, you should look for Moshe Vardi, Anil Nerode, and Richard Lipton on the page I just linked to – and elsewhere.

(Update: Guest has posted a rather fresh, slightly informal but technical paper by Ryan Williams of MIT, an accomplished complexity theorist – who lists similar reasons to be open-minded and quantifies the probability of \(P\neq NP\) as 80 percent. He included estimates for less famous similar hypotheses, and the reasoning behind them, too: all his numbers are generally much closer to 50-50 than the numbers offered by the groupthink crowd. Williams and Aaronson must know each other well – MIT etc. – and Williams is respectful towards Aaronson's comments. But Aaronson pretends that Williams and similar people don't even exist; in February, I found out, Aaronson ignored question #84 about Williams' paper. Those are some of the extra reasons why I think that Aaronson behaves as a nasty, dishonest, prejudiced bigot.)

But if I need to quote "authorities" – and it seems that some people just can't possibly think rationally without them, without role models – it seems optimal to embed a 5-minute video of Donald Knuth, an icon of computer science. The video was posted to YouTube in September 2014, i.e. half a year after I wrote my 2014 text. Just a week ago, Knuth got a bad TRF press for claiming that \(0^0=1\) rather than being indeterminate, so it's an excellent opportunity to give this giant of "discrete mathematics" some good press, too.

(Yes, Knuth's choice \(0^0=1\) is less surprising because he is a "discrete mathematician" and doesn't fully allow the exponent to be a general continuous number – just like Witten's failure to understand that one must be open-minded about unproven assertions in "discrete mathematics" partly follows from the fact that Witten is a giant of "continuous mathematics".)

Among other things, Knuth is the author of highly authoritative texts about programming – and the author of \(\rm\TeX\), the language with the backslash-based commands that you, Ed, use rather often. And as you can see, Knuth's monologue looked like a review of my March 2014 blog post.

Fine. Knuth says that he believes that \(P=NP\) is probably true – the opposite truth value to what Aaronson and other members of the same groupthink present as the established wisdom, although no evidence is known that would logically favor one answer or the other. But, Knuth adds, there's no contradiction: \(P=NP\) doesn't have any nearly contradictory or world-changing implications because the algorithms could still be very complicated and slow, and the proof of \(P=NP\) that someone eventually presents could be non-constructive. Knuth adds an example of a non-constructive existence proof of some linear-time algorithms dealing with graphs.

At the end, the major observation by Knuth and others is that *the set of all algorithms is an incredibly large (obviously infinite) and diverse set*. So algorithms can do lots of things. It's a rather bold (and perhaps "unlikely to be true") statement to claim – without a proof – that *no algorithm in this very diverse, infinite set* can solve a certain problem in a time that is in principle polynomial.

You may also read some written summary of Knuth's reasons to believe that \(P=NP\) is probably true.

I could survive being the only thinker who finds it important to be open-minded about \(P=NP\). But rest assured, I am extremely far from being the only one. Even if it were just Donald Knuth and me, I would be in rather good company. And we're far from being the only two who find it perfectly conceivable that \(P=NP\). It is true, however, that the Aaronson-style groupthink "politically controls" much of the field. See e.g. these comments by Richard Lipton from 2009: the field has effectively made a huge bet on \(P\neq NP\), because research that wants to prove \(P=NP\) – or even "weaker assertions in a similar direction" – is almost guaranteed to be defunded.

That's extremely dangerous, sick, and unscientific.

There are other blog posts that focus on the technical arguments. But here I want to add some sociological comments. At the dinner table, it's normal not to contradict other people too much. You still want the dinner to be a friendly event; that's natural. In many cases, people find out that they have similar opinions about many issues. Someone who disagrees may always see groupthink in that synergy.

For example, I've had dinners with string theorists – surely many dinners with Andy Strominger and others at Harvard, plus visitors to Harvard – where they wouldn't be afraid of using very critical words against loop quantum gravity, among other things. It was refreshing to see these criticisms because they rarely said such things in public – well, I dislike these double faces in the end, but if someone is frank at least in *some environment*, it might still be better than if he's never frank.

Just to be sure, I don't think that the arguments used to support these positions during dinners were usually deep and rigorous. They almost never were: it's just a fudging dinner, you're supposed to eat and enjoy the event. But yes, I have always preferred some rigor *even during dinners*. So I always tended to contradict arguments that I found too oversimplified or demagogic.

But most people in the Academia just don't do any of these things. At least when it comes to questions outside their rather narrow expertise, the depth of the groupthink is shocking. The groupthink is strengthened primarily by the way the communities are built. People prefer to have others who agree with them around – and they know that they will enjoy advantages if they agree with the people around them. So most scholars do just that.

This is why there's so much group think about politics, climate change, and other brutally politicized issues. The people who aren't compatible with this extreme left-wing environment get harassed, demoralized, repelled, ostracized, defunded, or fired. (Today, I was pleased to see that the climate alarmist Lisa Randall (whom I know well) was among the critics of a really ludicrous alarmist NYT article about the climate. But maybe Lisa is rebranding herself as a Republican now, what do I know?)

(By the way, try to read the incredibly hysterical rant that Leonard Susskind wrote about Donald Trump just two years ago. At least now, will Susskind – and other people in that groupthink – admit that they were wrong? America and the world haven't ended; they're almost certainly in better shape than 2 years ago. Just look at the damn stock markets and indices, the number of wars, and other things.)

The story about Aaronson's review of my arguments supporting open-mindedness on \(P=NP\), and Witten's laughter and head-shaking, is an excellent example of groupthink mechanisms in researchers' real lives. Ed, Aaronson made you think that my arguments boil down to some sociological observations and colorful adjectives and had no technical content. But you have been deceived by Aaronson, and your laughter shows that you are a very easy person to deceive. My arguments against the \(P\neq NP\) dogmatism are very strong, technical, and logical.

Ed and others, you should try to critically think in which other dinner conversations you are being deceived. A hint: The list includes all the conversations that touch Donald Trump and almost all conversations that touch politics. The degree to which your community has detached itself from the political reality is incredible.

**Bonus: cherry-picking and open-mindedness in discrete mathematics**

I want to review the basic argument why one (and especially the research community as a whole) should be open-minded (basically believing 50-50) about \(P=NP\) and similar unproven propositions in discrete mathematics – and what's the difference from the claims about continuous mathematics and physics.

In particle physics, we may discuss questions such as whether the Higgs mass obeys \(m_H\leq 200\GeV\). There is really a "finite effective number of relevant, independent, continuous questions about the Higgs mass" that one can ask. Some measurements at the particle accelerators have disproved the Standard Model (and perhaps more general theories) with \(m_H\gt 200\GeV\) at the five-sigma level, and because "that dimension (along the mass axis)" in the space of possible properties of Nature is far more important than others, we can translate a 5-sigma falsification of a working hypothesis into a 99.9999% certainty about the statement \(m_H\leq 200\GeV\), or something like that.
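To make the translation between "5 sigma" and a probability like 99.9999% concrete, here is a minimal Python sketch of the standard Gaussian tail computation (a generic textbook illustration, not tied to any particular Higgs analysis):

```python
import math

def one_sided_tail(n_sigma: float) -> float:
    """P(Z > n_sigma) for a standard normal random variable Z."""
    return math.erfc(n_sigma / math.sqrt(2)) / 2

# A 5-sigma deviation corresponds to a tiny one-sided tail probability,
# i.e. a confidence of roughly 99.99997% in the surviving hypothesis.
p5 = one_sided_tail(5.0)
print(f"5-sigma tail probability: {p5:.3g}")
print(f"implied confidence: {1 - p5:.7%}")
```

This is the sense in which a continuous measurement can legitimately produce "partial certainty" about a proposition like \(m_H\leq 200\GeV\).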

Even when the propositions aren't inequalities, we may find analogies in physics and we're dealing with a "finite space of ideas" because the number of possible analogies we can build is limited, at least now, which is why analogies may often legitimately influence our beliefs in new questions.

However, in "discrete mathematics", the number of "vaguely analogous statements" to an unproven proposition such as \(P=NP\) is always infinite and there's no canonical way to choose which of them are "more analogous" than others. Clearly, in this set of analogous statements, some point towards \(P=NP\), like Knuth's speech above, others point to \(P\neq NP\), like some arguments that Aaronson would love to repeat often. If I simplify just a little bit, all fast enough or surprisingly fast algorithms (and even non-constructive proofs of their existence) may be presented as arguments supporting \(P=NP\) and all the proofs that some algorithms (doing other things) don't exist may be viewed as analogous to \(P\neq NP\) and therefore morally supporting \(P\neq NP\).

You may always cherry-pick your beloved arguments – for and against \(P=NP\) – or cherry-pick the analogous statements. The percentage of "arguments supporting \(P=NP\) in your cherry-picked set" may be basically whatever you like. There's no "comprehensive set of all arguments and analogies touching \(P=NP\)" and there's also no way to "calculate how one fuzzy argument is far more powerful than others".

For this reason, the opinion about \(P=NP\) (the subjective probability that the proposition is true) is a matter of pure prejudice, not a situation in which one may legitimately be "almost certain" about one answer, as in the case of the Higgs mass. If Edward Witten cannot understand this bonus explanation and admit that he was deceived by Scott Aaronson concerning the existence of technical content in my arguments, I will revise my estimate of Edward Witten's IQ downward by some 20 points.

And incidentally, I would choose to bet that Juan Maldacena understands my arguments.

**Bonus II: How I needed time to fully appreciate Michael Crichton's comments about prejudices**

I knew about (late TRF reader) Michael Crichton's 2003 Caltech Michelin Lecture "Aliens Cause Global Warming" soon after he delivered it, and I almost completely identified with everything he said.

But for some time, there was still some sense in which I thought that he exaggerated. In that lecture, he said that the Drake equation to "calculate" the number of extraterrestrials was the beginning of a pseudoscience in which predetermined conclusions are "rationalized" by equations that are actually useless for collecting any evidence – and that these lowered standards almost directly led to the global warming hysteria, too (that's what explains the title of the talk), where some predetermined conclusions are just being rationalized by jargon and equations that look "sciency" but are useless for getting any real information about the relevant question.

Crichton said:

> In 1960, Drake organizes the first SETI conference, and came up with the now-famous Drake equation: \(N=N\cdot f_p\cdot n_e\cdot f_l\cdot f_i\cdot f_c\cdot f_L\).
>
> [where \(N\) is the number of stars in the Milky Way galaxy; \(f_p\) is the fraction with planets; \(n_e\) is the number of planets per star capable of supporting life; \(f_l\) is the fraction of planets where life evolves; \(f_i\) is the fraction where intelligent life evolves; \(f_c\) is the fraction that communicates; and \(f_L\) is the fraction of the planet’s life during which the communicating civilizations live.]
>
> This serious-looking equation gave SETI a serious footing as a legitimate intellectual inquiry. The problem, of course, is that none of the terms [LM: Crichton meant "factors"] can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. And guesses – just so we’re clear – are merely expressions of prejudice.
>
> Nor can there be “informed guesses.” If you need to state how many planets with life choose to communicate, there is simply no way to make an informed guess. It’s simply prejudice.
>
> As a result, the Drake equation can have any value from “billions and billions” to zero. [...]

The analogy with the \(P\neq NP\) "informed guess" is self-evident.

You know, for a while, I wanted to be a bit more sympathetic to the Drake equation. The factors in the equation *could* be estimated, at least some of them. But, as I appreciated a bit later, that can't change Crichton's conclusions. There are many factors in the Drake equation, and to estimate the product, you need to know all the factors accurately enough. So it's enough for one of the factors to be uncertain and you're uncertain about the product.

Moreover, the number of factors is rather high (let us say seven), so even if each of them were known "up to a factor of two", the product could only be known "up to a factor of 128". Because you rewrote the unknown quantity as a product of many things, you need the individual factors to be *even more precise* than what would be enough if the number of factors were low. What's going on is that Drake chose many factors – describing "small steps" needed for intelligent life – and in this way, it becomes more plausible for every *single step* (every fraction, every factor) that its probability is rather high, not much smaller than 1. However, there's a trade-off that guarantees that you haven't made the ETs more likely after all: because there are many steps now, it becomes more likely that, for at least one of these steps, the guarantee that the probability is comparable to one fails! As in the look-elsewhere effect, it's rather likely that at least one of the factors among many actually *is* vastly smaller than one, close to zero, and we may be the only ones in the visible Universe, after all.
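The compounding of per-factor uncertainty can be illustrated with a small Python simulation (the factor-of-2 error bars and the central guesses of 1 are hypothetical numbers chosen purely for illustration, not real astrobiological estimates):

```python
import random

random.seed(0)
N_FACTORS = 7          # seven factors in the Drake product
HALF_WIDTH = 1.0       # each factor uncertain "up to a factor of 2" = 2**1

# Worst case: all seven errors line up, giving a spread of 2**7 = 128.
print(f"worst-case spread: factor of {2 ** N_FACTORS}")

def sample_product() -> float:
    """Product of 7 factors, each log-uniform within a factor of 2 of 1."""
    return 2.0 ** sum(random.uniform(-HALF_WIDTH, HALF_WIDTH)
                      for _ in range(N_FACTORS))

samples = [sample_product() for _ in range(100_000)]
print(f"observed spread over samples: {min(samples):.3g} .. {max(samples):.3g}")
# Even though each individual factor is known to within a factor of 2,
# the product routinely wanders over two orders of magnitude: per-factor
# uncertainties multiply rather than average out.
```

The design point is the one made in the text: rewriting an unknown quantity as a product of many "individually plausible" factors does not reduce the uncertainty of the product; it compounds it.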

At the end, it's rather clear that at least some of the "fractions" in the equation (and probably all of them) are such artificial quantities, describing complex astrobiological questions, that it's obvious the rewriting of the number of extraterrestrials as a product doesn't help you count them at all.

Now, in practice the Drake equation was used by the ideologues who claim that intelligent life must be almost everywhere. That's what you would get if all the fractions were "somewhat comparable to one" – not "insanely smaller than one". But you would really need *all these fractions* to be reliably "remotely comparable to one". Because the number of these fractions (factors) is so high, it becomes very unlikely that you may find guarantees that *each single one of them* is at least vaguely comparable to one (and not much tinier).

I actually believe that the number of intelligent civilizations in the Universe is very small (which is surely compatible with the direct empirical evidence). But I don't want to defend it too strongly. What's more important is a weaker statement – the statement that a "nearly complete proof" that ETs are almost everywhere, a "proof" based on the rewriting of the number using the Drake product, is irrational. One just translates his prejudice into some new variables but the degree of uncertainty doesn't decrease at all.

Aaronson et al. could be doing something analogous to the Drake equation when they claim that \(P\neq NP\) is almost certain – well, I haven't really seen any persuasive arguments at all, so I may be overestimating their mental activities. But even if they reorganize the truth value of \(P=NP\) in some way, there are still some unknown bits in the expression and that's simply enough to make the odds of \(P=NP\) uncertain and comparable to 50-50.
