Friday, March 07, 2014

\(P=NP\) is conceivable; there is no partial evidence in purely discrete mathematics

Scott Aaronson of MIT has finally written his new essay

The Scientific Case for \(P\neq NP\)
in which your humble correspondent is appointed the spokesman of the people who suggest that
\(P\neq NP\) is just “a fashionable dogma of the so-called experts,” something that’s no more likely to be true than false.
A fine summary, by the way.

We the doubters can even point to at least one accomplished complexity theorist, Dick Lipton, who publicly advocates agnosticism about whether \(P=NP\), Scott says. (Note that my text on \(P=NP\) and the Erdős problem, written two weeks earlier, is nearly isomorphic to Lipton's.)

I was highly amused by Scott's introduction. His cute formulation that Lipton "publicly advocates agnosticism" tells you something about the atmosphere in the field. What is it? It tells you that he should probably keep these blasphemies in his bedroom instead of coming out of the closet and threatening the public morality! ;-)

Dogmatic irrational bullies like Aaronson himself seem to rule their field and the statement that "\(P=NP\) could very well be true" is the same kind of heresy as "climate change may not be a problem" among the climate cataclysmic crackpots.




Having implicitly declared Lipton a heretic, he still wants to give him an opportunity to repent (and separate him from the true infidels like us), so Scott adds:
Of course, not all the doubters reach their doubts the same way. For Lipton, the thinking is probably something like: as scientists, we should be rigorously open-minded, and constantly question even the most fundamental hypotheses of our field.

For the outsiders, the thinking is more like: computer scientists are just not very smart—certainly not as smart as real scientists—so the fact that they consider something a “fundamental hypothesis” provides no information of value.
The two formulations differ in their exact wording but their logical content is the same – and I would bet that Richard Lipton would agree. After all, my wording was actually closer to the proposition attributed to the insiders (except that I would kindly stress that computer science isn't a science according to the normal definition of the scientific method: it is a subfield of mathematics, a point that will be important below) while Lipton's was closer to the version of the outsiders.




It may be useful to copy-and-paste Lipton's paragraphs about the relevance of the Erdős discrepancy result and similar results for the beliefs about \(P=NP\):
\(P=NP\)?

Their paper can also be read at another level, below the surface, that reflects on complexity theory. Their paper shows that there are SAT solvers capable of solving hard natural problems in reasonable time bounds. What does this say about the strong belief of most that not only is \(P\neq NP\) but that the lower bounds are exponential?

I wonder sometimes if we in complexity theory are just plain wrong. Often SAT solver success is pushed off as uninteresting because the examples are “random,” or very sparse, or very dense, or special in some way. The key here, the key to my excitement, is that the SAT problem for the Erdős problem is not a carefully selected one. It arises from a longstanding open problem, a problem that many have tried to dent. This seems to me to make the success by Konev and Lisitsa special.

John Lubbock once said:
What we see depends mainly on what we look for.
Do we in complexity theory suffer from this? As they say across the Pond, this is bang-on. According to our friends at Wikipedia, Lubbock was:
The Right Honourable John Lubbock, 1st Baron Avebury MP FRS DCL LLD, known as Sir John Lubbock, 4th Baronet from 1865 until 1900, [a] banker, Liberal politician, philanthropist, scientist and polymath.
I know he lived many years ago, but note he was a polymath: hmmm… Meanwhile I remember when another Liverpool product came across the Pond 50 years ago, which some say “gave birth to a new world.”
I think that you will agree that the spirit is the same as in my articles about \(P=NP\): some apparently difficult problems suddenly turn out to have doable, efficient, realistic solutions, and the belief that a polynomial solution for \(NP\) problems such as the "traveling salesman" cannot exist might be just a belief imposed by groupthink. There is no genuine evidence that would make \(P\neq NP\) more likely than \(P=NP\). A fast solution for the "traveling salesman problem" hasn't been found but a proof that it doesn't exist hasn't been found, either.



Traveling salesman problem, the solution for the European capitals. You don't have to be Mozart to become good at searching for a good route; being a good traveling salesman is enough. A Monte-Carlo-like (but pseudo-random) self-correcting search algorithm for the fastest route, along with a boring technical proof that this algorithm succeeds quickly enough, could be enough.
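To make the caption concrete, here is a minimal sketch of such a Monte-Carlo-like, self-correcting search – simulated annealing with segment-reversal (2-opt) moves. It is my illustration, not anyone's proven algorithm, and the city coordinates are made up:

```python
# A minimal sketch of the pseudo-random, self-correcting search from the
# caption: simulated annealing with 2-opt (segment reversal) moves.
import math, random

def tour_length(tour, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, steps=20000, t0=1.0, cooling=0.9995):
    tour = list(range(len(pts)))
    random.shuffle(tour)
    length, temp = tour_length(tour, pts), t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, pts)
        # accept improvements always, and some worsenings while it's "hot"
        if cand_len < length or random.random() < math.exp((length - cand_len) / temp):
            tour, length = cand, cand_len
        temp *= cooling
    return tour, length

cities = [(random.random(), random.random()) for _ in range(15)]
print(anneal(cities))
```

Such a routine quickly finds good routes in practice; what it lacks is exactly the "boring technical proof" that it always succeeds quickly.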

Some readers may be unfamiliar with the \(P=NP\) debate so let me offer you a few introductory words. The \(P=NP\) problem is one of the 7 problems (see a comparison of the depth of all 7) for which you may win $1 million from the Clay Institute. You will collect this prize if you either prove or disprove that the class of problems \(P\) is the same class as \(NP\).

What are the classes? They're problems for which you're supposed to find an algorithm. Each problem instance is labeled by an integer \(N\), e.g. the number of countries in the "traveling salesman" example above. And the \(P\) class ("polynomial") is composed of problems that may be solved by a computer in a polynomial time (=number of instructions), i.e. time \(O(N^m)\) for some exponent \(m\). The \(NP\) class ("nondeterministic polynomial") contains problems for which a proposed solution (one that an ingenious woman or a superfast computer donates to you) may be verified to be correct in a polynomial time (defined in the same way).

The class \(NP\) is known to contain the "traveling salesman problem": find the shortest trajectory that visits all cities in a list (like the capitals of Europe above). Strictly speaking, it is the decision version – is there a route shorter than a given length? – that belongs to \(NP\), because a proposed route may be checked against the bound in a polynomial time. This \(NP\)-character of the problem is a bit nontrivial by itself. There was a time when people didn't realize that a proposed route may be verified this quickly. The most naive, brute-force algorithm to actually find the best route would check all \(N!\) or so orderings of the cities and compare their total lengths. But the "factorial time" is clearly too high already for relatively small values of \(N\) – think how much \(69!\) is – and we want a more clever algorithm that requires a much smaller number of instructions when \(N\) becomes large.
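For the programmers among the readers, a tiny sketch (mine, with hypothetical data) contrasting the hopeless factorial-time search with the polynomial-time verification that places the decision version of the problem in \(NP\):

```python
# Brute force takes (N-1)! steps; verifying a claimed route takes ~N steps.
import itertools, math

def tour_len(tour, pts):
    """Length of the closed tour visiting the points in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def brute_force_best(pts):
    """Try all (N-1)! orderings with city 0 fixed; hopeless for modest N."""
    n = len(pts)
    return min(((0,) + p for p in itertools.permutations(range(1, n))),
               key=lambda t: tour_len(t, pts))

def verify(tour, pts, bound):
    """Polynomial-time check: is this a valid tour that beats the bound?"""
    return sorted(tour) == list(range(len(pts))) and tour_len(tour, pts) < bound
```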

The determinant of an \(N\times N\) matrix may also be written as a sum/difference of \(N!\) products but this factorial-like cost is too pessimistic. You may easily bring the matrix to a triangular form, which requires just a polynomial number of operations (the determinant is then the product of the numbers on the diagonal). This speedup may be attributed to the identity \(\det(AB)=\det(A)\det(B)\). For the problems that people (still) can't solve quickly, the "analogous" identities or tricks are not known and it's possible that they don't exist. For some problems, like the determinant, it's known that the polynomial speedup is possible. For others, like the permanent (the determinant but without the alternating signs), it is "more likely than \(P\neq NP\)" that the speedup is impossible because they're \(NP\)-hard problems, i.e. at least as difficult as the hardest problems of \(NP\) (but quite possibly, the permanent may be strictly harder/slower than the \(NP\) problems). For some group "in the middle", like the traveling salesman problem, it is just not known whether the polynomial speedup is possible.
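For concreteness, a sketch of this asymmetry using the standard textbook recipes: Gaussian elimination computes the determinant in \(O(N^3)\) steps, while Ryser's formula – essentially the best known general method for the permanent – still needs exponentially many operations:

```python
# Determinant in O(N^3) via elimination; permanent via Ryser's 2^N-term sum.
from itertools import combinations

def det(a):
    """Determinant by elimination: bring the matrix to triangular form."""
    a = [row[:] for row in a]
    n, sign, prod = len(a), 1.0, 1.0
    for k in range(n):
        pivot = max(range(k, n), key=lambda r: abs(a[r][k]))
        if a[pivot][k] == 0:
            return 0.0
        if pivot != k:
            a[k], a[pivot] = a[pivot], a[k]
            sign = -sign                     # a row swap flips the sign
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= f * a[k][c]
        prod *= a[k][k]
    return sign * prod

def perm(a):
    """Permanent by Ryser's formula: a sum over all 2^N column subsets."""
    n, total = len(a), 0.0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            p = 1.0
            for row in a:
                p *= sum(row[j] for j in cols)
            total += (-1) ** size * p
    return (-1) ** n * total

print(det([[1.0, 2.0], [3.0, 4.0]]), perm([[1.0, 2.0], [3.0, 4.0]]))
```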

There probably exists something that may be called the "fastest algorithm to solve the traveling salesman problem". How does its required number of instructions depend on \(N\)? Well, it could be a polynomial function, in which case the traveling salesman problem would belong to \(P\): no one knows. Attempts to find such a surprisingly speedy algorithm have failed, but so did the efforts to prove that such an algorithm can't exist. It's enough for you to do one of these two things to pocket the million dollars.

Many other problems have been proven to be "equivalent" to the traveling salesman problem when it comes to the complexity, so they are also in \(NP\) but it is not known whether they are in \(P\). If one of them is in \(P\), then all of them are in \(P\). These equivalent problems include e.g. the decision whether a graph contains a Hamiltonian cycle and the computer scientists' most popular representative, the boolean satisfiability problem (SAT). In the latter, you write some complicated proposition (function) containing \(N\) different boolean (true/false) variables \(p_i\) connected by "AND", "OR", and "NOT". Your task is to find out whether some assignment of truth values to the variables makes the function "TRUE". The brute-force, "dumb" solution is to check all \(2^N\) assignments of the truth values to the variables \(p_i\) and to verify whether at least one of them produces "TRUE". It is not hard to see that you can usually find a much faster way to decide – but is there a universal way to decide?
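Here is what the "dumb" \(2^N\) search looks like in code – a minimal sketch with an arbitrary made-up formula:

```python
# Exhaustive search over all 2^N truth assignments for satisfiability.
from itertools import product

def satisfiable(formula, n):
    """Try all 2^n assignments; return a satisfying one if it exists."""
    for values in product([False, True], repeat=n):
        if formula(*values):
            return values
    return None

# (p1 OR NOT p2) AND (p2 OR p3) AND (NOT p1 OR NOT p3) -- an invented example
print(satisfiable(lambda p1, p2, p3: (p1 or not p2) and (p2 or p3)
                  and (not p1 or not p3), 3))
```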

Partial evidence in science

Scott quotes some otherwise mentally healthy people who, while endorsing my opinions, speculate that "Luboš is difficult to deal with". Too bad that some people can't resist contaminating their wisdom with some malicious crackpottery. At any rate, Scott's main mission is to support the idea that mathematics is full of "partial evidence". He is partly responding to a comment of mine. You're gonna be frustrated because he named the file after a meaningless tautology.

So is it possible to have "partial evidence" that makes one answer to a "Yes/No" question much more likely than the opposite answer? Yes. In natural science – and everything that is an informal version of a natural science – partial evidence is what all the knowledge is all about. Recall what Feynman said about flying saucers. (Scott mentions the LHC alarmist Walter Wagner from Jon Stewart's show who believes that the probability that the LHC destroys the world must be 50% because there are two options, yes and no, and 100%/2=50%, LOL.)

Science isn't really proving things possible and impossible. It is showing that some things are less likely and some things are more likely. In principle, one never reaches a 100% certainty but we often get super-extremely close to it. That brings me to another intermezzo: a reader of Resonaances claimed that the new major WIMP dark matter paper can't be right when it claims that the model with the dark matter is preferred over the dark-matter-less models at the significance level of 40 sigma. He or she thinks that not even the existence of the top quark is known at 40 sigma. You can find an answer of mine over there; I neglect some systematic errors in it. But millions of top quarks are enough to be certain about the "reality of the bump" at thousands of sigmas, of course. It corresponds to a 99.999... percent certainty where the number of digits is comparable to a million, too. That's the normal certainty we reach about things we "really know". Just to be sure, we never really know with certainty that a new theory is exactly right. After all, it never is exactly right. We only reach this certainty when we rule out (i.e. falsify) an older, inadequate theory – a null hypothesis that has failed.
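If you want to see where the huge numbers of digits come from, here is a small sketch converting \(k\) standard deviations of a Gaussian into the certainty level; it assumes the mpmath library because the probabilities underflow ordinary floating-point numbers:

```python
# Two-sided Gaussian tail probability for k sigma; mpmath handles the
# astronomically small exponents that ordinary floats cannot represent.
from mpmath import mp, erfc, sqrt, log10

mp.dps = 30
for k in [5, 40, 1000]:
    p = erfc(k / sqrt(2))          # probability of a fluctuation this large
    print(k, "sigma: certainty is 1 minus 10^(", mp.nstr(log10(p), 6), ")")
```

For 40 sigma the tail probability is around \(10^{-350}\), and for a thousand sigmas the number of "nines" in the certainty runs into the hundreds of thousands.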

I hope that these things are clear to you. The truth in science boils down to the experiments and observations – the empirical evidence – and it is always noisy and must be evaluated in terms of statistics.

Analogous partial evidence in mathematics

What is the character of the "elementary step" that allows us to prefer one explanation/theory/hypothesis and disfavor another? Well, try to think about it. What is really happening when we are disfavoring a hypothesis is that the observed values are "pretty far" from the predicted ones (in the units of the error margin, either the theoretical or the experimental one, or some combined one). Large deviations are extremely unlikely, so the more the predictions deviate from the observations, the more we disfavor the hypothesis, roughly speaking.

But you can see that to proceed like that, the predictions have to be essentially continuous numbers that are peaked around some value. There must be some reason to believe that the distribution is continuous or known or Gaussian or effectively similar to a Gaussian distribution – and such reasons only exist for continuous quantities, or quantities that are continuous in some approximation scheme.
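A minimal sketch of this elementary step, with hypothetical numbers (the 3.2-sigma deviation below is invented for illustration):

```python
# How strongly a measurement disfavors a hypothesis: the deviation between
# prediction and observation, in units of the combined error.
from math import erfc, sqrt

def disfavor(predicted, observed, sigma):
    k = abs(observed - predicted) / sigma      # deviation in "sigmas"
    return k, erfc(k / sqrt(2))                # two-sided p-value

print(disfavor(100.0, 103.2, 1.0))   # 3.2 sigma -> p of about 0.0014
```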

Alternatively, you may also disfavor a hypothesis that only predicts the observed data if it assumes too many details (e.g. it assumes that 6,000 years ago, someone placed the dinosaur bones exactly in a way that fools you into believing that the dinosaurs actually lived tens of millions of years ago). If an alternative theory can yield equally good or better predictions without this pile of arbitrary assumptions, it is preferred.

But to discriminate the hypotheses in this way, you require the existence of an exponentially large number of (wrong) hypotheses in which the bones or something else could be placed totally differently; that's what we mean when we say that the "someone arranged the bones" theory is not simple. The "right arrangement of dinosaur bones by God" is a theory that must share its prior probability with an exponentially large number of analogous hypotheses that may be easily falsified (e.g. the theory that a Bible may be found in each cubic meter of soil because it would be rather sensible for God to make His important text more available). So the prior probability for the "artificially arranged bones" is already exponentially low – it has de facto been adjusted after the evidence was found. Needless to say, it's still not enough: when you find some more detailed evidence, your original theory fails again unless it is adjusted again, and so on. The theory never "wins" a verification or a "check" in the unchanged form – a signal that you may be on a completely wrong track (well, the track is known as creationism in this case).

It is not hard to see that some variations of this "partial evidence" may be used in mathematics, too. For example, I do believe that the Riemann Hypothesis is far more likely to be true than to be false – even though it has not been proven yet. Why do I believe it? Because the Riemann Hypothesis deals with continuous functions; it is analogous to natural sciences in some way.



Recall that the Riemann Hypothesis says that whenever \(\zeta(s)=0\), then either \(s\) is real or \(s=0.5+it\) where \(t\) is real. Take all the nontrivial roots, i.e. values of \(s\) which are not real and for which \(\zeta(s)=0\). Sort them increasingly according to the imaginary part \(t\); those with a positive \(t\) are enough (the negative-\(t\) roots are just their complex-conjugate mirror images).

The Riemann Hypothesis says that the real part of all these nontrivial roots \(s\) is \(0.5\). It's demonstrably true for the first "lots and lots of trillions" (some large number) of the roots with the smallest values of \(t\). There are many other proven restrictions that make the theory "the Riemann Hypothesis is true" remarkably consistent and "passing nontrivial checks" much like physical theories do. But let me just focus on the first example: the first trillion roots \(s\) have \({\rm Re}(s)=1/2\). If the Riemann Hypothesis is false, then there has to exist the "smallest positive-\(t\) nontrivial root that violates it". In the ordering according to \(t\), it is the \(N\)-th root where \(N\) is known to be greater than a trillion – in fact, greater than some much higher integer I don't want to look up.
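You may redo a micro-version of this check yourself. Assuming the mpmath library is installed, its zetazero(n) function returns the \(n\)-th nontrivial zero, ordered by the imaginary part:

```python
# Inspect the first few nontrivial zeros of the Riemann zeta function.
from mpmath import zetazero

for n in [1, 2, 3, 10, 100]:
    s = zetazero(n)
    print(n, s)        # the real part comes out as 0.5 every single time
```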

So it's sensible to assume that there has to exist an explanation why the Riemann Hypothesis works "so well" whenever tried, and the simplest explanation is that the Riemann Hypothesis is completely true. It is analogous to the "mathematical induction" applied to the sunrise. The Sun rose today, yesterday, in the previous trillion of days, so it's probably because the Sun rises and will rise every morning.

However, in the case of the Sun, we know that the conclusion we proved by the (fake) mathematical induction is invalid. The Sun will go red giant in 7.5 billion years and the dangerous climate change prophets will finally be proven right at that point after some waiting. We can figure out that the Sun's lifetime is finite by watching other stars; or by studying the finiteness of energy produced by the thermonuclear fusion.

But this conclusion about the Sun seems to be due to the "messiness of nuclear physics and astrophysics". If we were looking at the quality of the sunrise defined in some way, it would change a little bit every morning. However, the real part of the roots \(s\) in the Riemann Hypothesis case is exactly equal to \(1/2\) for all the roots. The roots don't seem to "age" like the Sun. There seems to be no imperfection in them. That's why it's more sensible to believe that the roots of the Riemann zeta function just never deviate away from the critical axis and the nontrivial checks that the Riemann Hypothesis has passed rightfully suggest that there actually exists a proof – a so far unknown one – that all the nontrivial roots actually do lie on the critical axis i.e. that the Riemann Hypothesis is right.

\(P=NP\) hasn't really passed any checks

The Riemann Hypothesis is a powerful statement that makes "many predictions" and these predictions may be tested against the "data" – and so far all of them have worked. The explanations needed to remain consistent with the data while assuming that the Riemann Hypothesis is false are much more contrived. For example, the invalidity of the Riemann Hypothesis requires that there exists the "first, smallest offensive root" and that it's the \(N\)-th one where \(N\) is Hugo Antiriemann's constant, an integer that will be found in the 27th century. This fundamental constant of mathematics is equal to \(N\approx 2.014\times 10^{2,014}\) or so.

Well, it's perfectly plausible. High "fundamental" numbers like that (and very long proofs) actually occur in many relatively simple problems, sequences, etc. in mathematics. Still, it seems awkward to assume such an outcome involving Hugo Antiriemann's giant constant. In mathematics, such super-high integers only occur in "purely discrete things" while the problems which are essentially continuous (like problems involving the Riemann zeta function) do seem to respect some naturalness, at least to some extent.

Hugo Antiriemann's constant \(N\) is a large integer but there's also the corresponding root \(s\) for which \(\zeta(s)=0\). This \(s\) lies somewhere in the critical strip, very very... far from the real axis, and slightly away from (and probably extremely close to) the critical axis, too. The existence of such a special complex number \(s\in\mathbb{C}\) seems more contrived than the "very large integers" one may generate in mathematics. It would be a continuous fundamental number in mathematics analogous to \(\pi\) (although a less important one) that should be "natural" and therefore of order one but that ends up being gargantuan, anyway.

So I am not 100% sure whether the Riemann Hypothesis is true; or whether there exists a huge Hugo Antiriemann constant like that. But it is fair to say that I feel more than 99.9% certain that the Riemann Hypothesis is exactly true – because the partial evidence supports this belief in a way analogous to partial evidence in natural sciences with continuous numbers. And I haven't even discussed other kinds of evidence that the Riemann Hypothesis is true and that a proof of it could exist – like the proofs of its weakened versions or of its variations over different fields than the complex numbers, etc.

But what about \(P=NP\)? I don't really think that the prevailing belief \(P\neq NP\) has passed any nontrivial checks analogous to those in the case of the Riemann Hypothesis. It is a belief that the fastest algorithms to solve the "traveling salesman" and related problems have to be slower than polynomial. In effect, we are asking whether one particular object – a polynomially fast algorithm to solve one particular problem – exists.

There can't be any "partial evidence" supporting one answer or another simply because this assertion, \(P\neq NP\), is not making many predictions. It only makes one prediction, namely itself. It doesn't say something about an infinite number of roots that we may start to check one by one (or in some groups). It only predicts that a proof of \(P\neq NP\) should finally be found – so far it hasn't been found, so this slightly weakens the belief in \(P\neq NP\). And it predicts that all people trying to find the polynomially fast algorithms will always fail. Yes, so far they have failed, which slightly increases the belief in \(P\neq NP\), but this increase is really negligible because there exist no "deadlines" by which you're expected to solve any mathematical problem. It's clear that there exist many mathematical propositions that are true but unproven at this point – they will only be proven sometime in the future. Moreover, this fact, as stressed in the previous sentences, is completely symmetric between \(P=NP\) and its negation.

But that's it. There are really no continuous "predictions" coming from \(P\neq NP\). And the claim \(P\neq NP\) cannot be "split" into infinitely many statements that could be verified separately. In the same way, we can't use the same method to "disfavor a contrived hypothesis" as we did in the case of the creationist dinosaur bones, simply because \(P=NP\) is pretty much as simple and non-artificial as \(P\neq NP\). In fact, I would argue that \(P=NP\) is more natural than \(P\neq NP\). Well, \(P=NP\) seems more analogous to the Riemann Hypothesis, \(E=mc^2\), the charge conservation law, or \(xp-px=i\hbar\) – and to all claims in mathematics and science that say that something is equal to something else.

I realize that the odds are comparable to 50-50 but I think that mine is a natural approach of a natural scientist. If there is a hypothesis that two things (or sets of problems, in this case) are equal to each other, it is a very natural, simple hypothesis and one should only abandon it if and when it is falsified.

Scott Aaronson tries to humiliate my open-mindedness:
Discrete math and computer science, you see, are so arbitrary, manmade, and haphazard that every question is independent of every other; no amount of experience can give anyone any idea which way the next question will go. No, I’m not kidding. That’s his argument.
But I don't feel humiliated at all. This quote describes exactly how things are. Yes, mathematics is man-made. People, and not Nature, invent the rules. The right laws of physics are "out there" and we must find them; we don't have the freedom to decide what the right axioms are. But mathematicians do have this freedom; this has been the defining difference between mathematics and physics since their divorce. Some deep maths is also essential in physics etc. but that's just not the case for generic discrete mathematics.

Two statements in discrete mathematics may either be (rigorously) shown to imply each other in one direction or the other or both (or one may imply the negation of the other, etc.) – in which case you may say that the probability of one is greater than, smaller than, or equal to the probability of the other (or one minus it) – or they cannot be shown to be related in this way. In that case, you cannot say anything about their probabilities at all. They are completely independent of each other.

You may become an "expert" who is able to guess whether a large integer is prime. Usually, it's not, the "expert" may say. You can say "it's surely not" if it ends with an even digit or if the sum of its digits is a multiple of three, and so on. But all this "expertise" only exists because the occupation – determining whether integers are prime – is composed of many analogous problems. The rules for the divisibility by two, three, etc. may be rigorously proven. Even where you can't prove things, you may still maintain your expertise about the "decreasing density" of primes etc. And your expertise may also contain lots of theorems implying that numbers with some other, complicated properties are prime (or are not prime). You know at least something. But in the \(P=NP\) case, there is nothing like that. There is no experience, there is no expertise. It's one binary question that may either be proven or disproven or nothing. There is nothing in between.
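A toy sketch of this cheap "expertise" – my own illustration, combining the rigorous divisibility filters with the \(1/\ln n\) prime-density prior for the survivors:

```python
# A probability-like "expert guess" that n is prime, before any real test:
# rigorous cheap filters plus the ~1/ln(n) density of primes.
from math import log

def prime_guess(n):
    if n in (2, 3, 5):
        return 1.0
    if n % 2 == 0 or n % 5 == 0 or sum(int(d) for d in str(n)) % 3 == 0:
        return 0.0                      # the rigorous part of the expertise
    # numbers coprime to 2, 3, 5 have density 4/15; rescale 1/ln(n) by that
    return min(1.0, (1 / log(n)) / (4 / 15))

print(prime_guess(91), prime_guess(97))   # 91 = 7 * 13 fools the cheap filters
```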

\(P=NP\) is the claim that a particular fast enough algorithm exists. It's like asking whether an asteroid is there in a volume inside the Solar System. But the volume is chosen to be at such an unknown yet unique or special place that you can't have any idea about the "density of asteroids" and other things over there. Preferring one answer over another is pure prejudice.

We are talking about a particular fast algorithm for the traveling salesman problem that may be found in the year 2350. It will first be written on a particular future "computer" by a particular person (with some CPU implants), etc. It doesn't matter. Whether such an event will happen is one discrete question and every question that cannot be proven to be equivalent (or at least weaker or stronger) is completely uncorrelated with this one. Whether Beethoven had some special tumor in his brain that made him a better composer is a different question. One may demagogically suggest that \(P=NP\) is equivalent to some other question but the fact is that it is not, and Scott Aaronson is a bigot if he is incapable of seeing this simple fact. Moreover, Scott is rather likely to be wrong about the composers, too, so they wouldn't strengthen his bold claims that \(P\neq NP\) is "almost proven" even if the analogy with music were legitimate, which it's not.

His inability to see that he has no real evidence is sort of striking.
But then what about Goldbach’s Conjecture? Is Luboš 50/50 on that one too? Better yet, what about continuous, analytic problems that are closely related to \(P\) vs. \(NP\)? For example, Valiant’s Conjecture says you can’t linearly embed the permanent of an \(n\times n\) matrix as the determinant of an \(m\times m\) matrix, unless \(m \geq \exp(n)\). Mulmuley and others have connected this “continuous cousin” of \(P\neq NP\) to issues in algebraic geometry, representation theory, and even quantum groups and Langlands duality. So, does that make it kosher? The more I thought about the proposed distinction, the less sense it made to me.
Goldbach's conjecture is "probably" true for reasons completely analogous to those for the Riemann Hypothesis above. It makes lots of predictions and they have passed many tests. The smallest integer violating the conjecture would have to be "unnaturally large". One may talk about the densities of numbers obeying the proposition etc., which are continuous. But yes, Goldbach's conjecture is number theory not translated to complex analysis, as far as I know, so indeed, my degree of "certainty" that Goldbach's conjecture holds is smaller than in the case of the Riemann Hypothesis. It is much more plausible in Goldbach's case that there is some large integer that is the "first violating one". Similar examples are abundant in number theory!
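The "tests" that Goldbach's conjecture keeps passing are easy to rerun at home; a small sketch with a sieve of Eratosthenes (the bound \(10^4\) is an arbitrary choice):

```python
# Verify that every even number up to a bound is a sum of two primes.
def sieve(limit):
    is_p = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return is_p

LIMIT = 10 ** 4
is_prime = sieve(LIMIT)
for even in range(4, LIMIT + 1, 2):
    if not any(is_prime[p] and is_prime[even - p] for p in range(2, even // 2 + 1)):
        print("counterexample:", even)
        break
else:
    print("every even number up to", LIMIT, "passed")
```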

So I would say Goldbach's conjecture is true at 95% for me. It may mean 80% or 99%; it doesn't really make sense to insist on a huge precision of these probability estimates. The probability that God knows is either 0% or 100% and nothing in between, anyway! Although I tend to believe that Goldbach's conjecture is right, I would never think that a person claiming to prefer the opposite answer is a hack just because his answer is opposite – I don't really have any 5-sigma-like evidence or proof of the conjecture and he may have some stronger evidence that I am not grasping.

But Goldbach's conjecture is in no way equivalent to \(P\neq NP\) so he is just spreading fog when he identifies them. Scott is showing that his brain isn't operating properly. One conjecture may be right and the other false, or vice versa; both may be right; both may be false. One conjecture may be proven soon while the other resists for a long time; one of them may admit (and does admit) partial evidence while the other one doesn't admit anything like that; and so on.

But that was still OK. The way Scott continues shows his utter inability to think rationally. Valiant's conjecture has been claimed to be "analogous" to \(P\neq NP\) – the "analogy" is somewhat less sloppy than Scott's "analogies" with Mozart – but it is neither stronger nor weaker than \(P\neq NP\) so they're still in principle unrelated. Much more seriously, it remains a conjecture! So whether Valiant's conjecture holds is indeed as open as \(P=NP\) itself. The paper on Valiant's conjecture that Scott linked to works on a "program to prove it". But before such a program succeeds, the evidence supporting your prejudices is zero. An impartial mathematician – or at least an impartial group of mathematicians – should actually participate in two programs, one to prove the conjecture and one to disprove it (again, the negation of Valiant's conjecture is actually more natural than the conjecture itself). If you are asking whether I am claiming that these mathematicians are biased jerks, aßholes, and victims of groupthink, I must remind you that it is you and not me who chose the particular wording for the valid observation. ;-)

Concerning the inequivalence of determinants and permanents, great, we may show that the same simple tricks to quickly calculate determinants don't work for permanents. But that doesn't mean that there are no other tricks that work for the permanent, or the traveling salesman, and so on.

One may rephrase \(P=NP\) in many equivalent ways but none of them has been proven and none of them has been disproven. One may incorporate comments about \(P=NP\) into papers on the Langlands program or any other deep or shallow research direction in mathematics but none of these programs has actually produced any genuine argument for or against \(P=NP\). So Scott's arrogant references to some other people who irrationally assume that the answer is \(P\neq NP\) without really knowing show that he completely misunderstands the depth, strength, and character of my (our) criticism, namely that these are prejudiced people controlled by irrational groupthink.

Scott Aaronson claims that \(P\neq NP\) has passed tests. However, it may be seen that his logic in all of these "tests" is defective. More precisely, his logic always suffers from circular reasoning.
By now, tens of thousands of problems have been proved to be \(NP\)-complete.
That's great but as long as we correctly follow rational thinking – or Bayesian inference, if you want to be more specific – it has no implication for the validity of \(P=NP\) whatsoever. I will discuss below the fact that it's more conceptually accurate to talk about "one" and not "tens of thousands".

The relationship between the sets \(P\), \(NP\), \(NP\)-complete, and \(NP\)-hard looks like one of the two pictures here:


You see that the right diagram is a bit simpler – you might say that \(P=NP\) is more economic and \(P\neq NP\) is a bit more contrived (much less contrived than the seeded dinosaur bones, however) – but even if you avoid the bias that favors simplicity (with the opposite sign than Scott says somewhere), you see that they're two qualitatively different diagrams showing the relationship between the two sets. The tens of thousands of problems proven to be \(NP\)-complete either belong to the intersection of the \(NP\) circle and the \(NP\)-hard "parabola" in the left case \(P\neq NP\), or they belong to the circle \(P=NP=NP\)-complete which is a subset of the \(NP\)-hard "parabola" in the right diagram assuming \(P=NP\). Which of the two diagrams looks right to you and which of them (the remaining one) is left? ;-) There's no way to say.

(Just to be sure: I emphasize that the "simpler" theory is not necessarily right. Science just doesn't work like that. But \(P=NP\) is simpler pretty much in the same sense in which simpler theories in physics are simpler. One may also express the advantage in a different way, as an unusually sensible reader of Scott's blog did: the burden of proof should be on those who claim \(P\neq NP\) because it's enough for them to find one example of a problem that is in \(NP\) but demonstrably not in \(P\). This argument is somewhat weakened by the fact that all the decisive problems in \(NP\) are "speedwise equivalent" but if you neglect this "speedwise equivalence", \(P=NP\) may naturally be called the "right default null hypothesis" because by realizing that the sets contain many problems, it should be easier to falsify it than to falsify \(P\neq NP\).)

When similar "mild" asymmetries or biases are classified as "not decisive", the two conjectures must be assumed to have comparable prior probabilities. None of them contains lots of "fudged artificially seeded dinosaur bones", so you can't disfavor any of the two hypotheses on this basis. And all the "data" that Scott presents are compatible with both hypotheses, with both diagrams! So the observation that tens of thousands of problems have been shown to be \(NP\)-complete clearly changes nothing about the probability that \(P=NP\). That's what Bayesian reasoning says. Both hypotheses pass the check – no contradiction emerges – so both hypotheses have the same posterior probabilities as the prior probabilities.

One may try to fool oneself with some "biased description" of the facts but nothing changes about the fact that the probability of \(P=NP\) is unaffected by the evidence because both hypotheses are compatible with the "empirical data". One could even try to spin the insights in the opposite direction. Lots of ways to reduce one problem to another have been found, and \(P=NP\) is effectively saying something of the same sort, namely that verification algorithms for the specific problems may be "decorated" to yield (not insanely slower) algorithms to solve the problem, according to some dictionary. I would claim that many "similar" reductions have already been made but the word "similar" is a matter of subjective feelings. What is important is that there is no proven equivalence in one way or another, so the "similarity" of the question whether \(P=NP\) to any problem that has already been answered is completely inconsequential!

The same comments apply to everything that Scott presents as "partial evidence". He describes a "close call", some two sets that seem to be nicely touching. But the detail he is unwilling or incapable of seeing is that this touching of the two sets is totally compatible with the \(P=NP\) diagram of the relationship between the sets, too! The only difference is that \(P=NP\) actually simplifies the shape of the boundaries between the sets because it identifies \(P\) with \(NP\) and \(NP\)-complete. It hasn't been proven that the sets are "simplified" in this way but it hasn't been disproven, either, and this assumption contains no "dinosaur bones".

Now, a comment about the tens of thousands

I think that Scott is also trying to make these "equivalences" look reliable and difficult at the same moment. If two descriptions are found equivalent in physics (duality), physicists are just speaking about the theory as if it were one thing because it is really one thing. If you only care about the polynomial character of the solving algorithms, the traveling salesman problem is the same thing as SAT. They're expressed in two languages but the beef is the same because the equivalence/conversion has been found. So it's really demagogy to talk about tens of thousands of problems. When it comes to the polynomial speed (not) achievable by the fastest algorithms, they are one thing, not tens of thousands of things. There is no analogy of the large number of the truly distinct roots of the zeta-function that have passed the consistency check. There is just one damn thing – the traveling salesman problem – and the question is a purely binary inequality-like question whether the minimal required time is \(f(N)\lt N^m\) for some \(m\) or not.

The right explanation why all these 10,000 problems have the same speedwise properties is known; it consists of the known "conversion" proofs of their speedwise equivalence. It is therefore silly to suggest that "another explanation" is helpful or "almost needed". In particular, it is silly to say that \(P\neq NP\) is helpful or "almost needed" for the equivalence – the actual reasons for the equivalence are known and more mundane, and \(P\neq NP\) is neither needed nor helpful at all.
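A tiny example of what these mundane "conversion" proofs look like – a classic textbook pair rather than one of the ten thousand: a clique in a graph is exactly an independent set in the complement graph, so the two problems are speedwise one problem:

```python
# CLIQUE and INDEPENDENT-SET reduce to each other in polynomial time:
# complementing the graph maps one problem onto the other.
def complement(edges, n):
    """All non-edges of the graph on vertices 0..n-1."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if (i, j) not in edges and (j, i) not in edges}

def is_independent_set(vertices, edges):
    return all((i, j) not in edges and (j, i) not in edges
               for i in vertices for j in vertices if i < j)

def is_clique(vertices, edges, n):
    # a clique in G is exactly an independent set in the complement of G
    return is_independent_set(vertices, complement(edges, n))

print(is_clique({0, 1, 2}, {(0, 1), (1, 2), (0, 2), (2, 3)}, 4))  # True
```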

It is just completely compatible with all the evidence that, using Scott's language, he is looking at one group of frogs that belong to one species and he is fooling himself into believing that there must be many species just because he is not able to prove that they are one species. No one has found any evidence that they actually cannot have sex with each other; they have only found out that frogs in the smaller groups may have easy and fast sex with each other, because they're siblings, but that doesn't make the sex with others impossible. In fact, the sex with non-siblings that can't easily be seen to "carry the same DNA" is the only sex that really matters (sex has developed because the randomly hybridized DNAs from parents produced new organisms that were more immune against parasites), and the evidence whether it is possible here is non-existent in either direction (yes, don't worry, sex in the real world is possible "even" among non-siblings, enjoy it).

The whole point of Scott's demagogy is that he assumes \(P\neq NP\) and leads his idiotic readers to think in terms of a model where \(P\neq NP\), and then watch how the people who say \(P=NP\) look stupid. But what this demagogue doesn't seem to realize is that one may equally well assume \(P=NP\), create a mental model based on this assumption, and "prove" how idiotic the people who believe \(P\neq NP\) are. Both approaches are demagogy plagued by circular reasoning so I won't even show you the version assuming \(P=NP\). It's really like looking at the morning star and the evening star and claiming that they must be two different celestial bodies, blah blah blah, when in fact a slightly more sophisticated approach to astronomy is enough to see that both are the same Venus – a point that idiots like Scott can't ever comprehend. You get the point. I won't write dozens of paragraphs of this worthless demagogic junk because I am no Aaronson. He can't distinguish demagogy and circular reasoning from a rational argument or evidence; I can.

At some point in the middle of Scott's text, I stopped reading because I think it's a waste of time. Based on the partial evidence I have seen, it seems extremely likely that the whole text written by Scott Aaronson is the same pile of crap as the pieces that I have already read. The hypothesis that there is something valuable in it forces one to assume that there exists a first valuable and meaningful sentence and that it is the \(N\)-th sentence where \(N\) is demonstrably a very large number. Such a large \(N\) makes the hypothesis more contrived. So the probability of some valuable content in the rest of Scott's text is too small for me to waste more time with it.

That was an example of a situation in which partial evidence is possible. For \(P=NP\), partial evidence is not possible and all the "nontrivial checks" that \(P\neq NP\) has passed according to Scott are actually trivial checks and tautologies because \(P=NP\) passes them just as well. Some people are irrational bigots and bullies and I recommend that Prof Lipton buy a firearm that he may need if he ever runs into Scott Aaronson.

And that's the memo.

P.S.: In order for Dr Lipton and me not to be alone, let me quote two more established \(P=NP\) agnostics:
The main argument in favor of \(P \neq NP\) is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution of Fermat's Last Theorem also shows that very simple questions may be settled only by very deep theories.
—Moshe Y. Vardi, Rice University

Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.
—Anil Nerode, Cornell University
Amen to that.


snail feedback (31) :


reader bg2b said...

Graph isomorphism is one of those rare problems that is obviously in NP, but is not known to be either NP-complete or in P.


reader Luboš Motl said...

Well, yes, but your wording seems biased to me, too. Why do you call it "rare"? What is the measure on the space of problems that you are using to quantify this "rarity"?


The SAT or traveling salesman problems are known to be speedwise equivalent to one another and to many others - but that's a reason why this whole equivalence class should be called "one speedwise problem", and it is exactly as "one" as the "one" graph isomorphism problem. So it would be completely idiotic to consider SAT or the salesman more "true" or more "representative" or more "typical" than the graph isomorphism problem. Their "testimonies" clearly must be attributed comparable strengths. It's really common sense that there is no "qualitative" difference between them that would make one important or representative while the other would be "worth overlooking".


Some problems are known to be equivalent in their membership to the sets. Some other problems are not known to be equivalent. All the evidence is totally compatible with the assumption that P, NP, and NP-complete are ultimately the same set. It is not a contrived theory that could be legitimately ascribed a small prior probability; and the probability can never get reduced by Bayesian reasoning because there is no single piece of "data" that would contradict this P=NP assumption. So it's common sense that any rational thinking must end up with comparable probability for P=NP and P!=NP.


reader bg2b said...

By rare I just mean there are few natural problems that are in NP but not known to be either NP-complete or in P. Wikipedia lists 25 for example. That's in contrast to thousands of natural problems found to be NP-complete and thousands more known to be in P.

BTW, while we don't know that P is a proper subset of NP, or that NP is a proper subset of EXPTIME (problems requiring at most an exponential amount of time by a deterministic computer), we do know that P is a proper subset of EXPTIME. So at least one of P vs. NP or NP vs. EXPTIME has to be a proper inclusion as well. If my prior probability for both P != NP and NP != EXPTIME was 50%, then learning P != EXPTIME would lead me to conclude that P != NP with probability 67% :-).
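Writing that arithmetic out under the stated independent 50/50 priors: P = EXPTIME would require both P = NP and NP = EXPTIME, so P(P != NP | P != EXPTIME) = (1/2) / (1 - 1/2 * 1/2) = (1/2) / (3/4) = 2/3, exactly the 67%.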


reader Klum said...

Very nice post.
Though I disagree that "mathematics is man-made", or that "man-made" is something which is "non-objective" for that matter.


reader Luboš Motl said...

Thanks, Klum, also for your cautious words at Scott's website.


I am actually not really in a contradiction with you. As a Platonist, I believe in mathematics as a conglomerate of pre-existing structures and relationships.


reader anna v said...

Well it seems that Russia is keeping the standard of coherence of the dissolution of Yugoslavia by unilateral NATO decisions. It is remembering Kosovo. What is sauce for the goose is sauce for the gander.

I am sorry to say that "democratically" is a word used by the west as a battering ram. What can be more democratic than real demonstrations of thousands on Meidan? Or what can be more democratic than a referendum of real people in Crimea? It is the voice of the Demos. All else is lawyer like rules and regulations to be manipulated as the ones with the upper hand need for their own interests.


reader John Archer said...

This topic is fun business!

Luboš,

A decent respect to the opinions of mankind compels me to state a fundamental truth that I just discovered:

Scott Aaronson isn't all bad — the fellow has a sense of humour!

And all people with a sense of humour get into Heaven. So that makes him good enough. QED

The fellow's OK. :)


reader henrik.l said...

Ukrainians are blackmailers... And without wanting to insult the stupid, they never should have left the Russian Federation.

Being an Autonomous Republic within the Federation could have been a good business for them; see today: they will be marionettes of the sinister US in a bloody conflict against Russia. Sad, they are brothers.


reader Shannon said...

True. Amongst the 17 million people killed during the Holocaust, 6 million were Jews and 11 million were not. They were: Soviet and Polish civilians, Soviet prisoners of war, Romani people, people with disabilities, Jehovah's Witnesses, homosexuals, mixed-race children.
We should never forget that indeed.


reader Shannon said...

Nnon, it is for the nations of the world to recognise a past massacre as a genocide, or not. It takes time and a lot of lobbying, which often involves political and diplomatic issues. For example, the Armenian genocide has now been recognised by a lot of nations. Even Bachar El Assad recognised it in January of this year...


reader RM said...

No polynomial time travelling-salesman algorithm has been found in the collection of all programs of size less than N bits, for N much larger than that needed to express several equivalent exponential+ time algorithms. That's really no different than your example of roots: no counterexamples below some very large size of tests (and yes there are tests; those would be the algorithms/programs themselves just as the RH tests are the roots themselves) provide bias toward evidence of truth. That seems to be the entirety of your argument for the RH, and I don't see where you point out a reason why it wouldn't also work for P=/=NP.


reader nnon said...

"Massacre" or "genocide" are just some words. Orphaned children do not care if their parents were killed because of a single murder, mass murder, pogrom, ethnic cleansing or genocide. Of course, the greater the number of victims, the greater impact it would have on international community, but good law systems should reject concept of collective victims in the same way they reject the notion of collective guilt. Some Jews should receive money or real estates not because they are Jews (unfortunately some jewish organisations would like it to work in this way) but because they or their particular ancestors were victims of particular crimes of Nazi Germany. And similarly, when I write about Tatars, I don't want to say that they all should receive Crimea because they are members of one particular nation. I just say that some Crimean Tatars should obtain land on Crimea from Russia, because they are victims or descendants of victims of Soviet Union. Obviously, each case should be examined individually.
One can say that by definition in a state of law victims get their satisfaction, or otherwise, one can speak not about a state of law, but a state of lawlessness. Russia is a state that didn't break ties with their communist past (for many Russians, including many members of the Russian government, Lenin or Stalin were cool guys, good gospodins, and the actions of NKVD were the proper way to introduce order) and, in the same time, haven't even tried to repair all the wrong and unjust things that have been done by Soviet Union. Thus, Russia is a state of lawlessness and should be considered in this way on the global political scene, exactly in the same way, the global community treats Somalia or Burundi. No country has to recognize any events as genocides or anything else to put a pressure on Russia, it would be enough to simply stick to the definitions of basic concepts like "law", "justice" or " a crime".


reader Luboš Motl said...

Could you please explain how much is N and what is the reference behind your bizarre claim?


My observations about RH had nothing whatsoever to do with the "search through all possible proofs".


The latter is clearly a meaningless strategy to prove things. Even proofs that require just 50 characters would need to go through something like 26^50 ≈ 10^71 strings to be found, and that's vastly more than the age of the Universe in Planck times.


The proof of the classification of finite groups requires something like a million characters, so you would need something like 26^(one million) operations to go through all equally long proofs. And the proof of the Erdős discrepancy result includes data with billions of characters, so you really don't want to search through all strings of the same length as the existing proofs of rather ordinary things, do you?


WTF?


reader nnon said...

So if modern Germany never missed an occasion to show the world how much it respects the achievements of the Third Reich, and it profited from properties stolen from Jews during the Holocaust, would you call it a state of law?? Would you think that other countries should feel OK making business with it? If so, it's certainly you who is totally brainwashed here.


reader PlatoHagel said...

Is this an answer to the Firewall Paradox?


reader cynholt said...

The Ukraine may need Russian force to prevent another Yugoslavia disaster from occurring. The place is an ethnic powder keg -- one that has been engineered by Globalist dark forces and their fascist allies. The head puppet Obama has warned the Russians to back off. That should be proof enough. Not saying the Russians are good guys here. Only that some early intervention may save many lives in the region. The Ukraine may need to be Balkanized to prevent a war before it starts.


reader cynholt said...

These acts of "meddling" all cost money, lots of money. Meddle here, meddle there, meddle, meddle, everywhere. The only question is when, when will the US no longer be able to afford to meddle again, anywhere. As China grows, the US implodes. All good statistics continue to decrease, while we can only claim number one in prison populations and military industrial complex. Remember those pictures of abandoned Soviet ships rusting in abandoned military ports after the collapse of the Soviet Union? The cost of our meddling will accelerate that likely scenario for America – we´ll be selling our ships and planes as scrap to China. Meddling is an accelerator.


reader cynholt said...

Putin is a player, Obama is a puppet. There really is quite a difference. Putin can speak for himself, while Obama can only mime the orders given to him by his handlers. And Obama handles his staff with the same fury that his handlers handle him, so a lot of what we're getting from the White House staff is just a coverup of the stupid decisions being made. A lot of strings are being pulled by the masters of wealth to make sure Obama keeps this train on the tracks, but I am afraid he is no better a driver, and cares about as much for his passengers, as the engineer of that Spanish train that derailed recently. When I see that train come around the corner and start to flip off the tracks onto its side, I see the Obama presidency, with the Shrub as his co-pilot, doing the same thing to America. America has derailed, folks. We're off the tracks!

Criminals like CIA operative Robert Seldon Lady are adored by our present administration, but heroes to the American people like NSA whistle-blower Edward Snowden are considered criminals. Definitely way off the tracks!


reader Dilaton said...

I agree that a more efficient way to make the world better would be to immediately stop current rascal states who are doing big wrongs against humanity today, instead of exclusively looking or pointing at admittedly awful single bad things that happened in the past. I also agree that there are victims of other past crimes, done by aggressive dictatorial regimes all over the world, who should obtain compensation for their sufferings too, apart from the Jews.

Sometimes I am even getting the impression that some people keep exclusively talking about the Holocaust to dwarf all the other bad things that have been and are still going on everywhere in the world, and to motivate their looking away from, and doing nothing about, those other crimes against humanity.


reader cynholt said...

Our Hawks are especially disturbed because all of this is their fault. They are desperate to escape from the consequences of what they did.

Putin has a red line, a very reasonable and very clear red line. Nuland and her ilk danced gleefully and recklessly across that red line, and suddenly found out that it was a real red line. Now they must blame someone, anyone, everyone, and make us "fight back" to cover their blunder.

Russia fought for the Crimea with our own John Paul Jones as their Admiral in 1783, and they fought for it again and again after that, losing and regaining it with our help in both World Wars. They now have the same sort of long term lease we have on Guantanamo and had on the Panama Canal Zone. Their thinking on it is clear.

Nuland and her neocon cronies reigniting the Cold War were arrogant fools.


reader cynholt said...

Ukraine is just a side show and a sneak preview of things to come in the East, in particular, the disputed islands in the South China Sea. China is watching carefully and will follow the same playbook as Russia.


reader Shannon said...

Yep. China must be learning loads these days from Putin's outstandingly clever strategy. Russia and China are investing a lot to modernize their armies... while the US (and France) are doing just the opposite. Who are the visionaries?...


reader Shannon said...

Gene, I also do believe that mixed ethnicities can live peacefully together as long as the newcomers accept full assimilation.


reader cynholt said...

The lies that our government and corporate press are spewing about the Ukrainian "crisis" are so obviously transparent that I don't see how any thinking person could not see through them. The Obama Administration shamefully and cowardly used the Sochi Olympics to further destabilize Ukraine while Russia was otherwise occupied. Instead of attending and throwing an olive branch to President Putin, Obama tried to rub his nose in it by sending gay athletes in his stead. Instead of embarrassing Putin, he embarrassed himself and the nation by attempting to ruin the Olympics, which Russia worked so hard to make a success.

Obama should have attended the Olympics. What better chance to have a meaningful dialogue? Obama acted, and is still acting, like a spoiled child. Wars start when diplomacy fails. JFK faced much more formidable challenges against an aggressive USSR and was able to finesse them without a shot being fired. The declining US no longer produces leaders of such stature and the press is no longer free.


reader cynholt said...

China sees the writing on the wall and doesn't want to rock the boat. They are exchanging US debt for gold at a pace that doesn't send prices sky rocketing. They know their best bet is to take advantage of the situation to buy hard assets like gold while the dollar still has value. Until they're divested, the last thing they want is a dollar collapse. They have their own well being in mind, not some diabolical scheme to upset the system and rule the world. I think they have logically come to the conclusion that a collapse of the current paradigm will not be a controllable or predictable event and are hedging accordingly.


reader Honza said...

As I read your comments, you might like this. (Text says:
-I will destroy America!
- Too late!)


reader John Archer said...

Gene,

You seem to be a man who knows his own mind. Maybe you could answer a few questions and thereby throw some light on things for me.

What's your stance on a minority (the political class, say) imposing mass alien immigration on the majority against their wishes? Actually, in asking that, I'm thinking much more about my own country, rather than yours, as it's the only one in which I feel I have any right to a say. I'm looking for some guidance. Here's a little background.

The UK, but England in particular, was never a "nation of immigrants". Immigration barely touched us before 1948. After that it went nuts. I know. I have lived through it and seen it. But that never stopped outrageous fcuking liars like Barbara Roche, Labour's Minister of State for Asylum and Immigration, from loudly proclaiming otherwise, and opening the floodgates wider yet.

It is relevant here, I say, that she is NOT one of us — she is not a Briton, she is Jewish so naturally does not share our ancestral interests, indeed very far from it. Of course it's generally considered impolite, or worse, to mention this simple fact. I am outraged by her and her kind and those of my fellow countrymen who support her actions in undermining us as a nation.

We now have no-go areas in parts of Britain where it is dangerous for a Briton to go. Yes, it's barely believable, yet true. The media keeps it well under wraps though.

Now, I detect you to be a very moral person, Gene, and ready to 'speak up for what's right' and all that, especially for the glories of all this delightful righteous racial mixing you mention. So perhaps you'd be kind enough to help me out here. I imagine you might regard my outrage as being ill-founded. If so, would you mind explaining where I'm going wrong, if you consider I am, that is? I suspect you do.

By the way, in doing so, if you employ any moral 'axioms' could you kindly say where you get them from and justify their use. I ask because in my experience the preaching classes just pull them out of their Platonic hats and expect others not to blink at their sheer brass neck and the hubristic flamboyance of such a crude exercise in intellectual legerdemain. I'm trying to put it politely here.

Maybe you could also address the question of where 'wise leaders'—elected luminaries, no less, and so-called representatives of the people—override the clear wishes of the majority of the very people they are elected to represent. I've never quite got a handle on this Papal stuff.

One other thing. Does pointing out that Barbara Roche is not one of us, not a Briton, but is Jewish make me an anti-semite, in your opinion? She's clearly an anti-Briton in Britain though, which naturally enough for me is something far worse than being an anti-semite here. Or is there some divine order of precedence in these things that I'm missing?

Anyone else want to join in? Feel free. Luboš permitting, that is — this blog being his property.


reader Yuriy said...

Unanimity is NOT a common style in Kiev, FYI.


reader Solo712 said...

What puzzles me is the eagerness of the EU to get into the deep with Russia over Ukraine, which is an economic basket case on the order of Greece. What's in it for Merkel? Does Germany have so much money that it does not mind an extra few billion in handouts to another spectacular economic failure? If that is the case, I would recommend building a large toilet in Berlin and flushing the extra money down it. For the sake of peace in Europe!


reader lukelea said...

I read somewhere that Riemann, who was on the faculty at Göttingen with Noether, while admitting that she (Noether) was a great mathematician, said that there was some question whether or not she was a woman.

I think he was just joking.

