Thursday, April 07, 2016

\(P=NP\) and string landscape: 10 years later

The newborn Universe was a tiny computer but it still figured out amazing things

Two new papers: First, off-topic. PRL published a February 2016 LIGO paper claiming that gravitational waves (Phys.Org says "gravity waves" and I find it OK) contribute much more to the background than previously believed. Also, Nature published a German-American-Canadian paper claiming that supermassive black holes are omnipresent because they saw one (with a mass of 17 billion Suns) even in an isolated galaxy. Also, the LHC will open the gate to hell in 2016 again, due to the higher luminosity (or power, they don't care). CERN folks have recommended a psychiatrist to the conspiracy theorists such as Pope Francis. But God demands at most 4 inverse femtobarns a year because the fifth one is tickling Jesus' testicles.
Ten years ago, research on the string landscape was a hotter topic than it is today. Because of some recent exchanges about \(P=NP\), I was led to recall the February 2006 paper by Michael Douglas and Frederik Denef,
Computational complexity of the landscape I
They wrote that the string landscape has a huge number of elements and, invoking the widespread computer scientists' lore that \(P\neq NP\), argued that it's probably going to be permanently impossible to find the right string vacuum even if the landscape contains it.

The authors have promised the second paper in the series,
[48] F. Denef and M. R. Douglas, “Computational Complexity of the Landscape II: Cosmological Considerations,” to appear
but now, more than 10 years later, we are still waiting for this companion paper to appear. ;-) What new ideas and views do I have 10 years later?




Douglas and Denef are extremely smart men. And they started – or Douglas started – to look at the landscape from a "global" or "computational" perspective sometime in 2004 or so (see e.g. the TRF blog posts featuring the two names, starting in 2005). These two authors co-wrote 6 inequivalent papers and you may see that the paper on the complexity was the least cited one among them, although 56 citations is decent.




Papers trying to combine \(P\neq NP\) i.e. computer science with string phenomenology may be said to be "interdisciplinary". You may find people who are ready to equate "interdisciplinary" with "deep and amazing", for example Lee Smolin. However, the opinion of the actual, accomplished physicists is different. They're mostly ambivalent about the "sign" of the adjective "interdisciplinary" and many of them are pretty seriously dismissive of papers with this adjective because they largely equate "interdisciplinary" with "written by cranks such as Lee Smolin".

So whether or not you view "interdisciplinary" work as "automatically better than the rest" depends on whether or not you are a crackpot yourself. Don't get me wrong: I do acknowledge that there are interesting papers that could be classified as interdisciplinary. Sometimes, a new discipline ready to explode is born in that way. However, I do dismiss the people who use this "interdisciplinary spirit" as a cheap way to look smarter, broader, and deeper in the eyes of the ignorant masses. The content of these papers is rubbish almost as a matter of general law. The "interdisciplinary" status is often used to claim that the papers don't have to obey the quality requirements of either "main" discipline.

But I want to mention more essential things here.

If you read the rather pedagogic Wikipedia article on \(P=NP\), the first example of an \(NP\) problem you will encounter is the "subset sum problem". It's in \(NP\) because a proposed solution can be verified in a number of steps that grows at most polynomially with \(n\), the number of elements in the set.
For instance, does a subset of the set \(\{−2, −3, 15, 14, 7, −10\}\) add up to \(0\)? The answer "yes, because the subset \(\{−2, −3, −10, 15\}\) adds up to zero" can be quickly verified with three additions. There is no known algorithm to find such a subset in polynomial time (there is one, however, in exponential time, which consists of \(2^n-n-1\) tries), but such an algorithm exists if \(P = NP\); hence this problem is in \(NP\) (quickly checkable) but not necessarily in \(P\) (quickly solvable).
If the set has \(n\) elements, there are \(2^n-1\) ways to choose a non-empty subset because you have to pick the values of \(n\) bits (and discard the empty choice). It's not hard to convince yourself that you should give up the search for the right subset of numbers – whose sum is zero – if \(n\) is too large and the numbers look hopelessly random. It looks like the manual verification of all possible \(2^n-1\) subsets is the only plausible way to approach the problem.
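To make the asymmetry between the two tasks tangible, here is a trivial sketch (mine, of course, not anything from the papers discussed here): verifying a proposed subset is a few additions while the brute-force search walks through all \(2^n-1\) subsets.

```python
from itertools import chain, combinations

def verify(subset):
    """Checking a proposed certificate is cheap: a handful of additions."""
    return len(subset) > 0 and sum(subset) == 0

def brute_force_zero_subset(numbers):
    """Try all 2^n - 1 non-empty subsets; exponential in len(numbers)."""
    subsets = chain.from_iterable(
        combinations(numbers, r) for r in range(1, len(numbers) + 1))
    for candidate in subsets:
        if verify(candidate):
            return candidate
    return None

print(brute_force_zero_subset([-2, -3, 15, 14, 7, -10]))
# prints (-2, -3, 15, -10) -- checking it takes three additions
```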

But the fact that you don't know where to start, or that you have convinced yourself to give up, doesn't mean that no one can ever find a faster way to locate the right subset. You may want an algorithm that doesn't "discriminate" against some sets of \(n\) numbers. And indeed, it seems plausible that no fast non-discriminatory method to find the right solution exists.

However, the faster solution to the "subset sum problem" may very well be discriminatory. It may be an algorithm that chooses and cleverly combines very different strategies depending on what the numbers actually are. Imagine that some numbers in the set are of order \(1\), others are of order \(1,000\), others are of order \(1,000,000\) etc. With some extra assumptions, the small numbers don't matter and the "larger" numbers must approximately cancel "first" before you consider the smaller numbers.

In that case, you start with the "largest" numbers of order one million, solve that "much smaller" problem, and you know which of these largest numbers are in the right subset and which of them aren't. Their sum isn't exactly equal to zero – it differs from zero by an amount of order \(1,000\) – and you may then pick the right subset of the numbers of order \(1,000\), and so on. With this hierarchy, you ultimately find the right subset "scale after scale".
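Here is a minimal sketch of this "scale after scale" strategy, under my own toy assumptions that the numbers arrive already grouped by scale and that each scale dwarfs everything below it; at each scale, it only explores the few choices that the remaining smaller numbers can still cancel.

```python
from itertools import combinations

def subsets_within(numbers, target, slack):
    """Yield every subset of `numbers` whose sum lands within `slack` of `target`."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if abs(sum(combo) - target) <= slack:
                yield list(combo)

def zero_subsets_by_scale(scales, running_sum=0):
    """Yield subsets summing to -running_sum, working from the largest scale
    down and keeping only the choices the smaller scales can still cancel."""
    if not scales:
        if running_sum == 0:
            yield []
        return
    head, tail = scales[0], scales[1:]
    slack = sum(abs(x) for group in tail for x in group)
    for part in subsets_within(head, -running_sum, slack):
        for rest in zero_subsets_by_scale(tail, running_sum + sum(part)):
            yield part + rest

# a toy hierarchy: each scale dwarfs everything below it
scales = [[1_002_000, -1_000_000], [3_000, -1_100, -905], [7, -2, 5]]
print(next(s for s in zero_subsets_by_scale(scales) if s))
# prints [1002000, -1000000, -1100, -905, 5]
```

With \(k\) scales of \(n/k\) numbers each, you effectively brute-force \(k\) small problems instead of one big one – roughly \(k\cdot 2^{n/k}\) rather than \(2^n\) steps, as long as only a few candidates survive at each scale.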

This doesn't work when the numbers are of the same order. But it's conceivable that there exist other special limiting situations in which you may find a method that "dramatically speeds up the search", and that the patches of special conditions in which a "dramatic speedup is possible" actually cover the whole space \(\RR^n\). There are many clever things – dividing the problem into sub-problems, sorting, but also Fourier transforms, auxiliary zeta-like functions, whatever – that other people may come up with to get much further in solving the problem.
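One standard example of such cleverness is the classic meet-in-the-middle trick of Horowitz and Sahni: split the set into two halves, list the partial sums of each half, and look up the cancelling partners (a hash map plays the role of the sorted list here). It doesn't make the problem polynomial – the cost is still exponential, roughly \(2^{n/2}\) – but it already shows that "try all subsets one by one" is far from the last word.

```python
from itertools import combinations

def all_subset_sums(numbers):
    """Map each achievable sum to one subset realizing it; enumerating big
    subsets first guarantees that sum 0 points to a non-empty subset if any."""
    sums = {}
    for r in range(len(numbers), -1, -1):
        for subset in combinations(numbers, r):
            sums.setdefault(sum(subset), subset)
    return sums

def meet_in_the_middle_zero_subset(numbers):
    """Any zero-sum subset splits into a left part and a right part whose sums
    cancel, so ~2^(n/2) partial sums per half suffice instead of ~2^n tries."""
    half = len(numbers) // 2
    left = all_subset_sums(numbers[:half])
    right = all_subset_sums(numbers[half:])
    for s, left_part in left.items():
        right_part = right.get(-s)
        if right_part is not None and (left_part or right_part):
            return left_part + right_part
    return None

print(meet_in_the_middle_zero_subset([-2, -3, 15, 14, 7, -10]))
# prints (-2, -3, 15, -10)
```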

There's simply no proof that a polynomially fast algorithm finding the "subset with the sum equal to zero" doesn't exist. There's no proof that the algorithm exists, either; and obviously, no one knows such an algorithm – knowing one explicitly would be even more ambitious than proving its existence, because a proof of existence may be non-constructive.

OK, Douglas and Denef were piling one speculation upon another, so they were surely \(P\neq NP\) believers. But the new "floors of speculations" (more physical speculations) they added on top of \(P\neq NP\) were even more controversial. Their basic opinion – with defeatist consequences – was simple. The search for the right "flux vacuum" in the landscape is analogous to the "zero subset problem". And there is an exponentially large number of ways to assign the fluxes – just like there is an exponentially large number of subsets. It seems that "checking the vacua one by one" is, similarly to the subsets, the only possible strategy. So if the number of vacua is \(10^{500}\) or so, we will never find the right one.
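To make "checking the vacua one by one" concrete, here is a cartoon in the spirit of the Bousso-Polchinski flux landscape – my own toy with completely made-up charges and a made-up bare term, not a calculation from the Denef-Douglas paper: the cosmological constant is a big negative constant plus quadratic contributions of a few flux integers, and we scan for the assignment that makes it tiniest.

```python
import itertools, random

random.seed(1)

K = 10                                 # toy number of flux directions (tiny)
charges = [random.uniform(0.5, 2.0) for _ in range(K)]   # made-up "charges" q_i
bare = -137.035999                     # made-up large negative bare term

def toy_lambda(fluxes):
    """Toy cosmological constant: Lambda = bare + sum_i q_i^2 n_i^2."""
    return bare + sum(q * q * n * n for q, n in zip(charges, fluxes))

# "One by one": try every flux assignment with n_i in {0,1,2,3}.  The number
# of vacua is 4**K, i.e. exponential in the number of flux directions.
best = min(itertools.product(range(4), repeat=K), key=lambda f: abs(toy_lambda(f)))
print(best, abs(toy_lambda(best)))     # 4**10 ~ 10^6 tries already take seconds
```

Already \(4^{10}\approx 10^6\) assignments make the loop noticeably slow; with a few hundred flux directions the count becomes the notorious \(10^{500}\)-like number and the loop becomes hopeless – which is the Denef-Douglas point.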

Maybe. But maybe, this statement will turn out to be completely wrong, too.

Even if one believes that \(P\neq NP\) and that there's no "polynomially fast" solution to the "zero subset problem" (and to the analogous but harder problems in the search for the right fluxes and the correct string vacuum), it doesn't imply that physicists will never be able to find the right vacuum. The reason is that to guarantee \(P\neq NP\), it's enough that no single algorithm solves the "zero subset problem" for every set of \(n\) real numbers in the whole \(\RR^n\), including the "most generic" ones.

However, the analogous problem in the string landscape is in no way proven – or should be expected – to be "generic" in this sense. The vacua may always exhibit e.g. the hierarchy that I previously mentioned as a feature that may dramatically speed up the search. Or, more likely and more deeply, the numbers may have some hidden patterns and dual descriptions that allow us to calculate the "right choice of the bits" by a very different calculation than by trying an exponentially large number of options, one by one.

Denef and Douglas basically assumed that "all the numbers in this business are just a hopeless, disordered mess", and there is an exponentially large number of options, so there's no chance to succeed because "brute force" is the only tool that can deal with a generic mess and the required amount of brute force is too large. But this is an additional (and vaguely and emotionally formulated) assumption, not a fact. All of science is full of examples where this "it is hopeless" assumption totally fails – in fact, it fails almost whenever scientists succeed. ;-)

I am constantly reminded of the likely flaws of the "defeatist" attitude when I talk to laymen, e.g. relatives, who are absolutely convinced that almost nothing can be scientifically determined or calculated. We play cards and I mention that the probability that, from a nicely shuffled pack of cards, someone gets at least 8 wild cards (among the 14 cards dealt to him) is 1 in half a million. I immediately get screamed at. How could this be calculated? It's insane, it's an infinitely difficult problem. They're clearly assuming something like "playing zillions of games is a necessary condition to estimate the answer, and even that isn't enough". And I am like: What? I could calculate all these things when I was 9 years old, it's just damn basic combinatorics. ;-) The Pythagoriad contest 32 years ago was full of similar – and harder – problems. Clearly, with some computer assistance, I can crack much more complex problems than that. And in physics, we can in principle calculate almost everything we observe.
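Since I don't remember the exact pack we were using, here is the generic calculation with made-up, canasta-like parameters (108 cards, 12 of them wild, 14 dealt) – every number below is an assumption of mine, the hypergeometric formula is the only point:

```python
from math import comb

# all parameters below are illustrative assumptions, not the actual game
DECK, WILD, HAND, AT_LEAST = 108, 12, 14, 8

# hypergeometric tail: P(at least AT_LEAST wild cards among the HAND dealt cards)
p = sum(comb(WILD, k) * comb(DECK - WILD, HAND - k)
        for k in range(AT_LEAST, min(WILD, HAND) + 1)) / comb(DECK, HAND)

print(f"about 1 in {round(1 / p):,}")
```

With these made-up parameters it comes out around one in a few hundred thousand – the same ballpark, and no zillions of games are needed.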

Now, Denef, Douglas, and other particle physicists aren't laymen in this sense but they can be victims of the same "I don't know what to do so it must be impossible" fallacy. I am often tempted to surrender to this fallacy but I try not to. The claim that "something is impossible to calculate" requires much more solid evidence than the observation that "I don't know where to start".

Let me give you a simple example from quantum field theory. Calculate the tree-level scattering amplitude in the \(\NNN=4\) gauge theory with a large number \(n\) of external gluons. Choose their helicity configuration to be "maximally helicity violating" (or even more extreme than that). The number of Feynman diagrams contributing to the scattering amplitude may be huge – exponentially or factorially growing with \(n\), especially if you allow loops – but we know that the scattering amplitude is extremely simple (or zero, in the super-extreme cases) because of other considerations. This claim supports a part of the twistor/amplituhedron minirevolution and may be proven by recursion relations and similar tools.
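To be concrete, the tree-level MHV amplitude – with the two negative-helicity gluons labeled \(i,j\) – is given by the Parke-Taylor formula which, once the coupling and the momentum-conserving delta function are stripped off, is a single tiny ratio of spinor products,
\[
A_n^{\rm MHV,\,tree} \sim \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle},
\]
while the "super-extreme" amplitudes with zero or one negative-helicity gluon vanish identically at the tree level.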

So it's clearly not true that the brute-force summation of all the diagrams is the only way or the fastest way to find the answer. There exist hidden structures in the theory that allow you to find the answer more effectively by transformations, twistors, or – and this theme is really deep and omnipresent in the recent 20 years of theoretical physics – through dual descriptions (which are equivalent even though the equivalence looks really shocking at the beginning).

It's plausible that even though one string vacuum is correct, humans have no chance to find it. But it's also plausible that they will find it. They may localize the viable subsets by applying many filters, or they may calculate the relevant parameters or the cosmological constant using an approximate or multi-stage scheme that allows one to pick tiny viable subsets and dramatically simplify the problem. Or our vacuum may simply be a special one in the landscape. It may be the "simplest one" in some topological sense; and/or the early cosmology may produce a probability for the right vacuum that vastly exceeds the probability of the others, and so on. The loopholes are uncountable. The fact that you may write down a "story" whose message is that "the problem is hard" doesn't mean much. A "story" that implicitly assumes that you shall listen to no other stories is just a matter of propaganda.

But I want to mention one point about the Douglas-Denef research direction that I have never paid much attention to. It's largely related to the second paper in the "complexity and the landscape" series that they promised but never delivered, one that was more focused on the early cosmology. Did the paper exist? Was it shown to be wrong? What was in it? The abstract of the first, published paper, tells us something about the second, non-existent paper:
In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.
This sentence actually sketches a totally wonderful question that I had never articulated cleanly enough. It's possible that the only problem with this question was that it made Douglas and Denef realize something that invalidates pretty much the basic assumptions or philosophy of their whole research direction (the basic assumption being, basically, defeatism and frustration). What do I mean?

It seems to me that these two men have realized something potentially far-reaching and it's this simple idea:
It seems that the newborn Universe was a rather small quantum mechanical system with a limited number of degrees of freedom – perhaps something comparable to a 10-qubit quantum computer to be produced in 2020. But if that's so, all the problems that this "cosmic computer" was expected to solve must be reasonably simple!
This looks like a potentially powerful realization – perhaps more far-reaching than the defeatist Douglas-Denef \(P\neq NP\) one. You know, if the very young quantum computer had the task of finding the right values of the fluxes to produce a vacuum with a tiny cosmological constant – a task that may be considered similar to the "zero subset problem" for a large value of \(n\) – then it seems that the relatively modest "cosmic computer" must have had a way to solve the problem, because we're here, surrounded by a cosmological constant of a tiny magnitude.

This realization sounds great and encouraging. However, it may be largely if not completely invalidated by the anthropic reasoning. The anthropic reasoning postulates that all the Universes with totally wrong values of the cosmological constant and other parameters also exist, just like our hospitable Universe. They only differ by the absence of The Reference Frame weblog in them. So no serious physics blogger and his serious readers exist there to discuss the question why the early cosmic computer decided to pick one value of the cosmological constant or another. If you embrace this anthropic component of the "dynamics", it is no longer necessary that the "right vacuum was quickly picked/calculated by a tiny cosmic quantum computer". Instead, the selection could have been made "anthropically" by a comparison of quattuoroctogintillions of Universes much later, after they have evolved for billions of years and become large. (The number is comparable to the income I currently get in Adventure Capitalist when I move the Android clock to 1970 and back to 2037 LOL. I haven't played it today. It's surely a good game to teach you not to be terrified by large but still "just exponential" integers.)

Now, the fifth section of the Douglas-Denef Complexity I paper is dedicated to the more advanced "quantum computing" issues. This section was arguably a "demo" of the Complexity II paper that has never appeared. The most important guy who is cited is Scott Aaronson. The first four references in the Denef-Douglas paper point to papers by Aaronson – your desire to throw up only weakens once you realize that the authors in the list of references are alphabetically sorted. ;-)

I believe that I have only known the name of Scott Aaronson since late 2006 when this Gentleman entered the "String Wars" by boasting that he didn't give a damn about the truth about physics and that, like all other corrupt šitheads, he would parrot the opinions of the highest bidder and seek to maximize profit from all sides of the "String Wars". He immediately became a role model of Academic corruption in my eyes.

So Douglas and Denef actually talk about lots of the complexity classes that Aaronson's texts (and his book) were always full of, \(NP\), co-\(NP\), \(NP\) to the power of co-\(NP\), \(DP\), \(PH\), \(PSPACE\), and so on.

But it seems sort of striking how they ignored a realization that could have been the most innovative one in their research:
Our brains shouldn't give up the search for the right vacuum defining the Universe around us because the newborn Universe was, in some sense, smaller than our brains but it managed to find the right answer, too.
Again, I am neither optimistic nor pessimistic when it comes to all these questions, e.g. the question whether the humans will ever find the correct stringy vacuum. I think it's OK and actually unavoidable when individual researchers have their favorite answers. But when it comes to important assumptions that haven't been settled in one way or another, it's absolutely critical for the whole thinking mankind to cover both (or all) possibilities.

It's essential that in the absence of genuine evidence (or, better, proofs), people in various "opinion minorities" are not being hunted, eliminated, or forbidden. The anthropic principle has been an example of that. The more you believe in some kind of a (strong enough) anthropic principle, the more you become convinced that the amount of fundamental progress in the future will be close (and closer) to zero.

But high-energy fundamental physicists are arguably smart and tolerant enough to realize that no real (non-circular, not just "plausible scenario") evidence exists that would back the defeatist scenarios and many other opinions. That's why the people, even though you could probably divide them into "anthropic" and "non-anthropic" camps rather sharply, have largely stopped the bloody arguments. They know that if a clear proof of their view existed, they could have already turned it into a full-fledged (and probably quantitative) theory that almost everyone would be able to check. Such a complete theory answering these big questions doesn't exist in the literature, so people realize that despite the differences in their guesses, all of them are comparably ignorant.

Almost everyone has realized that the papers that exist simply aren't solid evidence that may rationally settle the big questions. It's important that some people keep on trying to isolate the right vacuum because there exists a possibility that they will succeed – but they must be allowed to try. I think that the dangerous herd instinct and the suppression of ad hoc random minorities is much more brutal in many other fields of science (and especially in fields that are "not quite science").
