Monday, May 06, 2013

Comparing the depth of the millennium problems

The Riemann Hypothesis is probably the deepest one

In 2000, the Clay Mathematics Institute offered $1,000,000 for the proof of each of the seven Millennium Prize Problems. Is it possible to compare which of them are deeper than others?

Needless to say, such a comparison depends on personal preferences, emotions, and there is probably no rigorous way to "prove" that one problem is deeper than others. However, that doesn't mean that one can't have an opinion; and it doesn't prevent some opinions from being more well-informed than others.

So let me offer a sketch of the seven problems and some evaluation of their depth.

The Poincaré Conjecture

I begin with this one because it has already been proven, by Grigori Perelman. The statement of the theorem is simple:

Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.
For three-dimensional manifolds, Perelman had to prove that everything that smells like a sphere is a sphere. The "smelling" means that the manifold is "closed", i.e. free of boundaries etc.; and that it is "simply connected", which means that you can't "lasso it": every topological circle within the manifold may be gradually and continuously shrunk to a point.

You may ask why the problem talks about three-dimensional spheres only. Roughly speaking, it's because manifolds of fewer than three dimensions are simple enough that classifying them (and proving the claims about the classification) is comparatively easy. Among the two-dimensional surfaces, you have Riemann surfaces with \(h\) handles – and perhaps \(b\) circular boundaries and \(c\) crosscaps, too.
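
To make the bookkeeping explicit (a standard fact added here for orientation, not part of the original post): such a surface is determined up to homeomorphism by its orientability, the number of boundary circles \(b\), and its Euler characteristic\[

\chi = 2 - 2h - b - c,

\] which is one way to see why the two-dimensional classification is comparatively easy.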

And higher-dimensional manifolds become sort of simpler, too. At least those that "smell like a sphere" are relatively easy to put under control – the generalized Poincaré conjecture was settled by Smale for \(d\geq 5\) and by Freedman for \(d=4\) long before the \(d=3\) case. Needless to say, the diversity of complicated manifolds such as the Calabi-Yau manifolds becomes more extreme for higher dimensions. But when you talk about "truly special, highly constrained" manifolds, there is a sense in which \(d=3\) maximizes the "depth of the mathematical insights" needed to understand the set of manifolds.

Perelman solved it and in some sense, the technology behind the proof is sort of "easy" and the proof allows us to say that there was "nothing totally new" hiding behind the conjecture. The proof is easy to formulate in terms of concepts that are natural in quantum field theory and especially string theory, namely the so-called Ricci flows (this idea was first proposed as a path to the proof by Richard Hamilton).

One may study strings propagating on a three-dimensional manifold and use this "non-linear sigma-model" as an effective theory. One may try to change the characteristic world sheet length and look at what it does to the whole theory. This is about the flows of the renormalization group. Effectively, such a transition to lower world sheet energies or longer distances has the effect of "rounding the target three-manifold". So if it is a potato close enough to the sphere, it will increasingly resemble the sphere with the standard metric.

Pretty much constructively, you may actually end up with the sphere. The flow of the renormalization group, the Ricci flow, may betray you because in some cases it may produce a singularity instead of making your theory ever smoother. But these possible evolutions may be controlled by some extra "surgeries" and a careful supervision of the "fundamental group" of the manifold at each stage. At the end, Perelman showed that a rather natural strategy – a method to make the spherical character of the original manifold manifest – really works and may be made totally rigorous.
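
If you want to see the "rounding" intuition in action, here is a hedged toy sketch in Python – emphatically not Perelman's argument, and not even the Ricci flow \(\partial g_{ij}/\partial t = -2R_{ij}\) itself, just its simplest heat-flow cousin: a discrete smoothing flow applied to a bumpy closed curve, which drives the shape towards a round circle. All parameters and the perturbation are arbitrary choices made for this illustration.

```python
import numpy as np

# Toy illustration only (nothing close to Perelman's argument): the Ricci flow
#   d g_ij / dt = -2 R_ij
# is a nonlinear heat equation for the metric, and heat-like flows "round
# things out".  Below, a bumpy closed curve is evolved by a discrete
# heat/curve-smoothing flow -- a one-dimensional cousin of the idea -- and it
# approaches a round circle.  Step size and perturbation are arbitrary.

n = 400
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
r = 1.0 + 0.3 * np.sin(7 * theta)                 # a "potato-like" perturbed circle
curve = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def roundness(c):
    """Max/min distance of the vertices from the centroid; equals 1.0 for a circle."""
    d = np.linalg.norm(c - c.mean(axis=0), axis=1)
    return d.max() / d.min()

print("roundness before:", roundness(curve))      # about 1.9 for this perturbation

dt = 0.2
for _ in range(2000):
    laplacian = np.roll(curve, 1, axis=0) + np.roll(curve, -1, axis=0) - 2.0 * curve
    curve += dt * laplacian                       # explicit Euler step of the heat flow

print("roundness after: ", roundness(curve))      # approaches 1.0
```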

There are of course many other special questions about the geometry in \(d=3\) – in some sense, many problems of knot theory could be included here. The Poincaré conjecture is rooted in the "totally continuous" questions of geometry and it should therefore not be surprising that the proof sounds like a proof in mathematical physics. Such "purely continuous proofs" with very limited links to any auxiliary discrete structures etc. are very important – especially in physics – but one could argue that at the end, they're sort of straightforward once the right "physical intuition" is used.

It was great when Perelman completed the proof. But if we ignore the historical coincidence that the proof is a relatively recent event, I think that many "comparably difficult" problems and proofs in the purely continuous geometry are much more important and deeper – classification of Lie groups, classification of possible holonomies of manifolds, and so on.

Birch and Swinnerton-Dyer conjecture

This conjecture relates the amount of arithmetic data defining an elliptic curve (a two-torus written as a quadratic-cubic equation in complex variables; named in this way because of links to the "elliptic integrals") – roughly speaking, the rank of its group of rational points – to the behavior of the Hasse-Weil \(L\)-function near \(s=1\).
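
To make the "arithmetic data" concrete, here is a minimal sketch (the curve \(y^2=x^3-x\) and the list of primes are arbitrary illustrative choices, not taken from the original post) of the local ingredients of the Hasse-Weil \(L\)-function: one counts points of the curve over \(\mathbb{F}_p\) and forms \(a_p = p+1-\#E(\mathbb{F}_p)\); the conjecture relates the behavior of the \(L\)-function built from these numbers at \(s=1\) to the rank of the group of rational points.

```python
# A hedged sketch of the local data entering the Hasse-Weil L-function.
# For the illustrative curve y^2 = x^3 - x we count points over F_p and
# form a_p = p + 1 - #E(F_p).  Curve and primes are arbitrary examples.

def count_points(a, b, p):
    """Number of points of y^2 = x^3 + a*x + b over F_p, including the point at infinity."""
    squares = {}                      # t -> number of y in F_p with y^2 = t
    for y in range(p):
        t = y * y % p
        squares[t] = squares.get(t, 0) + 1
    total = 1                         # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += squares.get(rhs, 0)
    return total

for p in [5, 7, 11, 13, 17, 19, 23]:
    n_p = count_points(-1, 0, p)      # the curve y^2 = x^3 - x
    a_p = p + 1 - n_p
    print(p, n_p, a_p)                # |a_p| <= 2*sqrt(p) by Hasse's theorem
```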

I don't want to go into details here but it's a counting problem that is very similar to Fermat's Last Theorem ultimately proved by Andrew Wiles. That's also why Wiles was the natural person to ask for a description of the problem. Recall that Wiles proved Fermat's Last Theorem by trying to count certain things.

It could be difficult to compare FLT and the Birch and Swinnerton-Dyer conjecture when it comes to their depth – I am no real number theorist, after all. However, let me mention that most of us loved FLT more than this particular Millennium Prize Problem. But we should acknowledge that much of this preference boils down to a relatively shallow criterion – Fermat's Last Theorem may be explained to a schoolkid. Elliptic curves involve somewhat more advanced maths.

I am not sure whether this difference should be counted as evidence that FLT is "deeper". It's surely more popular because it's more accessible. But when one starts to master the actual maths that is needed for the full proof of FLT, it turns out that it's the same kind of maths that is (probably) needed to decide the fate of this Birch-and-someone conjecture. The problems may be comparably deep and there could exist similar "truly comparable" other problems in number theory, too. FLT only differed by its being extremely accessible – I mean the proposition, not its proof, which is very advanced and complicated.

FLT and this Millennium Prize Problem are examples of number theory and its connections with "geometry over discontinuous fields". One generalizes some natural concepts we know from the continuous geometry and they turn out to be relevant for problems that sound like problems about integers, especially primes and factorizations etc. (number theory). This is always a deep link. However, I will argue that the Riemann Hypothesis is probably deeper than these FLT-like problems because in the RH case, the flow of wisdom goes in both directions, more symmetrically. When we solve FLT or this Birch-and-someone problem, we learn something about integers and solutions of equations with integers etc. using methods that we also routinely use in the continuous context but we don't seem to learn much about the continuous maths.

The Hodge conjecture

The Hodge conjecture talks about the de Rham cohomology – cohomology is about the independent antisymmetric tensor fields (forms) that are closed but not exact (and, nearly equivalently, homology is the space of topologically inequivalent, non-trivial, non-contractible submanifolds of a manifold) – and it says that if you talk about manifolds that may be described via complex algebraic equations, there's also a similar way to describe the relevant cohomology classes (the rational classes of type \((p,p)\), to be precise) as combinations of the classes of submanifolds that may also be specified by complex algebraic equations.

In some sense, this problem says that if a manifold seems to be equivalent to complex algebraic equations, then complex algebraic equations are also enough to describe all the cohomology classes. There's nothing "new" that would force you to go outside the complex algebraic "box" if you started inside the box in the first place.

It's interesting that this problem hasn't been proven but it's hard to predict the apparent depth that this problem will have after a hypothetical proof is found. Such a proof may turn out to be constructive and somewhat straightforward – in some sense, analogous to Perelman's proof of the Poincaré Conjecture. It may differ e.g. from Wiles' proof of Fermat's Last Theorem by remaining "simple" for all dimensions. In order to prove FLT, Wiles had to use qualitatively more advanced mathematical techniques than the techniques you need for the proof of FLT specialized to a particular exponent such as \(n=4\). We don't know but I find it somewhat plausible that the future proof of the Hodge conjecture will be more uniform – a more straightforward generalization of a natural proof you could construct for a particular dimension \(d\).

But of course, the proof may be much deeper than that. One must always remember that we should rather talk about "a proof" because there can exist many proofs, even many qualitatively different proofs. Some of them may be much shorter than others; some of them may be much deeper than others. Only the first proof of a Millennium Prize Problem earns a million dollars but it's always possible that another, inequivalent proof that is still in the pipeline is actually deeper and more important (but unrewarded).

The Yang–Mills existence and mass gap problem

This problem is most closely linked to quantum field theory, the main "technology" in which I was sort of professionally trained, but that doesn't mean that I consider this problem to be too deep. In fact, I don't.

The real reason is that using physical arguments, sometimes very deep ones (including wisdom hiding in the AdS/CFT correspondence), we have acquired answers to much deeper and more detailed questions than just the existence of the mass gap. To get the prize, you need to present an argument that is pretty much rigorous – and define a QFT sort of rigorously along the way. And once you complete this "paperwork", you also have to use your framework to prove a rather basic claim: that pure QCD doesn't contain any "arbitrarily light massive particles"; the lightest particle has a strictly positive mass (comparable to the QCD scale) and there is nothing (i.e. a gap) between this mass and zero.

I don't think that this is a natural way towards profound insights. Quantum field theories and string theory are sort of pulsating, living animals that may be studied with the help of many techniques and perspectives that admit very precise rules and that work, but that don't fit the fully elaborated "straitjackets" that mathematicians have already formulated in the very rigorous way. In some sense, this problem wants you to kill the pulsating animal and decompose it into some boring components that admit a totally rigorous mathematical definition.

When you do it, you will only recover some physical insights that are known to be true to the physicists – in fact, physicists may answer much more detailed and quantitative questions of this kind. I would say that the importance of the problem lies in rigorous mathematicians' desire to declare that quantum field theory has been "fully incorporated" into rigorous maths, within a basic mathematical framework that has been completely found and should no longer evolve in the future. I am not sure whether I would subscribe to this thesis. QFT and string theory are already "the cutting-edge physics" tools and as physicists are discovering their new properties, they're also learning about new ways to mathematically formalize them. To write down "a definition" of a quantum field theory and to prove some basic properties of the corresponding mathematical structure is the kind of task that could be settled once and for all, but "all interesting knowledge about quantum field theories" is surely not a completed subject.

Navier–Stokes existence and smoothness

This problem is close to physics; in fact, it is the only obvious "classical physics" problem in the list. The Navier-Stokes equations describe fluid mechanics, which displays complicated phenomena, especially turbulence that looks "cool" in pictures. That's a reason why this topic is popular. To win the prize, you must:
Prove or give a counter-example of the following statement:

In three space dimensions and time, given an initial velocity field, there exist a vector velocity field and a scalar pressure field, both smooth and globally defined, that solve the Navier–Stokes equations.
You potentially don't have to learn the answer to any interesting "physical" question. The reason why the smooth, globally defined solution to the Navier-Stokes problem exists – or doesn't exist – may hinge on highly technical details you have to be picky about. It is not guaranteed at all that a winner of this prize must achieve some genuine progress in understanding the "puzzling physics properties" of turbulent fluids etc.
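
For readers who want to play with the smoothness issue numerically, here is a hedged one-dimensional toy – Burgers' equation, not the three-dimensional Navier-Stokes system, with an arbitrarily chosen grid and viscosity – illustrating the role of viscosity: with \(\nu>0\) the profile stays smooth, while the inviscid version of the same initial data steepens into a shock resolved only at the grid scale. Whether viscosity always prevents blow-up in three dimensions is exactly what the prize problem asks.

```python
import numpy as np

# A hedged 1-D toy (Burgers' equation), NOT the 3-D Navier-Stokes problem:
#   u_t + u * u_x = nu * u_xx
# With nu > 0 the profile stays smooth; with nu = 0 the same smooth initial
# data steepens into a shock.  Grid size, time step and nu are arbitrary.

n = 400
L = 2.0 * np.pi
dx = L / n
x = np.arange(n) * dx
dt = 0.1 * dx                                  # small enough for this explicit scheme

def max_gradient(nu, steps=2000):
    u = np.sin(x)                              # smooth, periodic initial data
    for _ in range(steps):
        # upwind derivative for the advection term (stable, slightly diffusive)
        ux = np.where(u > 0, (u - np.roll(u, 1)) / dx, (np.roll(u, -1) - u) / dx)
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (-u * ux + nu * uxx)
    return np.abs(np.gradient(u, dx)).max()

print("viscous  nu=0.05:", max_gradient(0.05))   # gradient stays moderate
print("inviscid nu=0.00:", max_gradient(0.0))    # gradient ~ 1/dx at the shock front
```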

Turbulence leads to interesting conceptual phenomena such as a self-similarity of the relevant pictures. It's been believed for quite some time that all the relevant statistical distributions are pretty much self-similar. There is now clear evidence that this ain't the case. The self-similarity is broken at some point.

There is another issue – the real-world fluids with a cutoff at the atomic length scale may differ, when it comes to some short-distance behavior, from the fluids idealized by the Navier-Stokes equations. The problem is a mathematical one about the equations so the winner of the prize may also miss many actual real-world subtleties associated with the existence of atoms, and so on.

There are numerous interesting problems and patterns related to fluid mechanics in general and turbulence in particular that may be discussed or proved. I am just not sure whether the rigorous mathematical formulation above is a natural way to direct the researchers towards the most profound parts of these problems of mathematics of classical physics. The problem was probably incorporated in order to state that "there are still interesting math problems related to classical physics" – a thesis I would almost certainly subscribe to – but I am not sure whether the representative problem was formulated usefully.

P versus NP problem

This computer-science problem was the original reason that made me write this blog entry. Scott Aaronson didn't like that I interpreted the "P = NP problem" as a problem comparably messy to the question whether chess is a draw (whether two ideally clever chess players attempting to reach the best results inevitably end up with a draw).

Just to be sure, I do think – like most experts – that chess is a draw and that P isn't the same thing as NP.

In this computational complexity business, you have an integer \(N\) that measures the "size" of the objects in the problem you want to solve, and the main question is whether a number of computer operations scaling like a power of \(N\) for large \(N\) is enough to solve every problem in a given class. The polynomially fast solutions are considered "fast"; the "slow" problems need exponentially or factorially (etc.) long timescales to be solved.

There is a large set of NP problems. They're "quickly checkable". NP stands for "nondeterministic polynomial (time)". They're problems such that if you're told a solution, you may verify that it is a solution in a polynomial time. Verification is generally believed to be easier than the search for solutions, and that's why people generally believe that "P is not NP".

In the NP class, you have the P subclass – problems that can be (polynomially) quickly solved, not just verified. And then there is the NP-complete subset. Well, the NP-complete subset is the intersection of NP with the set of "NP-hard problems", which are those that are at least as difficult as the hardest problems in NP. In practice, NP-complete is a limited class of problems that have been proved equally complicated as other NP-complete problems by various ways to convert one problem in the class to another (plus all the other problems that may be shown equivalent in the future). So I can just give you a representative: Boolean satisfiability (3SAT), i.e. deciding whether a given logical formula can be made true. (Computing the permanent of a matrix – the determinant-like sum over permutations but with no minus signs – is a closely related hard problem, although strictly speaking it is #P-complete rather than NP-complete.)

If P were the same as NP, which is believed to be false, then NP-complete problems, because they're still elements of NP, would be solvable in polynomial time as well, so NP-complete would be the same set as P and NP, too. In the more likely case that P isn't NP, the NP-complete and P problems are two completely disjoint subsets of NP ("opposite" in their complexity).
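
A minimal sketch of the "verify vs. solve" asymmetry behind these definitions – a toy SAT instance in Python, with an arbitrary example formula; nothing here proves anything about P vs NP, it only illustrates that checking a proposed assignment takes time linear in the formula while the naive search runs over all \(2^n\) assignments:

```python
from itertools import product

# A formula is a list of clauses; each clause is a list of literals (+v / -v).
# Verifying a given assignment is fast (linear in the formula size);
# the brute-force search below tries all 2^n assignments.

def satisfies(formula, assignment):
    """Polynomial-time check that `assignment` (dict var -> bool) satisfies the formula."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, variables):
    """Exponential-time search over all 2^n truth assignments."""
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if satisfies(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3) -- an arbitrary example
formula = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(formula, [1, 2, 3]))
```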

The P vs NP problem is sort of interesting because it's rather easy to formulate, but if you look closely, it's really a messy problem and there isn't a good reason why there should exist an accessible enough resolution. We're asking whether problems are polynomially quickly solvable. This very adjective is sort of contrived. From a practical viewpoint, there is usually a big difference between polynomial and longer-than-polynomial time requirements but it doesn't seem to me that this distinction is too fundamental from a deeper theoretical viewpoint. In other words, I believe that this "polynomial technicality" is already enough to classify the P vs NP problem as a "mostly practical messy problem", not a mathematically natural fundamental problem.

At the same time, I think that some hype promoting the P vs NP problem resembles TV commercials way too much. In Wikipedia, Scott Aaronson is quoted as saying:
"If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps,' no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..."
That sounds great. We surely want to know whether everyone may be a Mozart; whether Mozart's work may be automated. Well, it probably can't. But even if P were equal to NP, it could still be difficult to construct the "straightforward algorithm" that solves an arbitrary problem in NP. P = NP would say that a fast solving algorithm exists for each problem in NP; it doesn't say that we actually know a method for finding such an algorithm for any NP problem. For the known mutually equivalent NP-problems, you could perhaps do it but it could still be hard to find the equivalence of other NP-complete problems with the known ones (recall again that if P equals NP, then all problems in NP are NP-complete, too). The translations to the "already solved" NP problems could become "increasingly difficult" for problems that deviate from the initially solved subclass. Things could be very hard in practice even if P were equal to NP.

I may describe the same problem in the following, nearly equivalent way: the P vs NP problem is asking about the existence of an algorithm for each problem in NP and whether such an algorithm is polynomially fast. So in some sense, it's not a question about some important mathematical structures themselves but about higher-order "texts" (algorithms) that discuss the mathematical structures and their not-too-fundamental properties such as the polynomial speed.

The brute-force way to decide would be to look at all candidate algorithms and check whether they work and whether they are polynomially fast. Clearly, this can't be done in a finite time because the potential algorithms, although polynomially fast, may still be given by an arbitrarily long program.

"Finding all nicest and fastest algorithms that may solve a difficult problem" looks like an extremely creative, Mozart-like problem and it's plausible that this creative achievement is needed to decide about the P vs NP problem. In this sense, I think that not only the proof of P = NP but also the disproof probably require something extraordinarily difficult and Mozart-like. If P isn't NP, as most experts expect, it still seems natural to assume that to find the proof of "P isn't NP" is an NP-complete problem of a sort. Such a proof may be extremely long or unnatural or whatever. To decide about P = NP, it seems like you need to place yourself about "all the Mozarts" whose superioty you want to prove if you want to prove that P isn't NP. And that's a contradiction, at least a morally or practically. You shouldn't prove that all of us are inferior insects relatively to Mozart by becoming the superior supervisor and guru of Mozart and every other genius. ;-)

So I don't expect any proof of P = NP and I don't expect any proof that P isn't NP, either. One of the proofs, probably the latter, probably exists but it's probably extremely complicated and doesn't illuminate much.

The Riemann hypothesis

The Riemann Hypothesis seems to be the deepest problem to me although it may ultimately turn out to be just about one physics-related problem/insight among many.

It may be phrased as a problem on analytic functions of a complex variable: the Riemann \(\zeta\)-function has no roots away from the real axis and the \(1/2+is\) vertical line. But it may also be phrased as an equivalent problem about number theory, namely the distribution of primes. For example, an equivalent way to phrase the Riemann Hypothesis is to say\[

|\pi(x) - \operatorname{Li}(x)| \lt \frac{1}{8\pi} \sqrt{x} \, \log(x), \qquad \text{for all } x \ge 2657.

\] The number of primes smaller than \(x\) may be approximated by the integral of \(1/\ln(t)\) – this inverse logarithm quantifies the probability that a number comparable to \(t\) is a prime – integrated from \(2\) up to \(x\), and the error is very small, given by the right hand side of the inequality above. (Similar but weaker statements with a larger allowed error on the right hand side may be proved, e.g. the prime number theorem; they're equivalent to the proved non-existence of zeroes of the \(\zeta\)-function outside the real axis and outside the width-one critical strip symmetrically located around the critical axis.)
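
A quick numerical sanity check of the inequality above (an illustration with arbitrarily chosen sample values of \(x\), certainly not a proof): compute \(\pi(x)\) with a sieve, \(\operatorname{Li}(x)=\int_2^x dt/\ln t\) by quadrature, and compare the difference with the right hand side.

```python
import math
from mpmath import mp, quad, log, sqrt, pi as mp_pi

# Numerical check (not a proof!) of the RH-equivalent bound
# |pi(x) - Li(x)| < sqrt(x) * log(x) / (8*pi) for x >= 2657.

def prime_count(x):
    """pi(x) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for n in range(2, int(math.isqrt(x)) + 1):
        if sieve[n]:
            sieve[n * n :: n] = bytearray(len(sieve[n * n :: n]))
    return sum(sieve)

mp.dps = 30
for x in [3000, 10_000, 100_000, 1_000_000]:
    pi_x = prime_count(x)
    Li_x = quad(lambda t: 1 / log(t), [2, x])       # Li(x) = integral_2^x dt/ln(t)
    bound = sqrt(x) * log(x) / (8 * mp_pi)
    print(x, pi_x, float(abs(pi_x - Li_x)), float(bound))   # x, pi(x), |error|, RH bound
```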

Because of its relationship to integers and primes, the Riemann zeta function is an extremely natural function of a complex variable. In some sense, it's the most natural "non-elementary" function of a complex variable that has something to do with integers in general and primes in particular. The function may be defined in many ways, it clearly has infinitely many non-trivial zeros on the \(1/2+is\) vertical axis, and it sounds crazy that we're not able to prove that it has no extra zeroes (which would be paired symmetrically with respect to this vertical axis) in the bulk of the complex plane (well, they would have to be in the width-one strip around the vertical axis).
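
One can also inspect the zeroes numerically (again just an illustration): mpmath's zetazero routine returns the \(n\)-th non-trivial zero, and all zeroes computed to date – billions of them – sit on the critical line, which is evidence for, not a proof of, the hypothesis.

```python
from mpmath import mp, zetazero, zeta

# Print the first few nontrivial zeros of the Riemann zeta function.
# zetazero(n) returns the n-th zero on the critical line Re(s) = 1/2.

mp.dps = 20
for n in range(1, 6):
    s = zetazero(n)
    print(n, s, abs(zeta(s)))   # real part 1/2, |zeta(s)| numerically ~ 0
```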

The values of \(s\) for which \(\zeta(1/2+is)=0\) are very interesting, non-trivial real numbers. The Hilbert–Pólya conjecture is a general approach to the proof of the Riemann Hypothesis that many, including myself, find extremely promising and natural. It says that the set of such real values \(s\) is the spectrum of a Hermitian operator. If this operator could be defined in a way that makes the Hermiticity manifest, and that makes it manifest that the eigenvalues are linked to these zeroes, i.e. the special values of \(s\), that would prove that every such \(s\) is real and that there are no other non-trivial roots (away from the axis).

In some sense, the non-integer values of \(s\), the zeroes of the Riemann \(\zeta\)-function, are "inverse" to the set of primes on the real axis. I won't tell you the exact sense but if the allowed momenta were primes, the allowed windings would be the roots \(s\) of the \(\zeta\)-function. This vague statement also implies that the zeroes of the \(\zeta\)-function become more dense, with a density growing roughly as \(\ln(s)/2\pi\), if you go further away from the real axis; note that the probability that an integer \(n\) is a prime was decreasing as \(1/\ln n\).

I have tried many times to go one step further (or several steps). In one approach of mine, I decided that for each such zero \(s\), there is actually a whole unitary representation of \(SL(2,\RR)\). There should exist a natural physical system with the \(SL(2,\RR)\) symmetry whose states include unitary representations of this group. One has to find the physical system, prove that the second Casimirs are given by \(x(x-1)\) where \(x=1/2+is\), and the Riemann Hypothesis follows from some known results about five classes of the unitary representations of \(SL(2,\RR)\). I find it utterly natural to use the whole \(SL(2,\RR)\) and not just a generic single Hermitian operator, especially because of the role that \(x(x-1)\) plays in the \(\zeta\)-function and the role that the \(1/2+is\) vertical line plays in the theory of representations of that group.

Also, for some time, I believed that the non-existence of the physical string states above Martin Schnabl's tachyon-condensation solution to the open string field theory proved the non-existence of other non-trivial zeroes – because the form of Schnabl's solution may be written in terms of the \(\zeta\)-function, too. I have never been able to complete the proof, not even at the physicist's level of rigor, and I am not sure that a proof of this sort exists today.

At any rate, I tend to think that because the \(\zeta\)-function is so unique in the complex calculus, its very visible and manifestly correct property – the location of zeroes on two straight lines only – is something important that we should understand. The Riemann Hypothesis seems much more unique, special, and fundamental to me than any other problem discussed above.

And that's the memo.


snail feedback (50) :

reader George Christodoulides said...

nice post. now you should create your own website and link this one to it!

reader Scott Aaronson said...

"P = NP would say that a fast solving algorithm exists for each problem
in NP; it doesn't say that we actually know a method how to find such an
algorithm for any NP problem."

Err, see the Cook-Levin Theorem (1971). You only need to give a polynomial-time algorithm for any one NP-complete problem (say 3SAT); then a polynomial-time algorithm for any other NP problem can be explicitly derived as soon as you know why the other problem is in NP at all.

Furthermore, this is not just a theoretical point. These days, people who need to do program-checking, VLSI design, etc. very often simply translate their NP problem to SAT, then run highly-optimized SAT-solving algorithms on it.

reader Luboš Motl said...

Err, see the Cook-Levin Theorem (1971). You only need to give a polynomial-time algorithm for any one NP-complete problem (say 3SAT); then a polynomial-time algorithm for any other NP problem can be explicitly derived as soon as you know why the other problem is in NP at all.

I understand it but it's vacuous relative to the bold desire to supervise Mozart. If we already know why a problem is in NP, i.e. at most as hard as the NP-complete problems, then we know how to reduce it to an NP-complete problem. What a big deal. That's really just a definition of the NP-complete problems. But one may still have trouble solving problems that we can't prove are in NP even though they are there.

reader T H Ray said...

"Just to be sure, I do think – like most experts – that chess is a draw and that P isn't the same thing as NP."

Lubos, it's a false dichotomy -- between the two-player game with an equilibrium point, which is chess, and NP problems (NP-hard, NP-complete) which lack an equilibrium point. A chess master studying a set of moves after a stalemated match has been played is guaranteed to find a crisis, i.e., the system in an equilibrium state, where the match can go either way. Now consider two machine programs playing a match -- is there a subroutine contained in one or the other of them to compute the singularity, i.e., the game at equilibrium, that guarantees a win for the program possessing the knowledge, or a stalemate in the case that both programs possess such knowledge? The problem is, there are many, many such points in an unfinished game -- such that the computational schema to anticipate responses to the crisis begins to resemble a random walk in the abstract chessboard space.

Over a long period of time, and after many chess games, a lot of literature has been compiled on optimal opening strategies and countermoves. Then along comes a Bobby Fischer who opens unconventionally with the king side rook's pawn to rook 4. In fact, we know there exist some initial moves that result in a guaranteed loss -- who knows whether an ostensibly weak opening can be turned into a win?

It's the same with Scott's Mozart comparison. If for every NP-complete problem, there exists a polynomial time solution, P, everyone who knows the solution can play like Bobby Fischer in his prime.

reader Oren Zur-Shavit said...

very interesting, tnx!

reader markus333 said...

You are digging yourself into a hole here. If there was a constructive algorithm for P=NP, this would have huge implications, certainly going way beyond efficiently solving some "boring" permanent problem, as you seem to be putting it. As far as I understand, there would even be a reasonable chance that such an algorithm might knock some of the other millennium problems off their perch - by searching through all possible (polynomial sized?) formal proofs, or something like that.

reader Luboš Motl said...

Dear Markus, if you read my blog entry and my comment, you will see that the non-constructive character of the existence proof for the fast algorithms is the very point of mine. So why are you trying to pretend that I claimed that P=NP means that there is a straightforward, fast procedure to find the fast algorithm for each NP problem? My claim was *exactly* the opposite.

reader Luke Lea said...

P = NP would say that a fast solving algorithm exists for each problem in NP; it doesn't say that we actually know a method how to find such an algorithm for any NP problem.

Naive query from the idiot's corner: Is this like Godel's incompleteness proof where it is shown that undecidable propositions must exist but it isn't shown how to find one or identify one?

reader markus333 said...

Right, that is part of what you said. But you seem to be implying that even if P=NP, this is somehow not amazingly interesting because only a non-constructive proof might exist. But even if it were the case that P=NP could not be proved constructively, an indication of this would be incredibly interesting. Surely? Both philosophically and even physically.

P vs. NP is just one attempt to rigorously define some aspect of an "easy" problem vs. a "hard" one. Is your objection that this kind of question is just not very interesting?

Or is it that these particular complexity classes (P and NP) may not be rooted enough in modern physics and not interesting enough to study for this reason?

Or is it that ANY complexity classes that a complexity theorist might come up with are to some degree bound to be boring and artificial since they have not sprung from the head of string theory? :)

reader Luboš Motl said...

Yes, Markus, I have all the complaints you mentioned.

The proof of the P vs NP assertion doesn't imply anything constructive that could be used in practice or be useful - and even if such a new way to approach problems existed, it could be inferior to existing approaches in many situations. And even if it were superior, it's no contradiction. People sometimes invent new, faster ways to do things. What's the problem?

For example, the universal proving machine could be the Wolfram Alpha v53.0 gadget running on millions of servers in 2080 and it could master proofs up to the length of 40 pages. Any real proof that nothing like that could happen? Such a proving machine could still use lots of infrastructure and special functions and underlying algorithms to deal with different cases, like Mathematica does already today.

My point is that one is trying to find a solution to a messy problem, so the algorithms etc. to do so, while in principle straightforward, may be even more messy.

Also, I agree with the claim that a segregation of problems into "easy" and "hard" problems isn't terribly interesting. In reality, problems are hard or easy sort of continuously - all kinds of shades are in between - and they are easy or hard depending on various parameters such as N but also on many other things. Moreover, there's no counterpart of N in the problems that are most physically relevant because they're not discrete at all. In those cases, N is infinity and any potential difference between slow and fast becomes ill-defined.

I don't know why some people are obsessed with the attempts to divide the problems by their hardness just to two groups - maybe they're obsessed with one bit? The actual outcome is that there exist hundreds of complexity classes that divide the problems into easy and hard in different ways. Everything depends on the details. There isn't any "holy grail" to be found here.

I have already subscribed to the statement you assigned to me in the second paragraph from the end - yes, all these problems are recreational mathematics by their essence, not real deep problems in physics-like mathematics. Just like chess.

And concerning the last paragraph, yes, of course, the unrelatedness to the problems that are natural in string theory does suggest that those things aren't deep. You may try to belittle this important point but the ultimate reason is probably that you don't know *anything* deep about mathematics so all references to deep things just sound funny to you.

reader Luboš Motl said...

Well, it has some similarities but it isn't an equivalent statement. I am just saying that the proof of the existence of a fast algorithm isn't necessarily a constructive proof. Constructive vs non-constructive proofs are less philosophically puzzling than Gödel's theorems and they've been a part of maths long before Gödel.

reader T H Ray said...

Luke, there are two incompleteness theorems of Godel. The first is often colloquially characterized as "Truth is stronger than proof," meaning that there will always exist true statements outside those that can be proved in any formal system. The second concerns the consistency of systems of axioms; a consistent set of axioms cannot prove its own consistency.

Neither of these, though, is directly related to what is called an existence proof -- showing that something exists without explicitly constructing it. To give one practical example, we know (by the Borsuk–Ulam theorem) that at all times there is a pair of antipodal points on our globe where the temperature and barometric pressure are identical. To my knowledge, we don't have a way to locate them, even though they are sure to exist.

reader Luboš Motl said...

Dear T H Ray,

thanks for your comment. I wasn't trying to claim that they were equivalent problems. "Is chess a draw" and "Is P equal to NP" are just two questions in mathematics that can be answered either Yes or No. At this point, both possibilities are viable in both cases.

If you wanted to construct a more direct correspondence between chess and NP, I didn't understand what the analogy was. Your description of crises in chess looks too complicated to me. When chess is equipped with the right rules that prevent infinite repetitions of arrangements of the pieces etc. (by saying it's a draw when they get repeated X times etc.), then the problem to decide whether two ideal players have to end up in a draw is a completely finite, well-defined problem. It's just very complicated so that no one can reliably answer the question today - despite the fact that the rules of chess are "relatively" easy and almost everyone may learn them.

Bobby Fischer's opening is just a technicality in a "heuristic" approach to chess. A full proof that chess is a draw or not a draw would require addressing not only Bobby Fischer's opening but all other conceivable openings and continuations etc. etc. It's a very difficult problem. But it is still a finite one and brute force will almost certainly be extremely useful for the first time when a proof of "chess is a draw" is given. There's no reason why brute force should be useless here. Chess is a messy game with lots of pieces etc. I am saying that P vs NP is a comparably messy problem. Solutions may exist but they may be much messier - although, ultimately, producing polynomially fast algorithms in principle - than in the case of chess or the Four-Color Theorem, for example.

It's the same with Scott's Mozart comparison. If for every NP-complete problem, there exists a polynomial time solution, P, everyone who knows the solution can play like Bobby Fischer in his prime.

I don't understand this paragraph at all. I thought that it was clear that Bobby Fischer was an example of something that definitely is *not* a viable approach to proving or disproving the math statement "chess is a draw". Bobby Fischer may be a good player who beats other human players, perhaps in unexpected ways, but it is a completely different type of activity than reliably deciding on whether chess is a draw.

Also, your implication is manifestly invalid. If P=NP, it doesn't say anything about everyone's being equally good in chess or anything else. One problem in the P or NP class may still be solved 1 million times faster than another problem in the "equivalence" class. The equivalence is just some formal relation looking very vaguely at some abstract properties of the problems and their solutions. It doesn't mean that the two equivalent problems are really equally hard to solve.

And by the way, I think that Bobby Fischer's superiority in chess *is* ultimately about some kind of mental brute force, anyway. These players just have greater memory and patience to try things than the rest of us so they have a higher or much higher chance to succeed in chess games etc. I don't see the slightest glimpse of a "qualitative difference" between Bobby Fischer and a less accomplished chess player. So if you use Bobby Fischer as some kind of a "divine miracle" proving that the segregation between the classes of "hard" and "easy" problems must be qualitative, then I think that your argument is manifestly false because the separation isn't qualitative even in the case of Fischer and worse chess players.

reader markus333 said...

P (or BPP) is a useful concept when describing the fact that I can simulate classical physics beautifully on my iMac, but not the dynamics of even 20 qubits - a fascinating fact all on its own. And the NP class is a kind of "first tool" when trying to formalise the fact that one can reasonably expect to teach some kids string theory, but can't reasonably expect them to derive it from scratch.

I don't understand what's supposed to be recreational about trying to formulate definitions along these lines?

I would also extrapolate that you think any interest in the relationship between P/BPP and BQP is "recreational maths" as well.

But you have got to be aware of some really non-trivial ideas that have arisen recently precisely because there are people thinking about this type of relationship. In fact some of this does touch on string theory, in particular AdS/CFT e.g. (I'm not personally associated with this work, it's just the first thing that came up on Google on the topic).

reader James Gallagher said...

The problem is that P very likely is not NP, so it's a boring philosophical amusement to contemplate. And to argue that there are practical applications is very misleading - improving algorithm efficiency is really unrelated to the higher philosophical issues - which you might contemplate on your holiday period, but really don't have to be concerned with when doing your day job.

It's similar to chemists not caring about QM interpretations.

The work on improving SAT algorithms is solid and very applicable - but no one even needs to know all the higher-level arguments to contribute to these improvements, which are, in the end, the only important part of computer science.

reader Scott Aaronson said...

ROTFL! Improving SAT algorithms is the only important part of CS? What about ... I dunno, compilers, caching, pipelining, the operating system stack, Internet protocols, or any of the other stuff that had to be invented, and that allows a commenter like yourself to spew such ill-considered opinions with such ease?

reader James Gallagher said...

I include most of that cool stuff under "improving algorithm efficiency"

None of that list of items requires any obscure philosophical arguments.

Go on, name one of the items on that list you just trotted out that was enabled/helped/improved/anything by philosophical computer science ideas.

reader Scott Aaronson said...

How about ... all of them? Do you know anything about the formal language theory underlying compilers, for example? OK, but you say that stuff is old hat, from the 60s? Fine: are you aware that where I work (the MIT EECS department), the theory group has spun off three companies (RSA Security in the 80s, Akamai in the 90s, and CILK Arts in the 2000s) that together account for more value than all the spinoffs from all the "applied" parts of the department combined? Does that surprise you? Does it affect your thinking at all?

One more question: is it only in CS that anything for which you can't immediately see the application should be cut off like a cancerous tumor? Or is it also in other fields---like, let's say, physics?

reader James Gallagher said...

(1st paragraph) No it doesn't. Near where I grew up a group of silly geeks started a company called Acorn Computers. I purchased an Acorn Archimedes directly from their offices (!). They are now ARM Holdings, they specialize in chip design, and I don't think any one of them gave a monkey's about Turing et al.

(2nd paragraph) Yes, I think a lot of silly "physics" is worthless too - but when you need a $100 billion experiment to prove anything it's gonna cause all sorts of issues.

I actually think a huge investment into attempting Shor's algorithm might be better spent than a new super collider (but for the reason that it will be bad news for you).

reader Dilaton said...

This is a very nice introduction to the millennium problems :-)

Lumo said

"In some sense, the non-integer values of s, the zeroes of the Riemann ζ-function,
are "inverse" to the set of primes on the real axis. I won't tell you
the exact sense but if the allowed momenta were primes, the allowed
windings would be the roots s of the ζ-function..."

This paragraph reminds me of T-duality ...

And I don't apologize if my comment is over the top this time, since I blame my too fanatic reading of TRF articles for such weird geeky thoughts spontaneously appearing in my mind at the least appropriate occasions all the time :-P !


reader Luboš Motl said...

Let me ignore the rant about CS, I am not sufficiently interested in this stuff. Concerning 4): And you would be right, too.

The deep thing about string theory is surely not the technical detail of replacing point-like particles by strings - the new assumption or decision to make this purely technical change. The depth of string theory is in the results that nontrivially follow, in surprising ways that *could not* be expected from the beginning, including holography, dualities, geometrization of various seemingly non-geometric problems, and so on. But none of these truly deep results is clear from the very beginning when one decides to replace points by strings!

You completely seem to misunderstand the difference between the results and assumptions. Assumptions are never deep if results don't show it. I am amazed how many deeply misguided people must be visiting this blog if they give you upvotes.

reader Eugene S said...

Bobby Fischer was crazy, but not when it came to chess. I doubt very much that he ever opened 1. h4 in a game against a grandmaster that counted. And I'm not even going to check the chess databases, it is just too silly. A new move in chess, i.e., the first move in an opening to deviate from the million-game databases of top-level chess games, is called a "novelty". The notation would be, e.g., 12. Bxf7N (this is shorthand for "Bishop takes on f7; this move has never been recorded in high-level play before in this position"). As heavily analyzed as chess openings are, grandmasters are still in the 21st century coming up with novelties as early as move eight. But 1. h4? Give me a break.

Was Fischer notable for how many novelties he came up with? I don't know but I doubt it. He had all the skills that top grandmasters have -- prodigious memory, obsessive working habits, extremely fast calculator, tenacity and fighting spirit, full-blown paranoia (euphemistically called "sense of danger" by chess players), psychological acuity, a harmonious sense for how pieces work together, to name only some -- but he had just enough extra of some of them that in his prime, he was unbeatable.

He was not, however, the inventor of a new "school of chess", a "new and revolutionary system": the last chess players to make such a claim were the inventors of the so-called hypermodern chess, people like Richard Réti and Aron Nimzowitsch early in the 20th century. It is true that many of Fischer's wins bear a certain trademark signature, a naturalness and inevitability towards what appears to be the foreordained result, but that was not due to some revolutionary innovation in how he approached the game of chess; instead it owed itself to the edge that he had over his opponents in several of the key parameters.

reader Luboš Motl said...

Exactly, Eugene. I wasn't born yet when he was the top star but I think that it's possible to be sure that you're right, anyway.

It's great fun if someone is unbeatable but it always boils down to some quantitative edge between the champion and the competition - to imagine a gap similar to the gap between Earth and the Heaven surely looks poetic but it is completely unrealistic.

reader Eugene S said...

It's Aaronson's loyal readers, coming here en masse to show us how much more numerous they are. Tyranny of the majority. Initially I was heartened by Aaronson's guest article, thinking this might be a terrific opportunity for two smart people to strike up a civil modus vivendi while still vigorously disagreeing with each other. Unfortunately from the start your guest struck a note of "Look at me walking into Lubos' den with a clothespin on my nose, aren't I noble?" And in his latest blog post he compares you with crackpots and loons, patting himself on the back for being such a mensch that he talks to the lowliest of creatures. I guess both your diplomatic skills could stand improvement but it's something you two will have to sort out amongst yourselves.

reader Luboš Motl said...

Dear Eugene, thanks for your summary - it's better not to read similar blogs for the sake of health.

reader James Gallagher said...

UK Grandmaster Tony Miles once beat Karpov with a rook pawn opening (as Black). The match was due to be televised as part of a weekly BBC chess tournament, but there was a strike by BBC workers on that day so no video footage of the game exists (a shame).

reader Scott Aaronson said...

Annnnd ... I think I've located the problem, right here! You explain correctly that the interest of string theory doesn't come from its starting assumptions, but rather from how much comes out of those assumptions that wasn't explicitly put in. Then you assert it as obvious that the same can't possibly be true of complexity theory. To me --- i.e., to someone who's studied the subject much more than you have --- it's obvious that the exact opposite is the case. The PCP Theorem, the role of finite fields in escaping the relativization barrier, GMW and other amazing cryptographic protocols (like Gentry's homomorphic encryption scheme), the Unique Games Conjecture and its connections to geometry, SDP relaxation, the "dualities" between upper bounds for one problem and lower bounds for another problem ... none of these are things anyone was thinking about when they formulated the P vs. NP problem back in the 70s. Yet they're all things that emerged, more-or-less inevitably, from efforts to understand P vs. NP and related problems. If you can't be bothered to learn about these things, but want to persist in attacking complexity theory anyway for having "no surprises" (!!!), then I don't see any point in continuing this discussion.

reader Eugene S said...

Yes, Miles played 1. ..a6, for which Karpov had not prepared. It's still vastly different from 1. h4. It's difficult to explain if you aren't a chessplayer and if you are, then I wouldn't need to explain, but in a nutshell, 1. ...a6, while passive, preserves many options and can transpose into many well-known openings. In fact, this move is routinely played very early in, e.g., the Najdorf Sicilian and the Ruy Lopez. It could also transpose to some kind of hedgehog formation etc. Whereas 1. h4 is over-committal, does not transpose easily into known openings, weakens the kingside, and is just plain ugly :)

reader Scott Aaronson said...

LOL! Somehow the fact that I enter "enemy territory," and wind up with way more upvotes than my bellicose host on his own turf (a surprise to me as well), gets twisted into yet another count against me... :-)

reader LdB said...

Scott, having followed the argument I think I have to side with LM because if you look at Grigori Perelman's proof, for example, it solves the problem but in its proof it gives you no further idea of why it is so or why it is natural, as LM has sort of been expressing it.

So the Poincaré Conjecture proof, as good as it is, has no natural extension, at least none at this stage that we can see.

Solving the P vs NP problem may turn out to be just the same, a solution with no use beyond that, and I believe that is what LM is saying.

I guess the question I would ask Scott is what makes you think that a proof of P vs NP would lead to anything beyond a mechanical proof like Grigori Perelman's.

reader T H Ray said...

Ah, thanks, Lubos. I've learned something new. I thought it was a settled matter that a perfect chess match has to end in a draw, though I hadn't given any thought to a formalized proof. On the other hand, I conjecture that if such a proof can be written, it will surely be an existence proof.

An existence proof for P = NP? would not suffice, regardless of whether the answer is positive or negative. The main reason that the problem is so easy to understand in the first place is that it begins with a construction (an assembled jigsaw puzzle, say) and asks if there is a way to assemble the pieces in at least as short a time as it took to make the cuts (assume that these are created by hand rather than pressed all at once onto the board).

If there is a way to do that in one case, the same general algorithm can be applied to all cases in that class. For instance, take the traveling salesman problem with only a few nodes where the optimum time is known -- increase the complexity by n nodes and call it the problem space. It's obvious that the construction one used to solve the easy problem does not satisfy the problem space of the hard one. Just as one might write a program to optimize assembly time for a 10-piece puzzle, which doesn't satisfy the requirement for optimizing assembly time for a 1000-piece puzzle -- just increasing processor speed, in fact, might even allow a brute force program to assemble the 10-piece puzzle even faster than it was created.

One obvious interest in P = NP? to the CS community, then, is to try and overcome the physical limitations of brute force computation by replacing it with an algorithm that transcends the problem space.


reader Luboš Motl said...

It's not a surprise for me at all. I have known for years that the Internet is overfilled with trolls and imbeciles.

reader Luboš Motl said...

This is a very mixed bag of results - most of them are uninteresting engineering technicalities - but what's more important is that they're not results of one particular compact theory or structure. They're results in a vaguely defined discipline of mathematics.

But if you cherry-pick some nice results over here, it's still sensible to consider them isolated interesting results at various points of mathematics - results that tend to be studied by similar people, largely for sociological reasons. The same can't be said about the results in string theory that are really inseparable from each other.

reader T H Ray said...

I see my dyslexia has gotten me in trouble again. I was thinking of Black responding that way (hence the reference to the king side). It wasn't, however, my main point.

reader Luboš Motl said...

Right, T H Ray. A surprising twist turning a weakness into a special advantage is rare but this sometimes gives new geniuses their flavor. They think different. In some cases, it's due to good luck, in others, it's due to their seeing a bit further than others.

If computers remain weak in some situations, I am afraid that it's because the programs don't have sufficient empathy to expect imperfect decisions by the foe. "He will buy this trick of mine" is something that a human player may often think, but a computer player may avoid such things because it may incorrectly assume that the human opponent is more perfect and mechanical than he actually is.

Otherwise I can't believe that there is any reason why computers shouldn't beat any human player today.

reader Eugene S said...

Dear Tom, I am not an expert on computer chess by any stretch of the imagination. However: as far as I know the only times when an outcome is guaranteed to be a draw are the positions encoded in endgame tablebases. At present, all possible positions with any six pieces still on the board are known and the outcomes -- win, lose or draw -- with best play on both sides are also known. Every node in every branch of the tree is known, so if two chess programs reach such a position their operators will quickly agree to a draw.

What you write about strategy-concealing randomizing subroutines comes as complete news to me. Perhaps you are right, if so I would appreciate a reference. In general, the programs do not "think" about strategy, and especially I don't think that they try to get inside their opponent's silicon brain and guess what plans it might be pursuing. It would be futile because the programs are not "making plans", at least not in the sense that we humans think of the term.

It is perhaps interesting that the best chess players are not chess programs but man-machine teams, with the humans calling the shots on each move after looking at the program's output. A comforting thought when one is worried about being supplanted by computers...

reader Robert Rehbock said...

First, I join the congratulations on your MIT appointment. I have been reading your book and the exchanges on this blog, etc. I don't think it fair to say LM is not willing to learn these things ... I think that first we have to define "these things" more rigorously. "Deeper", "interesting", "more fundamental" seem to be vague and ill-defined because what "... things that emerged, more-or-less inevitably, from efforts to understand ..." are in fact more or less fundamental depends on the definition of fundamental. Because I am not qualified to assert the superiority of either of your views of fundamental, I instead offer that a Justice once said he could not define smut but knew it when he saw it. I agree with LM in great part but hope this dialogue does continue because I am learning much from both of you.

reader Luboš Motl said...

Interesting, Tom. I think that everyone expects chess to be a draw for the same reason. There are many endings and the ending configuration is so far from the initial one and is connected by so many possible games in between that it can't possibly have a clear "greater proximity" to the white player or the black player. It's pretty much symmetric so given the diversity of possible reactions, almost every advantage of the white player may be transformed into an advantage of the black player, too. The draw is the only answer that sort of respects the symmetry of the stochastic messy games in between.

This babbling of mine is very far from a formal proof that chess is a draw. ;-) But more generally, games with many moves where a single move often has the same effect as several moves are likely to be draws if a draw is possible at all.

Your problem space vs time comments are interesting, too.

reader Eugene S said...

Today has been sort of a simultaneous exhibition chess match for you, right? Fielding questions from beginners to experts, duking it out with international heavyweights, writing lengthy technical yet accessible articles, and I see you've also been active on physics SE today, plus pursuing whatever fiendish plans you hatched to snag a Millennium Prize... I guess you're not happy unless your multitasking cerebrum is in absolute, terminal overload.

If you were a chess world champion I would not peg you as Bobby Fischer (evil!) nor as Kasparov (relied too often on intimidation and sheer force of personality) but as Kramnik. Here he is talking about Vasily Smyslov, but it is obvious he is also talking about himself:

- How would you describe the seventh World Champion, Vasily Smyslov?

- How can I express it in the right way? ...
He is truth in chess! Smyslov plays correctly, truthfully and has a natural style. By the way, why do you think he lacks that aura of mystique like Tal or Capablanca? Because Smyslov is not an actor in chess, his play is neither artistic nor fascinating.
But I am fond of his style. I would recommend a study of Smyslov's games to children who want to know how to play chess, because he plays the game how it should be played: his style is the closest to some sort of 'virtual truth' in chess. He always tried to make the strongest move in each position. He has surpassed many of the other World Champions in the number of strongest moves made. As a professional, this skill impresses me. I know that spectators are more interested in flaws ... ups and downs. But from the professional standpoint, Smyslov has been underestimated.

reader T H Ray said...

Hi, Eugene. I'm no expert on computer chess, either. I don't have a handy reference (though that sounds like a research project I should undertake) -- yet I would be very surprised if an analysis of machine moves, even down to an examination of the source code, did not contain some information that a competitor could not decrypt.

Suppose otherwise -- that two programmers agreed in advance to share their strategies and let the machines decide the moves, each machine armed with the other's strategy. The machines themselves would have to have the means to encrypt information in the course of the game, or it would be as pointless as a game of tic-tac-toe (noughts and crosses to the Brits). In other words, even human players who are good at the game and have memorized much of the literature and many games are broadly aware of each other's best strategies. It's no competitive advantage at the master level.

I am looking at the problem in terms of information. Just as G. H. Hardy said, "Chess problems are the hymn-tunes of mathematics" -- what we get from a set of positions has nothing to do with the game itself; it has to do with a single "song" in the "hymnal." More than that, the initial song determines the order in which at least two other songs are to be sung, so we are getting what comes to resemble a random walk in the problem space. Because the total information is finite, albeit very large, it can be modeled as a random walk around an equilibrium point, which is a common model in economic theory today, taken from John Nash's discovery of the Nash equilibrium.

There's a point here somewhere. :-) Ah, right -- it's that the players choose where the equilibria will fall. For the entire game, this collapses to one point; if one knew, before the game began, where that point will lie, one could force all the moves and make the winning choices.

reader T H Ray said...

Thank you, Lubos. I'm really enjoying this exchange. I think I may have responded to the points of your post in one to Eugene above.

My conjectures stem mostly from Leslie Lamport's research into Buridan's Principle, which I find generalizes; i.e., the arithmetic theorem that a point can simultaneously approach any other set of points -- provided that it is far enough away -- implies a singularity in every measurement function from an initial condition involving a continuous range of variables.

reader Scott Aaronson said...

OK Lubos, I'll bite. Of the things I listed, exactly which ones are "uninteresting engineering technicalities"? Can you even explain any one of them in enough detail to justify that opinion?

reader Joe Shipman said...


First of all, you should note that Checkers has been solved and shown to be a draw, using cleverly organized brute force.

Probably 95% of chess grandmasters believe that with optimal play chess is a draw, but not for the reason you say. Your "babbling" actually gets at the intuitive reason that checkers is a draw, because the symmetry between White and Black is more important than the fact that Black moves first, since in checkers having an extra move doesn't help much.

However, in chess, having an extra move is very important (this is also true in Go). Players will frequently sacrifice a pawn to obtain "the initiative" even when no other positional advantage is involved. There is a strong consensus that having White in chess is a significant advantage. The reason chess is thought by most grandmasters to be a draw is that the advantage is not quite large enough. There is a "margin of draw" which, as has become gradually clearer over the last 50 to 100 years, is larger than the amount of White's initial advantage. In very rough terms, White starts with an advantage worth a little less than half a pawn, but an advantage of a full pawn is just barely enough to win (typically, high-level games in which one player has an extra pawn but the other player has any nontrivial compensating positional advantage turn out upon deep analysis to be drawn with correct play).

For the game of Go, there is no margin of draw because the winner is determined by a point count at the end. The amount of advantage the first player (Black, in Go) has is a matter of debate, and the compensation (komi) given to White in tournaments to make the game even has risen slightly in recent decades, but no one thinks it should be less than 4 points or more than 8 points.

reader Joe Shipman said...

I agree with you that the Riemann Hypothesis is probably the deepest, and I also agree that even if P=NP that might not be of practical importance. However, it might also be of tremendous practical importance, which you shouldn't rule out. As you point out, Aaronson's opinion that factoring is an easy problem but 3SAT is a hard problem (I mean "easy" and "hard" in the sense of being able to answer small instances of the problems with high confidence by applying physics as well as math) is faith-based, while others have faith that quantum computers can't be built and factoring is hard, and still others might have faith that an effective way to calculate matrix permanents might be found (I am rooting for Leslie Valiant to adapt Scott's Boson Sampling work to accomplish this ;) ).

My favorite problem that failed to make the "Millennium" list is the Invariant Subspace Problem--does every bounded linear operator on Hilbert space have a nontrivial invariant subspace? I think physicists' intuition might help here as much as it might help for the Riemann Hypothesis. Have you thought about this problem?

reader Θ³Σx² - ∂³Σx² - ΘΣ said...

Good review; my contribution to the RH problem was made long ago. Cheers

reader Luboš Motl said...

Thanks for your comments. I don't believe that my argument for why it should be a draw applies more to checkers than to chess.

Checkers is much closer to "drawing matches" because the length of the moves isn't as variable as it is in chess, so the precise timing of the moves may be more important. It's still a draw, as you said.

It's also much harder to go through chess configurations because the pieces are inequivalent to each other so the number of configurations is about 16! * 16! times greater, if I exaggerate a bit.
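Just to put a number on that deliberately exaggerated factor (nothing deep, one line of Python):

# Rough size of the deliberately exaggerated 16! * 16! factor mentioned above.
import math
print(math.factorial(16) ** 2)   # 437763136697395052544000000, about 4.4 * 10^26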

reader Luboš Motl said...

Dear Joe, right.

Didn't you mean Banach space instead of the Hilbert space? "Banach" is what would make it a topic close to what my uncle has studied for decades. "Hilbert" makes it sound more physics-y but it's probably a wrong description of the problem.

I am not familiar with the problem and it's sort of hard to imagine that there is any unsolved problem of this Ansatz among the operators that are non-pathological from a physics viewpoint. It sounds too straightforward...

reader T H Ray said...

Hi again, Eugene. Possibly Lubos will consider these posts off topic, but I don't think so. They are deep questions that relate quite directly to the millennium problems P=NP? and the Riemann Hypothesis.

And I wouldn't be so quick to judge yourself a non-expert -- you quite accurately nailed the significance of backward calculation. It's not that "a plan" by either competitor is meaningful -- it's that the two-player macro plan is indifferent to winners and losers; the continuous function bounded by the start and the end of the game theoretically has a critically singular point independent of where the game begins or ends. As you and Lubos have pointed out, for many midgame or endgame problems one can identify a set of perfect information that predicts the conditional outcome. What is more useful for us to know (mathematically) is a way to generalize the problem without boundary conditions, avoiding all that "messy stochastic" stuff that Lubos spoke of. Easier said than done, of course.

Ten years ago I wrote a paper, "P = NP if time is probabilistic," that never got any traction. I went on and developed some of the ideas for a paper presented at NECSI ICCS 2006, and forgot about it -- the main theme is the search for an n-dimensional equilibrium function that would obviate boundary conditions. It's retrievable online if you or anyone else is interested.

I decided to mention the paper here because it broadly attacks what we've been discussing, in terms of a love that Lubos and I both share for Analysis, i.e., the mathematics of continuous functions -- which is exactly what the "backward calculation" to which you refer requires, i.e., reversibility. The summary contains commentary on string theory as a self-organized phenomenon, and on the Riemann hypothesis as an equilibrium function on the complex sphere.
All best,

reader Joe Shipman said...

For Banach spaces, examples are already known of bounded operators with no nontrivial invariant subspace. For Hilbert space the answer is unknown (for a complex space of finite dimension at least two, or of uncountable dimension, there is always a nontrivial invariant subspace, so one can restrict attention to L2 (or l2)).
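In symbols, for the remaining open case -- a separable, infinite-dimensional complex Hilbert space such as \(\ell^2\) -- the question is whether

\[ \forall\, T \in B(\ell^2)\ \ \exists\, M \subseteq \ell^2: \quad M \text{ is a closed subspace},\ \ \{0\} \neq M \neq \ell^2,\ \ T(M) \subseteq M. \]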

reader Joe Shipman said...

Your argument applies to both checkers and chess, but there is an additional factor in chess, the initiative, which is more important, and so the true reason chess is a draw is different (the "margin of draw": advantages may not be large enough to force checkmate).