Some context: this is an email I sent on the subject of Deolalikar's attempted proof that P is not NP. I read the first 6 chapters of the proof (the 7th chapter is the core of the proof itself), but I am not an expert on the material and did not fully understand everything I read. I also think the Clay Math Institute's explanation of P vs. NP is more accessible than my own.

The course numbers I mentioned below are Caltech courses. The errors I mentioned are minor, and could well stem only from my own misunderstanding of the work.


Back to P vs NP. I've decided to present my thoughts on the paper below in a friendly, colorful* Q & A format. Maybe we can have an interesting discussion on the topic.

*Any color you want so long as it is black.

Q: What is the P vs NP question?

The P vs NP question is a question in complexity theory, the study of the "computational complexity" of problems. A /problem/ is a yes-no question of the form, "Does x have a certain property?", and a solution to the problem is a program that takes the input x and determines correctly whether x has that property. Complexity theory asks, for a particular property, how fast the fastest program that checks that property can be. [We typically express these speeds in big-Oh notation. For example, the fastest solution to "Is the graph x connected?" is O(n), where n is the size of the graph.]
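
To make the "problem = yes-no question, solution = program" framing concrete, here is a minimal Python sketch of my own (the names and graph representation are mine, not anything from the paper) deciding "Is the graph x connected?" by breadth-first search, in time linear in the size of the graph:

    from collections import deque

    def is_connected(n, edges):
        """Decide "Is the graph connected?" for a graph given as a vertex
        count n and a list of undirected edges. Runs in time linear in the
        size of the graph (vertices plus edges)."""
        if n == 0:
            return True
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen = {0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return len(seen) == n

    # A path on 4 vertices is connected; removing an edge disconnects it.
    print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
    print(is_connected(4, [(0, 1), (2, 3)]))          # False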

"P" and "NP" are complexity classes, which means they are a collection of problems, related somehow by how fast their solutions are. Loosely, /P/ is the class of all problems whose solution is "fast". [/Fast/ means that the solution runs in polynomial time; the name "P" stands for "polynomial". A program runs in /polynomial time/ if the time it takes to run on an input of size n is at most than n^k for some fixed k.] /NP/ is the class of all problems whose answer can be verified by a fast program (polynomial time). ["NP" stands for "nondeterministic polynomial".] For example, the problem of determining whether a graph has a Hamiltonian cycle is in NP; given a path in the graph, it is fast to verify whether or not it is a Hamiltonian cycle. However, we don't know of any way to quickly find such a path.

It is clear that P is a subset of NP: if it is easy to find the solution to a problem, then it is also easy to verify the solution someone gives you. The P vs NP question asks if the converse is true. Is P a strict subset of NP, or is P equal to NP? Most believe that P is not equal to NP; for example, it is believed that the problem of determining whether a graph has a Hamiltonian cycle is not in P, but it is known that it is in NP.

The significance of the P vs NP problem is that there is a broad class of problems that are known to be in NP, but are believed to not be in P. For example, the problem of breaking a particular cryptographic protocol typically lies in NP, because given a solution (i.e., the private key to the protocol), it is easy to verify whether it is correct or not. However, we hope that breaking these cryptographic protocols is not a problem in P -- otherwise they could be broken quickly, because problems in P have fast solutions.
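
For a concrete (and much simplified) flavor of this: if a protocol's security rests on factoring, then "breaking" it amounts to finding a nontrivial factor of a public number N, and checking a claimed factor is trivial. A minimal sketch with a toy number of my own choosing:

    def is_valid_break(N, p):
        """Verifying a claimed "break" (a nontrivial factor p of N) is fast:
        one modular reduction. Finding p in the first place is the problem
        believed to be hard."""
        return 1 < p < N and N % p == 0

    N = 3233  # toy example: 61 * 53
    print(is_valid_break(N, 61))  # True
    print(is_valid_break(N, 57))  # False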

Q: Did I need to understand all that to read Vinay Deolalikar's proof?

No. There is zero complexity theory in the proof; none of the concepts above are mentioned anywhere. Instead, the author cites two theorems to translate the problem of P vs NP into a completely different looking problem in logic. Relatively little background is needed to tackle it, as most of the concepts the author uses are briefly introduced before they are needed.

Q: Is the proof correct?

Based only on my understanding of the mathematics, I don't know. The portion I read (chapters 2 through 6) gives every appearance of being correct. However, like any advanced mathematical work, this paper rests on many deep and fundamental assumptions. To a novice, these assumptions appear completely opaque, or go unnoticed altogether; to an expert, they are transparent. It would take someone experienced specifically in the author's fields to see whether the subtle and abstract ways these assumptions underlie what he writes are valid. If there is an error in the proof, it is almost certainly in this "deep" aspect of the proof.

I did see some minor errors, more on that in a moment.

Based on the context of the proof, I think the proof is most likely flawed, perhaps mortally, or perhaps in a way that could be repaired in two years (cf. Fermat's Last Theorem). The number of false proofs greatly exceeds the number of true proofs, and the likelihood of a lone individual making a subtle mistake is high. I didn't get a great "vibe" from the portion I read; it seemed rather heavy on exposition, introduction, and citing other results and light on proving new things. (Fine by me, it makes it easier to read.)

Q: What about those errors you mentioned?

There was a fair drizzling of typos, e.g., a LaTeX typo, a minus sign instead of a plus sign, K instead of k, and the occasional wrong variable. For whatever reason the density of these errors seems to increase as one goes further in the text, culminating in a bizarre mess on the second page of chapter 6. In a single example there, a factor of |V| factorial was dropped, something was divided by 2 where it should have been "choose 2", and the answer deduced depends on one variable it should be independent of, and on another variable in the wrong direction. (It is quite possible I'm misunderstanding what the example is an example of.) Fortunately the main exposition of the paper doesn't depend upon the example, I suppose. Immediately afterwards the author states a Theorem 6.3 with neither a proof nor a citation; the theorem asserts that something equals a certain symbol that is not defined or even mentioned elsewhere in the paper. I can't tell whether Theorem 6.3 is used anywhere.

However, other than that weird mess in chapter 6, the technical content of the paper appears to be quite good. It does look to me like the author proofread the earlier parts of the paper more.

[Edit: If someone understands chapter 6 better than I, and I am mistaken, I would be happy to hear an explanation.]

Q: What's the general idea of the proof?

That depends; do you know about 3-SAT?

Q: Never heard of 3-SAT. What's the general idea of the proof?

Recall those complexity problems I mentioned at the beginning. One such problem is the Boolean satisfiability problem (SAT); the variant in which every clause has exactly k literals is called k-SAT (3-SAT when k = 3). k-SAT is known to be a problem in NP, and in fact is one of the most famous problems in NP. If it can be shown that k-SAT does not lie in P, then this implies that P is not NP. The author aims to show that k-SAT does not lie in P.
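
For reference, a k-SAT instance is an AND of clauses, each clause an OR of k literals (a variable or its negation), and the question is whether some true/false assignment satisfies every clause. A minimal sketch in Python (the integer encoding is my own convenience, not the paper's notation):

    # Clause (x1 OR not x2 OR x3) is written (1, -2, 3).
    formula = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]

    def satisfies(formula, assignment):
        """Verify an assignment (dict: variable -> bool) in polynomial time."""
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in formula)

    print(satisfies(formula, {1: True, 2: True, 3: True}))    # False (last clause fails)
    print(satisfies(formula, {1: True, 2: False, 3: True}))   # True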

To show that k-SAT does not lie in P requires showing that /every/ program that runs in polynomial time does not correctly solve k-SAT. The author tries to show that all polynomial time algorithms behave "locally" in some sense; that influence can only propagate from one variable to another in some limited way. Then, the author discusses the statistical properties of solutions to k-SAT in different phases (this is where the physics comes in). In one of the phases, solutions to k-SAT are sparse and hard to find; they are "too far" apart, in some sense, for a polynomial time algorithm to find them. This shows the desired result.

Q: I know about 3-SAT. What's the general idea of the proof?

The author considers the statistical properties of solutions to SAT_k(n, m), a randomly chosen satisfiability problem in n variables, with m clauses, and each clause having k literals (i.e., a random k-SAT problem). It turns out a key value is alpha = m / n, the "clause density". For higher values of alpha there are more clauses per variable (more constraints relative to the degrees of freedom), so solutions are scarcer; for lower values of alpha, solutions are more plentiful. In the limit as n goes to infinity (for fixed k), k-SAT problems resolve into three distinct phases. If alpha is sufficiently low, below some alpha_d, there are many solutions, and the solutions are mostly connected in one huge chunk (two potential solutions are adjacent if they differ in a single variable). If alpha is sufficiently high, above some alpha_c, there are almost surely no solutions (the probability of a particular k-SAT problem having any solution goes to 0 as n goes to infinity). For alpha between alpha_d and alpha_c, there are almost surely solutions (the probability goes to 1 in the limit), but the solutions are not all in one chunk; instead they lie in exponentially many small chunks that are very far apart from each other (Hamming distance on the order of n).
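
If you want to poke at this numerically, here is a rough sketch of how one might (brute force, tiny n, so only suggestive of the limiting behavior; all names and parameter choices are mine):

    import itertools, random

    def random_ksat(n, m, k=3):
        """A random k-SAT instance: m clauses over n variables, each clause
        being k distinct variables with random signs."""
        return [tuple(v if random.random() < 0.5 else -v
                      for v in random.sample(range(1, n + 1), k))
                for _ in range(m)]

    def satisfiable(formula, n):
        """Brute force over all 2^n assignments (fine only for tiny n)."""
        for bits in itertools.product([False, True], repeat=n):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in formula):
                return True
        return False

    # Estimate P(satisfiable) at a few clause densities alpha = m / n.
    n, trials = 10, 100
    for alpha in (2.0, 4.0, 6.0):
        hits = sum(satisfiable(random_ksat(n, int(alpha * n)), n)
                   for _ in range(trials))
        print("alpha = %.1f: about %d%% satisfiable" % (alpha, 100 * hits // trials))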

This is where the physics comes in: the critical values alpha_d and alpha_c are phase transitions between the three distinct phases of k-SAT. There's even a diagram of this (chapter 5).

All known approaches to k-SAT only work effectively in the alpha < alpha_d phase; in the alpha_d < alpha < alpha_c phase, where there are solutions but not many, they fail to find solutions in polynomial time. The author intends to show that this is an intrinsic property of all polynomial time algorithms. The author uses the fact that the class of all polynomial time algorithms is equivalent, in some sense, to first order logic plus least fixed point operators (basically, induction) to show that poly-time algorithms must act "locally" in some sense. For some reason, having solutions on the order of n apart in the middle phase of k-SAT makes these "local" algorithms insufficient to solve k-SAT.
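
To give a feel for "least fixed point operators (basically, induction)": the textbook example is graph reachability, which is not expressible in plain first order logic but becomes expressible once you may iterate a formula to a fixed point. A toy Python rendition of that iteration (my own illustration of the general idea, not the paper's formalism):

    def reachable(edges, source):
        """Least fixed point of  R(y) := (y == source) OR exists x: R(x) AND edge(x, y).
        Start from the empty set and apply the operator until nothing changes;
        monotonicity guarantees we stop at the least fixed point."""
        R = set()
        while True:
            new = {source} | {y for (x, y) in edges if x in R}
            if new == R:
                return R
            R = new

    print(reachable({(0, 1), (1, 2), (3, 4)}, 0))  # {0, 1, 2}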

Since k-SAT is in NP, showing that it is not in P establishes that P is not NP.

Q: What physics does the author use?

Besides the phase transitions in k-SAT I just discussed, the author also uses some stat-mech-like machinery in chapter 2 to show that certain dependency relationships are local (I didn't quite follow it myself). The author defines a Gibbs distribution in terms of the partition function, exactly as in Ph2 / 12.
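
For concreteness, a Gibbs distribution assigns a state x probability proportional to exp(-E(x)/T), with the partition function Z as the normalizer. A minimal numerical sketch (toy energies of my own, nothing from the paper):

    import math

    def gibbs(energies, T=1.0):
        """Gibbs distribution: p(x) = exp(-E(x)/T) / Z, where the partition
        function Z is the sum of exp(-E(x)/T) over all states x."""
        weights = {x: math.exp(-E / T) for x, E in energies.items()}
        Z = sum(weights.values())
        return {x: w / Z for x, w in weights.items()}

    # Toy two-state system: the lower-energy state gets more probability.
    print(gibbs({"ground": 0.0, "excited": 1.0}, T=1.0))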

Q: If I wanted to look at the paper, what should I look at, and what background would I need?

I suggest that math-oriented folk try chapter 3, and that CS people go for chapter 5. Physicists should read up on k-SAT (http://en.wikipedia.org/wiki/Boolean_satisfiability_problem) and then go to chapter 5 for the phase transitions.

Chapter 1 is the synopsis; it went way over my head before I read the other stuff.

Chapters 2 through 6 introduce definitions and theorems from outside sources; the author doesn't prove anything novel here. By and large very little background knowledge is needed, as the author introduces the needed concepts as he goes along.

Chapter 2 introduces the notion of conditional independence. Some basic graph theory is needed, and I had to look up a few definitions online. The author introduces Markov random fields and, equivalently, Gibbs distributions. The chapter is somewhat tedious and long but has its virtues. I found this resource (http://www.andrew.cmu.edu/user/scheines/tutor/d-sep.html) good for background on some of it.

Chapter 3 is fun and fairly clean. Builds a little off of Ma 6c type material, and goes into model theory a little. It does toss around a few concepts such as first/second-order logic and transfinite induction, but you don't actually have to understand them to follow along.

Chapter 4 was fairly painful; I don't have a good sense of what it was about. It builds a lot on chapter 3, and assumes you're pretty comfortable with model theory.

Chapter 5 is easy reading, so long as you are comfortable with k-SAT. It's pretty concrete, and also there are phase transitions in a surprising context. Chapter 5 contains some of the central ideas of the proof, and most of my explanation above is based on content from this chapter.

Chapter 6 is okay, and quite short. It's about random graphs.

Chapter 7 is the heart of the proof, and 25 pages long (a fourth of the whole paper). I didn't read any of it.

There's also an appendix.

Q: If I'm bored, what else could I do?

A great exercise is to prove Theorem 3.5. It requires no background other than Definitions 3.1, 3.2, and 3.4, and knowing what a power set is. The level of difficulty is about the same as a medium-hard homework problem in Ma 5 or Ma 108. A good warm up is to prove Lemma 3.3, to test that you understand the definitions.

The two parts of the theorem can be shown independently, or you can observe that the first part follows directly from the second.

It is interesting to see what happens if you let the set A in the theorem be infinite (the theorem is only stated for finite A). It is not hard to find a counterexample to the second half of the theorem with A infinite. Much harder is to show that the first part of the theorem still holds true in the case of A infinite; to do this requires knowing concepts from Ma 116. (The contrast in difficulty between finite and infinite A is a little surprising to me.)

For those comfortable with k-SAT, another fun exercise, also not difficult, comes from a comment the author makes on page 60, in chapter 5. The author is discussing the phase transition alpha_c I mentioned before, and idly remarks that alpha_c = 2^k ln(2) in the limit as n goes to infinity. Find a convincing argument to support this value of alpha_c. (The difficulty lies not in the computations, which are only a few lines, but in figuring out what the question is asking and how to compute the value.)

This is a great example of a "physics proof", which I discussed in a previous email. The heuristic I am looking for is simple and insightful, and makes it clear /why/ there is this phase transition. The heuristic can be made into a rigorous proof, by adding some bookkeeping and careful pushing of symbols around, but doing so makes it harder to understand for someone not already familiar with the underlying idea.
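
(Spoiler-ish hint, so skip this if you want to work the exercise out yourself.) One standard heuristic of this flavor, which I believe is the kind of argument intended, is a first-moment count: a random assignment satisfies a random k-clause with probability 1 - 2^(-k), so the expected number of satisfying assignments of SAT_k(n, alpha*n) is 2^n * (1 - 2^(-k))^(alpha*n), which flips from exponentially large to exponentially small near alpha = ln 2 / (-ln(1 - 2^(-k))), and that is approximately 2^k ln 2 for large k. A few lines of Python to check the numbers:

    import math

    def critical_alpha(k):
        """Clause density where E[#solutions] = 2^n * (1 - 2**-k)**(alpha*n)
        crosses 1, i.e. alpha = ln 2 / -ln(1 - 2**-k)."""
        return math.log(2) / -math.log(1 - 2.0 ** -k)

    for k in (3, 5, 8, 12):
        print(k, round(critical_alpha(k), 2), round(2 ** k * math.log(2), 2))
    # The two columns agree for large k, since -ln(1 - 2^-k) ~ 2^-k.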