CMU Randomized Algorithms

Randomized Algorithms, Carnegie Mellon: Spring 2011

Another Algorithm for SAT

BTW, when solving HW#1 Problem #2, it may be interesting to try to solve the problem for {k}-SAT. The algorithm remains the same, and you should be able to show a success probability of approximately {(\frac{k}{2(k-1)})^n}. As a sanity check, for {k=3} this is again {(3/4)^n}.
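If you want a quick numeric check of that sanity check, here is a throwaway snippet (the function name is mine, not part of the homework):

```python
from fractions import Fraction

def success_prob(k, n):
    """Per-run success probability (k / (2(k-1)))^n from the hint above,
    computed exactly with rationals."""
    return Fraction(k, 2 * (k - 1)) ** n

# For k = 3 the base is 3/4, recovering the (3/4)^n bound from class.
print(success_prob(3, 10) == Fraction(3, 4) ** 10)  # → True
```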


So here’s a different randomized algorithm to solve {k}-SAT in less than {2^n} time. This one is from a paper titled Satisfiability Coding Lemma due to Ramamohan Paturi, Pavel Pudlak and Francis Zane. Hence, we’ll call this the PPZ algorithm.

Pick a random permutation {\pi} on the variables. Pick a uniformly random “start” assignment {A_0: [n] \rightarrow \{0,1\}} for the variables. Now consider the variables in the order {\pi}, and do the following:

If the current variable {x_i} is the only remaining variable for some clause, set it to satisfy that clause. (If one such single-variable clause forces {x_i} to be true while another forces it to be false, the current run cannot succeed, so you can abort the process at this point.) If the current variable {x_i} occurs only in clauses with more than one remaining variable, set it equal to {A_0(i)}.

Now, setting this variable to {0} or {1} will cause some clauses to be satisfied: remove these clauses. In other clauses, this variable will just disappear. E.g., if a clause was {(x_1 \lor \lnot x_7 \lor x_9)} and we set {x_1} to {0}, then the new clause will just be {(\lnot x_7 \lor x_9)}. Do all these simplifications, and then go on to the next variable.

That’s it. To recap: go through the variables in a random order; if the current variable is forced (due to some clause of length {1}), set it to the forced value, else set it randomly. Continue until we have a complete assignment.
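The whole procedure fits in a few lines. Here is a minimal Python sketch of one run (the clause representation and function names are my own choices, not from the paper): clauses are sets of nonzero integers, with literal {+i} standing for {x_i} and {-i} for {\lnot x_i}.

```python
import random

def ppz_run(clauses, n, rng=random):
    """One run of the PPZ procedure on a CNF formula over x_1..x_n.

    `clauses` is a list of sets of signed ints (+i means x_i, -i its
    negation, a DIMACS-style convention). Returns a satisfying
    assignment as a dict {var: bool}, or None if this run fails.
    """
    order = rng.sample(range(1, n + 1), n)                 # random permutation pi
    a0 = [None] + [rng.random() < 0.5 for _ in range(n)]   # random start assignment A_0
    live = [set(c) for c in clauses]
    assignment = {}
    for x in order:
        # Collect the literals of x that occur as unit (length-1) clauses.
        forced = set()
        for c in live:
            if len(c) == 1:
                (lit,) = c
                if abs(lit) == x:
                    forced.add(lit)
        if len(forced) == 2:
            return None                       # x forced both ways: abort this run
        value = (x in forced) if forced else a0[x]
        assignment[x] = value
        lit = x if value else -x
        # Clauses containing the satisfied literal disappear; in the others
        # the falsified literal -lit just drops out.
        live = [c - {-lit} for c in live if lit not in c]
        if any(not c for c in live):
            return None                       # some clause became empty: run failed
    return assignment
```

Running this repeatedly and returning the first non-`None` result gives the full algorithm; by the theorem, the expected number of repetitions is at most {2^{n(1-1/k)}}.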

A theorem due to PPZ says:

Theorem 1
Given a satisfiable {k}-SAT formula, the probability that this algorithm finds a satisfying assignment is at least {2^{-n(1-1/k)}}.

Hence for {k = 3}, this gives an algorithm with expected running time approximately {2^{2n/3} \approx 1.59^n}. Not quite as good as Schöning’s algorithm, but not too bad! And the proof is also very clean. In this post we’ll just consider the case where there is a unique satisfying assignment {A^*:[n] \rightarrow \{0,1\}}. (The general case is not much harder; see Section 4.1 of the PPZ paper.)

So yeah, there’s a unique satisfying assignment {A^*}. This means that for every variable {x_i}, changing the value of {A^*(i)} to {\lnot A^*(i)} will give us an unsatisfying assignment, and some clause will complain because it is not satisfied any more; if there are several, pick any one. Call this a critical clause for variable {x_i}, and name it {C_i}. OK, {n} variables, each with its very own critical clause.
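With clauses written as sets of signed integers ({+i} for {x_i}, {-i} for {\lnot x_i}; a representation chosen just for illustration), finding a critical clause for each variable is a short computation, assuming as above that the satisfying assignment is unique:

```python
def critical_clauses(clauses, assignment):
    """Map each variable to one clause that breaks when that variable alone
    is flipped in the satisfying `assignment` ({var: bool}).

    Clauses are sets of signed ints: +i means x_i, -i means its negation.
    Uniqueness of the satisfying assignment guarantees that flipping any
    single variable falsifies at least one clause.
    """
    def satisfied(clause, a):
        # A literal is true iff its sign matches the variable's value.
        return any((lit > 0) == a[abs(lit)] for lit in clause)

    crit = {}
    for x in assignment:
        flipped = dict(assignment)
        flipped[x] = not flipped[x]
        # Pick any clause the flip breaks (the post says any choice is fine).
        crit[x] = next(c for c in clauses if not satisfied(c, flipped))
    return crit
```

For example, on the formula consisting of all 3-clauses over {x_1,x_2,x_3} except {(\lnot x_1 \lor \lnot x_2 \lor \lnot x_3)} (whose unique solution sets everything true), flipping {x_1} falsifies exactly the clause {(x_1 \lor \lnot x_2 \lor \lnot x_3)}, which is therefore {C_1}.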

Why do we care about critical clauses? Suppose that in the clause {C_i}, the {k-1} variables other than {x_i} have already been set to their values under {A^*}. Then the only way to satisfy {C_i} is to also set {x_i} equal to {A^*(i)}. And if, in the random order in which we consider the variables, {x_i} comes after all the other variables of {C_i}, this is precisely what the PPZ algorithm will do! Indeed, it will not even have to make a random choice for {x_i}: our hand will be forced to choose {A^*(i)}.

So, for a random permutation {\pi}, call a variable {x_i} lucky if, among the {k} variables in its critical clause, {x_i} places last in the ordering {\pi}; and let {L(\pi)} be the set of lucky variables for the permutation {\pi}. Now suppose we lucked out and chose {A_0(j) = A^*(j)} for all the variables {x_j \in [n] \setminus L(\pi)}. Then every unlucky variable gets its correct value from {A_0}, and every lucky variable {x_i} is forced, as in the previous paragraph, to take the value {A^*(i)}. So, for the permutation {\pi}, the chance (over the choice of {A_0}) that we find the satisfying assignment {A^*} is at least {2^{-(n-|L(\pi)|)}}.

Finally, what is {E[2^{-(n-|L(\pi)|)}]}, the expectation taken over the random permutation {\pi}? Rewriting, it is {E[2^{|L(\pi)|}]/2^n}. By convexity (Jensen’s inequality), the numerator is at least {2^{E[|L(\pi)|]}}. And what is that expectation? See, there are {n} variables, and each one has a {1/k} chance of being lucky, i.e., of being placed last among the {k} variables of its critical clause. So the expected size of {L(\pi)} is {n/k}, and our success probability is at least {2^{n/k - n} = 2^{-n(1-1/k)}}, proving the theorem for the unique-assignment case.
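Both steps of this calculation can be sanity-checked by brute force over all permutations. The toy critical-clause structure below (only the variable sets matter for who places last, so signs are omitted) is made up purely for illustration, with {n=4} and {k=3}:

```python
import itertools
import math

# Hypothetical critical clauses for n = 4 variables, k = 3:
# crit_vars[x] is the set of variables appearing in x's critical clause C_x.
crit_vars = {1: {1, 2, 3}, 2: {2, 3, 4}, 3: {3, 4, 1}, 4: {4, 1, 2}}
n, k = 4, 3

total_2L, total_L = 0.0, 0.0
for pi in itertools.permutations(range(1, n + 1)):
    pos = {v: i for i, v in enumerate(pi)}
    # x is lucky iff it comes last in pi among the variables of its critical clause.
    lucky = [x for x, vs in crit_vars.items() if pos[x] == max(pos[v] for v in vs)]
    total_2L += 2.0 ** len(lucky)
    total_L += len(lucky)

m = math.factorial(n)
print(total_L / m)                            # E|L(pi)| comes out to n/k = 4/3
print(total_2L / m >= 2 ** (total_L / m))     # Jensen: E[2^|L|] >= 2^{E|L|} → True
```

Here the inequality is strict ({E[2^{|L(\pi)|}] = 8/3 > 2^{4/3}}), since {|L(\pi)|} genuinely varies with {\pi}; convexity only ever helps us.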


A couple of notes: Firstly, you can think of the PPZ algorithm as a randomized non-backtracking Davis-Putnam procedure. You’ll probably hear about DPLL algorithms (named after Martin Davis, Hilary Putnam, George Logemann, and Donald Loveland) sooner or later, if you haven’t already; so if you’re interested, check out these notes by Satish Rao.

Secondly, the PPZ algorithm can be improved. After their paper, Paturi, Pudlak and Zane teamed up with Mike Saks to give us the PPSZ paper (here, here). Their new algorithm first does some simple preprocessing (a few steps of resolution, since you ask), and then runs the PPZ algorithm. The new PPSZ analysis beats Schöning’s performance of {(\frac{2(k-1)}{k})^n} for {k = 4} and higher.

Moreover, as we mentioned in this post, the two algorithms work better in somewhat orthogonal cases: Schöning’s algorithm works better when there are many “well-spread” satisfying assignments, and PPSZ works better with “few” solutions. And the papers of Iwama and Tamaki, and Hertli et al., combine these two algorithms carefully to get the current world record algorithms. Can you beat this world record?
