We want to assign truth values to each of the $n$ variables so that $F$ is satisfied (such an assignment may or may not exist).
Algorithm for 2-SAT
Start with an arbitrary truth assignment
Repeat for at most $2mn^2$ iterations, or until all clauses are satisfied:
Choose an arbitrary clause that is not satisfied
Choose one of its two literals uniformly at random and flip the value of the variable corresponding to that literal.
If a satisfying assignment was found, return it; otherwise return UNSAT.
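As a concrete illustration, here is a minimal Python sketch of this procedure. The clause encoding (a list of pairs of nonzero integers, where $k$ stands for $x_k$ and $-k$ for $\neg x_k$) and the name `random_walk_2sat` are choices made for this sketch, not something fixed by the notes above.

```python
import random

def random_walk_2sat(n, clauses, m):
    """Randomized 2-SAT via random walk (sketch).

    n       : number of variables x_1, ..., x_n
    clauses : list of 2-literal clauses; literal k means x_k, -k means NOT x_k
    m       : repetition parameter; we allow 2 * m * n^2 flips in total
    Returns a satisfying assignment (dict var -> bool) or None for "UNSAT".
    """
    def satisfied(clause, assign):
        # A literal k is true iff assign[|k|] matches the sign of k.
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    # Start with an arbitrary truth assignment.
    assign = {v: random.choice([True, False]) for v in range(1, n + 1)}

    for _ in range(2 * m * n * n):
        unsatisfied = [c for c in clauses if not satisfied(c, assign)]
        if not unsatisfied:
            return assign                        # all clauses satisfied
        clause = random.choice(unsatisfied)      # an arbitrary unsatisfied clause
        lit = random.choice(clause)              # one of its two literals, at random
        assign[abs(lit)] = not assign[abs(lit)]  # flip the corresponding variable

    # Final check in case the last flip completed a satisfying assignment.
    if all(satisfied(c, assign) for c in clauses):
        return assign
    return None  # report UNSAT; wrong with probability <= 1/2^m if satisfiable

# Tiny usage example: (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
if __name__ == "__main__":
    print(random_walk_2sat(n=2, clauses=[(1, 2), (-1, 2), (1, -2)], m=2))
```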
Correctness & Guarantees of Algorithm
Let $S$ be a satisfying assignment for the $n$ variables, let $A_i$ denote the assignment to the variables after step $i$, and let $X_i$ denote the number of variables having the same value in $A_i$ and $S$:

$$X_i = \bigl|\{\, j \mid A_i(x_j) = S(x_j) \,\}\bigr|$$

If $X_i = n$ at some step $i$, then $A_i = S$, and that is a satisfying assignment.
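For instance, representing assignments as Python dictionaries from variable index to truth value (a representation chosen purely for illustration), $X_i$ is just an agreement count:

```python
def agreement(A_i, S):
    """X_i: the number of variables assigned the same value by A_i and S."""
    return sum(1 for v in S if A_i[v] == S[v])

# Example: A_i and S agree on x1 and x3 only, so X_i = 2.
A_i = {1: True, 2: False, 3: True}
S   = {1: True, 2: True,  3: True}
assert agreement(A_i, S) == 2
```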
Expected Time to Reach S
We know that
$$P[X_{i+1} = 1 \mid X_i = 0] = 1$$

since when $X_i = 0$ every variable disagrees with $S$, so whichever variable we flip, the agreement increases to $1$.
Now suppose $1 \le X_i \le n-1$, and let $C$ be a clause that is not satisfied by $A_i$. Then at least one of the two variables appearing in $C$ is assigned differently in $A_i$ than in $S$: if both were assigned as in $S$, then $C$ would already be satisfied by $A_i$, since $S$ is a satisfying assignment and in particular satisfies $C$.
If we pick one of these two variables uniformly at random and flip it, then with probability at least $\frac{1}{2}$ we flip a variable on which $A_i$ disagrees with $S$, so $X_{i+1} = X_i + 1$. Hence, for $1 \le j \le n-1$,
$$P[X_{i+1} = j+1 \mid X_i = j] \ge \tfrac{1}{2}, \qquad P[X_{i+1} = j-1 \mid X_i = j] \le \tfrac{1}{2}$$
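The case analysis above can be checked exhaustively for a single clause. The sketch below (with a clause and variable names chosen only for this illustration) enumerates every pair of assignments in which $S$ satisfies the clause but $A_i$ does not, and confirms that a uniformly random flip of one of the clause's two variables increases the agreement with probability at least $\frac{1}{2}$:

```python
from itertools import product

def check_flip_probability():
    clause = (1, -2)  # x1 OR (NOT x2); any 2-clause behaves the same by symmetry
    def satisfied(assign):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for s_vals, a_vals in product(product([False, True], repeat=2), repeat=2):
        S = {1: s_vals[0], 2: s_vals[1]}
        A = {1: a_vals[0], 2: a_vals[1]}
        if not satisfied(S) or satisfied(A):
            continue  # only the relevant case: S satisfies C, A does not
        helpful_flips = sum(A[v] != S[v] for v in (1, 2))  # flips that raise agreement
        assert helpful_flips / 2 >= 0.5, (S, A)            # variable chosen uniformly
    print("verified: P[agreement increases] >= 1/2 in every relevant case")

check_flip_probability()
```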
Equivalence with Markov Chain
While the process $X_0, X_1, \ldots$ may not itself be a Markov chain (its transition probabilities depend on the current assignment and on the clause chosen), we can study a very similar Markov chain on the states $\{0, 1, \ldots, n\}$ that makes progress no faster, and use it to upper-bound the expected running time of this algorithm.
The transition probability matrix is given by
$$P_{ij} = \begin{cases} \frac{1}{2} & \text{if } j = i+1 \text{ or } j = i-1, \ \text{for all } 1 \le i \le n-1 \\ 1 & \text{if } i = 0 \text{ and } j = 1 \\ 1 & \text{if } i = j = n \\ 0 & \text{otherwise} \end{cases}$$
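The expected time for this chain to reach state $n$ from state $0$ can also be estimated empirically. Below is a small Monte Carlo sketch (the function name and the particular values of `n` and `trials` are choices for this illustration); its estimate should come out close to $n^2$, matching the formula derived next.

```python
import random

def hitting_time_from_zero(n):
    """Simulate the chain above once: start at state 0, count steps until state n."""
    state, steps = 0, 0
    while state != n:
        if state == 0:
            state = 1                        # from 0 we move to 1 with probability 1
        else:
            state += random.choice([-1, 1])  # otherwise move up or down, each w.p. 1/2
        steps += 1
    return steps

n, trials = 20, 2000
estimate = sum(hitting_time_from_zero(n) for _ in range(trials)) / trials
print(f"estimated h_0 = {estimate:.1f}, compared to n^2 = {n * n}")
```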
Deriving a formula for the expected number of steps
Let $Z_j$ be a random variable representing the number of steps the chain takes to reach state $n$ starting from state $j$, and let $h_j = E[Z_j]$.
We know that $h_n = 0$ and $h_0 = h_1 + 1$.
Further, $h_j$ is at least the expected number of steps the algorithm needs to fully agree with $S$ when started from an assignment $A_0$ that agrees with $S$ in $j$ locations, since in the algorithm the probability of $X_i$ increasing is $\ge \frac{1}{2}$, whereas in the chain it is exactly $\frac{1}{2}$.
$$Z_j = \begin{cases} 1 + Z_{j-1} & \text{with probability } \frac{1}{2} \\ 1 + Z_{j+1} & \text{with probability } \frac{1}{2} \end{cases} \qquad \text{for } 1 \le j \le n-1$$
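Taking expectations in this recurrence gives $h_j = 1 + \frac{1}{2}(h_{j-1} + h_{j+1})$ for $1 \le j \le n-1$. Together with the boundary conditions $h_n = 0$ and $h_0 = h_1 + 1$, this system can be solved explicitly; filling in the (standard) steps:

$$
\begin{aligned}
h_j &= 1 + \tfrac{1}{2}\bigl(h_{j-1} + h_{j+1}\bigr) && 1 \le j \le n-1 \\
h_j - h_{j+1} &= (h_{j-1} - h_j) + 2 && \text{(rearranging)} \\
h_j - h_{j+1} &= (h_0 - h_1) + 2j = 2j + 1 && \text{(unrolling, using } h_0 - h_1 = 1\text{)} \\
h_j &= \sum_{k=j}^{n-1} (h_k - h_{k+1}) = \sum_{k=j}^{n-1} (2k + 1) = n^2 - j^2 && \text{(telescoping, using } h_n = 0\text{)}
\end{aligned}
$$

In particular $h_j = n^2 - j^2 \le n^2$ for every $j$, and $h_0 = n^2$.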
Therefore, the expected number of steps to find a satisfying assignment is at most $n^2$.
Success Probability
By Markov’s Inequality
$$P[Z_0 > 2n^2] \le \frac{E[Z_0]}{2n^2} = \frac{h_0}{2n^2} = \frac{n^2}{2n^2} = \frac{1}{2}$$
Now we can boost the success probability by running this algorithm multiple times. We run it for $2mn^2$ steps in total and analyze the run as if we performed the experiment $m$ times.
When we split the run into $m$ blocks of $2n^2$ steps each, the probability of failure within a block is at most $\frac{1}{2}$, conditioned on no satisfying assignment having been found in the previous blocks.
For Markov chains, the history does not matter, so if we are unable to find a satisfying assignment after $(2n^2) \cdot i$ steps, it is as if we had just started a new trial with an “arbitrary assignment”.
Hence, if $F$ is satisfiable, the probability of finding a satisfying assignment within $2mn^2$ steps is at least

$$1 - \frac{1}{2^m}$$
If we take $m = n$, the runtime ends up being $2n^3 = O(n^3)$, with success probability at least $1 - \frac{1}{2^n}$.