CMU Randomized Algorithms

Randomized Algorithms, Carnegie Mellon: Spring 2011

Monthly Archives: February 2011

HW #4 Open Thread

Please ask your HW#4 questions here. Here’s one:

For Exercise 2, Part A, the algorithm says to “Choose 2m+1 elements S uniformly at random…”. Does this mean with replacement, or without replacement? In other words, are we choosing 2m+1 elements one at a time, each with equal probability of picking any of the elements from A, such that S might be a multiset? Or, are we choosing a set of 2m+1 elements from A, where each set has probability 1 / binom(n, 2m+1) of being picked?

It does not matter; you can do it either way. Though the analysis for the process where you choose independently with replacement is slightly simpler.

Update: the phrasing for problem #2(b) was ambiguous, and has been fixed. (Thanks Kevin!)

Update #2: the indices in the algorithm in problem #2 were off by 1 and have been fixed. (Thanks Favonia!)

Lecture #14: Game Theory

Today we discussed some key concepts in game theory and connections between (some of) them and results in online learning.

We began by discussing 2-player zero-sum games. The Minimax Theorem states that these games have a well-defined value V such that (a) there exists a mixed strategy p for the row-player that guarantees the row player makes at least V (in expectation) no matter what column the column-player chooses, and (b) there exists a mixed strategy q for the column-player that guarantees the row player makes at most V (in expectation) no matter what row the row-player chooses. We then saw how we could use the results we proved about the Randomized Weighted Majority algorithm to prove this theorem.
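To make the connection concrete, here is a small Python sketch (not from lecture; the name approx_game_value and the parameter choices are just for illustration) that approximates the value of a zero-sum game by running a multiplicative-weights/RWM-style row player against a best-responding column player. Payoffs are assumed to lie in {[0,1]}.

```python
def approx_game_value(A, T=2000, eps=0.1):
    """Approximate the value of a two-player zero-sum game in which the row
    player gets payoff A[i][j] (assumed to lie in [0, 1]): run a multiplicative
    weights / RWM-style row player against a best-responding column player and
    return the row player's average payoff, which approaches the game value."""
    n, m = len(A), len(A[0])
    w = [1.0] * n                       # one weight per row strategy ("expert")
    avg = 0.0
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]    # row player's current mixed strategy
        # column player best-responds, i.e. minimizes the row player's expected payoff
        j = min(range(m), key=lambda c: sum(p[i] * A[i][c] for i in range(n)))
        avg += sum(p[i] * A[i][j] for i in range(n)) / T
        for i in range(n):              # reward rows that would have done well
            w[i] *= (1.0 + eps) ** A[i][j]
    return avg

if __name__ == "__main__":
    # Rock-Paper-Scissors with payoffs shifted into [0,1]: win=1, tie=1/2, loss=0
    rps = [[0.5, 0.0, 1.0],
           [1.0, 0.5, 0.0],
           [0.0, 1.0, 0.5]]
    print(approx_game_value(rps))       # close to the value 1/2
```

By the regret bound for RWM, the row player's average payoff against best responses approaches the game value as {T} grows.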

We next discussed general-sum games and the notion of Nash equilibria. In a zero-sum game, a Nash equilibrium requires the two players to both be playing minimax optimal strategies, but in general there could be Nash equilibria of multiple quality-levels for each player. We then proved the existence of Nash equilibria. However, unlike in the case of zero-sum games, the proof gives no idea of how to find a Nash equilibrium or even to approach one. In fact, doing so in a 2-player {n \times n} game is known to be PPAD-hard.

Finally we discussed the notion of correlated equilibria and the connection of these to swap-regret, a generalization of the kind of regret notion we discussed last time. In particular, any set of algorithms with swap-regret sublinear in T, when played against each other, will have an empirical distribution of play that approaches (the set of) correlated equilibria.  See lecture notes.

HW3 Graders

Hi, if you volunteered to help grade HW3, please send me mail (if you haven’t already done so). Thanks! –Anupam

Lecture #13: Learning Theory II

Today we talked about online learning.  We discussed the Weighted Majority and Randomized Weighted Majority algorithms for the problem of “combining expert advice”, showing for instance that the RWM algorithm satisfies the bound E[\# mistakes] \leq (1+\epsilon)OPT + \frac{1}{\epsilon}\log n, where n is the number of “experts” and  OPT is the number of mistakes of the best expert in hindsight.  Also, this can be used when the experts are not predictors but rather just different options (like whether to play Rock, Paper, or Scissors in the Rock-Paper-Scissors game).  In this case, “# mistakes” becomes “total cost” and all costs are scaled to be in the range [0,1] each round.
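For concreteness, here is a minimal sketch of Randomized Weighted Majority for binary predictions (a sketch only; the data format and the name run_rwm are assumptions, not the lecture's code):

```python
import random

def run_rwm(expert_preds, outcomes, eps=0.1):
    """Randomized Weighted Majority for binary prediction with expert advice.
    expert_preds[t][i] is expert i's prediction in round t, outcomes[t] is the
    true label; returns the algorithm's (random) number of mistakes, which in
    expectation is at most (1+eps)*OPT + (log n)/eps."""
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # follow a single expert chosen with probability proportional to its weight
        chosen = random.choices(range(n), weights=w)[0]
        if preds[chosen] != y:
            mistakes += 1
        # multiplicatively penalize every expert that was wrong this round
        for i in range(n):
            if preds[i] != y:
                w[i] *= (1.0 - eps)
    return mistakes
```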

We then discussed the “multiarmed bandit” problem, which is like the experts problem except you only find out the payoff for the expert you chose and not for those you didn’t choose.  For motivation, we discussed this in the context of the problem of selling lemonade to an online series of buyers, where the “experts” correspond to different possible prices you might choose for selling your lemonade.  We then went through an analysis of the EXP3 algorithm (though we did a simpler version of the analysis that gets a T^{2/3} dependence on T in the regret bound rather than the optimal T^{1/2}).

See the lecture notes (2nd half).

Lecture #12: Learning Theory 1

Today we talked about the problem of learning a target function from examples, where examples are drawn from some distribution D, and the goal is to use a labeled sample S (a set of examples drawn from D and labeled by the target f) to produce a function h such that Pr_{x \sim D}[h(x)\neq f(x)] is low. We gave a simple efficient algorithm for learning decision-lists in this setting, a basic “Occam’s razor” bound, and then a more interesting bound using the notion of shatter coefficients and a “ghost sample” argument. See 1st half of these lecture notes.
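As an illustration, here is a sketch of the standard greedy learner for 1-decision lists (ordered "if {x_i = v} then predict {b}" rules with a default label at the end); the data format and function name are assumptions, not the lecture's code:

```python
def learn_decision_list(sample):
    """Greedy learner for 1-decision lists.  sample is a list of (x, y) pairs
    with x a tuple of bits and y in {0, 1}.  If the sample is consistent with
    some 1-decision list, the returned rules (plus default label), read top to
    bottom, classify every sample point correctly."""
    sample = list(sample)
    n_bits = len(sample[0][0])
    rules = []
    while sample:
        labels = {y for _, y in sample}
        if len(labels) == 1:                        # remaining examples all agree
            return rules, labels.pop()
        found = None
        # look for a "pure" literal: some remaining example satisfies x_i = v,
        # and all remaining examples satisfying it carry the same label
        for i in range(n_bits):
            for v in (0, 1):
                hit = [y for x, y in sample if x[i] == v]
                if hit and len(set(hit)) == 1:
                    found = ((i, v), hit[0])
                    break
            if found:
                break
        if found is None:
            raise ValueError("sample is not consistent with any 1-decision list")
        rules.append(found)
        (i, v), _ = found
        sample = [(x, y) for x, y in sample if x[i] != v]
    return rules, 0

if __name__ == "__main__":
    # consistent with: if x0=1 predict 1, else if x2=0 predict 0, else predict 1
    data = [((1, 0, 1), 1), ((0, 1, 0), 0), ((0, 0, 1), 1), ((1, 1, 0), 1)]
    print(learn_decision_list(data))
```

Each appended rule removes at least one example, so the learner runs in polynomial time; consistency on the sample plus the Occam bound discussed below gives the generalization guarantee.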

A few additional comments:

  • One way to interpret the basic Occam bound is that in principle, anything you can represent in a polynomial number of bits you can learn from a polynomial number of examples (if running time is not a concern). Also “data compression implies learning”: if you can take a set of m examples and find a prediction rule that is correct on the sample and requires < m/10 bits to write down, then you can be confident it will have low error on future points.
  • On the other hand, we would really like to learn from as few examples as possible, which is the reason for wanting bounds based on more powerful notions of the “underlying complexity” of the target function, such as shatter coefficients. Other very interesting bounds are based on a notion called “Rademacher complexity” which is even tighter.
  • For more info, see notes for 15-859(B) machine learning theory

Lecture #11: Online algorithms

Today we discussed randomized online algorithms, and in particular, algorithms for the ski-rental (elevator-or-stairs) and paging problems. See lecture notes as well as Chapter 13 of the MR book. Also, Claire Mathieu has a very nice set of notes on the randomized ski-rental problem.

Homework #3 Open Thread

Homework #3 is now online.

Please post questions and clarifications in the comments…

Lecture #10: Polynomial Identity Testing, and Parallel Matchings

1. Matrix multiplication checking

The matrix multiplication checking problem is to verify the process of matrix multiplication: Given three {n \times n} matrices {A}, {B}, and {C}, is it the case that {AB = C}? The fastest known deterministic algorithm is to actually multiply {A} and {B} and compare the result to {C}—this takes {O(n^\omega)} time, where {\omega} is the exponent of matrix multiplication, and currently {\omega \le 2.376} due to an algorithm of Coppersmith and Winograd. Note that an easy lower bound on the running time of any randomized algorithm for matrix multiplication verification is {\Omega(n^2)}, since the input has to at least be read (see Lecture 4 for more details on this). We will now give a randomized algorithm (in co-{RP}) which takes only {O(n^2)} time.

Let us introduce some notation for the rest of the course: {x \in_{R} X} means “choose {x} uniformly at random from the set {X}”. Our checking problem algorithm is as follows:

  • Pick a vector {x \in_R \{0,1\}^n}.
  • Compare {ABx} with {Cx}. This takes {O(n^2)} time, since we can first compute {y = Bx}, and then compute {Ay = ABx}. Each matrix-vector product takes only {O(n^2)} time.
  • If {ABx = Cx}, then output Yes, otherwise output No.

(We’re imagining working over the reals; if the matrices are over the field {\mathbb{F}}, the computations should also be carried out over the field {\mathbb{F}}. The proofs remain unchanged.) Now if {AB=C}, our algorithm always outputs the correct answer. If {AB\neq C}, the algorithm may output the wrong answer. We now bound the probability of such an error.
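Here is a quick Python sketch of this check over the integers (an illustration, not lecture code; repeating the test t times drives the error probability below {2^{-t}}):

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized check of whether A*B == C for n-by-n matrices, in O(n^2)
    time per trial.  If A*B != C, a single trial catches it with probability
    at least 1/2, so the error probability is at most 2**(-trials)."""
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False          # a witness: definitely A*B != C
    return True                   # A*B == C with high probability

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[0, 1], [1, 0]]
    AB = [[2, 1], [4, 3]]
    print(freivalds_check(A, B, AB), freivalds_check(A, B, [[2, 1], [4, 4]]))
```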

First, we need a simple lemma:

Lemma 1 Given vectors {a,b \in {\mathbb R}^n} with {a \ne b}, and {x \in_R \{0,1\}^n}, {\Pr[a \cdot x = b \cdot x] \le \frac{1}{2}}.

Proof: Suppose {a_i \ne b_i}. Let {\alpha = \sum_{j \ne i} a_j x_j} and {\beta = \sum_{j \ne i} b_j x_j}. We can write {a \cdot x = \alpha + a_i x_i} and {b \cdot x = \beta + b_i x_i}. This gives us

\displaystyle a \cdot x - b \cdot x = (\alpha - \beta) + (a_i - b_i)x_i.

We can invoke the Principle of Deferred Decisions (see Section 3.5 of M&R) to assume that we’ve first picked all the values {x_j} for {j \ne i}. Then we can write

\displaystyle \Pr[a \cdot x - b \cdot x = 0] = \Pr\left[x_i = \frac{\alpha - \beta}{b_i - a_i}\right] \le \frac{1}{2},

where we use the fact that {(\alpha - \beta)/(b_i - a_i)} can only be either {0} or {1} (or neither), and the randomly chosen {x_i} will take that value with probability at most half. \Box

Theorem 2 (Freivalds) If {AB \ne C}, our algorithm fails with probability at most {\frac{1}{2}}.

Proof: If {AB\neq C}, then there is at least one row in {AB}, say {(AB)_i}, that differs from the corresponding row {C_i} in C. Apply Lemma 1 with {a=(AB)_i} and {b=C_i}. The probability that {a\cdot x=b\cdot x} is at most {1/2}. For the algorithm to output Yes, we must have {a\cdot x=b\cdot x}. Therefore, the probability of failure for the algorithm is at most {1/2}. \Box

2. Polynomial identity checking

In the polynomial identity checking problem, we are given two multi-variate polynomials {f(x_1, \ldots, x_n)} and {g(x_1, \ldots, x_n)}, each of degree {d}; again we are computing over some field {\mathbb{F}}. We may not be given the polynomials explicitly, so we may not be able to read the polynomials in poly-time — we just have “black-box” access for evaluating a polynomial. Given these two polynomials, the problem is to determine whether the polynomials are equal: i.e., whether {f = g}, or equivalently, {f - g = 0}. Letting {Q = f - g}, it suffices to check whether a given polynomial is identically zero. There is no known deterministic poly-time algorithm for this problem. But we will now show that it is in co-{RP}.

First consider the univariate case. We can evaluate {Q} at any {d+1} distinct values from {\mathbb{F}}. If {Q(x) = 0} for all {d+1} values of {x}, then {Q = 0}. This follows from the basic and super-useful fact that, for any field {\mathbb{F}}, a nonzero polynomial of degree at most {d} over that field can have at most {d} roots.

This approach does not directly apply to the multivariate case; in fact, the polynomial over two variables {f(x,y) = xy - 3} over the reals has an infinite number of roots. Over the finite field {\mathbb{F}_q}, the degree-{d} polynomial over {n} variables

\displaystyle  Q(x_1, x_2, \ldots, x_n) = (x_1 - 1)(x_1 - 2) \cdots (x_1 - d)

has {dq^{n-1}} roots (when {d \leq q = |\mathbb{F}|}).

However, things still work out. Roughly speaking, we can handle the multivariate case by fixing {n-1} variables and applying the result from the univariate case. Consider the following algorithm, which assumes we have some subset {S \subset \mathbb{F}} with {|S| \ge 2d}.

  • Pick {r_1, \ldots, r_n \in_R S}.
  • Evaluate {Q(r_1, \ldots, r_n)}.
  • If it evaluates to 0, declare “{Q = 0}”; otherwise declare “{Q \ne 0}”.
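Here's a minimal sketch of this test in Python, treating the polynomial as a black box over {\mathbb{Z}_p} (the names and the particular choice of {S} are just for illustration):

```python
import random

def probably_zero(Q, n_vars, S, trials=20):
    """Schwartz-Zippel identity test.  Q is a black box taking n_vars field
    elements; S is a finite subset of the field with |S| >= 2*deg(Q).  If Q is
    identically zero we always return True; otherwise we return True with
    probability at most (deg(Q)/|S|)**trials <= 2**(-trials)."""
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n_vars)]
        if Q(*point) != 0:
            return False          # a witness: Q is not the zero polynomial
    return True

if __name__ == "__main__":
    p = 2**31 - 1                 # work over the field Z_p
    S = list(range(1000))         # |S| = 1000 >= 2*deg(Q)
    Q = lambda x, y: ((x + y) ** 2 - (x * x + 2 * x * y + y * y)) % p
    R = lambda x, y: (x * y - 3) % p
    print(probably_zero(Q, 2, S), probably_zero(R, 2, S))   # True False
```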

Theorem 3 (Schwartz (1980), Zippel (1979)) If, in the above algorithm, the polynomial {Q \ne 0}, we have

\displaystyle \Pr[Q(r_1, \ldots, r_n) = 0] \le \frac{d}{|S|}.

Proof: By induction on {n}. The base case is the univariate case described above. With {Q \ne 0}, we want to compute {\Pr[Q(r_1, \ldots, r_n) = 0]}. Let {k} be the largest power of {x_1} appearing in {Q}. We can rewrite

\displaystyle Q(x_1, \ldots, x_n) = x_1^k A(x_2, \ldots, x_n) + B(x_1, \ldots, x_n)

for some polynomials {A} and {B}. Now we consider two events. Let {\mathcal{E}_1} be the event that {Q(r_1,\cdots,r_n)} evaluates to {0}, and {\mathcal{E}_2} be the event that {A(r_2,\cdots,r_n)} evaluates to {0}.

We can rewrite the probability that {Q(r_1,\cdots,r_n)} is {0} as:

\displaystyle  \begin{array}{rcl}  \Pr[Q(r) = 0] = \Pr[\mathcal{E}_1] & = & \Pr[ \mathcal{E}_1 \mid \mathcal{E}_2 ] \Pr[\mathcal{E}_2] + \Pr[\mathcal{E}_1 \mid \lnot \mathcal{E}_2 ] \Pr[\lnot \mathcal{E}_2 ] \\ & \leq & \Pr[\mathcal{E}_2] + \Pr[\mathcal{E}_1 \mid \lnot \mathcal{E}_2] \end{array}

Let us first bound the probability of {\mathcal{E}_2}, or the probability that {A(r_2,\cdots,r_n)=0}. The polynomial {A} has degree at most {d-k} and one fewer variable, so we can use the inductive hypothesis to obtain

\displaystyle  \Pr[\mathcal{E}_2]=\Pr[A(r_2, \ldots, r_n) = 0] \le \frac{d-k}{|S|}.

Similarly, given {\lnot \mathcal{E}_2} (i.e., {A(r_2,\cdots,r_n)\neq 0}), the univariate polynomial {Q(x_1, r_2, \ldots, r_n)} has degree exactly {k} in {x_1}. Therefore, by the base case (a nonzero univariate polynomial of degree {k} has at most {k} roots),

\displaystyle \Pr[\mathcal{E}_1 \mid \lnot \mathcal{E}_2]=\Pr[Q(r_1, r_2, \ldots, r_n) = 0 \; | \; A(r_2, \ldots, r_n) \ne 0] \le \frac{k}{|S|}.

We can substitute into the expression above to get

\displaystyle  \begin{array}{rcl}  \Pr[Q(r) = 0] & \leq & \Pr[\mathcal{E}_2] + \Pr[\mathcal{E}_1 \mid \lnot \mathcal{E}_2] \\ & \le & \frac{d-k}{|S|} + \frac{k}{|S|} = \frac{d}{|S|} \end{array}

This completes the inductive step. \Box

Polynomial identity testing is a powerful tool, both in algorithms and in complexity theory. We will use it to find matchings in parallel, but it arises all over the place. Also, as mentioned above, there is no poly-time deterministic algorithm currently known for this problem. A result of Impagliazzo and Kabanets (2003) shows that proving that the polynomial identity checking problem is in {P} would imply that either {NEXP} cannot have poly-size non-uniform circuits, or the Permanent cannot have poly-size arithmetic circuits. Since we are far from proving such strong lower bounds, the Impagliazzo-Kabanets result suggests that deterministic algorithms for polynomial identity checking may require us to develop significantly new techniques.

For more on the polynomial identity checking problem, see Section 7.2 in M&R. Dick Lipton’s blog has an interesting post on the history of the theorem, as well as comparisons of the results of Schwartz, Zippel, and DeMillo-Lipton. One of the comments points out that the case when {S = \mathbb{F}_q} was known at least as far back as Ore in 1922; his proof appears as Theorem 6.13 in the book Finite Fields by Lidl and Niederreiter; a different proof by Dana Moshkowitz appears here.

3. Perfect matchings in bipartite graphs

We will look at a simple sequential algorithm to determine whether a perfect matching exists in a given bipartite graph or not. The algorithm is based on polynomial identity testing from the previous section.

A bipartite graph {G=(U,V,E)} is specified by two disjoint sets {U} and {V} of vertices, and a set {E} of edges between them. A perfect matching is a subset of the edge set {E} such that every vertex has exactly one edge incident on it. Since we are interested in perfect matchings in the graph {G}, we shall assume that {|U|=|V|=n}. Let {U=\{u_1,u_2,\cdots,u_n\}} and {V=\{v_1,v_2,\cdots,v_n\}}. The algorithm we study today has no error if {G} does not have a perfect matching (no instance), and errs with probability at most {\frac12} if {G} does have a perfect matching (yes instance). This is unlike the algorithms we saw in the previous lecture, which erred on no instances.

Definition 4 The Tutte matrix of bipartite graph {G=(U,V,E)} is an {n\times n} matrix {M} with the entry at row {i} and column {j},

\displaystyle  M_{i,j}=\left\{ \begin{array}{ll} 0 & \text{if } (u_i,v_j)\notin E\\ x_{i,j} & \text{if } (u_i,v_j)\in E \end{array} \right.

(Apparently, Tutte came up with such a matrix for general graphs, and this one for bipartite graphs is due to Jack Edmonds, but we’ll stick with calling it the Tutte matrix.)

The determinant of the Tutte matrix is useful in testing whether a graph has a perfect matching or not, as the following lemma shows. Note that we do not think of this determinant as taking on some numeric value, but purely as a function of the variables {x_{i,j}}.

Lemma 5 {\mathrm{det}(M) \neq 0 \iff} There exists a perfect matching in {G}

Proof: We have the following expression for the determinant :

\displaystyle \mathrm{det}(M)=\sum_{\pi\in S_n} (-1)^{sgn(\pi)} \prod_{i=1}^{n} M_{i,\pi(i)}

where {S_n} is the set of all permutations on {[n]}, and {sgn(\pi)} is the sign of the permutation {\pi}. There is a one to one correspondence between a permutation {\pi\in S_n} and a (possible) perfect matching {\{(u_1,v_{\pi(1)}),(u_2,v_{\pi(2)}),\cdots ,(u_n,v_{\pi(n)})\}} in {G}. Note that if this perfect matching does not exist in {G} (i.e. some edge {(u_i,v_{\pi(i)})\notin E}) then the term corresponding to {\pi} in the summation is 0. So we have

\displaystyle \mathrm{det}(M) = \sum_{\pi\in P} (-1)^{sgn(\pi)} \prod_{i=1}^n x_{i,\pi(i)}

where {P} is the set of perfect matchings in {G}. This is clearly zero if {P=\emptyset}, i.e., if {G} has no perfect matching. If {G} has a perfect matching, there is a {\pi\in P} and the term corresponding to {\pi} is {\prod_{i=1}^n x_{i,\pi(i)} \ne 0}. Additionally, there is no other term in the summation that contains the same set of variables. Therefore, this term is not cancelled by any other term. So in this case, {\mathrm{det}(M)\ne 0}. \Box

This lemma gives us an easy way to test a bipartite graph for a perfect matching — we use the polynomial identity testing algorithm of the previous lecture on the Tutte matrix of {G}. We accept if the determinant is not identically 0, and reject otherwise. Note that {\mathrm{det}(M)} has degree at most {n}. So we can test its identity on the field {Z_p}, where {p} is a prime number larger than {2n}. From the analysis of the polynomial testing algorithm, we have the following :

  • {G} has no perfect matching {\implies \Pr[accept]=0}.
  • {G} has a perfect matching {\implies \Pr[accept]\ge \frac12}.
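Here is a small Python sketch of this test (an illustration, not lecture code): we substitute uniformly random values from {\mathbb{Z}_p} for the variables {x_{i,j}} and compute the determinant by Gaussian elimination mod the prime {p}; with a large prime, each trial errs on a yes instance with probability only about {n/p}.

```python
import random

def det_mod_p(M, p):
    """Determinant of a square matrix over Z_p, by Gaussian elimination."""
    M = [row[:] for row in M]
    n = len(M)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = (-det) % p
        det = (det * M[col][col]) % p
        inv = pow(M[col][col], p - 2, p)      # modular inverse, since p is prime
        for r in range(col + 1, n):
            factor = (M[r][col] * inv) % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % p
    return det

def has_perfect_matching(n, edges, p=(1 << 31) - 1, trials=20):
    """Tests whether the bipartite graph on U = V = {0,...,n-1} with the given
    edges has a perfect matching, by plugging random values from Z_p into the
    Tutte/Edmonds matrix and evaluating its determinant."""
    for _ in range(trials):
        M = [[0] * n for _ in range(n)]
        for (i, j) in edges:
            M[i][j] = random.randrange(1, p)
        if det_mod_p(M, p) != 0:
            return True        # det(M) is not identically zero: a matching exists
    return False               # (probably) no perfect matching

if __name__ == "__main__":
    print(has_perfect_matching(3, [(0, 0), (1, 1), (2, 2), (0, 1)]))   # True
    print(has_perfect_matching(2, [(0, 0), (1, 0)]))                   # False
```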

The above algorithm shows that Perfect Matching for bipartite graphs is in RP. (The non-bipartite case may appear as a homework exercise.) Also, this algorithm for checking the existence of a perfect matching can be easily converted to one that actually computes a perfect matching as follows:

  1. Pick {(u_i,v_j)\in E}.
  2. Check if {G\backslash \{u_i,v_j\}} has a perfect matching.
  3. If “Yes”, output {(u_i,v_j)} to be in the matching and recurse on {G\backslash \{u_i,v_j\}}, the graph obtained after the removal of vertices {u_i} and {v_j}.

  4. If “No”, recurse on {G-(u_i,v_j)}, the graph obtained after removing the edge {(u_i,v_j)}.

Note that this algorithm seems inherently sequential; it’s not clear how to speed up its running time considerably by using multiple processors. We’ll consider the parallel algorithm in the next section.
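And here is a sketch of the edge-by-edge search just described, written against an abstract tester has_pm (for instance, a wrapper around the determinant test sketched above that takes explicit vertex sets); since the determinant test has one-sided error, in practice you would repeat it enough times that a false “no” is unlikely:

```python
def find_perfect_matching(rows, cols, edges, has_pm):
    """Self-reduction from testing to finding.  has_pm(rows, cols, edges)
    should report whether the bipartite graph on the given left/right vertex
    sets with the given edges has a perfect matching; this routine then
    extracts a matching edge by edge, as in steps 1-4 above."""
    rows, cols, edges = list(rows), list(cols), list(edges)
    matching = []
    while rows:
        found = False
        for (u, v) in list(edges):
            rest_rows = [r for r in rows if r != u]
            rest_cols = [c for c in cols if c != v]
            rest_edges = [(a, b) for (a, b) in edges if a != u and b != v]
            if not rest_rows or has_pm(rest_rows, rest_cols, rest_edges):
                matching.append((u, v))     # keep (u,v) and recurse on G \ {u,v}
                rows, cols, edges = rest_rows, rest_cols, rest_edges
                found = True
                break
            edges.remove((u, v))            # (u,v) lies in no perfect matching: drop it
        if not found:
            return None                     # no perfect matching (or an unlucky test)
    return matching
```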

Some citations: the idea of using polynomial identity testing to test for matchings is due to Lovász. The above algorithm to find the matching runs in time {O(mn^\omega)}, where {n^\omega} is the time to multiply two {n\times n} matrices. (It is also the time to compute determinants, and matrix inverses.) Rabin and Vazirani showed how to compute perfect matchings in general graphs in time {O(n^{\omega + 1})}, where {n} is the number of vertices. Recent work of Mucha and Sankowski, and of Harvey, shows how to use these ideas (along with many other cool ones) to find perfect matchings in general graphs in time {O(n^\omega)}.

4. A parallel algorithm for finding perfect matchings

However, we can give a slightly different algorithm (for a seemingly harder problem), that indeed runs efficiently on parallel processors. The model here is that there are polynomially many processors run in parallel, and we want to solve the problem in poly-logarithmic depth using polynomial work. We will use the fact that there exist efficient parallel algorithms for computing the determinant of a matrix to obtain our parallel algorithm for finding perfect matchings.

We could try the following “strawman” parallel algorithm:

Use a processor for every edge {(u_i,v_j)} that tests (in parallel) if edge {(u_i,v_j)} is in some perfect matching or not. For each edge {(u_i,v_j)} that lies in some perfect matching, the processor outputs the edge, else it outputs nothing.

We are immediately faced with the problem that there may be several perfect matchings in the graph, and the resulting output is not a matching. The algorithm may in fact return all the edges in the graph. It will only work if there is a unique perfect matching.

So instead of testing whether an edge {(u_i,v_j)} is in some perfect matching or not, we want to test whether an edge {(u_i,v_j)} is in a specific perfect matching or not. The way we do this is to put random weights on the edges of the graph and test for the minimum weight perfect matching. Surprisingly, one can prove that the minimum weight perfect matching is unique with a good probability, even when the weights are chosen from a set of integers from a relatively small range.

Lemma 6 Let {S=\{e_1,\cdots ,e_m\}} and {S_1,\cdots S_k\subseteq S}. For every element {e_i} there is a weight {w_i} picked u.a.r. from {\{0,1,\cdots ,2m-1\}}. The weight of subset {S_j} is {w(S_j)=\sum_{e_i\in S_j} w_i}. Then

\displaystyle  \Pr[ \text{ minimum weight set among } S_1,\cdots ,S_k \text{ is unique } ] \ge\frac12.

Proof: We will estimate the probability that the minimum weight set is not unique. Let us define an element {e_i} to be tied if

\displaystyle \min_{S_j | e_i\in S_j} w(S_j) = \min_{S_j | e_i\notin S_j} w(S_j)

It is easy to see that there exists a tied element if and only if the minimum weight subset is not unique. Below we bound the probability that a fixed element {e_i} is tied; the result will then follow by a union bound over the {m} elements. We use the principle of deferred decisions. Let us fix the weights {w_1,\cdots,w_{i-1},w_{i+1},\cdots,w_m} of all the elements except {e_i}. We want to bound {\Pr_{w_i}[ e_i \text{ is tied } \mid w_1,\cdots,w_{i-1},w_{i+1},\cdots,w_m]}. Let

\displaystyle  W^- = \min_{S_j | e_i\notin S_j} w(S_j) \qquad \text{and} \qquad W^+ = \min_{S_j | e_i\in S_j} w(S_j)

with {w_i} assigned the value 0. It is easy to see that {e_i} is tied iff {W^-=W^+ + w_i}. So,

\displaystyle  \begin{array}{rcl}  & & \Pr_{w_i}[ e_i \text{ is tied } \mid w_1,\cdots,w_{i-1},w_{i+1},\cdots,w_m] \\ &=& \Pr_{w_i}[w_i=W^- - W^+ \mid w_1,\cdots,w_{i-1},w_{i+1},\cdots,w_m] \\ &\le& \frac{1}{2m}. \end{array}

The last inequality is because there is at most one value for {w_i} for which {W^-=W^+ + w_i}. This holds irrespective of the particular values of the other {w_{i'}}s. So {\Pr[e_i \text{ is tied } ] \le \frac{1}{2m}}, and

\displaystyle  \Pr[ \exists \text{ a tied element }] \le\sum_{i=1}^m \Pr[e_i \text{ is tied} ] \le \frac12.

Thus {\Pr[ \text{ minimum weight set is unique } ] \ge\frac12}. \Box
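If you want to see the Isolation Lemma in action, here is a tiny Monte Carlo check (the ground-set size and the choice of set family are arbitrary illustrations; the lemma promises the reported frequency is at least {1/2}):

```python
import random
from itertools import combinations

def isolation_demo(m=8, trials=10000):
    """Empirical check of the Isolation Lemma: the sets S_j are ALL subsets of
    size m//2 of an m-element ground set, each element gets a random weight in
    {0, ..., 2m-1}, and we record how often the minimum-weight set is unique."""
    family = list(combinations(range(m), m // 2))
    unique = 0
    for _ in range(trials):
        w = [random.randrange(2 * m) for _ in range(m)]
        totals = sorted(sum(w[i] for i in S) for S in family)
        if len(totals) == 1 or totals[0] != totals[1]:
            unique += 1
    return unique / trials        # the lemma guarantees at least 1/2

if __name__ == "__main__":
    print(isolation_demo())
```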

Now we can look at the parallel algorithm for finding a perfect matching. For each edge {(u_i,v_j)}, we pick a random weight {w_{i,j}} from {\{0,1,\cdots,2m-1\}}, where {m=|E|} is the number of edges in {G}. Let the sets {S_j} denote all the perfect matchings in {G}. Then the Isolation Lemma above implies that there is a unique minimum weight perfect matching with probability at least half. We assign the value {x_{i,j}=2^{w_{i,j}}} to the variables in the Tutte matrix {M}. Let {D} denote the resulting matrix. We use the determinant of {D} to determine the weight of the min-weight perfect matching, if it is unique, as suggested by the following lemma.

Lemma 7 Let {W_0} be the weight of the minimum weight perfect matching in {G}. Then,

  • {G} has no perfect matching {\implies} {\mathrm{det}(D)=0}.
  • {G} has a unique min-weight perfect matching {\implies} {\mathrm{det}(D)\ne 0} and the largest power of 2 dividing {\mathrm{det}(D)} is {2^{W_0}}.

  • {G} has more than one min-weight perfect matching {\implies} either {\mathrm{det}(D)=0} or the largest power of 2 dividing {\mathrm{det}(D)} is at least {2^{W_0}}.

Proof: If {G} has no perfect matching, it is clear from Lemma 5 that {\mathrm{det}(D)=0}. Now consider the case when {G} has a unique min-weight perfect matching. From the expression of the determinant, we have

\displaystyle \mathrm{det}(D)=\sum_{\pi\in P} (-1)^{sgn(\pi)} \prod_{i=1}^n 2^{w_{i,\pi(i)}}=\sum_{\pi\in P} (-1)^{sgn(\pi)}2^{\sum_{i=1}^n w_{i,\pi(i)}}=\sum_{\pi\in P} (-1)^{sgn(\pi)} 2^{w(\pi)}

where {w(\pi)} is the weight of the perfect matching corresponding to {\pi} and {P} is the set of all perfect matchings in {G}. Since there is exactly one perfect matching of weight {W_0} and all other perfect matchings have weight at least {W_0+1}, this evaluates to an expression of the form {\pm 2^{W_0} \pm 2^{W_0+1} \cdots \pm \textrm{other powers of 2 larger than } W_0}. Clearly, this is non-zero, and the largest power of 2 dividing it is {2^{W_0}}. Now consider the case when {G} has more than one min-weight perfect matching. In this case, if the determinant is non-zero, every term in the summation is a power of 2 that is at least {2^{W_0}}. So {2^{W_0}} divides {\mathrm{det}(D)}. \Box

We denote by {D_{i,j}} the submatrix of {D} obtained by removing the {i}-th row and the {j}-th column. Note that this is the matrix corresponding to the bipartite graph {G\backslash \{u_i,v_j\}}. The parallel algorithm would run as follows.

  1. Pick random weights {w_{i,j}} for the edges of {G}. (In the following steps, we assume that we’ve isolated the min-weight perfect matching.)
  2. Compute {\mathrm{det}(D)} (using the parallel algorithm for computing the determinant). If {\mathrm{det}(D)=0}, output “no perfect matching” and stop.
  3. Otherwise, compute the weight {W_0} of the min-weight perfect matching: {W_0} is just the exponent of the largest power of {2} that divides {\mathrm{det}(D)}.
  4. For each edge {(u_i,v_j)\in E} do, in parallel,:
    1. Evaluate {\mathrm{det}(D_{i,j})}.
    2. If {\mathrm{det}(D_{i,j})=0}, output nothing.
    3. Else, find the largest power of 2, {W_{i,j}}, dividing {\mathrm{det}(D_{i,j})}.
    4. If {W_{i,j} + w_{i,j} = W_0}, output {(u_i,v_j)}.
    5. Else, output nothing.

It is clear that, if {G} has no perfect matching, this algorithm returns the correct answer. Now suppose {G} has a unique minimum weight perfect matching; we claim that Lemma 7 ensures precisely the edges of this matching are output. To see this, first consider an edge {(u_i,v_j)} not in the unique min-weight perfect matching. From the lemma, {\mathrm{det}(D_{i,j})} is either zero (so the edge will not be output), or {W_{i,j}} is at least as large as the weight of the min-weight perfect matching in {G\backslash \{u_i,v_j\}}. Since the min-weight perfect matching of {G} is unique and does not contain the edge {(u_i,v_j)}, this implies {w_{i,j} + W_{i,j}} will be strictly larger than {W_0}, and this edge will not be output in this case either. Finally, if an edge {(u_i,v_j)} is in the unique min-weight perfect matching, removing this edge from the matching gives us the unique min-weight perfect matching in {G\backslash \{u_i,v_j\}}. So in this case {W_{i,j} = W_0 - w_{i,j}}, and the edge is output.

Thus, if {G} has a perfect matching, this algorithm will isolate one with probability at least {\frac12}, and will output it—hence we get an RNC algorithm that succeeds with probability at least {1/2} on “Yes” instances, and never makes mistakes on “No” instances.
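Here is a sequential Python sketch of the whole procedure, just to make the steps concrete (it computes the determinants exactly over the integers with the fraction-free Bareiss algorithm rather than in parallel; all names are illustrative, and a run can fail when the min-weight matching isn't isolated):

```python
import random

def det_int(M):
    """Exact integer determinant via the fraction-free Bareiss algorithm."""
    a = [row[:] for row in M]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:                       # find a nonzero pivot and swap it up
            for i in range(k + 1, n):
                if a[i][k] != 0:
                    a[k], a[i] = a[i], a[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[n - 1][n - 1]

def two_adic_valuation(x):
    """Exponent of the largest power of 2 dividing the nonzero integer x."""
    x, w = abs(x), 0
    while x % 2 == 0:
        x //= 2
        w += 1
    return w

def mvv_matching(n, edges):
    """Sequential sketch of the Mulmuley-Vazirani-Vazirani algorithm for a
    bipartite graph on U = V = {0,...,n-1}: pick random edge weights, plug 2^w
    into the Tutte/Edmonds matrix, and read the (hopefully unique) min-weight
    perfect matching off the determinants of the minors.  When a perfect
    matching exists, a single run succeeds with probability at least 1/2."""
    m = len(edges)
    w = {e: random.randrange(2 * m) for e in edges}
    D = [[0] * n for _ in range(n)]
    for (i, j) in edges:
        D[i][j] = 1 << w[(i, j)]
    d = det_int(D)
    if d == 0:
        return None                            # no perfect matching (or an unlucky run)
    W0 = two_adic_valuation(d)
    matching = []
    for (i, j) in edges:
        minor = [[D[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
        dij = det_int(minor) if n > 1 else 1
        if dij != 0 and two_adic_valuation(dij) + w[(i, j)] == W0:
            matching.append((i, j))
    return matching

if __name__ == "__main__":
    print(mvv_matching(3, [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)]))
```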

Finally, some more citations. This algorithm is due to Mulmuley, Vazirani and Vazirani; the first RNC algorithm for matchings had been given earlier by Karp, Upfal, and Wigderson. It is an open question whether we can find perfect matchings deterministically in parallel using poly-logarithmic depth and polynomial work, even for bipartite graphs.

Lecture #8: Oh, and one more thing

I forgot to mention something about the two choices paradigm: recall from HW #2 that if you throw {m} balls into {n} bins randomly and {m \gg n}, the maximum load is about {\frac{m}{n} + O(\sqrt{\frac{m}{n} \log n})}. In fact, you can show that this variance term is about right—with high probability, the highest loaded bin will indeed be {\Theta(\sqrt{\frac{m}{n} \log n})} above the average.

On the other hand, if you throw {m} balls into {n} bins using two-choices, then you can show that the highest load is about {\frac{m}{n} + \log \log n + O(1)} with high probability. So not only do we gain in the low-load case (when {m \approx n}), we get more control over the variance in the high load case (when {m \gg n}): the additive gap between the average and the maximum loads is now independent of the number of balls! The proofs to show this require new ideas: check out the paper Balanced Allocations: the Heavily Loaded Case by Berenbrink, Czumaj, Steger and Vöcking for more details.

Here is a recent paper of Peres, Talwar and Weider that gives an analysis of the {(1 + \epsilon)}-choice process (where you invoke the two choices paradigm only on {\epsilon} fraction of the balls). It also refers to more recent work in the area (weighted balls, weighted bins, etc), in case you’re interested.

Lecture #8: Balls and Bins

1. Balls and Bins

The setting is simple: {n} balls, {n} bins. When you consider a ball, you pick a bin independently and uniformly at random, and add the ball to that bin. In HW #2 you proved:

Theorem 1 The max-loaded bin has {O(\frac{\log n}{\log \log n})} balls with probability at least {1 - 1/n}.

One could use a Chernoff bound to prove this, but here is a more direct calculation of this theorem: the chance that bin {i} has at least {k} balls is at most

\displaystyle  \binom{n}{k} \left( \frac1n \right)^k \leq \frac{n^k}{k!} \cdot \frac1{n^k} \leq \frac{1}{k!} \leq 1/k^{k/2}

which is (say) {\leq 1/n^2} for {k^* = \frac{8 \log n}{\log \log n}}. To see this, note that

\displaystyle  k^{k/2} \geq (\sqrt{\log n})^{4 \log n/\log\log n} \geq 2^{2 \log n} = n^2.

So union bounding over all the bins, the chance of some bin having more than {k^*} balls is at most {1/n}. (I’ve been sloppy with constants; you can get better constants by using Stirling’s approximation.)

Here is a semantically identical way of looking at this calculation: let {X_i} be the indicator r.v. for bin {i} having {k^*} or more balls. Then {E[X_i] \leq 1/n^2}. And hence if {X = \sum_i X_i}, then {E[X] \leq 1/n}. So by Markov, {\Pr[ X > 1 ] \leq E[X] \leq 1/n}. In other words, we again have

\displaystyle  \Pr[ \text{ max load is more than } \frac{8\log n}{\log \log n} ] \rightarrow 0.

This idea of bounding the expectation of some variable {X}, and using that to upper bound some quantity (in this case the max-load) is said to use the first moment method.

1.1. Tightness of the Bound

In fact, {\Theta(\frac{\log n}{\log\log n})} is indeed the right answer for the max-load with {n} balls and {n} bins.

Theorem 2 The max-loaded bin has {\Omega(\frac{\log n}{\log \log n})} balls with probability at least {1 - 1/n^{1/3}}.

Here is one way to show this, via the second moment method. To begin, let us now lower bound the probability that bin {i} has at least {k} balls:

\displaystyle  \binom{n}{k} \left( \frac1n \right)^k \left(1 - \frac1n \right)^{n-k} \geq \left(\frac{n}{k}\right)^k \cdot \frac1{n^k} \cdot e^{-1} \geq 1/ek^k,

which for {k^{**} = \frac{\log n}{3\log \log n}} is at least {1/(en^{1/3})}, since {k^k \leq (\log n)^{\log n/3\log\log n} = n^{1/3}}. And so we expect {\Omega(n^{2/3})} bins to have at least {k^{**}} balls.

Let us define some random variables: if {X_i} is the indicator for bin {i} having at least {k^{**}} balls, and {X = \sum_i X_i} is the number of bins with at least {k^{**}} balls, we get that

\displaystyle  E[X_i] \geq 1/en^{1/3} \quad \text{ and } \quad E[X] = \Omega(n^{2/3}).

Alas, in general, just knowing that {E[X] \rightarrow \infty} will not imply {\Pr[ X \geq 1 ] \rightarrow 1}. Indeed, consider a random variable that is {0} w.p. {1-1/n^{1/3}}, and {n} otherwise—while its expectation is {n^{2/3}}, {X} is more and more likely to be zero as {n} increases. So we need some more information about {X} to prove our claim. And that comes from the second moment.

Let’s appeal to Chebyshev’s inequality:

\displaystyle  \Pr[ X = 0 ] \leq \Pr[ |X -\mu| \geq \mu ] \leq \frac{\mathrm{Var}(X)}{\mu^2} = \frac{\sum_i \mathrm{Var}(X_i) + \sum_{i \neq j} \mathrm{Cov}(X_i,X_j)}{E[X]^2}.

You have probably seen covariance before: {\mathrm{Cov}(Y, Z) := E[(Y - E[Y])(Z - E[Z])]}. But since the bins are negatively correlated (some bin having more balls makes it less likely for another bin to do so), the covariance {\mathrm{Cov}(X_i, X_j) \leq 0}. Moreover, since {X_i \in \{0,1\}}, {\mathrm{Var}(X_i) \leq E[X_i] \leq 1}; by the above calculations, {E[X]^2 \geq n^{4/3}}. So summarizing, we get

\displaystyle  \Pr[ X = 0 ] \leq \frac{\sum_i \mathrm{Var}(X_i) + \sum_{i \neq j} \mathrm{Cov}(X_i,X_j)}{E[X]^2} \leq \frac{n}{E[X]^2} \leq n^{-1/3}.

In other words, there is a {1 - 1/n^{1/3}} chance that some bin contains more than {k^{**}} balls:

\displaystyle  \Pr[ \text{ max load is less than } \frac{\log n}{3\log \log n} ] \rightarrow 0.

(Later, you will see how to use martingale arguments and Azuma-Hoeffding bounds to give guarantees on the max-load of bins. You can also use the “Poisson approximation” to show such a result, that’s yet another cool technique.)

1.2. So, in Summary

If you want to show that some non-negative random variable is zero with high probability, show that its expectation tends to zero, and use Markov—the first moment method. If you want to show that it is non-zero with high probability, show that the variance divided by the squared mean tends to zero, and use Chebyshev—the second moment method.

1.3. Taking it to the Threshold

Such calculations often arise when you have a random process, and a random variable {X} defined in terms of a parameter {k}. Often you want to show that {X} is zero whp when {k} lies much below some “threshold” {\tau}, and that {X} is non-zero whp when {k} is far above {\tau}. The first things you should try are to see if the first and second moment methods give you rough answers. E.g., take {n} vertices and add each of the {\binom{n}{2}} edges independently with probability {1/2} (also called the Erdös-Rényi graph {G(n,1/2)}), and define {X} to be the number of cliques on {k} vertices. Show that {\tau = 2 \log_2 n} is such a threshold for {X}.

2. The Power of Two Choices

The setting now is: {n} balls, {n} bins. However, when you consider a ball, you pick two bins (or in general, {d} bins) independently and uniformly at random, and put the ball in the less loaded of the two bins. The main theorem is:

Theorem 3 The two-choices process gives a maximum load of {\frac{\ln \ln n}{\ln 2} + O(1)} with probability at least {1 - O(\frac{\log^2 n}{n})}.

The intuition behind the proof is the following: if at most an {\alpha} fraction of the bins have load at least {i}, then a ball lands at height {i+1} or more only if both of its choices go to such bins, which happens with probability at most {\alpha^2}. So the fraction of bins with load at least {i} should roughly square each time {i} increases by one, dropping from {1/4} to below {1/n} within about {\log_2 \log_2 n} levels.

The actual proof is not far from this intuition. The following lemma says that if at most an {\alpha} fraction of the bins have at least {i} balls, then the number of bins having at least {i+1} balls can indeed be upper bounded (stochastically) using {Bin(n, \alpha^2)}, where {Bin(n,p)} is the Binomial random variable.

Lemma 4 If {N_i} is the number of bins with load at least {i}, then {\Pr[ N_{i+1} > t \mid N_i \leq \alpha n ] \leq \frac{\Pr[ Bin(n,\alpha^2) > t ]}{\Pr[ N_i \leq \alpha n ]}}.

Proof: For the proof, let us consider the “heights” of balls: this is the position of the ball when it comes in; if it is the first ball in its bin then its height is {1}, etc. Observe that if there are {t} bins with load at least {i+1}, then there must be at least {t} balls with height at least {i+1}. I.e., if {B_j} is the number of balls with height at least {j}, then {N_j \leq B_j}, and so we’ll now upper bound {\Pr[ B_{i+1} > t \mid N_i \leq \alpha n ] = \frac{\Pr[ B_{i+1} > t \cap N_i \leq \alpha n ]}{\Pr[ N_i \leq \alpha n]}}.

Consider the following experiment: just before a ball comes in, an adversary is allowed to “mark” at most {\alpha n} bins. Call a ball marked if both its random bins are marked. Note that when we condition on {N_i \leq \alpha n}, we know that the final number of bins with load at least {i} is at most {\alpha n}. In this case, we can imagine the adversary marking the bins with load at least {i} (and maybe some more). Now the chance that a ball is marked is at least the chance that it has height {i+1} and there are at most {\alpha n} bins with height at least {i}. Hence, if {M} is the number of marked balls, we get

\displaystyle  \frac{\Pr[ B_{i+1} > t \cap N_i \leq \alpha n ] }{ \Pr[ N_i \leq \alpha n]} \leq^{(*)} \frac{\Pr[ M > t ]}{\Pr[ N_i \leq \alpha n ]} = \frac{\Pr[ Bin(n,\alpha^2) > t ]}{\Pr[ N_i \leq \alpha n ]}.

The last equality follows from the fact that {M \sim Bin(n, \alpha^2)}. \Box

If you’d like to be more precise about proving (*) above, see the details in the notes from the Mitzenmacher-Upfal book. (CMU/Pitt access only.)

Now we can use Chernoff to prove tail bounds on the Binomial distribution.

Lemma 5 If {\alpha^2 \geq 6 \frac{\ln n}{n}}, then

\displaystyle  \Pr[ Bin(n, \alpha^2) > 2n\alpha^2 ] \leq 1/n^2.

Moreover, if {\alpha^2 < 6 \frac{\ln n}{n}}, then

\displaystyle  \Pr[ Bin(n, \alpha^2) > 12 \ln n] \leq 1/n^2.

Proof: We’re interested in {X = \sum_{i = 1}^n X_i} where each {X_i = 1} w.p. {p = \alpha^2}, and {0} otherwise. The expectation {\mu = np \geq 6 \ln n}. And the chance that this number exceeds {(1+1)\mu} is at most

\displaystyle  \exp( -\frac{\mu^2}{2\mu + \mu} ) \leq \exp( -\mu/3 ) \leq 1/n^2,

which proves the first part. For the second part, {\mu < 6 \ln n}, and the probability that {X} exceeds {12 \ln n \geq \mu + 6 \ln n} is at most

\displaystyle  \exp( -\frac{(6 \ln n)^2}{2\mu + 6 \ln n} ) \leq \exp( -2\ln n ) \leq 1/n^2,

as claimed. \Box

So, now let us define {\alpha_i} to be the fraction of bins we’re aiming to show have load at least {i}. Define {\alpha_4 = 1/4}, and {\alpha_{i+1} = 2\alpha_i^2}. (The reason it is {2\alpha_i^2} instead of {\alpha_i^2}, which is the expectation, is for some breathing room to apply Chernoff: in particular, the factor {2} comes from the first part of Lemma 5.)

For each {i \geq 4}, let {\mathcal{E}_i} be the good event “{N_i \leq n\alpha_i}”; recall that {N_i} is the number of bins with load at least {i}. We want to upper bound the probability that this good event fails to happen.

Lemma 6 If {\alpha_i^2 \geq 6 \frac{\ln n}{n}}, then

\displaystyle  \Pr[ \lnot \mathcal{E}_{i+1} ] \leq (i+1)/n^2.

Proof: We prove this by induction. The base case is when {i = 4}, when at most {n/4} bins can have load at least {4} (by Markov). So {\Pr[ \lnot \mathcal{E}_4 ] = 0 < 4/n^2}.

For the induction,

\displaystyle  \Pr[ \lnot \mathcal{E}_{i+1} ] \leq \Pr[ \lnot \mathcal{E}_{i+1} \mid \mathcal{E}_i ]\Pr[ \mathcal{E}_i ] + \Pr[ \lnot \mathcal{E}_i ].

By Lemma 4 the former term is at most {\frac{\Pr[ Bin(n, \alpha_i^2) > \alpha_{i+1} n]}{\Pr[ \mathcal{E}_i ]} \cdot \Pr[ \mathcal{E}_i ] = \Pr[ Bin(n, \alpha_i^2) > 2n\alpha_i^2 ]}, which by Lemma 5 is at most {1/n^2}. And by induction, {\Pr[\lnot \mathcal{E}_i] \leq i/n^2}, which means {\Pr[ \lnot \mathcal{E}_{i+1}] \leq (i+1)/n^2}. \Box

Consider {i^* = \min\{i \mid \alpha_i^2 < 6 \frac{\ln n}{n}\}}. By Lemma 6, {\Pr[ \lnot \mathcal{E}_{i^*} ] \leq i^*/n^2 \leq 1/n}. Hence, with probability at least {1 - 1/n}, the number of bins with load at least {i^*} is at most {n\alpha_{i^*}}.

We’re almost done, but there’s one more step to do. If this number {n \alpha_{i^*}} were small, say {O(\log n)}, then we could have done a union bound, but this number may still be as large as about {\sqrt{n \log n}}. So apply Lemma 4 and the second part of Lemma 5 once more to get:

\displaystyle  \begin{array}{rcl}  \Pr[ N_{i^* +1 } > 12 \ln n] &\leq& \Pr[ N_{i^* +1 } > 12 \ln n \mid \mathcal{E}_{i^*} ] \Pr[ \mathcal{E}_{i^*}] + \Pr[ \lnot \mathcal{E}_{i^*} ] \\ &\leq& \Pr[ Bin(n, \alpha_{i^*}^2) > 12 \ln n \mid \mathcal{E}_{i^*} ] \Pr[ \mathcal{E}_{i^*}] + \Pr[ \lnot \mathcal{E}_{i^*} ] \\ &\leq& 1/n^2 + \Pr[ \lnot \mathcal{E}_{i^*} ] \leq \frac{n+1}{n^2} \end{array}

Finally, since {N_{i^* + 1}} is so small whp, use Lemma 4 and a union bound to say that

\displaystyle  \begin{array}{rcl}  \Pr[ N_{i^* + 2} > 1 ] &\leq& \Pr[ B(n, \frac{(12 \ln n)^2}{n^2}) > 1 ] + \Pr[ N_{i^* + 1} > 12 \ln n ] \\ &\leq& E[ B(n, \frac{(12 \ln n)^2}{n^2}) ] + \frac{n+1}{n^2} \\ &\leq& O(\frac{\log^2 n}{n}). \end{array}

Finally, the calculation in Section 2.1 below shows that {i^* = \frac{\ln \ln n}{\ln 2} + O(1)}, which completes the proof.

2.1. A Calculation

Since {\log_2 \alpha_4 = -2}, and {\log_2 \alpha_{i+1} = 1 + 2 \log_2 \alpha_i}, we calculate

\displaystyle  \log_2 \alpha_{i} = - 2^{i-4} - 1.

So, for {\log_2 \alpha_i \approx - \frac12 \log_2 n}, it suffices to set

\displaystyle  i = \log_2 \log_2 n + 3 = \frac{\ln \ln n}{\ln 2} + O(1).
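If you want to see both bounds empirically, here is a quick simulation sketch (the value of {n} is an arbitrary choice):

```python
import math
import random

def max_load(n, d):
    """Throw n balls into n bins; each ball picks d bins independently and
    uniformly at random and goes into the currently least-loaded of them."""
    load = [0] * n
    for _ in range(n):
        choices = [random.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: load[b])
        load[best] += 1
    return max(load)

if __name__ == "__main__":
    n = 100000
    print("one choice :", max_load(n, 1))   # roughly ln n / ln ln n
    print("two choices:", max_load(n, 2))   # roughly ln ln n / ln 2 + O(1)
    print("ln ln n / ln 2 ~", math.log(math.log(n)) / math.log(2))
```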

3. A Random Graphs Proof

Another way to show that the maximum load is {O(\log\log n)}—note that the constant is worse—is to use a first-principles analysis based on properties of random graphs. We build a random graph {G} as follows: the {n} vertices of {G} correspond to the {n} bins, and the edges correspond to balls—each time we probe two bins we connect them with an edge in {G}. For technical reasons, we’ll just consider what happens if we throw fewer balls (only {n/512} balls) into {n} bins—also, let’s imagine that each ball chooses two distinct bins each time.

Theorem 7 If we throw {\frac{n}{512}} balls into {n} bins using the best-of-two-bins method, the maximum load of any bin is {O(\log\log n)} whp.

Hence for {n} balls and {n} bins, the maximum load should be at most {512} times as much, whp. (It’s as though after every {n/512} balls, we forget about the current loads and zero out our counters—not zeroing out these counters can only give us a more evenly balanced allocation; I’ll try to put in a formal proof later.)

To prove the theorem, we need two results about the random graph {G} obtained by placing {n/512} random edges on {n} vertices. Both proofs are simple but surprisingly effective counting arguments; they appear at the end.

Lemma 8 The size of {G}‘s largest connected component is {O(\log n)} whp.

Lemma 9 There exists a suitably large constant {K > 0} such that for all subsets {S} of the vertex set with {|S| \ge K}, the induced graph {G[S]} contains at most {5|S|/2} edges, and hence has average degree at most {5}, whp.

Given the graph {G}, suppose we repeatedly perform the following operation in rounds:

In each round, remove all vertices of degree {\leq 10} in the current graph.

We stop when there are no more vertices of small degree.

Lemma 10 This process ends after {O(\log\log n)} rounds whp, and the number of remaining vertices in each remaining component is at most {K}.

Proof: Condition on the events in the two previous lemmas. Any component {C} of size at least {K} in the current graph has average degree at most {5}; by Markov at least half the vertices have degree at most {10} and will be removed. So as long as we have at least {K} nodes in a component, we halve its size. But the size of each component was {O(\log n)} to begin, so this takes {O(\log \log n)} rounds before each component has size at most {K}. \Box

Lemma 11 If a node/bin survives {i} rounds before it is deleted, its load due to edges that have already been deleted is at most {10i}. If a node/bin is never deleted, its load is at most {10i^* + K}, where {i^*} is the total number of rounds.

Proof: Consider the nodes removed in round {1}: their degree was at most {10}, so even if all those balls went to such nodes, their final load would be at most {10}. Now, consider any node {x} that survived this round. While many edges incident to it might have been removed in this round, we claim that at most {10} of those would have contributed to {x}‘s load. Indeed, each of the other endpoints of those edges went to bins with final load at most {10}. So at most {10} of them would choose {x} as their less loaded bin before it is better for them to go elsewhere.

Now, suppose {y} is deleted in round {2}: then again its load can be at most {20}: ten because it survived the previous round, and 10 from its own degree in this round. OTOH, if {y} survives, then consider all the edges incident to {y} that were deleted in previous rounds. Each of them went to nodes that were deleted in rounds {1} or {2}, and hence had maximum load at most {20}. Thus at most {20} of these edges could contribute to {y}‘s load before it was better for them to go to the other endpoint. The same inductive argument holds for any round {i \leq i^*}.

Finally, the process ends when each component has size at most {K}, so the degree of any node is at most {K}. Even if all these edges contribute to the load of a bin, it is only {10i^* + K}. \Box

By Lemma 10, the number of rounds is {i^* = O(\log \log n)} whp, so by Lemma 11 the maximum load is also {O(\log \log n)} whp.
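Here is a small simulation sketch of this random graph and the peeling process (names are illustrative; with only {n/512} edges the graph is extremely sparse, so the peeling typically finishes in a round or two, consistent with the {O(\log\log n)} bound):

```python
import random
from collections import defaultdict, deque

def balls_to_graph(n, m):
    """The random graph of this section: n vertices (bins), m edges (balls),
    each ball/edge joining two distinct bins chosen at random."""
    adj = defaultdict(list)
    for _ in range(m):
        u, v = random.sample(range(n), 2)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def largest_component(adj):
    """Size of the largest connected component (isolated bins ignored)."""
    seen, best = set(), 0
    for s in list(adj):
        if s in seen:
            continue
        seen.add(s)
        queue, size = deque([s]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def peeling_rounds(adj, threshold=10):
    """Repeatedly delete every vertex whose degree in the surviving graph is
    at most `threshold`; return the number of rounds performed."""
    alive = set(adj)
    rounds = 0
    while alive:
        low = {u for u in alive
               if sum(1 for v in adj[u] if v in alive) <= threshold}
        if not low:
            break
        alive -= low
        rounds += 1
    return rounds

if __name__ == "__main__":
    n = 1 << 20
    adj = balls_to_graph(n, n // 512)
    print("largest component:", largest_component(adj))   # O(log n) whp
    print("peeling rounds   :", peeling_rounds(adj))      # O(log log n) whp
```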

3.1. Missing Proofs of Lemmas

Lemma 12 The size of {G}‘s largest connected component is {O(\log n)} whp.

Proof: We have a graph with {n} vertices and {m=\frac{n}{512}} edges where we connect vertices at random.

\displaystyle  \begin{array}{rcl}  \Pr[ k+1\text{ vertices connected } ] &\le& \Pr[ \text{ at least } k \text{ edges fall within } k+1 \text{ nodes }] \\ &\le& \binom{m}{k}\left(\frac{\binom{k+1}{2}}{\binom{n}{2}}\right)^k = \binom{m}{k}\left(\frac{k(k+1)}{n(n-1)}\right)^k \\ &\leq& \binom{m}{k}\left(\frac{4k}{n}\right)^{2k}. \end{array}

The last inequality uses {k(k+1) \leq 2k^2} and {n(n-1) \geq n^2/2}. Now the probability that any such set exists can be bounded above by the union bound

\displaystyle  \begin{array}{rcl}  \Pr[ \exists \text{ a connected set of size }k+1 ] &\le& \binom{n}{k+1}\binom{m}{k}\left(\frac{4k}{n}\right)^{2k}\\ &\le& n\left(\frac{ne}{k}\right)^k\left(\frac{ne}{512k}\right)^k\left(\frac{4k}{n}\right)^{2k}\\ &\le& n\left(\frac{e^2}{16}\right)^k \le \frac{1}{2n} \quad\text{ if }\; k=\Theta(\log n ) \end{array}

which proves the claim. \Box

Lemma 13 There exists a suitably large constant {K > 0} such that for all subsets {S} of the vertex set with {|S| \ge K}, the induced graph {G[S]} contains at most {5|S|/2} edges, and hence has average degree at most {5}, whp.

Proof:

\displaystyle  \Pr[ \text{ a fixed set of }k\text{ nodes gets } > \frac{5k}{2} \text{ edges }] \leq \binom{m}{5k/2}\left(\frac{4k}{n}\right)^{2\cdot 5k/2} = \binom{m}{5k/2}\left(\frac{4k}{n}\right)^{5k}.

By a union bound over all sets, the probability that such a set exists is

\displaystyle  \begin{array}{rcl}  \Pr[ \exists \text{ a bad set } ] &\le& \sum_{k \geq K} \binom{n}{k}\binom{m}{5k/2}\left(\frac{4k}{n}\right)^{5k}\\ &\le& \sum_{k \geq K} \left(\frac{ne}{k}\right)^k\left(\frac{ne}{512(5k/2)}\right)^{5k/2}\left(\frac{k}{n}\right)^{5k} = \sum_{k \geq K} \left(\frac{k}{n}\right)^{3k/2}\alpha^k, \end{array}

where {\alpha = \frac{e^{7/2}}{80^{5/2}} < 1/2}. Now, we can break this sum into two: for small values of {k}, the {(k/n)^k} term would be very small, else the {\alpha^k} term would be small. Indeed, for {k \geq 2\log_2 n}, we know that {\alpha^k \leq 1/n^{2}}, so

\displaystyle  \sum_{k = 2 \log n}^n \left(\frac{k}{n}\right)^{3k/2} \alpha^k \leq \sum_{k = 2 \log n}^n \alpha^k \leq 1/n.

Now for the rest:

\displaystyle  \sum_{k = K}^{2 \log n} \left(\frac{k}{n}\right)^{3k/2} \alpha^k \leq \sum_{k = K}^{2 \log n} \left(\frac{k}{n}\right)^{3k/2} \leq 2 \log n \cdot \left( \frac{ 2 \log n }{n} \right)^{3K/2} \leq 1/n^4,

for {K = 3}, say. \Box

Bibliographic Notes: The layered induction appears in the paper Balanced Allocations by Azar, Broder, Karlin, and Upfal. The random graph analysis is in the paper Efficient PRAM Simulation on a Distributed Memory Machine by Karp, Luby, and Meyer auf der Heide; I learned it from Satish Rao. The Always-go-left algorithm and analysis are from the paper How Asymmetry Helps Load Balancing by Berthold Vöcking.

Update: Here’s a survey on the various proof techniques by Mitzenmacher, Sitaraman and Richa.