Randomized Algorithms, Carnegie Mellon: Spring 2011
Notes on Lecture #5
A couple of things I remembered about Chernoff bounds while sitting in Avrim’s lecture, which I thought I’d post here.
The first thing is a (slightly different) bound that you may find easier to remember. To begin, let us state the multiplicative bounds mentioned in the handout, altered ever so slightly. Let $X_1, X_2, \ldots, X_n$ be independent $\{0,1\}$-valued random variables, let $X = \sum_i X_i$, and let $\mu = E[X]$. Then

$\Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}$ for $0 < \delta \le 1$,  (1)

$\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / 3}$ for $0 < \delta \le 1$,  (2)

$\Pr[X \ge (1+\delta)\mu] \le \left( \frac{e^\delta}{(1+\delta)^{1+\delta}} \right)^\mu$ for all $\delta > 0$.  (3)
Now, here’s the version of the general multiplicative bounds I find easiest to remember. Let $X_1, X_2, \ldots, X_n$ be independent random variables taking values in $[0,1]$, let $X = \sum_i X_i$, and let $\mu = E[X]$. Then

$\Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}$ for $0 < \delta \le 1$,  (4)

$\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / (2+\delta)}$ for all $\delta > 0$.  (5)
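As a quick numerical sanity check of the easy-to-remember upper-tail bound $\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2\mu/(2+\delta)}$ — a sketch, assuming the $X_i$ are i.i.d. Bernoulli($p$), so $X$ is Binomial($n, p$) with $\mu = np$ (the parameter values below are arbitrary choices, not from the lecture):

```python
import math
import random

random.seed(0)

n, p = 1000, 0.1     # X = sum of n i.i.d. Bernoulli(p)'s, so mu = n*p = 100
mu = n * p
delta = 0.5          # we look at the event X >= (1 + delta) * mu = 150
trials = 2000

# Empirical estimate of Pr[X >= (1+delta) mu] by direct simulation.
hits = 0
for _ in range(trials):
    x = sum(1 for _ in range(n) if random.random() < p)
    if x >= (1 + delta) * mu:
        hits += 1
empirical = hits / trials

# The easy-to-remember upper-tail bound: exp(-delta^2 * mu / (2 + delta)).
bound = math.exp(-delta**2 * mu / (2 + delta))

print(f"empirical tail = {empirical:.2e}, Chernoff bound = {bound:.2e}")
```

Here the bound evaluates to $e^{-10} \approx 4.5\times 10^{-5}$, and the empirical tail frequency should sit (well) below it.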
Note that we’ve relaxed the assumption that the $X_i$’s are $\{0,1\}$-valued, and are allowing them to lie in $[0,1]$. (This is often super-useful; you may want to go over the proof of Theorem 1 in the book or the handout and think about how you might prove it.)
Moreover, (4) is exactly the same as (1). And if we constrain $\delta \le 1$, the exponent on the right of (5) is at most $-\delta^2\mu/3$, which is the same as in (2), so (5) is a better bound than (2) for $\delta$’s in this range. So we’re doing at least as well as the first two tail bounds in Theorem 1.
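The comparison in the previous paragraph is easy to check numerically — a minimal sketch, assuming upper-tail bounds of the form $e^{-\delta^2\mu/(2+\delta)}$ and $e^{-\delta^2\mu/3}$, sweeping $\delta$ over $(0,1]$:

```python
import math

mu = 10.0  # any positive mean works; the comparison is pointwise in delta

for k in range(1, 101):
    delta = k / 100  # sweep delta over (0, 1]
    relaxed = math.exp(-delta**2 * mu / (2 + delta))  # (5)-style bound
    classic = math.exp(-delta**2 * mu / 3)            # (2)-style bound
    # For delta <= 1 we have 2 + delta <= 3, so the exponent of the first
    # bound is at least as negative, making it at least as strong.
    assert relaxed <= classic
print("exp(-d^2 mu/(2+d)) <= exp(-d^2 mu/3) for all d in (0, 1]")
```

Equality holds exactly at $\delta = 1$, where both exponents are $-\mu/3$.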
Cool! So does this subsume (3) as well? Sadly, no. We lose something in the translation. Here’s an example. Let the $X_i$’s be i.i.d., taking on value $1$ with probability $1/n$ and $0$ otherwise, so that $\mu = 1$. Now, what is the value of $\delta$ which gives us a tail probability of $1/n^c$? If we set $1+\delta = \Theta(\frac{c \log n}{\log \log n})$ for suitably large constants then (3) gives us $\Pr[X \ge (1+\delta)\mu] \le n^{-c}$, but for (5) to fall below $n^{-c}$ we need $\delta \ge K c \log n$ for some constant $K$.
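To see the gap concretely — a sketch, assuming $\mu = 1$, bound (3) of the form $(e^\delta/(1+\delta)^{1+\delta})^\mu$, and bound (5) of the form $e^{-\delta^2\mu/(2+\delta)}$ — one can tabulate the logarithms of both right-hand sides at a few deviation multiples $t = 1+\delta$ (the values of $t$ below are arbitrary illustrations):

```python
import math

mu = 1.0
for t in [10, 20, 40]:  # t = 1 + delta, the deviation multiple
    delta = t - 1
    # Bound (3): (e^delta / (1+delta)^(1+delta))^mu, taken in log-space:
    # mu * (delta - (1+delta) * log(1+delta)).
    log_b3 = mu * (delta - t * math.log(t))
    # Bound (5): exp(-delta^2 mu / (2+delta)), also in log-space.
    log_b5 = -delta**2 * mu / (2 + delta)
    print(f"t={t:3d}: log bound(3) = {log_b3:8.1f}, log bound(5) = {log_b5:8.1f}")
```

Bound (3)’s exponent grows like $-t\log t$ while bound (5)’s grows only like $-\delta$, which is exactly the $\log\log n$ factor lost in the translation.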
Secondly, some jargon: these bounds are often called concentration inequalities (since they show that “most of the probability mass is concentrated around the mean”), tail bounds (since they show that the “tails of the distribution are small”), or large deviation bounds (since they show that “large deviations from the mean are unlikely”). There are tons of other tail bounds, some relaxing the requirements of independence, others relaxing the boundedness requirements, others looking at functions other than the sum of the random variables—you’ll see more in the homeworks and future lectures.