# CMU Randomized Algorithms

Randomized Algorithms, Carnegie Mellon: Spring 2011

## Lecture #15: Distance-preserving trees (part I)

**1. Metric Spaces**

A metric space $(X, d)$ is a set $X$ of points, with a distance function $d : X \times X \to \mathbb{R}_{\geq 0}$ that satisfies, for all $x, y, z \in X$: $d(x, y) = 0$ if and only if $x = y$, *symmetry* (i.e., $d(x, y) = d(y, x)$), and the *triangle inequality* (i.e., $d(x, z) \leq d(x, y) + d(y, z)$ for all $x, y, z$). Most of the computer science applications deal with finite metrics, and then $n$ denotes the number of points $|X|$.

There are many popular problems which are defined on metric spaces:

- The Traveling Salesman Problem (TSP): the input is a metric space, and the goal is to find a tour on all the nodes whose total length is as small as possible. This problem is sometimes defined on non-metrics as well, but most of the time we consider the metric version.
The best approximation algorithm for the problem is a $(\frac{3}{2} - \epsilon)$-approximation (for shortest-path metrics of unweighted graphs) due to Oveis-Gharan, Saberi and Singh (2010). Their paper uses randomization to beat the $\frac{3}{2}$-approximation of Christofides (1976), and makes progress on this long-standing open problem. The best hardness result for this problem is something like $\frac{220}{219}$ due to Papadimitriou and Vempala.

- The $k$-Center/$k$-Means/$k$-Median problems: the input is a metric space $(X, d)$, and the goal is to choose some $k$ positions $F \subseteq X$ as “facilities”, to minimize some objective function. In *$k$-center*, we minimize $\max_{x \in X} d(x, F)$, the largest distance from any client to its closest facility; here, we define the distance from a point $x$ to a set $S \subseteq X$ as $d(x, S) := \min_{y \in S} d(x, y)$. In *$k$-median*, we minimize $\sum_{x \in X} d(x, F)$, the total (or equivalently, the average) distance from any client to its closest facility. In *$k$-means*, we minimize $\frac{1}{|X|} \sum_{x \in X} d(x, F)^2$, the average squared distance from any client to its closest facility. (Note: to see why these problems are called what they are, consider what happens for the $1$-means/medians problem on the line.) The best algorithms for $k$-center give us a $2$-approximation, and this is the best possible unless P=NP. The best $k$-median algorithm gives a $(3 + \epsilon)$-approximation, whereas the best hardness known for the version of the problem stated above is $(1 + \frac{2}{e})$ unless P=NP. For $k$-means, the gap between the best algorithms and hardness results is worse for general metric spaces. For geometric spaces, better algorithms are known for $k$-means/medians.

- The $k$-server problem: this is a classic online problem, where the input is a metric space (given up-front); a sequence of requests arrives online, each request being some point in the metric space. The algorithm maintains $k$ servers, each at some position in the metric space. When a request $\sigma_t$ arrives, one of the servers must be moved to $\sigma_t$ to serve the request. The cost incurred by the algorithm in this step is the distance moved by the server, and the total cost is the sum of these per-step costs. The goal is to give a strategy that minimizes the total cost of the algorithm. The best algorithm for $k$-server is a $(2k - 1)$-competitive deterministic algorithm due to Koutsoupias and Papadimitriou. Since $k$-server contains paging as a special case (why?), no deterministic algorithm can do better than $k$-competitive. It is a long-standing open problem whether we can do better than $2k - 1$ deterministically; but far more interesting is the question of whether randomization can help beat $O(k)$. The best lower bound against oblivious adversaries is $\Omega(\log k)$, again from the paging problem.
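The $2$-approximation for $k$-center mentioned above is achieved by a simple farthest-point greedy (Gonzalez’s algorithm); here is a minimal sketch, not from the lecture, with the input represented as a made-up distance matrix:

```python
# Sketch of the farthest-point greedy for k-center (Gonzalez's
# algorithm), which achieves the 2-approximation mentioned above.
# `dist` is a symmetric matrix of pairwise distances.
def k_center_greedy(dist, k):
    n = len(dist)
    centers = [0]                   # start from an arbitrary point
    d_to_C = list(dist[0])          # distance of each point to the chosen centers
    for _ in range(k - 1):
        far = max(range(n), key=lambda x: d_to_C[x])   # farthest point so far
        centers.append(far)
        d_to_C = [min(d_to_C[x], dist[far][x]) for x in range(n)]
    return centers
```

For example, on four points on a line at positions $0, 1, 10, 11$ with $k = 2$, the greedy picks the points at $0$ and $11$, one from each natural cluster.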

**1.1. Approximating Metrics by Trees: Attempt I**

A special kind of metric space is a *tree metric*: here we are given a tree $T = (V, E)$ where each edge $e \in E$ has a length $\ell_e \geq 0$. This defines a metric $(V, d_T)$, where the distance $d_T(x, y)$ is the length of the (unique) shortest path between $x$ and $y$, according to the edge lengths $\ell_e$. In general, given any graph $G$ with edge lengths, we get a metric $(V(G), d_G)$.
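As a sanity check on these definitions, here is a small sketch (the example graph is made up) that computes the shortest-path metric $d_G$ of a weighted graph via Floyd-Warshall and verifies the metric axioms:

```python
# Compute the shortest-path metric d_G of a weighted undirected graph
# via Floyd-Warshall, and check that it really satisfies the metric
# axioms (identity, symmetry, triangle inequality).
INF = float("inf")

def shortest_path_metric(n, edges):
    """edges: list of (u, v, length) triples on vertices 0..n-1."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

def is_metric(d):
    n = len(d)
    return all(
        d[i][j] == d[j][i]                      # symmetry
        and (d[i][j] == 0) == (i == j)          # d(x,y) = 0 iff x = y
        and d[i][j] <= d[i][m] + d[m][j]        # triangle inequality
        for i in range(n) for j in range(n) for m in range(n)
    )

d = shortest_path_metric(4, [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 5)])
```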

Tree metrics are especially nice because we can use the graph theoretic idea that it is “generated” by a tree to understand the structure of the metric better, and hence give better algorithms for problems on tree metrics. For instance:

- TSP on tree metrics can be solved exactly: just take an Euler tour of the points in the tree.
- $k$-median can be solved exactly on tree metrics using dynamic programming.
- $k$-server on trees admits a simple $k$-competitive deterministic algorithm.
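The first bullet above can be made concrete: doubling every edge of the tree gives an Eulerian multigraph, and shortcutting an Euler tour (visiting each vertex at its first appearance) yields a tour of length exactly twice the total edge weight, which is optimal on a tree metric, since any closed walk visiting all vertices must cross each tree edge at least twice. A minimal sketch (the traversal is just a DFS, which visits vertices in shortcut-Euler-tour order):

```python
# TSP on a tree metric via an Euler tour: double each edge, walk an
# Euler tour, and shortcut repeated vertices.  The resulting tour has
# length exactly 2 * (total edge weight), which is optimal on a tree.
def tree_tsp_tour(adj, root=0):
    """adj: {u: [(v, length), ...]} adjacency lists of an undirected tree."""
    order, seen, stack = [], set(), [root]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)             # first visit = shortcut Euler tour
        for v, _ in adj[u]:
            if v not in seen:
                stack.append(v)
    return order + [root]           # close the tour at the start
```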

So if all metric spaces were well-approximable by trees (e.g., if there were some small factor $\alpha$ such that for every metric $(X, d)$ we could find a tree $T$ such that

$$ d(x, y) \;\leq\; d_T(x, y) \;\leq\; \alpha \cdot d(x, y) \qquad (1) $$

for every $x, y \in X$), then we would have an $O(\alpha)$-approximation for TSP and $k$-median, and an $O(\alpha k)$-competitive algorithm for $k$-server on all metrics. Sadly, this is not the case: for the metric generated by the cycle graph $C_n$, the best factor we can get in~(1) is $\alpha = \Omega(n)$; indeed, a factor of $n - 1$ is what we would get if we just approximated the cycle by a line.

So even for simple metrics like that generated by the cycle (on which we can solve these problems really easily), this approach hits a dead-end really fast. Pity.

**1.2. Approximating Metrics by Trees: Attempt II**

Here’s where randomization will come to our help: let’s illustrate the idea on the cycle $C_n$. If we delete a uniformly random edge of the cycle, we get a tree (in fact, a line). Note that the distances in the line are at least those in the cycle.

How much more? For two vertices $x, y$ adjacent in the cycle, the edge $\{x, y\}$ still exists in the tree with probability $\frac{n-1}{n}$, in which case $d_T(x, y) = d(x, y) = 1$; else, with probability $\frac{1}{n}$, $x$ and $y$ lie at distance $n - 1$ from each other. So the expected distance between the endpoints of an edge of the cycle is

$$ \mathbb{E}[d_T(x, y)] \;=\; \frac{n-1}{n} \cdot 1 + \frac{1}{n} \cdot (n - 1) \;=\; \frac{2(n-1)}{n} \;\leq\; 2 \cdot d(x, y). $$

And indeed, the bound $\mathbb{E}[d_T(x, y)] \leq 2 \cdot d(x, y)$ also holds for any pair $x, y$ (check!).
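This claim is easy to verify exhaustively for small $n$: the sketch below averages the path distance over all $n$ choices of deleted edge and checks the factor-$2$ bound for every pair.

```python
# Exhaustive check of the expected-stretch bound on the cycle C_n:
# deleting each of the n edges (each with probability 1/n) turns the
# cycle into a path; average the path distances over all n deletions.
def cycle_dist(n, x, y):
    k = abs(x - y)
    return min(k, n - k)

def expected_tree_dist(n, x, y):
    total = 0
    for e in range(n):                                # delete edge {e, e+1 mod n}
        px, py = (x - e - 1) % n, (y - e - 1) % n     # positions along the path
        total += abs(px - py)                         # path distance after the cut
    return total / n

# Worst-case ratio of expected tree distance to cycle distance.
n = 9
worst = max(
    expected_tree_dist(n, x, y) / cycle_dist(n, x, y)
    for x in range(n) for y in range(n) if x != y
)
```

For adjacent pairs this ratio is $\frac{2(n-1)}{n}$, approaching $2$ as $n$ grows.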

But is this any good for us?

Suppose we wanted to solve $k$-median on the cycle, and let $F^* \subseteq V$ be the optimal solution. For each point $j$, let $f^*_j$ be the closest facility in $F^*$ to $j$; hence the cost of the solution is:

$$ \mathrm{OPT} \;=\; \sum_{j \in V} d(j, f^*_j). $$

By the expected stretch guarantee, we get that

$$ \mathbb{E}\Big[\sum_{j \in V} d_T(j, f^*_j)\Big] \;=\; \sum_{j \in V} \mathbb{E}\big[d_T(j, f^*_j)\big] \;\leq\; 2 \sum_{j \in V} d(j, f^*_j) \;=\; 2\,\mathrm{OPT}. $$

I.e., the expected cost of this solution $F^*$ on the random tree is at most $2\,\mathrm{OPT}$. And hence, if $\mathrm{OPT}_T$ is the cost of the optimal solution on $T$, we get

$$ \mathbb{E}[\mathrm{OPT}_T] \;\leq\; 2\,\mathrm{OPT}. $$

Great—we know that the optimal solution on the random tree does not cost too much. *And* we know we can find the optimal solution on trees in poly-time.

Let’s say $F_T$ is the optimal solution for the tree $T$, where the closest facility in $F_T$ to $j$ is $f_j$, giving $\mathrm{OPT}_T = \sum_{j \in V} d_T(j, f_j)$. How does this solution perform back on the cycle? Well, each distance in the cycle is at most that in the tree $T$, so the expected cost of solution $F_T$ *on the cycle* will be

$$ \mathbb{E}\Big[\sum_{j \in V} d(j, f_j)\Big] \;\leq\; \mathbb{E}\Big[\sum_{j \in V} d_T(j, f_j)\Big] \;=\; \mathbb{E}[\mathrm{OPT}_T] \;\leq\; 2\,\mathrm{OPT}. $$

And we have a randomized $2$-approximation (in expectation) for $k$-median on the cycle!
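The whole argument can be checked by brute force on a small instance (the sizes below are chosen arbitrarily): solve $k$-median optimally on the path obtained from each deleted edge, evaluate that solution back on the cycle, and compare the average cost to $2\,\mathrm{OPT}$.

```python
# Brute-force check of the reduction on a small cycle: for each
# deleted edge, solve k-median optimally on the resulting path,
# evaluate that solution back on the cycle, and compare the average
# cost against 2 * OPT.
from itertools import combinations

def cost(dist, facilities, n):
    return sum(min(dist(j, f) for f in facilities) for j in range(n))

n, k = 8, 2
d_cycle = lambda x, y: min(abs(x - y), n - abs(x - y))
opt_cycle = min(cost(d_cycle, F, n) for F in combinations(range(n), k))

total = 0.0
for e in range(n):                                    # delete edge {e, e+1}
    pos = lambda v, e=e: (v - e - 1) % n              # position along the path
    d_tree = lambda x, y, p=pos: abs(p(x) - p(y))     # path (tree) distance
    F_T = min(combinations(range(n), k), key=lambda F: cost(d_tree, F, n))
    total += cost(d_cycle, F_T, n)                    # evaluate on the cycle
```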

**1.3. Popping the Stack**

To recap, here’s the algorithm: pick a random tree $T$ from some nice distribution. Find an optimal solution $F_T$ for the problem, using distances according to the tree $T$, and output this set as the solution for the original metric.

And what did we use to show this was a good solution? That we had a distribution over trees such that

- every tree $T$ in the distribution had distances no less than those in the original metric, i.e., $d_T(x, y) \geq d(x, y)$ for all $x, y$, and
- the expected tree distance between any pair satisfies $\mathbb{E}[d_T(x, y)] \leq \alpha \cdot d(x, y)$ for some small $\alpha$; here $\alpha = 2$.

And last but not least

- that the objective function was linear in the distances, and so we could use linearity of expectations.

Note that TSP, $k$-median, $k$-server, and many other metric problems have cost functions that are linear in the distances, so as long as the metrics we care about can be “embedded into random trees” with small $\alpha$, we can translate algorithms on trees for these problems into (randomized) algorithms for general metrics! This approach gets used all the time, and is worth remembering. (BTW, note that this general approach does not work for non-linear objective functions, like $k$-center or $k$-means.)

But can we get a small $\alpha$ in general? In the next section, we show that for any $n$-point metric with *aspect ratio* $\Delta := \frac{\max_{x \neq y} d(x, y)}{\min_{x \neq y} d(x, y)}$, we can get $\alpha = O(\log n \log \Delta)$; and we indicate how to improve this to $\alpha = O(\log n)$, which is the best possible!

**2. Embeddings into Trees**

In this section, we prove the following theorem about tree embeddings (and then, in the following section, we improve the guarantee further to $O(\log n)$).

**Theorem 1.** *Given any metric $(X, d)$ with $|X| = n$ and aspect ratio $\Delta$, there exists an efficiently sampleable distribution $\mathcal{D}$ over spanning trees on the point set $X$ such that for all $x, y \in X$:*

*for every tree $T$ in the support of $\mathcal{D}$, $d_T(x, y) \geq d(x, y)$; and $\mathbb{E}_{T \sim \mathcal{D}}[d_T(x, y)] \leq O(\log n \log \Delta) \cdot d(x, y)$.*

To prove this theorem, we will use the idea of a *low-diameter decomposition*. Given a metric space $(X, d)$ on $n$ points and a parameter $r > 0$, a *(randomized) low-diameter decomposition* is an efficiently sampleable probability distribution over partitions of $X$ into parts $X_1, X_2, \ldots, X_t$ such that

- *(Low Radius/Diameter)* For all $i$, there exists a center $z_i \in X_i$ such that for all $x \in X_i$, $d(z_i, x) \leq r$. Hence, for any $x, y \in X_i$, $d(x, y) \leq 2r$.
- *(Low Cutting Probability)* For each pair $x, y \in X$, $\Pr[x \text{ and } y \text{ lie in different parts}] \leq \beta \cdot \frac{d(x, y)}{r}$, with $\beta = O(\log n)$.
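To preview what such an object looks like, here is a hedged sketch of one standard construction (the random-radius, random-permutation scheme in the style of Calinescu, Karloff, and Rabani); the construction given in the next lecture may differ.

```python
# A sketch of one standard randomized low-diameter decomposition:
# draw one random radius R <= r shared by all centers, order the
# points by a random permutation, and let each point be claimed by
# the first center within distance R.  Every part then has radius
# <= r, hence diameter <= 2r; the cutting probability of this style
# of scheme can be shown to be O(log n) * d(x, y) / r.
import random

def low_diameter_decomposition(dist, r):
    """dist: symmetric n x n distance matrix; returns a part id per point."""
    n = len(dist)
    R = random.uniform(r / 2, r)        # one shared random radius <= r
    order = list(range(n))
    random.shuffle(order)               # random ordering of the centers
    part = [None] * n
    for rank, c in enumerate(order):
        for x in range(n):
            if part[x] is None and dist[c][x] <= R:
                part[x] = rank          # claimed by the first close center
    return part
```

Every point is claimed by the time its own turn in the permutation comes (it is at distance $0$ from itself), so the output is always a genuine partition.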

We’ll show how to construct such a decomposition in the next section (next lecture), and use such a decomposition to prove Theorem 1.
