
## Approximating Vertex Cover

As we have seen before, finding the minimum vertex cover of a graph is NP-complete. However, a very simple procedure can efficiently find a cover that is at most twice as large as the optimal cover:

```

VertexCover(G = (V, E))

    While E ≠ ∅ do:

        Select an arbitrary edge (u, v) ∈ E

        Add both u and v to the vertex cover

        Delete all edges from E that are incident on either u or v

```

It should be apparent that this procedure always produces a vertex cover, since each edge is deleted only after an incident vertex has been added to the cover. More interesting is the claim that any vertex cover must use at least half as many vertices as this one. Why? Consider just the edges selected by the algorithm. No two of these edges can share a vertex, since every edge incident on a chosen endpoint is deleted immediately after each selection. Therefore, any cover of just these edges must include at least one vertex per edge, which makes it at least half the size of the greedy cover.
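The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the book's own code; the function name and edge representation (pairs of hashable vertices) are my choices:

```python
def vertex_cover_2approx(edges):
    """Matching-based 2-approximation for vertex cover.

    Repeatedly selects an arbitrary uncovered edge (u, v), adds both
    endpoints to the cover, and deletes every edge incident on u or v.
    """
    cover = set()
    remaining = set(edges)
    while remaining:
        u, v = remaining.pop()  # select an arbitrary edge
        cover.add(u)
        cover.add(v)
        # delete all edges from E incident on either u or v
        remaining = {(a, b) for (a, b) in remaining
                     if a != u and a != v and b != u and b != v}
    return cover
```

On the star graph discussed below (center 0, leaves 1 through n-1), any selected edge touches the center, so this sketch returns a cover of exactly two vertices regardless of which edge is chosen.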

• Although the procedure is simple, it is not stupid - Many seemingly smarter heuristics can give far worse performance in the worst case. For example, why not modify the procedure above to select only one of the two vertices for the cover instead of both? After all, the selected edge will be equally well covered by only one vertex. However, consider the star-shaped graph of Figure . The original procedure will produce a two-vertex cover, while the single-vertex heuristic can return a cover as large as n-1 vertices, should we get unlucky and repeatedly select the leaf instead of the center as the cover vertex.

Figure: Neglecting to pick the center vertex leads to a terrible vertex cover

• Greedy isn't always the answer - Perhaps the most natural heuristic for this problem would repeatedly select and delete the vertex of highest remaining degree for the vertex cover. After all, this vertex will cover the largest number of possible edges. However, in the case of ties or near ties, this heuristic can go seriously astray, and in the worst case can yield a cover that is Θ(lg n) times optimal.
• Making a heuristic more complicated does not necessarily make it better - It is easy to complicate heuristics by adding more special cases or details. For example, the procedure above does not specify which edge should be selected next. It might seem reasonable always to select the edge whose endpoints have highest degree. However, this does not improve the worst-case bound and just makes it more difficult to analyze.
• A postprocessing cleanup step can't hurt - The flip side of designing simple heuristics is that they can often be modified to yield better-in-practice solutions without weakening the approximation bound. For example, a postprocessing step that deletes any unnecessary vertex from the cover can only improve things in practice, even though it won't help the worst-case bound.
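The cleanup step in the last point can be sketched as follows. This is my own illustrative version (the helper name and the deterministic scan order are assumptions, not part of the text): try each cover vertex in turn and drop it if every edge remains covered without it.

```python
def prune_cover(cover, edges):
    """Delete any vertex from the cover whose removal still
    leaves every edge covered by some remaining vertex."""
    cover = set(cover)
    for v in sorted(cover):  # fixed order, for reproducibility
        trial = cover - {v}
        if all(a in trial or b in trial for (a, b) in edges):
            cover = trial  # v was unnecessary; discard it
    return cover
```

On the star graph, a cover such as {center, leaf} prunes down to just the center, so the step can only help in practice, while a worst-case instance that never contains a redundant vertex is left unchanged, preserving the factor-of-two bound.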

The important property of approximation algorithms is that they relate the size of the solution produced directly to a lower bound on the optimal solution. Instead of thinking about how well we might do, we have to think about the worst case, i.e., how badly we might perform.


Algorithms
Mon Jun 2 23:33:50 EDT 1997