Graph Algorithms

As we have seen, the assignment problem can be solved with branch-and-bound, but there are also polynomial-time solutions for this problem. In this chapter we will consider these. Branch-and-bound is simpler, more general and possibly faster in practice, but inherently exponential. With the study of matching we touch on the area of problems called combinatorial optimization. Many of these problems can be formulated as problems of graphs. Therefore, it is essential to know how to solve a basic set of graph problems: computing minimum spanning trees, shortest paths, matchings and maximum flow. These and some other problems are treated in this chapter.

Minimum Spanning Trees

Prim's Algorithm

In the chapter on greedy algorithms we have studied Kruskal's algorithm for finding a minimum spanning forest. There is an alternative algorithm, called Prim's algorithm, with comparable performance. It only works for connected graphs; for a graph consisting of several components, it should be run on each component separately.

The idea in Prim's algorithm is to grow the minimum spanning tree from a single node: initially the tree contains no edges, and the set of reached nodes consists of a single node s, which we will call the starting node. In every subsequent step, the lightest edge leading from a tree node to a non-tree node is added.

Prim's MST Algorithm

This algorithm is similar, but not identical to Dijkstra's algorithm: it is not true that for any node v all nodes on the shortest path from s to v are in the spanning tree. Nevertheless, we can implement Prim's algorithm just like Dijkstra's algorithm. In this case the entry of a non-tree node v in the priority queue is not the length of the shortest discovered path from s to v, but the weight of the lightest edge leading from a tree node u to v.

This algorithm does not need a union-find data structure: for testing whether edges are potentially interesting, it is sufficient to maintain an array of bits marking the tree nodes. When using a priority queue, in the course of the algorithm at most n inserts, n deletemins and m decreasekey operations are performed. With an appropriate priority queue, these operations can be performed in O(n * log n + m) time. For graphs with m > n * log n, this is less than the O(m * log n) time of Kruskal's algorithm.
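
To make the idea concrete, here is a minimal sketch in Java (not the program referred to below): it assumes a connected graph given as adjacency lists adj[u] of pairs {v, weight}, and it replaces decreasekey by lazy deletion, so that outdated queue entries are simply skipped.

  import java.util.*;

  class PrimSketch
  {
    // Returns the total weight of an MST of a connected graph.
    static long prim(int n, List<int[]>[] adj) // adj[u] holds pairs {v, w}
    {
      boolean[] inTree = new boolean[n];
      PriorityQueue<int[]> pq =
        new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
      pq.add(new int[]{0, 0}); // start node 0, connecting weight 0
      long total = 0;
      while (!pq.isEmpty())
      {
        int[] top = pq.poll();
        int u = top[0];
        if (inTree[u]) // outdated entry: u was already reached more cheaply
          continue;
        inTree[u] = true;
        total += top[1];
        for (int[] e : adj[u])
          if (!inTree[e[0]])
            pq.add(new int[]{e[0], e[1]}); // candidate edge to non-tree node
      }
      return total;
    }
  }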

Click here to see the algorithm integrated in a running Java program. In this program the priority queue is implemented using a binary heap. The heap has been implemented using arrays. The array key[] contains the keys and the array ind[] contains the indices of the elements in the heap. The array pos[] gives the position of an element with specified index in the heap. So, for each element i represented in the heap, ind[pos[i]] == i. Here we exploit that the indices of the entries in the priority queue are from a finite domain, so we can use direct addressing. In general one should use a dictionary ADT for this, for example based on hashing. In the given implementation deletemin and decreasekey take O(log n) time; most other operations take constant time. So, the overall running time is O((n + m) * log n). However, in practice the decreasekeys will tend to cost much less than m * log n, because not all edges result in a decreasekey and because nodes do not move every time from the bottom of the heap to its top. The file also contains two alternative MST algorithms. They lead to fewer percolate operations, and for sparse graphs, for example graphs with m = n, this reduced number of percolates even leads to a reduced time consumption.
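
As an illustration of the invariant ind[pos[i]] == i, a decreasekey on such an indexed heap could look as follows. This is a minimal sketch, assuming 1-based storage of the heap in ind[]; the code in the actual program may differ in details.

  void decreasekey(int i, int newKey) // i is an element index, newKey <= key[i]
  {
    key[i] = newKey;
    int p = pos[i]; // current heap position of element i
    while (p > 1 && key[ind[p / 2]] > newKey)
    {
      ind[p] = ind[p / 2]; // move the parent element one level down
      pos[ind[p]] = p;     // and repair its position entry
      p = p / 2;
    }
    ind[p] = i; // settle i: ind[pos[i]] == i holds again
    pos[i] = p;
  }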

Of course, in many practically important cases the time for Kruskal's algorithm can be reduced to either O(n * log n + alpha(n, m) * m) or even to O(alpha(n, m) * m). Here O(alpha(n, m) * m) is the time for performing m find operations on a structure of size n; already for m a little larger than n, alpha(n, m) is constant. The first reduction happens when the trick of starting with a small subset of the edges is successful, the second when the weights of the edges are polynomial in n, so that the sorting can be performed in linear time.

The proof of correctness for Prim's algorithm goes analogously to that of Kruskal's algorithm. Assuming that up to a certain point in time the partial solution is promising, we can concentrate on the first mistake. So, assume that until step t the constructed partial solution S is promising and that adding the edge e in step t leads to a non-promising partial solution S + e. Let F be the minimum spanning tree containing S. F + e contains a cycle. Let e' be another edge on this cycle crossing the cut between the nodes reached by S and the remaining nodes. e' has weight at least as large as e, so F' = F - e' + e is not heavier than F and must therefore also be a minimum spanning tree. But F' contains S + e, in contradiction with the assumption that S + e is not promising.

Boruvka's Algorithm

Both Kruskal's and Prim's algorithm add one edge at a time. However, many more edges can be added at once without risk of losing optimality. This idea is used in an algorithm known as Boruvka's algorithm; it works in rounds and is particularly well suited to parallel computation. For a graph G, in every round the following steps are performed:
  1. Each node u traverses its adjacency list and determines the lightest edge (u, v). It sets next[u] = v. This induces a directed graph G' with n nodes and edges (u, next[u]).
  2. The cycles of length 2 in G' are resolved: each node u with next[next[u]] == u and u < next[u] sets next[u] = u. All edges (u, next[u]) with next[u] != u are added to the set S of edges of the minimum spanning tree.
  3. Each node u determines the root root[u] of the tree in G' to which it belongs.
  4. G is transformed and reduced: any edge (u, v) is replaced by (root[u], root[v]) (but it should be remembered from which actual edge this edge stems). All nodes u with root[u] != u are removed. From the new adjacency lists self-loops are removed, and of a set of parallel edges only the lightest is kept.
To understand how the algorithm works, we should consider the structure of the graph G'. Because G is undirected, u is among the possible choices for next[v], where v = next[u]. This implies that the edge (v, next[v]) is certainly not heavier than the edge (u, next[u]). In other words, along any path in G' the weights of the edges are weakly decreasing. If we assume that the weights are all different (ties can always be broken by using the indices of the nodes as a secondary criterion), the weights along a path must be strictly decreasing until we reach a cycle of length 2. Thus, G' is a directed forest, with cycles of length 2 at the roots.
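
The following minimal sketch implements the rounds in a simplified way: instead of physically contracting the graph as described above, it keeps the original edge list and tracks the grown trees with a union-find structure. Edges are given as triples {u, v, weight}, and the weights are assumed to be pairwise different, as discussed above.

  import java.util.*;

  class BoruvkaSketch
  {
    static int[] parent;

    static int find(int x) // root of the tree containing x, with path compression
    {
      return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    static long boruvka(int n, int[][] edges)
    {
      parent = new int[n];
      for (int i = 0; i < n; i++)
        parent[i] = i;
      long total = 0;
      boolean progress = true;
      while (progress) // one iteration = one round
      {
        progress = false;
        int[] best = new int[n]; // lightest incident edge per component
        Arrays.fill(best, -1);
        for (int i = 0; i < edges.length; i++)
        {
          int ru = find(edges[i][0]), rv = find(edges[i][1]);
          if (ru == rv) continue; // a self-loop after contraction
          if (best[ru] == -1 || edges[i][2] < edges[best[ru]][2]) best[ru] = i;
          if (best[rv] == -1 || edges[i][2] < edges[best[rv]][2]) best[rv] = i;
        }
        for (int r = 0; r < n; r++)
        {
          if (best[r] == -1) continue;
          int[] e = edges[best[r]];
          if (find(e[0]) == find(e[1])) continue; // already merged this round
          parent[find(e[0])] = find(e[1]); // add e to the spanning tree
          total += e[2];
          progress = true;
        }
      }
      return total; // total weight of the minimum spanning forest
    }
  }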

Boruvka's MST Algorithm

Because each node chooses an edge, and because each edge is chosen at most twice, every tree of G' contains at least two nodes. Thus, at most n / 2 nodes survive a round, which means that after at most log n rounds only one node remains.

The correctness of Boruvka's algorithm can be proven with an extension of the argument of Prim's algorithm. The running time is more interesting. The number of nodes gets strongly reduced in every round, but this is not necessarily true for the number of edges. Each round can be performed in O(m) time, so the whole algorithm takes O(m * log n) time.

The worst-case time consumption is not better than that of the above algorithms, but the method is nevertheless important. The most important aspect is that this algorithm can be parallelized quite well. Observing that a graph with n nodes can never have more than n * (n - 1) / 2 edges, it also follows that for very dense graphs the number of edges does decrease along with the number of nodes. This bounds the complexity by O(n^2).

More generally, for a graph with m = n^2 / 4^k edges, the time can be bounded by O(log (n^2 / m) * m). We prove this. Denote by n_t the number of nodes after round t; m_t is defined analogously. n_0 = n, m_0 = m. We know that n_t <= n / 2^t and that m_t <= min{m, n_t^2}. Thus, using k = log_4 (n^2 / m), we get

  T_total  = sum_{t >= 0} (a * n_t + b * m_t)
          <= sum_{t >= 0} (a * n / 2^t + b * min{m, n^2 / 4^t})
           = O(n) + b * (sum_{t = 0}^k m + sum_{t > k} m / 4^{t - k})
           = O(n) + O(k * m) + O(m).

Depending on the graph structure, Boruvka's algorithm may in practice be even better. As an example we consider the problem of computing a minimum spanning tree of a planar graph. In planar graphs the nodes can have arbitrary degree (a star graph with one central node and n - 1 neighbors is planar), but, in planar graphs without self-loops and parallel edges, the total number of edges is linear in n (it is at most 3 * n - 6). Fusing adjacent nodes does not impair the planarity of the graph. So, for planar graphs we may assume that at all times m_t = O(n_t), and thus the whole algorithm runs in O(n) time. This holds independently of the weights or the details of the structure. Neither Kruskal's nor Prim's algorithm achieves this.

Shortest Paths

The shortest path problem has several variants. Here we consider the following two:
  1. The single-source shortest-path (SSSP) problem: given a starting node s, compute the distance from s to all other nodes.
  2. The all-pairs shortest-path (APSP) problem: compute the distance between every pair of nodes.

In the unweighted case the SSSP problem can be solved using breadth-first search in O(n + m) time. In the weighted case with all edge weights positive, Dijkstra's algorithm is the standard solution, requiring O(log n * (n + m)) time when using binary heaps and O(n * log n + m) when using Fibonacci heaps.

Bellman-Ford Algorithm

If there are negative weights, then Dijkstra's algorithm need not be correct. The problem is that one cannot be sure that the currently closest node to s has reached its final distance. The problem cannot be overcome by determining the largest negative value and adding its absolute value to all edge weights: this penalizes paths with more edges and may give a different solution.

Negative Edge Weights

Fortunately, there are several simple solutions. Unfortunately, these algorithms are considerably slower. One is known under the name Floyd-Warshall algorithm and has running time O(n^3). Because of its simplicity and the very small constants hidden in the O(), this algorithm is nevertheless rather practical. It actually solves the all-pairs shortest-path problem; if that is what one needs, it is a good option. Another algorithm, much closer to Dijkstra's, is called the Bellman-Ford algorithm. The only data structure it needs is a queue (not a priority queue). It solves the SSSP problem in O(n * m) time. For sparse graphs this is quite acceptable; for dense graphs Floyd-Warshall may be better.
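
For completeness, a minimal sketch of the Floyd-Warshall algorithm (it is treated in its own section below): it assumes that dist[][] is initialized with the direct edge weights, with a large value INF where there is no edge and 0 on the diagonal, and that there are no negative-cost cycles.

  void floydWarshall(int n, long[][] dist, long INF)
  {
    for (int k = 0; k < n; k++) // allow k as an intermediate node
      for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
          if (dist[i][k] < INF && dist[k][j] < INF
              && dist[i][k] + dist[k][j] < dist[i][j])
            dist[i][j] = dist[i][k] + dist[k][j];
  }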

In general, the notion of shortest paths is not well-defined when there are so-called negative-cost cycles, cycles on which the sum of all costs is negative (or zero). One must either assume that no such cycles exist (that is what we will do in the following), or one must have a mechanism for detecting them.

  // Bellman-Ford: single-source shortest paths with negative edge
  // weights, assuming that no negative-cost cycle is reachable from s.
  void negativelyWeightedSSSP(int s, int[] dist)
  {
    for (int v = 0; v < n; v++)
      dist[v] = INFINITY;
    Queue q = new Queue(n); // FIFO queue supporting a membership test
    dist[s] = 0;
    q.enqueue(s);
    while (q.notEmpty())
    {
      int v = q.dequeue();
      for (int w : neighbors(v)) // scan the adjacency list of v
        if (dist[w] > dist[v] + weight[v][w]) // shorter path found
        {
          dist[w] = dist[v] + weight[v][w];
          if (!q.isInQueue(w)) // w is in the queue at most once at a time
            q.enqueue(w);
        }
    }
  }

It is essential to test that an element is not already in the queue before enqueuing it; otherwise the capacity of the queue may be exceeded. This algorithm is really very simple, but also quite dumb: all nodes in the queue are dequeued, processed and possibly enqueued again and again. Nevertheless, there are no substantially better algorithms.

Does the algorithm terminate at all? Yes, if there are no negative-cost cycles. How long does it take to terminate? For the analysis we need to introduce the concept of a round: round 0 consists of the processing of s; round r + 1 consists of the processing of all nodes that were enqueued during round r.

The same notion of round is useful in the unweighted case: in that case round r consists of processing all nodes at distance r from s. In the case of weighted graphs processed with the Bellman-Ford algorithm, the time analysis is based on the following result:

Lemma: For any node v whose shortest path from s goes over r edges, dist[v] has reached its final value at the end of round r.

Proof: For any node v which lies on a negative-cost cycle, there is no shortest path consisting of a finite number of edges, so for these v the claim is void. So, in the following we only consider the subgraph consisting of all nodes not lying on a negative-cost cycle. If this subgraph is empty, there is nothing to prove. Otherwise, we proceed by induction. For r = 0 the claim trivially holds: s has reached its final distance. Now assume the claim holds for r - 1. Consider a node w whose shortest path from s goes over r edges. Let v be the last node on this path before w. v has reached its final distance in some round r' <= r - 1. When v got its final distance, it was enqueued one last time, and in round r' + 1 <= r the algorithm sets dist[w] = dist[v] + weight[v][w], which is the correct value. End.

Corollary: If there are no negative-cost cycles, then at most n - r + 1 nodes are processed in round r.

Proof: If there are no negative-cost cycles, then for every r' <= r - 2 there is at least one node whose shortest path from s consists of exactly r' edges. These at least r - 1 nodes reach their final dist-value no later than round r - 2, and thus are not inserted in the queue anymore in round r - 1 or later. At worst the other n - r + 1 nodes may appear in the queue processed in round r. End.

Theorem: The Bellman-Ford algorithm has running time O(n * m).

Proof: The corollary implies that there are at most n rounds in each of which at most m edges must be processed. End.

The theorem is sharp in the sense that there are inputs for which the time consumption is indeed Omega(n * m); this can even happen for dense graphs. So, in theory the Bellman-Ford algorithm may take up to Omega(n^3) time and is therefore in the worst case not better than the Floyd-Warshall algorithm. However, this view is too pessimistic: in practice there will often be fewer rounds, and for those graphs that require many rounds, typically not all edges have to be processed every time.

With the BFS-based algorithm and Dijkstra's algorithm, it is at all times explicitly known which nodes have reached their final distance values. The Bellman-Ford algorithm is different: at the end the distances are correct, but during the algorithm it is not yet known which nodes have reached their final values: a node that does not appear in the queue may reappear later on.

Click here to see the Bellman-Ford algorithm integrated in a working Java program. In this implementation there is no test for negative-cost cycles. So, it may happen that the program does not terminate!

In the following picture the operation of the Bellman-Ford algorithm is illustrated. Because the order in which the neighbors of a node are considered is not part of the specification of the algorithm, there are various correct possibilities; the picture shows one of them. The marked nodes are those that are not in the queue at the beginning of the stage and will not appear in the queue anymore.

Bellman-Ford Algorithm

Floyd-Warshall Algorithm

Text to appear in due time.

Recursive APSP Algorithm

Text to appear in due time.

Euler Tours

An Euler tour of a graph G = (V, E) is a cycle (a path starting and ending in the same node) visiting all edges in E exactly once. An Euler path is a path visiting all edges in E exactly once. The following fact is well-known:

Fact: A connected graph has an Euler tour if and only if all nodes have even degree. A connected graph has an Euler path if and only if exactly two nodes have odd degree.

The positive side of this claim will follow from the construction hereafter; the negative side is not hard to prove. First we reduce the Euler path problem to the Euler tour problem: if v_1 and v_2 are the two nodes with odd degree, then we can add a single extra edge (v_1, v_2). Now all nodes have even degree. Construct a tour for this graph, and rotate it so that it starts in v_2, first traversing the added edge to v_1 (if the tour traverses this edge in the other direction, the tour can be reversed). This tour finishes in v_2 again. Omitting the first edge gives a path running from v_1 to v_2.

How to construct a tour? The algorithm is simple:
  1. Starting from an arbitrary node, walk along unused edges, marking every traversed edge as used, until no unused edge leads onwards. This walk can only get stuck at its starting node, so it yields a cycle.
  2. As long as there are unused edges left, pick a node u on the current tour with an unused edge, construct a further cycle from u in the same way, and splice this cycle into the tour at u.

The algorithm is correct because of the following observations:
  1. Because all nodes have even degree, a node that a walk enters always has an unused edge for leaving it; so a walk can only get stuck at its starting node and therefore yields a cycle.
  2. As long as unused edges remain, the connectivity of the graph guarantees that some node on the current tour is incident upon one of them.
  3. Splicing two edge-disjoint cycles that share a node gives again a closed walk traversing each of its edges exactly once.

The whole algorithm takes O(n + m) time, which is surprisingly little for an apparently complex problem. There is an alternative algorithm, a modified DFS, which constructs the tour without first explicitly constructing cycles. Looking into that algorithm it becomes clear that it is essentially doing the same; the efficiency of both algorithms is comparable. The given construction has the advantage of being easier to understand. Furthermore, in applications it is often not required to construct a single tour. In that case the simple algorithm based on "walk along the edges until you get stuck" is good enough and simpler.

Constructing an Euler Tour
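
A minimal sketch of this cycle-splicing construction, under the following representation assumptions: each undirected edge e is stored as two directed arcs 2 * e and 2 * e + 1, adj[u] lists the arcs leaving u, head[a] is the node arc a points to, and the graph is connected with all degrees even.

  import java.util.*;

  class EulerTourSketch
  {
    static List<Integer> eulerTour(int n, List<Integer>[] adj, int[] head)
    {
      int[] next = new int[n]; // next unscanned position in adj[u]
      boolean[] used = new boolean[head.length / 2];
      Deque<Integer> stack = new ArrayDeque<>();
      List<Integer> tour = new ArrayList<>();
      stack.push(0); // start anywhere, for example at node 0
      while (!stack.isEmpty())
      {
        int u = stack.peek();
        while (next[u] < adj[u].size() && used[adj[u].get(next[u]) / 2])
          next[u]++; // skip edges already traversed from the other side
        if (next[u] == adj[u].size())
          tour.add(stack.pop()); // u is stuck: it closes a cycle, splice it in
        else
        {
          int a = adj[u].get(next[u]++);
          used[a / 2] = true; // each edge is traversed exactly once
          stack.push(head[a]); // walk along the unused edge
        }
      }
      return tour; // the m + 1 nodes of the tour
    }
  }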

Edge Coloring Bipartite Graphs

What is finding Euler tours good for, except for drawing nice pictures? In the first place, an Euler tour models a kind of road cleaning problem: starting from their main station, a squad of road cleaners must traverse all streets of a village. Driving over already cleaned streets means a waste of time. An Euler tour gives a possible optimal route for the cleaning squad.

Another important application is as a subroutine in a slightly more advanced problem. Consider a group of friends who have rented a fitness center for two hours. There are n_1 friends and n_2 training machines. Each of them wants to train for 30 minutes on 4 (not necessarily different) machines. Two questions arise:
  1. Does a schedule exist at all in which everyone performs his four training sessions within the two hours?
  2. If such a schedule exists, how can it be computed?

There is a big difference between these questions. The first is a question about existence. So far we have not been confronted with existence questions: clearly the smallest element of a set can be selected, clearly a sorted order can be constructed, clearly membership can be tested, and elementary algorithms are obvious. For our problem it is not a priori clear that there exists any solution. Maybe we are just asking too much.

The second question is of a different nature. Here we ask about computability. Proving existence might for example go by contradiction and does not need to be constructive. Clearly any construction implies existence, but the other implication does not hold, and there are many problems for which we know that a solution exists, but for which so far no one could give an algorithm with acceptable (= less than exponential) running time.

Fortunately, in the case of the fitness center, there is a surprisingly simple algorithm. It is based on our accumulated knowledge. If there are more than four persons wanting to use the same machine, then clearly they cannot be scheduled in 4 time slots. So, in that case the answer to the first question is negative. Therefore, in the remainder of the discussion, we assume that there are at most four persons wanting to use any of the machines.

We first abstract the problem, reformulating it as a problem for graphs. There is a node for each of the friends and a node for each of the machines. There is an edge for each of the wishes. That is, if person i wants to use machine j, there is an edge (i, j). This graph is bipartite: all edges run between the subset V_1 of n_1 nodes corresponding to the friends and the subset V_2 of n_2 nodes corresponding to the machines.

This graph has degree at most 4 (each person wants to train on four machines, each machine is selected at most four times). If we succeed in allocating a number from {0, 1, 2, 3} to every edge so that at no node of V_1 or V_2 two edges have the same number, we are done. An assignment of a value x to edge (i, j) can namely be viewed as assigning person i to machine j in time slot x. The condition that every node of V_1 and V_2 gets each number at most once means that a person is not scheduled on more than one machine at a time and that a machine is not allocated to more than one person at a time. Assigning values from {0, 1, 2, 3} is equivalent to coloring the edges with four different colors.

An edge coloring of a graph is an assignment of numbers to the edges so that no two edges incident upon a node have the same number. In general it is very hard to compute a coloring using a minimum number of colors, but for bipartite graphs this is much easier. In particular, it is always possible to construct a coloring with d colors, where d is the maximum degree of any node. This is a consequence of Hall's (also called König's) theorem, which states that in a bipartite graph there is a matching which matches all nodes of maximum degree. This provides the step in an inductive proof: a coloring can certainly be found by repeatedly constructing such a matching. Surprisingly, coloring bipartite graphs can be performed much more efficiently than this. A single matching in a bipartite graph with n nodes and m edges takes O(sqrt(n) * m) time. For a regular bipartite graph of degree g, m = g * n, so repeated matching takes O(sqrt(n) * g * m) time. A coloring can be constructed in just O(log g * m) time.

Consider a bipartite graph with node sets V_1 and V_2, each with n nodes. Assume that the graph is regular of degree g. A first idea for constructing a coloring is to start allocating the colors to the first node of V_1, then the second and so on. When assigning the colors of node i, we should respect the conditions imposed by earlier assigned colors. If one is lucky this may work, but in general this approach will get stuck when we must assign a color c to an edge (i, j) while node j has an edge (i', j) for some i' < i, which was already assigned color c while coloring node i'. Another "greedy" strategy may also work, but not always. The idea is that one tries to assign the colors one by one. The algorithm gets stuck when, while assigning color c, one comes to a node i for which all the uncolored edges lead to nodes which already have an edge that was given color c before.

A working and efficient strategy is based on Euler splittings. A g-regular graph G with g even can easily be split into two edge-disjoint subgraphs G_0 and G_1, each of degree g / 2. This is done by constructing an Euler tour of the graph and numbering the edges along the tour alternatingly 0 and 1. All edges numbered 0 belong to G_0, all other edges to G_1. Because an Euler tour of a graph with n nodes and m edges can be constructed in O(n + m) time, this splitting costs O(g * n) time.
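
A sketch of a single splitting, given an Euler tour as the sequence of traversed edge numbers (for example produced per connected component by the construction sketched earlier): the edges are assigned alternately to G_0 and G_1. In a bipartite graph every such tour has even length, so at every node half of the incident edges end up in each subgraph.

  static int[] eulerSplit(int m, List<Integer> tourEdges)
  {
    int[] side = new int[m]; // side[e] == 0: edge e goes to G_0, else to G_1
    int bit = 0;
    for (int e : tourEdges)
    {
      side[e] = bit; // consecutive tour edges alternate between G_0 and G_1
      bit = 1 - bit;
    }
    return side;
  }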

In general, for a graph of degree g = 2^k, the algorithm consists of k rounds. In round i, 0 <= i < k, the algorithm is working on 2^i subgraphs in which all nodes have degree 2^{k - i}. Each of the 2^i operations in round i takes O(2^{k - i} * n) time, so the total amount of work in any of the rounds is O(2^k * n) = O(g * n) = O(m). Thus, in total over all rounds the algorithm takes O(k * m) = O(log g * m) time. For g which are not powers of 2, the construction of such colorings is considerably harder. After much research it has been established (by Cole, Ost and Schirra in Combinatorica, Vol. 21, 2001) that even the general case can be solved in O(log g * m) time.

Coloring Regular Bipartite Graphs

Unweighted Bipartite Matching

Definitions and Basic Approaches

We are considering an undirected unweighted graph G = (V, E) with n nodes and m edges. A matching M of G is a subset of the edges with the property that no two edges of M share an endpoint. Edges in M are said to be matched; the other edges are said to be unmatched or free. Nodes that are the endpoint of a matched edge are said to be matched; the other nodes are said to be exposed. In the unweighted matching problem, the goal is to find for a given graph a matching which maximizes the number of matched nodes.

The greedy algorithm, picking the edges in any order and adding them whenever possible, is not optimal. For example, for a path of four nodes connected by three edges, the greedy algorithm may start by picking the middle edge, blocking both other edges.

If for an edge e, u(e) denotes one endpoint of the edge and v(e) the other endpoint, then the matching problem can also be formulated as an ILP. The problem is to define a function f mapping the edges to numbers so that sum_e f(e) is maximized under the following conditions:

sum_{e | u(e) = w or v(e) = w} f(e) <= 1, for all w in V
f(e) = 0 or 1, for all e in E

First constructing an optimum solution to the corresponding LP (with the integrality constraint relaxed to 0 <= f(e) <= 1) and then rounding to integral values is much better than the greedy algorithm, but it does not necessarily give an optimum solution, as is to be shown in one of the exercises.

Augmenting Paths

A path p = [u_1, u_2, ..., u_k] is alternating when the edges [u_1, u_2], [u_3, u_4], ... are free, while [u_2, u_3], [u_4, u_5], ... are matched. p is called augmenting when in addition both u_1 and u_k are exposed.

Lemma: An augmenting path with respect to a matching M is simple.

Proof: Assume that the path is not simple. Then either the path traverses some inner node i more than once, or it finishes in a node i that was traversed before. The first would imply that i is incident upon two or more matched edges, which is in contradiction with M being a matching. The second implies that the endpoint is incident upon at least one matched edge, which is in contradiction with the path being augmenting, which means among other things that the last node is exposed. End.

Lemma: Let P be the set of edges on an augmenting path p = [u_1, ..., u_{2 * k}] with respect to a matching M, then M' = M ^ P is a matching of cardinality |M| + 1. Here A ^ B denotes the symmetric difference of A and B: A ^ B = (A - B) + (B - A).

Proof: Check that M' is a matching by distinguishing three cases after assuming that two edges e and e' in M' = M ^ P are incident upon the same node w:
  1. e and e' both belong to M - P: impossible, because M is a matching.
  2. e and e' both belong to P - M: impossible, because on the simple path p the free edges never share an endpoint: consecutive edges alternate between free and matched.
  3. e belongs to M - P and e' to P - M: if w is an inner node of p, then the unique matched edge incident upon w lies on p, so e would belong to P; if w is an endpoint of p, then w is exposed and cannot be incident upon the matched edge e. Both cases give a contradiction.

For the number of edges, we notice that P has 2 * k - 1 edges. k - 1 of these are in M, the other k are free. So, |M'| = |M ^ P| = |M| + k - (k - 1) = |M| + 1. End.

Theorem: A matching M in a graph G is maximum if and only if there is no augmenting path in G with respect to M.

Proof: If there is an augmenting path, then the matching is not maximum because of the lemma. Now assume that M is not maximum, and let M' be a maximum matching. Consider the edges in M ^ M'. Because this is a subset of the union of two matchings, all nodes in the subgraph G' = (V, M ^ M') have degree at most 2; if the degree is 2, one of the incident edges is from M, the other from M'. A graph of maximum degree two is a collection of simple cycles and paths. On the cycles the edges of M and M' alternate, and therefore every cycle contains the same number of edges from M and M'. Because |M'| > |M|, there must be at least one path with more edges from M' than from M. On this path, too, the edges alternate, and therefore it begins and ends with an edge from M'. So, this is an augmenting path with respect to M. End.

This theorem gives a perfect characterization of the maximality of matchings in terms of augmenting paths. In the remainder we will consider how to find such paths: all presented matching algorithms are based on repeatedly finding augmenting paths.

Unweighted Bipartite Matching Algorithm

On bipartite graphs everything is easy: test all nodes on one side in order. For each node that is not yet matched at the time of testing, perform a BFS along alternating paths and test whether this brings us to any exposed node on the other side. A node that is matched once will always remain matched, and therefore n of these BFS operations are sufficient. Shortest paths are always simple, so the paths found can indeed be used for increasing the number of matched nodes. This takes O(n * (n + m)) time for a graph with n nodes and m edges.
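
A minimal sketch of this approach, with the search organized as a DFS instead of a BFS (the bound O(n * (n + m)) is the same): match[v] gives the V_1-node matched to the V_2-node v, or -1 if v is exposed.

  import java.util.*;

  class BipartiteMatchingSketch
  {
    List<Integer>[] adj; // adj[u]: the neighbors in V_2 of node u in V_1
    int[] match;         // for the nodes of V_2
    boolean[] visited;   // V_2-nodes seen in the current search

    boolean augment(int u) // is there an augmenting path starting in u?
    {
      for (int v : adj[u])
      {
        if (visited[v]) continue;
        visited[v] = true;
        // v is exposed, or its partner can be matched elsewhere
        if (match[v] == -1 || augment(match[v]))
        {
          match[v] = u;
          return true;
        }
      }
      return false;
    }

    int maximumMatching(int n1, int n2)
    {
      match = new int[n2];
      Arrays.fill(match, -1);
      int size = 0;
      for (int u = 0; u < n1; u++) // one search per node of V_1
      {
        visited = new boolean[n2];
        if (augment(u)) size++;
      }
      return size;
    }
  }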

This algorithm sounds extremely natural and one might believe that it holds without limitations. However, this is not true. On a general graph (that is, one that is not bipartite), the above theorem holds, but it is not true that all augmenting paths are found by performing a BFS. Consider the following graph:

Hard to Find Augmentation

So, why is the algorithm correct for bipartite graphs? Let the set V of nodes be composed of V_1 and V_2, so that all edges run between V_1 and V_2. Assume that we start the BFS searches from the nodes in V_1. Any pair of consecutive edges, an unmatched edge followed by the uniquely determined matched edge, can be replaced by a single edge in a reduced graph consisting only of the nodes of V_1. A target node is now any node of V_1 from which there is an edge to an unmatched node in V_2. A path in this reduced graph corresponds to an alternating path, and a path to a target node can be extended to an augmenting path. Most important, and it is here that we use that the graph is bipartite, is that any augmenting path starting from a node in V_1 is of this form.

The algorithm for the non-bipartite case is considerably more complex. The easiest implementation takes O(n^4) time.

Maximum Flow

Problem Definition

Network N = (s, t, V, E, b), where b gives the capacities of the edges. A flow f is an assignment of values f(i, j) to the edges so that
sum_j f(j, i) = sum_j f(i, j), for every i in V - {s, t}: flow conservation
0 <= f(i, j) <= b(i, j), for every edge (i, j): capacity constraints
The goal is to find a maximum flow, that is, an f for which the flow out of s, which equals the flow into t, is maximized:
sum_i f(s, i) = sum_i f(i, t)
If an edge (i, j) with capacity b(i, j) is carrying a flow f(i, j), then the amount b(i, j) - f(i, j) is called the residual capacity of the edge. An edge with residual capacity zero is said to be saturated.

A cut through a network is a division of the set of nodes into two subsets, V_s containing s and V_t containing t, which are mutually disjoint and together contain all nodes. The capacity of a cut is the sum of the capacities of all edges leading from V_s to V_t. Notice that we do not count, positively or negatively, edges running in the other direction. One of the most famous theorems in this domain, which we here give without proof, is the so-called maxflow-mincut theorem:

Theorem: The value of the maximum flow equals the capacity of the minimum cut.

This theorem does not immediately give an algorithm: there is no obvious way of finding the mincut directly. However, the theorem provides us with a means to argue. We will use it in our analysis of networks in which all edges have capacity one. An example of a flow network together with a maximum flow and the mincut is given in the following picture:

Maxflow = Mincut

Trivial Algorithms

The simplest idea is to repeatedly look for a path from s to t with positive residual capacity on all its edges, similar to the augmenting paths used for matching. Then the flows along this path are increased by the minimum residual capacity along the path. What does this give us? In general it does not give a maximum flow; it only gives a maximal flow, one that cannot be increased further in this way.

Applying this idea to the following example, in which all edges have capacity one, the algorithm may find the direct path from s to t first, transporting one flow unit. After that, no further such path can be found. So, we get stuck with a flow of one unit, whereas one can clearly transport two units.

Hard to Find Maxflow

Correct Algorithms

The correct idea is to always look for paths in the associated network N(f) = (s, t, V, E(f), a), where for each edge (i, j) in E, E(f) has edges (i, j) and (j, i), with capacities b(i, j) - f(i, j) for the first and f(i, j) for the second. If in the original graph there were already two edges (i, j) and (j, i), then the edges in N(f) should be taken together, summing their capacities, so that there are no multiple edges.

Without proof we state the following result (it is directly related to the maxflow-mincut theorem):

Theorem: A flow f is maximum, if and only if there is no path from s to t in N(f).

Even though the approach of repeatedly constructing N(f) and searching for paths from s to t in it eventually gives a maxflow, this is not a particularly efficient way to compute one. If, in the following figure, the edge (u, v) has capacity 1, while all other edges have capacity x, for some large number x, then we may find as paths an alternation of (s, u, v, t) and (s, ..., v, u, ..., t), increasing the flow by one unit with every operation. So, it takes 2 * x operations to find the maximum flow of 2 * x units. Because the encoding of the capacities takes only O(log x) bits, the running time is exponential in the size of the input.

Maxflow May Take Long

Polynomial-Time Algorithm

One may feel that the above exponential time only arises because we were taking the wrong paths. This is true: increasing only along shortest paths from s to t in N(f) guarantees polynomial time, but still not a particularly good one: O(n * m^2). We want better: O(n^3) is the goal. The presented algorithm achieving this is due to Karzanov (1974). It is an improvement of several earlier polynomial-time algorithms. Since then work has gone on, but no breathtaking new results were found, and all of these later algorithms are considerably more complex.

AN(f) is the so-called auxiliary network with respect to f. It is the subnetwork of N(f) containing precisely those edges of N(f) that lie on a shortest path from s to t. Because our algorithm only looks for shortest paths, this still allows us to find all interesting paths. At the same time this network is much simpler: it is layered. A network is layered if there are subsets V_0 = {s}, V_1, ..., V_{d-1}, V_d = {t}, so that all nodes of V lie in one of the V_i, and so that all edges run between two consecutive subsets.

Notice that N(f) can be constructed from N and f in O(n + m) time in a trivial way, and that AN(f) can be found from N(f) by two BFS operations (one from s and one, backwards, from t): an edge (i, j) lies on a shortest path from s to t if d_s(i) + 1 + d_t(j) = d_s(t).
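
A sketch of this construction: ds gives the BFS distances from s in N(f), dt the distances to t (computed by a BFS from t over the reversed edges), and inAN is the resulting edge filter.

  import java.util.*;

  class AuxiliaryNetworkSketch
  {
    static int[] bfs(int n, List<Integer>[] adj, int start)
    {
      int[] d = new int[n];
      Arrays.fill(d, Integer.MAX_VALUE); // MAX_VALUE marks "unreachable"
      ArrayDeque<Integer> q = new ArrayDeque<>();
      d[start] = 0;
      q.add(start);
      while (!q.isEmpty())
      {
        int u = q.poll();
        for (int v : adj[u])
          if (d[v] == Integer.MAX_VALUE)
          {
            d[v] = d[u] + 1;
            q.add(v);
          }
      }
      return d;
    }

    // Does the edge (i, j) of N(f) lie on a shortest path from s to t?
    static boolean inAN(int i, int j, int[] ds, int[] dt, int t)
    {
      return ds[i] != Integer.MAX_VALUE && dt[j] != Integer.MAX_VALUE
          && ds[i] + 1 + dt[j] == ds[t];
    }
  }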

It would be very wasteful to throw away AN(f) after finding a single path. It is a much better idea to first find a maximal flow f' in AN(f), and then to add f' to f. It can be shown that, when doing this, the distance from s to t in AN(f + f') is at least one larger than it was in AN(f). This immediately implies that we need at most n of these stages before we come to a network in which s and t are disconnected: then the search is over. So, the total time of an algorithm working along these lines is O(n * time per stage).

Karzanov's Algorithm

Constructing a Maximal Flow in a Layered Network

In the following we show how a maximal flow in AN(f) can be constructed in O(n^2) time, thus giving the desired O(n^3) time bound for the complete algorithm. The idea is not to find individual paths, but rather to push/pull a certain amount of flow from s to t along several paths at a time. The underlying reason is that finding an s-t path is (at least in theory) not cheaper than O(n + m); so, we can just as well do something more than finding one s-t path.

Define the throughput of a node as the minimum of the sum of the capacities of the ingoing edges and the sum of the capacities of the outgoing edges. In general it is not true that we can construct a flow so that throughput(i) is flowing through node i. But, if we start at the node i with the smallest throughput x of all nodes, then it is certainly possible to push x units of flow from i to t and to pull x units of flow from s to i: we will not get stuck on the way. Afterwards the throughput of node i is reduced to 0, and we may remove i together with all its incident edges from AN(f). This already shows that at most n of these push-and-pull operations are performed.

How to organize the push-and-pull operations? In a systematic way of course. We apply two rules:
  1. The pushing (pulling) proceeds level by level towards t (towards s), so that in one operation every node is visited at most once.
  2. When distributing flow at a node over its outgoing (ingoing) edges, the edges are filled one after the other, so that per node at most one edge is only partially filled.

After each push-and-pull operation the capacities of the edges in AN(f) over which flow is pushed or pulled are reduced and the throughputs of the involved nodes are updated. Then the minimum throughput is found again and the next push-and-pull operation is performed. Notice that we can afford O(n) overhead per operation, so we do not have to optimize the selection of the minimum and the like.

How expensive is the above algorithm? During a stage every edge gets saturated at most once; this gives O(m) in total for all operations of a stage. During every push-and-pull operation, each node is visited only once (because we are working level by level!), and except for possibly saturating many edges, the work performed for a node is constant. So, this gives O(n * n) for the at most n operations performed during a stage.

Theorem: Using the following ideas, maximum flows can be computed in O(n^3) time:
  1. Repeatedly construct the auxiliary network AN(f), augment f by a maximal flow f' in AN(f) and continue with f + f'; at most n of these stages are needed.
  2. In every stage, construct the maximal flow with push-and-pull operations in O(n^2) time.

Cases that can be Solved Faster

If we consider networks in which all edges have capacity one, so-called unit-capacity networks, then we can apply the same algorithm, but it will run faster. This we will not show. If, in addition, all nodes of the network have indegree or outdegree 1 (such networks are called simple in this context), then the algorithm runs even faster. This we will show.

Lemma: On a unit-capacity network a stage takes O(n + m) time.

Proof: Every edge is handled at most once, particularly there is no partial filling, so this takes O(m) time. When a node is visited, either the node is removed from the network or at least one of its incident edges is saturated. The former costs are bounded by O(n); the latter can be attributed to the first edge saturated and are thus bounded by O(m). End.

Lemma: On a simple unit-capacity network the distance d from s to t satisfies d <= n / maxflow + 1.

Proof: Divide the node set V into subsets V_i, 0 <= i <= d, consisting of all nodes at distance i from s. Notice that we work with distances in the original network N, not those in an associated network N(f). The whole flow of x units runs through each of the V_i (there are no bypassing edges, as the definition of the V_i is based on the distance of the nodes from s). So, because each node can transfer only one flow unit, each of the V_i must have size at least x. That is, there can be at most n / x of these subsets. End.

Theorem: On a simple unit-capacity network the maxflow algorithm requires at most O(n^{1/2} * m) time.

Proof: The time per stage is bounded by O(m) because of the above lemma. So, it suffices to show that the number of stages is bounded by O(n^{1/2}). We distinguish two cases:
  1. The value of the maximum flow is at most n^{1/2}.
  2. The value of the maximum flow is larger than n^{1/2}.

In the first case, the number of stages can be bounded by the value of the maxflow, because in every stage the flow is increased by at least one unit. In the second case, we use the last lemma, but not as easily as one might believe. Let x be the value of the maximum flow. Consider the first stage of the algorithm in which the value of the flow exceeds x - n^{1/2}; let this be stage y. Hereafter at most n^{1/2} further stages increase the flow to x. Before stage y, the value of the maximum flow through the network N(f) was at least n^{1/2}. One can show that N(f) also has unit capacities and is simple when N is so. Thus, in these networks the distance from s to t cannot exceed n / n^{1/2} + 1 = n^{1/2} + 1, and because this distance increases with every performed stage, we must have y <= n^{1/2} + 1. End.

Faster Bipartite Matching

Now that we know how to find maximum flows, we can use this knowledge to solve other problems. There are many problems that somehow can be formulated as a maximum flow problem, by choosing the right capacities, adding some edges, etc. This is also the case for the bipartite matching problem. And, surprisingly, this leads to a more efficient solution of the problem than the special purpose algorithm we developed before.

Consider a bipartite graph with node sets V_1 and V_2. This is turned into a network as follows: we add a source s, from which there is an edge to every node in V_1, and a sink t, to which there is an edge from every node in V_2. The edges of the graph are directed from the nodes in V_1 to those in V_2, and all edges get capacity one.
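
A sketch of this construction: the nodes of V_1 are numbered 0 .. n1 - 1, the nodes of V_2 are numbered n1 .. n1 + n2 - 1, the source and sink get the numbers n1 + n2 and n1 + n2 + 1, and every arc gets capacity one.

  import java.util.*;

  class MatchingNetworkSketch
  {
    static List<int[]> matchingNetwork(int n1, int n2, List<int[]> graphEdges)
    {
      int s = n1 + n2, t = n1 + n2 + 1;
      List<int[]> arcs = new ArrayList<>(); // arcs {from, to}, capacity one
      for (int u = 0; u < n1; u++)
        arcs.add(new int[]{s, u}); // from the source to all of V_1
      for (int[] e : graphEdges) // e = {u in V_1, v in V_2}
        arcs.add(new int[]{e[0], n1 + e[1]}); // directed from V_1 to V_2
      for (int v = 0; v < n2; v++)
        arcs.add(new int[]{n1 + v, t}); // from all of V_2 to the sink
      return arcs;
    }
  }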

An integer flow from s to t in this network corresponds to a matching: every node in V_1 transfers at most one unit of flow, and hence, because the flow is integral, it has at most one outgoing edge carrying flow (that is why we imposed that the flow should be integral). The same argument shows that also on the other side the matching property is satisfied. A maximum flow corresponds to a maximum matching: a larger matching would show how to pump more flow, a larger flow would show how to construct a larger matching.

In general, as shown above, maximum flows can be computed in O(n^3) time, but in this case our network is simple (in the above defined sense) and has unit capacities, thus the maximum flow can be found in O(n^{1/2} * m) time, which is much faster than O(n^3), particularly for somewhat sparser graphs. The algorithm never fractions flow unnecessarily, and therefore the resulting flow is integral. More formally this can be shown by induction: initially the flow is integral (namely equal to 0) and all capacities are integral. Thus, all throughputs are integral, and thus all edges get integral flows (because their residual capacities are integral). So, the next flow is integral again.

Using Maxflow for Bipartite Matching

Weighted Bipartite Matching

Problem and Context

Matching for non-bipartite graphs is substantially harder than bipartite matching. Another complication is to consider matching for weighted graphs. The hardest combination, but still solvable in polynomial time, is non-bipartite weighted matching. We will limit ourselves to bipartite graphs, of which we now consider the weighted case.

Consider a weighted graph, G = (V, E), with weight w_{i, j} associated to the edge running from node i to node j. The goal is to find a matching so that the sum of the weights of the matched edges is maximized. Without loss of generality (though this might have negative influence on the running time), we may assume that the graph is complete: non-existing edges can be replaced by edges with weight zero; matching these has no impact on the sum of the weights, so any maximum matching in the original graph is also a maximum matching in the complete graph. For bipartite graphs, we can even assume, if this is needed in some argument, that the two subsets have the same cardinality: we can always add nodes to the smaller subset with only zero-weight edges to the other subset. The maximization problem we are considering can be replaced by a minimization problem: an edge with weight w_{i, j} is replaced by one with "cost" c_{i, j} = W - w_{i, j}, where W is the maximum over all w_{i, j}. In this form the problem becomes the same as the earlier considered assignment problem.

The problem now has a simple formulation as a linear program. We must have

sum_{j = 1}^n x_{i, j} = 1, for all i,
sum_{i = 1}^n x_{i, j} = 1, for all j,
x_{i, j} >= 0, for all i and j.
and the goal is to minimize
C = sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j}.

Notice that there is no way to impose within a linear program that we would like to have integral values of the x_{i, j}; and non-integral feasible solutions exist. Examples are easy to construct: with 2 nodes on each side and c_{a, 1} = 3, c_{b, 2} = 5, c_{a, 2} = c_{b, 1} = 4, setting all x_{i, j} = 1/2 is feasible and non-integral, and its cost (3 + 5 + 4 + 4) / 2 = 8 equals that of both integral matchings. However, fortunately, these non-integral solutions never lead to a better value of C than can be reached by integral solutions (non-integral solutions are always convex combinations of integral solutions). A chic way of saying this is to say "all basic feasible solutions are integral".

Bipartite Graph with Non-Integral Maximum Matching

Hungarian Algorithm

The following algorithm is known as the "Hungarian" method and goes back to Kuhn (1955). Later this algorithm was generalized to the "primal-dual" method, a powerful method used, among other things, to turn solving weighted problems into repeatedly solving unweighted problems.

Again we are performing searches for augmenting paths, but this time we do not search through all edges from the start. We start with a small graph, having only few admissible edges. When we get stuck on this graph, some further edges (at least one) are made admissible, and the process is repeated until all nodes are matched. We will certainly find a perfect matching this way. The main question is how to add the edges in such a way that the result will be a cost minimizing matching.

The method will be presented in a way that does not immediately give rise to an efficient algorithm. With some modifications it can be turned into an algorithm running in O(n^3) time. The central lemma on which the whole method is based is the following:

Lemma: The optimal solution remains the same if a constant (possibly negative) is added to any row or column of the cost matrix.

Proof: Instead of the former values c_{i, j}, we now have cost values c'_{i, j} = c_{i, j} + p_i + q_j, where p_i gives the values added to all entries in row i, and q_j gives the values added to the entries in column j. The objective function (that is, the function to minimize) is now

C' = sum_{i = 1}^n sum_{j = 1}^n c'_{i, j} * x_{i, j}
= sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j} + sum_{i = 1}^n sum_{j = 1}^n p_i * x_{i, j} + sum_{i = 1}^n sum_{j = 1}^n q_j * x_{i, j}
= sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j} + sum_{i = 1}^n p_i + sum_{j = 1}^n q_j
So, this is larger than before by sum_{i = 1}^n p_i + sum_{j = 1}^n q_j, a value which does not depend on the choice of the x_{i, j}. End.

The method is explained using the following bipartite graph with five nodes on each side. This is the same graph as considered in the chapter on branch-and-bound algorithms.

Weighted Bipartite Graph

In a preparatory step, we first subtract in each row the minimum value of that row from all its entries, and thereafter in each column the minimum value of that column. For the example graph the cost matrix is modified as follows:

Preparatory Steps

Notice that these manipulations do not lead to any negative values, because the column minima are determined after subtracting the row minima. The preparatory step guarantees that afterwards there is at least one zero in each row and column. This means that in the zero-cost graph, the bipartite graph with an edge for each zero in the cost matrix, all nodes have degree at least one. If we are lucky the zero-cost graph has a perfect matching. A matching is said to be perfect if all nodes are matched; on a bipartite graph with n nodes on each side, this means that in total there are n matched edges. Because such a matching corresponds to a zero-cost matching of the graph corresponding to the corrected cost matrix, it is a minimum matching of this weighted graph (here we use that all values are still non-negative!). The above lemma states that it even gives a minimum matching of the original graph.
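
A minimal sketch of the preparatory step on the cost matrix:

  static void reduce(int[][] c)
  {
    int n = c.length;
    for (int i = 0; i < n; i++) // first the row minima
    {
      int min = c[i][0];
      for (int j = 1; j < n; j++) min = Math.min(min, c[i][j]);
      for (int j = 0; j < n; j++) c[i][j] -= min;
    }
    for (int j = 0; j < n; j++) // then the column minima
    {
      int min = c[0][j];
      for (int i = 1; i < n; i++) min = Math.min(min, c[i][j]);
      for (int i = 0; i < n; i++) c[i][j] -= min;
    }
  }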

If the zero-cost graph has no perfect matching we continue to modify the cost matrix until the resulting zero-cost graph has a perfect matching. These corrections are performed in a systematic way, based on the crossing-out lemma:

Lemma: The minimum cardinality of a set of lines (horizontal and vertical) covering all zeroes in an n x n matrix equals the size of a maximum cardinality matching in the zero-cost graph.

Proof: A matching in the zero-cost graph cannot have more edges than any set of lines covering all zeroes: each line can cover at most one matched edge, because two zeroes covered by the same line lie in one row or one column and therefore cannot both be matched. This shows that the cardinality of a maximum matching is at most as large as the cardinality of a minimum set of lines covering all zeroes. It is not so easy to see that the maximum of the first actually equals the minimum of the second; this proof is not given. End.

This lemma is one of many "min-max" lemmas, which are typical for primal-dual algorithms (the minimum solution of the dual algorithm corresponds to a maximum solution of the primal, and the values are equal in this optimum).

The following steps are repeated until the zero-cost graph has a perfect matching:

  1. Construct the zero-cost graph and compute a maximum cardinality matching.
  2. Construct the corresponding set of lines (horizontal and vertical) covering all zeroes in the current cost matrix.
  3. Determine the minimum (non-zero) value x among all uncovered positions in the current cost matrix; subtract x from all uncovered positions and add x to all positions covered by two lines.
The operations in the third step are the same as subtracting x from all uncovered rows and then adding x to all covered columns. The nice thing about this is that the position (i, j) which had value x before becomes a 0 now, while most existing zeroes remain zero and no negative values arise. Only zeroes that are on the intersection of a covered row and a covered column may get a positive value; but these were apparently not very interesting. The zero-cost graph Z' is a subgraph of the weighted graph G' corresponding to the corrected cost matrix M'. If Z' has a perfect matching, this means that the weighted matching problem for G' has a zero-cost solution. This is clearly optimal, because there are no negative edge weights. Thus, this gives an optimum solution of the assignment problem of M'. Because M' is obtained from the original cost matrix M by modifying the entries of rows and columns by constant amounts, the above lemma gives that this optimum solution for M' is also an optimum solution for M.
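
A minimal sketch of the modification of the third step, assuming that the chosen lines are given by boolean arrays rowCovered and colCovered:

  static void update(int[][] c, boolean[] rowCovered, boolean[] colCovered)
  {
    int n = c.length, x = Integer.MAX_VALUE;
    for (int i = 0; i < n; i++) // the minimum uncovered value
      for (int j = 0; j < n; j++)
        if (!rowCovered[i] && !colCovered[j]) x = Math.min(x, c[i][j]);
    for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
        if (!rowCovered[i] && !colCovered[j]) c[i][j] -= x; // uncovered
        else if (rowCovered[i] && colCovered[j]) c[i][j] += x; // covered twice
  }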

We continue the example: a maximum matching might be (a, 1), (b, 5), (d, 5) and (e, 2). Because the matching is not perfect we still have to continue. The minimum covering of the zeroes corresponding to the matching is given by row b, row d, row e and column 1. The smallest value x which is not covered is the 2 at position (a, 5). The values are changed according to the given rules by x. Due to the modifications a new zero is created. There is always at least one new zero. This additional zero is precisely what we need: the new zero-cost graph has a perfect matching: (a, 5), (b, 3), (c, 1), (d, 2) and (e, 4). In the original graph the cost of this matching is 26 + 25 + 21 + 27 + 50 = 149, the same value we found with the branch-and-bound algorithm.

Crossing Out

There is an important, but without further explanation somewhat obscure, rule which must be applied to assure that substantial progress is made. The crossing-out sometimes leaves freedom: some lines can just as well be drawn horizontally as vertically. Making the wrong choices, it may happen that the size of the matching of the zero-cost graph remains the same during many rounds; the result may be that the running time becomes exponential in the size of the input. The theory underlying the algorithm assures that, when the crossing-out is performed so that vertical lines are used only when required to cover all zeroes with the minimum number of lines, the size of the matching increases by at least one after each round until it reaches n.

In the following example, we start with a cost matrix for which the maximum matching of the corresponding zero-cost graph has cardinality 4, matching the edges (a, 3), (c, 1), (d, 5) and (e, 4). The lines that cover all zeroes in the cost matrix must be drawn through these positions, but there are three possible ways to do so: the lines through (a, 3) and (c, 1) can be drawn almost any way. Drawing the line through (a, 3) horizontally and the line through (c, 1) vertically, the minimum uncovered value is the 1 at position (c, 3). Modifying the matrix and proceeding similarly in the next round, gives back essentially the same matrix as the one with which we started: it takes x / 2 rounds before finding a larger matching. Drawing the lines through (a, 3) and (c, 1) both horizontally, such an increase is obtained after one round.

Importance of Crossing Out Systematically

Exercises

  1. Consider the heap-based program for computing minimum spanning trees. We know that in general the running time is bounded by O((n + m) * log n) due to the operations performed on the heap. Experimentally determine the running time as a function of n and m: test all values of n = 2^k, for k = 12, 14, ..., 20 and m = 4 * n, 16 * n, ..., 256 * n, as far as these problems can be solved in reasonable time on the available computer. Try to fit a function with constant, linear and linear times logarithmic terms through these results. How important is the term n * log m? Now replace the binary heap by a pairing heap. Repeat the above experiments and compare the performance.

  2. Using a binary heap in Prim's algorithm gives a worst-case running time of O((n + m) * log n). Construct a class of inputs for which running time Omega(n^2 * log n) is indeed needed. This shows that for dense graphs, in particular for complete graphs, a trivial array-based implementation of a priority queue, with deletemins in linear and decreasekeys in constant time, may be substantially better, because then the algorithm is guaranteed to run in O(m + n^2), which is O(n^2) for all graphs without multiple edges.

  3. The given implementation of the Bellman-Ford algorithm does not terminate if a negative-cost cycle arises. This is undesirable: even though in this case it makes no sense to compute the shortest paths, the procedure should terminate with a decent output. Ideally, the correct distances should be computed for all good nodes, nodes which are not reachable from s by a path leading through a negative-cost cycle, while the distance is set to some special value for the other nodes, which we here call bad nodes.
    1. How many times can a good node be enqueued at most?
    2. How frequently is a bad node enqueued? How many rounds may pass at most before a bad node is enqueued again?
    3. How many rounds must be performed before counting the number of enqueue operations allows us to distinguish good and bad nodes?
    4. After how many rounds has dist[v] certainly reached its final value?
    5. Suggest a minor modification to the program realizing the same worst-case running time correctly handling negative-cost cycles.
    6. A tremendous disadvantage of this idea is that now many rounds are performed, even though the shortest paths to all good nodes may contain only few edges. At the expense of O(n) extra storage and slightly higher costs per round, the algorithm can be modified such that even in the presence of negative cycles the algorithm is guaranteed to perform hardly more than the minimum number of rounds. Describe how.

  4. Consider the following strategy for finding a matching on an unweighted graph: first an optimum solution is constructed to the LP, then the maximum value f(e) is determined. f(e) is fixed at 1. For all other edges e' leading to u(e) and v(e), f(e') is fixed at 0. All edges for which the value of f has been fixed are removed from the graph, and the algorithm is repeated until there are no edges left. Give an example of a graph for which this algorithm does not lead to an optimum matching.

  5. Consider the simple unit-capacity network corresponding to the bipartite graph in the text above. Perform the computation of the maxflow showing the steps of Karzanov's algorithm, that is, by drawing the sequence of arising N(f) and AN(f) networks. The nodes on the left side are considered before the nodes on the right side. Nodes are considered from the highest to the lowest, and an edge (u, v) is considered before an edge (u, v') when v stands higher than v'.

  6. Consider the network in the following picture.

    Flow Network with 11 Nodes

    Indicate the mincut and determine the value of the maxflow. Repeatedly compute N(f) and AN(f) as was done above to find the maxflow. Draw the whole sequence of pictures.

  7. On a unit-capacity network a stage of Karzanov's algorithm can be implemented to run in O(n + m) time. For arbitrary capacities, the cost of a stage is bounded by O(n^2 + m). This time consumption has five contributions: computing all throughputs; at most n times selecting the node with minimum throughput; in each push-and-pull operation addressing each node at most once; in each push-and-pull operation partially saturating at most one edge per node; in total over all operations saturating each edge at most once.
    1. Among the listed contributions there are three that, with the suggested implementation, take O(n^2) time. Which three?
    2. For two of these contributions it is proven in the lemma that they are actually bounded by O(n + m) for a unit-capacity network. Notice that this is only a matter of analysis; it is not necessary to modify the algorithm. Which O(n^2) contribution does require a modified algorithm?
    3. Give a detailed description of how even this problematic contribution can be reduced to O(n + m).
    4. Does the given approach even work for networks with arbitrary capacities? If not, describe an improved implementation for the general case and specify its time consumption.

  8. An independent set I in an undirected graph G = (V, E) is a subset of V so that no two nodes of I are connected by an edge in E. Many graph algorithms are based on repeatedly removing an independent set. The performance of such algorithms depends on the size of the removed independent sets. Therefore, it is natural to consider the problem of finding maximum independent sets, the maximum-cardinality independent-set problem. In general this problem is NP-hard, implying that we have little hope of solving it in polynomial time. For such problems one may try heuristics and special cases, the topic of this exercise.

    For a graph G, an independent set can be constructed as follows: consider the nodes in order, adding a node u to I if none of the neighbors of u was added to I before. Give an example of a graph that shows that this algorithm can be arbitrarily bad in the sense that (asymptotically) the ratio between the size of the maximum independent set and the number of selected nodes can be arbitrarily large.

    A better idea is to slightly modify the above algorithm: repeatedly select a node u of currently minimal degree and add u to I; then remove u, all neighbors of u and their edges. Continue with these operations until no nodes are left. Work this idea out into an efficient algorithm. What is the time complexity of your algorithm?

    Give an example of a simple graph (a graph without self-loops and multiple edges) for which this algorithm does not find the maximum independent set. Hint: there is a graph with 20 nodes for which the algorithm selects 7 instead of 8 nodes. Possibly there are even smaller examples.

    Generalize the given example to show that even for the improved algorithm the ratio of the maximum possible number of nodes and the selected number is not bounded by a constant.

    Show that including a node u with degree one does not compromise the possibility of finding a maximum independent set. Hint: consider the optimum solution and distinguish two cases.

    Using the above observation give an algorithm to construct a maximum independent set in trees running in O(n) time.

  9. A generalization of the above considered maximum-cardinality independent-set problem is the maximum-weight independent-set problem: every node u has a weight w(u), and the task is to construct an independent set I for which sum_{u in I} w(u) is maximized. Clearly this problem is even harder to approximate. Therefore, we only consider it for trees.

    Denote the maximum weight of an independent set in a tree T by W(T). Consider a specific node s in T. s does not need to be the root of T. Express W(T) inductively, distinguishing the cases that s in I or that s is not in I.

    The inductive expression immediately leads to a recursive algorithm for computing W(T). Formulate the algorithm, leaving the choice of s unspecified. This is an example of a divide-and-conquer algorithm; the underlying idea is similar to that of quicksort.

    Let Z(n) be the time for running your algorithm on a tree with n nodes. Give a recurrence relation specifying the time complexity in terms of the sizes of the subproblems to solve. Which nodes are the worst choices for s? Show that, with bad choices for s, the time Z(n) may be exponential.

    Which nodes are the ideal choices for s? How much time does it take to find such a node? Now refine the algorithm always choosing these ideal nodes.

    Formulate the recurrence relation for the time consumption Z'(n) of the improved algorithm and estimate it as well as you can, in any case proving a polynomial time bound.

  10. Consider again the Hungarian algorithm for computing a maximum matching of a weighted bipartite graph. The crossing-out lemma is of great importance, because it allows us to efficiently find a minimum cardinality set of lines covering all zeroes by first solving a maximum cardinality matching problem on an unweighted bipartite graph. Here we encounter the central idea of primal-dual constructions: a weighted problem is reduced to repeatedly solving an unweighted problem. Nevertheless, the matching of the graph with edges corresponding to the zeroes in the cost matrix does not immediately give the set of lines we are looking for.
    1. What problem remains to be solved?
    2. Reformulate this problem as a well-known decision problem.
    3. The crossing-out lemma guarantees that this decision problem has a solution. How much time does it take to find it?
    4. It was mentioned that in order to assure polynomial running time the crossing-out must be performed so that columns are used only if necessary. This turns the decision problem into an optimization problem. Formulate the optimization problem and consider the time needed for solving it.

  11. Consider the time complexity of the Hungarian algorithm for computing a minimum-weight matching of a complete bipartite graph with n nodes on each side.
    1. How much time would a basic implementation take for each complete iteration of the main loop?
    2. Mention some ideas that may help to reduce the time consumption per iteration.
    3. Mention some ideas that may help to reduce the time consumption of an iteration when using the computations performed in the previous iteration.

  12. Consider the size s_t of the maximum cardinality matching of the zero-cost graph after t iterations of the main loop of the Hungarian algorithm for computing a minimum-weight matching of a complete bipartite graph with n nodes on each side.
    1. Give a cost matrix for which s_0 is particularly small.
    2. Estimate the expected value of s_0 for the case that the values in the cost matrix are chosen independently and uniformly from the interval {0, ..., M - 1}.

