The idea in Prim's algorithm is to grow the minimum spanning tree from a single node: Initially the tree is empty. The set of reached nodes consists of a single node s, which we will call the starting node. At any later time, the lightest edge leading from a tree node to a non-tree node is added.
This algorithm is similar, but not identical to Dijkstra's algorithm: it is not true that for any node v all nodes on the shortest path from s to v are in the spanning tree. Nevertheless, we can implement Prim's algorithm just like Dijkstra's algorithm. In this case the entry of a non-tree node v in the priority queue is not the length of the shortest discovered path from s to v, but the weight of the lightest edge leading from a tree node u to v.
This algorithm does not need a union-find data structure: for testing whether edges are potentially interesting, it is sufficient to maintain an array of bits marking the tree nodes. When using a priority queue, in the course of the algorithm at most n inserts, n deletemins and m decreasekey operations are performed. With an appropriate priority queue, these operations can be performed in O(n * log n + m) time. For graphs with m > n * log n, this is less than the time O(m * log n) of Kruskal's algorithm.
The algorithm can be integrated in a running Java program in which the priority queue is implemented using a binary heap, realized with arrays. The array key[] contains the keys, the array ind[] contains the indices of the elements in the heap, and the array pos[] gives the position of an element with a specified index in the heap. So, for each element i represented in the heap, ind[pos[i]] == i. Here we exploit that the indices of the entries in the priority queue come from a finite domain, so we can use direct addressing. In general one should use a dictionary ADT for this, for example based on hashing. In this implementation deletemin and decreasekey take O(log n) time; most other operations take constant time. So, the overall running time is O((n + m) * log n). In practice, however, the cost of the decreasekeys tends to be much smaller than m * log n, because not all edges result in a decreasekey, and because nodes do not move every time from the bottom of the heap to its top. There are also two alternative MST algorithms that lead to fewer percolate operations, and for sparse graphs, for example graphs with n = m, this reduced number of percolates even leads to reduced time consumption.
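The decreasekey-based heap described above can also be replaced by a simpler variant using java.util.PriorityQueue with lazy deletion: stale entries remain in the queue and are skipped when dequeued. The following is a minimal sketch under that assumption; the class and method names are ours, and the graph is assumed to be given as adjacency lists adj[v] of {neighbor, weight} pairs.

```java
import java.util.*;

// Minimal sketch of Prim's algorithm (assumed representation: adjacency
// lists adj[v] of {neighbor, weight} pairs). Uses java.util.PriorityQueue
// with lazy deletion instead of an explicit decreasekey operation.
class PrimSketch {
    // Returns the total MST weight, assuming the graph is connected.
    static int mstWeight(int n, int[][][] adj, int s) {
        boolean[] inTree = new boolean[n];       // bit array marking tree nodes
        int[] best = new int[n];                 // lightest known edge to each non-tree node
        Arrays.fill(best, Integer.MAX_VALUE);
        best[s] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
        pq.add(new int[]{s, 0});
        int total = 0;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int v = top[0];
            if (inTree[v]) continue;             // stale entry: lazy deletion
            inTree[v] = true;
            total += top[1];
            for (int[] e : adj[v]) {             // relax edges to non-tree nodes
                int w = e[0], wt = e[1];
                if (!inTree[w] && wt < best[w]) {
                    best[w] = wt;
                    pq.add(new int[]{w, wt});    // instead of decreasekey
                }
            }
        }
        return total;
    }
}
```

The lazy-deletion queue may hold up to m entries, so the bound becomes O(m * log n) rather than O(n * log n + m), but the code is considerably simpler than maintaining the pos[]/ind[] bookkeeping.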
In many practically important cases the time for Kruskal's algorithm can be reduced to either O(n * log n + alpha(n, m) * m) or even to O(alpha(n, m) * m). Here O(alpha(n, m) * m) is the time for performing m find operations on a structure of size n; already for m a little larger than n, alpha(n, m) is a constant. The first bound applies when the trick of starting with a small subset of the edges is successful; the second applies when the weights of the edges are polynomial in n, so that the sorting can be performed in linear time.
The proof of correctness for Prim's algorithm goes analogously to that of Kruskal's algorithm. Assuming that until a certain point of time the partial solution is promising, we can concentrate on the first mistake. So, assume that until step t the constructed partial solution S is promising and that adding the edge e in step t leads to a non-promising partial solution S + e. Let F be a minimum spanning tree containing S. F + e contains a cycle. Let e' be another edge on this cycle crossing the cut between the tree nodes and the remaining nodes. Because Prim's algorithm chose e as the lightest edge crossing this cut, e' has weight at least as large as e, so F' = F - e' + e is not heavier than F and must therefore also be a minimum spanning tree. But F' contains S + e, in contradiction with the assumption that S + e is not promising.
Because each node chooses an edge, and because each edge is chosen by at most two nodes, the number of different chosen edges is at least n / 2. Thus, at most n / 2 (super-)nodes survive a round, which means that after at most log n rounds only one node remains.
The correctness of Boruvka's algorithm can be proven with an extension of the argument of Prim's algorithm. The running time is more interesting. The number of nodes gets strongly reduced in every round, but this is not necessarily true for the number of edges. Each round can be performed in O(m) time, so the whole algorithm takes O(m * log n) time.
The worst-case time consumption is not better than that of the above algorithms, but this method is nevertheless important. The most important aspect is that this algorithm can be parallelized quite well. Observing that a graph with n nodes can never have more than n * (n - 1) / 2 edges, it also follows that for very dense graphs the number of edges does decrease along with the number of nodes. This bounds the complexity by O(n^2).
More generally, for a graph with m = n^2 / 4^k edges, the time can be bounded by O(log (n^2 / m) * m). We prove this. Denote by n_t the number of nodes after round t; m_t is defined analogously. n_0 = n, m_0 = m. We know that n_t <= n / 2^t and that m_t <= min{m, n_t^2}. Thus, using k = log_4 (n^2 / m), we get

T_total = sum_{t >= 0} (a * n_t + b * m_t) <= sum_{t >= 0} (a * n / 2^t + b * min{m, n^2 / 4^t}) = O(n) + b * (sum_{t = 0}^k m + sum_{t > k} m / 4^{t - k}) = O(n) + O(k * m) + O(m) = O(log (n^2 / m) * m + n).
Depending on the graph structure, Boruvka's algorithm may in practice be even better. As an example we consider the problem of computing a minimum spanning tree on a planar graph. On planar graphs the nodes can have arbitrary degree (a star graph with one central node and n - 1 neighbors is planar), but, on planar graphs without self-loops and parallel edges, the total number of edges is linear in n (it is at most 3 * n - 6). Fusing adjacent nodes does not impair the planarity of the graph. So, for planar graphs we may assume that at all times m_t = O(n_t), thus the whole algorithm runs in O(n) time. This holds independently of the weights or the details of the structure. Neither Kruskal's nor Prim's algorithm achieves this.
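For concreteness, one round-by-round realization of Boruvka's algorithm might look as follows. This is a sketch under the assumption of a connected graph given as an edge list; the union-find structure here stands in for explicitly fusing nodes.

```java
import java.util.*;

// Sketch of Boruvka's algorithm on an edge list (assumed representation:
// edges[i] = {u, v, weight}). Each round, every component picks its
// lightest outgoing edge; components are tracked with a simple union-find.
class BoruvkaSketch {
    static int[] parent;
    static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

    // Returns the total MST weight; assumes the graph is connected.
    static int mstWeight(int n, int[][] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int components = n, total = 0;
        while (components > 1) {
            int[] cheapest = new int[n];          // lightest edge index per component
            Arrays.fill(cheapest, -1);
            for (int i = 0; i < edges.length; i++) {
                int a = find(edges[i][0]), b = find(edges[i][1]);
                if (a == b) continue;             // edge inside one component
                if (cheapest[a] == -1 || edges[i][2] < edges[cheapest[a]][2]) cheapest[a] = i;
                if (cheapest[b] == -1 || edges[i][2] < edges[cheapest[b]][2]) cheapest[b] = i;
            }
            for (int c = 0; c < n; c++) {
                if (cheapest[c] == -1) continue;
                int[] e = edges[cheapest[c]];
                int a = find(e[0]), b = find(e[1]);
                if (a != b) {                     // an edge may be chosen by two components
                    parent[a] = b;
                    total += e[2];
                    components--;
                }
            }
        }
        return total;
    }
}
```

The find check before merging ensures that no cycle is closed even when two components happen to choose edges of equal weight.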
In the unweighted case the SSSP Problem can be solved using breadth-first search in O(n + m) time. In the weighted case with all edge weights positive, Dijkstra's algorithm is the standard solution to the problem, requiring O(log n * (n + m)) when using binary heaps and O(n * log n + m) when using Fibonacci heaps.
Fortunately, there are several simple solutions. Unfortunately, these algorithms are considerably slower. One is known under the name Floyd-Warshall algorithm and has running time O(n^3). Because of its simplicity and the very small constants hidden in the O(), this algorithm is nevertheless rather practical. It actually solves the all-pairs-shortest-path problem; if this is what one needs, it is a good option. Another algorithm, much closer to Dijkstra's, is the Bellman-Ford algorithm. The only data structure it needs is a queue (not a priority queue). It solves the SSSP problem in O(n * m) time. For sparse graphs this is quite acceptable; for dense graphs Floyd-Warshall may be better.
In general, the notion of shortest paths is not well-defined when there are so-called negative-cost cycles, cycles on which the sum of all costs is negative. One must either assume that no such cycles exist (as we will do in the following), or one must have a mechanism for detecting them.
void negativelyWeightedSSSP(int s, int[] dist)
{
  for (v = 0; v < n; v++)
    dist[v] = INFINITY;
  Queue q = new Queue(n);
  dist[s] = 0;
  q.enqueue(s);
  while (q.notEmpty())
  {
    v = q.dequeue();
    for (each neighbor w of v)
      if (dist[w] > dist[v] + weight[v, w]) // shorter path
      {
        dist[w] = dist[v] + weight[v, w];
        if (! q.isInQueue(w))
          q.enqueue(w);
      }
  }
}

It is essential to test that an element is not on the queue before enqueuing it; otherwise the capacity of the queue may be exceeded. This algorithm is really very simple, but also quite dumb: all nodes on the queue are dequeued, processed and possibly enqueued again and again. Nevertheless, there are no substantially better algorithms.
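A self-contained variant of the pseudocode above, assuming an adjacency-list representation adj[v] of {neighbor, weight} pairs, could look like this:

```java
import java.util.*;

// Self-contained sketch of the queue-based Bellman-Ford algorithm
// (the adjacency-list representation adj[v] of {neighbor, weight} pairs
// is an assumption of this sketch). Returns the distance array; assumes
// no negative-cost cycle is reachable from s.
class BellmanFordSketch {
    static final int INFINITY = Integer.MAX_VALUE / 2; // avoids overflow on addition

    static int[] sssp(int n, int[][][] adj, int s) {
        int[] dist = new int[n];
        Arrays.fill(dist, INFINITY);
        boolean[] inQueue = new boolean[n];     // the essential "is it on the queue?" test
        ArrayDeque<Integer> q = new ArrayDeque<>();
        dist[s] = 0;
        q.add(s); inQueue[s] = true;
        while (!q.isEmpty()) {
            int v = q.poll(); inQueue[v] = false;
            for (int[] e : adj[v]) {
                int w = e[0], wt = e[1];
                if (dist[w] > dist[v] + wt) {   // shorter path found
                    dist[w] = dist[v] + wt;
                    if (!inQueue[w]) { q.add(w); inQueue[w] = true; }
                }
            }
        }
        return dist;
    }
}
```

Using an unbounded ArrayDeque together with the inQueue[] array makes the "element already on the queue" test explicit and keeps at most n entries in the queue at any time.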
Does the algorithm terminate at all? Yes, if there are no negative-cost cycles. How long does it take to terminate? For the analysis of this we need to introduce the concept of a round: round 0 consists of the processing of s; round r consists of the processing of all nodes that were enqueued during round r - 1.
Lemma: For any node v whose shortest path from s goes over r edges, dist[v] has reached its final value at the end of round r.
Proof: For any node v which lies on a negative-cost cycle, there is no shortest path consisting of a finite number of edges, so for these v the claim is void. So, in the following we only consider the subgraph consisting of all nodes not lying on a negative-cost cycle. If this subgraph is empty, there is nothing to prove. Otherwise, we proceed by induction. For r = 0 the claim trivially holds: s has reached its final distance. Now assume the claim holds for r - 1. Consider a node w whose shortest path from s goes over r edges. Let v be the last node on this path before w. v has reached its final distance in some round r' <= r - 1. When v got its final distance, it was enqueued a last time, and in round r' + 1 <= r the algorithm sets dist[w] = dist[v] + weight[v, w], which is the correct final value. End.
Corollary: If there are no negative-cost cycles, then at most n - r + 1 nodes are processed in round r.
Proof: If there are no negative-cost cycles, then for every r' <= r - 2 there is at least one node v whose shortest path from s consists of exactly r' edges. These at least r - 1 nodes reach their final dist-values no later than round r - 2, and thus are not inserted in the queue anymore in round r - 1 or later. So at worst the other n - r + 1 nodes may appear in the queue and be processed in round r. End.
Theorem The Bellman-Ford algorithm has running time O(n * m).
Proof: The corollary implies that there are at most n rounds in each of which at most m edges must be processed. End.
The theorem is sharp in the sense that there are inputs for which the time consumption indeed is Omega(n * m); this can even happen for dense graphs. So, in theory the Bellman-Ford algorithm may take up to Omega(n^3) time and is therefore in the worst case not better than the Floyd-Warshall algorithm. However, this view is too pessimistic: in practice there will often be fewer rounds, and for those graphs that require many rounds, typically one does not have to process all edges every time.

With the BFS-based algorithm and Dijkstra's algorithm, it is always explicitly known which nodes have reached their final distance values. The Bellman-Ford algorithm is different: at the end the distances are correct, but during the algorithm it is not yet known which nodes have reached their final values: a node that does not appear in the queue may reappear later on.
The Bellman-Ford algorithm can be integrated in a working Java program. In the given implementation there is no test for negative-cost cycles, so it may happen that the program does not terminate!
In the following picture the operation of the Bellman-Ford algorithm is illustrated. Because the order in which the neighbors of a node are processed is not part of the specification of the algorithm, there are various correct possibilities; the picture shows one of them. The marked nodes are those that are not in the queue at the beginning of this stage and will not appear in the queue anymore.
Fact: A connected graph has an Euler tour if and only if all nodes have even degree. A connected graph has an Euler path if and only if exactly two nodes have odd degree.
The positive side of this claim will follow from the construction hereafter; the negative side is not hard to prove. First we reduce the Euler path problem to the Euler tour problem: if v_1 and v_2 are the two nodes with odd degree, then we can add a single extra edge (v_1, v_2). Now all nodes have even degree. Construct a tour for this graph, starting in v_2 and first traversing the added edge (v_2, v_1) (if the tour runs in the other direction, one can simply reverse it). This tour finishes in v_2 again. Omitting the first edge of the cycle gives a path running from v_1 to v_2.
How to construct a tour? The algorithm is simple:
The whole algorithm takes O(n + m) time, which is surprisingly little for an apparently complex problem. There is an alternative algorithm, a modified DFS, which constructs the tour without first explicitly constructing cycles. Looking into that algorithm it becomes clear that it is essentially doing the same; the efficiency of both algorithms is comparable. The given construction has the advantage of being easier to understand. Furthermore, in applications it is often not required to construct a single tour. In that case the simple algorithm based on "walk along the edges until you get stuck" is good enough and simpler.
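The cycle-based construction can be sketched as follows. This is a hypothetical implementation of Hierholzer's algorithm, which splices the cycles together on the fly using a stack; the explicit edge-list representation is our assumption, so that each edge can be marked as used exactly once.

```java
import java.util.*;

// Sketch of the "walk until stuck, then splice cycles" Euler tour
// construction (Hierholzer's algorithm) for an undirected, connected
// multigraph given as an edge list edges[i] = {u, v}.
class EulerTourSketch {
    // Returns the tour as a node sequence of m + 1 nodes,
    // or null if some node has odd degree.
    static List<Integer> tour(int n, int[][] edges, int start) {
        List<List<Integer>> inc = new ArrayList<>();   // incident edge indices
        for (int i = 0; i < n; i++) inc.add(new ArrayList<>());
        for (int i = 0; i < edges.length; i++) {
            inc.get(edges[i][0]).add(i);
            inc.get(edges[i][1]).add(i);
        }
        for (int v = 0; v < n; v++)
            if (inc.get(v).size() % 2 != 0) return null; // odd degree: no tour
        boolean[] used = new boolean[edges.length];
        int[] ptr = new int[n];                        // next unexplored incident edge
        Deque<Integer> stack = new ArrayDeque<>();
        List<Integer> tour = new ArrayList<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.peek();
            while (ptr[v] < inc.get(v).size() && used[inc.get(v).get(ptr[v])]) ptr[v]++;
            if (ptr[v] == inc.get(v).size()) {         // stuck: emit node, backtrack
                tour.add(stack.pop());
            } else {                                   // walk along an unused edge
                int e = inc.get(v).get(ptr[v]);
                used[e] = true;
                stack.push(edges[e][0] == v ? edges[e][1] : edges[e][0]);
            }
        }
        return tour;
    }
}
```

The stack realizes the splicing: whenever a walk gets stuck, the nodes popped so far form a cycle that is inserted into the tour at the point where the walk branched off.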
Another important application is as a subroutine in a slightly more advanced problem. Consider a group of friends who have rented a fitness center for two hours. There are n_1 friends and n_2 training machines. Each of them wants to train for 30 minutes on 4 (not necessarily different) machines. Two questions arise:
There is a big difference between these questions. The first is a question about existence. So far we were not confronted with existence questions. Clearly the smallest element in a set can be selected, clearly a sorted order can be constructed, clearly membership can be tested, and elementary algorithms are obvious. For our problem it is not a priori clear that there exists any solution. Maybe we are just asking too much.
The second question is of a different nature. Here we ask about computability. Proving existence might for example go by contradiction and does not need to be constructive. Clearly any construction implies existence, but the other implication does not hold, and there are many problems for which we know that a solution exists, but for which so far no one could give an algorithm with acceptable (= less than exponential) running time.
Fortunately, in the case of the fitness center, there is a surprisingly simple algorithm. It is based on our accumulated knowledge. If there are more than four persons wanting to use the same machine, then clearly they cannot be scheduled in 4 time slots. So, in that case the answer to the first question is negative. Therefore, in the remainder of the discussion, we assume that there are at most four persons wanting to use any of the machines.
We first abstract the problem, reformulating it as a problem for graphs. There is a node for each of the friends and a node for each of the machines. There is an edge for each of the wishes. That is, if person i wants to use machine j, there is an edge (i, j). This graph is bipartite: all edges run between the subset V_1 of n_1 nodes corresponding to the friends and the subset V_2 of n_2 nodes corresponding to the machines.
This graph has maximum degree 4 (each person wants to train on four machines, each machine is selected at most four times). If we succeed in allocating a number from {0, 1, 2, 3} to all edges so that at no node of V_1 or V_2 two incident edges have the same number, we are done. An assignment of a value x to edge (i, j) can namely be viewed as assigning person i to machine j in time slot x. The condition that each node in V_1 and V_2 gets each number at most once means that a person is not scheduled to more than one machine at a time and that a machine is not allocated to more than one person at a time. Assigning values from {0, 1, 2, 3} is equivalent to coloring the edges with four different colors.
An edge coloring of a graph is an assignment of numbers to the edges so that no two edges incident upon the same node have the same number. In general it is very hard to compute a coloring using a minimum number of colors, but for bipartite graphs this is much easier. In particular, it is always possible to construct a coloring with d colors, where d is the maximum degree of any node. This is a consequence of Hall's (also called König's) theorem, which states that in a bipartite graph there is a matching which matches all nodes of maximum degree. This provides the step in an inductive proof: a coloring can certainly be found by repeatedly constructing such a matching. Surprisingly, coloring bipartite graphs can be performed much more efficiently than this. Computing a single matching in a bipartite graph with n nodes and m edges takes O(sqrt(n) * m) time. For a regular bipartite graph of degree g, m = g * n, so repeated matching takes O(sqrt(n) * g * m) time. A coloring can instead be constructed in just O(log g * m) time.
Consider a bipartite graph with node sets V_1 and V_2, each with n nodes. Assume that the graph is regular of degree g. A first idea for constructing a coloring is to start allocating the colors to the first node of V_1, then the second and so on. When assigning the colors of node i, we should respect the conditions imposed by earlier assigned colors. If one is lucky this may work, but in general this approach will get stuck when we must assign a color c to an edge (i, j) while node j has an edge (i', j) for some i' < i, which was already assigned color c while coloring node i'. Another "greedy" strategy may also work, but not always. The idea is that one tries to assign the colors one by one. The algorithm gets stuck when, while assigning color c, one comes to a node i for which all the uncolored edges lead to nodes which already have an edge that was given color c before.
A working and efficient strategy is based on Euler splittings. For a g-regular graph G with g even, the graph can easily be split in two edge-disjoint subgraphs G_0 and G_1 each of degree g / 2. This is done by constructing an Euler tour of the graph, numbering the edges on the tour alternatingly 0 and 1. All edges which have been numbered 0 belong to G_0, all other edges to G_1. Because an Euler tour of a graph with n nodes and m edges can be constructed in O(n + m) time, this splitting costs time O(g * n).
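A sketch of one such splitting step might look as follows. The code assumes a bipartite graph in which every node has even degree; in the bipartite case every closed walk has even length, so alternating the labels along each walk gives every node equally many 0-edges and 1-edges.

```java
import java.util.*;

// Sketch of one Euler splitting step for a bipartite graph in which every
// node has even degree (edges[i] = {u, v} is the assumed representation).
// Walks are traversed and the edges numbered alternatingly 0 and 1; in a
// bipartite graph every closed walk has even length, so each pass through
// a node contributes one 0-edge and one 1-edge, halving the degree.
class EulerSplitSketch {
    // Returns label[i] in {0, 1} for each edge: G_0 gets the 0-edges, G_1 the rest.
    static int[] split(int n, int[][] edges) {
        List<List<Integer>> inc = new ArrayList<>();
        for (int i = 0; i < n; i++) inc.add(new ArrayList<>());
        for (int i = 0; i < edges.length; i++) {
            inc.get(edges[i][0]).add(i);
            inc.get(edges[i][1]).add(i);
        }
        boolean[] used = new boolean[edges.length];
        int[] ptr = new int[n], label = new int[edges.length];
        for (int start = 0; start < n; start++) {      // one or more closed walks per node
            int v = start, bit = 0;
            while (true) {
                while (ptr[v] < inc.get(v).size() && used[inc.get(v).get(ptr[v])]) ptr[v]++;
                if (ptr[v] == inc.get(v).size()) break; // walk closed at its start node
                int e = inc.get(v).get(ptr[v]);
                used[e] = true;
                label[e] = bit;                         // alternate 0, 1, 0, 1, ...
                bit = 1 - bit;
                v = edges[e][0] == v ? edges[e][1] : edges[e][0];
            }
        }
        return label;
    }
}
```

Because every node has even degree, a walk can only get stuck at its own start node, so each iteration of the outer loop closes a walk and leftover edges are picked up by later start nodes.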
In general, for a graph of degree g = 2^k, the algorithm consists of k rounds. In round i, 0 <= i < k, the algorithm is working on 2^i subgraphs in which all nodes have degree 2^{k - i}. Each of the 2^i operations in round i takes O(2^{k - i} * n) time, so the total amount of work in any of the rounds is O(2^k * n) = O(g * n) = O(m). Thus, in total over all rounds the algorithm takes O(k * m) = O(log g * m) time. For g which are not powers of 2, the construction of such colorings is considerably harder. After much research it has been established (by Cole, Ost and Schirra in Combinatorica, Vol. 21, 2001) that even the general case can be solved in O(log g * m) time.
The greedy algorithm, picking the edges in any order and considering whether they can be added, is not optimal. For example, for a path of four nodes connected by three edges, the greedy algorithm may start by picking the middle edge, blocking any further edge.
If for an edge e, u(e) denotes one endpoint of the edge and v(e) the other endpoint, then the matching problem can also be formulated as an ILP. The problem is to define a function f mapping the edges to numbers so that sum_e f(e) is maximized under the following conditions:
sum_{e | u(e) = w or v(e) = w} f(e) <= 1, for all w in V
f(e) = 0 or 1, for all e in E
First constructing an optimum solution to the corresponding LP and then rounding to integral values is much better than the greedy algorithm, but it does not necessarily give an optimum solution, as is to be shown in one of the exercises.
Lemma: An augmenting path with respect to a matching M is simple.
Proof: Assume that the path is not simple. Then either the path is traversing some node i more than once, or it is finishing in a node i that was traversed before. The first would imply that i is incident on two or more matched edges, which is in contradiction with M being a matching. The second implies that the endpoint is incident on at least one matched edge, which is in contradiction with the path being augmenting, which means among other things that the last node is free. End.
Lemma: Let P be the set of edges on an augmenting path p = [u_1, ..., u_{2 * k}] with respect to a matching M, then M' = M ^ P is a matching of cardinality |M| + 1. Here A ^ B denotes the symmetric difference of A and B: A ^ B = (A - B) + (B - A).
Proof: Check that it is a matching by distinguishing three cases, after assuming that two edges e and e' in M' = M ^ P are incident upon the same node:
For the number of edges, we notice that P has 2 * k - 1 edges. k - 1 of these are in M, the other k are free. So, |M'| = |M ^ P| = |M| + k - (k - 1) = |M| + 1. End.
Theorem: A matching M in a graph G is maximum if and only if there is no augmenting path in G with respect to M.
Proof: If there is an augmenting path, then the matching is not maximum because of the lemma. Now assume that M is not maximum. Let M' be maximum instead. Consider the edges in M ^ M'. Because this is the union of two subsets of matchings, all nodes in the subgraph G' = (V, M ^ M') have degree 2 or less. If the degree is 2, one of the incident edges is from M, the other from M'. A graph of degree two is a composition of cycles and paths. In the cycles the edges of M and M' alternate, and therefore they all contain the same number of edges from M and M'. Because |M'| > |M|, there must be at least one path with more edges from M' than from M. Even here the edges alternate, and therefore this path begins and ends with an edge from M'. So, this is an augmenting path. End.
This theorem gives a perfect characterization of the maximality of matchings in terms of augmenting paths. In the remainder we will consider how to find such paths: all presented matching algorithms are based on repeatedly finding augmenting paths.

This algorithm sounds extremely natural, and one might believe that it works without limitations. However, this is not true. On a general graph (that is, one that is not bipartite), the above theorem holds, but it is not true that all augmenting paths are found by performing a BFS. Consider the following graph:
So, why is the algorithm correct for bipartite graphs? Let the set V of nodes be composed of V_1 and V_2, so that all edges run between V_1 and V_2. Assume that we are starting the BFS searches from the nodes in V_1. Any pair of consecutive edges, an unmatched edge followed by a (uniquely determined) matched edge, can be replaced by a single edge in a reduced graph consisting only of the nodes of V_1. A target node is now any node of V_1 from which there is an edge to an unmatched node in V_2. A path in this reduced graph corresponds to an alternating path, and a path to a target node can be extended to an augmenting path. Most importantly, and it is here that we use that the graph is bipartite, any augmenting path starting from a node in V_1 is of this form.
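A common realization of this search on bipartite graphs is Kuhn's algorithm, which looks for an augmenting path by DFS instead of BFS (an assumed variation; the theorem above makes either search order correct). The sketch assumes adjacency lists adj[i] of the V_2-neighbors of each node i in V_1.

```java
import java.util.*;

// Sketch of augmenting-path matching on a bipartite graph (Kuhn's
// algorithm, a DFS variant of the search described above). adj.get(i)
// lists the V_2-neighbors of node i in V_1; matchR[j] is the V_1-partner
// of V_2-node j, or -1 if j is free.
class BipartiteMatchingSketch {
    static boolean augment(int i, List<List<Integer>> adj, int[] matchR, boolean[] seen) {
        for (int j : adj.get(i)) {
            if (seen[j]) continue;
            seen[j] = true;
            // j is free, or its current partner can be rematched elsewhere:
            if (matchR[j] == -1 || augment(matchR[j], adj, matchR, seen)) {
                matchR[j] = i;
                return true;
            }
        }
        return false;
    }

    static int maxMatching(int n1, int n2, List<List<Integer>> adj) {
        int[] matchR = new int[n2];
        Arrays.fill(matchR, -1);
        int size = 0;
        for (int i = 0; i < n1; i++)           // one augmenting search per V_1 node
            if (augment(i, adj, matchR, new boolean[n2])) size++;
        return size;
    }
}
```

Each successful call to augment flips the matched/unmatched status of the edges along one augmenting path, exactly the symmetric-difference step of the lemma above; n_1 searches over at most m edges each give the O(n * m) bound mentioned for the simple algorithm.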
The algorithm for the non-bipartite case is considerably more complex. The easiest implementation takes O(n^4) time.
0 <= f(i, j) <= b(i, j), for every i, j: capacity constraints

sum_j f(j, i) = sum_j f(i, j), for every i in V - {s, t}: flow conservation

The goal is to find a maximum flow, that is, to find an f for which the flow out of s, or equivalently into t, is maximized:

sum_i f(s, i) = sum_i f(i, t).

If an edge (i, j) with capacity b(i, j) is carrying a flow f(i, j), then the amount b(i, j) - f(i, j) is called the residual capacity. An edge with residual capacity zero is said to be saturated.
A cut through a network is a division of the set of nodes into two subsets, V_s containing s and V_t containing t, which are mutually disjoint and together contain all nodes. The capacity of a cut is the sum of the capacities of all edges leading from V_s to V_t. Notice that we do not count, positively or negatively, edges running in the other direction. One of the most famous theorems in this domain, which we give here without proof, is the so-called maxflow-mincut theorem:
Theorem: The value of the maximum flow equals the capacity of the minimum cut.
This theorem cannot directly be turned into an algorithm: it does not tell us how to find a minimum cut. However, the theorem provides us with a means to argue. We will use it in our analysis of networks in which all edges have capacity one. An example of a flow network together with a maximum flow and the mincut is given in the following picture:
Applying this idea to the following example, in which all edges have capacity one, the algorithm may find the direct path from s to t, transporting one flow unit. No further augmenting paths can be found. So, we get stuck with a flow of one unit, whereas one can clearly transport two units.
Without proof we state the following result (it is directly related to the maxflow-mincut theorem):
Theorem: A flow f is maximum, if and only if there is no path from s to t in N(f).
Even though the approach of repeatedly constructing N(f) and searching for paths from s to t in it eventually gives a maxflow, this is not a particularly efficient way to compute a maximum flow. If in the following figure the edge (u, v) has capacity 1, while all other edges have capacity x for some large number x, then we may find as augmenting paths an alternation of (s, u, v, t) and (s, ..., v, u, ..., t), increasing the flow by one unit in every step. So, it takes 2 * x steps to find the maximum flow of 2 * x units. Because the capacities can be encoded in O(log x) bits, the running time is exponential in the size of the input.
AN(f) is the so-called auxiliary network with respect to f, a subnetwork of N(f) containing precisely those edges of N(f) that lie on a shortest path from s to t. Because our algorithm only looks for shortest paths, this still allows us to find all interesting paths. At the same time this network is much simpler: it is layered. A network is layered if there are subsets V_0 = {s}, V_1, ..., V_{d-1}, V_d = {t}, so that every node of V lies in one of the V_i, and all edges run between two consecutive subsets.
Notice that N(f) can be constructed from N and f in O(n + m) time in a trivial way, and that AN(f) can be found from N(f) by BFS (one BFS from s and one from t; an edge (i, j) lies on a shortest path from s to t if d_s(i) + 1 + d_t(j) = d_s(t)).
It is very wasteful to throw away AN(f) after finding a single path. It is a much better idea to first find all possible paths in AN(f), and then to add this maximal flow f' in AN(f) to f. It can be shown that when doing this, the distance from s to t in AN(f + f') is at least one larger than it was in AN(f). This immediately implies that we need at most n of these stages before we reach a network in which s and t are disconnected: the search is over. So, the total time of an algorithm working along these lines is O(n * time per stage).
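One concrete realization of this stage structure is Dinic's algorithm, sketched below. It finds the maximal flow of a stage by repeated DFS in the layered network rather than by the push-and-pull rules discussed next; this is an assumed simplification with the same stage structure, and the class layout is ours.

```java
import java.util.*;

// Sketch of the stage-based maxflow idea (Dinic's algorithm): a BFS layers
// the residual network, then repeated DFS finds a maximal (blocking) flow
// within the layers, one stage per BFS.
class DinicSketch {
    int n;
    List<int[]> edges = new ArrayList<>();   // {to, residual capacity}; edge id^1 is the reverse
    List<List<Integer>> adj;
    int[] level, ptr;

    DinicSketch(int n) {
        this.n = n;
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
    }

    void addEdge(int u, int v, int cap) {
        adj.get(u).add(edges.size()); edges.add(new int[]{v, cap});
        adj.get(v).add(edges.size()); edges.add(new int[]{u, 0});
    }

    boolean bfs(int s, int t) {              // layer the residual network
        level = new int[n];
        Arrays.fill(level, -1);
        level[s] = 0;
        ArrayDeque<Integer> q = new ArrayDeque<>();
        q.add(s);
        while (!q.isEmpty()) {
            int v = q.poll();
            for (int id : adj.get(v)) {
                int[] e = edges.get(id);
                if (e[1] > 0 && level[e[0]] == -1) { level[e[0]] = level[v] + 1; q.add(e[0]); }
            }
        }
        return level[t] != -1;
    }

    int dfs(int v, int t, int pushed) {      // advance only to the next layer
        if (v == t) return pushed;
        for (; ptr[v] < adj.get(v).size(); ptr[v]++) {
            int id = adj.get(v).get(ptr[v]);
            int[] e = edges.get(id);
            if (e[1] > 0 && level[e[0]] == level[v] + 1) {
                int got = dfs(e[0], t, Math.min(pushed, e[1]));
                if (got > 0) {
                    e[1] -= got;
                    edges.get(id ^ 1)[1] += got; // add residual capacity backwards
                    return got;
                }
            }
        }
        return 0;
    }

    int maxflow(int s, int t) {
        int flow = 0;
        while (bfs(s, t)) {                  // one stage per BFS
            ptr = new int[n];
            for (int f; (f = dfs(s, t, Integer.MAX_VALUE)) > 0; ) flow += f;
        }
        return flow;
    }
}
```

The ptr[] array ensures that within a stage every edge is abandoned at most once, which is what keeps the work per stage bounded.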
Define the throughput of a node as the minimum of the sum of the capacities of the ingoing edges and the sum of the capacities of the outgoing edges. In general it is not true that we can construct a flow so that throughput(i) is flowing through node i. But, if we start at the node i with the smallest throughput x of all nodes, then it is certainly possible to push x units of flow from i to t and to pull x units of flow from s to i: we will not get stuck on the way. Afterwards the throughput of node i is reduced to 0, and we may remove i together with all its incident edges from AN(f). This already shows that we must perform at most n of these push-and-pull operations.
How to organize the push-and-pull operations? In a systematic way of course. We apply two rules:
How expensive is the above algorithm? During a stage every edge gets saturated at most once. So, this gives O(m) in total for all operations of a stage. During every push-and-pull operation, each node is visited only once (because we are working level by level!) and except for possibly saturating many edges, the work performed for a node is constant. So, this gives O(n * n) for the at most n operations performed during a stage.
Theorem: Using the following ideas, maximum flows can be computed in O(n^3) time:
Lemma: On a unit-capacity network a stage takes O(n + m) time.
Proof: Every edge is handled at most once (in particular there is no partial filling), so this takes O(m) time. When visiting a node, either the node is removed from the network or at least one of the incident edges is saturated. The former costs are bounded by O(n); the latter can be attributed to the saturated edges and are thus bounded by O(m). End.
Lemma: On a simple unit-capacity network the distance d from s to t satisfies d <= n / maxflow + 1.
Proof: Divide the node set V in subsets V_i, 0 <= i <= d, constituted of all nodes at distance i from s. Notice, that we work with distances in the original network N, not those in an associated network N(f). The whole flow of x units is running through each of the V_i (because there are no bypassing edges, as the definition of the V_i is based on the distance of the nodes from s). So, because each node can transfer only one flow unit, each of the V_i must have size at least x. That is, there can be at most n / x of these subsets. End.
Theorem: On a simple unit-capacity network the maxflow algorithm requires O(n^{1/2} * m) time.

Proof: The time per stage is bounded by O(m) because of the above lemma. So, it suffices to show that the number of stages is bounded by O(n^{1/2}). We distinguish two cases:
Consider a bipartite graph, with node sets V_1 and V_2. This is turned into a network as follows: we add a source s, from which there is an edge to all nodes in V_1 and a sink t, to which there is an edge from all nodes in V_2. The edges in the graph are directed from the nodes in V_1 to those in V_2.
An integer flow from s to t in this network corresponds to a matching: every node in V_1 transfers at most one unit of flow, which means that it has at most one outgoing edge carrying flow (that is why we imposed that the flow should be integral). The same argument shows that the matching property is also satisfied on the other side. A maximum flow corresponds to a maximum matching: a larger matching would show how to pump more flow, a larger flow would show how to construct a larger matching.
In general, as shown above, maximum matchings can be computed in O(n^3) time, but in this case our network is simple (in the sense defined above) and has unit capacities; thus the maximum flow can be found in O(n^{1/2} * m) time, which is much faster than O(n^3), particularly for sparser graphs. The algorithm never fractions the flow unnecessarily, and therefore the resulting flow is integral. More formally this can be shown by induction: initially the flow is integral (namely equal to 0) and all capacities are integral. Thus, all throughputs are integral, and thus all edges get integral flows (because their residual capacities are integral). So, the next flow is integral again.
Consider a weighted graph G = (V, E), with weight w_{i, j} associated to the edge running from node i to node j. The goal is to find a matching so that the sum of the weights of the matched edges is maximized. Without loss of generality (though this may have a negative influence on the running time), we may assume that the graph is complete: non-existing edges can be replaced by edges with weight zero; matching these has no impact on the sum of the weights, so any maximum matching in the original graph is also a maximum matching in the complete graph. For bipartite graphs, we can even assume, if this is needed in some argument, that the two subsets have the same cardinality: we can always add nodes to the smaller subset with only zero-weight edges to the other subset. The maximization problem we are considering can be replaced by a minimization problem: an edge with weight w_{i, j} is replaced by one with "cost" c_{i, j} = W - w_{i, j}, where W is the maximum over all w_{i, j}. In this form the problem becomes the same as the earlier considered assignment problem.
The problem now has a simple formulation as a linear program. We must have
sum_{j = 1}^n x_{i, j} = 1, for all i,
sum_{i = 1}^n x_{i, j} = 1, for all j,
x_{i, j} >= 0, for all i and j,
and the goal is to minimize
C = sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j}.
Notice that within a linear program there is no way to impose that the x_{i, j} take integral values. Of course we can impose this, but then the problem is no longer a linear program. And non-integral solutions to this problem may exist. Examples are easy to construct: take 2 nodes on each side, c_{a, 1} = 3, c_{b, 2} = 5, c_{a, 2} = c_{b, 1} = 4; setting all x_{i, j} = 1/2 gives cost 8, the same as either of the two integral matchings. However, fortunately, these non-integral solutions never lead to a value of C that is smaller than what can be reached by integral solutions (non-integral solutions are always convex combinations of integral solutions). A chic way of saying this is to say "all basic feasible solutions are integral".
Again we are performing searches for augmenting paths, but this time we do not search through all edges from the start. We start with a small graph, having only a few admissible edges. When we get stuck on this graph, some further edges (at least one) are made admissible, and the process is repeated until all nodes are matched. This way we will certainly find a perfect matching. The main question is how to add the edges in such a way that the result is a cost-minimizing matching.
The method will be presented in a way that does not immediately give rise to an efficient algorithm. With some modifications it can be turned into an algorithm running in O(n^3) time. The central lemma on which the whole method is based is the following:
Lemma: The optimal solution remains the same if a constant (possibly negative) is added to any row or column of the cost matrix.
Proof: Instead of the former values c_{i, j}, we now have cost values c'_{i, j} = c_{i, j} + p_i + q_j, where p_i is the value added to all entries in row i, and q_j the value added to all entries in column j. The objective function (that is, the function to minimize) is now
C' = sum_{i = 1}^n sum_{j = 1}^n c'_{i, j} * x_{i, j}
= sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j} + sum_{i = 1}^n sum_{j = 1}^n p_i * x_{i, j} + sum_{i = 1}^n sum_{j = 1}^n q_j * x_{i, j}
= sum_{i = 1}^n sum_{j = 1}^n c_{i, j} * x_{i, j} + sum_{i = 1}^n p_i + sum_{j = 1}^n q_j,
using that sum_{j = 1}^n x_{i, j} = 1 for all i and sum_{i = 1}^n x_{i, j} = 1 for all j. So, C' is larger than C by sum_{i = 1}^n p_i + sum_{j = 1}^n q_j, a value which does not depend on the choice of the x_{i, j}. End.
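The lemma can also be checked numerically by brute force. The sketch below compares the optimal assignment, found by enumerating all 3! permutations of a made-up 3 x 3 cost matrix (the matrix and the size restriction are assumptions for brevity, not from the text), before and after adding constants to the rows and columns.

```java
import java.util.*;

// Brute-force check of the lemma on a small instance: the optimal
// assignment (as a permutation) is unchanged when constants are added
// to the rows and columns of the cost matrix.
public class LemmaCheck {
    // Enumerate all 3! = 6 permutations and return the cheapest one.
    static int[] bestPerm(int[][] c) {
        int[][] perms = {{0,1,2},{0,2,1},{1,0,2},{1,2,0},{2,0,1},{2,1,0}};
        int[] best = null;
        int bestCost = Integer.MAX_VALUE;
        for (int[] p : perms) {
            int cost = 0;
            for (int i = 0; i < c.length; i++) cost += c[i][p[i]];
            if (cost < bestCost) { bestCost = cost; best = p; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] c = {{4, 7, 3}, {2, 6, 5}, {3, 9, 4}};
        int[] before = bestPerm(c);
        // add p_i = i to row i and q_j = 10 - j to column j
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) c[i][j] += i + (10 - j);
        System.out.println(Arrays.equals(before, bestPerm(c))); // prints true
    }
}
```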
The method is explained using the following bipartite graph with five nodes on each side. This is the same graph as considered in the chapter on branch-and-bound algorithms.
In a preparatory step, we first subtract, in each row, the minimum value of that row from all entries, and thereafter, in each column, the minimum value of that column from all entries. For the example graph the cost matrix is modified as follows:
Notice that these manipulations do not create any negative values, because the column minima are determined after subtracting the row minima. The preparatory step ensures that afterwards there is at least one zero in each row and each column. This means that in the zero-cost graph, the bipartite graph which has an edge corresponding to each zero in the cost matrix, all nodes have degree at least one. If we are lucky, the zero-cost graph has a perfect matching. A matching is said to be perfect if all nodes are matched; on a bipartite graph with n nodes on each side this means that in total there are n matched edges. Because this matching corresponds to a zero-cost matching of the graph belonging to the corrected cost matrix, it is a minimum matching of this weighted graph (here we use that all values are still nonnegative!). The above lemma states that it then also gives a minimum matching of the original graph.
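The preparatory step can be sketched as follows; the cost matrix here is a made-up example, not the one from the text.

```java
import java.util.*;

// The preparatory step: subtract the row minima, then the column
// minima, so that every row and every column of the cost matrix
// contains at least one zero.
public class Reduce {
    static void reduce(int[][] c) {
        int n = c.length;
        for (int i = 0; i < n; i++) {           // rows first
            int min = Integer.MAX_VALUE;
            for (int j = 0; j < n; j++) min = Math.min(min, c[i][j]);
            for (int j = 0; j < n; j++) c[i][j] -= min;
        }
        for (int j = 0; j < n; j++) {           // then columns
            int min = Integer.MAX_VALUE;
            for (int i = 0; i < n; i++) min = Math.min(min, c[i][j]);
            for (int i = 0; i < n; i++) c[i][j] -= min;
        }
    }

    public static void main(String[] args) {
        int[][] c = {{5, 9, 7}, {8, 6, 10}, {4, 5, 6}};
        reduce(c);
        System.out.println(Arrays.deepToString(c));
        // prints [[0, 4, 0], [2, 0, 2], [0, 1, 0]]
    }
}
```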
If the zero-cost graph has no perfect matching we continue to modify the cost matrix until the resulting zero-cost graph has a perfect matching. These corrections are performed in a systematic way, based on the crossing-out lemma:
Lemma: The minimum cardinality of a set of lines (horizontal and vertical) covering all zeroes in an n x n matrix equals the size of a maximum cardinality matching in the zero-cost graph.
Proof: A matching in the zero-cost graph cannot have more edges than the number of lines needed to cover all zeroes: if it had, two matched edges would lie on one line, that is, in one row or one column, contradicting the matching property. This shows that the maximum cardinality of a matching is at most the minimum cardinality of a set of lines covering all zeroes. It is not so easy to see that these two quantities are actually equal; that proof is not given here. End.
This lemma is one of many "min-max" lemmas, which are typical for primal-dual algorithms (the minimum solution of the dual corresponds to a maximum solution of the primal, and in the optimum the values are equal). The following steps are repeated until the zero-cost graph has a perfect matching: determine a maximum matching of the zero-cost graph and a minimum set of lines covering all zeroes; let x be the smallest uncovered value; subtract x from all uncovered entries and add x to all entries covered by two lines.
We continue the example: a maximum matching might be (a, 1), (b, 5), (d, 5) and (e, 2). Because the matching is not perfect we still have to continue. The minimum covering of the zeroes corresponding to the matching is given by row b, row d, row e and column 1. The smallest value x which is not covered is the 2 at position (a, 5). The values are changed according to the given rules by x. Due to the modifications a new zero is created. There is always at least one new zero. This additional zero is precisely what we need: the new zero-cost graph has a perfect matching: (a, 5), (b, 3), (c, 1), (d, 2) and (e, 4). In the original graph the cost of this matching is 26 + 25 + 21 + 27 + 50 = 149, the same value we found with the branch-and-bound algorithm.
There is an important rule which, without further explanation, may seem somewhat obscure, but which must be applied to assure that substantial progress is made. The crossing-out sometimes leaves freedom: some lines can just as well be drawn horizontally as vertically. Making the wrong choices, it may happen that the size of the matching of the zero-cost graph remains the same during many rounds; the result may be that the running time becomes exponential in the size of the input. The theory underlying the algorithm assures that when the crossing-out is performed so that vertical lines are used only when they are required for covering all zeroes with the minimum number of lines, the size of the matching increases by at least one in each round, until it reaches size n.
In the following example, we start with a cost matrix for which the maximum matching of the corresponding zero-cost graph has cardinality 4, matching the edges (a, 3), (c, 1), (d, 5) and (e, 4). The lines that cover all zeroes in the cost matrix must be drawn through these positions, but there are three possible ways to do so: the lines through (a, 3) and (c, 1) can each be drawn either horizontally or vertically. Drawing the line through (a, 3) horizontally and the line through (c, 1) vertically, the minimum uncovered value is the 1 at position (c, 3). Modifying the matrix and proceeding similarly in the next round gives back essentially the same matrix as the one we started with: it takes x / 2 rounds before a larger matching is found. Drawing the lines through (a, 3) and (c, 1) both horizontally, such an increase is obtained after one round.
Indicate the mincut and determine the value of the maxflow. Repeatedly compute N(f) and AN(f) as was done above to find the maxflow. Draw the whole sequence of pictures.
For a graph G, an independent set can be constructed as follows: consider the nodes in order, and add a node u to I if none of the neighbors of u was added to I before. Give an example of a graph showing that this algorithm can be arbitrarily bad, in the sense that (asymptotically) the ratio between the size of the maximum independent set and the number of selected nodes can be arbitrarily large.
A better idea is to slightly modify the above algorithm: repeatedly select a node u with currently minimal degree and add u to I, then remove u and all its neighbors together with their edges. Continue with these operations until no nodes are left. Work this idea out into an efficient algorithm. What is the time complexity of your algorithm?
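One possible realization of this min-degree heuristic is sketched below. This simple version rescans the degrees in every round, taking O(n * (n + m)) time, so it is not the most efficient answer to the exercise; the graph in main is a made-up example.

```java
import java.util.*;

// Min-degree greedy heuristic for an independent set: repeatedly pick
// a remaining node of minimum degree, add it to the set, and delete it
// together with its neighbors.
public class GreedyIndependentSet {
    static List<Integer> independentSet(List<List<Integer>> adj) {
        int n = adj.size();
        boolean[] removed = new boolean[n];
        List<Integer> result = new ArrayList<>();
        while (true) {
            int u = -1, minDeg = Integer.MAX_VALUE;
            for (int v = 0; v < n; v++) {       // find a node of min degree
                if (removed[v]) continue;
                int deg = 0;
                for (int w : adj.get(v)) if (!removed[w]) deg++;
                if (deg < minDeg) { minDeg = deg; u = v; }
            }
            if (u == -1) break;                 // no nodes left
            result.add(u);
            removed[u] = true;                  // delete u and its neighbors
            for (int w : adj.get(u)) removed[w] = true;
        }
        return result;
    }

    public static void main(String[] args) {
        // path 0 - 1 - 2 - 3
        List<List<Integer>> adj = Arrays.asList(
            Arrays.asList(1), Arrays.asList(0, 2),
            Arrays.asList(1, 3), Arrays.asList(2));
        System.out.println(independentSet(adj)); // prints [0, 2]
    }
}
```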
Give an example of a simple graph (a graph without self-loops and multiple edges) for which this algorithm does not find the maximum independent set. Hint: there is a graph with 20 nodes for which the algorithm selects 7 instead of 8 nodes. Possibly there are even smaller examples.
Generalize the given example to show that even for the improved algorithm the ratio of the maximum possible number of nodes and the selected number is not bounded by a constant.
Show that including a node u with degree one does not compromise the possibility to find a maximum independent set. Hint: consider the optimum solution and distinguish two cases.
Using the above observation give an algorithm to construct a maximum independent set in trees running in O(n) time.
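Assuming the degree-one observation from the previous exercise, one way such an O(n) algorithm could look is sketched below: keep a queue of nodes of current degree at most one, repeatedly include such a node in I, and delete it together with its neighbor. This is only a sketch of one possible answer; the tree in main is a made-up example.

```java
import java.util.*;

// Maximum independent set in a tree (or forest) in linear time, using
// the rule that a node of degree at most one can always be included.
public class TreeIndependentSet {
    static List<Integer> maxIndependentSet(List<List<Integer>> adj) {
        int n = adj.size();
        int[] degree = new int[n];
        boolean[] removed = new boolean[n];
        Deque<Integer> queue = new ArrayDeque<>();
        for (int v = 0; v < n; v++) {
            degree[v] = adj.get(v).size();
            if (degree[v] <= 1) queue.add(v);   // leaves and isolated nodes
        }
        List<Integer> result = new ArrayList<>();
        while (!queue.isEmpty()) {
            int u = queue.poll();
            if (removed[u]) continue;
            result.add(u);                      // u is safe to include
            removed[u] = true;
            for (int w : adj.get(u)) {          // remove u's neighbor
                if (removed[w]) continue;
                removed[w] = true;
                for (int x : adj.get(w))        // neighbors of w lose an edge
                    if (!removed[x] && --degree[x] <= 1) queue.add(x);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // path 0 - 1 - 2 - 3 - 4: a maximum independent set has 3 nodes
        List<List<Integer>> adj = Arrays.asList(
            Arrays.asList(1), Arrays.asList(0, 2), Arrays.asList(1, 3),
            Arrays.asList(2, 4), Arrays.asList(3));
        System.out.println(maxIndependentSet(adj).size()); // prints 3
    }
}
```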
Denote the maximum weight of an independent set in a tree T by W(T). Consider a specific node s in T; s does not need to be the root of T. Express W(T) inductively, distinguishing the cases that s is in I and that s is not in I.
The inductive expression immediately leads to a recursive algorithm for computing W(T). Formulate the algorithm, leaving the choice of s unspecified. This is an example of a divide-and-conquer algorithm; the underlying idea is similar to that of quicksort.
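The recursive algorithm with an unspecified (here: naive) choice of s might look as follows. For simplicity this sketch handles the remaining forest as one node set instead of recursing per component, which does not change the computed value; the tree and weights in main are made-up examples.

```java
import java.util.*;

// Recursive computation of W(T): for a chosen node s, either s is in
// the independent set (remove s and its neighbors) or it is not
// (remove only s); take the better of the two cases.
public class TreeWeightedIS {
    static int[] weight;
    static List<List<Integer>> adj;

    static int W(Set<Integer> nodes) {
        if (nodes.isEmpty()) return 0;
        int s = nodes.iterator().next();        // naive choice of s
        // case 1: s not in I -- remove only s
        Set<Integer> without = new HashSet<>(nodes);
        without.remove(s);
        int best = W(without);
        // case 2: s in I -- remove s and all its neighbors
        Set<Integer> with = new HashSet<>(without);
        adj.get(s).forEach(with::remove);
        return Math.max(best, weight[s] + W(with));
    }

    public static void main(String[] args) {
        // path 0 - 1 - 2 with weights 3, 5, 3: best is {0, 2} with W = 6
        weight = new int[]{3, 5, 3};
        adj = Arrays.asList(Arrays.asList(1), Arrays.asList(0, 2),
                            Arrays.asList(1));
        System.out.println(W(new HashSet<>(Arrays.asList(0, 1, 2)))); // prints 6
    }
}
```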
Let Z(n) be the time for running your algorithm on a tree with n nodes. Give a recurrence relation specifying the time complexity in terms of the sizes of the subproblems to solve. Which nodes are the worst choices for s? Show that, with the worst choices for s, the time Z(n) may be exponential for any tree T.
Which nodes are the ideal choices for s? How much time does it take to find such a node? Now refine the algorithm to always choose these ideal nodes.
Formulate the recurrence relation for the time consumption Z'(n) of the improved algorithm and estimate it as well as you can, in any case proving a polynomial time bound.