23.2 The algorithms of Kruskal and Prim

23.2-1

Kruskal's algorithm can return different spanning trees for the same input graph $G$, depending on how it breaks ties when the edges are sorted into order. Show that for each minimum spanning tree $T$ of $G$, there is a way to sort the edges of $G$ in Kruskal's algorithm so that the algorithm returns $T$.

Suppose that we want Kruskal's algorithm to return $T$. Order the edges primarily by weight, and resolve ties in edge weights in favor of $T$: among edges of equal weight, every edge in $T$ comes before every edge not in $T$, as if the edges outside $T$ were slightly heavier, even though they have the same actual weight.

With this ordering, the algorithm still finds a spanning tree of weight $w(T)$, the common weight of all minimum spanning trees. However, because the edges of $T$ are prioritized, the algorithm picks them over any equal-weight edges that belong only to other minimum spanning trees, so the tree it returns is $T$ itself.
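
As an illustrative sketch (the function name kruskal_with_preference, the edge representation, and the union-find helper are my own choices, not part of the exercise), the tie-breaking order can be realized in Python by sorting edges on the pair (weight, not in T), so that within each weight class the edges of the target tree come first:

def kruskal_with_preference(n, edges, target_tree):
    # Kruskal's algorithm where ties in weight are broken in favor of edges
    # belonging to target_tree (given as a set of frozenset({u, v}) pairs).
    #   n     -- number of vertices, labeled 0 .. n-1
    #   edges -- list of (weight, u, v) tuples
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Sort by weight; among equal weights, edges in target_tree come first
    # (False sorts before True, so non-tree edges act as "slightly larger").
    ordered = sorted(edges,
                     key=lambda e: (e[0], frozenset((e[1], e[2])) not in target_tree))

    tree = []
    for weight, u, v in ordered:
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree

Running the standard union-find loop on this ordering then returns exactly the tree $T$, which is what the exercise asks us to show is possible.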

23.2-2

Suppose that we represent the graph $G = (V, E)$ as an adjacency matrix. Give a simple implementation of Prim's algorithm for this case that runs in $O(V^2)$ time.

At each step of the algorithm we add a minimum-weight edge from a vertex in the tree built so far to a vertex not yet in the tree. Thus, it is useful to know, for each vertex not in the tree, the minimum-weight edge from that vertex to some vertex in the tree. We store this information in an array $A$, where $A[u] = (v, w)$ if $w$ is the weight of $(u, v)$ and is minimal among the weights of edges from $u$ to vertices $v$ already in the tree. We'll use $A[u].1$ to access $v$ and $A[u].2$ to access $w$.

PRIM-ADJ(G, w, r)
    initialize A with every entry = (NIL, ∞)
    T = {r}
    for i = 1 to V
        if Adj[r, i] != 0
            A[i] = (r, w(r, i))
    while T != V
        k = the vertex in V - T with minimum A[k].2
        T = T ∪ {k}
        k.π = A[k].1
        for i = 1 to V
            if Adj[k, i] != 0 and i ∉ T and Adj[k, i] < A[i].2
                A[i] = (k, Adj[k, i])
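
For concreteness, here is a runnable Python version of the same idea (the function name prim_adjacency_matrix and the use of None for absent matrix entries are my own choices, not from the text). Each of the $V$ iterations does two $O(V)$ scans, one to extract the minimum and one to update keys, for $O(V^2)$ total:

import math

def prim_adjacency_matrix(adj, r=0):
    # Prim's algorithm on an adjacency matrix in O(V^2) time.
    #   adj -- V x V matrix; adj[u][v] is the weight of edge (u, v), or None if absent
    #   r   -- root vertex
    # Returns a parent array describing the minimum spanning tree.
    n = len(adj)
    in_tree = [False] * n
    key = [math.inf] * n      # key[v] = weight of cheapest known edge from v into the tree
    parent = [None] * n
    key[r] = 0

    for _ in range(n):
        # Extract the minimum-key vertex not yet in the tree: one O(V) scan.
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        # Update the keys of u's neighbors outside the tree: another O(V) scan.
        for v in range(n):
            w = adj[u][v]
            if w is not None and not in_tree[v] and w < key[v]:
                key[v] = w
                parent[v] = u
    return parent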

23.2-3

For a sparse graph $G = (V, E)$, where $|E| = \Theta(V)$, is the implementation of Prim's algorithm with a Fibonacci heap asymptotically faster than the binary-heap implementation? What about for a dense graph, where $|E| = \Theta(V^2)$? How must the sizes $|E|$ and $|V|$ be related for the Fibonacci-heap implementation to be asymptotically faster than the binary-heap implementation?

Prim's algorithm implemented with a binary heap has runtime $O((V + E)\lg V)$, which in the sparse case is just $O(V\lg V)$. The implementation with Fibonacci heaps runs in

$$O(E + V\lg V) = O(V + V\lg V) = O(V \lg V).$$

  • In the sparse case, the two algorithms have the same asymptotic runtimes.
  • In the dense case:

    • The binary heap implementation has a runtime of

      $$O((V + E)\lg V) = O((V + V^2)\lg V) = O(V^2\lg V).$$

    • The Fibonacci heap implementation has a runtime of

      $$O(E + V\lg V) = O(V^2 + V\lg V) = O(V^2).$$

    So, in the dense case, we have that the Fibonacci heap implementation is asymptotically faster.

  • The Fibonacci heap implementation will be asymptotically faster so long as $E = \omega(V)$. Suppose that we have some function that grows more quickly than linearly, say $f$, and that $E = f(V)$.

  • The binary heap implementation will have runtime of

    $$O((V + E)\lg V) = O((V + f(V))\lg V) = O(f(V)\lg V).$$

However, the Fibonacci heap implementation will have runtime of

$$O(E + V\lg V) = O(f(V) + V\lg V).$$

This runtime is either $O(f(V))$ or $O(V\lg V)$, depending on whether $f(V)$ grows more or less quickly than $V\lg V$, respectively.

In either case, the runtime is asymptotically faster than $O(f(V)\lg V)$: if $f(V) = \Omega(V\lg V)$, then $O(f(V)) = o(f(V)\lg V)$; otherwise the bound is $O(V\lg V)$, which is $o(f(V)\lg V)$ because $f(V) = \omega(V)$.
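
As a concrete illustration (my own choice of function, not part of the exercise), take $E = f(V) = V^{3/2}$, which is $\omega(V)$:

$$\text{binary heap: } O((V + V^{3/2})\lg V) = O(V^{3/2}\lg V), \qquad \text{Fibonacci heap: } O(V^{3/2} + V\lg V) = O(V^{3/2}),$$

so the Fibonacci-heap implementation wins by a factor of $\lg V$.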

23.2-4

Suppose that all edge weights in a graph are integers in the range from $1$ to $|V|$. How fast can you make Kruskal's algorithm run? What if the edge weights are integers in the range from $1$ to $W$ for some constant $W$?

(Removed)

23.2-5

Suppose that all edge weights in a graph are integers in the range from $1$ to $|V|$. How fast can you make Prim's algorithm run? What if the edge weights are integers in the range from $1$ to $W$ for some constant $W$?

For the first case, we can use a van Emde Boas tree to improve the time bound to $O(E \lg \lg V)$. Compared to the Fibonacci heap implementation, this improves the asymptotic running time only for sparse graphs, and it cannot improve the running time polynomially. An advantage of this implementation is that it may have a lower overhead.

For the second case, we can use a collection of doubly linked lists, each corresponding to an edge weight. This improves the bound to $O(E)$.
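
Here is a minimal Python sketch of the second case (the function name prim_bucket_queue and the adjacency-list format are my own assumptions). The priority queue becomes an array of $W + 2$ buckets indexed by key, so EXTRACT-MIN is an $O(W) = O(1)$ scan for constant $W$ and DECREASE-KEY just moves a vertex to a lower bucket in $O(1)$, giving $O(E + VW) = O(E)$ overall on a connected graph:

def prim_bucket_queue(adj, W, r=0):
    # Prim's algorithm when all edge weights are integers in 1..W for constant W.
    #   adj -- adjacency list: adj[u] is a list of (v, weight) pairs
    #   W   -- maximum edge weight
    #   r   -- root vertex
    # Returns a parent array for the minimum spanning tree.
    n = len(adj)
    INF = W + 1                        # sentinel key meaning "no edge into the tree yet"
    key = [INF] * n
    parent = [None] * n
    in_tree = [False] * n
    buckets = [set() for _ in range(W + 2)]   # buckets[k] = vertices whose key is k
    key[r] = 0
    buckets[0].add(r)
    for v in range(n):
        if v != r:
            buckets[INF].add(v)

    for _ in range(n):
        # EXTRACT-MIN: scan the W + 2 buckets, O(1) for constant W.
        for k in range(W + 2):
            if buckets[k]:
                u = buckets[k].pop()
                break
        in_tree[u] = True
        # DECREASE-KEY: move each improved neighbor to a lower bucket, O(1) each.
        for v, w in adj[u]:
            if not in_tree[v] and w < key[v]:
                buckets[key[v]].discard(v)
                buckets[w].add(v)
                key[v] = w
                parent[v] = u
    return parent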

23.2-6 ★

Suppose that the edge weights in a graph are uniformly distributed over the half-open interval $[0, 1)$. Which algorithm, Kruskal's or Prim's, can you make run faster?

For input drawn from a uniform distribution, I would use bucket sort with Kruskal's algorithm, for expected linear-time sorting of edges by weight. This would achieve an expected runtime of $O(E\alpha(V))$.
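
A possible realization in Python (the function name kruskal_uniform_weights and the edge format are my own, and the union-find omits union by rank for brevity): bucket sort distributes the edges over $|E|$ buckets, each of which holds $O(1)$ edges in expectation, so the sort runs in expected linear time before the usual union-find loop:

def kruskal_uniform_weights(n, edges):
    # Kruskal's algorithm for edge weights drawn uniformly from [0, 1).
    #   n     -- number of vertices, labeled 0 .. n-1
    #   edges -- list of (weight, u, v) tuples with 0 <= weight < 1
    # Returns the list of MST edges.
    m = len(edges)
    buckets = [[] for _ in range(m)]
    for e in edges:
        buckets[int(e[0] * m)].append(e)      # bucket index derived from the weight
    ordered = []
    for b in buckets:
        ordered.extend(sorted(b))             # expected O(1) work per bucket

    parent = list(range(n))
    def find(x):                              # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for weight, u, v in ordered:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree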

23.2-7 ★

Suppose that a graph $G$ has a minimum spanning tree already computed. How quickly can we update the minimum spanning tree if we add a new vertex and incident edges to $G$?

(Removed)

23.2-8

Professor Borden proposes a new divide-and-conquer algorithm for computing minimum spanning trees, which goes as follows. Given a graph $G = (V, E)$, partition the set $V$ of vertices into two sets $V_1$ and $V_2$ such that $|V_1|$ and $|V_2|$ differ by at most $1$. Let $E_1$ be the set of edges that are incident only on vertices in $V_1$, and let $E_2$ be the set of edges that are incident only on vertices in $V_2$. Recursively solve a minimum-spanning-tree problem on each of the two subgraphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. Finally, select the minimum-weight edge in $E$ that crosses the cut $(V_1, V_2)$, and use this edge to unite the resulting two minimum spanning trees into a single spanning tree.

Either argue that the algorithm correctly computes a minimum spanning tree of $G$, or provide an example for which the algorithm fails.

The algorithm fails. Suppose $E = \{(u, v), (u, w), (v, w)\}$, where $(u, v)$ and $(u, w)$ each have weight $1$ and $(v, w)$ has weight $1000$, and partition the vertices into $V_1 = \{u\}$ and $V_2 = \{v, w\}$. The recursive calls return the empty tree for $G_1$ and the single edge $(v, w)$ for $G_2$, and the cheapest crossing edge has weight $1$, so the algorithm outputs a spanning tree of weight $1001$. The minimum spanning tree, however, is $\{(u, v), (u, w)\}$, of weight $2$.