41. Problem Definition: Prim's Minimum Spanning Tree Algorithm (DSA 2), by Tim Roughgarden

So in this sequence of tutorials, we're going to apply the greedy algorithm design paradigm to a fundamental graph problem: the problem of computing minimum spanning trees. The MST problem is a really fun playground for greedy algorithm design, because it's the singular problem in which pretty much any greedy algorithm you come up with seems to work. So we'll talk about a couple of the famous ones, show why they're correct, and show how they can be implemented using suitable data structures to be blazingly fast.

I'll give you the formal problem definition on the next slide, but first let me just say informally what it is we're trying to accomplish. Essentially, what we want to do is connect a bunch of points together as cheaply as possible. As usual with an abstract problem, the objects can mean something very literal: maybe the points we're trying to connect are servers in some computer network. Or they could represent something more abstract: maybe we have a model of documents, like web pages, where we represent them as points in space.

And we want to somehow connect those together. Now, the main reason I'm going to spend time on the minimum spanning tree problem is pedagogical. It's just a great problem for sharpening your skills with greedy algorithm design and proofs of correctness. It'll also give us another opportunity to see the beautiful interplay between data structures and fast implementations of graph algorithms. That said, the minimum spanning tree problem does have applications. One very cool one is in clustering, which I'll talk about in detail in a later tutorial. It also comes up in networking: if you do a web search on "spanning tree protocol," you'll find some information about that.

So, as I said at the beginning, the minimum spanning tree problem is remarkable in that it doesn't just admit one greedy algorithm that's correct; in fact, it admits multiple greedy algorithms that are correct. We're going to talk about two of them, the two most well-known ones, but there are even some others, believe it or not. The first one we're going to discuss, beginning in the next tutorial, is Prim's MST algorithm. This dates back over 50 years, to 1957. In fact, as you'll see, Prim's algorithm shows a remarkable number of similarities with Dijkstra's shortest-path algorithm, so you might not be surprised to learn that Dijkstra also independently discovered this algorithm a couple of years later. But in fact it was only noticed much later that this exact same algorithm had been first discovered over 25 years earlier, by a mathematician named Jarník. For that reason you'll sometimes hear it called Jarník's algorithm or the Prim-Jarník algorithm.

For brevity, and to be consistent with some of the main textbooks in the area, I'm just going to call this Prim's algorithm throughout the lectures. The other algorithm we're going to cover, which is also rightfully famous, is Kruskal's MST algorithm. As far as I know, this was indeed first discovered by Kruskal, at roughly the same time that Prim was working on his algorithm in the mid 50s. And in what sense do I say these algorithms are blazingly fast? Well, they run in almost linear time, linear in the number of edges of the graph.

Specifically, we'll see how using appropriate data structures gets each of them to run in time O(m log n), where m is the number of edges in the graph and n is the number of vertices. We'll employ data structures to speed up Prim's algorithm in exactly the same way we did for Dijkstra's algorithm, that is, by using the heap data structure. One thing that's cool about Kruskal's algorithm is that it'll give us an opportunity to study a new data structure, namely the union-find data structure, and that's a lot of fun to think about in its own right, as you'll see.
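To make the O(m log n) claim a bit more concrete, here is a minimal Python sketch of a heap-based Prim's algorithm. It is not the exact implementation developed later in the lectures (the lecture version keys the heap on vertices and maintains one key per vertex; this sketch uses the simpler "lazy," edge-keyed variant, which achieves the same O(m log n) bound). The vertex labels 0..n-1 and the adjacency-list parameter `adj` are assumptions made purely for illustration.

```python
import heapq

def prim_mst(n, adj, source=0):
    """Lazy heap-based Prim's algorithm (illustrative sketch).

    n    -- number of vertices, assumed to be labeled 0..n-1
    adj  -- adjacency list: adj[v] is a list of (neighbor, cost) pairs
    Assumes the graph is undirected and connected.
    Returns (total MST cost, list of chosen edges).
    """
    in_tree = [False] * n
    total_cost = 0
    mst_edges = []
    # Heap entries: (edge cost, endpoint already in tree, endpoint outside).
    heap = [(0, -1, source)]
    while heap:
        cost, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue  # stale entry; v was already absorbed into the tree
        in_tree[v] = True
        total_cost += cost
        if u != -1:
            mst_edges.append((u, v, cost))
        # Offer every edge crossing from the tree to the rest of the graph.
        for w, c in adj[v]:
            if not in_tree[w]:
                heapq.heappush(heap, (c, v, w))
    return total_cost, mst_edges

if __name__ == "__main__":
    # Tiny example: a triangle with edge costs 1, 2, 3; the MST keeps costs 1 and 2.
    adj = {0: [(1, 1), (2, 3)], 1: [(0, 1), (2, 2)], 2: [(0, 3), (1, 2)]}
    print(prim_mst(3, adj))  # expected total cost: 3
```

Each of the m edges is pushed and popped from the heap at most once, and each heap operation costs O(log m) = O(log n), which is where the O(m log n) bound comes from.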
