This page contains the content of all the slides for the material that we covered in Chapters 7, 8, 10, and 11 of Algorithms, by Jeff Erickson. If you notice any errors, please create a new issue.
Most of these trees had a preferred root, and a preferred direction (away from the root).
A tree is a connected, acyclic graph.
By default, a generic tree is undirected. To understand this definition, we need to be precise about the meaning of acyclic.
Draw an example of a walk that is not a path.
Draw an example of a closed walk that is not a cycle.
What is a closed path? Can you draw an example?
Draw a tree with 6 vertices. How many edges does it have?
How many edges does a tree with \(n\) vertices have? Can you prove your answer?
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Bri | Logan | Kristen | Trevor | Claire | Graham | Blake |
Andrew | Jordan | Levi | Josiah | Ethan | James | Nathan |
Jack | Grace | Isaac | Talia | Kevin | Drake | John |
Lemma. Suppose \(T\) is a tree, and \(u\) and \(v\) are two vertices in \(T\). Then there is a unique path from \(u\) to \(v\).
The converse of the Lemma is also true: If \(G\) is a graph with the property that every pair of vertices are joined by a unique path, then \(G\) is a tree.
A graph is called weighted if there is an assignment of real numbers (weights) to the edges. For example,
Given a weighted graph \(G\), a minimum spanning tree \(T\) is a subgraph of \(G\) such that:
- \(T\) is connected and contains every vertex of \(G\) (i.e., \(T\) spans \(G\)), and
- the total weight of the edges of \(T\) is as small as possible.
Why do the above two conditions guarantee that \(T\) is a tree?
Draw an example of a weighted graph with two different minimum spanning trees (if possible).
Draw an example of a weighted graph with a unique smallest edge \(e\), such that the minimum spanning tree does not contain \(e\) (if possible).
Lemma. If all edge weights in a connected graph \(G\) are distinct, then \(G\) has a unique minimum spanning tree.
A tree is a connected, acyclic graph. (By default, a generic tree is undirected.) Equivalently,
(See Jamboard from last time.)
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Levi | Kristen | Claire | Graham | Grace | Blake | Bri |
Kevin | Trevor | Nathan | John | James | Drake | Isaac |
Jordan | Jack | Josiah | Ethan | Andrew | Logan | Talia |
Hints:
Without loss of generality, we can assume our edge weights are distinct.
Why is this OK?
sqrt(3)^2 == 3
[1] FALSE
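One common way to justify the assumption (a sketch, not from the slides): break ties consistently, for example by comparing a fixed edge index whenever two weights are equal. Every comparison then behaves as if the weights were distinct, so the uniqueness lemma below still applies.

```python
# Hypothetical sketch: order edges by (weight, index) so that no two edges
# ever compare as equal, as if all weights were distinct.
def edge_key(edge):
    weight, index, u, v = edge      # index is any fixed numbering of the edges
    return (weight, index)

edges = [(1.5, 0, 'a', 'b'), (1.5, 1, 'b', 'c'), (0.9, 2, 'a', 'c')]
edges.sort(key=edge_key)            # ties between equal weights are broken by index
```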
Suppose a connected, weighted, undirected graph \(G\) has distinct edge weights and two minimum spanning trees \(T\) and \(T'\).
- Suppose to the contrary that \(T \neq T'\).
- Then there are edges in \(T\) that are not in \(T'\), and edges in \(T'\) that are not in \(T\). Let \(e\) be the smallest of these edges.
- In other words, \(e\) is the smallest edge in \((T\setminus T')\cup(T'\setminus T)\).
- WLOG, \(e\in T\setminus T'\).
- Adding \(e\) to \(T'\) creates a cycle.
- Since \(T\) is acyclic, at least one edge of this cycle is not in \(T\); that edge lies in \(T'\setminus T\), so it is heavier than \(e\).
- Exchange that edge for \(e\): the result is still a spanning tree, but with smaller total weight than \(T'\). That contradicts the minimality of \(T'\).
We have proved the following.
Lemma. If all the edge weights of a graph are distinct, then its minimum spanning tree is unique.
So we will henceforth refer to “the” minimum spanning tree of a graph.
An intermediate spanning forest of a weighted graph \(G\) is a subgraph of the minimum spanning tree of \(G\).
For any intermediate spanning forest \(F\) of a graph \(G\):
- An edge of \(G\) is useless if it is not in \(F\), but both of its endpoints lie in the same component of \(F\).
- An edge of \(G\) is safe if it is the minimum-weight edge with exactly one endpoint in some component of \(F\).
In the following graph \(G\), the bold edges form an intermediate spanning forest \(F\).
Identify all the safe edges and all the useless edges.
What happens if you add a useless edge to \(F\)?
Add all the safe edges to \(F\). Is the resulting forest \(F'\) still an intermediate spanning forest?
Repeat: Add all of the safe edges of \(F'\) to \(F'\), and continue. How will this process terminate?
(ASCII sketch of a weighted example graph; the edge weights shown are 1.1, 6.7, 5.2, 2.4, 2.3, and 3.3.)
(Common wrong answer on Jamboard.)
Suppose there exists a minimum spanning tree \(T\) that includes a maximum edge \(e_\text{max} = uv\) of some cycle. Remove \(e_\text{max}\) from \(T\).
Removing \(e_\text{max}\) splits \(T\) into two components, one containing \(u\) and the other containing \(v\). But the rest of the cycle is a path in \(G\) from \(u\) to \(v\) that avoids \(e_\text{max}\), so some edge \(e\) on this path must cross between the two components.
Exchange \(e\) for \(e_\text{max}\): the result is again a spanning tree, and since \(e_\text{max}\) is the maximum edge of the cycle, \(w(e) < w(e_\text{max})\). The new tree is lighter, contradicting the minimality of \(T\).
(Example graph with vertices \(a\) through \(f\) and weighted edges: \(ea\) (10), \(ab\) (3), \(bd\) (11), \(ac\) (2), \(bc\) (1), \(ef\) (16), \(df\) (15).)
Recall: for any intermediate spanning forest \(F\) of a graph \(G\), an edge is safe if it is the minimum-weight edge with exactly one endpoint in some component of \(F\).
Theorem. Let \(F\) be an intermediate spanning forest of a weighted graph \(G\), and suppose \(e\) is a safe edge. Then the minimum spanning tree contains \(e\).
- Let \(C\) be the component of \(F\) that \(e\) is “safe” for, so \(e\) has an endpoint in \(C\) and an endpoint in \(G \setminus C\), and \(e\) is the smallest such edge.
- Suppose to the contrary that the minimum spanning tree \(T\) does not contain \(e\). The endpoints \(u\) and \(v\) of \(e\) must be connected by a path in \(T\), and since exactly one of the endpoints of \(e\) is in \(C\), one of the edges \(e'\) of this path must have an endpoint in \(C\) and an endpoint not in \(C\).
- Removing \(e'\) from \(T\) creates a spanning forest with two components, one containing \(u\) and another containing \(v\). Adding \(e\) to this forest connects these components, and so creates a tree. But this new tree has smaller total weight than \(T\), because \(e\) is smaller than \(e'\). That’s a contradiction.
Let \(G\) be a weighted graph with vertex set \(V\) and edge set \(E\).
MetaMST(V,E):
    F = (V, ∅)
    while F is not a tree
        add a safe edge to F    // NOTE: This choice needs to be specified!!
    return F
(See Jamboard from last time.)
Use MetaMST, but add all the safe edges inside the while-loop.
MetaMST(V,E):
    F = (V, ∅)
    while F is not a tree
        add a safe edge to F    // NOTE: This choice needs to be specified!!
    return F
In order to analyze the time, we need to specify how the safe edges are located.
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
John | Claire | Logan | Graham | Kevin | Josiah | Isaac |
James | Jack | Blake | Trevor | Grace | Drake | Talia |
Ethan | Kristen | Bri | Levi | Andrew | Nathan | Jordan |
CountAndLabel(G):
    count <- 0
    for all vertices v
        unmark v
    for all vertices v
        if v is unmarked
            count <- count + 1
            LabelOneComponent(v, count)
    return count

LabelOneComponent(s, count):
    Enqueue(s)
    while the queue is not empty
        v <- Dequeue
        if v is unmarked
            mark v                  // Line #1
            v.comp <- count
            for each edge vw
                Enqueue(w)          // Line #2
Suppose \(G\) has \(V\) vertices and \(E\) edges.
- What is another name for the LabelOneComponent algorithm?
- How many times does CountAndLabel execute Line #1?
- How many times does CountAndLabel execute Line #2?
- What is the running time of CountAndLabel?
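A minimal runnable sketch of CountAndLabel in Python (assuming the graph is given as an adjacency-list dict; the names are illustrative, not from the slides):

```python
from collections import deque

def count_and_label(graph):
    """graph: dict mapping each vertex to a list of neighbors (undirected)."""
    comp = {}                       # vertex -> component number; present = "marked"
    count = 0
    for s in graph:
        if s in comp:
            continue
        count += 1                  # found a new component
        comp[s] = count
        queue = deque([s])          # LabelOneComponent is essentially breadth-first search
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in comp:   # w is unmarked
                    comp[w] = count
                    queue.append(w)
    return count, comp

# Example with three components:
# count_and_label({1: [2], 2: [1], 3: [4], 4: [3], 5: []})  ->  (3, {...})
```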
Borůvka(V, E):
    F = (V, ∅)
    count <- CountAndLabel(F)
    while count > 1
        AddAllSafeEdges(E, F, count)
        count <- CountAndLabel(F)
    return F

AddAllSafeEdges(E, F, count):
    for i <- 1 to count
        safe[i] <- Null
    for each edge uv ∈ E
        if comp(u) != comp(v)
            if safe[comp(u)] = Null or w(uv) < w(safe[comp(u)])
                safe[comp(u)] <- uv
            if safe[comp(v)] = Null or w(uv) < w(safe[comp(v)])
                safe[comp(v)] <- uv
    for i <- 1 to count
        add safe[i] to F
- What is the running time of the first CountAndLabel call?
- After the first AddAllSafeEdges call, how many components (at most) are there? (i.e., what is the maximum possible value of count?)
- What is the worst-case number of iterations of the while-loop?
- What is the running time of subsequent CountAndLabel calls?
- Give an upper bound on the running time of AddAllSafeEdges.
Read pp. 262-263. Upshot: Borůvka’s Algorithm often beats its worst-case \(O(E \log V)\) running time.
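For reference, a sketch of where the \(O(E \log V)\) bound comes from (assuming \(G\) is connected, so \(E \ge V - 1\)): each pass costs \(O(V+E)\) for CountAndLabel plus AddAllSafeEdges, and every component acquires a safe edge, so the number of components at least halves each pass. Hence there are at most \(O(\log V)\) passes, and

\[ \text{total time} = O\bigl((V+E)\log V\bigr) = O(E \log V). \]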
Dr. Wonjun Lee will be interviewing next week for our open CS professor position.
I have to attend the Teaching Demo, so class is cancelled for Monday, 3/29. (Also office hours will end early on Tuesday.)
(See Jamboard)
Need to add some details to this to specify completely. \(O(V+E)\)?
Boruvka(G) {
    initialize set edges_to_collapse
    for each vertex v in G {
        min_edge = null
        min_weight = infinity
        for each edge uv according to the adjacency list {
            if uv.weight < min_weight {
                min_edge = uv
                min_weight = uv.weight
            }
        }
        add min_edge to edges_to_collapse
    }
    for each edge uv in edges_to_collapse {
        add new node x to adjacency list
        for each neighbor n of u
            add n to x's linked list
            change "u" in n's linked list to "x"
        for each neighbor n of v
            if n is already in x's linked list
                if nv.weight < nu.weight    // keep the smaller parallel edge
                    nx.weight = nv.weight
            else
                add n to x's linked list
                change "v" in n's linked list to "x"
        remove u and v nodes from adjacency list
    }
}
BoruvkaContraction(A[1..V]):
    // iterate over all edges and mark those to be contracted
    for i <- 1..V:
        edges <- A[i]                       // has indices 1..n
        min <- (edge with weight of infinity)
        for j <- 1..n:
            if edges[j].weight < min.weight:
                min <- the edge from i to edges[j]
        mark the edge min
    for i <- 1..V:
        if A[i] contains a marked edge uv:
            // add v's out-edges to u
            new <- A[u]
            old <- A[v]
            for j <- 1..n:
                if old[j] is not in new:
                    add old[j] to A[u]      // i.e., to new
                else:                       // only add smallest parallel edge
                    parallel <- the edge in new to old[j]
                    if old[j].weight < parallel.weight:
                        delete parallel from A[u]   // from new
                        add A[v][j] to A[u]
                replace v in A[old[j]] with u
            delete old
Theorem. Let \(F\) be an intermediate spanning forest of a weighted graph \(G\), and suppose \(e\) is a safe edge. Then the minimum spanning tree contains \(e\).
MetaMST(V,E):
    F = (V, ∅)
    while F is not a tree
        add a safe edge to F    // NOTE: This choice needs to be specified!!
    return F
Exercise: Describe this completely, and verify its running time.
Kruskal(V, E):
    sort E by increasing weight (e.g., MergeSort)
    F <- (V, ∅)
    for each vertex v ∈ V
        MakeSet(v)                  // Set data structure partitioning V
    for i <- 1 to |E|
        uv <- ith lightest edge in E
        if Find(u) != Find(v)       // Are u and v in different partitions?
            Union(u, v)
            add uv to F
    return F
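A compact runnable sketch of Kruskal's algorithm in Python, using a simple union-find with path compression (illustrative names, not the textbook's code):

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples. Returns the list of MST edges."""
    parent = {v: v for v in vertices}       # MakeSet for every vertex

    def find(v):                            # Find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):           # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # u and v in different partitions?
            parent[ru] = rv                 # Union
            mst.append((w, u, v))
    return mst

# Example:
# kruskal(['a', 'b', 'c'], [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'c')])
# -> [(1, 'a', 'b'), (2, 'b', 'c')]
```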
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Jordan | Blake | Logan | Josiah | Bri | Grace | Isaac |
Andrew | Kristen | Levi | Kevin | James | Drake | Claire |
Trevor | Talia | Graham | Ethan | Nathan | John | Jack |
On the Jamboard, step through Jarník’s algorithm on the given graph. Label the edges in the order they get added. Start at the vertex at the top.
On the Jamboard, step through Kruskal’s algorithm on the given graph. Label the edges in the order they get added.
Weighted graphs can be represented with a \(V \times V\) matrix:
\[ \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,V} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,V} \\ \vdots & \vdots && \vdots \\ a_{V,1} & a_{V,2} & \cdots & a_{V,V} \end{bmatrix} \]
Order of addition: KH KJ JI IF FC FG FE GD DB DA
Use a priority queue that does DecreaseKey, Insert, and ExtractMin each in \(O(\log V)\) time.
Jarník(G):
    pick random vertex v
    T <- graph containing just v
    Insert all neighboring edges into priority queue (key = weight)
    while the priority queue is not empty:
        ExtractMin edge uv
        if uv is unmarked and u and v are not both in T:
            mark uv
            add uv to T
            for each unmarked edge neighboring u and v
                Insert edge (key = weight)
This performs \(O(E)\) ExtractMin and Insert's.

JarnikTree(G, start_vertex):
    tree = Tree()
    active_edges = PriorityQueue()
    add G[start_vertex].edges to active_edges (sorted by edge weight)
    while tree.node_count != G.node_count:
        safe = pop from active_edges
        if G[safe] in tree: continue
        add G[safe] to tree
        add G[safe].edges to active_edges (sorted by edge weight)
    return tree
A B C D E F G H I J K
A 0 7 0 3.2 0 0 0 0 0 0 0
B 7 0 8.9 3 0 0 0 0 0 0 0
C 0 8.9 0 9 6 2 0 0 0 0 0
D 3.2 3 9 0 0 5 4.1 0 0 0 0
E 0 0 6 0 0 4 0 5.7 0 0 0
F 0 0 2 5 4 0 2.6 0 3.7 0 0
G 0 0 0 4.1 0 2.6 0 0 0 7.5 0
H 0 0 0 0 5.7 0 0 0 8 0 4.7
I 0 0 0 0 0 3.7 0 8 0 1 6.2
J 0 0 0 0 0 0 7.5 0 1 0 5.3
K 0 0 0 0 0 0 0 4.7 6.2 5.3 0
// s is starting vertex
// M is adjacency matrix
Jarnik(M[V,V], s):
    T <- ({s}, ∅)
    neighbors <- []
    neighbors.add(s)
    while T does not contain every vertex
        min <- infinity
        for each e ∈ neighbors
            for i <- 1 to |V|
                if M[e,i] != 0 and i is not in T and M[e,i] < min
                    min <- M[e,i]
                    newNeighbor <- i
                    newEdge <- (e, i)
        add newNeighbor and newEdge to T
        neighbors.add(newNeighbor)
    return T
Time: \(O(V^3)\)
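For contrast with the \(O(V^3)\) matrix version above, a minimal adjacency-list sketch of Jarník's algorithm in Python using a binary heap (lazy deletion stands in for marking edges; names are illustrative, not from the slides):

```python
import heapq

def jarnik(graph, start):
    """graph: dict v -> list of (weight, neighbor). Returns the set of MST edges."""
    in_tree = {start}
    tree_edges = set()
    heap = [(w, start, v) for w, v in graph[start]]   # edges leaving the start vertex
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)       # ExtractMin
        if v in in_tree:                    # stale entry: both endpoints already in T
            continue
        in_tree.add(v)
        tree_edges.add((u, v, w))
        for w2, x in graph[v]:              # Insert the newly neighboring edges
            if x not in in_tree:
                heapq.heappush(heap, (w2, v, x))
    return tree_edges
```

With a binary heap this runs in \(O(E \log E) = O(E \log V)\) time.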
We find the shortest path by updating the tentative shortest paths until we can’t make them any better.
An edge \(u\rightarrow v\) is called tense if \(\text{dist}(u) + w(u\rightarrow v) < \text{dist}(v)\).
InitSSSP(s):
    dist(s) <- 0
    pred(s) <- Null
    for all vertices v != s
        dist(v) <- Infinity
        pred(v) <- Null

Relax(uv):
    dist(v) <- dist(u) + w(uv)
    pred(v) <- u
MetaSSSP(s):
    InitSSSP(s)
    while there is at least one tense edge    // NOTE: need to specify how to check
        Relax any tense edge                  // NOTE: need to specify how to choose
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Kristen | Claire | Trevor | Levi | Andrew | Drake | Jordan |
Jack | Logan | Talia | Isaac | Kevin | Blake | Josiah |
Graham | Grace | John | James | Bri | Nathan | Ethan |
Starting at the leftmost vertex \(s\), apply repeated Relaxings to produce the SSSP tree for the given graph. Make note of a relaxing that you do where none of the distances are \(\infty\).
Use a priority queue that does DecreaseKey, Insert, and ExtractMin each in \(O(\log V)\) time.
DijkstraSSSP(s):
    InitSSSP(s)
    for all vertices v
        Insert(v, dist(v))
    while the priority queue is not empty
        u <- ExtractMin()
        for all edges uv
            if uv is tense
                Relax(uv)
                DecreaseKey(v, dist(v))
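A runnable Python sketch of the same idea, using heapq with lazy deletion in place of an explicit DecreaseKey (an adaptation for illustration, not the slide's priority queue; assumes nonnegative edge weights and an adjacency-list dict):

```python
import heapq

def dijkstra_sssp(graph, s):
    """graph: dict u -> list of (w, v) with w >= 0. Returns (dist, pred)."""
    dist = {v: float('inf') for v in graph}   # InitSSSP
    pred = {v: None for v in graph}
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)            # ExtractMin
        if d > dist[u]:                       # stale entry (stand-in for DecreaseKey)
            continue
        for w, v in graph[u]:
            if dist[u] + w < dist[v]:         # edge u->v is tense
                dist[v] = dist[u] + w         # Relax(uv)
                pred[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, pred
```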
(See Figure 8.12)
Step through DijkstraSSSP on the Jamboard. Keep track of each Relax call (which edge gets relaxed, and in which order).
DijkstraSSSP(s):
    InitSSSP(s)
    for all vertices v
        Insert(v, dist(v))
    while the priority queue is not empty
        u <- ExtractMin()
        for all edges uv
            if uv is tense
                Relax(uv)
                DecreaseKey(v, dist(v))
- DecreaseKey, Insert, and ExtractMin are all \(O(\log V)\).
- At most \(E\) DecreaseKey, and at most \(V\) Insert and ExtractMin calls.

Use an array dist[] to store the distances for each vertex.

BellmanFordSSSP(s):
    dist[s] <- 0
    for every vertex v != s
        dist[v] <- Infinity
    for i <- 1 to V − 1
        for every edge uv
            if dist[v] > dist[u] + w(uv)
                dist[v] <- dist[u] + w(uv)
Time: \(O(VE)\)
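A direct Python transcription of this pseudocode (a sketch; the graph is given as an edge list, and the final pass for detecting negative cycles is an addition beyond the slide's version):

```python
def bellman_ford_sssp(vertices, edges, s):
    """edges: list of (u, v, w). Returns dist, or None if a negative cycle is reachable."""
    dist = {v: float('inf') for v in vertices}
    dist[s] = 0.0
    for _ in range(len(vertices) - 1):        # V - 1 rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:         # relax every tense edge
                dist[v] = dist[u] + w
    for u, v, w in edges:                     # extra pass: a still-tense edge means a negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist
```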
InitSSSP(s):
    dist(s) <- 0
    pred(s) <- Null
    for all vertices v != s
        dist(v) <- Infinity
        pred(v) <- Null

Relax(uv):
    dist(v) <- dist(u) + w(uv)
    pred(v) <- u
MetaSSSP(s):
    InitSSSP(s)
    while there is at least one tense edge    // NOTE: need to specify how to check
        Relax any tense edge                  // NOTE: need to specify how to choose
DijkstraSSSP(s):
    InitSSSP(s)
    for all vertices v
        Insert(v, dist(v))
    while the priority queue is not empty
        u <- ExtractMin()
        for all edges uv
            if uv is tense
                Relax(uv)
                DecreaseKey(v, dist(v))
Relaxation order: 2.0, 4.5, 4.7, 6.4, 5.9, 5.5, 3.6, 5.1, 3.4, 4.8, 1.1, 5.4, 3.7
distanceFromRoot attribute.
An edge \(u\rightarrow v\) is called tense if \(\text{dist}(u) + w(u\rightarrow v) < \text{dist}(v)\).
InitSSSP(s):
    dist(s) <- 0
    pred(s) <- Null
    for all vertices v != s
        dist(v) <- Infinity
        pred(v) <- Null

Relax(uv):
    dist(v) <- dist(u) + w(uv)
    pred(v) <- u
MetaSSSP(s):
    InitSSSP(s)
    while there is at least one tense edge    // NOTE: need to specify how to check
        Relax any tense edge                  // NOTE: need to specify how to choose
pred(v) and dist(v) get updated as the algorithm MetaSSSP runs.
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Ethan | John | Isaac | Grace | James | Graham | Bri |
Talia | Drake | Nathan | Trevor | Jordan | Levi | Josiah |
Kristen | Jack | Claire | Andrew | Logan | Blake | Kevin |
Suppose that \(s = \text{pred}(v) = \text{pred}(u)\), but the path \(s \rightarrow v\) is longer than the path \(s \rightarrow u \rightarrow v\). Which edge is tense?
Suppose that \(s \rightarrow \cdots \rightarrow \text{pred}(\text{pred}(v)) \rightarrow \text{pred}(v) \rightarrow v\) is not a shortest path from \(s\) to \(v\). That is, suppose there is a shorter path \(s \rightarrow u_1 \rightarrow u_2 \rightarrow \cdots \rightarrow u_k \rightarrow v\).
TopologicalSort(G):
    Call DFSAll(G) to compute finishing times v.post for all v in G
    Return vertices in order of decreasing finishing times
So we can topologically sort the vertices in \(O(V+E)\) time.
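A minimal DFS-based sketch of the same idea in Python (vertices returned in decreasing post-order; assumes the graph is a dag given as an adjacency-list dict):

```python
def topological_sort(graph):
    """graph: dict v -> list of out-neighbors (must be a dag)."""
    visited = set()
    postorder = []

    def dfs(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        postorder.append(v)          # v.post: v finishes now

    for v in graph:                  # DFSAll
        if v not in visited:
            dfs(v)
    return postorder[::-1]           # decreasing finishing times
```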
Recursive Structure: Compute the length of the shortest path from \(s\) to \(v\).
LengthSP(v):
    if v = s
        return 0
    else
        minLength <- Infinity
        for all edges uv
            tryLength <- LengthSP(u) + w(uv)
            if tryLength < minLength
                minLength <- tryLength
        return minLength
- Let v[1..n] be the vertex set in topological order. (s = v[1])
- Let LSP[1..n] be the length of the shortest path.
- LSP[v] depends on LSP[u] for \(u \prec v\).

DagSSSP(s):
    for all vertices v in topological order
        if v = s
            LSP(v) <- 0
        else
            LSP(v) <- Infinity
            for all edges uv
                if LSP(v) > LSP(u) + w(uv)
                    LSP(v) <- LSP(u) + w(uv)
                    pred(v) <- u
Secretly, LSP[i] is the same as dist(v[i]). See Figure 8.9.
DagSSSP time:

DagSSSP(s):
    for all vertices v in topological order    // O(V + E)
        if v = s
            LSP(v) <- 0
        else
            LSP(v) <- Infinity
            for all edges uv                   // need rev(G) to do this
                if LSP(v) > LSP(u) + w(uv)
                    LSP(v) <- LSP(u) + w(uv)
                    pred(v) <- u

- rev(G) can be computed in \(O(V+E)\) time.
- Total Time: \(O(V+E)\)
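A runnable sketch of DagSSSP in Python (a sketch assuming the graph is a dag whose every vertex appears as a key; the reverse adjacency list built up front plays the role of rev(G)):

```python
def dag_sssp(graph, s, order):
    """graph: dict u -> list of (w, v); order: all vertices in topological order."""
    rev = {v: [] for v in graph}              # rev(G): incoming edges, O(V + E)
    for u in graph:
        for w, v in graph[u]:
            rev[v].append((w, u))
    dist = {v: float('inf') for v in graph}
    pred = {v: None for v in graph}
    dist[s] = 0.0
    for v in order:                           # process vertices in topological order
        if v == s:
            continue
        for w, u in rev[v]:                   # all edges u -> v
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    return dist, pred
```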
What does it mean if pred(s) ever changes?

If pred(s) changes after InitSSSP, it would signify that the algorithm had already taken a path through the input graph that led back to s, meaning that there was some path s -> u_1 -> ... -> u_k -> s, and the edge u_k -> s would have to be tense. This would mean that the cycle is negative, since s starts with a distance of zero. Thus, the input graph contains a negative cycle that passes through s.
Prove that, if the input graph has no negative cycles, then \(P \cup X\) is always a dag.
Prove that, if \(P \cup X\) is always a dag, then the input graph has no negative cycles.
Run MetaSSSP, and start by relaxing edges to form a path to this negative cycle from \(s\).

Applications: Optimizing transportation, scheduling, distributed systems, image processing, baseball elimination, etc.
Let \(G\) be a directed graph where each edge \(e\) has capacity \(c(e)\).
- An \((s,t)\)-flow \(f\) assigns a value to each edge so that, at every vertex \(v\) other than \(s\) and \(t\), flow in equals flow out:
\[ \sum_a f(a\rightarrow v) = \sum_z f(v \rightarrow z) \]
- The value \(|f|\) of a flow \(f\) is the total net flow out of \(s\) (or, equivalently, into \(t\)):
\[ |f| = \sum_z f(s \rightarrow z) - \sum_a f(a \rightarrow s) \]
Table #1 | Table #2 | Table #3 | Table #4 | Table #5 | Table #6 | Table #7 |
---|---|---|---|---|---|---|
Levi | Claire | Talia | Bri | Drake | Ethan | Graham |
Kevin | Isaac | Jordan | Josiah | Kristen | James | Blake |
Logan | Andrew | Nathan | John | Grace | Jack | Trevor |
Can you increase the flow in Figure 10.2, keeping it feasible? If so, try to make its value as big as possible.
Let \(G\) be a directed graph where each edge \(e\) has capacity \(c(e)\).
Find a cut in the graph in Figure 10.2, and try to make its capacity as small as possible.
Try to find a flow whose value is bigger than the capacity of the cut that you found.
Lemma. The value of any feasible \((s,t)\)-flow is less than or equal to the capacity of any \((S,T)\)-cut.
Proof.
- The value of the flow is the net flow out of \(s\).
- The net flow out of \(s\) is the sum of the net flows through all the vertices in \(S\) (since all the others have a net of zero).
- This remains true if we remove the edges that go from \(S\) to \(S\) (zero sum).
- The only edges left go from \(S\) to \(T\).
- So the total flow from \(S\) to \(T\) can’t be more than the capacity.
- The max happens when all the edges across are saturated, and all the edges back are avoided.
- Lemma. The value of any feasible \((s,t)\)-flow is less than or equal to the capacity of any \((S,T)\)-cut.
- Corollary. If you can find a cut and a flow such that the capacity of the cut equals the value of the flow, you have found a maximum flow.
- Theorem. You can always find such a cut and flow.
- (Proof next week.)
- Note: a minimum cut of 1 would be a street both dogs were forced to walk on.
Maxflow-Mincut Theorem. In every flow network with source \(s\) and target \(t\), the value of the maximum \((s,t)\)-flow is equal to the capacity of the minimum \((s,t)\)-cut.
Proof. Ford-Fulkerson Algorithm
Given a flow network \(G\) and a flow \(f\), the residual graph \(G_f\) is the graph with the same vertex set as \(G\), but whose edges indicate how much \(f\) could be increased/decreased on each edge of \(G\).
See Figure 10.5.
Draw the residual graph for the given graph. (See Jamboard)
Now suppose \(G_f\) has a path from \(s\) to \(t\).
See Figure 10.5 again.
Suppose there is no path from \(s\) to \(t\) in \(G_f\).
That means we’ve found a cut with the same capacity as the value of the flow.
- This proves the theorem! (You can always find a matching cut/flow).
- This gives us the algorithm too.
Repeat until no path from \(s\) to \(t\) exists.
Start with a zero flow and construct the residual graph \(G_f\). Repeat the following until there’s no path from \(s\) to \(t\) in \(G_f\).
Each iteration is linear time: \(O(E)\) since the graph is connected.
Total Time: \(O(E|f^*|)\).
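A compact Python sketch of this loop, with BFS used to find augmenting paths (so strictly speaking Edmonds–Karp; capacities stored in a dict of dicts, names illustrative, and \(s \neq t\) assumed):

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: dict u -> {v: capacity}. Returns the value of a maximum (s,t)-flow."""
    # residual capacities; make sure every edge also has a reverse edge (capacity 0)
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    value = 0
    while True:
        parent = {s: None}                    # BFS for an augmenting s-t path in G_f
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value                      # no augmenting path: the flow is maximum
        path, v = [], t                       # recover the path, then push the bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck           # update the residual graph
            res[v][u] += bottleneck
        value += bottleneck
```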
So when estimating running times of algorithms that use maximum flows, you can assume the maximum flow part has running time \(O(VE)\).
- Example. Given a bipartite graph where \(L\) represents workers, \(R\) represents jobs, and an edge from a worker to a job means the worker can do that job:
- Assign each worker a job.
- Do so maximally (assign as many jobs as possible)
- Bipartite matching reduces to a maximum flow problem.
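A sketch of the standard reduction, built on the max_flow sketch above (assumes the labels 's' and 't' are not already used for a worker or job): add a source with unit-capacity edges to every worker, give every worker-to-job edge capacity 1, and add unit-capacity edges from every job to a sink. The value of a maximum flow equals the size of a maximum matching.

```python
def matching_size(workers, jobs, can_do):
    """can_do: set of (worker, job) pairs. Returns the size of a maximum matching."""
    cap = {'s': {w: 1 for w in workers}}      # source -> each worker, capacity 1
    for w in workers:
        cap[w] = {j: 1 for (w2, j) in can_do if w2 == w}
    for j in jobs:
        cap[j] = {'t': 1}                     # each job -> sink, capacity 1
    return max_flow(cap, 's', 't')            # maxflow value = matching size
```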
The flow network on the Jamboard purports to solve a bipartite matching problem by finding a maximum flow. Show that this really is a maximum flow by finding a minimum cut.
You have to assign rooms, times, and proctors for final exams in 9 different classes. There are 5 rooms, 4 exam times, and 7 proctors.
See Figure 11.5.
Can a team still finish with the most wins?
Team | Won–Lost | Left | NYY | BAL | BOS | TOR | DET |
---|---|---|---|---|---|---|---|
New York Yankees | 75–59 | 28 | — | 3 | 8 | 7 | 3 |
Baltimore Orioles | 71–63 | 28 | 3 | — | 2 | 7 | 4 |
Boston Red Sox | 69–66 | 27 | 8 | 2 | — | 0 | 0 |
Toronto Blue Jays | 63–72 | 27 | 7 | 7 | 0 | — | 0 |
Detroit Tigers | 49–86 | 27 | 3 | 4 | 0 | 0 | — |
See Figure 11.7.
Suppose there are \(n\) teams. The flow model has a start vertex \(s\), a list of pairing vertices, a list of team vertices, and a terminal vertex \(t\).
How many possible pairings of teams are there? (Give an exact answer, and an asymptotic estimate.)
How many vertices are in the flow model? (asymptotically)
How many edges leave each pairing vertex? How many edges are in the flow model?
What is the running time, using Orlin’s algorithm?
Would using the Ford-Fulkerson algorithm ever have a lower asymptotic running time?
Reduces to bipartite matching (which reduces to maxflow).
See Figure 11.6.