1. Introduction to Dynamic Programming
1.1 Key Properties for DP
- Optimal Substructure: an optimal solution is built from optimal solutions to its subproblems
- Overlapping Subproblems: the same subproblems recur, so their solutions are worth storing
1.2 DP Approaches
Top-Down (Memoization)
- Recursive approach
- Store results of subproblems
- Computes only needed subproblems
- Easier to implement
Bottom-Up (Tabulation)
- Iterative approach
- Build solution from smallest subproblems
- Computes all subproblems
- More efficient (no recursion overhead)
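The two approaches can be contrasted on Fibonacci numbers (a standard illustration, not part of the notes above); a minimal Python sketch:

```python
from functools import lru_cache

# Top-down: recursion + memoization; only needed subproblems are computed.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: fill a table from the smallest subproblem up; no recursion overhead.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040
```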
2. Greedy vs Dynamic Programming
| Aspect | Greedy | Dynamic Programming |
|---|---|---|
| Choice | Local optimal (never reconsidered) | Global optimal (considers all options) |
| Subproblems | Single subproblem after each choice | Multiple overlapping subproblems |
| Approach | Top-down only | Top-down or bottom-up |
| Optimality | Not always optimal | Always optimal (if applicable) |
| Efficiency | Generally faster | More space/time for storing |
| Knapsack | Fractional (works) | 0/1 (required) |
Example: Capacity W = 50
| Item | Weight | Value | Ratio |
|---|---|---|---|
| 1 | 10 | 60 | 6.0 |
| 2 | 20 | 100 | 5.0 |
| 3 | 30 | 120 | 4.0 |
Greedy (by ratio): Item 1 (w=10, v=60) + Item 2 (w=20, v=100) = Total: w=30, v=160
Can still fit Item 3? No (30+30=60 > 50)
Greedy Result: Value = 160, Weight = 30
Optimal (DP): Item 2 (w=20, v=100) + Item 3 (w=30, v=120) = Total: w=50, v=220
Optimal Result: Value = 220, Weight = 50
Greedy gives 160, but optimal is 220!
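The greedy-by-ratio strategy can be sketched in Python to reproduce the 160-vs-220 gap (the function name and tuple format are choices for this sketch):

```python
def greedy_by_ratio(items, capacity):
    """Greedy by value/weight ratio, taking whole items only (0/1 setting).
    items: list of (weight, value) tuples. Returns (total_value, total_weight)."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0
    for w, v in order:
        if total_w + w <= capacity:   # take the item if it still fits
            total_w += w
            total_v += v
    return total_v, total_w

items = [(10, 60), (20, 100), (30, 120)]
print(greedy_by_ratio(items, 50))  # (160, 30) -- not the optimum of 220
```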
3. Dijkstra's Algorithm (Single Source Shortest Path)
3.1 Algorithm
- Initialize distances: dist[source] = 0, dist[all others] = ∞
- Create set of unvisited vertices
- While unvisited set not empty:
  - Select vertex u with minimum distance from unvisited set
  - Mark u as visited
  - For each neighbor v of u:
    - If dist[u] + weight(u,v) < dist[v]:
      - dist[v] = dist[u] + weight(u,v)
```
function DIJKSTRA(graph, source):
    dist[] = {∞, ∞, ..., ∞}
    dist[source] = 0
    visited[] = {false, false, ..., false}
    for i = 1 to |V|:
        // Find unvisited vertex with minimum distance
        u = vertex with MIN(dist[u]) where visited[u] = false
        visited[u] = true
        // Relax all edges from u
        for each neighbor v of u:
            if not visited[v]:
                if dist[u] + weight(u,v) < dist[v]:
                    dist[v] = dist[u] + weight(u,v)
    return dist[]
```
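A runnable Python version of the same algorithm, using a binary heap (`heapq`) instead of a linear scan; the adjacency-list format is an assumption for this sketch:

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]}. Returns a dict of shortest distances."""
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:            # stale heap entry: u was already finalized
            continue
        for v, w in adj[u]:        # relax all edges out of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Edge weights as read from the example graph below
graph = {
    'S': [('A', 2), ('E', 5)],
    'A': [('B', 4), ('C', 1)],
    'B': [('D', 2)],
    'C': [('B', 3), ('D', 7)],
    'D': [],
    'E': [('D', 6)],
}
print(dijkstra(graph, 'S'))  # {'S': 0, 'A': 2, 'B': 6, 'C': 3, 'D': 8, 'E': 5}
```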
Example graph (edge weights): S→A = 2, S→E = 5, A→B = 4, A→C = 1, C→B = 3, B→D = 2, C→D = 7, E→D = 6
| Step | Visit | dist[A] | dist[B] | dist[C] | dist[D] | dist[E] |
|---|---|---|---|---|---|---|
| Init | S | 2 | ∞ | ∞ | ∞ | 5 |
| 1 | A (min=2) | 2 | 6 (2+4) | 3 (2+1) | ∞ | 5 |
| 2 | C (min=3) | 2 | 6 | 3 | 10 (3+7) | 5 |
| 3 | E (min=5) | 2 | 6 | 3 | 10 | 5 |
| 4 | B (min=6) | 2 | 6 | 3 | 8 (6+2) | 5 |
| 5 | D (min=8) | 2 | 6 | 3 | 8 | 5 |
S→A: 2, S→B: 6, S→C: 3, S→D: 8, S→E: 5
3.2 Time Complexity
| Implementation | Time Complexity |
|---|---|
| Array (simple) | O(V²) |
| Binary Heap | O((V+E) log V) |
| Fibonacci Heap | O(E + V log V) |
4. Bellman-Ford Algorithm
4.1 Algorithm
- Initialize distances: dist[source] = 0, dist[all others] = ∞
- Repeat (V-1) times:
  - For each edge (u, v) with weight w:
    - If dist[u] + w < dist[v]: dist[v] = dist[u] + w
- Check for negative cycles:
  - For each edge (u, v) with weight w:
    - If dist[u] + w < dist[v]: NEGATIVE CYCLE EXISTS
```
function BELLMAN_FORD(graph, source):
    dist[] = {∞, ∞, ..., ∞}
    dist[source] = 0
    // Relax all edges V-1 times
    for i = 1 to |V| - 1:
        for each edge (u, v, w) in graph:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    // Check for negative cycle
    for each edge (u, v, w) in graph:
        if dist[u] + w < dist[v]:
            return "Negative cycle detected"
    return dist[]
```
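A runnable Python sketch of the pseudocode (the sample edge list, with one negative edge, is illustrative and not from the notes):

```python
def bellman_ford(num_vertices, edges, source):
    """edges: list of (u, v, w) tuples; vertices numbered 0..num_vertices-1.
    Returns the distance list, or None if a negative cycle is reachable."""
    dist = [float('inf')] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):      # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # any further improvement means
        if dist[u] + w < dist[v]:          # a negative cycle exists
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```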
4.2 Dijkstra vs Bellman-Ford
| Aspect | Dijkstra | Bellman-Ford |
|---|---|---|
| Negative weights | No | Yes |
| Negative cycle detection | No | Yes |
| Time Complexity | O(E log V) | O(VE) |
| Approach | Greedy | Dynamic Programming |
4.3 Time Complexity
O(V × E) - iterate V-1 times over all E edges
5. Floyd-Warshall Algorithm (All-Pairs Shortest Path)
5.1 Key Idea
For each pair (i, j), consider all possible intermediate vertices k. If going through k is shorter, update the distance.
5.2 Algorithm
```
function FLOYD_WARSHALL(graph):
    // Initialize distance matrix
    dist[][] = graph adjacency matrix
    // dist[i][j] = weight of edge (i,j) if it exists, else ∞
    // dist[i][i] = 0 for all i
    // Consider each vertex as intermediate
    for k = 1 to n:
        for i = 1 to n:
            for j = 1 to n:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist[][]
```
Initial Distance Matrix D⁰:
|   | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 0 | 3 | ∞ | 7 |
| 2 | 8 | 0 | 2 | ∞ |
| 3 | 5 | ∞ | 0 | 1 |
| 4 | 2 | ∞ | ∞ | 0 |
k=1: Consider paths through vertex 1
D¹[i][j] = min(D⁰[i][j], D⁰[i][1] + D⁰[1][j])
- D¹[2][4] = min(∞, 8+7) = 15
- D¹[3][2] = min(∞, 5+3) = 8
- D¹[3][4] = min(1, 5+7) = 1 (no change)
- D¹[4][2] = min(∞, 2+3) = 5
After k=1 (D¹):
|   | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 0 | 3 | ∞ | 7 |
| 2 | 8 | 0 | 2 | 15 |
| 3 | 5 | 8 | 0 | 1 |
| 4 | 2 | 5 | ∞ | 0 |
Continue for k=2, k=3, k=4...
Final Result D⁴:
|   | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 0 | 3 | 5 | 6 |
| 2 | 5 | 0 | 2 | 3 |
| 3 | 3 | 6 | 0 | 1 |
| 4 | 2 | 5 | 7 | 0 |
Example paths:
- 1 → 4: Direct = 7, via 2,3 = 3+2+1 = 6 (shorter!)
- 2 → 1: Direct = 8, via 3,4 = 2+1+2 = 5 (shorter!)
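The worked example can be checked with a direct Python translation of the pseudocode in 5.2:

```python
INF = float('inf')

def floyd_warshall(dist):
    """dist: n x n matrix; dist[i][j] = edge weight or INF, dist[i][i] = 0.
    Updates and returns the matrix of all-pairs shortest distances."""
    n = len(dist)
    for k in range(n):             # k = intermediate vertex
        for i in range(n):         # i = source
            for j in range(n):     # j = destination
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# D0 from the worked example (vertices 1..4 mapped to indices 0..3)
D0 = [[0, 3, INF, 7],
      [8, 0, 2, INF],
      [5, INF, 0, 1],
      [2, INF, INF, 0]]
for row in floyd_warshall(D0):
    print(row)
# [0, 3, 5, 6]
# [5, 0, 2, 3]
# [3, 6, 0, 1]
# [2, 5, 7, 0]
```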
5.3 Time & Space Complexity
| Aspect | Complexity |
|---|---|
| Time | O(V³) - three nested loops |
| Space | O(V²) - distance matrix |
6. 0/1 Knapsack Problem (DP)
6.1 DP Recurrence
K[i][w] = max value using items 1..i with capacity w
v[i] = value of item i, w[i] = weight of item i
Recurrence:
- K[i][w] = K[i-1][w] if w[i] > w (item i does not fit)
- K[i][w] = max(K[i-1][w], v[i] + K[i-1][w - w[i]]) otherwise
- Base case: K[0][w] = K[i][0] = 0
6.2 Algorithm
```
function KNAPSACK_01(W, weights[], values[], n):
    // Create DP table K[0..n][0..W]
    for i = 0 to n:
        for w = 0 to W:
            if i == 0 or w == 0:
                K[i][w] = 0
            else if weights[i] <= w:
                // Can include item i
                K[i][w] = max(values[i] + K[i-1][w - weights[i]],
                              K[i-1][w])
            else:
                // Cannot include item i
                K[i][w] = K[i-1][w]
    return K[n][W]
```
M = 21, n = 4
(p1..p4) = (2, 5, 8, 1)
(w1..w4) = (10, 15, 6, 9)
Items:
| Item | Weight | Value |
|---|---|---|
| 1 | 10 | 2 |
| 2 | 15 | 5 |
| 3 | 6 | 8 |
| 4 | 9 | 1 |
DP Table K[i][w]: (showing key values)
| i\w | 0 | 6 | 9 | 10 | 15 | 16 | 19 | 21 |
|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 (w=10,v=2) | 0 | 0 | 0 | 2 | 2 | 2 | 2 | 2 |
| 2 (w=15,v=5) | 0 | 0 | 0 | 2 | 5 | 5 | 5 | 5 |
| 3 (w=6,v=8) | 0 | 8 | 8 | 8 | 8 | 10 | 10 | 13 |
| 4 (w=9,v=1) | 0 | 8 | 8 | 8 | 9 | 10 | 10 | 13 |
Building the table (key steps):
- K[3][21] = max(K[2][21], 8 + K[2][15]) = max(5, 8+5) = 13
- K[4][21] = max(K[3][21], 1 + K[3][12]) = max(13, 1+8) = 13
Traceback to find selected items:
- K[4][21] = 13 = K[3][21], so item 4 NOT included
- K[3][21] = 13 โ K[2][21] = 5, so item 3 IS included (w=6, v=8)
- Remaining: K[2][15] = 5 โ K[1][15] = 2, so item 2 IS included (w=15, v=5)
- Remaining: K[1][0] = 0, done
Selected Items: 2 and 3
Total Weight: 15 + 6 = 21
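The table construction and traceback above can be sketched in Python (a compact version of the pseudocode in 6.2, run on this instance):

```python
def knapsack_01(W, weights, values):
    n = len(weights)
    # K[i][w]: best value using items 1..i with capacity w (items are 1-based)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            K[i][w] = K[i - 1][w]                    # skip item i
            if weights[i - 1] <= w:                  # or take it, if it fits
                K[i][w] = max(K[i][w],
                              values[i - 1] + K[i - 1][w - weights[i - 1]])
    # Traceback: item i was taken iff the value changed from row i-1
    chosen, w = [], W
    for i in range(n, 0, -1):
        if K[i][w] != K[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return K[n][W], sorted(chosen)

print(knapsack_01(21, [10, 15, 6, 9], [2, 5, 8, 1]))  # (13, [2, 3])
```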
6.3 Time & Space Complexity
| Aspect | Complexity |
|---|---|
| Time | O(n × W) |
| Space | O(n × W), or O(W) with optimization |
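The O(W) space optimization keeps a single row and iterates capacities downward so each item is used at most once; a minimal sketch (it returns only the optimal value, not the chosen items):

```python
def knapsack_01_1d(W, weights, values):
    """Same recurrence as the 2-D table, but only one row is kept.
    Iterating w downward ensures each item is counted at most once."""
    K = [0] * (W + 1)
    for wt, val in zip(weights, values):
        for w in range(W, wt - 1, -1):
            K[w] = max(K[w], val + K[w - wt])
    return K[W]

print(knapsack_01_1d(21, [10, 15, 6, 9], [2, 5, 8, 1]))  # 13
```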
7. Travelling Salesman Problem (TSP)
7.1 DP Approach (Held-Karp Algorithm)
Use bitmask to represent visited cities. State: (current city, set of visited cities).
```
function TSP_DP(dist[][], n):
    // dp[mask][i] = min cost to reach city i with visited cities = mask
    dp[][] = all ∞
    dp[1][0] = 0  // Start at city 0 (mask 1 = {city 0})
    for mask = 1 to (2^n - 1):
        for last = 0 to n-1:
            if mask has bit 'last' set:
                for next = 0 to n-1:
                    if mask doesn't have bit 'next' set:
                        new_mask = mask | (1 << next)
                        dp[new_mask][next] = min(dp[new_mask][next],
                                                 dp[mask][last] + dist[last][next])
    // Find min cost to return to city 0
    full_mask = (2^n - 1)
    answer = min over all i of (dp[full_mask][i] + dist[i][0])
    return answer
```
Distance Matrix:
|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 0 | 10 | 15 | 20 |
| 1 | 10 | 0 | 35 | 25 |
| 2 | 15 | 35 | 0 | 30 |
| 3 | 20 | 25 | 30 | 0 |
Optimal Tour: 0 → 1 → 3 → 2 → 0
Cost: 10 + 25 + 30 + 15 = 80
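The tour cost above can be verified with a Python sketch of Held-Karp, run on this distance matrix:

```python
def tsp_held_karp(dist):
    n = len(dist)
    INF = float('inf')
    # dp[mask][i]: min cost to start at city 0, visit exactly the cities
    # in mask, and end at city i (bit i must be set in mask)
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0
    for mask in range(1 << n):
        for last in range(n):
            if dp[mask][last] == INF:
                continue
            for nxt in range(n):
                if not mask & (1 << nxt):          # extend tour to unvisited city
                    new_mask = mask | (1 << nxt)
                    cost = dp[mask][last] + dist[last][nxt]
                    if cost < dp[new_mask][nxt]:
                        dp[new_mask][nxt] = cost
    full = (1 << n) - 1
    # Close the tour by returning to city 0 from the last city
    return min(dp[full][i] + dist[i][0] for i in range(1, n))

D = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_held_karp(D))  # 80
```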
7.2 Time & Space Complexity
| Approach | Time | Space |
|---|---|---|
| Brute Force | O(n!) | O(n) |
| DP (Held-Karp) | O(n² × 2ⁿ) | O(n × 2ⁿ) |
8. Practice Questions
M=15, n=4, (p1..p4)=(10,5,15,7), (w1..w4)=(2,3,5,7)
Build table K[i][w] for i=0..4, w=0..15
Maximum Value = 32
Selected items: 1, 3, 4 (w=2+5+7=14, v=10+15+7=32)
(Items 1, 2, 3 give only w=2+3+5=10, v=10+5+15=30, which is not optimal.)
Divide & Conquer:
- Subproblems are independent
- Top-down only
- No overlapping subproblems
- Examples: Merge Sort, Quick Sort
Dynamic Programming:
- Overlapping subproblems
- Top-down or bottom-up
- Stores solutions (memoization/tabulation)
- Examples: Knapsack, LCS, Floyd-Warshall
See Section 3 for algorithm and example.
Why no negative weights: Dijkstra marks vertices as "done" after visiting. With negative edges, a "done" vertex might have a shorter path discovered later, which Dijkstra won't consider.
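A minimal counterexample (the three-vertex graph here is assumed for illustration): Dijkstra finalizes A at distance 2 via the direct edge, but the true shortest path S → B → A costs 5 + (-4) = 1, which it never sees:

```python
import heapq

def dijkstra_basic(adj, source):
    """Classic Dijkstra: a finalized vertex is never relaxed again."""
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                    # u is now "done": never revisited
        for v, w in adj[u]:
            if v not in done and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

neg_graph = {'S': [('A', 2), ('B', 5)], 'A': [], 'B': [('A', -4)]}
print(dijkstra_basic(neg_graph, 'S')['A'])  # 2, but the true shortest distance is 1
```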
See Section 5 (Floyd-Warshall) for complete algorithm and worked example.
Key: Three nested loops - k (intermediate), i (source), j (destination)
Recurrence: dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
Key Points Summary
- DP requires: Optimal Substructure + Overlapping Subproblems
- Greedy fails for 0/1 Knapsack because it can't undo choices
- Dijkstra: Single source, no negative weights, O(E log V)
- Bellman-Ford: Single source, handles negative weights, O(VE)
- Floyd-Warshall: All pairs, O(Vยณ), uses intermediate vertices
- 0/1 Knapsack: K[i][w] = max(not take, take if fits), O(nW)
- TSP: NP-Hard, DP gives O(n² × 2ⁿ) vs O(n!) brute force
- Always show: recurrence relation, table filling, traceback