Module V

Dynamic Programming

Master DP concepts with the Dijkstra, Bellman-Ford, and Floyd-Warshall shortest-path algorithms, the 0/1 Knapsack problem, and the Travelling Salesman Problem.

1. Introduction to Dynamic Programming

Dynamic Programming (DP)
Dynamic Programming is an algorithmic paradigm that solves complex problems by breaking them into simpler overlapping subproblems, solving each subproblem only once, and storing the results to avoid redundant computation.
🎯 Exam Tip (100% Frequency): 0/1 Knapsack using DP appears in every paper! You must know how to build the DP table AND trace back to find the selected items. Floyd-Warshall (All-Pairs) has appeared in recent papers with increasing frequency.

1.1 Key Properties for DP

1. Optimal Substructure: An optimal solution to the problem contains optimal solutions to its subproblems.
2. Overlapping Subproblems: The same subproblems are solved multiple times; DP stores results to avoid recomputation.

1.2 DP Approaches

Top-Down (Memoization)

  • Recursive approach
  • Store results of subproblems
  • Computes only needed subproblems
  • Easier to implement

Bottom-Up (Tabulation)

  • Iterative approach
  • Build solution from smallest subproblems
  • Computes all subproblems
  • More efficient (no recursion overhead)
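The two approaches can be contrasted on the classic Fibonacci example. A minimal Python sketch (function names are illustrative, not from the text):

```python
from functools import lru_cache

# Top-down (memoization): plain recursion plus a cache of subproblem results.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): build a table from the smallest subproblems upward.
def fib_tab(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040
```

Without the cache, the recursive version recomputes the same subproblems exponentially often; both versions here run in linear time.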

2. Greedy vs Dynamic Programming

Aspect      | Greedy                              | Dynamic Programming
Choice      | Local optimal (never reconsidered)  | Global optimal (considers all options)
Subproblems | Single subproblem after each choice | Multiple overlapping subproblems
Approach    | Top-down only                       | Top-down or bottom-up
Optimality  | Not always optimal                  | Always optimal (if applicable)
Efficiency  | Generally faster                    | More space/time for storing results
Knapsack    | Fractional (works)                  | 0/1 (required)
๐Ÿ“ Why Greedy Fails for 0/1 Knapsack

Example: Capacity W = 50

Item | Weight | Value | Ratio
1    | 10     | 60    | 6.0
2    | 20     | 100   | 5.0
3    | 30     | 120   | 4.0

Greedy (by ratio): Item 1 (w=10, v=60) + Item 2 (w=20, v=100) = Total: w=30, v=160

Can still fit Item 3? No (30+30=60 > 50)

Greedy Result: Value = 160, Weight = 30

Optimal (DP): Item 2 (w=20, v=100) + Item 3 (w=30, v=120) = Total: w=50, v=220

Optimal Result: Value = 220, Weight = 50

Greedy gives 160, but optimal is 220!
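The failure can be checked programmatically. A small Python sketch (function names are illustrative) compares the ratio-greedy selection against an exhaustive search over all item subsets, used here only to confirm the true optimum; the DP formulation appears in Section 6:

```python
from itertools import combinations

def greedy_by_ratio(weights, values, W):
    # Pick items in decreasing value/weight ratio; a choice is never undone.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_w = total_v = 0
    for i in order:
        if total_w + weights[i] <= W:
            total_w += weights[i]
            total_v += values[i]
    return total_v

def best_by_enumeration(weights, values, W):
    # Exhaustive check of every subset, to confirm the optimum.
    best = 0
    for r in range(len(weights) + 1):
        for combo in combinations(range(len(weights)), r):
            if sum(weights[i] for i in combo) <= W:
                best = max(best, sum(values[i] for i in combo))
    return best

weights, values = [10, 20, 30], [60, 100, 120]
print(greedy_by_ratio(weights, values, 50))      # 160
print(best_by_enumeration(weights, values, 50))  # 220
```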

3. Dijkstra's Algorithm (Single Source Shortest Path)

Dijkstra's Algorithm
Finds the shortest path from a single source vertex to all other vertices in a weighted graph with non-negative edge weights. Uses a greedy approach but has optimal substructure.
⚠️ Limitation: Dijkstra's algorithm does NOT work correctly with negative edge weights. Use Bellman-Ford for graphs with negative weights.

3.1 Algorithm

📋 Dijkstra's Algorithm Steps
  1. Initialize distances: dist[source] = 0, dist[all others] = ∞
  2. Create set of unvisited vertices
  3. While unvisited set not empty:
    • Select vertex u with minimum distance from unvisited set
    • Mark u as visited
    • For each neighbor v of u:
    •   If dist[u] + weight(u,v) < dist[v]:
    •     dist[v] = dist[u] + weight(u,v)
Pseudocode
function DIJKSTRA(graph, source):
    dist[] = {∞, ∞, ..., ∞}
    dist[source] = 0
    visited[] = {false, false, ..., false}

    for i = 1 to |V|:
        // Find unvisited vertex with minimum distance
        u = vertex with MIN(dist[u]) where visited[u] = false
        visited[u] = true

        // Relax all edges from u
        for each neighbor v of u:
            if not visited[v]:
                if dist[u] + weight(u,v) < dist[v]:
                    dist[v] = dist[u] + weight(u,v)

    return dist[]
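The pseudocode above can be made runnable in Python, using a binary heap (`heapq`) for the minimum-distance selection instead of a linear scan. The adjacency list below encodes the solved example's graph; the edge leaving E is assumed to run to D with weight 6, as the distance table implies:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]          # min-heap ordered by tentative distance
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue            # stale heap entry; u was already finalized
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    'S': [('A', 2), ('E', 5)],
    'A': [('B', 4), ('C', 1)],
    'B': [('D', 2)],
    'C': [('B', 3), ('D', 7)],
    'D': [],
    'E': [('D', 6)],
}
print(dijkstra(graph, 'S'))  # {'S': 0, 'A': 2, 'B': 6, 'C': 3, 'D': 8, 'E': 5}
```

Pushing duplicate heap entries and skipping stale ones on pop is the standard idiom; it avoids a decrease-key operation, which `heapq` does not provide.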
Solved Example (33% Frequency) 10 Marks
Find shortest paths from source S using Dijkstra's algorithm.
Weighted Graph (directed edges):
S→A = 2, S→E = 5, A→B = 4, A→C = 1, C→B = 3, C→D = 7, B→D = 2, E→D = 6
✓ Solution
Step | Visit     | dist[A] | dist[B] | dist[C] | dist[D]  | dist[E]
Init | S         | 2       | ∞       | ∞       | ∞        | 5
1    | A (min=2) | 2       | 6 (2+4) | 3 (2+1) | ∞        | 5
2    | C (min=3) | 2       | 6       | 3       | 10 (3+7) | 5
3    | E (min=5) | 2       | 6       | 3       | 10       | 5
4    | B (min=6) | 2       | 6       | 3       | 8 (6+2)  | 5
5    | D (min=8) | 2       | 6       | 3       | 8        | 5
Shortest Distances from S:
S→A: 2, S→B: 6, S→C: 3, S→D: 8, S→E: 5

3.2 Time Complexity

Implementation | Time Complexity
Array (simple) | O(V²)
Binary Heap    | O((V+E) log V)
Fibonacci Heap | O(E + V log V)

4. Bellman-Ford Algorithm

Bellman-Ford Algorithm
Finds shortest paths from a single source to all vertices, even with negative edge weights. Also detects negative weight cycles.

4.1 Algorithm

📋 Bellman-Ford Algorithm
  1. Initialize distances: dist[source] = 0, dist[all others] = ∞
  2. Repeat (V-1) times:
    • For each edge (u, v) with weight w:
    • If dist[u] + w < dist[v]: dist[v] = dist[u] + w
  3. Check for negative cycles:
    • For each edge (u, v) with weight w:
    • If dist[u] + w < dist[v]: NEGATIVE CYCLE EXISTS
Pseudocode
function BELLMAN_FORD(graph, source):
    dist[] = {∞, ∞, ..., ∞}
    dist[source] = 0

    // Relax all edges V-1 times
    for i = 1 to |V| - 1:
        for each edge (u, v, w) in graph:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    // Check for negative cycle
    for each edge (u, v, w) in graph:
        if dist[u] + w < dist[v]:
            return "Negative cycle detected"

    return dist[]
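A direct Python translation of the pseudocode, run on a small illustrative graph (not from the text) that contains a negative edge but no negative cycle:

```python
def bellman_ford(n, edges, source):
    # edges: list of (u, v, w) triples; vertices numbered 0..n-1
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge n-1 times.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None  # negative cycle detected
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 3, -3), (2, 3, 4), (3, 4, 2)]
print(bellman_ford(5, edges, 0))  # [0, 4, 5, 1, 3]
```

The `dist[u] != INF` guard keeps unreachable vertices from propagating meaningless ∞ + w values.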

4.2 Dijkstra vs Bellman-Ford

Aspect                   | Dijkstra   | Bellman-Ford
Negative weights         | No         | Yes
Negative cycle detection | No         | Yes
Time Complexity          | O(E log V) | O(VE)
Approach                 | Greedy     | Dynamic Programming

4.3 Time Complexity

O(V × E) - iterate V-1 times over all E edges

5. Floyd-Warshall Algorithm (All-Pairs Shortest Path)

Floyd-Warshall Algorithm
Finds shortest paths between all pairs of vertices. Works with negative weights (but not negative cycles). Uses dynamic programming with 2D matrix.

5.1 Key Idea

For each pair (i, j), consider all possible intermediate vertices k. If going through k is shorter, update the distance.

dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
For each intermediate vertex k from 1 to n

5.2 Algorithm

Pseudocode
function FLOYD_WARSHALL(graph):
    // Initialize distance matrix
    dist[][] = graph adjacency matrix
    // dist[i][j] = weight of edge (i,j) if exists, else ∞
    // dist[i][i] = 0 for all i

    // Consider each vertex as intermediate
    for k = 1 to n:
        for i = 1 to n:
            for j = 1 to n:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    return dist[][]
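A Python version of the pseudocode, run on the distance matrix from the solved example that follows, reproduces the final matrix D⁴:

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][i] = 0, missing edges = float('inf')
    n = len(dist)
    d = [row[:] for row in dist]   # work on a copy of the input matrix
    for k in range(n):             # intermediate vertex
        for i in range(n):         # source
            for j in range(n):     # destination
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
graph = [
    [0, 3, INF, 7],
    [8, 0, 2, INF],
    [5, INF, 0, 1],
    [2, INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)
# [0, 3, 5, 6]
# [5, 0, 2, 3]
# [3, 6, 0, 1]
# [2, 5, 7, 0]
```

Note the loop order: k must be the outermost loop, since each pass assumes all shortest paths using intermediates 1..k-1 are already final.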
Solved Example (33% Frequency - Recent Papers) 10 Marks
Find all-pairs shortest paths using Floyd-Warshall algorithm.
โœ“ Solution

Initial Distance Matrix D⁰:

     1  2  3  4
1    0  3  ∞  7
2    8  0  2  ∞
3    5  ∞  0  1
4    2  ∞  ∞  0

k=1: Consider paths through vertex 1

D¹[i][j] = min(D⁰[i][j], D⁰[i][1] + D⁰[1][j])

  • D¹[2][4] = min(∞, 8+7) = 15
  • D¹[3][2] = min(∞, 5+3) = 8
  • D¹[3][4] = min(1, 5+7) = 1 (no change)
  • D¹[4][2] = min(∞, 2+3) = 5

After k=1 (D¹):

     1  2  3  4
1    0  3  ∞  7
2    8  0  2  15
3    5  8  0  1
4    2  5  ∞  0

Continue for k=2, k=3, k=4...

Final Result D⁴:

     1  2  3  4
1    0  3  5  6
2    5  0  2  3
3    3  6  0  1
4    2  5  7  0

Example paths:

  • 1 → 4: Direct = 7, via 2 and 3 = 3+2+1 = 6 (shorter!)
  • 2 → 1: Direct = 8, via 3 and 4 = 2+1+2 = 5 (shorter!)

5.3 Time & Space Complexity

Aspect | Complexity
Time   | O(V³) - three nested loops
Space  | O(V²) - distance matrix

6. 0/1 Knapsack Problem (DP)

0/1 Knapsack Problem
Given n items with weights and values, and a knapsack capacity W, select items to maximize total value. Each item can be taken completely or not at all (0/1 = take or don't take).

6.1 DP Recurrence

K[i][w] = max(K[i-1][w], v[i] + K[i-1][w - w[i]])   (valid when w[i] ≤ w; otherwise K[i][w] = K[i-1][w])
Where:
K[i][w] = max value using items 1..i with capacity w
v[i] = value of item i, w[i] = weight of item i

6.2 Algorithm

Pseudocode
function KNAPSACK_01(W, weights[], values[], n):
    // Create DP table K[0..n][0..W]
    for i = 0 to n:
        for w = 0 to W:
            if i == 0 or w == 0:
                K[i][w] = 0
            else if weights[i] <= w:
                // Can include item i
                K[i][w] = max(values[i] + K[i-1][w-weights[i]],
                             K[i-1][w])
            else:
                // Cannot include item i
                K[i][w] = K[i-1][w]

    return K[n][W]
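The table-filling pseudocode plus the traceback step can be sketched in Python; the traceback rule (item i is included exactly when K[i][w] differs from K[i-1][w]) matches the procedure used in the solved example below:

```python
def knapsack_01(W, weights, values):
    n = len(weights)
    # K[i][w] = best value using the first i items with capacity w
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            K[i][w] = K[i - 1][w]                    # don't take item i
            if weights[i - 1] <= w:                  # take item i if it fits
                K[i][w] = max(K[i][w],
                              values[i - 1] + K[i - 1][w - weights[i - 1]])
    # Traceback: item i was taken iff the value changed from row i-1.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if K[i][w] != K[i - 1][w]:
            chosen.append(i)                         # items numbered 1..n
            w -= weights[i - 1]
    return K[n][W], sorted(chosen)

# The solved example below: M = 21, profits (2, 5, 8, 1), weights (10, 15, 6, 9)
print(knapsack_01(21, [10, 15, 6, 9], [2, 5, 8, 1]))  # (13, [2, 3])
```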
Solved Example (100% Frequency) 10 Marks
Write the algorithm for 0/1 knapsack using dynamic programming. Solve:
M = 21, n = 4
(p1..p4) = (2, 5, 8, 1)
(w1..w4) = (10, 15, 6, 9)
โœ“ Solution

Items:

Item | Weight | Value
1    | 10     | 2
2    | 15     | 5
3    | 6      | 8
4    | 9      | 1

DP Table K[i][w]: (showing key capacities)

i\w          | 0 | 6 | 9 | 10 | 15 | 16 | 19 | 21
0            | 0 | 0 | 0 | 0  | 0  | 0  | 0  | 0
1 (w=10,v=2) | 0 | 0 | 0 | 2  | 2  | 2  | 2  | 2
2 (w=15,v=5) | 0 | 0 | 0 | 2  | 5  | 5  | 5  | 5
3 (w=6,v=8)  | 0 | 8 | 8 | 8  | 8  | 10 | 10 | 13
4 (w=9,v=1)  | 0 | 8 | 8 | 8  | 9  | 10 | 10 | 13

Building the table (key steps):

  • K[3][21] = max(K[2][21], 8 + K[2][15]) = max(5, 8+5) = 13
  • K[4][21] = max(K[3][21], 1 + K[3][12]) = max(13, 1+8) = 13

Traceback to find selected items:

  1. K[4][21] = 13 = K[3][21], so item 4 NOT included
  2. K[3][21] = 13 ≠ K[2][21] = 5, so item 3 IS included (w=6, v=8)
  3. Remaining: K[2][15] = 5 ≠ K[1][15] = 2, so item 2 IS included (w=15, v=5)
  4. Remaining: K[1][0] = 0, done
Maximum Value = 13
Selected Items: 2 and 3
Total Weight: 15 + 6 = 21

6.3 Time & Space Complexity

Aspect | Complexity
Time   | O(n × W)
Space  | O(n × W), or O(W) with optimization
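The O(W) space optimization keeps a single row and iterates capacities downward, so each item is counted at most once. A minimal Python sketch (iterating upward instead would let an item be reused, which solves the unbounded knapsack):

```python
def knapsack_1d(W, weights, values):
    # One row reused for every item; K[w] = best value with capacity w.
    K = [0] * (W + 1)
    for wt, v in zip(weights, values):
        # Go downward so K[w - wt] still refers to the previous item's row.
        for w in range(W, wt - 1, -1):
            K[w] = max(K[w], v + K[w - wt])
    return K[W]

print(knapsack_1d(21, [10, 15, 6, 9], [2, 5, 8, 1]))  # 13
```

The trade-off: with only one row, the item-selection traceback from the full table is no longer available.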

7. Travelling Salesman Problem (TSP)

Travelling Salesman Problem
Given a set of cities and distances between them, find the shortest route that visits each city exactly once and returns to the starting city. This is an NP-Hard problem.

7.1 DP Approach (Held-Karp Algorithm)

Use bitmask to represent visited cities. State: (current city, set of visited cities).

dp[S][i] = min cost to reach city i, having visited exactly cities in set S
dp[S][i] = min(dp[S-{i}][j] + dist[j][i]) for all j in S-{i}
Pseudocode
function TSP_DP(dist[][], n):
    // dp[mask][i] = min cost to reach i with visited cities = mask
    dp[mask][i] = ∞ for all mask, i   // initialize before filling
    dp[1][0] = 0  // Start at city 0 (only bit 0 set)

    for mask = 1 to (2^n - 1):
        for last = 0 to n-1:
            if mask has bit 'last' set:
                for next = 0 to n-1:
                    if mask doesn't have bit 'next' set:
                        new_mask = mask | (1 << next)
                        dp[new_mask][next] = min(dp[new_mask][next],
                                                 dp[mask][last] + dist[last][next])

    // Find min cost to return to city 0
    full_mask = (2^n - 1)
    answer = min(dp[full_mask][i] + dist[i][0]) for all i

    return answer
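The bitmask pseudocode translates to Python as follows (a sketch run on the 4-city example that follows; `tsp_held_karp` is an illustrative name):

```python
def tsp_held_karp(dist):
    n = len(dist)
    INF = float('inf')
    # dp[mask][i] = cheapest cost to start at city 0, visit exactly the
    # cities in 'mask', and currently stand at city i
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                      # only city 0 visited, standing at 0
    for mask in range(1 << n):
        for last in range(n):
            if dp[mask][last] == INF:
                continue              # state unreachable
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue          # nxt already visited
                new_mask = mask | (1 << nxt)
                cost = dp[mask][last] + dist[last][nxt]
                if cost < dp[new_mask][nxt]:
                    dp[new_mask][nxt] = cost
    full = (1 << n) - 1
    # Close the tour by returning to city 0 from every possible last city.
    return min(dp[full][i] + dist[i][0] for i in range(1, n))

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_held_karp(dist))  # 80
```

Ascending `mask` order is safe because every transition moves to a strictly larger mask, so each state is final before it is extended.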
๐Ÿ“ Example: 4 Cities TSP

Distance Matrix:

     0   1   2   3
0    0   10  15  20
1    10  0   35  25
2    15  35  0   30
3    20  25  30  0

Optimal Tour: 0 → 1 → 3 → 2 → 0

Cost: 10 + 25 + 30 + 15 = 80

7.2 Time & Space Complexity

Approach       | Time       | Space
Brute Force    | O(n!)      | O(n)
DP (Held-Karp) | O(n² × 2ⁿ) | O(n × 2ⁿ)
💡 Note: TSP is NP-Hard, so no polynomial-time algorithm is known. The DP solution is exponential but better than factorial brute force. For large n, approximation algorithms are used.

8. Practice Questions

Question 1 (100% Frequency) 10 Marks
Write algorithm for 0/1 knapsack using dynamic programming and solve:
M=15, n=4, (p1..p4)=(10,5,15,7), (w1..w4)=(2,3,5,7)
View Answer ▼

Build table K[i][w] for i=0..4, w=0..15

Maximum Value = 32

Selected items: 1, 3, 4 (w=2+5+7=14, v=10+15+7=32)

(Items 1, 2, 3 give only w=2+3+5=10, v=10+5+15=30, which is not optimal.)

Question 2 5 Marks
What is the difference between Divide and Conquer and Dynamic Programming?
View Answer ▼

Divide & Conquer:

  • Subproblems are independent
  • Top-down only
  • No overlapping subproblems
  • Examples: Merge Sort, Quick Sort

Dynamic Programming:

  • Overlapping subproblems
  • Top-down or bottom-up
  • Stores solutions (memoization/tabulation)
  • Examples: Knapsack, LCS, Floyd-Warshall
Question 3 (33% Frequency) 10 Marks
Explain Dijkstra's algorithm with an example. Why doesn't it work with negative weights?
View Answer ▼

See Section 3 for algorithm and example.

Why no negative weights: Dijkstra marks vertices as "done" after visiting. With negative edges, a "done" vertex might have a shorter path discovered later, which Dijkstra won't consider.

Question 4 (33% Frequency) 10 Marks
Give an algorithm to solve the All-pairs shortest path problem. Apply it to find all-pairs shortest paths.
View Answer ▼

See Section 5 (Floyd-Warshall) for complete algorithm and worked example.

Key: Three nested loops - k (intermediate), i (source), j (destination)

Recurrence: dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

Key Points Summary

📌 Remember for Exam
  • DP requires: Optimal Substructure + Overlapping Subproblems
  • Greedy fails for 0/1 Knapsack because it can't undo choices
  • Dijkstra: Single source, no negative weights, O(E log V)
  • Bellman-Ford: Single source, handles negative weights, O(VE)
  • Floyd-Warshall: All pairs, O(V³), uses intermediate vertices
  • 0/1 Knapsack: K[i][w] = max(not take, take if fits), O(nW)
  • TSP: NP-Hard, DP gives O(n²×2ⁿ) vs O(n!) brute force
  • Always show: recurrence relation, table filling, traceback