Floyd-Warshall All-Pairs Shortest Path in C (DSA) - Time & Space Complexity
We want to understand how the time needed grows when finding shortest paths between all pairs of points using Floyd-Warshall.
How does the work increase as the number of points grows?
Analyze the time complexity of the following code snippet.
```c
for (int k = 0; k < n; k++) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (dist[i][k] + dist[k][j] < dist[i][j]) {
                dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }
}
```
This code updates shortest distances between all pairs of points by checking if going through an intermediate point is shorter.
Identify the repeated operations: loops, recursion, or array traversals.
- Primary operation: The triple nested loops that update distances.
- How many times: Each loop runs n times, so the inner update runs n × n × n = n³ times.
As the number of points n grows, the number of operations grows very fast because we check every pair through every possible intermediate point.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1,000 |
| 100 | 1,000,000 |
| 1000 | 1,000,000,000 |
Pattern observation: Operations grow roughly by the cube of n, so doubling n makes work about eight times bigger.
Time Complexity: O(n³)
Space Complexity: O(n²), since the algorithm updates the n × n distance matrix in place and needs no other large storage.
This means the time needed grows very quickly as the number of points increases, because we check all pairs through all intermediates.
[X] Wrong: "Since there are only two loops inside, the time complexity is O(n²)."
[OK] Correct: There are actually three nested loops, each running n times, so the total work is n × n × n, which is O(n³).
Understanding this cubic time complexity helps you explain why Floyd Warshall is good for small graphs but not for very large ones, showing your grasp of algorithm efficiency.
"What if we used a different algorithm that only checks edges instead of all pairs? How would the time complexity change?"