Bellman-Ford algorithm in Data Structures Theory - Time & Space Complexity
We want to understand how the time needed by the Bellman-Ford algorithm grows as the graph size increases.
Specifically, how does the number of edges and vertices affect the work done?
Analyze the time complexity of the Bellman-Ford algorithm below.
```
function BellmanFord(graph, source):
    distance = array of size V with all values infinity
    distance[source] = 0
    for i in 1 to V-1:
        for each edge (u, v) in graph.edges:
            if distance[u] + weight(u, v) < distance[v]:
                distance[v] = distance[u] + weight(u, v)
    return distance
```
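The pseudocode above can be sketched as a runnable Python function. Here the graph is assumed to be a list of `(u, v, weight)` tuples with vertices numbered `0` to `V-1`; that representation is an illustrative choice, not the only option.

```python
import math

def bellman_ford(num_vertices, edges, source):
    # distance[v] starts at infinity for every vertex except the source
    distance = [math.inf] * num_vertices
    distance[source] = 0
    # Relax every edge V-1 times: any shortest path uses at most V-1 edges
    for _ in range(num_vertices - 1):
        for u, v, weight in edges:
            if distance[u] + weight < distance[v]:
                distance[v] = distance[u] + weight
    return distance

# Example: 4 vertices, shortest path 0 -> 2 -> 1 -> 3 has total weight 4
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```

Because `math.inf + weight` is still infinity, unreachable vertices are never relaxed incorrectly.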
This code finds shortest paths from one vertex to all others by relaxing edges repeatedly.
Look at the loops and repeated steps:
- Primary operation: Checking and updating distances for each edge.
- How many times: The inner loop relaxes every edge once, and the outer loop repeats this V-1 times (where V is the number of vertices and E is the number of edges), for about (V-1) * E checks in total.
The work grows with both the number of vertices (V) and edges (E).
| Input Size (V, E) | Approx. Operations |
|---|---|
| V=10, E=20 | About 9 * 20 = 180 checks |
| V=100, E=500 | About 99 * 500 = 49,500 checks |
| V=1000, E=5000 | About 999 * 5000 = 4,995,000 checks |
Pattern observation: The total work increases roughly by multiplying vertices and edges.
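The table's counts can be verified directly: the total number of relaxation checks is (V - 1) * E, so a short loop reproduces each row.

```python
# Recompute the operation counts from the table: (V - 1) * E checks per graph
for v, e in [(10, 20), (100, 500), (1000, 5000)]:
    print(f"V={v}, E={e}: {(v - 1) * e} checks")
# V=10, E=20: 180 checks
# V=100, E=500: 49500 checks
# V=1000, E=5000: 4995000 checks
```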
Time Complexity: O(V * E)
This means the time needed grows proportionally to the number of vertices times the number of edges.
Space Complexity: O(V), since the algorithm stores one distance value per vertex.
[X] Wrong: "The algorithm runs in O(V^2) time because it has two loops over vertices."
[OK] Correct: The inner loop runs over edges, not vertices. The number of edges can be very different from the number of vertices, so the complexity depends on both V and E, giving O(V * E) rather than O(V^2).
Understanding this complexity helps you explain how graph size affects performance and why Bellman-Ford is slower than some other shortest path algorithms.
"What if the graph is dense, meaning E is close to V squared? How would that affect the time complexity?"