BFS traversal and applications in Data Structures Theory - Time & Space Complexity
We want to understand how the time needed for BFS traversal grows as the graph gets bigger.
Specifically, how does BFS handle more nodes and edges in a graph?
Analyze the time complexity of the following BFS traversal code.
```javascript
function BFS(graph, startNode) {
  let queue = [];
  let visited = new Set();
  queue.push(startNode);
  visited.add(startNode);
  while (queue.length > 0) {
    // Note: Array.shift() is O(n) in most engines; a real queue or an
    // index pointer keeps each dequeue O(1), which the analysis below assumes.
    let node = queue.shift();
    for (let neighbor of graph[node]) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
}
```
This code visits all nodes reachable from the startNode using a queue, exploring neighbors level by level.
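To make the level-by-level order concrete, here is a small driver. It uses a variant of the BFS above that records the visit order; the `order` array and the sample graph are additions for illustration, not part of the original code:

```javascript
// BFS variant that records the order in which nodes are visited.
// The graph is an adjacency-list object: node -> array of neighbors.
function bfsOrder(graph, startNode) {
  let queue = [startNode];
  let visited = new Set([startNode]);
  let order = [];
  while (queue.length > 0) {
    let node = queue.shift();
    order.push(node); // record each node as it is dequeued
    for (let neighbor of graph[node]) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return order;
}

// A small undirected graph: A-B, A-C, B-D, C-D
const graph = {
  A: ["B", "C"],
  B: ["A", "D"],
  C: ["A", "D"],
  D: ["B", "C"],
};

console.log(bfsOrder(graph, "A")); // → ["A", "B", "C", "D"]
```

Starting from A, the traversal finishes A's whole level (B and C) before reaching D, which is exactly the "level by level" behavior described above.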
Identify the repeated work: the loops, recursion, and array traversals.
- Primary operation: Visiting each node and checking its neighbors.
- How many times: Each node is enqueued and dequeued once; each edge is checked once.
As the number of nodes and edges grows, BFS visits each node once and checks all edges once.
| Input Size (n nodes, m edges) | Approx. Operations |
|---|---|
| 10 nodes, 15 edges | About 10 node visits + 15 edge checks = 25 operations |
| 100 nodes, 200 edges | About 100 node visits + 200 edge checks = 300 operations |
| 1000 nodes, 5000 edges | About 1000 node visits + 5000 edge checks = 6000 operations |
Pattern observation: Operations grow roughly in proportion to nodes plus edges.
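One way to verify this pattern is to instrument the traversal with counters. The sketch below counts node dequeues and edge checks on a small directed graph; the counter names and the sample graph are illustrative:

```javascript
// Count node visits and edge checks during BFS on a directed graph
// given as an adjacency-list object: node -> array of neighbors.
function bfsWithCounts(graph, startNode) {
  let queue = [startNode];
  let visited = new Set([startNode]);
  let nodeVisits = 0;
  let edgeChecks = 0;
  while (queue.length > 0) {
    let node = queue.shift();
    nodeVisits++; // each reachable node is dequeued exactly once
    for (let neighbor of graph[node]) {
      edgeChecks++; // each outgoing edge is examined exactly once
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return { nodeVisits, edgeChecks };
}

// Directed graph with n = 4 nodes and m = 5 edges, all reachable from 0:
const g = { 0: [1, 2], 1: [3], 2: [3], 3: [0] };
console.log(bfsWithCounts(g, 0)); // → { nodeVisits: 4, edgeChecks: 5 }
```

The totals come out to n node visits plus m edge checks, matching the n + m pattern in the table. (With undirected adjacency lists, each edge appears in two lists, so the edge-check count doubles; the growth rate is still proportional to n + m.)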
Time Complexity: O(n + m)
This means BFS takes time proportional to the number of nodes plus the number of edges in the graph.
[X] Wrong: "BFS always takes time proportional to n squared because it checks all pairs of nodes."
[OK] Correct: BFS only follows edges that actually exist, not all possible pairs of nodes, so its running time depends on n + m, not on all n² pairs.
Understanding BFS time complexity helps you explain how graph algorithms scale and why BFS is efficient for many problems.
"What if the graph is represented as an adjacency matrix instead of adjacency lists? How would the time complexity change?"
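As a starting point for that question, here is a hedged sketch of BFS over an adjacency matrix (the function name and sample matrix are illustrative). Finding the neighbors of one node now means scanning an entire row of length n, so the traversal does O(n) work per node and O(n²) overall, no matter how few edges exist:

```javascript
// BFS over an adjacency matrix: matrix[i][j] === 1 means an edge i -> j.
function bfsMatrix(matrix, startNode) {
  const n = matrix.length;
  let queue = [startNode];
  let visited = new Set([startNode]);
  let order = [];
  while (queue.length > 0) {
    let node = queue.shift();
    order.push(node);
    // Scanning for neighbors costs O(n) per node, even if the node
    // has few (or zero) edges -- hence O(n^2) total.
    for (let j = 0; j < n; j++) {
      if (matrix[node][j] === 1 && !visited.has(j)) {
        visited.add(j);
        queue.push(j);
      }
    }
  }
  return order;
}

// A 4-node directed graph as a matrix: 0->1, 0->2, 1->3, 2->3, 3->0
const m = [
  [0, 1, 1, 0],
  [0, 0, 0, 1],
  [0, 0, 0, 1],
  [1, 0, 0, 0],
];
console.log(bfsMatrix(m, 0)); // → [0, 1, 2, 3]
```

The visit order is unchanged; only the cost of finding neighbors differs, which is why the complexity shifts from O(n + m) to O(n²).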