DFS Traversal and Applications in Data Structures: Time & Space Complexity
When we use Depth-First Search (DFS) to explore a graph or tree, we want to know how the time it takes grows as the structure gets bigger.
We ask: How many steps does DFS need to visit all nodes and edges?
Analyze the time complexity of the following DFS code snippet.
```javascript
function DFS(node, visited) {
  visited.add(node);
  for (const neighbor of node.neighbors) {
    if (!visited.has(neighbor)) {
      DFS(neighbor, visited);
    }
  }
}

// Start DFS from a given node
const visited = new Set();
DFS(startNode, visited);
```
This code visits each node and explores all its neighbors recursively to traverse the entire graph or tree.
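To see the traversal in action, here is a self-contained run of the same DFS on a small example graph. The node names and edges are illustrative, not part of the original snippet; each node follows the shape the code assumes (an object with a `neighbors` array):

```javascript
// Same DFS as above, repeated here so the example runs on its own.
function DFS(node, visited) {
  visited.add(node);
  for (const neighbor of node.neighbors) {
    if (!visited.has(neighbor)) {
      DFS(neighbor, visited);
    }
  }
}

// Build a small undirected graph: A-B, A-C, B-D (4 nodes, 3 edges).
const nodes = {};
for (const name of ["A", "B", "C", "D"]) {
  nodes[name] = { name, neighbors: [] };
}
function connect(u, v) {
  nodes[u].neighbors.push(nodes[v]);
  nodes[v].neighbors.push(nodes[u]);
}
connect("A", "B");
connect("A", "C");
connect("B", "D");

const visited = new Set();
DFS(nodes.A, visited);
console.log([...visited].map((n) => n.name)); // every node reached exactly once
```

Starting from `A`, the recursion dives into `B`, then `D`, backtracks, and finally picks up `C` — each node enters the visited set exactly once.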
Look at what repeats during DFS:
- Primary operation: Visiting each node and checking its neighbors.
- How many times: Each node is visited once; each edge is checked once in a directed graph, or twice in an undirected graph (once from each endpoint).
As the graph grows, DFS visits every node and edge once.
| Input Size (n nodes, m edges) | Approx. Operations |
|---|---|
| 10 nodes, 15 edges | About 25 steps (10 nodes + 15 edges) |
| 100 nodes, 200 edges | About 300 steps |
| 1000 nodes, 3000 edges | About 4000 steps |
Pattern observation: The steps grow roughly in proportion to the number of nodes plus edges.
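You can confirm this pattern empirically by instrumenting the DFS with counters for the two repeated operations. This sketch (graph and names are illustrative) counts node visits and neighbor checks; for an undirected adjacency list, each edge is checked twice, so the total is n + 2m — still O(n + m):

```javascript
// DFS instrumented with a counter for node visits and edge (neighbor) checks.
function countedDFS(node, visited, counter) {
  visited.add(node);
  counter.nodeVisits++;
  for (const neighbor of node.neighbors) {
    counter.edgeChecks++; // one check per directed edge endpoint
    if (!visited.has(neighbor)) {
      countedDFS(neighbor, visited, counter);
    }
  }
}

// Example graph: 4 nodes, 3 undirected edges (A-B, A-C, B-D).
const g = {};
for (const name of ["A", "B", "C", "D"]) g[name] = { name, neighbors: [] };
for (const [u, v] of [["A", "B"], ["A", "C"], ["B", "D"]]) {
  g[u].neighbors.push(g[v]);
  g[v].neighbors.push(g[u]);
}

const counter = { nodeVisits: 0, edgeChecks: 0 };
countedDFS(g.A, new Set(), counter);
console.log(counter); // 4 node visits, 2 * 3 = 6 edge checks
```

Doubling the number of edges roughly doubles `edgeChecks` while `nodeVisits` stays at n, which is exactly the n + m growth the table shows.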
Time Complexity: O(n + m)
This means DFS takes time proportional to the total number of nodes plus edges in the graph. (Space is O(n): the visited set holds up to n nodes, and the recursion stack can also grow to depth n in the worst case.)
[X] Wrong: "DFS only depends on the number of nodes, so it is O(n)."
[OK] Correct: DFS also checks edges to find neighbors, so the number of edges affects the running time too, especially in dense graphs where m can approach n².
Understanding DFS time complexity helps you explain how your code scales when exploring networks or trees, a common skill interviewers look for.
"What if the graph is represented as an adjacency matrix instead of adjacency lists? How would the time complexity change?"
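One way to reason about this question is to sketch the matrix version. With an adjacency matrix, finding a node's neighbors means scanning an entire row of length n, so DFS performs n row scans of n entries each: O(n²), regardless of how many edges the graph actually has. The function name and example graph below are illustrative:

```javascript
// DFS over an adjacency matrix: matrix[u][v] === 1 means an edge u-v.
function dfsMatrix(matrix, start, visited = new Set()) {
  visited.add(start);
  for (let v = 0; v < matrix.length; v++) { // scan the whole row: O(n)
    if (matrix[start][v] === 1 && !visited.has(v)) {
      dfsMatrix(matrix, v, visited);
    }
  }
  return visited;
}

// A small undirected graph as a matrix: 0-1, 0-2, 1-3.
const matrix = [
  [0, 1, 1, 0],
  [1, 0, 0, 1],
  [1, 0, 0, 0],
  [0, 1, 0, 0],
];
dfsMatrix(matrix, 0); // visits all 4 nodes, checking n * n = 16 entries
```

Compare this with the adjacency-list version: for a sparse graph (m much smaller than n²), O(n + m) is a real win over O(n²), which is why adjacency lists are the default choice for DFS.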