Vertical Order Traversal of Binary Tree in DSA Go - Time & Space Complexity
We want to understand how the time needed for vertical order traversal grows as the tree gets bigger.
Specifically, how does the number of nodes affect the work done?
Analyze the time complexity of the following code snippet.
```go
func verticalOrder(root *TreeNode) [][]int {
	if root == nil {
		return [][]int{}
	}
	// Pair each node with its column index: left child = col-1, right child = col+1.
	type nodeInfo struct {
		node *TreeNode
		col  int
	}
	queue := []nodeInfo{{root, 0}}
	colTable := map[int][]int{} // column index -> node values, top to bottom
	minCol, maxCol := 0, 0
	for len(queue) > 0 {
		curr := queue[0]
		queue = queue[1:]
		colTable[curr.col] = append(colTable[curr.col], curr.node.Val)
		if curr.node.Left != nil {
			queue = append(queue, nodeInfo{curr.node.Left, curr.col - 1})
			if curr.col-1 < minCol {
				minCol = curr.col - 1
			}
		}
		if curr.node.Right != nil {
			queue = append(queue, nodeInfo{curr.node.Right, curr.col + 1})
			if curr.col+1 > maxCol {
				maxCol = curr.col + 1
			}
		}
	}
	// Emit columns left to right; BFS order preserves top-to-bottom order within each column.
	result := [][]int{}
	for i := minCol; i <= maxCol; i++ {
		result = append(result, colTable[i])
	}
	return result
}
```
This code does a breadth-first search to group nodes by their vertical columns.
Identify the loops, recursive calls, and array traversals that repeat.
- Primary operation: The main loop processes each node once using a queue.
- How many times: Exactly once per node, so n times for n nodes.
As the number of nodes grows, the loop runs once per node, so the work grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 steps |
| 100 | About 100 steps |
| 1000 | About 1000 steps |
Pattern observation: Doubling the nodes roughly doubles the work.
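The table above can be checked empirically. The sketch below instruments the same BFS with a counter (`buildTree` and `countVisits` are hypothetical helpers introduced here for illustration, not part of the original code) and shows that the number of dequeues equals the number of nodes:

```go
package main

import "fmt"

type TreeNode struct {
	Val         int
	Left, Right *TreeNode
}

// buildTree creates a complete binary tree with exactly n nodes,
// linking node i to children 2i+1 and 2i+2 (heap-style indexing).
func buildTree(n int) *TreeNode {
	if n == 0 {
		return nil
	}
	nodes := make([]*TreeNode, n)
	for i := range nodes {
		nodes[i] = &TreeNode{Val: i}
	}
	for i := range nodes {
		if 2*i+1 < n {
			nodes[i].Left = nodes[2*i+1]
		}
		if 2*i+2 < n {
			nodes[i].Right = nodes[2*i+2]
		}
	}
	return nodes[0]
}

// countVisits runs the same BFS loop as verticalOrder but only
// counts how many times a node is dequeued.
func countVisits(root *TreeNode) int {
	count := 0
	queue := []*TreeNode{}
	if root != nil {
		queue = append(queue, root)
	}
	for len(queue) > 0 {
		curr := queue[0]
		queue = queue[1:]
		count++ // each node is dequeued exactly once
		if curr.Left != nil {
			queue = append(queue, curr.Left)
		}
		if curr.Right != nil {
			queue = append(queue, curr.Right)
		}
	}
	return count
}

func main() {
	for _, n := range []int{10, 100, 1000} {
		fmt.Printf("n=%d visits=%d\n", n, countVisits(buildTree(n)))
	}
	// prints:
	// n=10 visits=10
	// n=100 visits=100
	// n=1000 visits=1000
}
```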
Time Complexity: O(n)
This means the time grows in direct proportion to the number of nodes in the tree.
Space Complexity: O(n)
The queue, the column map, and the result slice each hold at most n entries, so memory also grows linearly with the number of nodes.
[X] Wrong: "Because we track columns, the complexity is higher than O(n)."
[OK] Correct: We visit each node exactly once, and column tracking uses constant-time (on average) map appends, so it adds no extra passes over the nodes.
Understanding this helps you explain how breadth-first search can be combined with extra information to solve problems efficiently.
"What if we used depth-first search instead of breadth-first search? How would the time complexity change?"