Recover usage in Go - Time & Space Complexity
When using recover in Go, a natural question is how it affects the program's speed: does deferring a recover check change how long the program takes as the input grows?
Analyze the time complexity of the following code snippet.
```go
func safeDivide(nums []int, divisor int) []int {
	results := []int{}
	for _, num := range nums {
		func() {
			defer func() {
				if r := recover(); r != nil {
					// handle division by zero
				}
			}()
			results = append(results, num/divisor)
		}()
	}
	return results
}
```
This code divides each number in a slice by a divisor, using recover to handle any division-by-zero panics safely.
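A runnable demonstration of the snippet above (the `main` function and sample inputs are illustrative additions, not part of the original): when the divisor is zero, every division panics, recover catches each panic, and the skipped results are simply never appended.

```go
package main

import "fmt"

func safeDivide(nums []int, divisor int) []int {
	results := []int{}
	for _, num := range nums {
		func() {
			defer func() {
				if r := recover(); r != nil {
					// handle division by zero: skip this element
				}
			}()
			results = append(results, num/divisor)
		}()
	}
	return results
}

func main() {
	fmt.Println(safeDivide([]int{10, 7, 4}, 2)) // normal case: every element divided
	fmt.Println(safeDivide([]int{10, 7, 4}, 0)) // every division panics; nothing appended
}
```

Because the panic fires before `append` runs, a zero divisor yields an empty result slice rather than a crash.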
Identify the repeated work: the loops, recursion, and array traversals.
- Primary operation: Looping through each number in the input slice.
- How many times: Once for each element in the input slice.
As the input list gets bigger, the number of divisions and recover checks grows at the same pace.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 divisions and recover checks |
| 100 | About 100 divisions and recover checks |
| 1000 | About 1000 divisions and recover checks |
Pattern observation: The work grows directly with the number of items; doubling input doubles work.
Time Complexity: O(n)
This means the time to run grows in a straight line with the input size.
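To see the linear pattern from the table concretely, here is a hedged sketch: `countedDivide` is a hypothetical instrumented variant (not from the original) that counts one division attempt, each guarded by a deferred recover check, per element.

```go
package main

import "fmt"

// countedDivide is a hypothetical instrumented variant of safeDivide that
// also reports how many division attempts (each with a deferred recover
// check) were performed.
func countedDivide(nums []int, divisor int) (results []int, ops int) {
	for _, num := range nums {
		func() {
			defer func() {
				if r := recover(); r != nil {
					// handle division by zero
				}
			}()
			ops++ // one division attempt, one recover check
			results = append(results, num/divisor)
		}()
	}
	return results, ops
}

func main() {
	for _, n := range []int{10, 100, 1000} {
		nums := make([]int, n) // zero-valued elements are fine as dividends
		_, ops := countedDivide(nums, 2)
		fmt.Printf("n=%d ops=%d\n", n, ops)
	}
}
```

The counts match the table exactly: 10, 100, and 1000 operations for inputs of size 10, 100, and 1000.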
[X] Wrong: "Using recover inside the loop makes the code run slower exponentially."
[OK] Correct: The deferred recover check does run once per iteration, but it is constant-time work, and the recover branch only does anything when a panic actually happens; the loop still runs once per item, so the time grows linearly, not exponentially.
Understanding how recover affects time helps you write safe Go code without worrying about hidden slowdowns as your data grows.
What if we moved the recover call outside the loop? How would the time complexity change?
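One possible answer, sketched below under assumptions not in the original (`safeDivideOnce` is a hypothetical name): with a single deferred recover wrapping the whole loop, the worst case is still O(n), but the first panic abandons the remaining iterations, so the function returns early with only the results computed so far.

```go
package main

import "fmt"

// safeDivideOnce is a hypothetical variant with recover outside the loop:
// one deferred recover guards the entire traversal, so the first panic
// ends the whole loop instead of skipping a single element.
func safeDivideOnce(nums []int, divisor int) (results []int) {
	defer func() {
		if r := recover(); r != nil {
			// the loop was abandoned; results holds what was computed so far
		}
	}()
	for _, num := range nums {
		results = append(results, num/divisor)
	}
	return results
}

func main() {
	fmt.Println(safeDivideOnce([]int{10, 7, 4}, 2)) // all elements processed
	fmt.Println(safeDivideOnce([]int{10, 7, 4}, 0)) // first division panics; loop stops
}
```

So the asymptotic time complexity is unchanged, O(n), but the behavior differs: per-iteration recover skips bad elements and keeps going, while a single outer recover trades that resilience for one deferred call instead of n.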