Appending to slices in Go - Time & Space Complexity
When we add items to a slice in Go, the time it takes can change depending on how many items are already there.
We want to understand how the time to append grows as the slice gets bigger.
Analyze the time complexity of the following code snippet.
```go
package main

func appendItems(n int) []int {
	var s []int
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	return s
}
```
This code adds numbers from 0 up to n-1 to a slice one by one.
- Primary operation: The append operation inside the loop.
- How many times: It runs exactly n times, once for each number added.
Each append usually just writes one item, but when the slice's length reaches its capacity, Go allocates a larger backing array and copies all existing elements over, which takes extra time.
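We can watch this growth happen directly. The sketch below (the `countGrowths` helper is ours, not part of the original snippet) counts how many times the backing array is reallocated while appending n items; the exact capacities chosen are a Go runtime detail and may differ between versions.

```go
package main

import "fmt"

// countGrowths appends n ints to an empty slice and counts how many
// times the backing array is reallocated (i.e. the capacity changes).
// Illustrative sketch: exact growth sizes are a runtime implementation detail.
func countGrowths(n int) int {
	var s []int
	growths := 0
	prevCap := cap(s)
	for i := 0; i < n; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			growths++
			prevCap = cap(s)
		}
	}
	return growths
}

func main() {
	for _, n := range []int{10, 100, 1000} {
		fmt.Printf("n=%d: %d reallocations\n", n, countGrowths(n))
	}
}
```

Notice that the reallocation count grows far more slowly than n, which is why the occasional copy does not change the overall growth rate.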
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 appends, plus a handful of reallocations |
| 100 | About 100 appends; reallocations grow only logarithmically |
| 1000 | About 1000 appends; still only a dozen or so reallocations |
Pattern observation: The total work grows roughly in a straight line with n, because the backing array grows only about log₂(n) times, and the copying cost spread across all appends averages out to a constant per append.
Time Complexity: O(n)
This means the time to append n items grows roughly in direct proportion to n.
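The reason the copying never dominates can be shown with a small simulation. Assuming a doubling growth strategy (a simplification; the real runtime's policy is more nuanced), the elements copied across all growths sum to 1 + 2 + 4 + … < n, so the total work stays proportional to n. The `totalCopies` function below is our own model, not the runtime's actual code:

```go
package main

import "fmt"

// totalCopies simulates appending n items to a slice whose capacity
// doubles whenever it fills up, and returns how many elements were
// copied during growths. Sketch of the amortized argument, assuming
// a simple doubling strategy.
func totalCopies(n int) int {
	copies := 0
	capacity := 1
	length := 0
	for i := 0; i < n; i++ {
		if length == capacity {
			copies += length // simulate copying the old backing array
			capacity *= 2    // assumed doubling growth policy
		}
		length++
	}
	return copies
}

func main() {
	for _, n := range []int{10, 100, 1000} {
		fmt.Printf("n=%d: %d elements copied (always < 2n)\n", n, totalCopies(n))
	}
}
```

Since total copies stay below 2n, the n appends plus all copying together remain O(n).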
[X] Wrong: "Each append always takes the same small amount of time."
[OK] Correct: An individual append may trigger a reallocation and copy, so not every append is equally fast; but averaged over many appends, the cost works out to amortized O(1) per operation.
Understanding how appending to slices works helps you explain how data structures grow and perform in real programs.
"What if we pre-allocate the slice with enough capacity before appending? How would the time complexity change?"
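One way to explore that question: if we allocate capacity n up front with `make`, the slice never has to grow, so every append is a plain write. The total is still O(n), but with no reallocation or copying at all. A minimal sketch (the `appendPrealloc` name is ours):

```go
package main

import "fmt"

// appendPrealloc builds the same slice as appendItems, but allocates
// capacity n up front so append never reallocates the backing array.
func appendPrealloc(n int) []int {
	s := make([]int, 0, n) // length 0, capacity n
	for i := 0; i < n; i++ {
		s = append(s, i) // len never exceeds cap, so no growth occurs
	}
	return s
}

func main() {
	s := appendPrealloc(5)
	fmt.Println(len(s), cap(s)) // prints "5 5": the capacity never changed
}
```

The asymptotic complexity is unchanged at O(n), but pre-allocation removes the reallocation-and-copy overhead, which can matter in hot paths where the final size is known in advance.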