Slice length and capacity in Go - Time & Space Complexity
When working with slices in Go, it's important to understand how their length and capacity affect performance.
The goal is to see how the cost of these operations grows as the input size changes.
Analyze the time complexity of the following code snippet.
```go
package main

// appendItems appends the integers 0..n-1 to slice, growing its
// length (and, when needed, its capacity) as it goes.
func appendItems(slice []int, n int) []int {
	for i := 0; i < n; i++ {
		slice = append(slice, i)
	}
	return slice
}
```
This code appends n items to a slice, growing its length and possibly its capacity.
Identify the loops, recursion, or traversals that repeat work.
- Primary operation: The loop runs n times, each time appending one item to the slice.
- How many times: Exactly n times, once per loop iteration.
The total work grows with the number of items n, but occasional capacity increases add extra copying.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 appends, some may cause capacity increase |
| 100 | About 100 appends, with a few capacity increases |
| 1000 | About 1000 appends, with several capacity increases |
Pattern observation: Most appends are simple, but occasionally the slice grows its capacity, causing extra copying work.
Time Complexity: O(n)
This means the total work grows roughly in direct proportion to the number of items appended.
[X] Wrong: "Each append always takes the same small amount of time."
[OK] Correct: Most appends are cheap, but when the slice runs out of capacity, Go allocates a larger backing array and copies every existing element, so an individual append can take O(n) time. Averaged over the whole sequence, though, the cost per append is amortized O(1), which is why the total is O(n).
Understanding how slice operations scale helps you explain performance in real programs and shows you know how Go manages memory behind the scenes.
What if we pre-allocate the slice with enough capacity before appending? How would the time complexity change?