Common slice operations in Go - Time & Space Complexity
When working with slices in Go, it's important to know how the time an operation needs changes as the slice grows.
The goal is to understand which common slice operations take longer as the slice gets bigger, and which take the same time regardless of size.
Analyze the time complexity of the following code snippet.
```go
// Append elements to a slice, one at a time
func appendElements(slice []int, n int) []int {
	for i := 0; i < n; i++ {
		slice = append(slice, i)
	}
	return slice
}

// Access an element by index
func accessElement(slice []int, index int) int {
	return slice[index]
}
```
This code shows two common slice operations: adding elements one by one and accessing an element by its position.
Identify the repeated work: loops, recursion, or traversals over the data.
- Primary operation: The loop that appends elements to the slice.
- How many times: The loop runs n times, once for each element added.
- Accessing an element by index happens just once, with no repetition.
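The observations above can be checked with a minimal runnable sketch that reuses the two functions from the snippet (the `main` wrapper is just for illustration):

```go
package main

import "fmt"

// Same function as in the snippet: the loop body runs n times, so O(n).
func appendElements(slice []int, n int) []int {
	for i := 0; i < n; i++ {
		slice = append(slice, i)
	}
	return slice
}

// Same function as in the snippet: a single indexing step, so O(1).
func accessElement(slice []int, index int) int {
	return slice[index]
}

func main() {
	s := appendElements(nil, 5)      // 5 loop iterations
	fmt.Println(s)                   // [0 1 2 3 4]
	fmt.Println(accessElement(s, 2)) // one step, regardless of len(s)
}
```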
As the number of elements n increases, the time to append grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 append steps |
| 100 | About 100 append steps |
| 1000 | About 1000 append steps |
Accessing an element by index takes the same small, constant amount of time regardless of the slice's size.
Pattern observation: Appending grows linearly, accessing stays constant.
Time Complexity: O(n) for appending elements, O(1) for accessing an element.
This means adding many elements takes longer as you add more, but getting one element is always quick.
[X] Wrong: "Appending one element to a slice always takes the same tiny time no matter what."
[OK] Correct: When the slice runs out of capacity, append allocates a larger backing array and copies the existing elements over, so that single call costs extra; averaged over many appends, the cost per append is amortized constant.
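You can observe these reallocations directly by watching `cap` change as elements are appended (the exact capacity values are an implementation detail of the Go runtime, so they may differ between versions):

```go
package main

import "fmt"

func main() {
	var s []int
	prevCap := cap(s)
	for i := 0; i < 20; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			// A capacity change means the runtime allocated a new,
			// larger backing array and copied the old elements into it.
			fmt.Printf("len=%d: cap grew %d -> %d\n", len(s), prevCap, cap(s))
			prevCap = cap(s)
		}
	}
}
```

Because the capacity roughly doubles each time, the expensive copy happens less and less often, which is why appending n elements stays O(n) overall.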
Understanding how slice operations scale helps you write efficient code and explain your design choices clearly when discussing them with others.
"What if we changed the append loop to add elements in batches instead of one by one? How would the time complexity change?"
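One way to experiment with that question is the sketch below (the `appendBatches` helper and the batch size are illustrative choices, not part of the original snippet). Note that every element is still touched once, so the total work remains O(n); batching mainly reduces the number of `append` calls, not the asymptotic complexity:

```go
package main

import "fmt"

// appendBatches collects elements into a small buffer and appends
// the whole buffer at once with append(slice, buf...).
func appendBatches(slice []int, n, batchSize int) []int {
	buf := make([]int, 0, batchSize)
	for i := 0; i < n; i++ {
		buf = append(buf, i)
		if len(buf) == batchSize {
			slice = append(slice, buf...) // one append call per batch
			buf = buf[:0]                 // reuse the buffer
		}
	}
	return append(slice, buf...) // flush any remaining elements
}

func main() {
	s := appendBatches(nil, 10, 3)
	fmt.Println(len(s)) // still 10 elements in total
}
```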