In-place operations for memory efficiency in NumPy - Time & Space Complexity
We want to understand how fast NumPy code runs when it modifies data directly in memory. Does operating in place, without allocating a new array, change the time an operation takes?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1_000_000)
arr += 5  # add 5 to each element in place
```
This code adds 5 to every number in a large array without making a new array.
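To check that `arr += 5` really writes back into the existing buffer rather than allocating a new one, here is a small sketch; inspecting the data pointer via `__array_interface__` is just one way to observe this:

```python
import numpy as np

arr = np.arange(10)
address_before = arr.__array_interface__['data'][0]  # address of the underlying buffer

arr += 5  # in-place: writes results back into the same memory
address_after = arr.__array_interface__['data'][0]

print(address_before == address_after)  # True: the data pointer is unchanged
print(arr[:3])                          # [5 6 7]
```

By contrast, `arr = arr + 5` would bind `arr` to a freshly allocated array, and the two addresses would differ.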
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Adding 5 to each element in the array.
- How many times: Once for each element, so 1,000,000 times here.
As the array gets bigger, the time to add 5 to every element grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 additions |
| 100 | 100 additions |
| 1000 | 1000 additions |
Pattern observation: Doubling the input doubles the work needed.
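A rough way to observe this pattern yourself is to time the in-place addition at a few sizes; absolute timings vary by machine, so treat the printed numbers as illustrative only:

```python
import time
import numpy as np

# Double the input size each step and time the in-place addition.
for n in (1_000_000, 2_000_000, 4_000_000):
    arr = np.arange(n)
    start = time.perf_counter()
    arr += 5  # the operation under test
    elapsed = time.perf_counter() - start
    print(f"n={n:>9,}: {elapsed * 1e3:.2f} ms")
```

On most machines the elapsed time roughly doubles from one line to the next, matching the O(n) prediction, though small inputs can be dominated by fixed overhead.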
Time Complexity: O(n)
This means the time grows directly with the number of elements you change.
[X] Wrong: "In-place operations are always faster because they use less memory, so time doesn't grow with input size."
[OK] Correct: Even if memory stays the same, the computer still needs to visit each element to change it, so time grows with the number of elements.
Understanding how in-place changes affect speed helps you write efficient code that handles big data smoothly.
"What if we replaced the in-place addition with creating a new array for the result? How would the time complexity change?"