Scalar operations on arrays in NumPy - Time & Space Complexity
We want to understand how the running time of scalar operations on an array grows as the array gets bigger. When we add or multiply a number with every element, how does the number of calculations grow with the array's length?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000)   # array of 1000 elements: 0, 1, ..., 999
result = arr * 5        # multiplies every element by 5
result2 = arr + 10      # adds 10 to every element
```
This code creates an array and then multiplies and adds a scalar to every element.
- Primary operation: Multiplying and adding a scalar to each element in the array.
- How many times: Once for each element in the array, so as many times as the array length.
As the array size grows, the number of operations grows in direct proportion.
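To make the per-element work concrete, this small sketch checks that the multiplication produces one output per input element, and that each output position holds its own input times the scalar:

```python
import numpy as np

arr = np.arange(1000)
result = arr * 5

# One output element per input element: n inputs -> n multiplications.
assert result.size == arr.size

# Spot-check a few positions: result[i] == arr[i] * 5 for every i.
for i in (0, 1, 499, 999):
    assert result[i] == arr[i] * 5
```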
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 multiplications and 10 additions |
| 100 | About 100 multiplications and 100 additions |
| 1000 | About 1000 multiplications and 1000 additions |
Pattern observation: The work grows directly with the number of elements. Double the elements, double the work.
Time Complexity: O(n)
This means the time for scalar operations grows linearly with the array size.
Space Complexity: O(n) as well, since each operation allocates a new result array with one element per input element.
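A rough way to observe this linearity is to time the operation at a few sizes. Exact numbers vary by machine, so this is only a sketch using `time.perf_counter`; doubling n should roughly double the elapsed time:

```python
import time
import numpy as np

for n in (100_000, 200_000, 400_000):
    arr = np.arange(n)
    start = time.perf_counter()
    for _ in range(100):        # repeat to get a measurable duration
        _ = arr * 5
    elapsed = time.perf_counter() - start
    print(f"n={n:>7}: {elapsed:.4f}s")
```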
[X] Wrong: "Scalar operations on arrays are constant time because it's just one operation."
[OK] Correct: Even though it looks like one operation, it actually applies to every element, so the total work grows with the array size.
Understanding how array operations scale helps you explain performance in data tasks and shows you can reason about efficiency clearly.
"What if we used a loop in Python to multiply each element instead of numpy's vectorized operation? How would the time complexity change?"