NumPy Array Foundations Review for SciPy: Time & Space Complexity
When working with NumPy arrays, it is important to understand how the time required for basic operations grows as the array size increases.
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(n)    # create an array [0, 1, ..., n-1]
result = np.sum(arr)  # add up all n elements
```
This code creates a NumPy array of size n and then sums all its elements.
- Primary operation: Summing each element in the array.
- How many times: Once for each of the n elements.
As the array size grows, the number of operations grows proportionally.
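Conceptually, `np.sum` is equivalent to a single pass over the array with a running total. The real implementation is vectorized C and far faster in practice, but the operation count is the same, which is a sketch of why the work is proportional to n:

```python
import numpy as np

n = 1000
arr = np.arange(n)

# Conceptual equivalent of np.sum: one addition per element.
total = 0
for x in arr:
    total += x

# Both approaches perform on the order of n additions.
assert total == np.sum(arr)
```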
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 sums |
| 100 | 100 sums |
| 1000 | 1000 sums |
Pattern observation: Doubling the array size roughly doubles the work needed.
Time Complexity: O(n)
This means the time to sum the array grows linearly with the number of elements.
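A rough timing sketch can make the linear growth visible. Absolute times depend on your machine, but for large arrays, doubling n should roughly double the elapsed time:

```python
import time
import numpy as np

# Time np.sum at doubling sizes; expect roughly doubling times.
for n in (1_000_000, 2_000_000, 4_000_000):
    arr = np.arange(n)
    start = time.perf_counter()
    np.sum(arr)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9}: {elapsed:.5f}s")
```

Small arrays may not show the pattern cleanly because fixed overhead (function call, dispatch) dominates; the linear trend emerges as n grows.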
[X] Wrong: "NumPy sum is instant no matter the array size because it is optimized."
[OK] Correct: Even though NumPy is fast, it still needs to look at each element once to add it, so time grows with array size.
Understanding how array operations scale helps you explain performance in data tasks and shows you know how to handle large data efficiently.
What if we used np.sum on a 2D array with shape (n, n)? How would the time complexity change?
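As a hint toward the answer: a 2D array with shape (n, n) holds n² elements, and `np.sum` must still visit each one. So the complexity is O(n²) in terms of n, although it remains linear in the total number of elements. A minimal sketch:

```python
import numpy as np

n = 100
matrix = np.arange(n * n).reshape(n, n)  # (n, n) array holding n*n elements

# np.sum visits all n*n elements: O(n^2) in n,
# but still O(m) where m = n*n is the element count.
total = np.sum(matrix)
assert total == (n * n) * (n * n - 1) // 2  # sum of 0..(n*n - 1)
```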