ndarray as the core data structure in NumPy - Time & Space Complexity
We want to understand how the time required to work with NumPy's ndarray changes as the data size grows.
How does the number of operations grow as we perform common tasks on ndarrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

n = 10              # example size
arr = np.arange(n)  # create an array of size n
result = arr * 2    # multiply each element by 2
```
This code creates an ndarray of size n and multiplies every element by 2.
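One detail worth noting for the space side of the analysis: `arr * 2` allocates a new array of the same length rather than modifying `arr` in place, so the snippet also uses O(n) extra memory. A minimal sketch to confirm this (the size 1_000 is an arbitrary choice):

```python
import numpy as np

n = 1_000
arr = np.arange(n)
result = arr * 2

# result is a freshly allocated array of the same length as arr,
# so the extra memory used grows linearly with n.
print(result is arr)   # False: a new buffer was allocated
print(result.nbytes)   # byte count grows in direct proportion to n
```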
Look for repeated work done as the array size grows.
- Primary operation: Multiplying each element in the array by 2.
- How many times: Once for each element, so n times.
As the array size n grows, the number of multiplications grows at exactly the same rate.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 multiplications |
| 100 | 100 multiplications |
| 1000 | 1000 multiplications |
Pattern observation: The operations increase directly with n, so doubling n doubles the work.
Time Complexity: O(n)
This means the time to multiply all elements grows linearly with the number of elements.
[X] Wrong: "Since NumPy uses fast C code, the operation is constant time regardless of size."
[OK] Correct: Even though NumPy is fast, it still must touch each element once, so the time grows with n.
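One way to check this empirically is to time the same operation at a few sizes. The sketch below is illustrative only: the sizes and repeat count are arbitrary choices, and the absolute timings will vary by machine, but the ratio between runs should stay close to 2 whenever n doubles:

```python
import numpy as np
import timeit

# Time arr * 2 at increasing sizes; with O(n) work, the measured
# time should roughly double whenever n doubles.
for n in [1_000_000, 2_000_000, 4_000_000]:
    arr = np.arange(n)
    t = timeit.timeit(lambda: arr * 2, number=100)
    print(f"n = {n:>9,}: {t:.4f} s for 100 runs")
```

If the wrong claim above were true, the three printed times would be roughly equal; instead they grow with n, matching the pattern in the table.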
Understanding how ndarray operations scale helps you explain performance in data tasks clearly and confidently.
"What if we replaced element-wise multiplication with a matrix multiplication of two n x n arrays? How would the time complexity change?"