When NumPy is not fast enough - Time & Space Complexity
Sometimes even fast tools like NumPy slow down when the data grows large.
We want to understand how the time to run a piece of code changes as the input size grows.
Analyze the time complexity of the following code snippet.
```python
import numpy as np

def pairwise_sum(arr):
    n = arr.size
    # Preallocate room for all n*(n-1)/2 unique pairs
    result = np.zeros(n * (n - 1) // 2)
    idx = 0
    for i in range(n):               # outer loop: n passes
        for j in range(i + 1, n):    # inner loop: shrinks each pass
            result[idx] = arr[i] + arr[j]
            idx += 1
    return result

arr = np.arange(1000)
sums = pairwise_sum(arr)  # 499,500 pairwise sums
```
This code computes the sum of every unique pair of elements in the array.
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: nested loops adding pairs of elements.
- How many times: the inner loop gets shorter on each outer pass, but the pairs add up to n*(n-1)/2 in total (checked by the short count below).
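To see where that figure comes from, here is a minimal sketch (sizes chosen to match the table below) that counts the inner-loop iterations directly and checks them against the closed-form pair count:

```python
# Count how many times the inner-loop body would run for a few sizes
# and compare against the closed-form pair count n*(n-1)/2.
for n in (10, 100, 1000):
    ops = sum(1 for i in range(n) for j in range(i + 1, n))
    assert ops == n * (n - 1) // 2
    print(n, ops)   # 10 -> 45, 100 -> 4950, 1000 -> 499500
```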
Explain the growth pattern intuitively.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 45 |
| 100 | 4,950 |
| 1000 | 499,500 |
Pattern observation: The number of operations grows roughly with the square of the input size.
Time Complexity: O(n²)
This means that doubling the input size roughly quadruples the work, so the code slows down quickly as n grows.
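A rough timing sketch makes that quadrupling visible; this assumes the `pairwise_sum` function above is defined in the same session, and the absolute times (and the exact ratio) depend on your machine:

```python
import time
import numpy as np

def time_pairwise(n):
    # Time one call of the nested-loop pairwise_sum defined above
    arr = np.arange(n)
    start = time.perf_counter()
    pairwise_sum(arr)
    return time.perf_counter() - start

t1 = time_pairwise(1000)
t2 = time_pairwise(2000)
# Doubling n should roughly quadruple the runtime (machine-dependent)
print(f"n=1000: {t1:.3f}s  n=2000: {t2:.3f}s  ratio ~ {t2 / t1:.1f}")
```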
[X] Wrong: "NumPy always runs fast no matter what."
[OK] Correct: Even fast tools do more work as the input grows, especially when nested loops multiply the number of operations.
Understanding how nested operations grow helps you explain performance and choose better methods in real projects.
"What if we replaced the nested loops with a vectorized NumPy operation? How would the time complexity change?"