Common broadcasting patterns in NumPy - Time & Space Complexity
When using NumPy, broadcasting lets us do math on arrays of different shapes without extra loops.
We want to know how the time to run these operations grows as arrays get bigger.
Analyze the time complexity of the following code snippet.
```python
import numpy as np

A = np.ones((1000, 1000))
B = np.arange(1000).reshape(1000, 1)
C = A + B  # Broadcasting stretches B's single column across A's 1000 columns
```
This code adds a 1000x1000 array to a 1000x1 array by broadcasting the smaller array.
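To see the broadcast in action, here is the same pattern at a small size (a 4x4 stand-in for the 1000x1000 case) so the result can be inspected by eye:

```python
import numpy as np

A = np.ones((4, 4))              # small stand-in for the 1000x1000 array
B = np.arange(4).reshape(4, 1)   # column vector: values 0, 1, 2, 3
C = A + B                        # B's single column is reused for every column of A

print(C.shape)  # (4, 4) — the output takes the larger, broadcast shape
print(C)        # row i holds the value 1 + i in every position
```

Every element of the 4x4 output is computed, even though B only stores 4 values.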
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Element-wise addition over the full 1000x1000 array.
- How many times: Once for each of the 1,000,000 elements in the result.
As the arrays get bigger, the number of additions grows with the total number of elements in the output.
| n (array side length) | Output Shape | Approx. Operations |
|---|---|---|
| 10 | 10x10 | 100 |
| 100 | 100x100 | 10,000 |
| 1000 | 1000x1000 | 1,000,000 |
Pattern observation: Operations grow roughly with the product of the output array dimensions.
Time Complexity: O(n*m), where n and m are the dimensions of the output array (O(n^2) for the square case above).
This means the time grows in proportion to the total number of elements in the output array, not the size of the smaller input.
[X] Wrong: "Broadcasting makes the operation run only as fast as the smaller array."
[OK] Correct: Even though broadcasting avoids copying data, the operation still processes every element in the output array, so time depends on the output size.
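The "no copying" half of this claim can be verified directly: `np.broadcast_to` exposes the same read-only view that broadcasting builds internally, and its strides show that the broadcast axis reuses memory rather than duplicating it.

```python
import numpy as np

B = np.arange(1000).reshape(1000, 1)

# A read-only view with the broadcast shape — no 1000x1000 buffer is allocated.
B_view = np.broadcast_to(B, (1000, 1000))

print(B_view.shape)       # (1000, 1000)
print(B_view.strides[1])  # 0 — moving along the broadcast axis revisits the same memory
print(np.shares_memory(B, B_view))  # True — the view aliases B's original data
```

So broadcasting is O(1) in extra memory for the stretched operand, yet still O(n*m) in time once an operation consumes every element of the view.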
Understanding broadcasting time helps you explain how vectorized code scales and why it's efficient compared to loops.
What if we changed the smaller array to have shape (1, m) instead of (n, 1)? How would the time complexity change?
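One way to probe this question empirically is to compare output shapes — a minimal sketch, reusing the arrays from the snippet above with the smaller operand transposed to a row:

```python
import numpy as np

A = np.ones((1000, 1000))
B_row = np.arange(1000).reshape(1, 1000)  # shape (1, m) instead of (n, 1)

C = A + B_row  # the single row is now reused for every row of A
print(C.shape)  # (1000, 1000) — same output size either way
```

The output shape, and therefore the element count driving the running time, is unchanged; only the axis along which the smaller array is stretched differs.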