# Vectorization over Loops in NumPy: Time & Space Complexity
We want to see how using vectorization instead of explicit Python loops changes the running time of NumPy code.
How does the work grow as we handle bigger data?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

n = 10  # example value for n
arr = np.arange(n)   # array [0, 1, ..., n-1]
squared = arr * arr  # element-wise (vectorized) squaring
```
This code creates an array of numbers and then squares each number using vectorized multiplication.
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Multiplying each element in the array by itself.
- How many times: Once per element; all n multiplications are dispatched together in a single vectorized operation.
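To confirm that the vectorized expression performs the same per-element work as an explicit loop, a quick sketch (the `looped` variable is ours, added for comparison):

```python
import numpy as np

n = 10
arr = np.arange(n)

# Vectorized squaring: one multiplication per element, executed in compiled C.
squared = arr * arr

# The same result computed with an explicit per-element Python loop.
looped = np.array([x * x for x in arr])

# Both approaches perform n multiplications; only the dispatch mechanism differs.
assert np.array_equal(squared, looped)
```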
As the array size grows, the time to square all elements grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 multiplications |
| 100 | 100 multiplications |
| 1000 | 1000 multiplications |
Pattern observation: Doubling the input doubles the work needed.
Time Complexity: O(n)
This means the time to complete the operation grows linearly with the number of elements.
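To see that linear growth on a real machine, one possible timing sketch (the helper name `time_square` is our own; absolute times depend on hardware, so only the trend matters):

```python
import numpy as np
from time import perf_counter

def time_square(n):
    """Return the wall-clock seconds to square an n-element array."""
    arr = np.arange(n)
    start = perf_counter()
    _ = arr * arr  # the O(n) vectorized operation being measured
    return perf_counter() - start

# Rough check of the linear trend (timings vary run to run):
# for n in (10_000, 100_000, 1_000_000):
#     print(n, time_square(n))
```

Doubling `n` should roughly double the measured time once `n` is large enough for setup overhead to be negligible.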
[X] Wrong: "Vectorized operations run instantly no matter the input size."
[OK] Correct: Vectorization speeds things up by doing many operations at once, but it still needs to process each element, so time grows with input size.
Understanding how vectorization changes time complexity helps you write faster code and explain your choices clearly in real projects.
"What if we replaced vectorized multiplication with a Python for-loop to square each element? How would the time complexity change?"
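One way to explore that question is the loop version sketched below. The asymptotic complexity stays O(n), since the body still runs once per element, but every iteration now pays Python interpreter overhead, so the constant factor is much larger than the vectorized version's:

```python
import numpy as np

n = 10
arr = np.arange(n)

# Explicit Python loop: still one multiplication per element, i.e. O(n),
# but each iteration goes through the interpreter instead of compiled C.
squared = np.empty_like(arr)
for i in range(n):
    squared[i] = arr[i] * arr[i]
```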