np.mean() for average in NumPy - Time & Space Complexity
We want to understand how the time to calculate an average using np.mean() changes as the data size grows.
How does the work needed grow when we have more numbers to average?
Analyze the time complexity of the following code snippet.
import numpy as np

arr = np.array([1, 2, 3, 4, 5])  # 1-D array with n = 5 elements
avg = np.mean(arr)               # sums all n elements, then divides by n
print(avg)                       # 3.0
This code creates a numpy array and calculates its average value using np.mean().
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Summing all elements in the array.
- How many times: Once for each element in the array.
As the number of elements grows, the time to add them all grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 additions |
| 100 | 100 additions |
| 1000 | 1000 additions |
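The linear pattern in the table can be checked empirically. Below is an illustrative micro-benchmark (the sizes chosen and the variable names are assumptions, not from the original); on most machines the measured time grows roughly tenfold as the array grows tenfold.

```python
import time

import numpy as np

# Time np.mean() at increasing sizes; each step is 10x larger.
for n in [100_000, 1_000_000, 10_000_000]:
    arr = np.random.rand(n)
    start = time.perf_counter()
    avg = np.mean(arr)
    elapsed = time.perf_counter() - start
    print(f"n={n:>10,}: {elapsed:.6f} s")
```

Exact timings vary with hardware and caching, but the trend should be close to linear.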
Pattern observation: The work grows directly with the number of elements.
Time Complexity: O(n)
Space Complexity: O(1) auxiliary — beyond the input array itself, only a constant amount of extra memory is needed for the running total and the result.
This means the time to find the average grows linearly: doubling the number of elements roughly doubles the work.
[X] Wrong: "Calculating the average is instant no matter how big the array is."
[OK] Correct: The function must look at every number to add them up, so more numbers mean more work.
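To see why every element must be visited, here is a plain-Python equivalent of taking a mean (illustrative only — not NumPy's actual implementation, which runs the same loop in optimized C):

```python
def manual_mean(values):
    """Average a sequence by visiting every element once: O(n) time."""
    total = 0.0
    for v in values:   # n iterations, one addition each
        total += v
    return total / len(values)

print(manual_mean([1, 2, 3, 4, 5]))  # 3.0
```

Skipping any element would change the sum, so there is no way to compute an exact average in less than O(n) time.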
Knowing how simple operations like averaging scale helps you explain efficiency clearly and shows you understand how data size affects performance.
"What if we used np.mean() on a 2D array instead of 1D? How would the time complexity change?"