Monitoring memory usage in NumPy - Time & Space Complexity
When we monitor memory usage in NumPy, we want to know how the cost of checking memory grows as the data grows.
We ask: how much work does it take to measure memory for bigger arrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

def memory_usage(arr):
    return arr.nbytes

large_array = np.arange(1000000)
usage = memory_usage(large_array)
```
This code creates a large NumPy array and checks its memory size in bytes.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Accessing the stored byte size property of the array.
- How many times: Exactly once; no loops or array traversals occur during this check.
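To see why this is a single metadata lookup, note that NumPy derives the byte count from two stored attributes, `size` and `itemsize`, without reading any element. A minimal sketch illustrating this relationship:

```python
import numpy as np

arr = np.arange(1000, dtype=np.int64)

# nbytes is computed from stored metadata: element count times bytes per element.
# No element of the array is ever read to answer this question.
assert arr.nbytes == arr.size * arr.itemsize  # 1000 elements * 8 bytes = 8000
```

Because both `size` and `itemsize` are fixed properties set when the array is created, multiplying them is a constant-time operation regardless of how large the array is.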
Explain the growth pattern intuitively.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1 |
| 100 | 1 |
| 1000 | 1 |
Pattern observation: Checking memory size is a simple property access and does not grow with array size.
Time Complexity: O(1)
This means checking memory usage takes the same small amount of time no matter how big the array is.
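One way to check the O(1) claim empirically is to time repeated `nbytes` lookups on arrays of very different sizes using the standard `timeit` module. This is a rough sketch; exact timings vary by machine, but the per-call cost should not grow with the array size:

```python
import timeit
import numpy as np

for n in (10, 1_000, 1_000_000):
    arr = np.arange(n)
    # Time 10,000 repeated nbytes lookups on an array of n elements.
    t = timeit.timeit(lambda: arr.nbytes, number=10_000)
    print(f"n={n:>9}: {t:.4f}s for 10,000 lookups")
```

If `nbytes` required a traversal, the timing for the million-element array would be roughly 100,000 times larger than for the ten-element array; instead, the three timings come out in the same ballpark.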
[X] Wrong: "Measuring memory usage requires looking at every element in the array."
[OK] Correct: NumPy stores the total byte size as array metadata, so it does not need to inspect each element to report the memory used.
Understanding how simple property access differs from looping helps you explain efficient monitoring in real projects.
"What if we wrote a function that sums all elements to estimate memory? How would the time complexity change?"
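As one way to explore that question, here is a sketch of a hypothetical function (the name `estimated_memory_by_looping` is illustrative) that visits every element, turning the O(1) lookup into an O(n) traversal:

```python
import numpy as np

def estimated_memory_by_looping(arr):
    # Hypothetical alternative: touch every element to tally memory.
    # The loop body runs once per element, so the cost grows linearly with n.
    total = 0
    for element in arr:
        total += element.itemsize  # bytes occupied by this one element
    return total

arr = np.arange(1000, dtype=np.int64)
# Produces the same number as arr.nbytes, but in O(n) time instead of O(1).
print(estimated_memory_by_looping(arr))  # 8000
```

The result matches `arr.nbytes`, but the work done now scales with the number of elements: doubling the array doubles the loop iterations, so the time complexity becomes O(n).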