np.empty() for uninitialized arrays in NumPy - Time & Space Complexity
We want to understand how the time to create an uninitialized array with np.empty() changes as the array size grows.
Specifically, how does the work done scale when making bigger arrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

size = 1000000
arr = np.empty(size, dtype=float)
```
This code creates a large uninitialized array of floats with one million elements.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: allocating memory for `size` elements.
- How many times: once, but the cost of that single allocation internally depends on the number of elements.
Creating an array with np.empty() means reserving space for all elements without setting values.
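A quick sketch illustrates this: the array returned by `np.empty()` has its shape and dtype fixed, but its contents are whatever bytes happened to occupy the allocated memory.

```python
import numpy as np

# np.empty reserves space but does not initialize it.
arr = np.empty(5, dtype=float)

print(arr.shape)  # shape is set: (5,)
print(arr.dtype)  # dtype is set: float64
print(arr)        # values are arbitrary "garbage" left over in memory
```

Because the values are unpredictable, always write to every element of an `np.empty()` array before reading from it.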
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 memory slots reserved |
| 100 | 100 memory slots reserved |
| 1000 | 1000 memory slots reserved |
Pattern observation: The work grows directly with the number of elements because memory must be allocated for each.
Time Complexity: O(n)
This means the time to create the array grows linearly with the number of elements.
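A rough, machine-dependent timing sketch can make this concrete. The absolute numbers will vary by machine and allocator, and for very large arrays the OS may hand out memory lazily, so measured times can grow more slowly than strictly linearly; the point is only to observe the trend as `n` grows.

```python
import timeit
import numpy as np

# Time np.empty() for growing sizes (numbers are machine-dependent).
for n in (10_000, 100_000, 1_000_000):
    t = timeit.timeit(lambda: np.empty(n, dtype=float), number=1000)
    print(f"n={n:>9}: {t * 1e6 / 1000:.2f} us per call")
```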
[X] Wrong: "np.empty() is instant and does not depend on size because it does not initialize values."
[OK] Correct: Even though values are not set, memory allocation still depends on the number of elements, so time grows with size.
Understanding how memory allocation time scales helps you reason about performance in data science tasks, especially when working with large datasets.
"What if we used np.zeros() instead of np.empty()? How would the time complexity change?"