np.unravel_index() for multi-dim positions in NumPy - Time & Space Complexity
We want to understand how the time needed to convert flat indices into multi-dimensional positions grows when using np.unravel_index().
Specifically, how does the amount of work change as the input size increases?
Analyze the time complexity of the following code snippet.
import numpy as np
# Flat indices into an array of shape (4, 5, 6) -- 120 elements in total.
indices = np.array([5, 23, 45, 67])
shape = (4, 5, 6)
# Convert each flat index into an (i, j, k) coordinate.
positions = np.unravel_index(indices, shape)
print(positions)
This code converts flat indices into multi-dimensional positions based on the given shape.
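To make the conversion concrete, here is a small round-trip check using the same indices and shape. np.ravel_multi_index is the inverse operation, so mapping the positions back should recover the original flat indices exactly:

```python
import numpy as np

indices = np.array([5, 23, 45, 67])
shape = (4, 5, 6)

# Flat index -> (i, j, k) coordinates in a 4x5x6 array.
positions = np.unravel_index(indices, shape)
print(positions)

# The inverse operation maps the coordinates back to flat indices.
roundtrip = np.ravel_multi_index(positions, shape)
assert np.array_equal(roundtrip, indices)
```

For example, flat index 67 in shape (4, 5, 6) lands at coordinate (2, 1, 1), since 67 = 2*30 + 1*6 + 1.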
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: For each index, the function calculates its position by dividing and taking remainders for each dimension.
- How many times: These division/remainder steps run once per dimension for every index, so n indices with d dimensions give roughly n * d operations.
As the number of indices or the number of dimensions grows, the work increases proportionally.
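A pure-Python sketch of the conversion makes this n * d work explicit. This is a simplified model of the calculation (assuming C-order indexing), not NumPy's actual implementation: the outer loop runs once per index, and the inner loop does one divmod per dimension.

```python
def unravel_one(flat, shape):
    """Convert one flat index into a coordinate tuple (C order)."""
    coords = []
    for dim in reversed(shape):        # one divmod per dimension -> d steps
        flat, rem = divmod(flat, dim)
        coords.append(rem)
    return tuple(reversed(coords))

def unravel_all(indices, shape):
    # One call per index -> n iterations of d divmods each: O(n * d).
    return [unravel_one(i, shape) for i in indices]

print(unravel_all([5, 23, 45, 67], (4, 5, 6)))
# -> [(0, 0, 5), (0, 3, 5), (1, 2, 3), (2, 1, 1)]
```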
| Input Size (n = number of indices) | Dimensions (d) | Approx. Operations |
|---|---|---|
| 10 | 3 | About 30 (10 * 3) |
| 100 | 3 | About 300 (100 * 3) |
| 1000 | 3 | About 3000 (1000 * 3) |
Pattern observation: The total work grows linearly with both the number of indices and the number of dimensions.
Time Complexity: O(n * d)
This means the time grows in direct proportion to the number of indices and the number of dimensions.
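A rough timing experiment can confirm the linear growth in n. The sizes below are arbitrary choices for illustration, and absolute times will vary by machine; the point is that each tenfold increase in n should increase the elapsed time by roughly tenfold as well.

```python
import time
import numpy as np

shape = (100, 100, 100)  # d = 3 dimensions, 1,000,000 elements
for n in (10_000, 100_000, 1_000_000):
    indices = np.random.randint(0, 1_000_000, size=n)
    start = time.perf_counter()
    np.unravel_index(indices, shape)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9}: {elapsed:.6f} s")
```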
[X] Wrong: "The time to convert indices does not depend on the number of dimensions."
[OK] Correct: Each dimension requires a division and remainder operation, so more dimensions mean more work per index.
Understanding how functions like np.unravel_index() scale helps you reason about performance in data processing tasks.
What if we changed the input to a single large index instead of many small ones? How would the time complexity change?
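As a hint: a single flat index still needs one division-and-remainder step per dimension, so the work drops to O(d), independent of how large the index value is. A quick sketch with the same shape as above:

```python
import numpy as np

# One flat index -> d operations; the magnitude of the index doesn't matter.
position = np.unravel_index(67, (4, 5, 6))
print(position)  # a tuple of three per-dimension coordinates
```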