reshape() for changing dimensions in NumPy - Time & Space Complexity
We want to understand how the time needed to reshape an array changes as the array size grows.
Specifically, how does NumPy's reshape() handle bigger arrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000)                # create a 1D array with 1000 elements
reshaped_arr = arr.reshape(100, 10)  # change shape to 100 rows and 10 columns
```
This code creates a 1D array and, because the array is contiguous in memory, reshapes it into a 2D view of the same data without copying.
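A quick way to confirm the no-copy behavior (a small check, not part of the original snippet) is `np.shares_memory` together with the view's `.base` attribute:

```python
import numpy as np

arr = np.arange(1000)
reshaped_arr = arr.reshape(100, 10)

# The reshaped array is a view: both arrays share one underlying buffer.
print(np.shares_memory(arr, reshaped_arr))  # True
print(reshaped_arr.base is arr)             # True: .base points at the source array

# Writing through the view changes the original -- proof that no copy was made.
reshaped_arr[0, 0] = -1
print(arr[0])  # -1
```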
Look for loops or repeated steps inside reshape.
- Primary operation: Adjusting the view metadata to new shape.
- How many times: No element-wise copying or looping over all elements.
Reshape updates the shape metadata but does not move any data, so the running time stays essentially constant.
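Concretely, the "view metadata" is just the array's shape and strides. The sketch below shows that only this bookkeeping differs between the two arrays; the exact stride values depend on the platform's default integer size, so they are derived from `itemsize` rather than hard-coded:

```python
import numpy as np

arr = np.arange(1000)
v = arr.reshape(100, 10)

# Only the bookkeeping differs; the data buffer is untouched.
print(arr.shape, arr.strides)  # e.g. (1000,) (8,) for an 8-byte dtype
print(v.shape, v.strides)      # e.g. (100, 10) (80, 8)

# Strides follow directly from the shape and item size -- no per-element work.
assert v.strides == (10 * arr.itemsize, arr.itemsize)
```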
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Very few operations |
| 100 | Very few operations |
| 1000 | Very few operations |
Pattern observation: Time does not grow with input size because no data copying happens.
Time Complexity: O(1)
This means reshaping a contiguous array takes about the same time no matter how big the array is.
[X] Wrong: "Reshape loops through all elements and takes longer for bigger arrays."
[OK] Correct: Reshape only changes how data is viewed, not the data itself, so it runs quickly regardless of size.
Knowing that reshape is O(1) helps you reason about efficient data handling in real projects.
"What if reshape had to copy data instead of just changing the view? How would the time complexity change?"