Views share memory with originals in NumPy - Time & Space Complexity
We want to understand how fast operations run when using views in NumPy.
Specifically, how does sharing memory affect the time it takes to work with arrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000000)   # large array of one million integers
view = arr[1000:2000]      # slicing creates a view, not a copy
view[0] = 999              # modify one element through the view
```
This code creates a large array, makes a view of a slice, and modifies one element in the view.
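To see that the view really shares memory with the original, we can extend the snippet with one extra check: the write through `view` is visible in `arr` itself.

```python
import numpy as np

arr = np.arange(1000000)
view = arr[1000:2000]   # a view: no data is copied
view[0] = 999           # write through the view

# view[0] aliases arr[1000], so the original array reflects the change.
print(arr[1000])  # 999
```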
Identify any loops, recursion, or array traversals that repeat.
- Primary operation: Creating the view slice and modifying one element.
- How many times: The slice operation records an offset and length but does not touch or copy the underlying data; modifying one element is a single operation.
Creating a view does not copy data, so time does not grow with slice size.
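One way to confirm that no copy happens, regardless of slice size, is to inspect the `.base` attribute NumPy sets on views: it points back to the array that owns the memory.

```python
import numpy as np

arr = np.arange(1000000)
small = arr[10:20]       # 10-element slice
large = arr[0:900000]    # 900,000-element slice

# Both slices are views of arr: .base points to the original array,
# so no data was copied no matter how large the slice is.
print(small.base is arr)  # True
print(large.base is arr)  # True
```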
| Slice Size (n) | Approx. Operations |
|---|---|
| 10 | Few operations, constant time |
| 100 | Few operations, constant time |
| 1000 | Few operations, constant time |
Pattern observation: Time stays about the same no matter how big the slice is.
Time Complexity: O(1)
This means creating and modifying a view takes constant time, regardless of the slice size.
[X] Wrong: "Creating a view copies all the data, so it takes longer for bigger slices."
[OK] Correct: Views share memory with the original array, so no data is copied and time stays constant.
Understanding views helps you write faster code and reason clearly about memory use, a valuable skill in data work.
"What if we used a copy instead of a view? How would the time complexity change?"