What is the output of the following code showing the memory size of a numpy array?
import numpy as np

arr = np.arange(1000)
print(arr.nbytes)
On most 64-bit platforms, np.arange returns integers of 8 bytes each (int64) by default; note that on some platforms (e.g. Windows builds of older NumPy) the default integer can be int32 instead.
The array holds 1000 integers at 8 bytes each (int64), so arr.nbytes is 1000 × 8 = 8000, and the code prints 8000.
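One way to sanity-check the answer: nbytes is simply the element count times the per-element size. A minimal sketch, assuming a platform where the default integer is int64:

```python
import numpy as np

arr = np.arange(1000)  # default integer dtype; int64 on most 64-bit platforms
# nbytes always equals size * itemsize
print(arr.size, arr.itemsize, arr.nbytes)
assert arr.nbytes == arr.size * arr.itemsize
```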
Which option shows the correct memory size in bytes for a numpy array of 1000 elements with dtype float64?
import numpy as np

arr = np.arange(1000, dtype=np.float64)
print(arr.nbytes)
float64 uses 8 bytes per element.
1000 elements × 8 bytes each = 8000 bytes, so the code prints 8000.
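For comparison, a short sketch contrasting float64 with the half-width float32 makes the per-element cost visible:

```python
import numpy as np

arr64 = np.arange(1000, dtype=np.float64)
arr32 = np.arange(1000, dtype=np.float32)
print(arr64.nbytes)  # 8000 (1000 elements * 8 bytes)
print(arr32.nbytes)  # 4000 (1000 elements * 4 bytes)
```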
Why is managing memory important when working with large datasets in numpy?
Think about what happens if your computer runs out of memory.
Large arrays can consume gigabytes of RAM. Exceeding available memory forces the operating system to swap to disk (a severe slowdown) or makes NumPy raise a MemoryError, crashing the program. Choosing compact dtypes and releasing arrays you no longer need avoids both problems.
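As a rough illustration of the savings a smaller dtype gives, assuming the values fit in 32 bits:

```python
import numpy as np

big = np.arange(1_000_000, dtype=np.int64)  # 8 bytes per element
small = big.astype(np.int32)                # values fit in 32 bits, so no data is lost
print(big.nbytes)    # 8000000
print(small.nbytes)  # 4000000
assert np.array_equal(big, small)
```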
Which option shows the code that wastes the most memory when creating large arrays?
import numpy as np

# Option A
arr_a = np.arange(1000000, dtype=np.int32)
# Option B
arr_b = np.arange(1000000, dtype=np.int64)
# Option C
arr_c = np.arange(1000000, dtype=np.float64)
# Option D: int16 cannot represent values above 32767, so these
# elements wrap around; memory per element is still 2 bytes
arr_d = np.arange(1000000, dtype=np.int16)
Check the size in bytes of each data type.
int64 (Option B) and float64 (Option C) each use 8 bytes per element, or 8,000,000 bytes for a million elements, versus 4 bytes for int32 and 2 bytes for int16, so Options B and C waste the most memory.
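The per-option sizes can be checked directly. This sketch uses np.zeros instead of np.arange so that Option D's limited int16 range does not matter; the memory footprint depends only on the dtype and the length:

```python
import numpy as np

n = 1_000_000
for label, dtype in [("A", np.int32), ("B", np.int64),
                     ("C", np.float64), ("D", np.int16)]:
    arr = np.zeros(n, dtype=dtype)
    print(f"Option {label} ({arr.dtype.name}): {arr.nbytes:,} bytes")
```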
You have a numpy array of 1 million integers ranging from 0 to 255. Which dtype should you choose to minimize memory usage without losing data?
Consider the range of values and the size of each data type.
Since all values lie between 0 and 255, an unsigned 8-bit integer (uint8) stores each element in 1 byte and represents the full range exactly, cutting memory from 8,000,000 bytes (int64) to 1,000,000 bytes without losing data.
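A sketch of the conversion, first confirming that uint8 covers the range and then that no values change (the random data here is an assumed stand-in for the array in the question):

```python
import numpy as np

info = np.iinfo(np.uint8)
print(info.min, info.max)  # 0 255

rng = np.random.default_rng(0)                 # seeded for reproducibility
values = rng.integers(0, 256, size=1_000_000)  # int64 by default
compact = values.astype(np.uint8)
assert np.array_equal(values, compact)         # no data lost
print(values.nbytes, compact.nbytes)           # 8000000 1000000
```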