NumPy arrays are often faster than Python lists when doing math with many numbers. Why is that?
Think about how data is stored and how operations are done behind the scenes.
NumPy arrays store numbers in a single contiguous block of memory, which makes data access cache-friendly. NumPy also implements its operations in optimized, compiled C code, so a whole-array calculation runs much faster than an equivalent Python loop over a list.
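A quick, informal timing sketch illustrates the difference (exact numbers vary by machine and array size, so treat the timings as indicative only):

```python
import time
import numpy as np

n = 1_000_000
py_list = list(range(n))
np_arr = np.arange(n)

# Pure-Python loop: one interpreted operation per element.
t0 = time.perf_counter()
py_result = [x * 2 for x in py_list]
py_time = time.perf_counter() - t0

# NumPy: a single call into optimized C code over a contiguous buffer.
t0 = time.perf_counter()
np_result = np_arr * 2
np_time = time.perf_counter() - t0

print(f"list comprehension: {py_time:.4f}s, NumPy: {np_time:.4f}s")
```

On most machines the NumPy version is many times faster, and the gap grows with the array size.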
What is the output of this code?
```python
import numpy as np

arr1 = np.array([1, 2, 3])
arr2 = 2
result = arr1 * arr2
print(result)
```
NumPy can multiply arrays by single numbers by applying the operation to each element.
NumPy broadcasts the scalar 2 across the array [1, 2, 3], multiplying each element by 2. The printed output is [2 4 6] (NumPy prints arrays without commas).
Given these arrays, what is the shape of the result after adding them?
```python
import numpy as np

arr1 = np.array([[1, 2, 3], [4, 5, 6]])
arr2 = np.array([10, 20, 30])
result = arr1 + arr2
print(result.shape)
```
Think about how broadcasting works when adding a 2D array and a 1D array.
The 1D array [10, 20, 30] (shape (3,)) is broadcast across each row of the 2D array (shape (2, 3)), so the sum keeps the shape (2, 3) and the code prints (2, 3).
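The broadcasting rule can be checked directly: NumPy aligns the trailing dimensions of the two shapes, and `np.broadcast_shapes` computes the result shape without doing any arithmetic.

```python
import numpy as np

arr1 = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
arr2 = np.array([10, 20, 30])             # shape (3,)

# Broadcasting aligns trailing dimensions: (2, 3) + (3,) -> (2, 3).
result = arr1 + arr2
print(result.shape)   # (2, 3)
print(result)

# Compute the broadcast shape without performing the addition.
print(np.broadcast_shapes((2, 3), (3,)))  # (2, 3)
```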
What error does this code raise?
```python
import numpy as np

arr = np.array([1, 2, 'three', 4])
print(arr * 2)
```
What happens when you multiply a NumPy array of strings by 2?
Because the array contains the string 'three', NumPy stores every element as a string (a fixed-width unicode dtype such as <U21). Unlike Python lists and strings, NumPy does not define * between a string array and an integer, so arr * 2 raises a TypeError (NumPy's UFuncTypeError, reporting that ufunc 'multiply' has no loop matching the string and integer types). It does not repeat the strings.
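A small sketch makes the behavior concrete; it also shows `np.char.multiply`, which does perform Python-style string repetition element-wise if that is what you actually want:

```python
import numpy as np

arr = np.array([1, 2, 'three', 4])
print(arr.dtype)  # a fixed-width unicode dtype, e.g. <U21

# The * operator has no ufunc loop for string arrays and an integer,
# so it raises a TypeError (UFuncTypeError).
try:
    arr * 2
except TypeError as e:
    print("multiplication failed:", e)

# np.char.multiply applies string repetition element-wise instead.
repeated = np.char.multiply(arr, 2)
print(repeated)  # ['11' '22' 'threethree' '44']
```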
You want to multiply two large matrices efficiently. Which approach uses NumPy's strengths best?
NumPy has built-in functions optimized for matrix math.
NumPy's np.dot() function and the @ operator (np.matmul) call into optimized C and BLAS libraries for matrix multiplication, which is far faster than computing each entry with Python loops or map().
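A minimal sketch of the fast path (the matrix sizes here are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((200, 300))
b = rng.random((300, 400))

# The @ operator (np.matmul) dispatches to optimized BLAS routines.
c = a @ b
print(c.shape)  # (200, 400)

# np.dot gives the same result for 2D arrays.
assert np.allclose(c, np.dot(a, b))
```

For 2D arrays, @ and np.dot are equivalent; @ is generally preferred in modern code because it reads like the mathematical notation.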