What will be the output of the following code snippet that compares the sum of squares using NumPy and a Python loop?
```python
import numpy as np
import time

arr = np.arange(1_000_000)

start = time.time()
result_numpy = np.sum(arr ** 2)
end = time.time()
numpy_time = end - start

start = time.time()
result_loop = 0
for x in arr:
    result_loop += x ** 2
end = time.time()
loop_time = end - start

print(f"NumPy sum: {result_numpy}, Time: {numpy_time:.4f} seconds")
print(f"Loop sum: {result_loop}, Time: {loop_time:.4f} seconds")
```
NumPy's vectorized operations execute in optimized C code, so they run much faster than equivalent Python-level loops.
Both computations produce the same sum of squares, but NumPy finishes almost instantly while the Python loop, which pays interpreter overhead on every element, takes far longer.
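As a side note, one-shot `time.time()` measurements are noisy. A minimal sketch of a steadier comparison using the standard-library `timeit` module (same million-element array as above):

```python
import timeit

import numpy as np

arr = np.arange(1_000_000)

# Take the best of several repeats to reduce timing noise.
numpy_t = min(timeit.repeat(lambda: np.sum(arr ** 2), number=10, repeat=5)) / 10
loop_t = min(timeit.repeat(lambda: sum(int(x) ** 2 for x in arr), number=1, repeat=3))

print(f"NumPy: {numpy_t:.6f} s per call, loop: {loop_t:.6f} s per call")
```

Taking the minimum over repeats is the usual convention, since the fastest run is the one least disturbed by other system activity.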
Given the code below, what is the shape of the output array?
```python
import numpy as np

def slow_func(x):
    return x ** 2 + 1

vec_func = np.vectorize(slow_func)
arr = np.arange(12).reshape(3, 4)
result = vec_func(arr)
print(result.shape)
```
np.vectorize preserves the input array shape.
np.vectorize applies the function element-wise and returns an array with the same shape as the input, here (3, 4). Note that np.vectorize is a convenience wrapper, not a performance tool: internally it still loops in Python.
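Because np.vectorize is just a convenience loop, the same result can be had with a direct array expression that actually runs in C. A minimal sketch comparing the two on the array from the question:

```python
import numpy as np

def slow_func(x):
    return x ** 2 + 1

vec_func = np.vectorize(slow_func)
arr = np.arange(12).reshape(3, 4)

# Same values, same (3, 4) shape -- but the direct expression runs
# in C, while np.vectorize calls slow_func element-by-element in Python.
vectorized = vec_func(arr)
direct = arr ** 2 + 1
print(np.array_equal(vectorized, direct), vectorized.shape)
```

When the formula can be written as whole-array operations like this, the direct expression is almost always the better choice.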
Consider this code using Numba's JIT to speed up a function. Why might it run slower than the pure NumPy version?
```python
import numpy as np
import time
from numba import jit

@jit(nopython=True)
def sum_squares(arr):
    total = 0
    for i in range(arr.size):
        total += arr[i] ** 2
    return total

arr = np.arange(1_000_000)

start = time.time()
result_numba = sum_squares(arr)  # first call also pays the JIT compilation cost
end = time.time()
numba_time = end - start

start = time.time()
result_numpy = np.sum(arr ** 2)
end = time.time()
numpy_time = end - start

print(f"Numba time: {numba_time:.4f} seconds")
print(f"NumPy time: {numpy_time:.4f} seconds")
```
Think about what happens on the first call to a JIT-compiled function, and remember that NumPy's vectorized kernels are already highly optimized C.
Two effects are at play. First, the very first call to a JIT-compiled function includes compilation time, which inflates the measured Numba time; only subsequent calls run at compiled speed. Second, vectorized NumPy operations are implemented in hand-tuned C code, which a straightforward compiled loop will not necessarily beat.
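A fair benchmark therefore warms the function up once before timing it. A minimal sketch of that pattern; the try/except fallback is an assumption added so the snippet also runs (slowly, as plain Python) where Numba is not installed:

```python
import time

import numpy as np

try:
    from numba import njit        # njit is shorthand for jit(nopython=True)
except ImportError:               # fallback: run uncompiled if Numba is absent
    def njit(func):
        return func

@njit
def sum_squares(arr):
    total = 0
    for i in range(arr.size):
        total += arr[i] ** 2
    return total

arr = np.arange(1_000_000)

# Warm-up call: triggers JIT compilation so it is excluded from the timing.
sum_squares(arr)

start = time.perf_counter()
result = sum_squares(arr)
elapsed = time.perf_counter() - start
print(f"Result: {result}, warm time: {elapsed:.6f} s")
```

Note also the use of `time.perf_counter()`, which has higher resolution than `time.time()` and is the recommended clock for benchmarking.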
Which scenario is the best reason to consider alternatives like Numba or Cython instead of pure NumPy?
Think about when vectorization is not possible or practical.
NumPy is fast for operations that can be expressed as whole-array (vectorized) computations. When the logic is inherently sequential or too complex to vectorize, tools like Numba or Cython can compile the explicit loop and recover near-C speed.
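A classic case where vectorization breaks down is a recurrence, where each output depends on the previous one. The `ewma` function below is a hypothetical example (an exponentially weighted moving average, not from the original text) of exactly the kind of sequential loop that Numba or Cython can compile:

```python
import numpy as np

def ewma(x, alpha=0.1):
    """Exponentially weighted moving average.

    Each output depends on the previous output, so the loop cannot be
    replaced by a single element-wise NumPy expression.
    """
    out = np.empty_like(x, dtype=np.float64)
    acc = x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1 - alpha) * acc
        out[i] = acc
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
print(ewma(x))
```

Because the loop body is simple arithmetic on scalars, decorating such a function with Numba's `@njit` typically compiles it without changes.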
You have a custom function that applies a complex formula element-wise on a large NumPy array. The pure Python loop is too slow. Which approach will most likely give the best speedup?
Consider which method compiles loops to fast machine code.
Numba's JIT with nopython=True compiles the loop to machine code, typically giving the largest speedup for complex element-wise functions on large arrays (after the one-time compilation cost on the first call).
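A minimal sketch of that approach. The formula inside `complex_formula` is a made-up stand-in for whatever per-element computation is too slow in pure Python, and the try/except fallback (an assumption, not part of Numba's API) lets the snippet run uncompiled where Numba is unavailable:

```python
import numpy as np

try:
    from numba import njit        # njit is shorthand for jit(nopython=True)
except ImportError:               # fallback: run uncompiled if Numba is absent
    def njit(func):
        return func

@njit
def complex_formula(arr):
    out = np.empty_like(arr)
    for i in range(arr.size):
        x = arr[i]
        # Hypothetical "complex" per-element formula.
        out[i] = np.sin(x) * np.exp(-x * x) + x ** 3
    return out

arr = np.linspace(-1.0, 1.0, 1000)
result = complex_formula(arr)
print(result.shape)
```

After the first (compiling) call, subsequent calls run the loop as machine code, with no temporary arrays allocated between the operations.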