np.vectorize() for custom functions in NumPy - Time & Space Complexity
We want to understand how the running time of a custom function applied to an array of numbers scales when using np.vectorize().
How does the work grow when we apply the function to bigger arrays?
Analyze the time complexity of the following code snippet.
import numpy as np
def my_func(x):
    return x ** 2 + 1
n = 10
vec_func = np.vectorize(my_func)
arr = np.arange(n)
result = vec_func(arr)
This code applies a custom function to each element of an array using np.vectorize.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: applying my_func to each element of the array.
- How many times: once per element of the input array (n times).
As the array size grows, the function runs once per element, so the total work grows in direct proportion to the number of elements.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 function calls |
| 100 | 100 function calls |
| 1000 | 1000 function calls |
Pattern observation: Doubling the input size roughly doubles the work done.
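One way to check this pattern empirically is to count how many times the Python function actually runs for different array sizes. This sketch adds a call counter to my_func (the counter is an illustration, not part of the original snippet):

```python
import numpy as np

# Count Python-level calls to the function, to confirm that
# np.vectorize invokes it once per element (linear in n).
call_count = 0

def my_func(x):
    global call_count
    call_count += 1
    return x ** 2 + 1

# otypes is specified so np.vectorize skips the extra probe call
# it would otherwise make to infer the output dtype.
vec_func = np.vectorize(my_func, otypes=[np.int64])

counts = []
for n in (10, 100, 1000):
    call_count = 0
    vec_func(np.arange(n))
    counts.append(call_count)

print(counts)
```

The counts should match the input sizes, confirming one call per element.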
Time Complexity: O(n)
This means the time to run grows linearly with the number of elements in the array.
[X] Wrong: "np.vectorize makes the function run faster by doing things all at once like built-in numpy operations."
[OK] Correct: np.vectorize is a convenience tool that still calls the function once per element in Python, so it does not speed up the function itself.
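A quick way to see that np.vectorize is essentially a loop in disguise is to compare its output with an explicit Python loop over the same function; this is a minimal sketch:

```python
import numpy as np

def my_func(x):
    return x ** 2 + 1

arr = np.arange(10)

# np.vectorize produces the same result as an explicit Python loop;
# under the hood it also calls my_func once per element.
vec_result = np.vectorize(my_func)(arr)
loop_result = np.array([my_func(x) for x in arr])

print(np.array_equal(vec_result, loop_result))  # True
```

Both versions do n Python-level function calls, so neither gains the speed of NumPy's compiled loops.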
Understanding how np.vectorize works helps you explain performance when using custom functions on arrays, a useful skill for data tasks and coding interviews.
What if we replaced np.vectorize with a true numpy ufunc? How would the time complexity change?
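As a sketch of that alternative: this particular function can be expressed directly with NumPy's built-in ufuncs (power and add), which run their per-element loops in compiled code. The time complexity is still O(n), but the constant factor per element is much smaller because no Python-level function call happens per element.

```python
import numpy as np

arr = np.arange(10)

# The same computation written with built-in ufuncs: the squaring and
# the addition each loop over the array in compiled C code.
ufunc_result = arr ** 2 + 1

print(ufunc_result)
```

So replacing np.vectorize with a true ufunc changes the constant factor, not the asymptotic growth rate.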