Minimizing multivariate functions (minimize) in SciPy - Time & Space Complexity
When we use SciPy's `minimize` to find the lowest point of a function of many variables, the optimizer works iteratively, taking a sequence of steps toward the minimum.
We want to know how the cost of those steps grows as the problem size grows.
Analyze the time complexity of the following code snippet.
```python
from scipy.optimize import minimize
import numpy as np

def func(x):
    return np.sum(x**2)

x0 = np.ones(100)  # starting point with 100 variables, away from the minimum
result = minimize(func, x0, method='BFGS')
print(result.success)
```
This code tries to find the minimum of a function with 100 variables using the BFGS method.
Identify the repeated work: the loops, recursion, and array traversals that run on every iteration.
- Primary operation: Each iteration computes the function value and its gradient over all variables.
- How many times: The optimizer repeats these calculations many times until it converges.
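To see this repetition concretely, here is a small sketch (the counting wrapper and the shifted starting point are illustrative additions, not part of the original snippet) that records how many times BFGS calls the objective. When no analytic gradient is supplied, SciPy approximates it by finite differences, which costs extra function evaluations on every iteration:

```python
# Sketch: count how many times BFGS evaluates the objective.
import numpy as np
from scipy.optimize import minimize

calls = {"n": 0}  # mutable counter shared with the wrapper

def func(x):
    calls["n"] += 1
    return np.sum(x**2)  # O(n): traverses all variables on every call

x0 = np.ones(100)  # start away from the minimum so iterations actually happen
result = minimize(func, x0, method='BFGS')
print(result.nit, calls["n"])  # iterations vs. total function evaluations
```

On a typical run the call count is far larger than the iteration count, because each finite-difference gradient costs on the order of n function evaluations.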
As the number of variables n grows, each iteration takes longer: evaluating the function touches all n variables, approximating the gradient by finite differences costs on the order of n extra function evaluations, and BFGS updates an n x n approximation of the inverse Hessian.
| Input Size (n) | Approx. Operations per Iteration (~n^2) |
|---|---|
| 10 | ~100 |
| 100 | ~10,000 |
| 1000 | ~1,000,000 |
Pattern observation: The work per iteration grows roughly with the square of the number of variables.
Time Complexity: O(n^2) per iteration
This means the work for a single BFGS iteration grows roughly with the square of the number of variables; the total runtime also depends on how many iterations are needed to converge.
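The n x n cost is easiest to see in the BFGS update itself. The sketch below is a simplified version of the standard inverse-Hessian update formula, not SciPy's internal code; written in expanded form, every term is a matrix-vector product or an outer product, each costing O(n^2):

```python
# Sketch of the BFGS inverse-Hessian update (textbook formula, not scipy internals).
import numpy as np

def bfgs_update(H, s, y):
    """Update the inverse-Hessian approximation H given step s and gradient change y.

    This is the rank-two update (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,
    expanded so that every term is O(n^2) rather than a naive O(n^3) matmul.
    """
    rho = 1.0 / (y @ s)      # O(n) dot product
    Hy = H @ y               # O(n^2) matrix-vector product
    return (H
            - rho * np.outer(s, Hy)                      # O(n^2) outer product
            - rho * np.outer(Hy, s)                      # O(n^2) outer product
            + (rho**2 * (y @ Hy) + rho) * np.outer(s, s))  # O(n^2) outer product

# Quick check on a 3-variable problem: the update still produces an n x n matrix.
H_new = bfgs_update(np.eye(3), np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]))
print(H_new.shape)  # the n x n matrix we pay O(n^2) per iteration to maintain
```

Storing and updating this n x n matrix is also why BFGS needs O(n^2) memory, which is the usual motivation for limited-memory variants such as L-BFGS on large problems.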
[X] Wrong: "The time to minimize grows linearly with the number of variables."
[OK] Correct: Because methods like BFGS update matrices of size n x n, the work per step grows faster than just the number of variables.
Understanding how optimization time grows helps you explain choices in real problems and shows you know what affects performance.
What if we changed the method from BFGS to a simpler gradient descent? How would the time complexity change?
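As a hedged sketch of that comparison: plain gradient descent keeps no n x n matrix, so each step is just an O(n) vector update (the step size and iteration count below are illustrative choices for this quadratic, not general-purpose defaults):

```python
# Sketch: plain gradient descent on the same quadratic f(x) = sum(x**2).
# Each iteration only touches the length-n gradient vector, so the
# per-step cost is O(n) instead of BFGS's O(n^2) matrix update.
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)  # one O(n) vector update per iteration
    return x

x = gradient_descent(lambda x: 2 * x, np.ones(100))  # gradient of sum(x**2) is 2x
print(np.allclose(x, 0.0))  # prints True: converges to the minimum at the origin
```

So the per-iteration complexity drops from O(n^2) to O(n), but gradient descent typically needs more iterations than BFGS to reach the same accuracy, so the total runtime trade-off depends on the problem.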