Simulated annealing (dual_annealing) in SciPy - Time & Space Complexity
We want to understand how the time needed to find a solution using simulated annealing grows as the problem size increases.
Specifically, how does the number of steps in dual_annealing change with input size?
Analyze the time complexity of the following code snippet.
```python
from scipy.optimize import dual_annealing

n = 10

def objective(x):
    return sum((x - 2) ** 2)

bounds = [(-5, 5)] * n
result = dual_annealing(objective, bounds)
```
This code tries to find the minimum of a simple function using simulated annealing over n variables.
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: Evaluating the objective function many times during the annealing process.
- How many times: The number of function evaluations, call it k, depends on the iteration budget (`maxiter`, default 1000) and the temperature schedule; in practice it also tends to grow with the number of variables, since candidate moves and local-search steps touch every coordinate.
As the number of variables n increases, each function evaluation takes longer because it sums over more elements.
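One way to see this empirically is to count evaluations directly. SciPy reports the total in `result.nfev`, and we can cross-check it with our own counter wrapped around the objective. This is a quick sketch, not a benchmark; `maxiter=20` is an assumption chosen only to keep the run short (the default budget is 1000 iterations):

```python
import numpy as np
from scipy.optimize import dual_annealing

def make_counting_objective():
    # Wrap the objective so we can count calls ourselves;
    # result.nfev reports the same total from SciPy's side.
    calls = {"count": 0}
    def objective(x):
        calls["count"] += 1
        return np.sum((x - 2) ** 2)
    return objective, calls

for n in (2, 5, 10):
    objective, calls = make_counting_objective()
    bounds = [(-5, 5)] * n
    # maxiter=20 keeps the demo fast; defaults run far longer.
    result = dual_annealing(objective, bounds, maxiter=20)
    print(f"n={n}: nfev={result.nfev}")
```

The printed `nfev` values are what the table below calls "function evaluations"; each one costs O(n) work inside the objective.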
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Thousands of function evaluations × 10 operations each |
| 100 | Thousands of function evaluations × 100 operations each |
| 1000 | Thousands of function evaluations × 1000 operations each |
Pattern observation: The total work grows roughly as the number of function evaluations times the cost of each evaluation, and that per-evaluation cost is linear in the number of variables.
Time Complexity: O(n * k)
This means the time grows with the number of variables n and the number of function evaluations k during annealing.
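The n factor is easy to isolate: it is just the cost of one objective evaluation. A rough timing sketch (the absolute numbers are machine-dependent and illustrative only, but the linear trend in n should be visible):

```python
import time
import numpy as np

def objective(x):
    return np.sum((x - 2) ** 2)  # O(n): one pass over the array

for n in (10_000, 100_000, 1_000_000):
    x = np.zeros(n)
    objective(x)  # warm-up call so timing excludes first-call overhead
    t0 = time.perf_counter()
    for _ in range(100):
        objective(x)
    elapsed = (time.perf_counter() - t0) / 100
    print(f"n={n:>9}: about {elapsed * 1e6:.1f} microseconds per evaluation")
```

Multiplying that per-evaluation cost by k evaluations gives the O(n * k) total.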
[X] Wrong: "The time only depends on the number of variables n."
[OK] Correct: The algorithm runs many steps, each evaluating the function. So total time depends on both n and how many steps k it takes.
Understanding how optimization time grows helps you explain trade-offs in algorithms and shows you can think about performance beyond just code correctness.
"What if the objective function was more complex and took longer per evaluation? How would that affect the time complexity?"
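As one hypothetical answer to that question: suppose each evaluation had to compare every pair of variables (an assumed O(n²) objective, not the one from the snippet above). Then the per-evaluation cost changes from O(n) to O(n²), and the total becomes O(n² * k):

```python
import numpy as np

# Hypothetical costlier objective: all pairwise differences.
# Building the n x n matrix makes each call O(n^2) in time and memory,
# so annealing with k evaluations would cost O(n^2 * k) overall.
def pairwise_objective(x):
    diffs = x[:, None] - x[None, :]  # n x n matrix of x[i] - x[j]
    return np.sum(diffs ** 2)
```

The number of evaluations k is unchanged; only the per-evaluation factor in the complexity grows.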