Root finding (root, brentq) in SciPy - Time & Space Complexity
When we use SciPy's root-finding routines such as `root` or `brentq`, we want to know how the time to find a solution changes as the problem changes.
We ask: how does the running time grow when the function, the bracketing interval, or the requested precision changes?
Analyze the time complexity of the following code snippet.
```python
from scipy.optimize import brentq

def f(x):
    return x**3 - x - 2

root = brentq(f, 1, 2, xtol=1e-6)
print(root)
```
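To see the cost directly, we can ask SciPy to report how much work it did. Passing `full_output=True` to `brentq` returns a `RootResults` object alongside the root, with fields such as `function_calls` and `iterations`:

```python
from scipy.optimize import brentq

def f(x):
    return x**3 - x - 2

# full_output=True returns (root, RootResults); the results object records
# how many times f was evaluated and how many iterations were taken.
root, results = brentq(f, 1, 2, xtol=1e-6, full_output=True)
print(root)                    # root near 1.52138
print(results.function_calls)  # total evaluations of f
print(results.iterations)      # iterations taken
```

The call count is the quantity whose growth we analyze below.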
This code finds a root of the function f between 1 and 2 using the brentq method.
Identify the operations that repeat: loops, recursion, or repeated function evaluations.
- Primary operation: evaluating the function `f` repeatedly to narrow down the bracket containing the root.
- How many times: the method keeps evaluating `f` until the bracket is narrower than the requested tolerance `xtol`.

The number of function evaluations grows with the requested precision, not directly with input size.
| Input Size (n) | Approx. Function Calls |
|---|---|
| 10 | ~20-30 |
| 100 | ~20-30 |
| 1000 | ~20-30 |
Pattern observation: the number of steps depends mainly on the requested precision, and only logarithmically on the width of the bracketing interval; it does not depend on input size at all.
Time Complexity: O(log((b − a)/ε)), where (b − a) is the bracket width and ε is the tolerance `xtol`.
This means the time grows roughly with how many digits of accuracy we ask for, not linearly with the size of the interval. This is the worst case (pure bisection); in practice `brentq` often converges superlinearly near a simple root and finishes even faster.
[X] Wrong: "The time to find the root grows directly (linearly) with the size of the input interval."
[OK] Correct: The method repeatedly halves (or better) the bracket, so the time depends on how close we want to get, and only logarithmically on the interval length.
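The "splits the interval repeatedly" intuition can be checked with a minimal pure-Python bisection sketch (plain bisection, not `brentq` itself): halving a bracket of width (b − a) down to `xtol` takes about log2((b − a)/xtol) evaluations.

```python
# Minimal bisection with an evaluation counter (illustrative helper,
# not part of SciPy). For (b - a) = 1 and xtol = 1e-6, the loop halves
# the bracket ceil(log2(1e6)) = 20 times.
def bisect_count(f, a, b, xtol):
    calls = 1          # one evaluation at the left endpoint
    fa = f(a)
    while (b - a) > xtol:
        m = 0.5 * (a + b)
        fm = f(m)
        calls += 1
        if fa * fm <= 0:
            b = m          # sign change in the left half
        else:
            a, fa = m, fm  # sign change in the right half
    return 0.5 * (a + b), calls

f = lambda x: x**3 - x - 2
root, calls = bisect_count(f, 1.0, 2.0, 1e-6)
print(root, calls)  # root near 1.52138; 20 halvings = 21 evaluations of f
```

Doubling the interval width adds only one extra halving, which is exactly why the cost is logarithmic rather than linear in (b − a).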
Understanding how root-finding methods scale helps you explain algorithm efficiency clearly and shows you can reason about precision and input size separately.
What if we changed the precision parameter xtol to a much smaller value? How would the time complexity change?
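One way to explore this question is to sweep `xtol` downward and watch the evaluation count (a sketch, again relying on `brentq`'s `full_output` flag). Each extra factor of 1000 in precision should add only a handful of calls:

```python
from scipy.optimize import brentq

def f(x):
    return x**3 - x - 2

# Tighten the tolerance by factors of 1000 and record the work done.
counts = []
for xtol in (1e-3, 1e-6, 1e-9, 1e-12):
    _, res = brentq(f, 1, 2, xtol=xtol, full_output=True)
    counts.append(res.function_calls)
print(counts)  # non-decreasing, but growing slowly
```

The counts grow roughly like log(1/xtol): demanding a billion times more precision costs only a modest, additive number of extra function evaluations.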