Simpson's rule (simpson) in SciPy - Time & Space Complexity
We want to understand how the time it takes to calculate an integral using Simpson's rule changes as we increase the number of points.
How does the work grow when we use more points for better accuracy?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.integrate import simpson

n = 100  # example number of sample points
x = np.linspace(0, 10, n)  # n evenly spaced points on [0, 10]
y = np.sin(x)  # integrand sampled at those points
result = simpson(y, x=x)  # pass x as a keyword; positional x is deprecated in recent SciPy
```
This code approximates the integral of sin(x) over [0, 10] using Simpson's rule with n sample points.
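As a quick sanity check (a hedged sketch, not part of the original snippet), we can compare the Simpson approximation against the exact value, since the integral of sin(x) from 0 to 10 is 1 - cos(10). An odd point count is used here so the composite rule applies without an end correction:

```python
import numpy as np
from scipy.integrate import simpson

n = 101  # odd number of points: the composite rule needs paired subintervals
x = np.linspace(0, 10, n)
y = np.sin(x)

approx = simpson(y, x=x)
exact = 1 - np.cos(10)  # analytic value of the integral of sin on [0, 10]
print(approx, exact)  # the two values should agree to several decimal places
```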
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: summing the weighted values of y (weights 1, 4, 2, 4, ..., 4, 1) over the n points.
- How many times: a single pass over all n points (vectorized inside SciPy, but still about n multiply-adds).
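To see where the single pass over n points comes from, here is a minimal manual sketch of the composite Simpson weighted sum (an assumption-laden simplification: it requires uniform spacing and an odd point count, whereas SciPy's `simpson` handles more general inputs):

```python
import numpy as np

def simpson_manual(y, x):
    """Composite Simpson's rule for an odd number of evenly spaced points.

    Minimal sketch for illustration; not SciPy's actual implementation.
    """
    n = len(y)
    assert n % 2 == 1, "composite Simpson's rule needs an odd number of points"
    h = x[1] - x[0]  # uniform spacing assumed
    # Build the weight pattern 1, 4, 2, 4, ..., 2, 4, 1
    weights = np.ones(n)
    weights[1:-1:2] = 4
    weights[2:-1:2] = 2
    # One weighted pass over the n samples: O(n) multiply-adds
    return h / 3 * np.dot(weights, y)

x = np.linspace(0, 10, 101)
y = np.sin(x)
val = simpson_manual(y, x)
print(val)
```

The entire cost is the single `np.dot` over n weights and n samples, which is exactly the linear pass described above.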
As we increase n, the number of points, the work grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 sums and multiplications |
| 100 | About 100 sums and multiplications |
| 1000 | About 1000 sums and multiplications |
Pattern observation: Doubling the points roughly doubles the work.
Time Complexity: O(n)
This means the time to compute the integral grows linearly with the number of points used.
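A rough timing sketch can make the linear trend visible. Wall-clock numbers are machine-dependent and illustrative only, so no specific timings are claimed here; the point is that doubling n should roughly double the measured time for large n:

```python
import time
import numpy as np
from scipy.integrate import simpson

timings = {}
for n in (100_001, 200_001, 400_001):  # odd counts, each double the last
    x = np.linspace(0, 10, n)
    y = np.sin(x)
    t0 = time.perf_counter()
    result = simpson(y, x=x)
    timings[n] = time.perf_counter() - t0
    print(f"n={n:>7}: {timings[n]:.5f}s  integral ~ {result:.6f}")
```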
[X] Wrong: "Simpson's rule takes constant time no matter how many points we use."
[OK] Correct: More points mean more values to sum, so the work grows linearly with n rather than staying fixed.
Knowing how numerical integration scales helps you explain efficiency when working with large datasets or fine-grained calculations.
"What if we used a fixed number of points but increased the complexity of the function being integrated? How would the time complexity change?"