Peak finding (find_peaks) in SciPy - Time & Space Complexity
When we use peak finding on data, we want to know how long it takes to find all peaks as the data grows.
We ask: how does the time to find peaks change as the input array gets bigger?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.signal import find_peaks

data = np.array([1, 3, 2, 5, 4, 6, 3, 2, 7, 5])
peaks, _ = find_peaks(data)
print(peaks)  # [1 3 5 8]
```
This code finds the indices of local maxima (peaks) in a 1D numeric array: positions whose value is higher than both immediate neighbors.
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Scanning through the data array once to compare neighbors.
- How many times: Each interior element is compared with its neighbors once; the first and last elements cannot be peaks, so there are roughly n - 2 checks.
As the data size grows, the time to find peaks grows roughly in direct proportion.
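The single-pass neighbor scan can be sketched in plain Python (a minimal sketch of the core idea only; the real find_peaks is compiled code and also handles plateaus and optional filtering conditions):

```python
def simple_find_peaks(data):
    """Minimal sketch: report indices strictly greater than both neighbors."""
    peaks = []
    # One pass over the interior elements; the endpoints cannot be peaks.
    for i in range(1, len(data) - 1):
        if data[i - 1] < data[i] > data[i + 1]:
            peaks.append(i)
    return peaks

print(simple_find_peaks([1, 3, 2, 5, 4, 6, 3, 2, 7, 5]))  # [1, 3, 5, 8]
```

Each index is visited once and does a constant amount of comparison work, which is exactly the linear pattern described below.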
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 8 comparisons |
| 100 | About 98 comparisons |
| 1000 | About 998 comparisons |
Pattern observation: Doubling the input roughly doubles the work because each element is checked once.
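One rough way to see this pattern empirically without machine-dependent timings (an illustrative sketch, not part of the original snippet): count peaks in random noise of doubling length. For i.i.d. data, roughly one sample in three is a local maximum, so the peak count, like the total work, grows in step with n:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)  # fixed seed so the run is repeatable
for n in (10_000, 20_000, 40_000):
    data = rng.standard_normal(n)
    peaks, _ = find_peaks(data)
    print(n, len(peaks))  # peak count roughly doubles as n doubles
```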
Time Complexity: O(n)
This means the time to find peaks grows linearly with the size of the input data.
Space Complexity: the scan itself needs only constant extra working memory; the output array of peak indices takes O(k) space, where k is the number of peaks found (at most about n/2, since no two adjacent samples can both be peaks).
[X] Wrong: "Finding peaks takes longer than checking each element once because it's complicated."
[OK] Correct: The method only compares neighbors in one pass, so it does not do extra repeated work.
Understanding how peak finding scales helps you explain efficiency clearly and shows you can analyze real data tasks.
"What if we wanted to find peaks in a 2D array instead? How would the time complexity change?"