Extracting Information from Signals in SciPy - Performance Analysis
We want to understand how the time needed to extract information from a signal grows as the signal gets larger: how does processing time change when we analyze longer or more complex signals?
Let's analyze the time complexity of this SciPy-based signal processing code.
```python
import numpy as np
from scipy.signal import find_peaks

def extract_signal_info(signal):
    # find_peaks scans the array once, flagging local maxima
    peaks, _ = find_peaks(signal)
    # Fancy indexing pulls out the signal value at each peak index
    peak_values = signal[peaks]
    return peaks, peak_values

signal = np.random.rand(1000)
extract_signal_info(signal)
```
This code finds local peaks in a signal array and extracts their values.
To estimate complexity, look for loops or repeated steps; here the loop is implicit, hidden inside `find_peaks`.
- Primary operation: Scanning the entire signal array to find peaks.
- How many times: Once through all signal points (length n).
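The single pass hidden inside `find_peaks` can be sketched as an explicit loop. This is a simplified model for counting operations, not SciPy's actual implementation (which also handles plateaus, prominence, and other options):

```python
import numpy as np

def naive_find_peaks(signal):
    """Simplified model of peak detection: one pass over the array."""
    peaks = []
    # Each interior point is checked once against its two neighbors: O(n)
    for i in range(1, len(signal) - 1):
        if signal[i - 1] < signal[i] > signal[i + 1]:
            peaks.append(i)
    return np.array(peaks)

signal = np.array([0.0, 2.0, 1.0, 3.0, 0.5])
naive_find_peaks(signal)  # → array([1, 3])
```

Counting the loop iterations directly gives the n checks listed above.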
The time to find peaks grows roughly in direct proportion to the signal length.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: Doubling the signal length roughly doubles the work.
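The doubling pattern can be checked empirically. This is an illustrative timing sketch (absolute times depend on your machine; only the ratios between rows matter):

```python
import timeit
import numpy as np
from scipy.signal import find_peaks

times = {}
for n in (10_000, 20_000, 40_000):
    signal = np.random.rand(n)
    # Best of several repeats reduces noise from the OS scheduler
    times[n] = min(timeit.repeat(lambda s=signal: find_peaks(s),
                                 number=20, repeat=3))
    print(f"n={n:>6}: {times[n]:.5f} s")
```

If the O(n) analysis holds, each row should take roughly twice as long as the one before it.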
Time Complexity: O(n)
This means the time to extract information grows linearly with the signal size.
[X] Wrong: "Finding peaks takes constant time regardless of signal length."
[OK] Correct: The algorithm must check each point to decide if it is a peak, so time grows with signal size.
Understanding how signal processing scales helps you reason about efficiency when working with real data streams or sensor inputs.
What if we used a more complex peak detection method that compares each point to many neighbors? How would the time complexity change?
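As a sketch of that scenario, consider a hypothetical detector (not a SciPy API) in which a point counts as a peak only if it exceeds all w neighbors on each side. Each of the n points is now compared against up to 2w neighbors, giving O(n·w) work: still linear in n for a fixed window, but with a larger constant factor.

```python
import numpy as np

def windowed_peaks(signal, w=5):
    """Hypothetical detector: a peak must exceed all w neighbors per side.

    Each of the n points is compared to up to 2*w neighbors -> O(n * w).
    """
    peaks = []
    for i in range(len(signal)):
        lo, hi = max(0, i - w), min(len(signal), i + w + 1)
        # All neighbors within the window, excluding the point itself
        window = np.concatenate([signal[lo:i], signal[i + 1:hi]])
        if window.size and np.all(signal[i] > window):
            peaks.append(i)
    return np.array(peaks)

signal = np.array([0.0, 1.0, 5.0, 1.0, 0.0, 2.0, 0.0])
windowed_peaks(signal, w=2)  # → array([2, 5])
windowed_peaks(signal, w=3)  # → array([2]); the wider window suppresses index 5
```

Widening the window makes the detector stricter, at the cost of proportionally more comparisons per point.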