Power spectral density in SciPy - Time & Space Complexity
We want to understand how the time to compute a power spectral density grows as the input signal gets longer: how does the computation time change when we analyze bigger data?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.signal import welch

fs = 1000                    # Sampling frequency in Hz
x = np.random.randn(10000)   # Random test signal, 10,000 samples
frequencies, psd = welch(x, fs=fs, nperseg=256)
```
This code computes the power spectral density of a signal using Welch's method.
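A key detail for the complexity analysis is that the *output* size is fixed by `nperseg`, not by the signal length. A minimal sketch to confirm this (for real input, `welch` returns a one-sided spectrum with `nperseg // 2 + 1` frequency bins):

```python
import numpy as np
from scipy.signal import welch

fs = 1000
x = np.random.randn(10_000)
frequencies, psd = welch(x, fs=fs, nperseg=256)

# Output size is set by nperseg, not by len(x):
# a one-sided spectrum has nperseg // 2 + 1 = 129 bins.
print(frequencies.shape, psd.shape)  # (129,) (129,)
print(frequencies[-1])               # Nyquist frequency, fs / 2 = 500.0
```

A longer signal would produce more segments to average, but the same 129-bin output.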
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Splitting the signal into overlapping segments and computing an FFT on each segment.
- How many times: The number of segments depends on the signal length, the segment size, and the overlap between segments.
The total work grows roughly in proportion to the number of segments, which grows as the signal length increases.
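The segment count can be computed directly. A small sketch, assuming `welch`'s documented defaults (`noverlap = nperseg // 2`, i.e. 50% overlap):

```python
def n_segments(n, nperseg=256, noverlap=None):
    """Number of overlapping segments welch processes for a length-n signal."""
    if noverlap is None:
        noverlap = nperseg // 2  # scipy's default: half-segment overlap
    step = nperseg - noverlap    # how far each segment advances
    return (n - noverlap) // step

print(n_segments(10_000))     # 77
print(n_segments(100_000))    # 780
print(n_segments(1_000_000))  # 7811
```

Each tenfold increase in signal length yields roughly ten times as many segments, which is the linear growth pattern the table below summarizes.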
| Input Size (n) | Approx. FFT Computations |
|---|---|
| 10,000 | About 77 |
| 100,000 | About 780 |
| 1,000,000 | About 7,800 |

(Counts assume `welch`'s default 50% overlap, `noverlap = nperseg // 2`.)
Pattern observation: Increasing the input size by a factor of 10 roughly increases the number of FFT computations by a factor of 10, so the total work grows linearly with input size.
Time Complexity: O(n)
This means the time grows linearly with input size. Each FFT operates on a fixed segment length (nperseg = 256), so its cost is constant (O(m log m) with m fixed), while the number of segments grows linearly with n.
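This scaling can be checked with a back-of-the-envelope operation count. A sketch, assuming the default 50% overlap and a fixed O(m log m) cost per segment FFT:

```python
import math

def welch_ops(n, nperseg=256):
    # Segment count with welch's default 50% overlap
    # (noverlap = nperseg // 2).
    noverlap = nperseg // 2
    segments = (n - noverlap) // (nperseg - noverlap)
    per_segment = nperseg * math.log2(nperseg)  # fixed FFT cost per segment
    return segments * per_segment

# Doubling the signal length roughly doubles the total work,
# because the per-segment cost never changes.
print(welch_ops(100_000) / welch_ops(50_000))  # ~2.0
```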
[X] Wrong: "The computation time is O(n log n) because FFT is used."
[OK] Correct: FFT is on fixed-size segments (nperseg=256), so each takes constant time. Number of segments is O(n), so total time is O(n).
Understanding how signal length and segment size affect computation time helps you explain performance trade-offs clearly in real data analysis tasks.
"What if we change the segment size to be larger or smaller? How would the time complexity change?"