Power spectral density estimation in Signal Processing - Time & Space Complexity
When estimating power spectral density, we want to know how the computation time grows as the signal gets longer. In other words: how does the processing time change as the input signal length increases?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

def estimate_psd(signal):
    n = len(signal)
    fft_vals = np.fft.fft(signal)        # FFT of the full signal: O(n log n)
    psd = (np.abs(fft_vals) ** 2) / n    # squared magnitude, normalized by n
    return psd[:n // 2]                  # keep only the non-negative frequencies
```
This code computes the power spectral density by applying a Fast Fourier Transform (FFT) on the input signal and then calculating the squared magnitude.
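To make the behavior concrete, here is a minimal usage sketch: a sine wave with exactly 5 cycles in a 64-sample window (the signal values are an illustrative choice, not from the original) should produce a PSD whose peak sits at bin 5, and the returned array holds half the input length.

```python
import numpy as np

def estimate_psd(signal):
    n = len(signal)
    fft_vals = np.fft.fft(signal)
    psd = (np.abs(fft_vals) ** 2) / n
    return psd[:n // 2]

# Illustrative input: 64 samples of a sine with 5 cycles per window.
n = 64
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)

psd = estimate_psd(signal)
print(len(psd))        # half the input length: 32
print(np.argmax(psd))  # strongest frequency bin: 5
```

Because the sine completes a whole number of cycles in the window, all of its energy lands in a single frequency bin, which makes the peak easy to spot.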
Identify the loops, recursion, and array traversals that repeat:
- Primary operation: The FFT algorithm processes the entire signal of length n.
- How many times: The FFT internally performs operations proportional to n times log n.
As the signal length increases, the number of operations grows a bit faster than just the length itself.
| Input Size (n) | Approx. Operations (n · log2(n)) |
|---|---|
| 10 | ~33 |
| 100 | ~664 |
| 1000 | ~9966 |
Pattern observation: The operations grow faster than the input size but much slower than its square. This means doubling the input length slightly more than doubles the work.
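The table values can be reproduced with a couple of lines of Python (a quick check, not part of the PSD code itself):

```python
import math

# n * log2(n), rounded to the nearest integer, for the table's input sizes.
for n in (10, 100, 1000):
    print(n, round(n * math.log2(n)))  # 33, 664, 9966
```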
Time Complexity: O(n log n)
This means the time to estimate the power spectral density grows a bit faster than the signal length, but not as fast as checking every pair of points.
[X] Wrong: "The FFT runs in linear time, so doubling the signal length doubles the time exactly."
[OK] Correct: FFT uses a divide-and-conquer approach, so its time grows like n times log n, which is a bit more than just n.
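The divide-and-conquer structure can be sketched with a minimal radix-2 Cooley-Tukey FFT (an illustrative implementation for power-of-two lengths, not the routine NumPy actually uses). Each call splits the problem into two half-size subproblems and combines them in linear time, giving the recurrence T(n) = 2·T(n/2) + O(n) = O(n log n).

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT; assumes len(x) is a power of two."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_recursive(x[0::2])   # T(n/2) on the even-indexed samples
    odd = fft_recursive(x[1::2])    # T(n/2) on the odd-indexed samples
    # Combine the halves in O(n) using the twiddle factors e^(-2*pi*i*k/n).
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.arange(8.0)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True
```

The two recursive calls on half-size inputs, plus the linear combine step, are exactly what produces the n log n growth.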
Understanding how FFT time grows helps you explain why it is efficient for signal analysis and why it is preferred over slower methods.
"What if we used a simple Discrete Fourier Transform (DFT) instead of FFT? How would the time complexity change?"
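As a hint toward the answer, here is a naive DFT sketch (an illustrative implementation, written to expose the nested loops). Each of the n output frequencies sums over all n input samples, so the work is proportional to n × n, i.e. O(n²) rather than O(n log n).

```python
import numpy as np

def dft_naive(x):
    """Direct DFT with explicit loops: O(n^2) operations."""
    n = len(x)
    out = np.zeros(n, dtype=complex)
    for k in range(n):          # n output frequency bins...
        for t in range(n):      # ...each summing over all n samples
            out[k] += x[t] * np.exp(-2j * np.pi * k * t / n)
    return out

x = np.arange(8.0)
print(np.allclose(dft_naive(x), np.fft.fft(x)))  # True, but far slower for large n
```

At n = 1000 the naive DFT performs about 1,000,000 inner-loop operations versus roughly 10,000 for the FFT, which is why the FFT is preferred in practice.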