FFT computation (fft) in SciPy - Time & Space Complexity
We want to understand how the time to compute the FFT grows as the input size increases.
How does the number of steps change when we give the FFT more data?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.fft import fft

x = np.random.random(1024)  # input array of size 1024
result = fft(x)             # compute the FFT of x
```
This code creates a NumPy array of 1024 random numbers and computes its FFT with SciPy.
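To make the snippet self-checking, here is a slightly extended version that confirms the inverse transform recovers the input. This is a quick sanity check on the code, not part of the complexity analysis:

```python
import numpy as np
from scipy.fft import fft, ifft

x = np.random.random(1024)  # real-valued input of size 1024
X = fft(x)                  # complex spectrum, same length as x

# Sanity check: the inverse transform recovers the original signal
# (up to floating-point rounding).
x_back = ifft(X)
assert np.allclose(x_back.real, x)
```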
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: The FFT (Cooley-Tukey) algorithm repeatedly splits the input in half and then combines the partial results.
- How many times: The splitting goes about log2(n) levels deep, and each level does on the order of n combine operations, giving roughly n log2(n) operations in total.
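To see where the log2(n) levels come from, here is a minimal recursive radix-2 Cooley-Tukey sketch. It assumes the length is a power of two and is far simpler than SciPy's actual pocketfft backend, but the split/combine structure is the same:

```python
import numpy as np

def fft_radix2(x):
    """Simplified recursive Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    # Split: recurse on even- and odd-indexed samples -> log2(n) levels deep.
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    # Combine: O(n) twiddle-factor multiplications and additions per level.
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.random(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```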
As the input size grows, the number of operations grows a bit faster than just the input size.
| Input Size (n) | Approx. Operations (n log2 n) |
|---|---|
| 10 | About 33 |
| 100 | About 664 |
| 1000 | About 9966 |
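The table values can be reproduced by evaluating n * log2(n) directly:

```python
import math

# Approximate FFT operation counts for the table above.
for n in (10, 100, 1000):
    ops = n * math.log2(n)
    print(f"n = {n:>4}: about {ops:.0f} operations")
```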
Pattern observation: The operations grow roughly like n times log n, which is faster than n but much slower than n squared.
Time Complexity: O(n log n)
Space Complexity: O(n), since fft returns a new complex array the same length as the input.
This means doubling the input size slightly more than doubles the work, rather than quadrupling it as an O(n^2) algorithm would.
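A quick calculation makes this doubling behavior concrete:

```python
import math

def fft_work(n):
    """Approximate operation count for an n-point FFT: n * log2(n)."""
    return n * math.log2(n)

# Doubling from 1024 to 2048 points multiplies the work by about 2.2:
# a bit more than double, far less than the 4x of an O(n^2) method.
ratio = fft_work(2048) / fft_work(1024)
print(ratio)  # 2.2
```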
[X] Wrong: "FFT takes time proportional to n squared because it looks at all pairs of points."
[OK] Correct: FFT cleverly breaks the problem into smaller parts, so it avoids checking all pairs and runs much faster than n squared.
Understanding FFT time complexity shows you can analyze efficient algorithms that handle big data quickly, a useful skill in many data science tasks.
"What if we used a simple discrete Fourier transform instead of FFT? How would the time complexity change?"