2D FFT (fft2) in SciPy - Time & Space Complexity
We want to understand how the time to compute a 2D FFT changes as the input size grows.
How does the work increase when we process bigger 2D data arrays?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.fft import fft2

# Create a 2D array of size n x n
n = 256
data = np.random.rand(n, n)

# Compute the 2D FFT
result = fft2(data)
```
This code computes the 2D Fast Fourier Transform of a square 2D array with size n by n.
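As a quick sanity check (a minimal sketch using a smaller array than the snippet above), the transform returns a complex array with the same shape as the input:

```python
import numpy as np
from scipy.fft import fft2

n = 64
result = fft2(np.random.rand(n, n))

# fft2 preserves the input shape but returns complex coefficients
assert result.shape == (n, n)
assert np.iscomplexobj(result)
```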
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: The 2D FFT is computed by applying 1D FFTs along every row and then along every column.
- How many times: n row transforms plus n column transforms, each a length-n FFT costing about n log n operations, for roughly 2 * n * (n log n) total.
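This row-then-column decomposition can be verified directly: applying `scipy.fft.fft` along each axis in turn reproduces `fft2` (a small sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.fft import fft, fft2

n = 64
data = np.random.rand(n, n)

# 1D FFT along every row (axis=1), then along every column (axis=0)
rows_then_cols = fft(fft(data, axis=1), axis=0)

# The result matches the 2D FFT computed in a single call
assert np.allclose(rows_then_cols, fft2(data))
```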
As the input size n grows, the number of operations grows a bit faster than n squared but much slower than n cubed.
| Input Size (n) | Approx. Operations (n² log₂ n) |
|---|---|
| 10 | About 330 |
| 100 | About 66,000 |
| 1000 | About 10,000,000 |
Pattern observation: The operation count grows roughly like n squared times log n, slightly faster than quadratic growth but far slower than cubic growth.
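The table's values can be reproduced with a short script (a sketch that uses n² log₂ n as the operation-count model; constant factors are ignored):

```python
import math

def approx_fft2_ops(n):
    # Operation-count model for an n x n 2D FFT: n^2 * log2(n)
    return n * n * math.log2(n)

for n in (10, 100, 1000):
    print(f"n={n}: about {approx_fft2_ops(n):,.0f} operations")
```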
Time Complexity: O(n^2 log n)
This means the time grows only slightly faster than the number of data points (there are n² of them), and far slower than the O(n³) cost of applying a naive DFT along every row and column.
[X] Wrong: "The 2D FFT takes time proportional to n squared because it processes an n by n array."
[OK] Correct: The FFT's divide-and-conquer strategy is faster than looping over all pairs of points; the recursive splitting adds only a log n factor on top of the n² data points, giving O(n² log n).
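To see the "clever method" concretely, compare a direct DFT (an explicit matrix multiplication) against `scipy.fft.fft`: both produce the same result, but the direct version does O(n²) work per transform. This is a hedged sketch; `naive_dft` is an illustrative helper, not a SciPy function:

```python
import numpy as np
from scipy.fft import fft

def naive_dft(x):
    # Direct DFT via the full transform matrix: O(n^2) per call
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.rand(32)
# Same answer as the FFT, which needs only O(n log n) operations
assert np.allclose(naive_dft(x), fft(x))
```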
Understanding how FFT scales helps you explain performance in image processing or signal analysis tasks, showing you know how algorithms handle big data efficiently.
"What if we used a non-square input array of size n by m? How would the time complexity change?"