Inverse Z-transform in Signal Processing - Time Complexity
We want to understand how the time needed to compute the inverse Z-transform changes as the input size grows.
Specifically, how does the number of calculations grow when we increase the length of the signal?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

def inverse_z_transform(X, n, z):
    """Partial-sum reconstruction: x[k] = sum of X[m] * z**(-m) for m = 0..k."""
    x = np.zeros(n, dtype=complex)  # complex dtype: z lies on a contour in the complex plane
    for k in range(n):
        for m in range(k + 1):          # inner loop runs k+1 times
            x[k] += X[m] * (z ** (-m))  # z is a complex number on the contour
    return x
```
This code approximates the inverse Z-transform by accumulating partial sums: each output sample x[k] adds up the first k+1 terms X[m] * z^(-m).
Identify the loops, recursion, or array traversals that repeat.
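A quick sanity check of the routine, repeated here so the snippet runs standalone. The input values and the choice of z on the unit circle are hypothetical, just to exercise the function:

```python
import numpy as np

def inverse_z_transform(X, n, z):
    # Same routine as above, repeated so this snippet is self-contained.
    x = np.zeros(n, dtype=complex)
    for k in range(n):
        for m in range(k + 1):
            x[k] += X[m] * (z ** (-m))
    return x

# Hypothetical input: four Z-transform samples, z on the unit circle.
X = np.array([1.0, 0.5, 0.25, 0.125])
z = np.exp(1j * np.pi / 4)
x = inverse_z_transform(X, len(X), z)
print(x)  # x[0] is just X[0]; each later sample adds one more term
```

Note that x[0] contains one term, x[1] two terms, and so on, which is exactly the 1 + 2 + ... + n operation count analyzed below.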
- Primary operation: Nested loops where the inner loop sums over previous terms.
- How many times: Outer loop runs n times; inner loop runs up to k+1 times for each k.
As n grows, the total number of operations grows like the sum 1 + 2 + ... + n = n(n+1)/2.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | ~55 |
| 100 | ~5,050 |
| 1000 | ~500,500 |
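The table values follow directly from the closed form n(n+1)/2; a short check:

```python
def op_count(n):
    # Closed form for 1 + 2 + ... + n inner-loop iterations.
    return n * (n + 1) // 2

for n in (10, 100, 1000):
    print(n, op_count(n))
# 10 -> 55, 100 -> 5050, 1000 -> 500500
```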
Pattern observation: The operations grow roughly with the square of n, so doubling n makes the work about four times bigger.
Time Complexity: O(n²)
This means the time to compute the inverse Z-transform grows roughly with the square of the signal length.
[X] Wrong: "The inverse Z-transform calculation time grows linearly with input size because it just loops once."
[OK] Correct: The code has nested loops, so each output depends on summing many terms, making the total work grow faster than just one loop.
Understanding how nested loops affect time helps you explain and improve signal processing code clearly and confidently.
"What if we used a direct formula or FFT-based method instead of nested loops? How would the time complexity change?"
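One hedged sketch of an answer: if X is sampled at n equally spaced points on the unit circle, z_k = e^(2πjk/n), the inverse Z-transform reduces to an inverse discrete Fourier transform, which the FFT computes in O(n log n) instead of O(n²). This is a different computation from the partial-sum snippet above, shown only to illustrate the complexity gap:

```python
import numpy as np

def inverse_z_fft(X_samples):
    # Assumes X_samples are values of X(z) at n equally spaced points
    # on the unit circle; then the inverse Z-transform is an inverse DFT.
    return np.fft.ifft(X_samples)  # O(n log n)

# Round-trip check on a hypothetical signal:
x_true = np.array([1.0, 2.0, 3.0, 4.0])
X_samples = np.fft.fft(x_true)   # forward transform sampled on the circle
x_rec = inverse_z_fft(X_samples)
print(np.allclose(x_rec, x_true))  # recovers the original signal
```

Doubling n then costs a bit more than double the work, rather than four times as much.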