Pole-zero analysis for stability in Signal Processing - Time & Space Complexity
We want to understand how the time to analyze poles and zeros grows as the system size (the number of poles and zeros) increases.
How does the work needed change when we have more poles and zeros to check?
Analyze the time complexity of the following code snippet.
# Given poles and zeros as lists
poles = [...]  # list of complex numbers
zeros = [...]  # list of complex numbers

# Check stability by verifying all poles lie strictly inside the unit circle
stable = True
for pole in poles:
    if abs(pole) >= 1:
        stable = False
        break

# Output stability result
print('Stable' if stable else 'Unstable')
This code checks if all poles lie inside the unit circle to decide system stability.
Identify the loops, recursion, and array traversals that repeat work.
- Primary operation: Looping through each pole and checking its magnitude.
- How many times: Once per pole in the worst case; the loop exits early as soon as one unstable pole is found.
As the number of poles increases, the worst-case time to check stability grows in direct proportion to that number.
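The same single pass over the poles can be written more idiomatically with Python's built-in `all()`, which short-circuits on the first failing pole just like the explicit `break` above. This is a sketch; the pole values in the examples are hypothetical.

```python
def is_stable(poles):
    """A system is stable when every pole lies strictly inside the unit circle.

    all() short-circuits at the first pole with |p| >= 1, so the worst
    case is one magnitude check per pole: O(n).
    """
    return all(abs(p) < 1 for p in poles)

# Hypothetical example poles:
print(is_stable([0.5, -0.3 + 0.4j]))  # all magnitudes < 1 -> True
print(is_stable([0.5, 1.2]))          # 1.2 is outside the unit circle -> False
```

Either form performs the same number of magnitude checks; `all()` simply packages the early-exit loop into one expression.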
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
Pattern observation: The number of operations grows linearly with input size.
Time Complexity: O(n)
This means the time to check stability grows directly with the number of poles.
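The table above can be reproduced by instrumenting the loop to count magnitude checks. The sketch below uses hypothetical random pole magnitudes below 1 so the loop never breaks early, which exercises the worst case.

```python
import random

def count_checks(poles):
    # Count how many magnitude checks the stability loop performs.
    checks = 0
    for pole in poles:
        checks += 1
        if abs(pole) >= 1:
            break
    return checks

# Worst case: every pole is stable, so the loop runs to completion.
for n in (10, 100, 1000):
    stable_poles = [random.uniform(0.0, 0.99) for _ in range(n)]
    print(n, count_checks(stable_poles))  # prints n checks for n poles
```

The count matches the input size exactly, which is the signature of O(n) growth.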
[X] Wrong: "Checking zeros also adds the same time cost as poles for stability."
[OK] Correct: Stability depends only on poles, so zeros do not affect the time to decide stability.
Understanding how time grows with system size helps you explain your approach clearly and shows you know what matters most in signal analysis.
"What if we also checked zeros for some property? How would that change the time complexity?"