Stability analysis (pole-zero plot) in Signal Processing - Time & Space Complexity
We want to understand how the time to analyze system stability grows as the system order increases.
Specifically, we ask: how does computing and checking poles and zeros scale with input size?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.signal import tf2zpk

# Given numerator and denominator coefficients
num = np.array([1, -0.5, 0.25])
den = np.array([1, -1.2, 0.36])

# Compute zeros, poles, and gain
zeros, poles, gain = tf2zpk(num, den)

# Check stability by examining poles
stable = np.all(np.abs(poles) < 1)
```
This code finds the poles and zeros of a discrete-time transfer function and checks whether all poles lie strictly inside the unit circle, the stability criterion for discrete-time systems.
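Under the hood, `tf2zpk` obtains the poles and zeros by polynomial root-finding, so the same poles can be reproduced directly with `np.roots`. A minimal sketch confirming this:

```python
import numpy as np
from scipy.signal import tf2zpk

num = np.array([1, -0.5, 0.25])
den = np.array([1, -1.2, 0.36])

zeros, poles, gain = tf2zpk(num, den)

# tf2zpk delegates the heavy lifting to polynomial root-finding;
# computing the roots of the denominator directly gives the same poles.
poles_direct = np.roots(den)

print(np.allclose(np.sort_complex(poles), np.sort_complex(poles_direct)))  # expect True
print(bool(np.all(np.abs(poles) < 1)))  # den = (z - 0.6)^2, both poles at 0.6 -> True
```

This is why the complexity question reduces to: how expensive is polynomial root-finding as a function of the degree?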
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: finding the roots of the denominator polynomial (the poles) and of the numerator (the zeros).
- How many times: the root-finding routine (`np.roots` under the hood) reads all coefficients once, then computes the roots with an iterative eigenvalue algorithm.
As the polynomial degree (the input size) grows, root-finding takes more time, and the cost grows much faster than the number of coefficients alone.
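The source of the superlinear cost is the standard root-finding strategy: `np.roots` forms the companion matrix of the polynomial and computes its eigenvalues, and dense eigenvalue solvers cost O(n^3). A sketch of that reduction:

```python
import numpy as np

def poles_via_companion(den):
    """Find polynomial roots as eigenvalues of the companion matrix.

    For a monic polynomial z^n + c1*z^(n-1) + ... + cn, the n x n
    companion matrix has the negated coefficients in its first row and
    ones on the subdiagonal; its eigenvalues are exactly the roots.
    """
    c = np.asarray(den, dtype=float)
    c = c / c[0]                                # make the polynomial monic
    n = len(c) - 1
    companion = np.zeros((n, n))
    companion[0, :] = -c[1:]                    # first row: -c1 ... -cn
    companion[np.arange(1, n), np.arange(0, n - 1)] = 1.0  # subdiagonal of ones
    # The dense eigenvalue computation below is the O(n^3) step.
    return np.linalg.eigvals(companion)

den = np.array([1.0, -1.2, 0.36])              # (z - 0.6)^2
print(np.sort_complex(poles_via_companion(den)))  # roots near 0.6, 0.6
```

Building the matrix is cheap (O(n^2) memory writes at most); essentially all of the work is in the eigenvalue solve.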
| Input Size (degree n) | Approx. Operations |
|---|---|
| 10 | About 1,000 operations |
| 100 | About 1,000,000 operations |
| 1000 | About 1,000,000,000 operations |
Pattern observation: Operations grow roughly with the cube of the polynomial degree.
Time Complexity: O(n^3)
This means the time to find the poles grows roughly with the cube of the polynomial degree.
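A quick experiment makes the growth visible. This is an illustrative sketch; absolute timings depend on the machine and the underlying LAPACK build, and at small degrees constant overhead dominates:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

for degree in (50, 100, 200):
    coeffs = rng.standard_normal(degree + 1)   # random degree-n polynomial
    start = time.perf_counter()
    roots = np.roots(coeffs)                   # companion-matrix eigenvalue solve
    elapsed = time.perf_counter() - start
    print(f"degree {degree:4d}: {len(roots)} roots in {elapsed:.4f} s")
```

For a pure O(n^3) algorithm, doubling the degree should multiply the time by roughly eight once the degree is large enough for the eigenvalue solve to dominate.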
[X] Wrong: "Finding poles is a simple linear scan and takes O(n) time."
[OK] Correct: Root-finding involves complex iterative calculations that grow faster than linearly with polynomial degree.
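The O(n) intuition is not entirely wasted; it just applies to the wrong step. Once the poles are known, the stability check itself really is a single linear pass over n magnitudes. A sketch separating the two costs:

```python
import numpy as np

def is_stable(poles):
    """O(n) step: one pass comparing each pole magnitude to 1."""
    return bool(np.all(np.abs(poles) < 1))

den = np.array([1, -1.2, 0.36])
poles = np.roots(den)      # O(n^3): the dominant cost (eigenvalue solve)
print(is_stable(poles))    # expect True: both poles at 0.6
```

So the overall analysis is O(n^3) + O(n) = O(n^3): the root-finding dominates, not the scan.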
Understanding how stability checks scale helps you explain system analysis efficiency clearly and confidently.
"What if we used a simpler method that only approximates poles? How would the time complexity change?"
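One possible answer, as a sketch rather than the classical Jury test: a coefficient-magnitude bound gives an O(n) *sufficient* condition. For a monic denominator z^n + a1*z^(n-1) + ... + an, if the sum of |ai| is less than 1, then by the triangle inequality |p(z)| > 0 for all |z| >= 1, so every pole lies strictly inside the unit circle. The check can come back inconclusive, but it avoids root-finding entirely. The function name below is illustrative, not a library API:

```python
import numpy as np

def quick_stability_check(den):
    """O(n) sufficient test: True means provably stable, None means inconclusive.

    After normalizing to a monic polynomial, if sum|ai| < 1 then the
    triangle inequality gives |p(z)| > 0 for all |z| >= 1, so every
    pole is strictly inside the unit circle. Otherwise nothing is decided.
    """
    c = np.asarray(den, dtype=float)
    a = c[1:] / c[0]                  # normalize to a monic polynomial
    if np.sum(np.abs(a)) < 1:
        return True                   # stable, no root-finding needed
    return None                       # inconclusive: fall back to np.roots

print(quick_stability_check([1, 0.3, 0.2]))    # sum = 0.5 < 1 -> True
print(quick_stability_check([1, -1.2, 0.36]))  # sum = 1.56 -> None (yet stable)
```

Note that for the example denominator the test is inconclusive even though the system is stable (double pole at 0.6): the O(n) shortcut trades completeness for speed, which is exactly why the full O(n^3) analysis is sometimes necessary.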