# Sparse Iterative Solvers (`cg`, `gmres`) in SciPy: Time & Space Complexity
When solving large sparse linear systems, it is important to know how the time to find a solution grows as the system size increases; in other words, how the solver's work scales as the matrix and right-hand-side vector get bigger.
Let's analyze the time complexity of the following sparse solver code using SciPy:
```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# Sparse tridiagonal matrix; CSR format gives fast matrix-vector products
A = diags([1, 2, 1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
# `rtol` replaces the deprecated `tol` keyword in SciPy >= 1.12
x, info = cg(A, b, rtol=1e-5)
```
This code solves a system with a sparse tridiagonal matrix using the conjugate gradient (CG) method, an iterative Krylov solver suited to symmetric positive definite matrices.
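Before analyzing cost, it helps to confirm that the solve actually succeeded. A minimal follow-up check, assuming the same matrix as above (the residual computation is an addition, not part of the original example):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
A = diags([1, 2, 1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = cg(A, b)  # default relative tolerance is 1e-5

# info == 0 means converged; a positive value is the iteration count
# reached without meeting the tolerance
print(info)

# Relative residual: how well x actually satisfies A x = b
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(rel_res)
```

Checking `info` matters in practice: iterative solvers can return without converging, and the returned `x` is then only a partial answer.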
Look at what repeats inside the solver:
- Primary operation: a sparse matrix-vector product, which costs O(nnz) ≈ O(3n) = O(n) for this tridiagonal matrix.
- How many times: once per iteration; the solver repeats it k times until the residual falls below the tolerance.
As the matrix size grows, each multiplication takes more time, and more steps may be needed.
| Input size (n) | Approx. multiplications (at k ≈ 10 iterations) |
|---|---|
| 10 | ~100 |
| 100 | ~1,000 |
| 1000 | ~10,000 |
Pattern observation: total work grows in proportion to n × k, the matrix size times the number of iterations.
Time Complexity: O(k × n)
Here k is the number of iterations and n is the matrix dimension. Each iteration is dominated by one sparse matrix-vector product, which costs O(nnz) ≈ O(3n) = O(n) for a tridiagonal matrix.
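The O(k × n) claim can be checked empirically by recording k for several sizes with a callback. This is a sketch; the iteration counter is an addition to the original example:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

for n in (10, 100, 1000):
    A = diags([1, 2, 1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    iters = [0]  # mutable counter, bumped once per CG iteration
    x, info = cg(A, b, callback=lambda xk: iters.__setitem__(0, iters[0] + 1))

    k = iters[0]
    # Total work ~ k iterations x ~3n multiplications per matvec
    print(f"n={n:5d}  k={k:4d}  approx multiplications={3 * n * k}")
```

The `callback` argument is invoked after each iteration with the current iterate, which makes it a convenient hook for counting without touching the solver internals.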
[X] Wrong: "The solver always takes the same number of steps regardless of matrix size."
[OK] Correct: Larger systems often need more iterations to reach a good solution, so time grows with both size and iteration count.
Understanding how iterative solvers scale helps you explain performance in real data science tasks involving large sparse data.
"What if the matrix was dense instead of sparse? How would the time complexity change?"