
Sparse iterative solvers (gmres, cg) in SciPy - Step-by-Step Execution

Concept Flow - Sparse iterative solvers (gmres, cg)
1. Start with a sparse matrix A and a vector b.
2. Choose a solver: GMRES or CG.
3. Initialize the guess x0 (usually zeros).
4. Iterative loop: compute the residual r = b - A*x.
5. Is the residual norm below the tolerance?
   - Yes: stop, the solution x has been found.
   - No: update x with a solver step and repeat the loop.
The solver starts with a guess and improves it step-by-step until the solution fits the equation well enough.
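The loop above can be sketched in plain NumPy. The `iterative_solve` helper and the Richardson-style `step` below are hypothetical illustrations of the generic structure, not SciPy's actual implementation:

```python
import numpy as np

def iterative_solve(A, b, step, tol=1e-8, maxiter=100):
    """Generic iterative loop from the flow above (hypothetical helper).

    `step` stands in for one solver update (a CG or GMRES step)."""
    x = np.zeros_like(b, dtype=float)    # initial guess x0 = zeros
    for _ in range(maxiter):
        r = b - A @ x                    # residual r = b - A*x
        if np.linalg.norm(r) < tol:      # residual norm < tolerance?
            return x                     # yes: stop, solution found
        x = step(x, r)                   # no: update x with a solver step
    return x

# Richardson iteration as the simplest possible `step` (not CG/GMRES):
A = np.array([[2.0, 1, 0, 0], [1, 2, 1, 0], [0, 1, 2, 1], [0, 0, 1, 2]])
b = np.array([1.0, 2, 2, 1])
x = iterative_solve(A, b, step=lambda x, r: x + 0.3 * r)
print(np.round(x, 2))
```

The damping factor 0.3 is chosen so the simple update converges on this particular matrix; real solvers like CG pick the step direction and size automatically.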
Execution Sample
SciPy
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Tridiagonal matrix: 2 on the main diagonal, 1 on the two off-diagonals
A = diags([1, 2, 1], [-1, 0, 1], shape=(4, 4), format="csr")
b = np.array([1.0, 2.0, 2.0, 1.0])
x, info = cg(A, b)  # info == 0 means the iteration converged
This code solves Ax = b with the Conjugate Gradient method; CG is appropriate here because the sparse matrix A is symmetric positive definite.
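A sanity check worth adding after the call, sketched below: `info == 0` is SciPy's convergence flag (a positive value means the iteration limit was reached without converging), and the residual norm confirms how well x satisfies Ax = b:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

A = diags([1, 2, 1], [-1, 0, 1], shape=(4, 4), format="csr")
b = np.array([1.0, 2.0, 2.0, 1.0])
x, info = cg(A, b)

assert info == 0                      # 0 means the solver converged
residual = np.linalg.norm(b - A @ x)  # how well does x satisfy Ax = b?
print(residual)                       # tiny: CG converges quickly here
```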
Execution Table
Iteration | Residual norm | Residual < tol? | Action | Approximate solution x
0 | 3.16 | False | Start with x = 0, compute initial residual | [0.0, 0.0, 0.0, 0.0]
1 | 0.18 | False | CG step: update x, compute new residual | [0.28, 0.56, 0.56, 0.28]
2 | 0.00 | True | Residual below tolerance, stop | [0.20, 0.60, 0.60, 0.20]
💡 The residual norm reached zero at iteration 2, so the solution was found.
Variable Tracker
Variable | Initial | After iteration 1 | After iteration 2
x (solution) | [0.0, 0.0, 0.0, 0.0] | [0.28, 0.56, 0.56, 0.28] | [0.20, 0.60, 0.60, 0.20]
Residual norm | 3.16 | 0.18 | 0.00
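The history in the tracker can be reproduced with `cg`'s `callback` parameter, which SciPy invokes with the current iterate after each step. A sketch (the exact iteration count depends on SciPy's default tolerance):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

A = diags([1, 2, 1], [-1, 0, 1], shape=(4, 4), format="csr")
b = np.array([1.0, 2.0, 2.0, 1.0])

history = []                                    # one entry per CG iteration

def track(xk):                                  # SciPy calls this with the
    history.append(np.linalg.norm(b - A @ xk))  # current iterate each step

x, info = cg(A, b, callback=track)
print([round(r, 2) for r in history])  # residual per iteration, as in the table
```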
Key Moments - 2 Insights
Why does the residual norm start high and then decrease?
Because the initial guess x = 0 is usually far from the true solution, the difference b - A*x is large. Each iteration improves x, reducing the residual norm, as rows 0 to 2 of the execution table show.
What does it mean when the residual norm becomes zero?
It means the current solution x satisfies the equation Ax = b within the tolerance, so the solver stops. See row 2 of the execution table, where the condition becomes True.
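The section mentions GMRES but only demonstrates CG. A sketch of the GMRES counterpart: CG requires A to be symmetric positive definite, while GMRES handles general nonsingular matrices, such as this nonsymmetric tridiagonal example (the matrix is an illustrative choice):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Nonsymmetric tridiagonal system: CG would not apply, GMRES does.
# Strict diagonal dominance (3 > 1 + 1) guarantees A is nonsingular.
A = diags([1, 3, -1], [-1, 0, 1], shape=(4, 4), format="csr")
b = np.array([1.0, 2.0, 2.0, 1.0])

x, info = gmres(A, b)
assert info == 0                   # converged
print(np.linalg.norm(b - A @ x))   # small residual
```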
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the approximate solution x after iteration 1?
A) [0.20, 0.60, 0.60, 0.20]
B) [0.28, 0.56, 0.56, 0.28]
C) [0.0, 0.0, 0.0, 0.0]
D) [1.0, 1.0, 1.0, 1.0]
💡 Hint
Check the 'Approximate solution x' column at iteration 1 in the execution table.
At which iteration does the residual norm first become less than 1?
A) Iteration 0
B) Iteration 2
C) Iteration 1
D) Never
💡 Hint
Look at the 'Residual norm' column across the execution table rows.
If the initial guess x was closer to the true solution, how would the residual norm at iteration 1 change?
A) It would be smaller
B) It would be larger
C) It would be zero
D) It would not change
💡 Hint
Refer to the variable tracker, which shows how the residual norm evolves from the initial guess.
Concept Snapshot
Sparse iterative solvers like GMRES and CG solve Ax=b for large sparse A.
They start with a guess x0 and improve it iteratively.
Each step reduces the residual r = b - A*x.
Stop when residual norm is below tolerance.
Useful for big sparse systems where direct methods are slow.
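To make the last point concrete, a small sketch comparing an iterative solve against SciPy's sparse direct solver `spsolve` on a larger system (the size and the well-conditioned test matrix are illustrative choices):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spsolve

# A large, well-conditioned SPD tridiagonal system (illustrative size).
n = 100_000
A = diags([1, 4, 1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(A, b)  # sparse direct solve: factorize A, then solve
x_iter, info = cg(A, b)   # iterative solve: only matrix-vector products

assert info == 0
print(np.max(np.abs(x_iter - x_direct)))  # the two solutions agree closely
```

For a banded matrix like this one the direct solve is also fast; the iterative solver's advantage grows for very large systems whose factorizations would be too expensive or too dense to store.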
Full Transcript
Sparse iterative solvers such as GMRES and CG start with a sparse matrix A and a vector b. They pick an initial guess for the solution x, often all zeros. They then repeatedly compute the residual, which measures how far Ax is from b. If the residual is too large, they update x to move closer to the true solution. This loop continues until the residual is small enough, meaning the solution is accurate enough for the chosen tolerance. The example code uses CG to solve a small sparse system, showing how x and the residual norm change each iteration until the solution is found.