SciPy · Data · ~10 mins

Preconditioners in SciPy - Step-by-Step Execution

Concept Flow - Preconditioners
Start: Linear System Ax=b
Choose Preconditioner M
Transform System: M^-1 A x = M^-1 b
Iterative Solver (e.g., CG) uses M to speed up
Check Convergence
Solution x
Preconditioners transform a linear system to help iterative solvers find solutions faster by improving convergence.
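One way to see why this helps: CG's convergence rate depends on the condition number of the preconditioned matrix M^-1 A, and a good M pushes that number toward 1. A quick numerical check on the 2x2 system used in this lesson, with a diagonal (Jacobi) preconditioner (a minimal sketch, not part of the original example; the improvement is small here only because the 2x2 example is already well conditioned):

```python
import numpy as np

# The 2x2 SPD system from the execution sample
A = np.array([[4.0, 1.0], [1.0, 3.0]])

# Jacobi (diagonal) preconditioner: M = diag(A), so M^-1 = diag(1/A_ii)
M_inv = np.diag(1.0 / np.diag(A))

# CG convergence depends on the condition number; M^-1 A is better conditioned
print("cond(A)       =", np.linalg.cond(A))
print("cond(M^-1 A)  =", np.linalg.cond(M_inv @ A))
```

On badly scaled systems the gap between the two condition numbers is far larger, which is where preconditioning earns its keep.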
Execution Sample
SciPy
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Small symmetric positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Jacobi (diagonal) preconditioner: M = diag(A), so M^-1 is just 1/diagonal
M_inv = np.diag(1.0 / np.diag(A))
M = LinearOperator(A.shape, matvec=lambda v: M_inv @ v)

x, info = cg(A, b, M=M)
Solves Ax=b using Conjugate Gradient with a diagonal preconditioner to speed up convergence.
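The speed-up is easiest to see on a larger system. The sketch below (my own construction, not from the lesson) builds a 100x100 SPD matrix whose diagonal spans three orders of magnitude and uses cg's callback argument to count iterations with and without the Jacobi preconditioner:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# SPD test matrix: diagonal varies from 1 to 1000, weak tridiagonal coupling.
# Diagonally dominant and symmetric, so it is positive definite.
n = 100
d = np.linspace(1.0, 1000.0, n)
A = np.diag(d)
A[np.arange(n - 1), np.arange(1, n)] = 0.4   # superdiagonal
A[np.arange(1, n), np.arange(n - 1)] = 0.4   # subdiagonal
b = np.ones(n)

# Jacobi preconditioner rescales the wildly varying diagonal to 1
M_inv = np.diag(1.0 / np.diag(A))
M = LinearOperator(A.shape, matvec=lambda v: M_inv @ v)

def solve_and_count(precond):
    iters = []  # callback fires once per CG iteration
    x, info = cg(A, b, M=precond, callback=lambda xk: iters.append(1))
    return x, info, len(iters)

x_plain, info_plain, n_plain = solve_and_count(None)
x_prec, info_prec, n_prec = solve_and_count(M)
print(f"without preconditioner: {n_plain} iterations (info={info_plain})")
print(f"with preconditioner:    {n_prec} iterations (info={info_prec})")
```

Because Jacobi preconditioning collapses the diagonal scaling, the preconditioned run converges in a handful of iterations while the plain run needs many more.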
Execution Table
Step | Residual Norm | Preconditioned Residual | Direction Vector | Solution x | Info
0 | 2.2361 | [0.25, 0.6667] | [0.25, 0.6667] | [0.0, 0.0] | Starting CG
1 | 0.4024 | [-0.0942, 0.0471] | [-0.0875, 0.0649] | [0.2065, 0.5507] | Iteration 1
2 | 0.0 | [0.0, 0.0] | [0.0, 0.0] | [0.0909, 0.6364] | 0
💡 Residual norm reached zero, CG converged in 2 iterations with preconditioning
Variable Tracker
Variable | Start | After 1 | After 2 | Final
Residual Norm | 2.2361 | 0.4024 | 0.0 | 0.0
Solution x | [0.0, 0.0] | [0.2065, 0.5507] | [0.0909, 0.6364] | [0.0909, 0.6364]
Direction Vector | [0.25, 0.6667] | [-0.0875, 0.0649] | [0.0, 0.0] | [0.0, 0.0]
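The tracker's quantities come straight out of the preconditioned-CG recurrence. A hand-rolled version (a minimal sketch of the textbook PCG algorithm, not SciPy's internals) prints each of them per iteration:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))  # Jacobi preconditioner

x = np.zeros(2)
r = b - A @ x        # residual
z = M_inv @ r        # preconditioned residual
p = z.copy()         # search direction
for k in range(10):
    print(f"iter {k}: ||r|| = {np.linalg.norm(r):.4f}, x = {x}")
    if np.linalg.norm(r) < 1e-10:
        break
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)     # step length along p
    x = x + alpha * p
    r_new = r - alpha * Ap
    z_new = M_inv @ r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p           # new direction, M-conjugate to the old one
    r, z = r_new, z_new

# For this 2x2 system CG converges in two steps to x = [1/11, 7/11]
```

In exact arithmetic CG on an n x n SPD system finishes in at most n iterations, which is why a 2x2 example always converges by step 2 regardless of the preconditioner.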
Key Moments - 2 Insights
Why do we apply M^-1 to the residual instead of directly to A or b?
Applying M^-1 to the residual rescales it to account for the system's conditioning, which improves the solver's search direction and speeds up convergence, as shown in the 'Preconditioned Residual' column of the execution table.
What does the 'Info' value represent in the CG solver output?
'Info' reports the solver status: 0 means convergence was achieved, as seen in the last row of the execution table, while a positive value means the iteration limit was reached before the tolerance was met.
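Per the scipy.sparse.linalg.cg documentation, info is 0 on successful convergence and positive when maxiter is exhausted first. A quick check (my own snippet, forcing early termination with maxiter=1):

```python
import numpy as np
from scipy.sparse.linalg import cg

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# info == 0: the solver reached the requested tolerance
x, info = cg(A, b)
print(info)

# info > 0: the iteration limit was hit before the tolerance was reached
x_partial, info_partial = cg(A, b, maxiter=1)
print(info_partial)
```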
Visual Quiz - 3 Questions
Test your understanding
Look at the execution table at Step 1: what is the approximate value of the Residual Norm?
A) 0.0
B) 2.2361
C) 0.4024
D) 0.5
💡 Hint
Check the 'Residual Norm' column at Step 1 in the execution table.
At which step does the CG solver converge according to the execution table?
A) Step 0
B) Step 2
C) Step 1
D) Never converges
💡 Hint
Look at the 'Info' column and the residual norm reaching zero in the execution table.
If we remove the preconditioner M, how would the number of iterations likely change?
A) Increase, more iterations needed
B) Stay the same
C) Decrease to 1 iteration
D) Solver would fail immediately
💡 Hint
Preconditioners improve convergence speed; without M, iterative solvers usually take more steps.
Concept Snapshot
Preconditioners help iterative solvers solve Ax=b faster.
They transform the system using M^-1 to improve convergence.
Common preconditioners include diagonal or incomplete factorizations.
Use with solvers like Conjugate Gradient (cg) in SciPy's scipy.sparse.linalg.
Preconditioning reduces iterations and computation time.
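The snapshot mentions incomplete factorizations; SciPy exposes one via scipy.sparse.linalg.spilu. A sketch on a 1-D Poisson matrix (my own example; note that ILU factors are not guaranteed symmetric, so the textbook pairing for CG is incomplete Cholesky, but for this tridiagonal matrix the ILU is essentially exact and works fine):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spilu, LinearOperator

# 1-D Poisson matrix: sparse, symmetric positive definite
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner: applying M^-1
# means one cheap triangular solve with the approximate factors
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = cg(A, b, M=M)
print("info =", info)
```

ILU-type preconditioners are the usual step up from diagonal scaling when the matrix has significant off-diagonal structure.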
Full Transcript
Preconditioners are tools used to make solving linear systems faster. When we have a system Ax=b, we choose a preconditioner M and transform it into M^-1 A x = M^-1 b. This helps iterative solvers like Conjugate Gradient find the solution more quickly by improving the system's conditioning. In the example, we used a diagonal (Jacobi) preconditioner and saw the residual norm drop rapidly, converging in just two iterations. The preconditioned residual guides the solver's search direction. Without preconditioning, the solver would typically take more steps. The 'Info' output tells us whether the solver converged. Preconditioners are important for efficient data science computations involving large linear systems.