Solving linear systems (solve) in SciPy - Time & Space Complexity
When we solve linear systems with scipy, we want to know how the running time changes as the system size grows. The question: how does solving a bigger system affect the amount of work done?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.linalg import solve

n = 10                      # example size
A = np.random.rand(n, n)    # square coefficient matrix of size n x n
b = np.random.rand(n)       # right-hand-side vector of size n
x = solve(A, b)             # solve Ax = b for x
```
This code solves a system of linear equations with a square matrix A and vector b.
Identify the repeated work: the loops, recursion, and array traversals hidden inside the call.
- Primary operation: the LU factorization and the forward/back substitution steps inside `solve`.
- How many times: the factorization runs nested loops over matrix rows and columns, roughly n iterations in each of three nested levels, for about n^3 scalar operations; the substitution steps add only about n^2 more.
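To see where the three nested loops come from, here is a minimal sketch that only *counts* the multiply-add updates a plain Gaussian elimination would perform on an n x n matrix. This is an illustration of the loop structure, not what `scipy.linalg.solve` actually executes (it calls an optimized LAPACK routine), but the operation count scales the same way.

```python
def elimination_op_count(n):
    """Count the multiply-add updates in textbook Gaussian
    elimination on an n x n matrix (sketch for counting only)."""
    ops = 0
    for k in range(n):                 # loop over pivot columns
        for i in range(k + 1, n):      # rows below the pivot
            for j in range(k + 1, n):  # entries updated in each row
                ops += 1               # one multiply-add per entry
    return ops

# Doubling n multiplies the count by roughly 2^3 = 8:
small = elimination_op_count(50)
large = elimination_op_count(100)
print(small, large, large / small)
```

The count works out to (n-1)n(2n-1)/6, which is about n^3/3, so the ratio between n = 100 and n = 50 comes out close to 8.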
As the matrix size n grows, the number of calculations grows much faster than n itself.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1,000 operations |
| 100 | About 1,000,000 operations |
| 1000 | About 1,000,000,000 operations |
Pattern observation: the work grows roughly with the cube of n, so doubling n makes the work about eight times bigger.
Time Complexity: O(n^3)
This means if you double the size of the system, the time to solve it grows about eight times.
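You can check this scaling empirically by timing `solve` at a few sizes. A minimal sketch (exact timings will vary with your BLAS library, thread count, and cache sizes, and very small n is dominated by call overhead, so expect only a rough factor-of-8 pattern per doubling):

```python
import time
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
for n in (200, 400, 800):          # sizes chosen for illustration
    A = rng.random((n, n))
    b = rng.random(n)
    t0 = time.perf_counter()
    x = solve(A, b)                # solve Ax = b
    elapsed = time.perf_counter() - t0
    print(f"n={n}: {elapsed:.4f} s")
```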
[X] Wrong: "Solving linear systems with solve grows linearly with n because it just loops once over the matrix."
[OK] Correct: solve does far more than a single loop; it performs a matrix factorization with nested loops over rows and columns, so the work grows roughly as n^3, much faster than n.
Understanding how solving linear systems scales helps you explain performance in data science tasks like regression or simulations, showing you know how algorithms behave with bigger data.
"What if the matrix A is sparse instead of dense? How would the time complexity change?"