Matrix creation and operations in SciPy - Time & Space Complexity
When working with matrices in SciPy, it's important to know how the time to create and operate on them grows as the matrix size increases. In other words, we want to understand how the number of steps changes as the matrix gets bigger.
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy import linalg

n = 100
A = np.random.rand(n, n)  # Create an n x n matrix
B = np.random.rand(n, n)  # Create another n x n matrix

C = A + B          # Matrix addition
D = linalg.inv(A)  # Matrix inversion
```
This code creates two square matrices, adds them, and then finds the inverse of one matrix.
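A quick sanity check (a sketch, not part of the original snippet, using a seeded generator for reproducibility) confirms what the inversion step computes: multiplying `A` by its inverse should recover the identity matrix, up to floating-point error.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)  # seeded so the result is reproducible
n = 100
A = rng.random((n, n))
B = rng.random((n, n))

C = A + B          # elementwise addition: C[i, j] = A[i, j] + B[i, j]
D = linalg.inv(A)  # the inverse satisfies A @ D ≈ I

# For a well-conditioned random matrix this should hold within default tolerances.
print(np.allclose(A @ D, np.eye(n)))
```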
- Primary operation: Matrix inversion involves many repeated calculations internally; `linalg.inv` factorizes the matrix (an LU decomposition) and then solves for the inverse, performing row operations throughout.
- How many times: Addition touches each of the n x n elements once, so its steps scale with the square of n, while inversion's internal steps scale with the cube of n.
As the matrix size n grows, the time to add matrices grows slower than the time to invert a matrix.
| Input Size (n) | Approx. Operations for Addition | Approx. Operations for Inversion |
|---|---|---|
| 10 | 100 | 1,000 |
| 100 | 10,000 | 1,000,000 |
| 1000 | 1,000,000 | 1,000,000,000 |
Pattern observation: Addition grows with the square of n, but inversion grows roughly with the cube of n, so inversion gets much slower as n increases.
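The table's counts can be reproduced with a short calculation (a sketch using the simple n^2 / n^3 operation-count model from above; real implementations differ by constant factors):

```python
# Approximate operation counts under the simple model:
# addition touches each of the n*n elements once; inversion does ~n**3 work.
for n in [10, 100, 1000]:
    add_ops = n ** 2
    inv_ops = n ** 3
    print(f"n={n:>4}: addition ~{add_ops:>9,} ops, inversion ~{inv_ops:>13,} ops")
```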
Time Complexity: O(n^3)
This means that as the matrix size doubles, the time to invert it increases roughly eightfold, since (2n)^3 = 8n^3.
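The eightfold figure follows directly from the cubic model, which we can check with the idealized operation count (not wall-clock time):

```python
def inversion_cost(n):
    """Idealized O(n^3) operation count for inversion (constants ignored)."""
    return n ** 3

# Doubling n from 100 to 200 multiplies the cost by 2**3 = 8.
ratio = inversion_cost(200) / inversion_cost(100)
print(ratio)  # 8.0
```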
[X] Wrong: "Matrix addition and inversion take about the same time because they both work on the whole matrix."
[OK] Correct: Addition adds each of the n^2 elements exactly once, so it grows with n squared, while inversion performs on the order of n^3 more complex steps, making it much slower as size grows.
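To see why addition is O(n^2), here is a naive pure-Python version (for intuition only; NumPy performs the same elementwise work in compiled code) that counts exactly one add per element:

```python
def add_matrices(A, B):
    """Naive elementwise addition; performs exactly n*n additions."""
    n = len(A)
    ops = 0
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = A[i][j] + B[i][j]  # one addition per element
            ops += 1
    return C, ops

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C, ops = add_matrices(A, B)
print(C)    # [[6.0, 8.0], [10.0, 12.0]]
print(ops)  # 4 = 2*2 additions for a 2x2 matrix
```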
Understanding how matrix operations scale helps you explain your choices when working with data or algorithms that use matrices, showing you know what affects performance.
"What if we replaced matrix inversion with matrix multiplication? How would the time complexity change?"