Eigenvalues and eigenvectors (eig) in SciPy - Time & Space Complexity
We want to understand how the time to find eigenvalues and eigenvectors changes as the matrix size grows: how much more work is needed when the matrix gets bigger?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.linalg import eig

n = 10                      # Example size
A = np.random.rand(n, n)    # Create a random n x n matrix
values, vectors = eig(A)    # Compute eigenvalues and eigenvectors
```
This code creates a random square matrix and computes its eigenvalues and eigenvectors with SciPy's eig; the eigenvalues may be complex even for a real matrix, and the eigenvectors are returned as the columns of vectors.
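As a quick sanity check, you can verify that each returned column really is an eigenvector of A (a minimal sketch, not part of the original snippet):

```python
import numpy as np
from scipy.linalg import eig

n = 10
A = np.random.rand(n, n)
values, vectors = eig(A)

# Each column vectors[:, i] should satisfy A v = lambda_i v
for i in range(n):
    assert np.allclose(A @ vectors[:, i], values[i] * vectors[:, i])
print("verified", values.shape, vectors.shape)
```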
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Internal matrix factorizations and similarity transformations that find the eigenvalues and eigenvectors; under the hood, LAPACK reduces the matrix to Hessenberg form and then runs QR iteration (the first stage is sketched below).
- How many times: Each of these stages makes multiple passes over the n x n matrix, and the total work is roughly proportional to n cubed steps.
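The heavy lifting happens inside compiled LAPACK routines, but the first stage can be illustrated directly: reducing A to Hessenberg form is a similarity transform, so it preserves the eigenvalues, and it already costs on the order of n cubed operations. This is an illustrative sketch, not how you would normally call eig:

```python
import numpy as np
from scipy.linalg import eig, hessenberg

n = 100
A = np.random.rand(n, n)

# Stage 1 (illustration): reduce A to upper Hessenberg form, A = Q H Q^T.
# The similarity transform preserves the eigenvalues and costs ~O(n^3).
H, Q = hessenberg(A, calc_q=True)

# Stage 2 (QR iteration, also ~O(n^3)) then works on H; here we simply
# confirm that H has the same eigenvalues as A.
print(np.allclose(np.sort_complex(eig(A)[0]), np.sort_complex(eig(H)[0])))
```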
As the matrix size n grows, the work needed grows much faster than n or n squared.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1,000 operations |
| 100 | About 1,000,000 operations |
| 1000 | About 1,000,000,000 operations |
Pattern observation: The number of operations grows roughly as the cube of n, so doubling n makes the work about eight times bigger.
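You can check that pattern empirically with a rough timing sketch; absolute times depend on your hardware and BLAS/LAPACK build, but the ratio between successive sizes should hover around eight:

```python
import time
import numpy as np
from scipy.linalg import eig

prev = None
for n in (200, 400, 800):
    A = np.random.rand(n, n)
    start = time.perf_counter()
    eig(A)
    elapsed = time.perf_counter() - start
    note = f"(about {elapsed / prev:.1f}x the previous size)" if prev else ""
    print(f"n = {n:4d}: {elapsed:.3f} s {note}")
    prev = elapsed
```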
Time Complexity: O(n^3)
This means the time to find eigenvalues and eigenvectors grows roughly with the cube of the matrix size.
[X] Wrong: "Finding eigenvalues is as fast as simple loops over the matrix, like O(n^2)."
[OK] Correct: An O(n^2) pass would only touch each of the n^2 entries once; eigenvalue algorithms instead perform repeated factorizations and transformations, which takes on the order of n^3 work.
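To make the contrast concrete, here is a small sketch comparing a genuinely O(n^2) operation (summing every entry) with a full eigendecomposition of the same matrix; the gap widens quickly as n grows:

```python
import time
import numpy as np
from scipy.linalg import eig

n = 1000
A = np.random.rand(n, n)

start = time.perf_counter()
A.sum()          # touches each of the n^2 entries once: O(n^2)
t_scan = time.perf_counter() - start

start = time.perf_counter()
eig(A)           # repeated factorizations and transformations: O(n^3)
t_eig = time.perf_counter() - start

print(f"element scan: {t_scan:.4f} s, eig: {t_eig:.4f} s")
```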
Knowing how eigenvalue computations scale helps you understand performance in data science tasks like PCA or stability analysis.
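For example, in PCA the expensive step is the eigendecomposition of the covariance matrix. A minimal sketch, assuming hypothetical data with 500 samples and 50 features and using eigh (the routine for symmetric matrices, which a covariance matrix always is):

```python
import numpy as np
from scipy.linalg import eigh

X = np.random.rand(500, 50)         # hypothetical data: 500 samples, 50 features
Xc = X - X.mean(axis=0)             # center the data
C = np.cov(Xc, rowvar=False)        # 50 x 50 covariance matrix

# eigh exploits symmetry but is still O(d^3) in the number of features d
eigvals, eigvecs = eigh(C)

# Principal components: eigenvectors ordered by descending eigenvalue
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]
print(components.shape)             # (50, 50)
```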
"What if the matrix is sparse instead of dense? How would the time complexity change?"