Cholesky decomposition in SciPy - Time & Space Complexity
We want to understand how the time needed to do Cholesky decomposition changes as the size of the matrix grows.
This helps us know how fast or slow the method will be for bigger problems.
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.linalg import cholesky

# Create a random symmetric positive definite matrix
n = 100
A = np.random.rand(n, n)
A = A @ A.T + n * np.eye(n)  # A @ A.T is symmetric positive semidefinite; adding n*I keeps it safely positive definite

# Compute the Cholesky decomposition
L = cholesky(A, lower=True)
```
This code creates a square matrix and finds its Cholesky decomposition, which breaks it into a product of a lower triangular matrix and its transpose.
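This property is easy to check directly. The sketch below (using a small illustrative size and the same `n * np.eye(n)` shift as above for numerical safety) verifies that the returned factor is lower triangular and that `L @ L.T` reproduces `A`:

```python
import numpy as np
from scipy.linalg import cholesky

n = 4  # small illustrative size
A = np.random.rand(n, n)
A = A @ A.T + n * np.eye(n)  # symmetric positive definite

L = cholesky(A, lower=True)

# L is lower triangular, and multiplying it by its transpose recovers A
lower_triangular_ok = np.allclose(L, np.tril(L))
reconstruction_ok = np.allclose(L @ L.T, A)
```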
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: Nested loops over matrix rows and columns to compute elements of the lower triangular matrix.
- How many times: For an n x n matrix, roughly n rows and n columns are processed, with inner calculations depending on previous elements.
As the matrix size n grows, the number of calculations grows much faster than linearly in n.
| Input Size (n) | Approx. Operations (~n³/3) |
|---|---|
| 10 | About 330 |
| 100 | About 330,000 |
| 1000 | About 330,000,000 |
Pattern observation: The work grows roughly with the cube of n, so doubling n makes the work about eight times bigger.
Time Complexity: O(n^3)
This means if the matrix size doubles, the time to compute the decomposition increases about eight times.
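You can observe this scaling on your own machine with a quick timing sketch. The sizes below are arbitrary, and the exact ratio between the two timings depends on the BLAS build, caching, and hardware, so expect "roughly 8x" rather than exactly 8x:

```python
import time
import numpy as np
from scipy.linalg import cholesky

times = {}
for n in (500, 1000):  # doubling the matrix size
    A = np.random.rand(n, n)
    A = A @ A.T + n * np.eye(n)  # SPD test matrix
    t0 = time.perf_counter()
    cholesky(A, lower=True)
    times[n] = time.perf_counter() - t0

print(times)  # the n=1000 run should take roughly 8x the n=500 run
```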
[X] Wrong: "Cholesky decomposition runs in linear time because it just processes each element once."
[OK] Correct: The method involves nested calculations where each element depends on many others, so the total work grows much faster than just the number of elements.
Knowing how the time grows with matrix size helps you explain performance in real projects and shows you understand the cost behind matrix operations.
"What if we used a sparse matrix instead of a dense one? How would the time complexity change?"