Matrix Chain Multiplication in TypeScript (DSA) - Time & Space Complexity
Matrix Chain Multiplication finds the cheapest order in which to multiply a chain of matrices, where cost is counted in scalar multiplications.
We want to know how the running time grows as the number of matrices increases.
Let's analyze the time complexity of this dynamic programming solution.
```typescript
function matrixChainOrder(p: number[]): number {
  // Matrix i has dimensions p[i] x p[i + 1], so p describes n = p.length - 1 matrices.
  const n = p.length - 1;
  // dp[i][j] = minimum scalar multiplications to compute the product of matrices i..j.
  // A single matrix (i === j) costs 0, which the fill below already covers.
  const dp: number[][] = Array.from({ length: n }, () => Array(n).fill(0));
  // Solve subchains in order of increasing length.
  for (let len = 2; len <= n; len++) {
    for (let i = 0; i <= n - len; i++) {
      const j = i + len - 1;
      dp[i][j] = Infinity;
      // Try every split point k: multiply (i..k) by (k+1..j).
      for (let k = i; k < j; k++) {
        const cost = dp[i][k] + dp[k + 1][j] + p[i] * p[k + 1] * p[j + 1];
        if (cost < dp[i][j]) dp[i][j] = cost;
      }
    }
  }
  return dp[0][n - 1];
}
```
This code finds the minimum multiplication cost for a chain of matrices using dynamic programming.
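To see why the order of multiplication matters at all, here is a small worked example; the dimensions (10×20, 20×30, 30×40) are my own illustration, not taken from the code above:

```typescript
// Three matrices: A is 10x20, B is 20x30, C is 30x40, so p = [10, 20, 30, 40].
// Multiplying an a x b matrix by a b x c matrix costs a * b * c scalar multiplications.
const costABfirst = 10 * 20 * 30 + 10 * 30 * 40; // (A·B)·C: 6000 + 12000 = 18000
const costBCfirst = 20 * 30 * 40 + 10 * 20 * 40; // A·(B·C): 24000 + 8000 = 32000
console.log(costABfirst, costBCfirst); // 18000 32000, so (A·B)·C is the cheaper order
```

Calling `matrixChainOrder([10, 20, 30, 40])` returns exactly this minimum, 18000.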
Look at the loops that repeat work:
- Primary operation: the innermost loop tries every split point k between matrices i and j.
- How many times: the two outer loops visit every subchain (i, j), about n²/2 pairs, and for each pair the inner loop tries up to n − 1 split points.
As the number of matrices grows, the operation count rises quickly because of the three nested loops.
| Matrices (n) | Approx. Operations (≈ n³) |
|---|---|
| 10 | About 1,000 |
| 100 | About 1,000,000 |
| 1000 | About 1,000,000,000 |
Pattern observation: Operations grow roughly by the cube of the input size.
Time Complexity: O(n³)
Space Complexity: O(n²), for the n × n dp table.
This means if you double the number of matrices, the time needed grows about eight times.
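The eight-fold growth can be checked by counting how often the innermost loop body runs. `countSplits` below is a hypothetical helper of my own that mirrors the loop structure of `matrixChainOrder` without computing any costs; note the exact count is about n³/6, while the table above rounds to the n³ order of growth:

```typescript
// Mirrors the three nested loops of the DP solution and counts inner iterations.
function countSplits(n: number): number {
  let count = 0;
  for (let len = 2; len <= n; len++) {
    for (let i = 0; i <= n - len; i++) {
      for (let k = i; k < i + len - 1; k++) count++; // one count per split tried
    }
  }
  return count; // exact value is n(n-1)(n+1)/6, i.e. roughly n^3 / 6
}

console.log(countSplits(10), countSplits(20)); // 165 1330: doubling n gives ~8x the work
```

The ratio 1330 / 165 ≈ 8.06 matches the cubic prediction of 2³ = 8.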
[X] Wrong: "The solution runs in linear time because it just loops over the matrices."
[OK] Correct: There are three nested loops, so the work grows much faster than just going through the matrices once.
Understanding this complexity helps you explain how dynamic programming improves over brute force: the naive approach tries every parenthesization, a count that grows exponentially (the Catalan numbers), while the DP table cuts the work to O(n³). It also shows your grasp of nested loops and overlapping subproblems.
"What if we used memoization with recursion instead of bottom-up DP? How would the time complexity change?"
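As a sketch of an answer (my own illustration, assuming the same convention that matrix i is p[i] × p[i + 1]): a memoized recursive version fills the same O(n²) table of subproblems and still tries O(n) split points per entry, so the time complexity stays O(n³), with an extra O(n) recursion depth on the call stack:

```typescript
// Top-down (memoized) variant of matrix chain multiplication.
function matrixChainMemo(p: number[]): number {
  const n = p.length - 1;
  // memo[i][j] caches the best cost for the subchain i..j; -1 means "not computed yet".
  const memo: number[][] = Array.from({ length: n }, () => Array(n).fill(-1));
  const solve = (i: number, j: number): number => {
    if (i === j) return 0; // a single matrix needs no multiplication
    if (memo[i][j] !== -1) return memo[i][j]; // reuse a cached subproblem
    let best = Infinity;
    for (let k = i; k < j; k++) {
      best = Math.min(best, solve(i, k) + solve(k + 1, j) + p[i] * p[k + 1] * p[j + 1]);
    }
    memo[i][j] = best;
    return best;
  };
  return solve(0, n - 1);
}

console.log(matrixChainMemo([10, 20, 30, 40])); // 18000, same as the bottom-up version
```

The trade-off is recursion overhead and stack depth in exchange for only computing the subproblems that are actually reachable, and many people find the recursive formulation easier to derive first.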