# Matrix multiplication with the @ operator in NumPy: Time Complexity
We want to understand how the time needed to multiply two matrices grows as their size increases: how does the number of calculations change when the matrices get bigger?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

n = 10  # example size
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C = A @ B  # matrix product of two n x n matrices
```
This code multiplies two square matrices A and B of size n by n using the @ operator. To analyze it, identify the repeated work: the loops, recursion, or array traversals hidden inside the operation.
- Primary operation: Multiplying and summing elements for each cell in the result matrix.
- How many times: the result has n × n cells, and each cell is a dot product requiring n multiplications and n − 1 additions.
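One way to see where these counts come from is to write the multiplication as explicit loops and tally the multiplications. This is only a sketch for counting: NumPy's `@` operator dispatches to optimized BLAS routines, not Python loops.

```python
import numpy as np

def matmul_count(A, B):
    """Naive triple-loop multiply of two square matrices, counting multiplications."""
    n = A.shape[0]
    C = np.zeros((n, n))
    mults = 0
    for i in range(n):          # one pass per row of the result
        for j in range(n):      # one pass per column of the result
            for k in range(n):  # dot product of length n for each cell
                C[i, j] += A[i, k] * B[k, j]
                mults += 1
    return C, mults

n = 10
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C, mults = matmul_count(A, B)
print(mults)                   # 1000 multiplications for n = 10, i.e. n**3
print(np.allclose(C, A @ B))   # True: the loops agree with the @ operator
```

The three nested loops, each running n times, are exactly why the total work is n × n × n = n³ multiplications.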
As the matrix size n grows, the number of calculations grows quickly because each new row and column adds many more multiplications.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1,000 |
| 100 | About 1,000,000 |
| 1000 | About 1,000,000,000 |
Pattern observation: the operation count grows with the cube of n, so doubling n makes the work about eight times (2³) bigger.
Time Complexity: O(n^3)
This means the time to multiply two n by n matrices grows roughly with the cube of n, so bigger matrices take much more time.
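You can check the cubic trend empirically with a small timing experiment. This is a sketch: because `@` uses multithreaded, cache-aware BLAS kernels, the measured ratio when n doubles is often below the ideal 8×, especially at small sizes.

```python
import time
import numpy as np

def time_matmul(n, repeats=3):
    """Best-of-repeats wall time for one n x n matrix product with @."""
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = A @ B
        best = min(best, time.perf_counter() - start)
    return best

# Each step doubles n; under pure O(n^3) scaling the time would grow ~8x per step.
for n in (256, 512, 1024):
    print(f"n={n:5d}: {time_matmul(n):.4f} s")
```

Absolute timings depend on your CPU and NumPy's BLAS backend, so treat the printed numbers as illustrative rather than a benchmark.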
[X] Wrong: "Matrix multiplication with @ operator runs in linear time like simple addition."
[OK] Correct: Multiplying matrices involves nested calculations over every row–column pair, not a single pass through the data, so the work grows much faster than linearly.
Understanding how matrix multiplication scales helps you explain performance in data science tasks such as machine learning and graphics, where large matrix products dominate the computation.
"What if we multiply a matrix of size n by another matrix of size n by m? How would the time complexity change?"
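As a starting point for that question, here is a small sketch with illustrative sizes (n = 4, m = 3 are assumptions, not values from the text). The same cell-counting argument applies: the result has n × m cells, and each is a dot product of length n.

```python
import numpy as np

n, m = 4, 3               # example sizes for illustration
A = np.random.rand(n, n)  # square n x n matrix
B = np.random.rand(n, m)  # rectangular n x m matrix
C = A @ B
print(C.shape)            # (4, 3): the result is n x m
# Each of the n*m result cells is a dot product of length n,
# so the total work is about n * n * m multiplications.
```

Try reasoning from this count to the big-O expression before checking your answer.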