Matrix transpose operations in NumPy - Time & Space Complexity
We want to understand how the time to transpose a matrix changes as the matrix gets bigger.
How does the work grow when we flip rows and columns?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

n = 10  # example size
matrix = np.random.rand(n, n)   # n x n matrix of random floats
transposed = matrix.T.copy()    # .T returns a view; .copy() materializes it
```
This code creates a square matrix and then transposes it. Note that `matrix.T` by itself only returns a view (NumPy swaps the array's strides in constant time); it is the `.copy()` that actually reads and writes every element into a new row-column-swapped layout.
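A subtlety worth checking: in NumPy, `matrix.T` alone is a constant-time view over the same data, and only the `.copy()` moves the n² elements. A small sketch using `np.shares_memory` can confirm this:

```python
import numpy as np

n = 4
matrix = np.random.rand(n, n)

view = matrix.T          # constant time: same buffer, strides swapped
real = matrix.T.copy()   # O(n^2): every element is read and rewritten

print(np.shares_memory(matrix, view))  # True  - the view moved no data
print(np.shares_memory(matrix, real))  # False - a fresh buffer was filled
print(matrix.strides, view.strides)    # the view simply swaps the strides
```

This is why the analysis below counts the work of producing the transposed copy, not of calling `.T` alone.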
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Accessing each element to rearrange it in the transposed matrix.
- How many times: Once for every element in the matrix, which is n × n (that is, n²) times.
As the matrix size grows, the number of elements grows by the square of n.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 100 |
| 100 | 10,000 |
| 1000 | 1,000,000 |
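The table's counts can be reproduced directly, since the transpose copy touches exactly one element per cell. In this sketch, the helper name `op_count` is just illustrative:

```python
import numpy as np

def op_count(n):
    """Element moves needed to materialize an n x n transpose."""
    matrix = np.random.rand(n, n)
    transposed = matrix.T.copy()
    return transposed.size  # one access per element: n * n

for n in (10, 100, 1000):
    print(n, op_count(n))  # matches the table: 100, 10000, 1000000
```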
Pattern observation: Doubling the size of the matrix makes the work about four times bigger.
Time Complexity: O(n²)
This means the time to transpose grows with the square of the matrix size, because every element must be moved.
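The doubling pattern follows directly from the n² count, since (2n)² / n² = 4. A quick arithmetic check (real wall-clock timings would also include allocation and cache effects, so this only models the element count):

```python
# Work for an n x n transpose copy is n * n element moves.
for n in (10, 50, 200):
    work_n = n * n              # elements moved at size n
    work_2n = (2 * n) ** 2      # elements moved after doubling to 2n
    print(n, work_2n / work_n)  # the ratio is always 4.0
```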
[X] Wrong: "Transposing a matrix is just flipping it, so it takes constant time."
[OK] Correct: The "flip" intuition only holds for the lazy view (`matrix.T` is O(1) because it just swaps strides). Actually producing the transposed data means accessing and moving every element, so the work grows with the number of elements.
Understanding how matrix operations scale helps you explain efficiency clearly and shows you can think about data size impact in real tasks.
"What if the matrix was not square but rectangular with dimensions n by m? How would the time complexity change?"