Heatmap with plt.imshow in Matplotlib - Time & Space Complexity
We want to understand how the time to create a heatmap grows as the data size increases.
How does the plotting time change when the matrix gets bigger?
Analyze the time complexity of the following code snippet.
```python
import matplotlib.pyplot as plt
import numpy as np

# Create a random matrix of size n x n
n = 100
data = np.random.rand(n, n)

# Plot heatmap
plt.imshow(data, cmap='viridis')
plt.colorbar()
plt.show()
```
This code creates a heatmap from a square matrix of size n by n using plt.imshow.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Rendering each cell of the n x n matrix as a colored pixel.
- How many times: Once for each of the n² cells in the matrix.
As the matrix size n grows, the number of cells to draw grows quadratically: an n × n matrix has n² cells.
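A quick way to see this quadratic growth is to count the cells directly. This minimal sketch (the sizes chosen are illustrative) uses NumPy's `size` attribute, which reports the total number of entries imshow must color:

```python
import numpy as np

for n in (10, 100, 1000):
    data = np.random.rand(n, n)
    # data.size is the number of cells imshow must color: n * n
    print(f"n = {n:>4} -> {data.size:>9,} cells")
```

Each tenfold increase in n yields a hundredfold increase in cells, matching the table below.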
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 100 |
| 100 | 10,000 |
| 1000 | 1,000,000 |
Pattern observation: Doubling n roughly quadruples the work, because the area of the matrix grows with n².
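The doubling pattern can be verified numerically. A small sketch (the specific n values are illustrative):

```python
# Sketch: doubling n multiplies the cell count by 4.
for n in (50, 100, 200, 400):
    cells = n * n
    prev_cells = (n // 2) * (n // 2)
    # Ratio of this cell count to the count for half the size
    print(f"n = {n:>3}: {cells:>7,} cells ({cells / prev_cells:.1f}x the work of n = {n // 2})")
```

The ratio is exactly 4 for each doubling, confirming the quadratic (area) growth.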
Time Complexity: O(n²)
This means the time to draw the heatmap grows roughly with the square of the matrix size.
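To observe this in practice, one can time the rendering off-screen with Matplotlib's Agg backend. A rough sketch, with two caveats: absolute timings vary by machine, and for small n the fixed figure-creation overhead dominates; also, the final raster is bounded by the figure's pixel resolution, so measured times may grow somewhat slower than n² for very large matrices:

```python
import time

import matplotlib
matplotlib.use("Agg")  # render off-screen, no window needed
import matplotlib.pyplot as plt
import numpy as np

for n in (100, 400, 1600):
    data = np.random.rand(n, n)
    fig, ax = plt.subplots()
    start = time.perf_counter()
    ax.imshow(data, cmap="viridis")
    fig.canvas.draw()  # force the actual rasterization
    elapsed = time.perf_counter() - start
    plt.close(fig)
    print(f"n = {n:>4}: {elapsed * 1000:7.1f} ms")
```

Using `fig.canvas.draw()` instead of `plt.show()` keeps the measurement focused on rendering rather than window management.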
[X] Wrong: "The time grows linearly with n because we just loop once over the data."
[OK] Correct: The data is two-dimensional, so we actually process n × n = n² cells, not just n.
Understanding how plotting time grows with data size helps you explain performance in data visualization tasks clearly and confidently.
"What if we changed the data from a square matrix to a long vector? How would the time complexity change?"