Cumulative histograms in Matplotlib - Time & Space Complexity
We want to understand how the time to create a cumulative histogram changes as the data size grows.
How does the number of data points affect the work matplotlib does to draw the histogram?
Analyze the time complexity of the following code snippet.
```python
import matplotlib.pyplot as plt
import numpy as np

data = np.random.randn(1000)              # n = 1000 random data points
plt.hist(data, bins=50, cumulative=True)  # count into 50 bins, then accumulate
plt.show()
```
This code creates a cumulative histogram from 1000 random data points divided into 50 bins.
- Primary operation: Counting how many data points fall into each bin.
- How many times: Each of the n data points is checked once to find its bin.
As the number of data points increases, the work to count them into bins grows roughly in direct proportion.
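The counting pass can be sketched in pure Python. This is a minimal illustration, not matplotlib's actual implementation: it assumes uniform bin widths, and the function name `cumulative_histogram` and its signature are made up for this example. With uniform bins, each point's bin index is computed arithmetically in constant time, so the whole counting pass touches each of the n points exactly once.

```python
def cumulative_histogram(data, lo, hi, bins):
    """Count each point into its bin, then accumulate the counts.

    Each of the n points is placed once (O(1) arithmetic per point with
    uniform bins), so counting is O(n); the running sum over the bins
    adds only O(bins) on top.
    """
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:                        # one pass: each point checked once
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
        elif x == hi:                     # right edge belongs to the last bin
            counts[-1] += 1

    cumulative = []
    total = 0
    for c in counts:                      # O(bins) accumulation step
        total += c
        cumulative.append(total)
    return cumulative
```

Note that the final cumulative count equals the number of in-range points, which is why the last bar of a cumulative histogram reaches n.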
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: Doubling the data roughly doubles the counting work.
Time Complexity: O(n)
This means the time to build the cumulative histogram grows linearly with the number of data points.
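A rough way to see the linear pattern empirically is to time the counting step directly. The sketch below uses `np.histogram` (the counting routine `plt.hist` relies on) on increasing data sizes; the helper name `time_hist` is made up for this example, and actual timings will vary by machine.

```python
import time
import numpy as np

def time_hist(n, bins=50):
    """Time the bin-counting step for n random points."""
    data = np.random.randn(n)
    start = time.perf_counter()
    counts, _ = np.histogram(data, bins=bins)
    elapsed = time.perf_counter() - start
    return np.cumsum(counts), elapsed    # cumulative counts, seconds taken

for n in (10_000, 100_000, 1_000_000):
    cum, t = time_hist(n)
    print(f"n={n:>9}: final cumulative count={cum[-1]}, time={t:.4f}s")
```

On most machines the reported times grow roughly tenfold with each tenfold increase in n, matching the O(n) analysis above.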
[X] Wrong: "The number of bins affects the time complexity more than the number of data points."
[OK] Correct: The number of bins is fixed and small relative to the data size; the dominant work is the single pass that places each data point into a bin, so the time grows with n, not with the bin count.
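Even when bin edges are non-uniform, the bin count b enters the cost only weakly: the bin for each point can be found by binary search over the b sorted edges, making the total work O(n log b). The sketch below illustrates this with Python's `bisect` module; `bin_index` and the example edges are hypothetical names for this illustration.

```python
from bisect import bisect_right

def bin_index(x, edges):
    """Locate x's bin among sorted edges by binary search: O(log b) per point.

    Across n points the total is O(n log b), still dominated by n --
    log2 of even a million bins is only about 20.
    """
    return bisect_right(edges, x) - 1

edges = [0.0, 0.25, 0.5, 0.75, 1.0]   # 4 bins with these example edges
print(bin_index(0.6, edges))          # falls in bin 2, i.e. [0.5, 0.75)
```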
Understanding how data size affects plotting time helps you explain performance in data visualization tasks clearly and confidently.
"What if we increased the number of bins significantly? How would the time complexity change?"