Normalized histograms in Matplotlib - Time & Space Complexity
We want to understand how the time to create a normalized histogram changes as the data size grows.
How does the work increase when we have more data points?
Analyze the time complexity of the following code snippet.
```python
import matplotlib.pyplot as plt
import numpy as np

n = 1000                               # example data size
data = np.random.randn(n)              # n data points
plt.hist(data, bins=50, density=True)  # count into 50 bins, then normalize
plt.show()
```
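To see what `density=True` actually computes, here is a minimal sketch using `np.histogram` (the counting routine Matplotlib delegates to): count the points per bin, then divide each count by `n` times the bin width so the bars' total area is 1.

```python
import numpy as np

n = 1000
data = np.random.randn(n)

# Count points per bin, then normalize so the bars' total area is 1.
counts, edges = np.histogram(data, bins=50)
widths = np.diff(edges)
density = counts / (n * widths)

# The area under the normalized histogram is (density * width) summed over bins.
print(np.sum(density * widths))  # ~1.0
```

The normalization step touches only the 50 bins, so the counting pass over the n data points dominates the running time.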
This code creates a histogram with 50 bins and normalizes it so the total area under the bars equals 1 (a probability density).
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Counting how many data points fall into each of the 50 bins.
- How many times: Each of the n data points is checked once to find its bin.
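The counting step above can be made explicit with a hypothetical pure-Python version (`count_into_bins` is our own illustrative helper, not Matplotlib's actual implementation): a single pass over the data, one bin lookup per point.

```python
import numpy as np

def count_into_bins(data, edges):
    """One pass over the data: each point is checked once, so O(n)."""
    counts = [0] * (len(edges) - 1)
    width = edges[1] - edges[0]              # uniform bin width
    for x in data:                           # each of the n points is visited once
        i = int((x - edges[0]) / width)      # O(1) arithmetic per point
        i = min(max(i, 0), len(counts) - 1)  # clamp the rightmost edge inward
        counts[i] += 1
    return counts

data = np.random.randn(1000)
edges = np.linspace(data.min(), data.max(), 51)  # 50 uniform bins
print(sum(count_into_bins(data, edges)))  # 1000: every point is counted once
```

Because the bins are uniform, finding a point's bin is one arithmetic step rather than a search, so the per-point cost is constant.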
As the number of data points grows, the time to count them into bins grows at roughly the same rate.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks to place points in bins |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
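The pattern in the table can be checked empirically with a rough timing sketch (the absolute numbers depend on the machine; only the trend matters):

```python
import time
import numpy as np

# Time the binning step at increasing data sizes; expect roughly linear growth.
for n in (10_000, 100_000, 1_000_000):
    data = np.random.randn(n)
    t0 = time.perf_counter()
    np.histogram(data, bins=50, density=True)
    elapsed = time.perf_counter() - t0
    print(f"n={n:>9,}  time={elapsed:.4f}s")
```

Tenfold more data should take on the order of ten times as long, up to noise from caching and allocation.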
Pattern observation: The work grows directly with the number of data points.
Time Complexity: O(n)
This means the time to create the normalized histogram grows linearly with the number of data points.
[X] Wrong: "The number of bins affects the time complexity a lot, so more bins means much slower."
[OK] Correct: The number of bins is usually fixed and small compared to the data size, so it does not change the dominant growth pattern.
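A quick check of that claim: with uniform bins, each point's bin index is one arithmetic step, so increasing the bin count changes the size of the output array, not the per-point work. The normalization also stays correct at any bin count.

```python
import numpy as np

data = np.random.randn(100_000)

# Varying the bin count (with n fixed) leaves the O(n) counting pass unchanged;
# the total area under the density histogram is 1 regardless of bins.
for bins in (10, 50, 500):
    density, edges = np.histogram(data, bins=bins, density=True)
    area = np.sum(density * np.diff(edges))
    print(f"bins={bins:>3}  total area={area:.6f}")
```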
Understanding how data size affects histogram creation helps you explain performance in data visualization tasks clearly and confidently.
"What if we increased the number of bins to grow with the data size? How would the time complexity change?"