Pivot Tables with pivot_table() in Python Data Analysis - Time & Space Complexity
We want to understand how the time it takes to create a pivot table changes as the data grows.
Specifically, how does the pivot_table() function handle bigger data sets?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

# Five rows spread across three categories
data = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C'],
    'Value': [10, 20, 30, 40, 50]
})

# Sum the Value column for each unique Category
pivot = data.pivot_table(index='Category', values='Value', aggfunc='sum')
```
This code creates a pivot table that sums values for each category.
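Running the snippet and printing the result confirms the per-category sums:

```python
import pandas as pd

data = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C'],
    'Value': [10, 20, 30, 40, 50]
})

pivot = data.pivot_table(index='Category', values='Value', aggfunc='sum')
print(pivot)
# Category A sums 10 + 30, B sums 20 + 40, and C has the single value 50.
```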
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Scanning all rows in the data to group by category.
- How many times: Once for each row in the data (n times).
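The single pass described above can be sketched in plain Python as a dictionary accumulation. This is a conceptual model only, not pandas' actual implementation (which uses vectorized C code), but it makes the one-visit-per-row pattern explicit:

```python
# Conceptual sketch of the O(n) grouping pass behind pivot_table
categories = ['A', 'B', 'A', 'B', 'C']
values = [10, 20, 30, 40, 50]

totals = {}
for cat, val in zip(categories, values):  # one visit per row: n iterations total
    totals[cat] = totals.get(cat, 0) + val

print(totals)  # {'A': 40, 'B': 60, 'C': 50}
```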
As the number of rows grows, the time to group and sum grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 operations to group and sum |
| 100 | About 100 operations |
| 1000 | About 1000 operations |
Pattern observation: Doubling the data roughly doubles the work done.
Time Complexity: O(n)
This means the time grows linearly with the number of rows in the data. (Space is linear too: the grouping works over the n input rows and produces one output row per unique category.)
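You can check the linear pattern empirically by timing pivot_table on increasingly large frames. Absolute times depend on your machine, so this is a rough sketch rather than a rigorous benchmark; the point is that tenfold growth in rows produces roughly tenfold growth in runtime:

```python
import time
import pandas as pd

def time_pivot(n):
    # Build a frame with n rows spread across three categories.
    data = pd.DataFrame({
        'Category': [chr(65 + i % 3) for i in range(n)],  # 'A', 'B', 'C' repeating
        'Value': range(n),
    })
    start = time.perf_counter()
    data.pivot_table(index='Category', values='Value', aggfunc='sum')
    return time.perf_counter() - start

for n in (10_000, 100_000, 1_000_000):
    print(f"n={n:>9,}: {time_pivot(n):.4f}s")
```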
[X] Wrong: "Pivot tables take constant time no matter how big the data is."
[OK] Correct: The function must look at each row to group and sum, so more data means more work.
Understanding how data grouping scales helps you explain your approach clearly when working with real data.
"What if we added multiple grouping columns? How would the time complexity change?"