groupby() basics in Pandas - Time & Space Complexity
We want to understand how the time needed to group data grows as the data gets bigger.
How does pandas groupby() handle larger datasets in terms of speed?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

data = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C', 'A'],
    'Value': [10, 20, 30, 40, 50, 60]
})
grouped = data.groupby('Category').sum()
```
This code groups rows by the 'Category' column and sums the 'Value' for each group.
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: Scanning each row to assign it to a group.
- How many times: Once for each row in the data.
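That single pass can be sketched in plain Python with a dictionary of running sums. This is only a conceptual model, not what pandas actually executes (pandas uses optimized hash tables and vectorized C code internally), but it makes the one-check-per-row pattern visible:

```python
# Plain-Python sketch of the single pass that groupby().sum() performs
# conceptually. Pandas does this with optimized internals; this only
# mirrors the idea of "scan each row once, bucket it by key".
categories = ['A', 'B', 'A', 'B', 'C', 'A']
values = [10, 20, 30, 40, 50, 60]

sums = {}
for cat, val in zip(categories, values):  # one pass over n rows: O(n)
    sums[cat] = sums.get(cat, 0) + val    # O(1) average dict update

print(sums)  # {'A': 100, 'B': 60, 'C': 50}
```

Each row triggers one dictionary lookup and one addition, so total work scales with the number of rows.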
As the number of rows grows, the time to group and sum grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 row checks and sums |
| 100 | About 100 row checks and sums |
| 1000 | About 1000 row checks and sums |
Pattern observation: Doubling the rows roughly doubles the work done.
Time Complexity: O(n)
This means the time grows linearly with the number of rows in the data. Extra space is roughly O(k) for the k unique group keys and their running sums (at most n, on top of the input itself).
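One way to see the linear pattern empirically is to time `groupby().sum()` at increasing row counts. The exact numbers below depend entirely on your machine; the point is the trend, which should grow roughly in proportion to n:

```python
import time

import numpy as np
import pandas as pd

# Time groupby().sum() at increasing row counts. Absolute timings are
# machine-dependent; elapsed time should grow roughly linearly with n.
rng = np.random.default_rng(0)
for n in [10_000, 100_000, 1_000_000]:
    df = pd.DataFrame({
        'Category': rng.choice(list('ABC'), size=n),
        'Value': rng.integers(0, 100, size=n),
    })
    start = time.perf_counter()
    grouped = df.groupby('Category')['Value'].sum()
    elapsed = time.perf_counter() - start
    # Sanity check: per-group sums must add up to the overall total.
    assert grouped.sum() == df['Value'].sum()
    print(f"n={n:>9,}: {elapsed:.4f}s")
```

Doubling n should roughly double the elapsed time, matching the pattern in the table above.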
[X] Wrong: "Grouping data is instant no matter how big the data is."
[OK] Correct: Each row must be checked and assigned to a group, so more rows mean more work and more time.
Knowing how grouping scales helps you explain your data processing choices clearly and confidently.
"What if we grouped by two columns instead of one? How would the time complexity change?"