GroupBy performance considerations in Pandas - Time & Space Complexity
When we use pandas GroupBy, we want to know how long the operation takes as the data grows. The question to ask is: how does grouping the data affect the time needed to finish?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

data = pd.DataFrame({
    'Category': ['A', 'B', 'C', 'A', 'B'] * 200,
    'Value': range(1000)
})

result = data.groupby('Category').sum()
```
This code groups the rows by the 'Category' column and sums the 'Value' column within each group.
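For a quick sanity check, printing result shows one summed row per category; the totals below follow from the arithmetic series in 'Value' and are easy to confirm by running the snippet:

```python
print(result)
#           Value
# Category
# A        199600
# B        200000
# C         99900
```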
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Scanning all rows to assign them to groups.
- How many times: Once for each row in the data (n times).
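That single pass can be pictured with a plain-Python sketch. This is only a conceptual model of hash-based grouping, not how pandas actually implements it (pandas uses optimized hash tables written in C/Cython), but it makes the one-visit-per-row pattern explicit:

```python
from collections import defaultdict

def group_sum(categories, values):
    # Visit each row exactly once: hash the key, add the value -> O(n) overall
    totals = defaultdict(int)
    for key, value in zip(categories, values):
        totals[key] += value  # amortized O(1) per row
    return dict(totals)

# Same data as the pandas example above
print(group_sum(['A', 'B', 'C', 'A', 'B'] * 200, range(1000)))
```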
As the number of rows grows, the time to group and sum grows roughly in the same way.
| Input Size (n) | Approx. Operations (assign + sum) |
|---|---|
| 10 | About 10 |
| 100 | About 100 |
| 1000 | About 1000 |
Pattern observation: The work grows roughly in direct proportion to the number of rows.
Time Complexity: O(n)
This means the time needed grows linearly as the number of rows increases.
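A rough way to see this on your own machine is to time the same groupby at a few input sizes. The absolute numbers depend on hardware and pandas version, but the growth should look roughly linear (a quick sketch, not a rigorous benchmark):

```python
import time
import pandas as pd

for n in (10_000, 100_000, 1_000_000):
    df = pd.DataFrame({
        'Category': ['A', 'B', 'C', 'A', 'B'] * (n // 5),
        'Value': range(n)
    })
    start = time.perf_counter()
    df.groupby('Category').sum()
    print(f"n={n:>9,}: {time.perf_counter() - start:.4f} s")
```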
[X] Wrong: "Grouping by many categories always makes the operation much slower than grouping by few categories."
[OK] Correct: The main cost depends mostly on the number of rows, not the number of groups. More groups add some overhead, but it is usually small compared to scanning all rows.
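To check this, you can hold the row count fixed and vary only the number of distinct categories. In most runs the timings stay in the same ballpark, because the dominant cost is the scan over the rows (again a sketch; results vary by machine):

```python
import time
import numpy as np
import pandas as pd

n = 1_000_000
for n_groups in (3, 1_000, 100_000):
    df = pd.DataFrame({
        'Category': np.arange(n) % n_groups,  # n rows, n_groups distinct keys
        'Value': np.arange(n)
    })
    start = time.perf_counter()
    df.groupby('Category').sum()
    print(f"{n_groups:>7,} groups: {time.perf_counter() - start:.4f} s")
```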
Understanding how grouping scales helps you explain data processing choices clearly and confidently.
"What if we grouped by two columns instead of one? How would the time complexity change?"