Named aggregation in Pandas - Time & Space Complexity
We want to understand how the time needed to run named aggregation in pandas changes as the data grows.
Specifically, how does grouping and aggregating data scale with more rows?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

df = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C'],
    'Value1': [10, 20, 30, 40, 50],
    'Value2': [5, 10, 15, 20, 25]
})

result = df.groupby('Category').agg(
    total_value1=pd.NamedAgg(column='Value1', aggfunc='sum'),
    mean_value2=pd.NamedAgg(column='Value2', aggfunc='mean')
)
```
This code groups data by 'Category' and calculates the sum of 'Value1' and the mean of 'Value2' for each group.
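To make the result concrete, here is a quick check of the aggregated values for the small example above (the printed values follow directly from the five input rows):

```python
import pandas as pd

df = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C'],
    'Value1': [10, 20, 30, 40, 50],
    'Value2': [5, 10, 15, 20, 25]
})

result = df.groupby('Category').agg(
    total_value1=pd.NamedAgg(column='Value1', aggfunc='sum'),
    mean_value2=pd.NamedAgg(column='Value2', aggfunc='mean')
)

# Category 'A' appears in rows 0 and 2:
#   sum of Value1  = 10 + 30 = 40
#   mean of Value2 = (5 + 15) / 2 = 10.0
print(result.loc['A', 'total_value1'])  # 40
print(result.loc['A', 'mean_value2'])   # 10.0
```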
Identify the repeated work: the loops, recursion, or array traversals that scale with the input.
- Primary operation: pandas internally scans each row once to assign its 'Category' value to a group, then runs each aggregation over the grouped rows.
- How many times: each of the n rows is visited once during grouping, and once more during aggregation, so the total work stays proportional to n.
As the number of rows grows, the time to group and aggregate grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 grouping and aggregation steps |
| 100 | About 100 grouping and aggregation steps |
| 1000 | About 1000 grouping and aggregation steps |
Pattern observation: The work grows roughly linearly as the number of rows increases.
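The pattern can be checked empirically with a rough timing loop (a sketch: the row counts are illustrative, and absolute timings will vary by machine and pandas version):

```python
import time

import numpy as np
import pandas as pd

# Time the same named aggregation at increasing row counts.
for n in [10_000, 100_000, 1_000_000]:
    df = pd.DataFrame({
        'Category': np.random.choice(['A', 'B', 'C'], size=n),
        'Value1': np.random.rand(n),
        'Value2': np.random.rand(n),
    })
    start = time.perf_counter()
    df.groupby('Category').agg(
        total_value1=pd.NamedAgg(column='Value1', aggfunc='sum'),
        mean_value2=pd.NamedAgg(column='Value2', aggfunc='mean'),
    )
    elapsed = time.perf_counter() - start
    print(f"n={n:>9,}: {elapsed:.4f}s")
```

Roughly, a 10x increase in rows should produce close to a 10x increase in elapsed time, which is the signature of linear growth.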
Time Complexity: O(n)
This means the time needed grows roughly in direct proportion to the number of rows in the data. Space is also linear: the original data occupies O(n), and the grouped result adds only one row per distinct category.
[X] Wrong: "Grouping and aggregation run in constant time no matter how big the data is."
[OK] Correct: The code must look at each row to assign it to a group and then combine values, so more rows mean more work.
Understanding how grouping and aggregation scale helps you explain data processing efficiency clearly and confidently.
"What if we added multiple aggregation functions per column? How would the time complexity change?"