filter() for Group-Level Filtering in Python Data Analysis - Time & Space Complexity
We want to understand how the running time changes when we use filter() on grouped data.
How does the work grow as the data size grows?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

data = pd.DataFrame({
    'group': ['A', 'A', 'B', 'B', 'C', 'C'],
    'value': [10, 20, 30, 40, 50, 60]
})

filtered = data.groupby('group').filter(lambda x: x['value'].mean() > 25)
```
This code groups data by 'group' and keeps only groups where the average 'value' is more than 25.
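For reference, group A has a mean of 15 and is dropped, while B (mean 35) and C (mean 55) pass, so `filtered` keeps only the B and C rows, with their original index labels:

```python
print(filtered)
#   group  value
# 2     B     30
# 3     B     40
# 4     C     50
# 5     C     60
```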
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: The code loops over each group created by groupby (a hand-written equivalent is sketched below).
- How many times: Once per group, it calculates the mean of the group's values and applies the filter condition.
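As a rough sketch of the work involved, here is an explicit loop that mimics what `groupby('group').filter(...)` computes (an illustration, not pandas' actual implementation): every row is read once to compute its group's mean, and the rows of passing groups are copied into the result.

```python
import pandas as pd

def filter_groups_by_mean(df, threshold=25):
    # Hand-written equivalent of df.groupby('group').filter(lambda x: x['value'].mean() > threshold)
    kept_parts = []
    for _, group_df in df.groupby('group'):       # one iteration per group
        if group_df['value'].mean() > threshold:  # reads every row in the group
            kept_parts.append(group_df)           # keep the whole group
    # Reassemble the surviving groups into a single DataFrame
    return pd.concat(kept_parts) if kept_parts else df.iloc[0:0]
```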
As the number of rows grows, the number of groups and their sizes affect the work done.
| Input Size (n rows) | Approx. Operations |
|---|---|
| 10 | About 10 operations to compute means and filter |
| 100 | About 100 operations, since each row is checked once |
| 1000 | About 1000 operations, scaling linearly with rows |
Pattern observation: The work grows roughly in direct proportion to the number of rows.
Time Complexity: O(n)
This means the time needed grows linearly as the data size grows.
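A minimal timing sketch to see the linear trend yourself (the group count, value range, and row counts below are arbitrary choices for the benchmark; absolute times depend on your machine):

```python
import time
import numpy as np
import pandas as pd

for n in (10_000, 100_000, 1_000_000):
    df = pd.DataFrame({
        'group': np.random.randint(0, 100, size=n),  # 100 arbitrary group labels
        'value': np.random.rand(n) * 100,
    })
    start = time.perf_counter()
    df.groupby('group').filter(lambda x: x['value'].mean() > 25)
    print(f"n={n:>9}: {time.perf_counter() - start:.4f} s")
```

If the work is linear, each tenfold increase in rows should make the elapsed time roughly ten times longer.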
[X] Wrong: "Filtering groups is faster because it only looks at groups, not all rows."
[OK] Correct: Even though we think in groups, the code still looks at every row inside each group to calculate the mean and decide if the group passes the filter.
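One way to convince yourself of this: count how many rows the filter function actually receives across all groups (the `rows_seen` counter is just for this demo, using the small DataFrame from above):

```python
rows_seen = 0

def keep_if_mean_above_25(x):
    global rows_seen
    rows_seen += len(x)  # x is the full sub-DataFrame for one group
    return x['value'].mean() > 25

data.groupby('group').filter(keep_if_mean_above_25)
print(rows_seen)  # 6 -- every row of the original DataFrame was examined
```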
Understanding how group-level filtering scales helps you explain data processing efficiency clearly and confidently.
What if we changed the filter condition to check the sum instead of the mean? How would the time complexity change?
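As a starting point for exploring that question, here is the sum-based variant (the threshold of 50 is an arbitrary stand-in, chosen only so some groups still pass). Computing a sum, like computing a mean, still reads every value in each group once, so the scan over the rows is the same order of work:

```python
filtered_by_sum = data.groupby('group').filter(lambda x: x['value'].sum() > 50)
print(filtered_by_sum)  # B (sum 70) and C (sum 110) pass; A (sum 30) is dropped
```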