groupby() Basics in Python Data Analysis - Time & Space Complexity
When we use groupby() in data analysis, we want to know how long it takes as our data grows.
We ask: How does the time to group data change when we have more rows?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

# Build a small DataFrame and sum 'Value' within each 'Category' group
data = pd.DataFrame({
    'Category': ['A', 'B', 'A', 'B', 'C'],
    'Value': [10, 20, 30, 40, 50]
})
grouped = data.groupby('Category').sum()
```
This code groups rows by the 'Category' column and sums the 'Value' for each group.
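Printing `grouped` makes the result concrete: one row per category, with that category's values summed (the exact column spacing may differ slightly between pandas versions).

```python
print(grouped)
#           Value
# Category
# A            40
# B            60
# C            50
```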
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Scanning each row once to assign it to a group.
- How many times: Once for each row in the data.
As the number of rows grows, the time to group them grows at roughly the same rate.
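To see why one scan is enough, here is a minimal plain-Python sketch of hash-based grouping. It is only an illustration of the one-pass idea, not how pandas implements groupby() internally.

```python
from collections import defaultdict

def group_sum(categories, values):
    """Sum values per category with a single pass over the rows."""
    totals = defaultdict(int)
    for cat, val in zip(categories, values):  # visit each row exactly once
        totals[cat] += val
    return dict(totals)

print(group_sum(['A', 'B', 'A', 'B', 'C'], [10, 20, 30, 40, 50]))
# {'A': 40, 'B': 60, 'C': 50}
```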
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 operations |
| 100 | About 100 operations |
| 1000 | About 1000 operations |
Pattern observation: The time grows linearly (in a straight line) with the number of rows.
Time Complexity: O(n)
This means the time to group data grows directly with the number of rows.
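A quick way to check this yourself is to time the same groupby on increasingly large DataFrames. The exact numbers depend on your hardware and pandas version, but the times should grow roughly in proportion to the row count.

```python
import time

import numpy as np
import pandas as pd

for n in [10_000, 100_000, 1_000_000]:
    df = pd.DataFrame({
        'Category': np.random.choice(['A', 'B', 'C'], size=n),
        'Value': np.random.randint(0, 100, size=n),
    })
    start = time.perf_counter()
    df.groupby('Category').sum()      # the operation we are measuring
    elapsed = time.perf_counter() - start
    print(f"{n:>9} rows: {elapsed:.4f} s")
```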
[X] Wrong: "Grouping data takes the same time no matter how many rows there are."
[OK] Correct: More rows mean more work to check and assign each row to a group, so time grows with data size.
Understanding how grouping scales helps you explain data processing steps clearly and shows you know how data size affects performance.
"What if we grouped by two columns instead of one? How would the time complexity change?"