Resampling Time Series in Python Data Analysis - Time & Space Complexity
When working with time series data, resampling helps us change the frequency of data points.
We want to know how the time it takes to resample grows as the data size grows.
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

# 1000 minute-level data points starting at midnight
dates = pd.date_range('2023-01-01', periods=1000, freq='min')
data = pd.Series(range(1000), index=dates)

# Downsample to hourly frequency, averaging each hour's values
resampled = data.resample('h').mean()
```
This code creates a time series with 1000 minute-level points and resamples it to hourly data by averaging.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Grouping data points by hour and computing the mean for each group.
- How many times: The number of groups depends on input size; roughly one group per hour in the data.
As the number of data points grows, the number of groups grows proportionally, and each group's mean is computed.
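This proportionality can be checked directly. The sketch below (a hypothetical helper, not part of the original snippet) builds minute-level series of increasing length and counts the hourly groups that resampling produces:

```python
import pandas as pd

def hourly_group_count(n):
    """Build n minute-level points and count the hourly groups produced."""
    dates = pd.date_range('2023-01-01', periods=n, freq='min')
    data = pd.Series(range(n), index=dates)
    # Each group covers one hour; group count scales with n
    return len(data.resample('h').mean())

print(hourly_group_count(60))    # 60 minutes -> 1 hourly group
print(hourly_group_count(600))   # 600 minutes -> 10 groups
print(hourly_group_count(6000))  # 6000 minutes -> 100 groups
```

Ten times the data points yields ten times the groups, and every point must still be read once to compute its group's mean.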
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1 group, 10 operations |
| 100 | About 2 groups, 100 operations |
| 1000 | About 17 groups, 1000 operations |
Pattern observation: Operations grow roughly in direct proportion to the number of data points.
Time Complexity: O(n)
This means the time to resample grows linearly as the number of data points increases.
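One way to see the linearity, under the same setup as the snippet above: each input point lands in exactly one hourly bucket, so the group sizes sum to n and the total averaging work is proportional to n.

```python
import pandas as pd

n = 1000
dates = pd.date_range('2023-01-01', periods=n, freq='min')
data = pd.Series(range(n), index=dates)

# count() gives the size of each hourly group; the sizes sum to n
# because every point belongs to exactly one bucket.
group_sizes = data.resample('h').count()
print(group_sizes.sum())  # 1000: every point is processed exactly once
```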
[X] Wrong: "Resampling time is constant no matter how much data there is."
[OK] Correct: More data means more groups and more calculations, so time grows with data size.
Understanding how resampling scales helps you handle real-world time series data efficiently and demonstrates that you can reason about performance.
"What if we resampled to a daily frequency instead of hourly? How would the time complexity change?"
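A sketch for exploring that question (the variable names here are illustrative): resampling to a coarser frequency produces fewer groups, but every input point is still visited once, so the time complexity remains O(n).

```python
import pandas as pd

n = 1000
dates = pd.date_range('2023-01-01', periods=n, freq='min')
data = pd.Series(range(n), index=dates)

# Fewer output groups at daily frequency, but the same total work:
# both resamples touch all n input points exactly once.
hourly = data.resample('h').count()
daily = data.resample('D').count()
print(len(hourly), len(daily))      # 17 hourly groups vs 1 daily group
print(hourly.sum() == daily.sum())  # True: both process all 1000 points
```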