Feature engineering pipelines in MLOps - Time & Space Complexity
When building feature engineering pipelines, it is important to understand how processing time grows as the dataset grows. In other words: how does the pipeline's execution time change when we add more data?
Analyze the time complexity of the following feature engineering pipeline code snippet.
```python
features = []
for record in dataset:
    feature1 = transform1(record)
    feature2 = transform2(record)
    combined = combine_features(feature1, feature2)
    features.append(combined)
```
This code applies two transformations and then combines them for each record in the dataset.
Look at what repeats as the data grows.
- Primary operation: Loop over each record in the dataset.
- How many times: Once per record, so n times for a dataset of n records.
As the number of records increases, the total work grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 sets of transformations and combinations |
| 100 | About 100 sets of transformations and combinations |
| 1000 | About 1000 sets of transformations and combinations |
Pattern observation: Doubling the data roughly doubles the work done.
Time Complexity: O(n)
This means the time to run the pipeline grows in direct proportion to the number of records. Space complexity is also O(n), since the `features` list stores one combined entry per record.
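The linear pattern can be checked by counting operations. The following is a minimal sketch: the transform and combine functions here are hypothetical stand-ins, since the snippet above leaves them undefined.

```python
def transform1(record):  # hypothetical placeholder transform
    return record * 2

def transform2(record):  # hypothetical placeholder transform
    return record + 1

def combine_features(f1, f2):  # hypothetical placeholder combiner
    return (f1, f2)

def run_pipeline(dataset):
    """Run the pipeline and return (features, operation_count)."""
    features = []
    ops = 0
    for record in dataset:
        feature1 = transform1(record)
        feature2 = transform2(record)
        combined = combine_features(feature1, feature2)
        features.append(combined)
        ops += 3  # two transforms + one combine per record
    return features, ops

for n in (10, 100, 1000):
    _, ops = run_pipeline(range(n))
    print(n, ops)  # operation count grows in direct proportion to n
```

Doubling the input from 10 to 20 records doubles the operation count, which is exactly the pattern the table above describes.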
[X] Wrong: "Adding more transformations inside the loop does not affect overall time complexity."
[OK] Correct: Each added transformation runs for every record, so it increases the total work, even if the growth pattern stays linear.
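A quick sketch of the corrected claim, using a hypothetical operation-count formula (one combine plus a configurable number of transforms per record):

```python
def pipeline_ops(n, num_transforms):
    """Total operations: num_transforms transform calls plus one combine per record."""
    return n * (num_transforms + 1)

# Two transforms vs. three transforms, at two dataset sizes:
for n in (100, 200):
    print(n, pipeline_ops(n, 2), pipeline_ops(n, 3))
# Both counts double when n doubles (still linear),
# but the three-transform pipeline does more total work at every size.
```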
Understanding how your pipeline scales with data size shows you can build efficient data workflows, a key skill in real projects.
"What if we added a nested loop inside the pipeline that compares each record to every other record? How would the time complexity change?"
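As a sketch of that scenario (the pairwise `compare` function is a hypothetical stand-in), comparing each record to every other record produces n * (n - 1) comparisons, which is O(n^2):

```python
def compare(a, b):
    # hypothetical pairwise check between two records, e.g. a similarity test
    return a == b

def count_pairwise_comparisons(dataset):
    """Compare each record to every other record; O(n^2) comparisons."""
    comparisons = 0
    for i in range(len(dataset)):
        for j in range(len(dataset)):
            if i != j:  # skip comparing a record to itself
                compare(dataset[i], dataset[j])
                comparisons += 1
    return comparisons

print(count_pairwise_comparisons(list(range(10))))   # 90
print(count_pairwise_comparisons(list(range(20))))   # 380
```

Note the contrast with the linear pipeline: doubling the data now roughly quadruples the work, so this kind of nested comparison becomes a bottleneck much sooner as datasets grow.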