Apache Airflow for ML orchestration in MLOps - Time & Space Complexity
When using Apache Airflow to orchestrate machine learning tasks, it's important to know how the time to build and schedule a workflow changes as you add more tasks. We want to understand how the total work grows as the number of tasks increases.
Analyze the time complexity of the following Airflow DAG code snippet.
```python
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime

def train_model(task_id):
    print(f"Training model {task_id}")

dag = DAG('ml_training', start_date=datetime(2024, 1, 1))

n = 10  # Define n before using it
for i in range(1, n + 1):
    task = PythonOperator(
        task_id=f'train_model_{i}',
        python_callable=lambda i=i: train_model(i),  # bind i at definition time
        dag=dag,
    )
```
This code creates a workflow with n training tasks, each running a model training step.
- Primary operation: creating and scheduling `n` training tasks in the DAG.
- How many times: the loop runs exactly `n` times, once per task.
As you add more tasks, the total number of operations grows directly with the number of tasks.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 task creations and schedules |
| 100 | 100 task creations and schedules |
| 1000 | 1000 task creations and schedules |
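The pattern in the table can be demonstrated without a running Airflow installation. Here is a minimal pure-Python sketch (the `build_dag` helper is hypothetical, not an Airflow API) that counts one create-and-schedule operation per task:

```python
def build_dag(n):
    """Simulate building a DAG with n training tasks,
    counting one create-and-schedule operation per task."""
    operations = 0
    for i in range(1, n + 1):
        # Each iteration stands in for one PythonOperator creation.
        operations += 1
    return operations

for n in (10, 100, 1000):
    print(n, build_dag(n))  # operation count grows directly with n
```

Doubling `n` doubles the count, which is exactly the O(n) behavior shown in the table.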
Pattern observation: the work grows in direct proportion to the number of tasks added.
Time Complexity: O(n)
This means the time to define and schedule tasks grows linearly: doubling the number of tasks roughly doubles the setup work.
[X] Wrong: "Adding more tasks will only take a tiny bit more time, almost no change."
[OK] Correct: each new task adds a fixed amount of creation and scheduling work, so total time grows steadily with the task count, not barely at all.
Understanding how task count affects workflow time helps you design efficient ML pipelines and shows you can reason about scaling in real projects.
"What if tasks depended on each other in a chain instead of running independently? How would the time complexity change?"
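One way to reason about this follow-up: building the DAG is still O(n) either way, but the end-to-end runtime changes. With enough parallel workers, independent tasks finish in roughly the time of the slowest task, while a chain (`t1 >> t2 >> ...` in Airflow) takes the sum of all task durations. A small sketch with made-up durations illustrates the difference:

```python
durations = [3, 5, 2, 4]  # hypothetical per-task runtimes in minutes

# Independent tasks with enough parallel workers:
# wall-clock time is roughly the slowest task.
independent_runtime = max(durations)

# Chained tasks: each waits for the previous one,
# so wall-clock time is the sum of all durations.
chained_runtime = sum(durations)

print(independent_runtime)  # 5
print(chained_runtime)      # 14
```

The scheduling work stays O(n) in both cases; what the chain changes is the critical path, which grows from one task's duration to the total of all n.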