In Airflow's Kubernetes Executor, how does dynamic scaling of task pods happen?
Think about how Kubernetes manages workloads with pods for isolation.
The Kubernetes Executor launches a separate worker pod for each task instance, dynamically, as tasks become ready. This lets Airflow scale task execution independently (down to zero worker pods when nothing is running) and reclaim resources as soon as each task finishes.
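The one-pod-per-task idea can be sketched in a few lines. The helper below is illustrative only: Airflow's real pod naming includes more parts (try number, scheduler-generated suffix) and its own length handling, but the core idea is the same: every task instance gets a uniquely named pod.

```python
import re
import uuid

def pod_name_for(dag_id: str, task_id: str) -> str:
    """Build a Kubernetes-safe pod name for one task instance.

    Illustrative sketch, not Airflow's actual naming algorithm.
    """
    base = f"{dag_id}-{task_id}".lower()
    # Pod names must be DNS-compatible: lowercase alphanumerics and '-'.
    base = re.sub(r"[^a-z0-9-]", "-", base)
    suffix = uuid.uuid4().hex[:8]   # unique per task instance
    return f"{base}-{suffix}"[:63]  # keep within common name-length limits

# Each queued task instance gets its own pod, so two runs of the
# same task never share a worker.
print(pod_name_for("etl_daily", "extract_orders"))
```

Because the pod name is unique per instance, cleanup is also per-instance: deleting a finished pod cannot affect any other running task.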
What is the expected output of kubectl get pods when Airflow is running tasks with the Kubernetes Executor?
kubectl get pods
Remember each task runs in its own pod with Kubernetes Executor.
With the Kubernetes Executor, each Airflow task runs in its own pod, so kubectl get pods shows one worker pod per running task alongside the long-lived scheduler and webserver pods. Worker pod names are derived from the DAG and task IDs plus a unique suffix, and appear in Running or Completed state (Completed pods disappear if worker-pod deletion is enabled).
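As an illustration only (pod names, counts, and ages depend entirely on which DAGs are running; the names below are hypothetical):

```
$ kubectl get pods -n airflow
NAME                                READY   STATUS      RESTARTS   AGE
airflow-scheduler-6d9f7c4b5-x2k8q   1/1     Running     0          3d
airflow-webserver-7f8b9c6d4-p1m2n   1/1     Running     0          3d
etl-daily-extract-orders-a1b2c3d4   1/1     Running     0          45s
etl-daily-load-warehouse-e5f6a7b8   0/1     Completed   0          2m
```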
Which configuration snippet correctly enables the Kubernetes Executor with dynamic pod scaling in airflow.cfg?
Check the executor type and Kubernetes section for correct settings.
To enable the Kubernetes Executor, set executor = KubernetesExecutor under [core] and configure the Kubernetes section with the namespace, in_cluster flag, and the worker container image. No fixed worker pool is declared anywhere; scaling is dynamic because the executor simply creates one pod per queued task.
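A sketch of the relevant airflow.cfg sections. Option names below follow the classic [kubernetes] section layout; newer Airflow releases move these under [kubernetes_executor] and favor a pod_template_file, so check the docs for your version:

```ini
[core]
executor = KubernetesExecutor

[kubernetes]
namespace = airflow
in_cluster = True
worker_container_repository = apache/airflow
worker_container_tag = 2.6.3
delete_worker_pods = True
```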
You notice Airflow task pods created by Kubernetes Executor remain in Pending state and never run. What is the most likely cause?
Think about what causes pods to stay Pending in Kubernetes.
Pods stay Pending when the Kubernetes scheduler cannot place them on any node. The most common causes are insufficient free CPU or memory for the pod's resource requests, or node selectors, taints, and affinity rules that exclude every node. Running kubectl describe pod on a stuck pod shows a FailedScheduling event with the exact reason.
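One way to turn that diagnosis into code. The helper below is a hypothetical sketch, not an Airflow or Kubernetes API: it inspects simplified event dicts carrying the same reason/message information that kubectl describe pod prints under Events.

```python
def diagnose_pending(events: list[dict]) -> str:
    """Return a human-readable guess at why a pod is stuck Pending.

    `events` is a simplified list of {"reason": ..., "message": ...}
    dicts, mirroring the Events section of `kubectl describe pod`.
    """
    for ev in events:
        if ev["reason"] == "FailedScheduling":
            msg = ev["message"]
            if "Insufficient cpu" in msg or "Insufficient memory" in msg:
                return "cluster lacks free CPU/memory for the pod's requests"
            if "didn't match" in msg:
                return "node selector / affinity excludes every node"
            return f"scheduler could not place the pod: {msg}"
    return "no FailedScheduling event found; check image pulls and quotas"

events = [{"reason": "FailedScheduling",
           "message": "0/3 nodes are available: 3 Insufficient cpu."}]
print(diagnose_pending(events))
```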
Arrange the following steps in the correct order for how Airflow Kubernetes Executor dynamically scales task execution.
Think about the natural flow from scheduler to pod creation to execution.
The scheduler first detects that a task is ready to run; the executor then builds a pod manifest and submits it to the Kubernetes API; the Kubernetes scheduler places the pod on a node; and finally the task runs inside the pod, which is cleaned up after completion.
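The ordering above can be sketched as a toy pipeline. All function and log text here is illustrative, not an Airflow API; the point is only the fixed sequence of steps.

```python
def run_task_via_k8s(task_id: str) -> list[str]:
    """Toy walk-through of the KubernetesExecutor flow for one task."""
    log = []
    # 1. Scheduler detects the task is ready and hands it to the executor.
    log.append(f"scheduler: {task_id} is ready, handing to executor")
    # 2. Executor builds the pod manifest and submits it to the K8s API.
    manifest = {"kind": "Pod", "metadata": {"name": f"{task_id}-pod"}}
    log.append(f"executor: submitted manifest for {manifest['metadata']['name']}")
    # 3. The Kubernetes scheduler places the pod on a suitable node.
    log.append("kubernetes: scheduled pod onto a node")
    # 4. The task runs inside the pod; the pod exits when the task is done.
    log.append(f"pod: task {task_id} ran and exited 0")
    return log

for line in run_task_via_k8s("extract_orders"):
    print(line)
```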