Cluster Autoscaler watches the pods and nodes in a Kubernetes cluster. When does it decide to add new nodes?
Think about what causes pods to wait for resources.
Cluster Autoscaler adds nodes only when pods are stuck in the Pending state because no existing node has enough free CPU, memory, or other requested resources to schedule them.
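For example, a workload whose resource requests cannot fit on any current node leaves its pods Pending, which is what triggers a scale-up. A minimal sketch (the name and request values are illustrative, not from any real cluster):

```yaml
# Hypothetical Deployment whose per-pod CPU request exceeds the free
# capacity of every existing node. Its pods stay Pending, and Cluster
# Autoscaler reacts to the unschedulable pods by adding a node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-hungry            # example name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cpu-hungry
  template:
    metadata:
      labels:
        app: cpu-hungry
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sleep", "infinity"]
        resources:
          requests:
            cpu: "2"          # assumed to be more than any node has free
            memory: 1Gi
```

You can confirm the trigger with `kubectl describe pod`, which shows the scheduler's "Insufficient cpu" event on the Pending pods.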
What is the expected output of the command kubectl get deployment cluster-autoscaler -n kube-system -o jsonpath='{.status.availableReplicas}' if the Cluster Autoscaler is running with 1 replica?
Check the meaning of availableReplicas in deployment status.
The field availableReplicas reports how many pods of the deployment are ready and available. With Cluster Autoscaler running as a single healthy replica, the command prints 1.
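The jsonpath query reads this value out of the Deployment's status stanza, which looks roughly like this when the single replica is healthy (fields abridged):

```yaml
status:
  replicas: 1
  readyReplicas: 1
  availableReplicas: 1   # the value returned by the jsonpath query
```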
You want Cluster Autoscaler to leave specific nodes alone and never scale them down. How do you mark such a node?
Check which per-node annotation Cluster Autoscaler recognizes for scale-down.
Cluster Autoscaler has no flag that skips nodes by an arbitrary label. The supported mechanism is the node annotation cluster-autoscaler.kubernetes.io/scale-down-disabled="true", which excludes that node from scale-down.
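A node excluded this way carries the annotation in its metadata; a minimal sketch (the node name is hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-3   # hypothetical node name
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```

The annotation can also be applied in place with `kubectl annotate node worker-3 cluster-autoscaler.kubernetes.io/scale-down-disabled="true"`.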
Cluster Autoscaler is running but does not remove nodes even though some nodes have very low CPU and memory usage. What could be a reason?
Think about what prevents pods from being moved or deleted.
Pods protected by a restrictive PodDisruptionBudget cannot be evicted, so Cluster Autoscaler cannot safely drain and remove the nodes hosting them. Pods with local storage, pods not managed by a controller, and kube-system pods can block scale-down in the same way.
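For instance, a PodDisruptionBudget that allows zero disruptions pins its pods in place; a sketch (names are illustrative):

```yaml
# A PDB permitting no voluntary disruptions: no matching pod may be
# evicted, so the node running one can never be drained for scale-down.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-pdb            # example name
spec:
  maxUnavailable: 0       # eviction of any matching pod is blocked
  selector:
    matchLabels:
      app: db
```

Relaxing the budget (e.g. `maxUnavailable: 1`) lets the autoscaler evict one pod at a time and reclaim underused nodes.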
Put the steps in the correct order to enable Cluster Autoscaler on a new Kubernetes cluster.
Think about what must exist before deploying Cluster Autoscaler and configuring it.
First create the node group with your cloud provider, then configure the Cluster Autoscaler flags, deploy Cluster Autoscaler, and finally verify its logs to confirm it discovers the node group.
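The configuration step usually means setting the container arguments in the Cluster Autoscaler Deployment; an abridged sketch (the image tag, cloud provider, and node-group bounds are illustrative):

```yaml
# Abridged container spec from a Cluster Autoscaler Deployment.
# The version tag, provider, and node-group values below are examples.
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws                 # example provider
  - --nodes=1:10:my-node-group           # min:max:name, hypothetical group
  - --scale-down-utilization-threshold=0.5
```

After deploying, `kubectl logs -n kube-system deployment/cluster-autoscaler` shows whether the node group was registered and whether scale-up and scale-down decisions are being made.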