You have an Azure Kubernetes Service (AKS) cluster with two node pools: system and user. The system node pool is set to a fixed size of 3 nodes, and the user node pool has autoscaling enabled with a minimum of 1 and maximum of 5 nodes.
What happens when the workload on the user node pool increases beyond the capacity of 5 nodes?
Think about the maximum node count set for autoscaling and how AKS manages node pools independently.
Autoscaling respects the per-pool maximum node count. Once the user node pool reaches 5 nodes, the cluster autoscaler stops adding nodes, and new pods that cannot be scheduled remain Pending until resources free up, the maximum is raised, or the pool is scaled manually. The system pool is unaffected, because AKS manages and scales each node pool independently.
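As a sketch, the current limits can be inspected and raised with the Azure CLI (the resource group and cluster names below are placeholders):

```shell
# Inspect the autoscaler limits currently set on the user pool
az aks nodepool show \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --query "{min:minCount, max:maxCount, autoscale:enableAutoScaling}"

# Raise the ceiling if the workload legitimately needs more capacity
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 8
```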
You want to configure an AKS node pool named compute to autoscale between 2 and 6 nodes. Which Azure CLI command correctly sets this autoscaling configuration?
Remember that autoscaling is enabled or updated on existing node pools with specific flags.
The az aks nodepool update command with --enable-cluster-autoscaler, --min-count 2, and --max-count 6 configures autoscaling limits on an existing node pool. (If the autoscaler is already enabled on the pool, use --update-cluster-autoscaler instead to change the limits.)
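Spelled out in full, with placeholder resource group and cluster names:

```shell
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name compute \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 6
```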
You need to design an AKS cluster to run two types of workloads: latency-sensitive services and batch jobs. Latency-sensitive services require dedicated nodes with high availability, while batch jobs can run on cheaper, spot-priced nodes that can be preempted.
Which node pool architecture best fits this requirement?
Consider workload isolation and cost optimization strategies.
Separating workloads into dedicated node pools allows tuning VM types and scaling policies per workload. Spot VMs are suitable for batch jobs but not for latency-sensitive services that require stability.
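A minimal sketch of this two-pool layout, assuming placeholder resource group and cluster names; AKS automatically taints spot nodes (kubernetes.azure.com/scalesetpriority=spot:NoSchedule), so only batch pods carrying the matching toleration land on them:

```shell
# Dedicated pool for latency-sensitive services: regular-priority VMs,
# spread across availability zones for high availability
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name services \
  --node-count 3 \
  --zones 1 2 3

# Cheaper spot pool for preemptible batch jobs; -1 caps the price
# at the current on-demand rate
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name batch \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```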
You want to restrict access to the system node pool in your AKS cluster so that only cluster administrators can deploy pods there. Which approach enforces this security requirement?
Think about combining Azure AD roles with Kubernetes RBAC and node labels.
Azure AD RBAC controls who can perform cluster-level administrative actions, and Kubernetes RBAC restricts which identities may create pods at all. Scheduling itself is steered with node labels and taints: tainting the system pool keeps ordinary workloads off it, so only pods carrying the matching toleration, which RBAC limits to administrators, can land there. Together these layers enforce the deployment restriction.
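One common pattern is to taint the system pool (e.g. with --node-taints CriticalAddonsOnly=true:NoSchedule on az aks nodepool update) and let only admin-deployed pods tolerate it. A pod spec fragment for such a workload might look like this, using the kubernetes.azure.com/mode label AKS applies to system-mode nodes:

```yaml
# Fragment of an admin-approved pod spec targeting the system pool.
# Assumes the system pool carries the taint
# CriticalAddonsOnly=true:NoSchedule.
spec:
  nodeSelector:
    kubernetes.azure.com/mode: system
  tolerations:
    - key: CriticalAddonsOnly
      operator: Equal
      value: "true"
      effect: NoSchedule
```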
An AKS cluster has two node pools: np1 and np2. Both have autoscaling enabled with min 1 and max 3 nodes. The cluster runs two deployments: app1 scheduled only on np1 and app2 scheduled only on np2.
If app1 suddenly requires 10 pods and app2 requires 2 pods, what is the expected scaling behavior?
Remember that node pools scale independently and pods are scheduled only on their assigned node pools.
Each node pool's autoscaler respects its own max node count. np1 scales to its maximum of 3 nodes; any app1 pods that still do not fit stay Pending. np2 scales only as far as its 2 pods require (possibly staying at 1 node if both fit). Node pools do not share capacity, so np2's spare headroom cannot absorb app1's overflow.
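The pinning described above is typically done with a nodeSelector on the label AKS applies to every node, kubernetes.azure.com/agentpool=&lt;pool name&gt;. A deployment fragment for app1 might look like this (app2 would use the same selector with np2):

```yaml
# Deployment spec fragment pinning app1's pods to the np1 pool
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.azure.com/agentpool: np1
```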