
Node pools and scaling in Azure - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Node Pools and Scaling Master: get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00 remaining
Understanding Node Pool Scaling Behavior

You have an Azure Kubernetes Service (AKS) cluster with two node pools: system and user. The system node pool is set to a fixed size of 3 nodes, and the user node pool has autoscaling enabled with a minimum of 1 and maximum of 5 nodes.

What happens when the workload on the user node pool increases beyond the capacity of 5 nodes?

A. The <em>system</em> node pool automatically scales up to help with the <em>user</em> node pool workload.
B. The <em>user</em> node pool scales beyond 5 nodes to handle the workload automatically.
C. New pods scheduled to the <em>user</em> node pool cannot be placed once it reaches 5 nodes, leaving them in a Pending state.
D. AKS automatically creates a new node pool to handle the extra workload beyond 5 nodes.
💡 Hint

Think about the maximum node count set for autoscaling and how AKS manages node pools independently.
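As a hands-on sketch of what this scenario looks like from the command line (the names <code>myRG</code>, <code>myAKS</code>, and the <code>user</code> pool are illustrative placeholders, not values from the question):

```shell
# List pods stuck in Pending after the user pool hits its 5-node cap:
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# The events on a pending pod typically explain why the autoscaler
# could not add capacity (e.g. a "max node group size reached" message):
kubectl describe pod <pending-pod-name>

# Confirm the pool's autoscaler limits (placeholder resource names):
az aks nodepool show --resource-group myRG --cluster-name myAKS \
  --name user --query "{min:minCount,max:maxCount,autoscale:enableAutoScaling}"
```
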

Configuration
intermediate
2:00 remaining
Configuring Node Pool Autoscaling Limits

You want to configure an AKS node pool named compute to autoscale between 2 and 6 nodes. Which Azure CLI command correctly sets this autoscaling configuration?

A. <code>az aks nodepool create --resource-group myRG --cluster-name myAKS --name compute --enable-autoscale --min-nodes 2 --max-nodes 6</code>
B. <code>az aks nodepool scale --resource-group myRG --cluster-name myAKS --name compute --min-count 2 --max-count 6</code>
C. <code>az aks update --resource-group myRG --name myAKS --nodepool-name compute --autoscale true --min 2 --max 6</code>
D. <code>az aks nodepool update --resource-group myRG --cluster-name myAKS --name compute --enable-cluster-autoscaler --min-count 2 --max-count 6</code>
💡 Hint

Remember that autoscaling is enabled or updated on existing node pools with specific flags.
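Before changing limits on a real pool, it helps to inspect its current autoscaling state. A sketch, again assuming the hypothetical <code>myRG</code>/<code>myAKS</code> names (flag availability may vary by Azure CLI version):

```shell
# Show current autoscaler settings on the "compute" pool:
az aks nodepool show --resource-group myRG --cluster-name myAKS --name compute \
  --query "{autoscale:enableAutoScaling,min:minCount,max:maxCount}" --output table

# If autoscaling is already enabled and only the bounds need to change,
# the flag is --update-cluster-autoscaler rather than --enable-cluster-autoscaler:
az aks nodepool update --resource-group myRG --cluster-name myAKS --name compute \
  --update-cluster-autoscaler --min-count 2 --max-count 6
```
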

Architecture
advanced
2:30 remaining
Designing Node Pools for Mixed Workloads

You need to design an AKS cluster to run two types of workloads: latency-sensitive services and batch jobs. Latency-sensitive services require dedicated nodes with high availability, while batch jobs can run on cheaper, spot-priced nodes that can be preempted.

Which node pool architecture best fits this requirement?

A. Create two node pools: one with standard VMs for latency-sensitive services and one with spot VMs for batch jobs, each with appropriate scaling policies.
B. Use only spot VM node pools and rely on pod priority to manage latency-sensitive services.
C. Create a single node pool with mixed VM sizes and enable autoscaling to handle both workloads.
D. Create multiple node pools with spot VMs only and use taints to separate workloads.
💡 Hint

Consider workload isolation and cost optimization strategies.
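A sketch of the two-pool pattern the hint points toward (pool names, VM size, and node counts are illustrative assumptions, not requirements from the question):

```shell
# Standard-VM pool for latency-sensitive services (illustrative sizing):
az aks nodepool add --resource-group myRG --cluster-name myAKS \
  --name svcpool --node-vm-size Standard_D4s_v3 \
  --enable-cluster-autoscaler --min-count 3 --max-count 6

# Spot-VM pool for preemptible batch jobs. AKS taints spot nodes with
# kubernetes.azure.com/scalesetpriority=spot:NoSchedule, so only pods
# that tolerate that taint (the batch jobs) will land there:
az aks nodepool add --resource-group myRG --cluster-name myAKS \
  --name batchpool --priority Spot --eviction-policy Delete --spot-max-price -1 \
  --enable-cluster-autoscaler --min-count 1 --max-count 5
```

Separating the pools keeps workload isolation (via the spot taint) and cost optimization (spot pricing for preemptible work) independent of each other.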

Security
advanced
2:30 remaining
Securing Node Pools with Azure AD Integration

You want to restrict access to the system node pool in your AKS cluster so that only cluster administrators can deploy pods there. Which approach enforces this security requirement?

A. Use Azure AD RBAC to assign the <code>Azure Kubernetes Service RBAC Cluster Admin</code> role only to administrators and use node pool labels with Kubernetes RBAC to restrict pod deployment.
B. Enable network policies on the <code>system</code> node pool to block all traffic except from admin IPs.
C. Use Azure Policy to deny pod creation on the <code>system</code> node pool for non-admin users.
D. Configure pod security policies to allow only admin users to schedule pods on the <code>system</code> node pool.
💡 Hint

Think about combining Azure AD roles with Kubernetes RBAC and node labels.
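One way to sketch that combination (the role name is the built-in Azure role; the group object ID and pool name are placeholders, and <code>--node-taints</code> on <code>az aks nodepool update</code> depends on your CLI version):

```shell
# Grant cluster-admin rights only to the admin group (placeholder object ID):
AKS_ID=$(az aks show --resource-group myRG --name myAKS --query id --output tsv)
az role assignment create \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --assignee-object-id <admin-group-object-id> \
  --scope "$AKS_ID"

# Keep ordinary workloads off the system pool with the conventional taint;
# only pods that explicitly tolerate it will be scheduled there:
az aks nodepool update --resource-group myRG --cluster-name myAKS \
  --name systempool --node-taints CriticalAddonsOnly=true:NoSchedule
```
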

Service Behavior
expert
3:00 remaining
Behavior of Node Pool Scaling with Multiple Workloads

An AKS cluster has two node pools: np1 and np2. Both have autoscaling enabled with min 1 and max 3 nodes. The cluster runs two deployments: app1 scheduled only on np1 and app2 scheduled only on np2.

If app1 suddenly requires 10 pods and app2 requires 2 pods, what is the expected scaling behavior?

A. <code>np2</code> scales up to 3 nodes to help <code>np1</code> handle the extra pods from <code>app1</code>.
B. <code>np1</code> scales up to 3 nodes but cannot satisfy all 10 pods, causing some pods to remain pending; <code>np2</code> scales to 2 nodes to handle <code>app2</code> pods.
C. Both <code>np1</code> and <code>np2</code> scale to 3 nodes each to balance the load between deployments.
D. <code>np1</code> scales beyond 3 nodes to handle all 10 pods; <code>np2</code> remains at 1 node since <code>app2</code> pods fit there.
💡 Hint

Remember that node pools scale independently and pods are scheduled only on their assigned node pools.
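To observe this independence on a live cluster (a sketch assuming AKS's <code>agentpool</code> node label and an <code>app=app1</code> pod label, both of which are illustrative):

```shell
# Show nodes per pool; AKS labels each node with agentpool=<pool-name>:
kubectl get nodes -L agentpool

# Once np1 tops out at its 3-node max, overflow pods from app1 stay Pending
# (assumes the deployment labels its pods app=app1):
kubectl get pods -l app=app1 --field-selector=status.phase=Pending

# np2 scales only for app2's own demand; np1's backlog never spills over,
# because each pod's node selector pins it to its assigned pool.
```
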