GCP · Cloud · ~20 mins

Node pools and auto scaling in GCP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Node Pools and Auto Scaling Master: get all challenges correct to earn this badge!
Architecture · intermediate
How does a node pool affect cluster scaling in GKE?

You have a Google Kubernetes Engine (GKE) cluster with two node pools: one for general workloads and one for high-memory workloads. You want to enable auto scaling to adjust the number of nodes based on demand.

Which statement best describes how node pools influence auto scaling behavior?

A. Each node pool scales independently based on the resource requests of pods scheduled to it.
B. Auto scaling adjusts the total number of nodes in the cluster without considering node pools.
C. Only the default node pool can be auto scaled; additional node pools remain fixed in size.
D. Node pools must have the same machine type to enable auto scaling across the cluster.
💡 Hint

Think about how pods are assigned to node pools and how scaling reacts to workload demands.
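Independent per-pool scaling can be sketched with a toy model (a hypothetical helper, not GKE's actual algorithm): each pool's autoscaler reacts only to the pending pods that can be scheduled onto that pool, so pressure on one pool never scales another.

```python
def scale_decisions(pending_pods, pools):
    """Toy per-pool autoscaler: for each pool, count CPU demand from pending
    pods targeting that pool and return how many extra nodes it needs.
    Simplification: existing nodes are treated as empty."""
    decisions = {name: 0 for name in pools}
    for name, pool in pools.items():
        demand = sum(p['cpu'] for p in pending_pods if p['pool'] == name)
        spare = pool['nodes'] * pool['node_cpu']
        if demand > spare:
            shortfall = demand - spare
            decisions[name] = -(-shortfall // pool['node_cpu'])  # ceiling division
    return decisions

pools = {'general': {'node_cpu': 4, 'nodes': 1},
         'high-mem': {'node_cpu': 8, 'nodes': 1}}
pending = [{'cpu': 6, 'pool': 'general'}, {'cpu': 2, 'pool': 'high-mem'}]
print(scale_decisions(pending, pools))  # → {'general': 1, 'high-mem': 0}
```

Only the pool whose pending pods exceed its capacity grows; the other stays put, which is the per-pool behavior the question is probing.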

Configuration · intermediate
Identify the correct YAML snippet to enable auto scaling on a GKE node pool

You want to enable auto scaling on a GKE node pool named high-mem-pool with a minimum of 2 nodes and a maximum of 5 nodes.

Which YAML snippet correctly configures this?

A.
nodePools:
- name: high-mem-pool
  autoscaling:
    enabled: true
    minCount: 2
    maxCount: 5
B.
nodePools:
- name: high-mem-pool
  autoscaling:
    enabled: true
    minNodeCount: 2
    maxNodeCount: 5
C.
nodePools:
- name: high-mem-pool
  autoscaling:
    enabled: false
    minNodeCount: 2
    maxNodeCount: 5
D.
nodePools:
- name: high-mem-pool
  autoscaling:
    minNodeCount: 2
    maxNodeCount: 5
💡 Hint

Check the exact field names required to enable autoscaling in GKE node pool configuration.
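A minimal sketch of what such a field-name check looks like; the helper name is hypothetical, but the field names it requires (enabled, minNodeCount, maxNodeCount) match GKE's NodePoolAutoscaling API:

```python
# Required fields of a GKE node-pool autoscaling stanza (NodePoolAutoscaling API).
REQUIRED = {'enabled', 'minNodeCount', 'maxNodeCount'}

def autoscaling_valid(stanza):
    """Return True only if the stanza names the exact required fields,
    actually turns autoscaling on, and has a sane min/max range."""
    if not REQUIRED <= stanza.keys():
        return False
    return (stanza['enabled'] is True
            and 0 <= stanza['minNodeCount'] <= stanza['maxNodeCount'])

# Correct field names, autoscaling on:
print(autoscaling_valid({'enabled': True, 'minNodeCount': 2, 'maxNodeCount': 5}))  # → True
# Wrong field names (minCount/maxCount) fail the check:
print(autoscaling_valid({'enabled': True, 'minCount': 2, 'maxCount': 5}))          # → False
# enabled: false fails even with correct field names:
print(autoscaling_valid({'enabled': False, 'minNodeCount': 2, 'maxNodeCount': 5}))  # → False
```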

Service behavior · advanced
What happens when a pod requests more resources than any node in the node pool can provide?

You have a GKE cluster with a node pool of machines each having 4 CPU cores and 16 GB RAM. A pod requests 8 CPU cores and 32 GB RAM.

What will be the behavior of the cluster autoscaler and scheduler?

A. The pod will be split across multiple nodes to satisfy resource requests.
B. The autoscaler will add more nodes until the pod fits; the scheduler places the pod once enough nodes exist.
C. The scheduler will not place the pod; the autoscaler will not add nodes because no node can satisfy the request.
D. The scheduler will place the pod on the node with the most free resources, ignoring resource limits.
💡 Hint

Consider how Kubernetes schedules pods and how the autoscaler reacts to unschedulable pods.
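The key constraint can be modeled in a few lines (a simplified sketch, not the real scheduler): a single pod's resource requests can never be split across nodes, so scaling up a pool whose node shape is too small for the pod never helps.

```python
def fits(pod, node_shape):
    """A pod fits a node only if the node can hold its entire request."""
    return pod['cpu'] <= node_shape['cpu'] and pod['mem_gb'] <= node_shape['mem_gb']

def scale_up_helps(pod, node_shape):
    """The autoscaler adds nodes only if one fresh node of this pool's shape
    would make the pending pod schedulable; requests cannot span nodes."""
    return fits(pod, node_shape)

pool_shape = {'cpu': 4, 'mem_gb': 16}   # node shape from the question
big_pod = {'cpu': 8, 'mem_gb': 32}      # pod request from the question

print(fits(big_pod, pool_shape))         # → False: pod stays Pending
print(scale_up_helps(big_pod, pool_shape))  # → False: no scale-up is triggered
```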

Security · advanced
Which node pool configuration improves security by isolating sensitive workloads?

You want to run sensitive workloads in a GKE cluster with enhanced security. Which node pool configuration best isolates these workloads?

A. Create a dedicated node pool with Shielded VM enabled on its nodes and use node taints to isolate pods.
B. Use the default node pool and label sensitive pods with a security label only.
C. Run all workloads on the same node pool but use network policies to isolate traffic.
D. Create a node pool with larger machines and run sensitive and non-sensitive pods together.
💡 Hint

Think about physical and logical isolation methods in Kubernetes node pools.
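Taint-based isolation can be sketched as a matching rule (a minimal model of NoSchedule semantics, ignoring effects like PreferNoSchedule): a pod lands on a tainted node only if it tolerates every NoSchedule taint on that node.

```python
def tolerates(pod_tolerations, node_taints):
    """A pod is schedulable on a node only if it tolerates every
    NoSchedule taint; taints are (key, value, effect) triples."""
    return all(t in pod_tolerations for t in node_taints)

# Dedicated pool for sensitive workloads, tainted to repel ordinary pods:
sensitive_pool_taints = [('workload', 'sensitive', 'NoSchedule')]

sensitive_pod = [('workload', 'sensitive', 'NoSchedule')]  # carries the toleration
ordinary_pod = []                                          # no tolerations

print(tolerates(sensitive_pod, sensitive_pool_taints))  # → True: allowed in
print(tolerates(ordinary_pod, sensitive_pool_taints))   # → False: kept out
```

Labels alone (option B) attract nothing away; the taint is what actively keeps untolerating pods off the dedicated nodes.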

🧠 Conceptual · expert
Why might enabling cluster autoscaler on multiple node pools cause unexpected scaling behavior?

You enabled autoscaling on three node pools in a GKE cluster. You notice that sometimes one node pool scales up aggressively while others remain at minimum size, even though workloads are balanced.

What is the most likely cause?

A. The cluster autoscaler requires all node pools to have identical resource limits to scale evenly.
B. The autoscaler prioritizes scaling node pools with cheaper machine types first, causing uneven scaling.
C. The autoscaler cannot scale multiple node pools simultaneously and picks one at random to scale.
D. Pods have node selectors or affinity rules that restrict them to specific node pools, causing the autoscaler to scale only those pools.
💡 Hint

Consider how pod scheduling constraints affect node pool scaling.
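The effect of scheduling constraints can be shown with a small sketch (a hypothetical helper, not GKE's implementation): even when total demand looks balanced, nodeSelector pins every pending pod to one pool, so only that pool's autoscaler sees pressure.

```python
def pools_under_pressure(pending_pods):
    """Group pending CPU demand by the pool each pod's nodeSelector pins it to."""
    demand = {}
    for pod in pending_pods:
        pool = pod.get('nodeSelector', {}).get('pool', 'any')
        demand[pool] = demand.get(pool, 0) + pod['cpu']
    return demand

# Three pools exist in the cluster, but every pending pod selects pool-a:
pending = [{'cpu': 2, 'nodeSelector': {'pool': 'pool-a'}} for _ in range(4)]
print(pools_under_pressure(pending))  # → {'pool-a': 8}: only pool-a scales up
```

The autoscaler only adds nodes where pending pods are allowed to schedule, so the constrained pool scales aggressively while the unconstrained pools sit at their minimum size.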