You have a Google Kubernetes Engine (GKE) cluster with two node pools: one for general workloads and one for high-memory workloads. You want to enable autoscaling to adjust the number of nodes based on demand.
Which statement best describes how node pools influence autoscaling behavior?
Think about how pods are assigned to node pools and how scaling reacts to workload demands.
In GKE, each node pool can be configured with its own autoscaling settings. The cluster autoscaler adjusts the size of each node pool independently, based on the resource requests of pods scheduled to that pool.
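As a sketch of what "independent per-pool settings" means, here is the cluster's node pool list in API-style YAML (pool names and counts are illustrative; the field names mirror the GKE API's `autoscaling` block):

```yaml
# Each pool carries its own autoscaling block, so the cluster
# autoscaler sizes general-pool and high-mem-pool independently.
nodePools:
- name: general-pool
  autoscaling:
    enabled: true
    minNodeCount: 1      # illustrative bounds
    maxNodeCount: 10
- name: high-mem-pool
  autoscaling:
    enabled: true
    minNodeCount: 2
    maxNodeCount: 5
```

A burst of general-workload pods would grow only `general-pool`; `high-mem-pool` stays within its own bounds.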
You want to enable autoscaling on a GKE node pool named high-mem-pool with a minimum of 2 nodes and a maximum of 5 nodes.
Which YAML snippet correctly configures this?
Check the exact field names required to enable autoscaling in GKE node pool configuration.
The autoscaling block must include enabled: true along with minNodeCount and maxNodeCount to properly enable autoscaling on a node pool.
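A minimal snippet matching that description, using the field names from the GKE API's node pool autoscaling configuration (the pool name comes from the question):

```yaml
name: high-mem-pool
autoscaling:
  enabled: true      # autoscaling is off unless explicitly enabled
  minNodeCount: 2    # pool never shrinks below 2 nodes
  maxNodeCount: 5    # pool never grows beyond 5 nodes
```

Omitting `enabled: true`, or misspelling `minNodeCount`/`maxNodeCount`, leaves the pool at a fixed size.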
You have a GKE cluster with a node pool whose machines each have 4 CPU cores and 16 GB of RAM. A pod requests 8 CPU cores and 32 GB of RAM.
What will be the behavior of the cluster autoscaler and scheduler?
Consider how Kubernetes schedules pods and how autoscaler reacts to unschedulable pods.
The Kubernetes scheduler cannot place a pod unless a single node has enough allocatable resources for it; requests are not split across nodes. The cluster autoscaler adds a node only when doing so would make a pending pod schedulable on that pool's machine type. Because the pod requests more than any node in the pool can provide (8 cores vs. 4, 32 GB vs. 16 GB), it stays Pending and the autoscaler does not scale up.
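A pod spec that reproduces this situation might look like the following (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oversized-pod      # illustrative name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        cpu: "8"           # exceeds the 4 cores any node offers
        memory: 32Gi       # exceeds the 16 GB any node offers
```

This pod stays in Pending with a FailedScheduling event, and the autoscaler reports that scaling up would not help, since even a fresh node of this pool's machine type cannot fit the requests.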
You want to run sensitive workloads in a GKE cluster with enhanced security. Which node pool configuration best isolates these workloads?
Think about physical and logical isolation methods in Kubernetes node pools.
Creating a dedicated node pool with Shielded VMs and using taints ensures sensitive workloads run on isolated, secure nodes. Labels alone or network policies do not isolate nodes physically.
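A sketch of such a dedicated pool in API-style YAML, assuming a hypothetical pool name and taint key (the `taints` and `shieldedInstanceConfig` fields mirror the GKE API's node configuration):

```yaml
name: sensitive-pool       # hypothetical dedicated pool
config:
  taints:
  - key: workload          # illustrative taint key/value
    value: sensitive
    effect: NO_SCHEDULE    # repels pods without a matching toleration
  shieldedInstanceConfig:
    enableSecureBoot: true
    enableIntegrityMonitoring: true
```

Sensitive pods then opt in with a matching toleration in their spec:

```yaml
tolerations:
- key: workload
  operator: Equal
  value: sensitive
  effect: NoSchedule
```

The taint keeps ordinary workloads off the pool, while Shielded VM settings harden the nodes themselves; a toleration alone does not pin pods to the pool, so a node selector or affinity rule is typically added as well.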
You enabled autoscaling on three node pools in a GKE cluster. You notice that sometimes one node pool scales up aggressively while others remain at minimum size, even though workloads are balanced.
What is the most likely cause?
Consider how pod scheduling constraints affect node pool scaling.
If pods have node selectors or affinity rules, they can only be scheduled on certain node pools. The autoscaler scales only the node pools that can host the pods, leading to uneven scaling.
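For example, a pod pinned to one pool via the GKE-provided node pool label would look like this (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-hungry      # illustrative name
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: high-mem-pool  # only this pool qualifies
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        memory: 12Gi
```

If most pending pods carry selectors like this, the autoscaler can satisfy them only by growing high-mem-pool, so that pool scales up aggressively while the others sit at their minimum node counts.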