GCP · cloud · ~20 mins

Serverless vs GKE decision in GCP - Practice Questions

Challenge - 5 Problems
Architecture · intermediate
Choosing Serverless for Event-Driven Workloads

You have a workload that processes user uploads and triggers image resizing automatically. The workload is unpredictable and can spike suddenly. Which option best explains why Serverless (Cloud Functions) is a better choice than GKE for this scenario?

A. Serverless automatically scales to zero when idle and scales up instantly on demand, reducing cost and management overhead.
B. GKE provides automatic scaling to zero and instant startup, making it equally cost-effective for unpredictable workloads.
C. Serverless requires manual scaling configuration, which is complex for unpredictable workloads.
D. GKE is cheaper for unpredictable workloads because it uses fixed resources regardless of demand.
💡 Hint

Think about how each service handles scaling and cost when there is no traffic.
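As a concrete illustration of the scale-to-zero behavior, an event-driven function wired to an upload bucket might be deployed like the sketch below (function, bucket, and entry-point names are hypothetical); the platform creates instances on demand and scales back to zero when idle, with no cluster to manage:

```shell
# Hypothetical event-driven image resizer: runs once per object
# uploaded to the bucket. Instances scale from zero automatically,
# so you pay nothing while there are no uploads.
gcloud functions deploy resize-image \
  --runtime=python312 \
  --trigger-bucket=user-uploads-bucket \
  --entry-point=resize \
  --memory=512MB
```

A GKE cluster serving the same workload would keep at least one node (and its cost) running even with zero traffic.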

Service behavior · intermediate
GKE Pod Behavior on Node Failure

In a GKE cluster, if a node running several pods suddenly fails, what happens to those pods?

A. The pods remain stuck on the failed node until manual intervention restarts them.
B. The pods are automatically rescheduled on other healthy nodes by the Kubernetes control plane.
C. The pods are deleted permanently and cannot be recovered automatically.
D. The pods continue running on the failed node until the node recovers.
💡 Hint

Consider how Kubernetes manages pod availability and node health.
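You can observe this rescheduling yourself with a minimal sketch (the deployment and node names are hypothetical). The Deployment's ReplicaSet declares a desired replica count, so when a node is lost the control plane recreates the missing pods on healthy nodes:

```shell
# Create a Deployment; the control plane maintains 3 replicas.
kubectl create deployment web --image=nginx --replicas=3

# Simulate losing a node: evict all its pods.
kubectl drain gke-node-1 --ignore-daemonsets --delete-emptydir-data

# The evicted pods are recreated on the remaining healthy nodes.
kubectl get pods -o wide
```

Note this guarantee applies to pods managed by a controller (Deployment, StatefulSet, etc.); a bare pod with no controller is not recreated.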

Security · advanced
Security Implications of Serverless vs GKE

Which security characteristic is a key advantage of using Serverless (Cloud Run or Cloud Functions) compared to managing your own GKE cluster?

A. Serverless requires manual management of firewall rules and OS updates, increasing security risks.
B. GKE automatically patches the OS and container runtimes without user intervention, making it more secure than Serverless.
C. Serverless abstracts away the underlying infrastructure, reducing the attack surface and responsibility for patching OS and runtime vulnerabilities.
D. GKE clusters have no security risks because they run isolated containers.
💡 Hint

Think about who manages the infrastructure and patching in each service.
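The shared-responsibility difference shows up directly in the tooling. A hedged sketch (cluster and node-pool names are hypothetical): on GKE, rolling out OS and runtime patches to your nodes is your job unless you delegate it, while Cloud Run and Cloud Functions expose no node OS for you to patch at all.

```shell
# GKE: picking up node OS/runtime patches means upgrading node pools
# yourself, or enrolling the cluster in a release channel so Google
# schedules the upgrades for you.
gcloud container clusters upgrade my-cluster --node-pool=default-pool

# Cloud Run / Cloud Functions: there is no equivalent step. Google
# patches the underlying infrastructure, shrinking your attack surface.
```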

Best practice · advanced
Cost Optimization Strategy for GKE Clusters

You run a GKE cluster with steady but low traffic. Which strategy best reduces cost without sacrificing availability?

A. Use node auto-provisioning with preemptible VMs and set a minimum node count to maintain availability.
B. Keep a fixed large number of nodes running 24/7 to avoid scaling delays.
C. Disable autoscaling and manually add nodes only during peak hours.
D. Use only standard VMs with no autoscaling to ensure stability.
💡 Hint

Consider how to balance cost savings with availability using GKE features.
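The preemptible-plus-autoscaling strategy can be sketched as a dedicated node pool (cluster name, pool name, and node bounds are illustrative, not prescriptive):

```shell
# Add a cost-optimized pool of preemptible VMs with autoscaling.
# --min-nodes=1 keeps baseline capacity so availability is preserved
# even at low traffic; --max-nodes caps spend during spikes.
gcloud container node-pools create cheap-pool \
  --cluster=my-cluster \
  --preemptible \
  --enable-autoscaling --min-nodes=1 --max-nodes=5
```

Preemptible VMs cost substantially less but can be reclaimed by Google at any time, which is acceptable here because Kubernetes reschedules evicted pods onto remaining nodes.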

Conceptual · expert
Comparing Cold Start Impact on Serverless and GKE

Which statement best describes the impact of cold starts on Serverless platforms compared to GKE-managed containers?

A. Cold starts affect GKE and Serverless equally because both use container images.
B. GKE containers always have cold start delays because nodes must boot before pods start.
C. Serverless platforms never have cold starts because they keep all functions warm at all times.
D. Serverless functions may experience cold start latency when scaling from zero, while GKE containers typically run continuously, avoiding cold starts.
💡 Hint

Think about how each platform handles scaling from zero instances.
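Cold starts are a trade-off you can tune rather than an absolute: on Cloud Run, for example, keeping a warm instance avoids the scale-from-zero penalty at the cost of paying for idle capacity (service and image names below are hypothetical):

```shell
# --min-instances=0 (the default): scale to zero, cheapest,
# but the first request after idle pays cold-start latency.
# --min-instances=1: one always-warm instance removes that latency.
gcloud run deploy api --image=gcr.io/my-project/api --min-instances=1
```

GKE sidesteps cold starts differently: pods stay running as long as the Deployment exists, which is why long-running containers there don't exhibit the scale-from-zero delay.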