Concurrency and Scaling in GCP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Cloud Concurrency & Scaling Master
Get all challenges correct to earn this badge! Test your skills under time pressure!
Problem 1 · Service Behavior · Intermediate · 2:00 time limit
How does Google Cloud Run handle concurrency by default?

Google Cloud Run allows you to configure concurrency for your container instances. What is the default concurrency setting for Cloud Run services?

A. Each container instance can handle up to 80 concurrent requests by default.
B. Each container instance handles exactly 1 request at a time (concurrency = 1).
C. Each container instance can handle unlimited concurrent requests by default.
D. Each container instance handles 10 concurrent requests by default.
💡 Hint

Think about how Cloud Run optimizes resource use by handling multiple requests per container.
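Whatever the default turns out to be, concurrency on Cloud Run is a per-revision setting you can override at deploy time and read back afterwards. A minimal sketch, assuming an authenticated gcloud session; the service name, image, and region are placeholders:

```shell
# Deploy with an explicit per-instance concurrency cap; omitting
# --concurrency keeps whatever default Cloud Run applies.
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-image \
  --concurrency=40 \
  --region=us-central1

# Read back the concurrency setting of the deployed service.
gcloud run services describe my-service \
  --region=us-central1 \
  --format="value(spec.template.spec.containerConcurrency)"
```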

Problem 2 · Architecture · Intermediate · 2:00 time limit
What happens when a Google App Engine Standard environment instance reaches its maximum concurrent requests?

In Google App Engine Standard environment, each instance can handle a limited number of concurrent requests. What does App Engine do when an instance reaches this limit?

A. App Engine queues the extra requests until the instance can handle them.
B. App Engine automatically creates new instances to handle the extra requests.
C. App Engine rejects the extra requests with an error response.
D. App Engine throttles the requests by slowing down the instance.
💡 Hint

Think about how App Engine scales to handle more traffic.
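The per-instance concurrency limit the question refers to is configured in `app.yaml` under `automatic_scaling`. A sketch, assuming a Python 3 Standard environment runtime; the field values are illustrative, not recommendations:

```shell
# Write a minimal app.yaml that tunes automatic scaling.
# max_concurrent_requests bounds how many requests one instance
# accepts at a time; max_instances caps how far scaling can go.
cat > app.yaml <<'EOF'
runtime: python312
automatic_scaling:
  max_concurrent_requests: 10
  max_instances: 20
EOF

# Deploy the configuration (requires an authenticated gcloud session).
gcloud app deploy app.yaml
```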

Problem 3 · Security · Advanced · 2:00 time limit
Which practice improves security when scaling Google Kubernetes Engine (GKE) clusters under heavy load?

When your GKE cluster scales up to handle more pods, which practice helps maintain security effectively?

A. Allow all pods to run with root privileges to avoid permission issues during scaling.
B. Use a single node pool with wide permissions for all workloads to simplify management.
C. Disable network policies to improve pod communication speed during scaling.
D. Use Pod Security Policies or Pod Security Admission to restrict pod permissions consistently.
💡 Hint

Think about how to keep pods secure even as new ones are added automatically.
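For context on the mechanism the hint points at: Pod Security Admission (which replaced PodSecurityPolicy in Kubernetes 1.25+) enforces a security standard at the namespace level, so it applies uniformly to pods the autoscaler creates later. A sketch; the namespace name is a placeholder:

```shell
# Enforce the "restricted" Pod Security Standard on a namespace.
# Every pod scheduled there -- including pods added automatically
# during scale-up -- is validated against the same policy.
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted

# Verify the labels took effect.
kubectl get namespace production --show-labels
```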

Problem 4 · Configuration · Advanced · 2:00 time limit
How do you configure autoscaling in a Google Compute Engine managed instance group (MIG) based on CPU usage?

You want your MIG to add or remove VM instances automatically based on CPU load. Which configuration snippet correctly sets autoscaling to target 60% average CPU utilization?

A. gcloud compute instance-groups managed set-autoscaling my-group --max-num-replicas=10 --min-num-replicas=1 --target-cpu-utilization=0.6
B. gcloud compute instance-groups managed set-autoscaling my-group --max-num-replicas=10 --min-num-replicas=1 --target-cpu-utilization=60
C. gcloud compute instance-groups managed set-autoscaling my-group --max-num-replicas=10 --min-num-replicas=1 --cpu-utilization-target=60
D. gcloud compute instance-groups managed set-autoscaling my-group --max-num-replicas=10 --min-num-replicas=1 --cpu-threshold=0.6
💡 Hint

Check the correct flag name and value format for CPU utilization target.
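As the hint suggests, gcloud itself documents the exact flag spellings and value formats, and the configured policy can be inspected after the fact without re-reading your command history. A sketch; the group name and zone are placeholders:

```shell
# The built-in help lists the flags accepted by set-autoscaling,
# including the expected range for the CPU utilization target.
gcloud compute instance-groups managed set-autoscaling --help

# List autoscalers in a zone to inspect the policy that is
# actually in effect for a managed instance group.
gcloud compute autoscalers list --zones=us-central1-a
```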

Problem 5 · 🧠 Conceptual · Expert · 2:00 time limit
Why is horizontal scaling often preferred over vertical scaling in cloud environments?

Consider a web application running in the cloud. Why do architects usually prefer horizontal scaling (adding more machines) instead of vertical scaling (adding more power to one machine)?

A. Vertical scaling allows unlimited resource increase without downtime.
B. Vertical scaling is cheaper and faster to implement but less reliable.
C. Horizontal scaling provides better fault tolerance and can handle more traffic by distributing load.
D. Horizontal scaling requires specialized hardware, making it less flexible.
💡 Hint

Think about what happens if one machine fails in each scaling approach.