You configure auto scaling rules for an Azure App Service based on CPU usage. The rule states: "Scale out by 1 instance when average CPU usage is above 70% for 5 minutes." What happens when the CPU usage spikes to 75% for 6 minutes?
Think about the condition duration and threshold in the scaling rule.
Autoscale rules fire only after the condition has held for the specified duration. Here, 75% exceeds the 70% threshold and the spike lasts 6 minutes, which is longer than the 5-minute window, so the rule fires and App Service scales out by one instance.
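The duration check can be sketched as a trailing-window average. This is a minimal simulation of the evaluation logic, not the Azure SDK: `should_scale_out` and the one-sample-per-minute convention are assumptions for illustration.

```python
def should_scale_out(cpu_samples, threshold=70, duration=5):
    """Return True if the average CPU over the trailing window exceeds the threshold.

    cpu_samples: one average-CPU reading per minute, oldest first.
    Hypothetical helper mimicking the rule "average CPU > 70% for 5 minutes".
    """
    if len(cpu_samples) < duration:
        # Not enough history yet; the condition cannot have held for the full window.
        return False
    window = cpu_samples[-duration:]
    return sum(window) / len(window) > threshold


# A 6-minute spike at 75%: the trailing 5-minute average is 75 > 70, so the rule fires.
print(should_scale_out([75, 75, 75, 75, 75, 75]))  # True
# Only 4 minutes above threshold: the window has not been satisfied yet.
print(should_scale_out([75, 75, 75, 75]))  # False
```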
You want to auto scale your Azure App Service when the HTTP queue length exceeds 100 requests for 10 minutes. Which JSON snippet correctly defines this rule?
Check the metric name, operator, threshold, and duration carefully.
The correct snippet sets the metric to HTTP queue length, the operator to greater-than, the threshold to 100, and the time window to 10 minutes; the scale action increases the instance count by 1 with a cooldown of 5 minutes.
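A sketch of such a rule in an Azure Monitor autoscale profile follows. The overall `metricTrigger`/`scaleAction` shape matches the autoscale settings schema, but treat the metric name and the resource URI placeholder as assumptions to verify against your own App Service plan.

```json
{
  "metricTrigger": {
    "metricName": "HttpQueueLength",
    "metricResourceUri": "<resource ID of your App Service plan>",
    "timeGrain": "PT1M",
    "statistic": "Average",
    "timeWindow": "PT10M",
    "timeAggregation": "Average",
    "operator": "GreaterThan",
    "threshold": 100
  },
  "scaleAction": {
    "direction": "Increase",
    "type": "ChangeCount",
    "value": "1",
    "cooldown": "PT5M"
  }
}
```

The durations use ISO 8601 notation: `PT10M` is the 10-minute evaluation window and `PT5M` is the 5-minute cooldown.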
You want to scale out your Azure App Service without causing downtime or dropped requests. Which architectural approach ensures this?
Think about how traffic is managed during scaling.
Azure App Service uses a load balancer that routes traffic only to healthy instances, allowing new instances to come online without downtime.
If your Azure App Service auto scaling triggers scale-out and scale-in actions very frequently, what security risk might this cause?
Consider what happens when new instances are created rapidly.
Frequent scaling can spin up many instances in a short time, raising the chance that some are misconfigured or not fully patched, which expands the attack surface.
You notice your Azure App Service scales out rapidly multiple times in a short period, causing instability. Which strategy best prevents this 'scale-out storm'?
Think about how to control the frequency of scaling actions.
Cooldown periods prevent rapid repeated scaling by enforcing a wait time after each scale action, giving metrics time to reflect the new instance count before the next evaluation and so stabilizing the system.
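The cooldown mechanism can be sketched as a timestamp check before each scale action. This is a minimal simulation of the behavior described above; the `Autoscaler` class and minute-based clock are illustrative assumptions, not an Azure API.

```python
class Autoscaler:
    """Toy autoscaler that suppresses scale-outs during a cooldown window."""

    def __init__(self, cooldown_minutes=5):
        self.cooldown = cooldown_minutes
        self.last_action = None  # minute at which the last scale action ran
        self.instances = 1

    def maybe_scale_out(self, now):
        """Scale out by one instance unless still inside the cooldown window.

        now: current time in minutes. Returns True if a scale-out happened.
        """
        if self.last_action is not None and now - self.last_action < self.cooldown:
            return False  # suppressed: still cooling down from the last action
        self.instances += 1
        self.last_action = now
        return True


scaler = Autoscaler(cooldown_minutes=5)
print(scaler.maybe_scale_out(0))  # True  - first trigger scales out
print(scaler.maybe_scale_out(2))  # False - only 2 minutes since last action
print(scaler.maybe_scale_out(5))  # True  - cooldown elapsed, scales out again
print(scaler.instances)           # 3
```

Without the cooldown check, the same three triggers would each add an instance, which is exactly the scale-out storm the question describes.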