Container Apps scaling rules in Azure - Time & Space Complexity
When using Container Apps, scaling rules decide how many containers run based on demand.
We want to know how the number of scaling actions grows as the workload increases: in other words, the time complexity of scaling a Container App based on CPU usage.
```bash
# Define a CPU scaling rule for a Container App
az containerapp update \
  --name myapp \
  --resource-group myrg \
  --min-replicas 1 \
  --max-replicas 10 \
  --scale-rule-name cpu-scale \
  --scale-rule-type cpu \
  --scale-rule-metadata type=Utilization value=70
```
This configures the app to scale between 1 and 10 replicas, targeting 70% CPU utilization.
Consider what happens repeatedly as load changes:
- Primary operation: Scaling actions that add or remove container instances.
- How many times: Up to the max replicas limit, depending on workload spikes.
As workload increases, more containers start to handle the load.
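This grow-then-plateau behavior can be sketched with a small simulation. Note this is a hypothetical model, not the actual Container Apps scaler: the constants mirror the rule above, and the proportional formula is an assumption based on KEDA/HPA-style scaling.

```python
import math

MIN_REPLICAS, MAX_REPLICAS, TARGET_CPU = 1, 10, 70  # mirrors the rule above

def scaling_actions(cpu_samples):
    """Count replica add/remove actions for a series of observed
    average CPU percentages (KEDA/HPA-style proportional scaling)."""
    replicas, actions = MIN_REPLICAS, 0
    for cpu in cpu_samples:
        # Desired replicas scale proportionally with observed vs. target CPU,
        # clamped to the configured min/max bounds.
        desired = math.ceil(replicas * cpu / TARGET_CPU)
        desired = max(MIN_REPLICAS, min(MAX_REPLICAS, desired))
        actions += abs(desired - replicas)  # one action per instance changed
        replicas = desired
    return actions
```

For example, `scaling_actions([140])` returns 1 (one replica added to halve per-replica CPU), while `scaling_actions([700] * 100)` returns 9: the scaler maxes out at 10 replicas on the first spike, and the remaining 99 spikes trigger no further actions.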
| Input Size (CPU load spikes) | Approx. Scaling Actions |
|---|---|
| 10 | Up to 10 scaling actions (one per container added) |
| 100 | Still capped at 10 scaling actions due to max replicas |
| 1000 | Still capped at 10 scaling actions; no more containers added |
Pattern observation: Scaling actions grow linearly with load until the max container limit is reached, then stay constant.
Time Complexity: O(min(n, R)), where R is the max-replicas limit.
Scaling actions increase directly with workload while the replica count is below the cap; once the maximum number of containers is running, no further scale-out actions occur, so the count stays constant from that point on.
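As a back-of-the-envelope helper, the bound can be written directly. This assumes each load spike triggers at most one scale-out action, which is a simplification:

```python
def max_scale_out_actions(n, max_replicas=10, min_replicas=1):
    """Upper bound on scale-out actions for n load spikes:
    linear in n until the replica cap, then constant."""
    # Only (max_replicas - min_replicas) instances can ever be added.
    return min(n, max_replicas - min_replicas)
```

So `max_scale_out_actions(5)` is 5, but `max_scale_out_actions(1000)` is still only 9, matching the plateau in the table above.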
[X] Wrong: "Scaling actions happen instantly and infinitely as load grows."
[OK] Correct: Scaling is limited by max replicas and takes time to add containers, so actions are capped and paced.
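The "paced" part can also be made concrete. Scale rules are evaluated on a polling interval (30 seconds is the KEDA-style default), so reaching the cap takes multiple evaluation cycles. A conservative sketch, assuming one replica is added per cycle (real scalers may add several at once):

```python
POLLING_INTERVAL_S = 30  # assumed default evaluation interval

def seconds_to_scale_out(target, start=1):
    """Lower bound on time to grow from `start` to `target` replicas
    if each polling cycle adds at most one replica."""
    return max(0, target - start) * POLLING_INTERVAL_S
```

Under this assumption, `seconds_to_scale_out(10)` gives 270 seconds: even a sudden spike takes several minutes to absorb, which is why scaling is paced rather than instant.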
Understanding how scaling rules affect resource use helps you design apps that handle growth smoothly and predictably.
"What if we changed the max replicas from 10 to 100? How would the time complexity change?"