Deploying workloads to AKS in Azure - Time & Space Complexity
When deploying workloads to Azure Kubernetes Service (AKS), it is important to understand how total deployment time scales as the number of workloads grows.
Analyze the time complexity of deploying multiple container workloads to AKS using Azure CLI commands.
```shell
# Create the AKS cluster (one-time setup)
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --enable-addons monitoring --generate-ssh-keys

# Deploy each workload manifest (assuming manifests live in workloads/)
for workload in workloads/*.yaml; do
  az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
  kubectl apply -f "$workload"
done
```
This sequence creates an AKS cluster once, then deploys each workload by applying its configuration to the cluster.
- Primary operation: applying a workload's configuration with `kubectl apply`.
- How many times: once per workload, so it repeats n times for n workloads.
- Fetching cluster credentials with `az aks get-credentials` also repeats per workload, but the kubeconfig is cached locally after the first call, so subsequent calls add only constant overhead.
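Because the credentials fetch contributes only a constant factor, a common refinement is to hoist it out of the loop entirely. A minimal sketch, assuming the same hypothetical `workloads/` directory of manifests:

```shell
# Fetch credentials once; kubectl reuses the cached kubeconfig afterwards
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Still O(n): one apply operation per manifest
for workload in workloads/*.yaml; do
  kubectl apply -f "$workload"
done
```

This does not change the asymptotic complexity, only the constant factor per iteration.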
Each workload requires a separate apply operation, so the total deployment time grows as you add more workloads.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 apply operations |
| 100 | 100 apply operations |
| 1000 | 1000 apply operations |
Pattern observation: The number of apply operations grows directly with the number of workloads.
Time Complexity: O(n)
This means deployment time grows linearly: doubling the number of workloads roughly doubles the time.
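The linear growth can be sketched with a tiny shell helper (a hypothetical `count_ops` function, counting one cluster create plus one apply per workload):

```shell
# count_ops: total operations for n workloads
# = 1 cluster create + n kubectl apply invocations
count_ops() {
  local n=$1
  echo $((1 + n))
}

count_ops 10     # prints 11
count_ops 100    # prints 101
count_ops 1000   # prints 1001
```

Dropping the constant term and constant factors leaves O(n).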
[X] Wrong: "Deploying multiple workloads happens all at once, so time stays the same no matter how many workloads."
[OK] Correct: Each workload requires its own deployment step, so time adds up with each one.
Understanding how deployment time grows helps you plan and explain scaling strategies clearly in real projects.
"What if we deployed all workloads using a single combined configuration file? How would the time complexity change?"