Why load balancing matters in Azure - Performance Analysis
We want to understand how the work done by the load balancer grows as more users or requests arrive. How does the system handle growing traffic efficiently? To answer that, we analyze the time complexity of distributing incoming requests to backend servers.
```bicep
// Azure Load Balancer example (Bicep).
// Assumes a public IP resource with the symbolic name 'publicIP' is
// declared elsewhere in the same template.
var lbName = 'myLoadBalancer'

resource lb 'Microsoft.Network/loadBalancers@2022-05-01' = {
  name: lbName
  location: resourceGroup().location
  properties: {
    frontendIPConfigurations: [
      {
        name: 'LoadBalancerFrontEnd'
        properties: { publicIPAddress: { id: publicIP.id } }
      }
    ]
    backendAddressPools: [ { name: 'BackendPool' } ]
    loadBalancingRules: [
      {
        name: 'HTTPRule'
        properties: {
          // A rule cannot reference the resource it is being declared in,
          // so build the child-resource IDs with resourceId().
          frontendIPConfiguration: {
            id: resourceId('Microsoft.Network/loadBalancers/frontendIPConfigurations', lbName, 'LoadBalancerFrontEnd')
          }
          backendAddressPool: {
            id: resourceId('Microsoft.Network/loadBalancers/backendAddressPools', lbName, 'BackendPool')
          }
          protocol: 'Tcp'
          frontendPort: 80
          backendPort: 80
          enableFloatingIP: false
          idleTimeoutInMinutes: 4
          loadDistribution: 'Default'
        }
      }
    ]
  }
}
```
This setup distributes incoming web requests evenly across multiple servers.
Consider what the load balancer does repeatedly as requests come in:
- Primary operation: Routing each incoming request to one backend server.
- How many times: Once per request, no matter how many requests arrive.
Each new request causes one routing decision by the load balancer.
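The "one routing decision per request" idea can be sketched with a minimal round-robin balancer. This is an illustrative model, not Azure's actual routing implementation; the class and server names are made up for the example.

```python
# Minimal round-robin sketch: each incoming request triggers exactly one
# constant-time (O(1)) routing decision by the balancer.

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = servers
        self._next = 0  # index of the server that gets the next request

    def route(self, request):
        """One O(1) routing decision per request."""
        server = self.servers[self._next]
        self._next = (self._next + 1) % len(self.servers)
        return server

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assignments = [lb.route(f"req-{i}") for i in range(4)]
print(assignments)  # ['web-1', 'web-2', 'web-3', 'web-1']
```

Because each `route()` call does a fixed amount of work regardless of how many servers or requests exist, the total work is simply proportional to the number of requests.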
| Requests (n) | Routing Operations |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Pattern observation: The number of routing operations grows directly with the number of requests.
Time Complexity: O(n)
This means the work done by the load balancer grows linearly as more requests come in.
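A quick simulation (a hypothetical sketch, not real balancer code) reproduces the table above: counting routing decisions for growing request volumes shows the count equals n, i.e. linear growth.

```python
# Count routing operations for n requests against a fixed server pool.
# Each request costs exactly one routing decision, so total ops == n.

def routing_operations(n_requests, servers):
    ops = 0
    idx = 0
    for _ in range(n_requests):
        idx = (idx + 1) % len(servers)  # one O(1) routing decision
        ops += 1
    return ops

servers = ["web-1", "web-2", "web-3"]
for n in (10, 100, 1000):
    print(n, routing_operations(n, servers))  # ops grows as O(n)
```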
[X] Wrong: "Load balancing handles all requests instantly, so time doesn't grow with more users."
[OK] Correct: Each request still needs to be routed, so the total work grows as requests increase, even if each routing is fast.
Understanding how load balancing scales helps you design systems that stay responsive as more people use them.
"What if the load balancer had to check the health of each backend server before routing every request? How would the time complexity change?"