Weighted round-robin in Nginx - Time & Space Complexity
We want to understand how the time needed to pick a server grows as the number of servers increases in weighted round-robin load balancing.
How does the selection process scale when more servers with different weights are added?
Analyze the time complexity of the following nginx weighted round-robin snippet.
upstream backend {
    server backend1.example.com weight=3;
    server backend2.example.com weight=1;
    server backend3.example.com weight=2;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
This configuration distributes requests to servers in proportion to their weights: out of every 6 requests (3 + 1 + 2), backend1 receives 3, backend3 receives 2, and backend2 receives 1, interleaved across the cycle rather than sent in bursts.
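The selection behavior can be sketched with nginx's "smooth" weighted round-robin idea: on every request, each peer's current weight is bumped by its configured weight, the peer with the largest current weight is picked, and that peer's current weight is reduced by the total. This is an illustrative Python sketch, not nginx source; the `peers` structure and field names are assumptions for the demo.

```python
# Sketch of smooth weighted round-robin (illustrative, not nginx's C code).
# Each selection scans every peer once, so one pick costs O(n).

def smooth_wrr_pick(peers):
    """peers: list of dicts with 'name', 'weight', 'current'. Mutates 'current'."""
    total = sum(p["weight"] for p in peers)
    best = None
    for p in peers:                     # one pass over all n peers
        p["current"] += p["weight"]     # accumulate this peer's weight
        if best is None or p["current"] > best["current"]:
            best = p                    # remember the highest current weight
    best["current"] -= total            # penalize the chosen peer
    return best["name"]

peers = [
    {"name": "backend1", "weight": 3, "current": 0},
    {"name": "backend2", "weight": 1, "current": 0},
    {"name": "backend3", "weight": 2, "current": 0},
]
picks = [smooth_wrr_pick(peers) for _ in range(6)]
print(picks)  # over one 6-request cycle: backend1 x3, backend3 x2, backend2 x1
```

Note how the picks are spread out (backend1 never gets three requests back to back), which is the point of the "smooth" variant.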
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: scanning the list of servers to select the next one; the balancer examines each server's weight state on every selection.
- How many times: one pass over all n servers per request.

As the number of servers grows, each selection pass must check more servers to respect the configured weights.
| Servers (n) | Checks per selection |
|---|---|
| 3 | 3 |
| 10 | 10 |
| 100 | 100 |
Pattern observation: The number of operations grows linearly with the number of servers.
Time Complexity: O(n)
This means the time to select a server grows directly in proportion to the number of servers configured.
[X] Wrong: "Selecting a server in weighted round-robin is always constant time regardless of server count."
[OK] Correct: Each selection must examine the servers to respect their weights, so more servers mean more checks per request.
Understanding how load balancing scales helps you design systems that handle more traffic smoothly. This skill shows you can think about efficiency in real setups.
"What if nginx used a different algorithm that stored precomputed weighted lists? How would the time complexity change?"