Round-robin (default) in Nginx - Time & Space Complexity
We want to understand how the work done by nginx grows when it uses the round-robin method to send requests to servers.
Specifically, how does the number of servers affect the time it takes to pick the next server?
Analyze the time complexity of the following nginx round-robin load balancing snippet.
```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
This configuration sends each incoming request to the next server in the list, cycling through them one by one.
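The cycling behavior can be sketched as a tiny selector. This Python class is only an illustration of the idea, not nginx's actual C implementation:

```python
class RoundRobin:
    """Minimal round-robin selector: remember the last index, advance it by one."""

    def __init__(self, servers):
        self.servers = servers
        self.index = -1  # no server picked yet

    def pick(self):
        # One increment and one modulo per request -- independent of len(self.servers).
        self.index = (self.index + 1) % len(self.servers)
        return self.servers[self.index]


backends = ["backend1.example.com", "backend2.example.com", "backend3.example.com"]
lb = RoundRobin(backends)
picks = [lb.pick() for _ in range(4)]
# Cycles through the list and wraps: backend1, backend2, backend3, backend1
```

The key detail is that the selector stores where it left off, so each pick is a single pointer move rather than a search.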
Identify the loops, recursion, and array traversals that repeat.
- Primary operation: Selecting the next server in the list for each request.
- How many times: Once per incoming request, cycling through all servers in order.
No matter how many servers are configured, nginx simply advances to the next server in the list for each request.
| Input Size (n servers) | Approx. Operations per request |
|---|---|
| 3 | 1 (just pick next server) |
| 10 | 1 (still just pick next server) |
| 100 | 1 (still just pick next server) |
Pattern observation: The work to select the next server stays the same no matter how many servers there are.
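The pattern in the table can be checked with a small piece of instrumentation (hypothetical, not nginx code): count how many servers the selector actually reads for one pick as n grows.

```python
class CountingList(list):
    """A list that counts element reads, so we can measure work per selection."""

    def __init__(self, items):
        super().__init__(items)
        self.reads = 0

    def __getitem__(self, i):
        self.reads += 1
        return super().__getitem__(i)


def reads_for_one_pick(n):
    """Simulate one round-robin pick against n servers and report reads."""
    servers = CountingList(f"backend{i}" for i in range(n))
    index = 0
    index = (index + 1) % len(servers)  # advance the pointer
    _ = servers[index]                  # the only element access
    return servers.reads


results = [reads_for_one_pick(n) for n in (3, 10, 100)]
# One read per pick at every size, as the table predicts.
```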
Time Complexity: O(1)
This means nginx picks the next server in constant time, regardless of how many servers are configured.
[X] Wrong: "Selecting the next server takes longer as the number of servers grows because nginx checks all servers each time."
[OK] Correct: nginx keeps track of the last used server and directly picks the next one without checking all servers, so the time stays constant.
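To make the contrast concrete, here is a sketch of both mental models: the wrong one, which inspects every server on each request, and the correct one, which jumps straight to the next index. The operation counting is illustrative only.

```python
def scan_pick(servers, last):
    """Wrong model: inspect every server to find the one after `last` -> O(n)."""
    examined = 0
    chosen = None
    for i, s in enumerate(servers):
        examined += 1  # touches all n servers
        if i == (last + 1) % len(servers):
            chosen = s
    return chosen, examined


def pointer_pick(servers, last):
    """Actual idea: jump directly to the next index -> O(1)."""
    nxt = (last + 1) % len(servers)
    return servers[nxt], 1


servers = [f"backend{i}" for i in range(100)]
s1, scanned = scan_pick(servers, last=41)
s2, touched = pointer_pick(servers, last=41)
# Same server chosen either way, but the scan examined 100 servers, the pointer 1.
```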
Understanding why nginx's round-robin selection stays cheap shows you can reason about how systems scale and keep performance steady as they grow.
"What if nginx used a weighted round-robin method instead? How would the time complexity change?"
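One way to explore that question: nginx's weighted variant uses "smooth weighted round-robin", where every selection adjusts a running weight for each server and then takes the largest. The sketch below (with made-up weights) illustrates the algorithm; notice that each pick now scans all servers, so selection is O(n) rather than O(1).

```python
def smooth_weighted_rr(servers, rounds):
    """Smooth weighted round-robin: each pick scans every server and adjusts weights."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    order = []
    for _ in range(rounds):
        # Raise every server's current weight by its configured weight (O(n) scan).
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=current.get)  # another O(n) scan for the largest
        current[best] -= total                # penalize the chosen server
        order.append(best)
    return order


# Hypothetical weights: server a is five times heavier than b and c.
sequence = smooth_weighted_rr({"a": 5, "b": 1, "c": 1}, rounds=7)
# Spreads a's picks evenly: ['a', 'a', 'b', 'a', 'c', 'a', 'a']
```

The payoff of the extra work is smoothness: the heavy server's picks are interleaved with the light ones instead of bunched together.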