Max fails and fail timeout in Nginx - Time & Space Complexity
We want to understand how nginx handles repeated connection failures to a backend server.
Specifically, how the number of failure checks grows as requests increase.
Analyze the time complexity of the following nginx configuration snippet.
```nginx
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
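To make the mechanism concrete, here is a minimal sketch (in Python, not nginx source code) of the per-server bookkeeping that max_fails and fail_timeout imply. The class name and fields are hypothetical; the point is that the availability check is a constant-time comparison of a counter and a timestamp.

```python
class UpstreamPeer:
    """Hypothetical sketch of nginx's max_fails/fail_timeout bookkeeping."""

    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0        # failures seen in the current window
        self.checked_at = 0.0  # time of the most recent failure

    def report_failure(self, now):
        # Called when a proxied request to this peer fails.
        self.fails += 1
        self.checked_at = now

    def available(self, now):
        # The O(1) check performed for every incoming request:
        # skip the peer while it has >= max_fails failures
        # inside the fail_timeout window.
        if self.fails >= self.max_fails and now - self.checked_at < self.fail_timeout:
            return False
        if now - self.checked_at >= self.fail_timeout:
            self.fails = 0  # window expired; give the peer another chance
        return True
```

For example, after three failures at t=0 the peer is skipped at t=10 but accepted again at t=31, once the 30-second window has passed.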
This config allows backend1.example.com up to 3 failed attempts within a 30-second window; once that limit is reached, nginx marks the server unavailable for the next 30 seconds (fail_timeout acts as both the counting window and the downtime).
Identify the loops, recursion, or array traversals that repeat as the input grows.
- Primary operation: on each request, nginx checks whether the backend server is currently marked as failed or available.
- How many times: This check happens once per request, repeated for every incoming request.
Each new request triggers a quick check of the server's failure state.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
Pattern observation: The number of checks grows directly with the number of requests.
Time Complexity: O(n)
The time spent on failure checks grows linearly with the number of requests. Space complexity is O(1) per server: nginx keeps only a fixed failure counter and timestamp, regardless of request volume.
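The linear pattern in the table can be sketched directly: each incoming request triggers exactly one constant-time availability check, so the total count of checks equals the number of requests. The function below is an illustrative stand-in, not nginx code.

```python
def count_checks(n_requests):
    """Model the per-request availability check: one O(1) check per request,
    so total work grows linearly with n_requests."""
    checks = 0
    for _ in range(n_requests):
        checks += 1  # stand-in for the O(1) server-status check
    return checks
```

Running it for n = 10, 100, and 1000 reproduces the table above: 10, 100, and 1000 checks.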
[X] Wrong: "The failure checks happen only when a failure occurs, so they are rare and constant time."
[OK] Correct: nginx checks the server status on every request, regardless of failure, so the checks scale with request count.
Understanding how nginx handles failure checks helps you reason about system reliability and request handling under load.
What if we added multiple backend servers with max_fails and fail_timeout settings? How would the time complexity change?
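As a rough sketch of that scenario: with k backend servers, a round-robin pass may need to probe up to k peers before finding an available one, so a single request costs at most O(k) checks and n requests cost O(n · k). Since k is a fixed configuration constant, the complexity in terms of request count remains O(n). The function below is hypothetical; it models each backend's availability as a boolean snapshot rather than real nginx state.

```python
def route(request_id, up_flags):
    """Hypothetical round-robin selection over k backends.

    up_flags: list of booleans, one per backend, giving a snapshot of
    which servers are currently available. Returns the index of the
    chosen backend, or None if every peer is marked down.
    """
    k = len(up_flags)
    for offset in range(k):  # at most k checks per request
        idx = (request_id + offset) % k
        if up_flags[idx]:
            return idx
    return None
```

For instance, with availability [False, True, False], request 0 skips the first backend and lands on index 1; if every flag is False, the request cannot be routed at all.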