Blue-green deployment routing in Nginx - Time & Space Complexity
We want to understand how the routing decisions in blue-green deployment scale as requests increase.
How does nginx handle routing when switching between two versions of an app?
Analyze the time complexity of the following nginx routing configuration for blue-green deployment.
```nginx
upstream blue {
    server 10.0.0.1:8080;
}

upstream green {
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location / {
        if ($cookie_version = "blue") {
            proxy_pass http://blue;
        }
        if ($cookie_version = "green") {
            proxy_pass http://green;
        }
        # Fallback: requests without a recognized cookie go to blue.
        proxy_pass http://blue;
    }
}
```
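The routing decision above can be modeled in Python as a fixed number of conditional checks per request. This is a minimal sketch, not nginx's actual implementation; the function name `route_request` is ours, and the upstream addresses mirror the config:

```python
# Minimal model of the nginx blue-green routing decision.
# Upstream addresses mirror the config above; names are illustrative.
UPSTREAMS = {
    "blue": "10.0.0.1:8080",
    "green": "10.0.0.2:8080",
}

def route_request(cookies: dict) -> str:
    """Route one request: a constant number of cookie comparisons."""
    version = cookies.get("version")
    if version == "blue":
        return UPSTREAMS["blue"]
    if version == "green":
        return UPSTREAMS["green"]
    # Requests without a recognized cookie fall back to blue.
    return UPSTREAMS["blue"]
```

Calling `route_request({"version": "green"})` returns the green upstream; an empty cookie dict falls back to blue. The key point: the work per call does not depend on how many requests came before it.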
This code routes user requests to either the blue or green server based on a cookie value.
Identify the operations that repeat: loops, recursion, or traversals.
- Primary operation: nginx evaluates the cookie value against each `if` condition for every incoming request.
- How many times: once per request; there are no loops or recursion in the config itself.
Each new request triggers a single cookie check and routing decision.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 cookie checks and routing decisions |
| 100 | 100 cookie checks and routing decisions |
| 1000 | 1000 cookie checks and routing decisions |
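The pattern in the table can be reproduced with a short simulation that performs one routing decision per request and counts them. This is a sketch under our own naming (`simulate` is not an nginx function); the cycling cookie values just stand in for mixed traffic:

```python
import itertools

def simulate(n_requests: int) -> int:
    """Route n_requests requests and count the routing decisions made."""
    cookies = itertools.cycle(["blue", "green", None])  # mixed traffic
    decisions = 0
    for _ in range(n_requests):
        version = next(cookies)
        upstream = "green" if version == "green" else "blue"  # one check
        decisions += 1  # exactly one routing decision per request
    return decisions

for n in (10, 100, 1000):
    print(f"{n} requests -> {simulate(n)} cookie checks and routing decisions")
```

Doubling the request count doubles the decision count, which is exactly the linear growth the table shows.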
Pattern observation: The work grows linearly with the number of requests.
Time Complexity: O(n) for n requests
Each individual request is routed in O(1) time (a fixed number of cookie comparisons), so the total routing work grows linearly with the number of requests.
[X] Wrong: "Routing decisions take constant time regardless of request count."
[OK] Correct: Each request requires its own routing check; each check is O(1), but the total work grows linearly with the number of requests rather than staying fixed.
Understanding how routing scales helps you explain real deployment strategies clearly and confidently.
What if we added more versions (blue, green, yellow, etc.)? How would the time complexity change?
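One way to reason about this: with k versions checked as chained `if` blocks, each request costs up to k comparisons, so total work becomes O(n·k); a hash lookup (which is what nginx's `map` directive effectively gives you) keeps the per-request cost roughly constant regardless of k. A hedged sketch comparing the two approaches, with illustrative version names and addresses:

```python
# Illustrative k-version routing table (names and addresses are examples).
VERSIONS = ["blue", "green", "yellow"]
UPSTREAM = {
    "blue": "10.0.0.1:8080",
    "green": "10.0.0.2:8080",
    "yellow": "10.0.0.3:8080",
}

def route_sequential(cookie: str) -> str:
    """Chained `if` checks: up to k comparisons per request -> O(k)."""
    for version in VERSIONS:  # mirrors a chain of `if` blocks
        if cookie == version:
            return UPSTREAM[version]
    return UPSTREAM["blue"]  # default

def route_lookup(cookie: str) -> str:
    """Hash lookup (analogous to nginx `map`): ~O(1) per request."""
    return UPSTREAM.get(cookie, UPSTREAM["blue"])
```

Both return the same upstreams; the difference only matters as the number of versions grows, where the lookup-table approach keeps each routing decision constant-time.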