Micro-caching for dynamic content in Nginx - Time & Space Complexity
We want to understand how the time nginx spends serving requests changes when micro-caching is enabled.
Specifically, how caching affects the amount of work nginx does as the number of requests grows.
Analyze the time complexity of this nginx micro-caching setup.
```nginx
proxy_cache_path /tmp/cache keys_zone=microcache:10m max_size=100m inactive=1m;

server {
    location /dynamic/ {
        proxy_cache       microcache;
        proxy_cache_valid 200 1s;
        proxy_pass        http://backend;
    }
}
```
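To watch the cache decisions directly, the `location` block can be extended with nginx's built-in `$upstream_cache_status` variable (a sketch; the header name `X-Cache-Status` is a common convention, not something nginx requires):

```nginx
location /dynamic/ {
    proxy_cache       microcache;
    proxy_cache_valid 200 1s;
    proxy_pass        http://backend;

    # Expose the cache decision for each request; nginx sets
    # $upstream_cache_status to MISS, HIT, EXPIRED, and so on.
    add_header X-Cache-Status $upstream_cache_status;
}
```

Two requests to the same URL within one second should then show `MISS` followed by `HIT` in the response header.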
This config caches responses to dynamic requests for 1 second, so during a burst of traffic most requests are served from the cache instead of reaching the backend.
Look at what repeats when requests come in.
- Primary operation: Checking cache for each request.
- How many times: Once per request.
- Secondary operation: Forwarding to the backend on a cache miss, which is far more costly than a cache lookup.
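The per-request cache check behaves like a hash-table lookup. A minimal Python model of this (a sketch; `MicroCache` and `fetch` are illustrative names, not nginx internals):

```python
class MicroCache:
    """Minimal model of a micro-cache: entries expire ttl seconds after
    they are stored, mirroring `proxy_cache_valid 200 1s`."""

    def __init__(self, ttl=1.0):
        self.ttl = ttl
        self.store = {}  # cache key -> (response, expiry time)

    def get(self, key, fetch, now):
        """One O(1) dict lookup per request; fetch() (the 'backend')
        runs only on a miss or an expired entry."""
        entry = self.store.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]                        # cache hit: no backend work
        response = fetch()                         # cache miss: costly backend call
        self.store[key] = (response, now + self.ttl)
        return response
```

Serving 1000 requests for the same URL inside one TTL window triggers a single backend call; the other 999 requests each cost one dictionary lookup.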
As requests increase, nginx performs one fast cache lookup per request.
| Requests (n) | Cache Checks | Backend Calls |
|---|---|---|
| 10 | 10 | few (bounded by elapsed time / 1s TTL) |
| 100 | 100 | few (unchanged) |
| 1000 | 1000 | few (unchanged) |
Pattern observation: Cache checks grow linearly with requests, but backend calls are bounded by the elapsed time divided by the TTL (per cached URL), not by n.
Time Complexity: O(n)
This means the total work nginx does grows directly with the number of requests, but each request is fast because of caching. Space is bounded separately: `max_size=100m` caps the cache on disk and `keys_zone=microcache:10m` caps the key metadata in memory, so cache size does not grow with n — O(1) space with respect to requests.
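The table above can be reproduced with a rough simulation (a sketch with assumed numbers: requests spread evenly over a 5-second window, a single cache key, and the 1-second TTL from the config):

```python
def simulate(n_requests, window_ms=5000, ttl_ms=1000):
    """Count cache checks vs. backend calls for n requests spread evenly
    over a fixed time window, with a single cache key and a TTL."""
    expiry_ms = -1               # nothing cached yet
    checks = backend_calls = 0
    for i in range(n_requests):
        now_ms = i * window_ms // n_requests
        checks += 1              # every request pays one O(1) cache check
        if now_ms >= expiry_ms:  # entry missing or past its TTL
            backend_calls += 1   # only these requests reach the backend
            expiry_ms = now_ms + ttl_ms
    return checks, backend_calls

for n in (10, 100, 1000):
    print(n, simulate(n))  # checks grow with n; backend calls stay at 5
```

Cache checks scale as O(n), while backend calls stay at roughly window / TTL = 5 no matter how large n gets.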
[X] Wrong: "Caching makes nginx handle requests instantly with no work."
[OK] Correct: Even with caching, nginx must check the cache for every request, so work still grows with requests.
Understanding how caching affects request handling helps you explain real-world server performance clearly and confidently.
"What if the cache duration changed from 1 second to 10 seconds? How would the time complexity change?"