Cache validity rules in Nginx - Time & Space Complexity
When nginx receives a request, it checks whether a cached response is still valid before serving it from the cache. We want to know how the time for this check grows as the number of cache entries grows.
Analyze the time complexity of the following nginx cache validity check snippet.
```nginx
proxy_cache_path /data/nginx/cache keys_zone=mycache:10m;

server {
    location / {
        proxy_cache mycache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;
    }
}
```
This snippet sets rules for how long cached responses stay valid and when nginx can use stale cache.
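To make the per-request check concrete, here is a minimal Python sketch of the logic the `proxy_cache_valid` rules describe. This is an illustrative model, not nginx internals: the `VALIDITY` table, `is_fresh`, and the entry fields are hypothetical names chosen to mirror the config (600 s for 200/302, 60 s for 404).

```python
# Hypothetical model of the proxy_cache_valid rules above:
# status code -> validity window in seconds (10m for 200/302, 1m for 404).
VALIDITY = {200: 600, 302: 600, 404: 60}

def is_fresh(entry, now):
    """Return True if the cached entry is still inside its validity window."""
    ttl = VALIDITY.get(entry["status"])
    if ttl is None:
        return False  # no proxy_cache_valid rule covers this status code
    return now - entry["stored_at"] < ttl

entry = {"status": 200, "stored_at": 1000.0}
print(is_fresh(entry, 1000.0 + 599))  # True: inside the 10-minute window
print(is_fresh(entry, 1000.0 + 601))  # False: the window has expired
```

Note that the check is a dictionary lookup plus one timestamp comparison: the amount of work does not depend on how many other entries the cache holds.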
Identify the loops, recursion, or array traversals that repeat per request.
- Primary operation: Checking cache entries against validity rules for each incoming request.
- How many times: Once per request, nginx looks up the cache key and checks timestamps and status codes.
Each request triggers a cache lookup and validity check, which depends on the cache key, not the total cache size.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 cache entries | 1 lookup and validity check |
| 100 cache entries | 1 lookup and validity check |
| 1000 cache entries | 1 lookup and validity check |
Pattern observation: The time to check cache validity stays about the same no matter how many entries exist.
Time Complexity: O(1)
This means nginx checks cache validity in constant time for each request, regardless of cache size.
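The pattern in the table can be demonstrated with a small sketch. Assuming a hash-table model of the cache (names here are illustrative, not nginx internals), a keyed lookup touches exactly one entry whether the cache holds 10 or 1000:

```python
# Sketch: a keyed hash-table lookup does the same work regardless of
# how many entries the cache holds.
def build_cache(n):
    """Build a toy cache with n entries keyed like 'key-0' .. 'key-(n-1)'."""
    return {f"key-{i}": {"status": 200, "stored_at": 0.0} for i in range(n)}

for n in (10, 100, 1000):
    cache = build_cache(n)
    entry = cache["key-5"]        # one hash lookup, independent of n
    print(n, entry["status"])     # each request touches exactly one entry
```

This mirrors the table above: the "approx. operations" column stays at one lookup per request as n grows.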
[X] Wrong: "Checking cache validity takes longer as the cache grows because nginx scans all entries."
[OK] Correct: nginx hashes the cache key and uses the hash to locate the matching entry directly, so it never scans all entries.
Understanding how cache validity checks scale helps you explain efficient caching in real systems.
"What if nginx had to scan multiple cache keys for a request? How would the time complexity change?"