Cloud CDN integration in GCP - Time & Space Complexity
When using Cloud CDN, it is important to understand how the system's work scales with the number of requests. The goal here is to analyze the time complexity of serving content with Cloud CDN enabled: how does the time to serve content grow as more users request data?
// Enable Cloud CDN on the backend service (Pulumi-style sketch;
// enableCdn is declared on the resource rather than mutated afterward)
const backendService = new gcp.compute.BackendService("backend-service", {
    enableCdn: true,
});

// Serve a user request (cdnCache and fetchFromOrigin are placeholders
// for the CDN edge cache and the origin server)
function serveContent(request) {
    if (cdnCache.has(request.url)) {
        return cdnCache.get(request.url); // Cache hit: serve from the edge cache
    } else {
        const content = fetchFromOrigin(request.url); // Cache miss: fetch from origin
        cdnCache.set(request.url, content); // Cache the content for later requests
        return content;
    }
}
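The behavior above can be exercised with a minimal self-contained sketch, assuming an in-memory `Map` stands in for the CDN edge cache and `fetchFromOrigin` is a stubbed origin call that counts how often the origin is reached:

```javascript
// Hypothetical in-memory stand-ins for the CDN cache and origin server.
const cdnCache = new Map();
let originFetches = 0;

function fetchFromOrigin(url) {
  originFetches++;              // count trips to the origin
  return `content for ${url}`;  // stubbed origin response
}

function serveContent(request) {
  if (cdnCache.has(request.url)) {
    return cdnCache.get(request.url); // cache hit: no origin call
  }
  const content = fetchFromOrigin(request.url); // cache miss
  cdnCache.set(request.url, content);           // cache for later requests
  return content;
}

serveContent({ url: "/index.html" }); // first request: fetched from origin
serveContent({ url: "/index.html" }); // repeat request: served from cache
console.log(originFetches); // 1 — the repeat request never reached the origin
```

The repeat request pays only the cache lookup, which is the effect the rest of this analysis builds on.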
This sequence shows how Cloud CDN caches content and serves repeated requests faster.
To find the complexity, look at the operation that happens on every request.
- Primary operation: Checking cache and serving content.
- How many times: Once per user request.
As more users request content, the system checks the cache each time.
| Requests (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 cache checks, some origin fetches if cache misses |
| 100 | 100 cache checks, fewer origin fetches as cache fills |
| 1000 | 1000 cache checks, mostly cache hits, few origin fetches |
Each request causes one cache check, so work grows directly with requests.
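The table can be reproduced with a small simulation. This is an assumed stand-in for real CDN traffic: `n` requests cycle over a fixed catalog of distinct URLs, and we count cache checks (one per request) versus origin fetches (bounded by the number of distinct URLs):

```javascript
// Simulate n requests over a small catalog of distinct URLs and count
// cache checks vs. origin fetches.
function simulate(n, distinctUrls) {
  const cache = new Map();
  let cacheChecks = 0;
  let originFetches = 0;
  for (let i = 0; i < n; i++) {
    const url = `/page-${i % distinctUrls}`;
    cacheChecks++;                 // one check per request — this is the O(n) term
    if (!cache.has(url)) {
      originFetches++;             // misses are capped by the number of distinct URLs
      cache.set(url, `content for ${url}`);
    }
  }
  return { cacheChecks, originFetches };
}

console.log(simulate(10, 10));   // { cacheChecks: 10, originFetches: 10 }
console.log(simulate(100, 10));  // { cacheChecks: 100, originFetches: 10 }
console.log(simulate(1000, 10)); // { cacheChecks: 1000, originFetches: 10 }
```

Cache checks grow linearly with `n`, while origin fetches stay constant once the cache is warm, matching the trend in the table.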
Time Complexity: O(n)
This means the time to serve content grows linearly with the number of user requests.
[X] Wrong: "Cloud CDN makes serving content instant no matter how many requests come."
[OK] Correct: Each request still requires a cache check, so total time grows linearly with the number of requests; caching reduces origin fetches, not the per-request check.
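This distinction can be made concrete with an assumed hit-rate sketch: each request costs one O(1) set lookup regardless of cache state, so total work is O(n), while the fraction of requests that avoid the origin climbs as the cache warms up:

```javascript
// Assumed illustration: per-request cost is one O(1) lookup, so total
// work is O(n), even though the miss rate falls as the cache warms up.
function hitRate(n, distinctUrls) {
  const cache = new Set();
  let hits = 0;
  for (let i = 0; i < n; i++) {
    const url = `/page-${i % distinctUrls}`;
    if (cache.has(url)) hits++;   // O(1) check, performed on every request
    else cache.add(url);          // first sight of this URL: would hit the origin
  }
  return hits / n;                // fraction of requests served from cache
}

console.log(hitRate(10, 10));   // 0    — cold cache, every request misses
console.log(hitRate(1000, 10)); // 0.99 — warm cache, almost all requests hit
```

The hit rate approaches 1, but the number of checks, and therefore the asymptotic time, still scales with the number of requests.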
Understanding how caching affects request handling helps you design efficient cloud systems and explain performance trade-offs clearly.
What if we added multiple CDN edge locations? How would the time complexity of serving requests change?