You need to design a Cloud CDN integration for a website that serves users worldwide. Which architecture best ensures low latency and high availability?
Think about how Google Cloud distributes traffic globally and caches content close to users.
Option B uses a global HTTP(S) Load Balancer that routes user requests to the nearest backend region and enables Cloud CDN to cache content at edge locations worldwide, ensuring low latency and high availability.
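As a concrete sketch of the CDN half of that architecture: Cloud CDN is enabled on the backend service behind the global load balancer with a single flag. The service name `web-backend` here is a placeholder.

```shell
# Enable Cloud CDN on the backend service behind the global HTTP(S) load balancer
gcloud compute backend-services update web-backend \
    --global \
    --enable-cdn
```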
Your Cloud CDN cache hit ratio improves from 60% to 90%. How does this affect the backend server load?
Cache hit ratio means the percentage of requests served by the CDN cache instead of the backend.
Improving the cache hit ratio from 60% to 90% means the fraction of requests reaching the backend drops from 40% to 10%, so backend request load falls by 75%.
You have static assets stored in Cloud Storage. You want to serve them globally with low latency. What is the main tradeoff when enabling Cloud CDN in front of Cloud Storage?
Consider how caching affects cost and performance.
Cloud CDN caches content at edge locations, which reduces latency, but it can increase cost because cache fill and cache egress traffic incur their own charges.
After updating your website content, you want Cloud CDN to serve the new version immediately. Which method correctly invalidates cached content?
Think about how to remove specific cached content without downtime.
Cloud CDN provides an API to invalidate cached URLs immediately, ensuring updated content is served without waiting for TTL expiration.
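As a sketch of that invalidation in practice, the gcloud CLI exposes it per URL map; the map name `web-map` and the paths below are placeholders:

```shell
# Invalidate one updated object at every Cloud CDN edge location
gcloud compute url-maps invalidate-cdn-cache web-map \
    --path "/index.html"

# Or invalidate everything under a prefix (rate-limited, so use sparingly)
gcloud compute url-maps invalidate-cdn-cache web-map \
    --path "/static/*"
```

For routine deployments, versioned asset URLs (e.g. a content hash in the filename) avoid the need to invalidate at all; invalidation is best reserved for urgent corrections.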
Your website receives 10 million requests per day. Average response size is 500 KB. You expect a 75% cache hit ratio. Estimate the daily data served from Cloud CDN cache edges.
Calculate total data, then multiply by cache hit ratio.
Total data = 10,000,000 requests × 500 KB = 5 × 10⁹ KB = 5 TB (decimal units). With a 75% cache hit ratio, the cache edges serve 0.75 × 5 TB = 3.75 TB per day; the remaining 1.25 TB is served from the backend.
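The same back-of-the-envelope estimate, worked in decimal units:

```python
# Estimate daily data served from Cloud CDN cache edges (decimal TB).
requests = 10_000_000
avg_response_kb = 500
hit_ratio = 0.75

total_kb = requests * avg_response_kb       # 5,000,000,000 KB
total_tb = total_kb * 1_000 / 1e12          # KB -> bytes -> TB = 5.0
cached_tb = total_tb * hit_ratio            # 3.75
print(f"Served from cache edges: {cached_tb} TB/day")  # 3.75 TB/day
```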