In a microservices system, a sidecar proxy is deployed alongside a service instance. What is its main job?
Think about what tasks a proxy can do to help a service without changing its core logic.
A sidecar proxy manages networking concerns like routing, load balancing, and security for its paired service, without changing the service code.
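The idea can be sketched in code (all names here are illustrative, not a real proxy's API): the proxy wraps every call with cross-cutting concerns such as request tracing and retries, so the service logic never has to implement them.

```python
class SidecarProxy:
    """Applies cross-cutting policy (tracing header, retries) around a transport."""

    def __init__(self, transport, max_retries=2):
        self.transport = transport        # callable: (path, headers) -> response
        self.max_retries = max_retries

    def call(self, path, headers=None):
        headers = dict(headers or {})
        headers["x-request-id"] = "generated-by-proxy"   # tracing concern
        last_error = None
        for _ in range(self.max_retries + 1):            # retry concern
            try:
                return self.transport(path, headers)
            except ConnectionError as err:
                last_error = err
        raise last_error

# A stand-in upstream that fails once, then succeeds -- the proxy's
# retry policy hides the transient failure from the caller.
def flaky_transport_factory(fail_times):
    state = {"fails": fail_times}
    def transport(path, headers):
        if state["fails"] > 0:
            state["fails"] -= 1
            raise ConnectionError("upstream unavailable")
        return {"path": path, "headers": headers, "status": 200}
    return transport

proxy = SidecarProxy(flaky_transport_factory(fail_times=1))
response = proxy.call("/orders")
```

The service code (here, the caller of `proxy.call`) stays unaware of retries and tracing; that separation is the point of the pattern.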
Choose the architecture diagram that correctly shows how a sidecar proxy is deployed with a microservice.
Remember that sidecar proxies run alongside the service but as separate processes or containers.
The sidecar proxy pattern deploys the proxy as a separate container or process next to the service, often sharing the same network environment.
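In Kubernetes terms, the correct diagram corresponds to one pod holding two containers. A minimal sketch (names and images are illustrative): because both containers live in the same pod, they share a network namespace and reach each other over localhost.

```python
# Illustrative pod manifest, expressed as a Python dict.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders"},
    "spec": {
        "containers": [
            # The service itself, listening only on localhost.
            {"name": "orders-service", "image": "example/orders:1.0"},
            # The sidecar proxy, terminating external traffic and
            # forwarding it to the service over 127.0.0.1.
            {"name": "proxy-sidecar", "image": "envoyproxy/envoy:v1.30-latest",
             "ports": [{"containerPort": 8080}]},
        ]
    },
}

container_names = [c["name"] for c in pod_manifest["spec"]["containers"]]
```

The key structural detail: the proxy is a *separate* container, not a library inside the service image, yet it is scheduled and scaled as one unit with the service.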
When scaling a microservice horizontally, what must be considered about its sidecar proxies?
Think about how sidecar proxies are deployed relative to service instances.
Since sidecar proxies run alongside each service instance, scaling the service scales the proxies one-to-one: each new instance gets its own sidecar, preserving per-instance routing, security, and observability.
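That one-to-one invariant can be sketched as a scheduling rule (hypothetical helper names): every replica of the service is co-located with its own proxy.

```python
def schedule(service_name, replicas):
    """Co-locate one sidecar proxy with every replica of a service."""
    return [
        {"service": f"{service_name}-{i}", "sidecar": f"{service_name}-proxy-{i}"}
        for i in range(replicas)
    ]

# Scaling the service from 1 to 3 replicas schedules 3 sidecars, not 1.
units = schedule("orders", 3)
```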
Using sidecar proxies adds benefits but also some costs. Which option best describes a main tradeoff?
Consider what running extra processes alongside services means for resources and complexity.
Sidecar proxies improve control over communication but require extra CPU, memory, and operational management.
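To put rough numbers on that cost, here is a quick per-instance ratio using the illustrative figures of a 50 MB / 0.1-core proxy next to a 200 MB / 0.5-core service:

```python
proxy_mem_mb, proxy_cpu = 50, 0.1
service_mem_mb, service_cpu = 200, 0.5

# Per-instance overhead of the sidecar, relative to the service it accompanies.
mem_overhead = proxy_mem_mb / service_mem_mb   # 25% extra memory
cpu_overhead = proxy_cpu / service_cpu         # 20% extra CPU
```

A 20-25% per-instance resource surcharge is the kind of concrete tradeoff the question is probing for, alongside the operational burden of managing the extra processes.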
A service runs 100 instances. Each sidecar proxy uses about 50 MB of memory and 0.1 CPU cores; each service instance uses 200 MB of memory and 0.5 CPU cores. What is the total additional memory and CPU needed for all the sidecar proxies?
Multiply the per-instance proxy resource by the number of instances.
100 proxies × 50 MB = 5000 MB = 5 GB of memory; 100 proxies × 0.1 CPU cores = 10 CPU cores total.
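The arithmetic above, as a quick script:

```python
instances = 100
proxy_mem_mb = 50        # memory per sidecar proxy
proxy_cpu_cores = 0.1    # CPU per sidecar proxy

total_mem_mb = instances * proxy_mem_mb        # 5000 MB = 5 GB
total_cpu_cores = instances * proxy_cpu_cores  # 10 cores
```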