In a microservices architecture, what is the primary purpose of traffic routing?
Think about how requests find the right service to handle them.
Traffic routing ensures that requests are sent to the correct microservice based on defined rules like URL paths, headers, or other conditions.
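A minimal sketch of rule-based routing: match a request's URL path prefix and headers against an ordered rule list, then forward to the mapped service. All service names and the rule schema here are hypothetical, not from any particular router.

```python
# Ordered routing rules: first match wins. More specific rules
# (e.g., header-gated beta routes) come before general ones.
ROUTES = [
    {"path_prefix": "/orders", "headers": {}, "service": "order-service"},
    {"path_prefix": "/users", "headers": {"x-beta": "true"}, "service": "user-service-beta"},
    {"path_prefix": "/users", "headers": {}, "service": "user-service"},
]

def route(path: str, headers: dict) -> str:
    """Return the target service for a request, falling back to a default."""
    for rule in ROUTES:
        if path.startswith(rule["path_prefix"]) and all(
            headers.get(k) == v for k, v in rule["headers"].items()
        ):
            return rule["service"]
    return "default-service"

print(route("/orders/42", {}))                # order-service
print(route("/users/7", {"x-beta": "true"}))  # user-service-beta
print(route("/users/7", {}))                  # user-service
```

Rule ordering matters: placing the header-gated rule before the general `/users` rule is what lets beta traffic peel off while everything else flows normally.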
You want to deploy a new version of a microservice and gradually shift 20% of traffic to it while keeping 80% on the old version. Which traffic splitting method is best suited for this?
Consider a method that can precisely control traffic percentages.
Weighted routing allows precise control over what percentage of traffic goes to each version, making it ideal for gradual rollouts such as canary releases.
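The 80/20 split can be sketched as a weighted random choice per request; over many requests the observed split converges on the configured weights. The version names and weight table are illustrative.

```python
import random

# Configured weights for the old and new versions (80/20 rollout).
WEIGHTS = {"v1": 80, "v2": 20}

def pick_version(rng: random.Random) -> str:
    """Choose a version for one request in proportion to its weight."""
    targets, weights = zip(*WEIGHTS.items())
    return rng.choices(targets, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
sample = [pick_version(rng) for _ in range(10_000)]
print(sample.count("v2") / len(sample))  # ≈ 0.20
```

Shifting more traffic to the new version is then just a weight change (e.g., 50/50, then 0/100), with no code changes on either service.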
Your microservices system experiences sudden spikes in traffic. Which design choice helps maintain efficient traffic routing and splitting at scale?
Think about decentralizing routing logic to handle scale.
A distributed service mesh with sidecar proxies offloads routing decisions to each service instance, improving scalability and fault tolerance.
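A toy sketch of the sidecar idea, assuming a simplified model: a control plane pushes routing config to per-instance proxies, and each proxy then routes locally with no central hop on the request path. The class and method names are hypothetical, not any real mesh's API.

```python
class SidecarProxy:
    """Per-instance proxy holding a local copy of the routing table."""

    def __init__(self) -> None:
        self.routes: dict[str, str] = {}

    def apply_config(self, routes: dict[str, str]) -> None:
        # Local copy: routing keeps working even if the control plane is down.
        self.routes = dict(routes)

    def route(self, path_prefix: str) -> str:
        # Decision is made here, in-process, not at a central router.
        return self.routes.get(path_prefix, "default-service")

class ControlPlane:
    """Distributes routing config to all registered sidecars."""

    def __init__(self) -> None:
        self.proxies: list[SidecarProxy] = []

    def register(self, proxy: SidecarProxy) -> None:
        self.proxies.append(proxy)

    def push(self, routes: dict[str, str]) -> None:
        for p in self.proxies:
            p.apply_config(routes)

cp = ControlPlane()
proxies = [SidecarProxy() for _ in range(3)]
for p in proxies:
    cp.register(p)
cp.push({"/orders": "order-service"})
print(proxies[0].route("/orders"))  # order-service
```

The scalability point is visible in the structure: the control plane is only on the configuration path, so a traffic spike adds load to the many sidecars (which scale with the instances) rather than to a single routing bottleneck.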
Which is a key tradeoff when using client-side traffic splitting versus server-side traffic splitting?
Consider who controls the routing decisions and how that affects consistency.
Client-side splitting lets each client decide where to send its requests, which is flexible and avoids an extra network hop, but can lead to uneven traffic distribution if clients behave differently (for example, some hold stale routing configs). Server-side splitting centralizes the decision, giving consistent splits at the cost of routing every request through an intermediary.
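The unevenness risk can be simulated with an assumed scenario: half the clients hold a stale config and still send everything to v1, so the aggregate split lands well short of the intended 80/20. The numbers are illustrative.

```python
import random

def client_pick(rng: random.Random, stale: bool) -> str:
    """One client-side routing decision; stale clients never saw the new weights."""
    if stale:
        return "v1"
    return rng.choices(["v1", "v2"], weights=[80, 20])[0]

rng = random.Random(1)  # seeded for reproducibility
# Alternate stale and up-to-date clients across 10,000 requests.
requests = [client_pick(rng, stale=(i % 2 == 0)) for i in range(10_000)]
print(requests.count("v2") / len(requests))  # ≈ 0.10, half the intended 0.20
```

With server-side splitting the same rollout would hit 20% regardless of client state, which is the consistency side of the tradeoff.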
Your microservices system expects 1 million requests per minute. Each routing decision takes 1 millisecond of CPU time. How many CPU cores are needed to handle routing without delay, assuming each core can process 1000 routing decisions per second?
Calculate requests per second and divide by processing capacity per core.
1,000,000 requests per minute ÷ 60 = 16,666.67 requests per second. At 1 millisecond per decision, each core handles 1,000 decisions per second, so 16,666.67 ÷ 1,000 = 16.67 cores, rounded up to 17 cores.
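The same capacity arithmetic as a quick check, using the figures stated in the question:

```python
import math

REQUESTS_PER_MINUTE = 1_000_000
DECISIONS_PER_CORE_PER_SEC = 1000  # 1 ms of CPU per routing decision

requests_per_second = REQUESTS_PER_MINUTE / 60  # ≈ 16,666.67
cores = math.ceil(requests_per_second / DECISIONS_PER_CORE_PER_SEC)
print(cores)  # 17
```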