
Traffic management (routing, splitting) in Microservices - Scalability & System Analysis

Scalability Analysis - Traffic management (routing, splitting)
Traffic Growth and System Changes
| Users / Traffic | Routing Complexity | Splitting Use Cases | Infrastructure Needs | Monitoring & Control |
|---|---|---|---|---|
| 100 users | Simple routing rules, mostly static | Rare, manual splitting for testing | Single load balancer, minimal proxies | Basic logs and alerts |
| 10,000 users | Dynamic routing based on service health | Canary releases, A/B testing starts | Multiple load balancers, API gateways | Real-time monitoring dashboards |
| 1,000,000 users | Advanced routing with weighted splits, geo-routing | Automated traffic splitting for experiments | Distributed proxies, service mesh adoption | Automated anomaly detection, tracing |
| 100,000,000 users | Global traffic management, multi-region routing | Complex multi-dimensional splits (device, region, version) | Global DNS, edge proxies, multi-cloud | AI-driven traffic control, self-healing |
First Bottleneck

At low to medium scale, the first bottleneck is the routing layer itself: API gateways and load balancers. As routing rules multiply and traffic volume climbs, these components become overwhelmed, showing up as increased latency or outright routing failures.

As traffic grows, service discovery and configuration management also become bottlenecks, since routing decisions depend on up-to-date service health and versions.
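To make the dependency on service health concrete, here is a minimal sketch of health-aware routing. The registry contents and instance names are hypothetical; in a real system the health flags would be fed by a discovery service (e.g. Consul or etcd) rather than hard-coded.

```python
import random

# Hypothetical in-memory service registry: instance address -> healthy flag.
# A real discovery system would keep these flags current via health checks.
registry = {
    "orders-v1:10.0.0.1": True,
    "orders-v1:10.0.0.2": False,  # failed its last health check
    "orders-v1:10.0.0.3": True,
}

def pick_instance(registry):
    """Route a request only to instances currently marked healthy."""
    healthy = [name for name, ok in registry.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy instances available")
    return random.choice(healthy)

print(pick_instance(registry))  # never returns the unhealthy 10.0.0.2
```

If the registry data is stale, requests get routed to dead instances, which is why configuration and discovery freshness become bottlenecks in their own right.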

Scaling Solutions
  • Horizontal scaling: Add more instances of API gateways and proxies to distribute routing load.
  • Service mesh: Offload routing and splitting logic to sidecars for decentralized control.
  • Caching routing decisions: Reduce repeated lookups by caching routing rules locally.
  • Weighted routing and traffic splitting: Use dynamic weights to gradually shift traffic during deployments.
  • Global traffic management: Use DNS-based geo-routing and edge proxies for global scale.
  • Automation: Automate routing updates and health checks to avoid stale routes.
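The weighted routing technique above can be sketched as a cumulative-weight draw. This is a simplified illustration, not any particular gateway's implementation; the version names and the 95/5 canary split are assumptions for the example.

```python
import random

def split_traffic(weights, rng=random.random):
    """Pick a version according to traffic weights (fractions summing to ~1.0).

    weights: dict mapping version name -> fraction of traffic it receives.
    """
    r = rng()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # fall through to last version on rounding error

# Canary rollout: gradually shift traffic by sending 5% of requests to v2.
weights = {"v1": 0.95, "v2": 0.05}
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[split_traffic(weights)] += 1
print(counts)  # roughly 9,500 for v1 and 500 for v2
```

Shifting the weights over time (5% → 25% → 50% → 100%) turns this into a gradual deployment, and dropping the canary weight back to zero is the rollback path.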
Back-of-Envelope Cost Analysis
  • At 1M concurrent users, assuming each issues ~1 request per second at peak, expect ~1 million requests per second (QPS).
  • Each API gateway instance can handle ~5,000 QPS, so ~200 instances needed for routing layer.
  • Service mesh sidecars add CPU and memory overhead per service instance.
  • Bandwidth depends on request size; for 1 KB requests, 1M QPS = ~1 GB/s network traffic.
  • Storage for routing configs and logs grows with number of rules and traffic volume.
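The arithmetic behind these estimates can be checked directly. The per-instance capacity and request size are the same assumptions used in the bullets above, not measured figures.

```python
# Back-of-envelope capacity math for the routing layer (assumed inputs).
users = 1_000_000
rps_per_user = 1                  # assumed peak request rate per user
qps = users * rps_per_user        # -> 1,000,000 QPS at peak

gateway_capacity = 5_000          # assumed QPS one gateway instance handles
instances = -(-qps // gateway_capacity)  # ceiling division -> 200 instances

request_size_kb = 1               # assumed average request size
bandwidth_gb_s = qps * request_size_kb / 1_000_000  # -> ~1 GB/s

print(f"{qps=:,} {instances=} {bandwidth_gb_s=} GB/s")
```

In an interview, walking through this calculation aloud (users × rate → QPS → instances, QPS × size → bandwidth) is usually worth more than the final numbers.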
Interview Tip

Start by explaining the routing and splitting needs at different traffic scales. Identify the first bottleneck clearly (usually routing layer). Then discuss specific scaling techniques like horizontal scaling, service mesh, and automation. Use real numbers to justify your approach. Finally, mention monitoring and fallback strategies to maintain reliability.

Self Check

Question: Your routing layer handles 1,000 QPS. Traffic grows 10x to 10,000 QPS. What is your first action and why?

Answer: Add more routing instances (horizontal scaling) behind a load balancer to spread the traffic. At roughly 5,000 QPS per instance, 2-3 instances comfortably absorb 10,000 QPS, preventing overload and keeping latency low.
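The horizontal-scaling answer can be illustrated with a simple round-robin spread across the scaled-out instances. The balancer class and instance names here are illustrative, standing in for what a real load balancer does.

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests evenly across routing instances."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        return next(self._cycle)

# After scaling from 1 to 3 gateway instances, each sees ~3,333 QPS
# of the 10,000 QPS load instead of absorbing all of it alone.
lb = RoundRobinBalancer(["gw-1", "gw-2", "gw-3"])
assignments = [lb.next_instance() for _ in range(9)]
print(assignments)  # each instance appears exactly 3 times
```

Round-robin is the simplest policy; production balancers often weight by instance capacity or outstanding connections instead.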

Key Result
Traffic management systems first hit bottlenecks at the routing layer as traffic grows; horizontal scaling and service mesh adoption are key to maintaining efficient routing and splitting at scale.