
Microservices characteristics - Scalability & System Analysis

Growth Table: Microservices Characteristics Scaling

| Users / Traffic | Service Count | Deployment Complexity | Data Management | Communication | Monitoring & Logging |
| --- | --- | --- | --- | --- | --- |
| 100 users | Few (5-10) | Simple, manual deploys | Single DB or shared DB | Simple REST calls | Basic logs, manual checks |
| 10,000 users | Moderate (20-50) | Automated CI/CD pipelines | Database per service starts | REST + message queues | Centralized logging, alerts |
| 1,000,000 users | Many (100+) | Fully automated, container orchestration | Polyglot persistence, sharding | Asynchronous messaging, event-driven | Distributed tracing, metrics dashboards |
| 100,000,000 users | Hundreds to thousands | Multi-cluster, multi-region deployments | Data partitioning, CQRS, event sourcing | High-throughput event buses, service mesh | AI-driven monitoring, anomaly detection |
First Bottleneck

At small scale, the first bottleneck is often service communication latency: when services call each other synchronously, a single user request pays the sum of every hop in the call chain.

As users grow, database scaling becomes the bottleneck: each service's data store sees increasing load, and keeping data consistent across services adds complexity.

At large scale, deployment and operational complexity breaks first, because teams must manage many services, versions, and interdependencies.
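The cost of synchronous call chains can be sketched with a toy model (the per-hop latencies below are hypothetical; real numbers depend on your network and services):

```python
# Toy model: one user request fanning through four downstream services.
# Synchronous chaining pays the SUM of per-hop latencies; issuing
# independent calls concurrently pays only the SLOWEST hop.
hop_latencies_ms = [20, 35, 15, 40]  # hypothetical per-service latencies

sequential_ms = sum(hop_latencies_ms)  # each call waits for the previous one
concurrent_ms = max(hop_latencies_ms)  # independent calls issued in parallel

print(f"sequential chain:  {sequential_ms} ms")   # 110 ms
print(f"concurrent fan-out: {concurrent_ms} ms")  # 40 ms
```

This is why the scaling solutions below push toward asynchronous and parallel communication: the same four hops cost 110 ms chained but only 40 ms fanned out.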

Scaling Solutions
  • Horizontal scaling: Add more instances of services behind load balancers.
  • Service decomposition: Split large services into smaller focused ones.
  • Asynchronous communication: Use message queues and event buses to reduce latency.
  • Database per service: Isolate data to reduce contention and enable independent scaling.
  • Container orchestration: Use Kubernetes or similar to automate deployment and scaling.
  • Service mesh: Manage service-to-service communication, retries, and security.
  • Centralized monitoring: Implement distributed tracing and metrics aggregation.
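The asynchronous-communication item above can be sketched with Python's standard-library queue as a stand-in for a real broker such as RabbitMQ or Kafka (the service names are hypothetical):

```python
import queue
import threading

# Minimal stand-in for a message broker: the producer ("order service")
# enqueues events and returns immediately; the consumer ("email service")
# processes them on its own thread, decoupling the two services.
events = queue.Queue()
processed = []

def email_service():
    while True:
        event = events.get()
        if event is None:      # sentinel: shut down the consumer
            break
        processed.append(f"sent receipt for {event}")
        events.task_done()

worker = threading.Thread(target=email_service)
worker.start()

# The order service publishes and moves on without waiting for delivery.
for order_id in ("order-1", "order-2"):
    events.put(order_id)

events.join()   # block until the backlog is drained
events.put(None)
worker.join()
print(processed)
```

The key property is that the producer's latency no longer depends on the consumer's speed; the queue absorbs bursts, which is exactly the contention the "first bottleneck" section describes.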
Back-of-Envelope Cost Analysis

Assume 1 million concurrent users, each generating 10 requests per second: total load = 10 million RPS.

If each service instance handles ~2,000 RPS, you need ~5,000 instances distributed across services.

Database load is split per service; if each database node sustains ~5,000 QPS, the aggregate traffic still implies thousands of shards or replicas.

Network bandwidth must support inter-service calls; estimate 1 Gbps per 1000 instances.

Storage grows with data per service; consider tiered storage and archiving.
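The arithmetic above can be written out directly (the throughput figures are the assumptions stated in this estimate, not measured numbers):

```python
import math

# Assumptions from the back-of-envelope estimate above.
users = 1_000_000
rps_per_user = 10
rps_per_instance = 2_000   # assumed capacity of one service instance
qps_per_db_node = 5_000    # assumed capacity of one DB shard/replica

total_rps = users * rps_per_user                     # 10,000,000 RPS
instances = math.ceil(total_rps / rps_per_instance)  # ~5,000 instances

# Upper bound: as if every request touched the database.
db_nodes = math.ceil(total_rps / qps_per_db_node)    # ~2,000 DB nodes

# Rule of thumb from the text: ~1 Gbps per 1,000 instances.
bandwidth_gbps = instances / 1_000                   # ~5 Gbps

print(total_rps, instances, db_nodes, bandwidth_gbps)
```

In an interview, walking through this arithmetic aloud shows you can connect user-level assumptions to infrastructure counts.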

Interview Tips

Start by explaining microservices basics: independent deployability, decentralized data, and communication patterns.

Discuss scaling challenges at different user levels and identify the first bottleneck clearly.

Propose targeted solutions like asynchronous messaging and container orchestration.

Use real numbers to justify your scaling approach and show awareness of operational complexity.

Self Check Question

Your database handles 1000 QPS. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Add read replicas and implement caching to reduce direct DB load before considering sharding or redesign.
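The "add caching first" answer is the cache-aside pattern, sketched below with a dictionary standing in for a cache such as Redis (`load_from_db` is a hypothetical loader, not a real API):

```python
# Cache-aside: check the cache first, fall back to the database on a
# miss, then populate the cache so repeat reads never touch the DB.
cache = {}
db_reads = 0

def load_from_db(key):
    # Hypothetical database read; counts how often the DB is actually hit.
    global db_reads
    db_reads += 1
    return f"row-for-{key}"

def get(key):
    if key in cache:
        return cache[key]       # cache hit: DB is not touched
    value = load_from_db(key)   # cache miss: read through
    cache[key] = value
    return value

for _ in range(10):
    get("user:42")              # 10 reads, but only 1 DB query

print(db_reads)  # 1
```

For read-heavy traffic this can absorb most of a 10x growth before sharding or redesign is needed, which is why it comes first in the answer.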

Key Result
Microservices scale by decomposing functionality into independent services, but the first bottlenecks appear in communication latency and database load. Solutions include asynchronous messaging, a database per service, and container orchestration to manage the resulting operational complexity.