| Users | Data Volume | Service Count | Complexity | Communication |
|---|---|---|---|---|
| 100 users | Small (MBs) | Few (1-3) | Low - simple domains | Direct calls, simple APIs |
| 10,000 users | Medium (GBs) | Several (5-10) | Moderate - multiple bounded contexts | REST/gRPC with retries |
| 1,000,000 users | Large (TBs) | Many (20+) | High - complex domain models | Event-driven, async messaging |
| 100,000,000 users | Very Large (PBs) | Hundreds | Very High - multiple teams/domains | Advanced messaging, CQRS, eventual consistency |
Domain-Driven Design (DDD) basics in Microservices - Scalability & System Analysis
At small scale the system typically runs as a monolith with a single shared database, so that database is the first bottleneck as load grows: all domain data lives in one place, causing lock contention and slow queries.
As users and services grow, the complexity of inter-service communication and data consistency across bounded contexts becomes the bottleneck.
At very large scale, network latency and message broker throughput limit system responsiveness.
- Database Scaling: Use database per bounded context to isolate data and reduce contention.
- Service Decomposition: Split monolith into microservices aligned with bounded contexts.
- Asynchronous Communication: Use event-driven messaging (Kafka, RabbitMQ) to decouple services.
- CQRS and Event Sourcing: Separate read/write models to optimize performance and scalability.
- API Gateways and Load Balancers: Manage traffic and provide single entry points.
- Caching: Use Redis or similar for frequently read data to reduce database load.
- Partitioning and Sharding: Distribute data across multiple databases by domain or customer.
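To make the asynchronous-communication idea concrete, here is a minimal in-process sketch of event-driven decoupling between two bounded contexts. The service and event names (`OrderService`, `ShippingService`, `OrderPlaced`) are illustrative assumptions; in production the `EventBus` role is played by a broker such as Kafka or RabbitMQ.

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process event bus standing in for a real message broker.
class EventBus:
    def __init__(self):
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._handlers[event_type]:
            handler(payload)

# Two bounded contexts that never call each other directly.
class OrderService:
    def __init__(self, bus: EventBus):
        self.bus = bus

    def place_order(self, order_id: str, item: str) -> None:
        # ...persist to the Orders context's own database, then announce...
        self.bus.publish("OrderPlaced", {"order_id": order_id, "item": item})

class ShippingService:
    def __init__(self, bus: EventBus):
        self.shipments: list[dict] = []
        bus.subscribe("OrderPlaced", self.on_order_placed)

    def on_order_placed(self, event: dict) -> None:
        self.shipments.append({"order_id": event["order_id"], "status": "pending"})

bus = EventBus()
orders = OrderService(bus)
shipping = ShippingService(bus)
orders.place_order("o-1", "keyboard")
print(shipping.shipments)  # [{'order_id': 'o-1', 'status': 'pending'}]
```

Because `OrderService` only publishes events, it has no compile-time dependency on shipping; new consumers (billing, analytics) can subscribe without changing the producer.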
Assuming 1 million users with 10 requests per second each:
- Total load: 10 million requests per second (1M users x 10 RPS, distributed across services)
- Database QPS per instance: 5,000 -> Need ~2,000 DB instances or sharding
- Message broker throughput: Kafka can handle ~1 million messages/sec per cluster -> multiple clusters needed
- Storage: TBs to PBs depending on domain data retention
- Network bandwidth: 1 Gbps per server -> horizontal scaling with load balancers
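The back-of-envelope numbers above can be checked with a few lines of arithmetic; all inputs are the section's stated assumptions, not measurements.

```python
import math

users = 1_000_000
rps_per_user = 10
total_rps = users * rps_per_user                # 10,000,000 requests/sec

qps_per_db_instance = 5_000
db_instances = total_rps // qps_per_db_instance  # ~2,000 instances (or shards)

kafka_msgs_per_cluster = 1_000_000               # assumed per-cluster ceiling
kafka_clusters = math.ceil(total_rps / kafka_msgs_per_cluster)  # 10 clusters

print(total_rps, db_instances, kafka_clusters)
```

The 10 RPS-per-user figure is a deliberately aggressive assumption; with a more typical fraction of active users, the required instance counts shrink proportionally.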
Start by explaining the concept of bounded contexts and how DDD helps manage complexity by aligning microservices with business domains.
Discuss how scaling affects data consistency and communication patterns.
Outline bottlenecks and propose concrete solutions like database per context, event-driven architecture, and CQRS.
Use real-life analogies like teams working on different parts of a product to explain bounded contexts.
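A short sketch can back up that explanation: in DDD, the same real-world entity is modeled independently inside each bounded context, and the contexts share only an identifier. The `SalesCustomer`/`SupportCustomer` names are hypothetical.

```python
from dataclasses import dataclass

# Sales context: cares about purchasing behavior.
@dataclass
class SalesCustomer:
    customer_id: str
    lifetime_value: float

# Support context: cares about open tickets.
@dataclass
class SupportCustomer:
    customer_id: str
    open_tickets: int

# The contexts agree on the ID but never on the model, so each team
# can evolve its own schema (and database) independently.
sales_view = SalesCustomer("c-42", lifetime_value=1250.0)
support_view = SupportCustomer("c-42", open_tickets=2)
print(sales_view.customer_id == support_view.customer_id)  # True
```

This is the code-level analogue of the "different teams, different parts of the product" analogy: each context owns its vocabulary and storage.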
Your database handles 1000 QPS. Traffic grows 10x. What do you do first?
Answer: Introduce read replicas and caching to reduce load, then consider splitting the database by bounded contexts to scale horizontally.
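The first step in that answer, caching plus read replicas, is commonly implemented with the cache-aside pattern; a minimal sketch, with in-memory dicts standing in for Redis and the replica databases (all names are illustrative):

```python
import random

cache: dict[str, str] = {}                 # stand-in for Redis
primary_db = {"user:1": "Ada"}
read_replicas = [dict(primary_db), dict(primary_db)]  # replicated copies

def get_user(key: str) -> str:
    if key in cache:                        # 1. try the cache first
        return cache[key]
    replica = random.choice(read_replicas)  # 2. miss: read from a replica
    value = replica[key]
    cache[key] = value                      # 3. populate the cache for next time
    return value

print(get_user("user:1"))  # "Ada" (replica read, then cached)
print(get_user("user:1"))  # "Ada" (served from cache)
```

Reads that hit the cache or a replica never touch the primary, which absorbs most of the 10x growth; only once read scaling is exhausted does splitting the database by bounded context become necessary.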