Microservices · System Design · ~25 mins

Single responsibility per service in Microservices - System Design Exercise

Design: Microservices with Single Responsibility
Design focuses on defining microservices boundaries, communication, and data ownership. Infrastructure details like cloud provider or CI/CD pipelines are out of scope.
Functional Requirements
FR1: Each microservice should have one clear responsibility or business capability.
FR2: Services must communicate with each other to fulfill user requests.
FR3: The system should allow independent deployment and scaling of each service.
FR4: Services should handle failures gracefully without affecting others.
FR5: Data owned by each service should be encapsulated and not shared directly.
Non-Functional Requirements
NFR1: Support up to 10,000 concurrent users.
NFR2: API response latency p99 under 300ms.
NFR3: Availability target of 99.9% uptime.
NFR4: Services must be loosely coupled and independently deployable.
Think Before You Design
Questions to Ask
❓ What are the core business capabilities, and how should they map to service boundaries?
❓ Which interactions must be synchronous, and which can be handled asynchronously via events?
❓ What consistency guarantees are required when data spans multiple services?
❓ What is the expected traffic, and which services need to scale independently?
❓ How should the system behave when a downstream service is slow or unavailable?
Key Components
API Gateway or Service Mesh for routing
Individual microservices each with own database
Message broker for asynchronous communication
Load balancers
Monitoring and logging tools
Design Patterns
Domain-Driven Design (DDD) for service boundaries
Event-driven architecture
Circuit breaker pattern for fault tolerance
Database per service pattern
API Gateway pattern
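The circuit breaker pattern above can be sketched in a few lines. This is a minimal illustration in Python (a real system would use a battle-tested library such as resilience4j for Java or opossum for Node.js; the class and its parameters here are invented for the sketch):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after max_failures consecutive
    failures, then rejects calls immediately ("fails fast") until
    reset_timeout seconds pass, after which one trial call is allowed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result

# Simulated flaky downstream call: always fails.
def flaky_payment_call():
    raise ConnectionError("payment service down")

breaker = CircuitBreaker(max_failures=2, reset_timeout=60.0)
outcomes = []
for _ in range(3):
    try:
        breaker.call(flaky_payment_call)
    except ConnectionError:
        outcomes.append("failed")     # real failure reached the caller
    except RuntimeError:
        outcomes.append("fast-fail")  # breaker rejected the call
```

After two consecutive failures the breaker opens, so the third attempt fails fast without touching the downstream service, protecting both the caller and the struggling dependency.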
Reference Architecture
          +-------------------+          
          |    API Gateway    |          
          +---------+---------+          
                    |                    
    +---------------+---------------+    
    |               |               |    
+---v---+       +---v---+       +---v---+
|User   |       |Order  |       |Payment|
|Service|       |Service|       |Service|
+---+---+       +---+---+       +---+---+
    |               |               |    
+---v---+       +---v---+       +---v---+
|UserDB |       |OrderDB|       |PayDB  |
+-------+       +-------+       +-------+

Services receive synchronous calls routed through the API Gateway and exchange asynchronous events through the message broker.
Components
API Gateway
Nginx or Kong
Routes client requests to appropriate microservices and handles authentication.
User Service
Node.js/Express or Spring Boot
Manages user profiles and authentication.
Order Service
Node.js/Express or Spring Boot
Handles order creation, updates, and queries.
Payment Service
Node.js/Express or Spring Boot
Processes payments and manages payment status.
Databases
PostgreSQL or MongoDB
Each service owns its own database to encapsulate data.
Message Broker
RabbitMQ or Kafka
Enables asynchronous communication between services.
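The broker's publish/subscribe role can be illustrated with a toy in-process event bus; this is a stand-in for RabbitMQ or Kafka, and the topic name and event fields are invented for the example:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process stand-in for a message broker: services
    subscribe handlers to topics and publish events to them."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # A real broker would persist the event and deliver it
        # asynchronously; here handlers run inline for simplicity.
        for handler in self.subscribers[topic]:
            handler(event)

# The Payment Service reacts to order events without the Order
# Service calling it directly, keeping the two loosely coupled.
bus = EventBus()
pending_payments = []
bus.subscribe("order.created", lambda e: pending_payments.append(
    {"order_id": e["order_id"], "status": "PENDING"}))
bus.publish("order.created", {"order_id": 42, "amount": 9.99})
```

The Order Service only knows the topic name, not its consumers, so new subscribers (e.g. a notification service) can be added without changing the publisher.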
Request Flow
1. Client sends request to API Gateway.
2. API Gateway routes request to the appropriate microservice based on URL and method.
3. Microservice processes request using its own database.
4. If needed, microservice publishes events to message broker for other services.
5. Other services subscribe to relevant events and update their state asynchronously.
6. Microservice returns response to API Gateway.
7. API Gateway sends response back to client.
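The routing in step 2 can be sketched as a prefix-based dispatch table (the paths and service names below are illustrative; a real gateway such as Nginx or Kong expresses this in configuration):

```python
# Toy API Gateway dispatch: map URL prefixes to backend services.
ROUTES = {
    "/users": "user-service",
    "/orders": "order-service",
    "/payments": "payment-service",
}

def route(path):
    """Return the backend service for a request path, mirroring
    step 2 of the request flow (routing by URL)."""
    for prefix, service in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return service
    raise LookupError(f"no route for {path}")
```

For example, `route("/orders/17")` resolves to the Order Service, while an unknown path is rejected at the gateway rather than reaching any backend.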
Database Schema
User Service: User(id PK, name, email, password_hash)
Order Service: Order(id PK, user_id, product_id, quantity, status)
Payment Service: Payment(id PK, order_id, amount, status, payment_method)
Each service owns its schema and does not share tables directly. Cross-service references such as user_id and order_id are stored as plain values, not database-level foreign keys, since the referenced tables live in another service's database.
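The database-per-service rule can be demonstrated concretely. In the sketch below, two in-memory SQLite databases stand in for each service's own PostgreSQL instance (the sample rows are invented): the Order Service stores user_id as a plain value and cannot JOIN against the User Service's table.

```python
import sqlite3

# Each service owns its own database; two in-memory SQLite
# connections stand in for independent PostgreSQL instances.
user_db = sqlite3.connect(":memory:")
order_db = sqlite3.connect(":memory:")

user_db.execute(
    "CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
order_db.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, "
    "product_id INTEGER, quantity INTEGER, status TEXT)")

user_db.execute("INSERT INTO user VALUES (1, 'Ada', 'ada@example.com')")
# user_id = 1 is just data here: there is no cross-database foreign
# key, so the reference is validated via API calls or events between
# services, not by the database engine.
order_db.execute("INSERT INTO orders VALUES (10, 1, 7, 2, 'NEW')")

status = order_db.execute(
    "SELECT status FROM orders WHERE id = 10").fetchone()[0]
```

Queries that need both user and order data must be composed at the API layer (or via an API Gateway aggregation), which is exactly the coupling trade-off the pattern accepts in exchange for independent schemas and deployments.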
Scaling Discussion
Bottlenecks
API Gateway can become a single point of failure or bottleneck.
Database performance under high load for each service.
Message broker throughput limits.
Inter-service communication latency.
Deployment complexity as number of services grows.
Solutions
Use multiple API Gateway instances behind a load balancer.
Scale databases vertically or use read replicas; consider sharding if needed.
Deploy a clustered message broker with partitioning.
Use caching and optimize communication patterns (batching, async).
Automate deployment with container orchestration (Kubernetes) and use service discovery.
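Graceful handling of transient failures (FR4) typically pairs circuit breakers with retries. A minimal exponential-backoff retry with jitter might look like the sketch below (the helper name and delay values are invented, and the delays are shortened for illustration):

```python
import random
import time

def retry(fn, attempts=3, base_delay=0.05):
    """Call fn, retrying on exception with exponential backoff plus
    jitter so that concurrent clients do not retry in lockstep."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the failure
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, delay))

# Simulated downstream call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky)
```

Retries should only wrap idempotent or safely repeatable operations, and should be bounded; unbounded retries against a struggling service amplify load, which is why they are combined with circuit breakers.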
Interview Tips
Time: Spend 10 minutes clarifying requirements and scope, 20 minutes designing service boundaries and communication, 10 minutes discussing scaling and trade-offs, 5 minutes summarizing.
Explain importance of single responsibility to reduce complexity and improve maintainability.
Discuss how data ownership per service avoids tight coupling.
Describe communication methods and their trade-offs (sync vs async).
Highlight fault tolerance with circuit breakers and retries.
Mention scaling strategies and deployment independence.