Microservices · System Design · ~25 mins

Service decomposition strategies in Microservices - System Design Exercise

Design: Service Decomposition Strategies for Microservices
Focus on strategies for breaking down a monolith into microservices, including service boundaries, communication patterns, and data management. Out of scope: detailed implementation of each microservice.
Functional Requirements
FR1: Decompose a monolithic application into microservices
FR2: Ensure each service has a single responsibility
FR3: Enable independent deployment and scaling of services
FR4: Maintain data consistency and integrity across services
FR5: Support clear communication between services
Non-Functional Requirements
NFR1: Handle up to 10,000 concurrent users
NFR2: API response latency p99 under 300ms
NFR3: Availability target of 99.9% uptime
NFR4: Services must be loosely coupled
NFR5: Data storage should be decentralized per service
Think Before You Design
Questions to Ask
❓ What business capabilities (bounded contexts) does the monolith cover, and which of them change most frequently?
❓ Which operations require strong consistency, and where is eventual consistency acceptable?
❓ What are the expected traffic patterns, and which parts of the system need to scale independently?
❓ Are there shared database tables or cross-module transactions in the current monolith?
❓ Which interactions must be synchronous request/response, and which can be handled asynchronously via events?
Key Components
API Gateway
Service Registry and Discovery
Message Broker for asynchronous communication
Database per service
Load Balancer
Design Patterns
Domain-Driven Design (DDD) for defining service boundaries
Database per service pattern
API Gateway pattern
Event-driven architecture
Saga pattern for distributed transactions
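The Saga pattern above can be sketched in a few lines. This is a minimal, in-process illustration of an orchestration-based saga, not a production implementation: each step pairs an action with a compensating action that undoes it if a later step fails. All "service calls" here are stand-ins that just append to a log.

```python
# Orchestration-based saga sketch: run steps in order; if one fails,
# run the compensations of the completed steps in reverse order.

class SagaFailed(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs; on failure, compensate in reverse."""
    completed = []
    for action, compensation in steps:
        try:
            action()
            completed.append(compensation)
        except Exception as exc:
            for comp in reversed(completed):  # undo in reverse order
                comp()
            raise SagaFailed(str(exc)) from exc

log = []

def reserve_inventory():
    raise RuntimeError("out of stock")  # simulate a failing third step

steps = [
    (lambda: log.append("order created"), lambda: log.append("order cancelled")),
    (lambda: log.append("payment charged"), lambda: log.append("payment refunded")),
    (reserve_inventory, lambda: None),
]

try:
    run_saga(steps)
except SagaFailed:
    pass

print(log)
# ['order created', 'payment charged', 'payment refunded', 'order cancelled']
```

In a real system each action is a local transaction in one service and the orchestrator (or a chain of events, in a choreography-based saga) drives the flow; the key idea is the same: no distributed transaction, only compensations.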
Reference Architecture
                +-------------------+
                |    API Gateway    |
                +---------+---------+
                          |
        +-----------------+-----------------+
        |                 |                 |
+-------v-------+ +-------v-------+ +-------v-------+
|  User Service | | Order Service | |Payment Service|
+-------+-------+ +-------+-------+ +-------+-------+
        |                 |                 |
+-------v-------+ +-------v-------+ +-------v-------+
| User DB       | | Order DB      | | Payment DB    |
+---------------+ +---------------+ +---------------+

Services communicate asynchronously via a message broker for event-driven workflows.
Components
API Gateway
Nginx or Kong
Entry point for clients, routes requests to appropriate services, handles authentication and rate limiting
User Service
Spring Boot / Node.js
Manages user profiles and authentication
Order Service
Spring Boot / Node.js
Handles order creation, updates, and queries
Payment Service
Spring Boot / Node.js
Processes payments and manages payment status
Message Broker
Apache Kafka / RabbitMQ
Enables asynchronous communication and event-driven workflows between services
Databases
PostgreSQL / MongoDB
Each service owns its database to ensure loose coupling and data encapsulation
Request Flow
1. Client sends request to API Gateway.
2. API Gateway routes request to the appropriate microservice based on URL and method.
3. Microservice processes request using its own database.
4. If an action affects other services, microservice publishes an event to the message broker.
5. Other services subscribe to relevant events and update their state accordingly.
6. Microservice returns response to API Gateway.
7. API Gateway sends response back to client.
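Step 2 of the flow, routing by URL, can be sketched as a longest-prefix match over a route table. The paths and service names below are illustrative; real gateways like Nginx or Kong also handle methods, authentication, and rate limiting.

```python
# Path-prefix routing sketch for the API Gateway.

ROUTES = {
    "/users": "user-service",
    "/orders": "order-service",
    "/payments": "payment-service",
}

def route(path):
    """Return the upstream service for a request path (longest prefix wins)."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return ROUTES[prefix]
    return None  # gateway returns 404

print(route("/orders/17"))  # order-service
print(route("/unknown"))    # None
```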
Database Schema
Entities per service:
- User Service: User(id, name, email, password_hash)
- Order Service: Order(id, user_id, product_id, quantity, status)
- Payment Service: Payment(id, order_id, amount, status)
Relationships:
- User Service owns the User entity
- Order Service references users by user_id; because the databases are separate, this is a logical reference validated via API calls or events, not a database-enforced foreign key
- Payment Service references orders by order_id in the same way
Each service manages its own database schema independently.
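The per-service entities can be sketched as plain data classes. The point to notice is that cross-service references (user_id, order_id) are just values; nothing in another service's database enforces them.

```python
from dataclasses import dataclass

@dataclass
class User:  # owned by the User Service
    id: int
    name: str
    email: str
    password_hash: str

@dataclass
class Order:  # owned by the Order Service
    id: int
    user_id: int     # logical reference to the User Service, not a local FK
    product_id: int
    quantity: int
    status: str

@dataclass
class Payment:  # owned by the Payment Service
    id: int
    order_id: int    # logical reference to the Order Service
    amount: float
    status: str

order = Order(id=1, user_id=42, product_id=7, quantity=2, status="pending")
print(order.user_id)  # 42
```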
Scaling Discussion
Bottlenecks
API Gateway can become a single point of failure and bottleneck under high load
Database per service may face scaling challenges with large data volumes
Synchronous communication between services can increase latency and reduce availability
Eventual consistency can cause temporary data mismatches
Service discovery and load balancing complexity increases with number of services
Solutions
Deploy multiple API Gateway instances behind a load balancer for high availability
Use database sharding and read replicas to scale databases
Favor asynchronous communication with message brokers to decouple services
Implement compensating transactions and monitoring to handle eventual consistency
Use service registry tools like Consul or Eureka for dynamic service discovery
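The sharding point above can be sketched as hash-based shard routing: hash a stable key (e.g. a user ID) to pick a shard. The shard count is illustrative, and a real deployment needs a resharding strategy (e.g. consistent hashing) to add shards without mass data movement.

```python
import hashlib

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    """Map a shard key (e.g. 'user:42') to a stable shard number."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key always routes to the same shard, so all of one user's
# rows live in one place.
assert shard_for("user:42") == shard_for("user:42")
print(shard_for("user:42"))
```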
Interview Tips
Time: Spend 10 minutes understanding requirements and clarifying scope, 20 minutes designing service boundaries and communication, 10 minutes discussing scaling and trade-offs, 5 minutes summarizing.
Explain how you identify service boundaries using business domains
Discuss pros and cons of database per service pattern
Describe communication patterns: synchronous vs asynchronous
Highlight importance of loose coupling and independent deployability
Address data consistency challenges and solutions
Mention scalability and fault tolerance considerations