
Request-response vs event-driven in Microservices - Design Approaches Compared

Design: Microservices Communication System
This design focuses on communication patterns between microservices, comparing request-response and event-driven approaches. It excludes internal service logic and database design beyond what messaging requires.
Functional Requirements
FR1: Enable communication between multiple microservices
FR2: Support synchronous interactions where immediate response is needed
FR3: Support asynchronous interactions for decoupled processing
FR4: Ensure reliable message delivery between services
FR5: Handle failures gracefully without data loss
FR6: Allow scaling of services independently
Non-Functional Requirements
NFR1: System should handle up to 10,000 requests per second
NFR2: API response latency for synchronous calls should be under 200ms (p99)
NFR3: Event processing latency should be under 1 second
NFR4: System availability target is 99.9% uptime
NFR5: Services may be deployed across multiple data centers
Think Before You Design
Questions to Ask
❓ Which interactions truly need an immediate response, and which can be processed asynchronously?
❓ What delivery guarantees are required: at-most-once, at-least-once, or exactly-once?
❓ Must events be processed in order, and at what granularity (per key, per topic)?
❓ How should the system behave when a downstream service is slow or unavailable?
❓ Can consumers tolerate duplicate deliveries, or must they be made idempotent?
Key Components
API Gateway or Load Balancer
Service Registry and Discovery
Message Broker (e.g., Kafka, RabbitMQ)
Synchronous HTTP/gRPC communication
Asynchronous event queues/topics
Retry and Dead Letter Queues
Monitoring and Logging tools
Design Patterns
Request-Response pattern for synchronous calls
Event-Driven Architecture for asynchronous communication
Publish-Subscribe pattern
Circuit Breaker for fault tolerance
Message Queuing for decoupling
Idempotency to handle retries
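These patterns are often combined in practice. As one illustration, the Circuit Breaker pattern can be sketched in a few lines of Python; the class name, thresholds, and the `flaky` downstream call below are all illustrative, not a specific library's API:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive errors the
    circuit opens and calls fail fast until reset_timeout seconds elapse."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result

# Two consecutive failures trip the breaker; the next call then fails
# fast without ever reaching the (hypothetical) downstream service.
breaker = CircuitBreaker(max_failures=2, reset_timeout=5.0)

def flaky():
    raise ConnectionError("service B is down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
```

Failing fast is the point: callers stop tying up threads and connections on a service that is already known to be down, which prevents the cascading failures discussed later.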
Reference Architecture
Client
  |
  | HTTP/gRPC Request
  v
API Gateway / Load Balancer
  |
  | Synchronous Request-Response
  v
Microservice A <-----> Microservice B
  |
  | Publishes Event
  v
Message Broker (Kafka/RabbitMQ)
  |
  | Asynchronous Event Delivery
  v
Microservice C, Microservice D (Subscribers)

Monitoring & Logging
Service Registry & Discovery
Components
API Gateway / Load Balancer
Nginx, Envoy, or AWS ALB
Routes client requests to appropriate microservices and balances load
Microservices
Any language/framework supporting HTTP/gRPC and messaging
Business logic units communicating synchronously or asynchronously
Message Broker
Apache Kafka or RabbitMQ
Handles asynchronous event delivery with durability and ordering guarantees
Service Registry & Discovery
Consul, Eureka, or Kubernetes DNS
Allows services to find each other dynamically
Monitoring & Logging
Prometheus, Grafana, ELK Stack
Tracks system health, latency, errors, and message flows
Request Flow
1. The client sends a synchronous request to Microservice A via the API Gateway.
2. Microservice A processes the request and calls Microservice B synchronously over HTTP/gRPC.
3. Microservice B responds immediately to Microservice A.
4. Microservice A returns the response to the client.
5. Separately, Microservice A publishes an event to the Message Broker asynchronously.
6. The Message Broker delivers the event to the subscribed Microservices C and D.
7. Microservices C and D process events independently and acknowledge receipt.
8. If event processing fails, the message is retried or sent to a Dead Letter Queue for manual inspection.
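The retry and dead-letter behavior in step 8 can be sketched as a simplified in-process model; a real broker such as Kafka or RabbitMQ manages redelivery and DLQ routing itself, so the function and names below are illustrative only:

```python
def consume_with_retry(event, handler, max_attempts=3, dead_letter_queue=None):
    """Call handler(event) up to max_attempts times; if every attempt
    fails, route the event to the dead letter queue for inspection."""
    dlq = dead_letter_queue if dead_letter_queue is not None else []
    last_error = None
    for _ in range(max_attempts):
        try:
            handler(event)
            return True  # processed: the consumer would acknowledge here
        except Exception as exc:
            last_error = exc  # failed attempt: try again
    dlq.append({"event": event, "error": str(last_error)})
    return False

dlq = []

def always_fails(event):
    raise ValueError("unparseable payload")

ok = consume_with_retry({"event_id": "e1"}, always_fails, dead_letter_queue=dlq)
# ok is False and the poisoned event now sits in dlq for manual inspection
```

Bounding retries matters: a "poison" message that can never be processed would otherwise block the consumer forever, while the DLQ preserves it so no data is lost (FR5).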
Database Schema
Not applicable as this design focuses on communication patterns. However, message broker stores events with metadata (event_id, timestamp, payload, status). Microservices maintain their own databases for business data.
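As an illustration, the event metadata named above (event_id, timestamp, payload, status) could be modeled as a small record; the dataclass and its status values are a hypothetical sketch, not a broker's actual storage format:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class EventEnvelope:
    """Hypothetical event record carrying the metadata listed above."""
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)
    status: str = "pending"  # e.g. pending -> delivered -> processed

evt = EventEnvelope(payload={"order_id": 42, "action": "created"})
```

A unique event_id is also what makes consumer-side idempotency checks possible, since retried deliveries carry the same id.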
Scaling Discussion
Bottlenecks
API Gateway can become a single point of failure or bottleneck under high load.
Synchronous calls can cause cascading failures if downstream services are slow or down.
Message Broker throughput limits event processing speed.
Service discovery delays can cause communication failures.
Monitoring systems may be overwhelmed with high volume logs and metrics.
Solutions
Use multiple API Gateway instances with load balancing and health checks.
Implement circuit breakers and timeouts to isolate failures in synchronous calls.
Partition topics and scale Message Broker clusters horizontally.
Use caching and local service registries to reduce discovery latency.
Aggregate and sample monitoring data to reduce overhead.
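The timeouts recommended for synchronous calls are commonly paired with retries and exponential backoff. A minimal sketch follows; passing the timeout into `fn` is an assumed convention standing in for whatever the real HTTP/gRPC client enforces:

```python
import time

def call_with_timeout_retry(fn, retries=3, base_delay=0.1, timeout=0.2):
    """Retry a synchronous call with exponential backoff; fn receives
    the timeout so the underlying client could enforce it."""
    delay = base_delay
    for attempt in range(retries):
        try:
            return fn(timeout=timeout)
        except TimeoutError:
            if attempt == retries - 1:
                raise  # attempts exhausted: surface the failure
            time.sleep(delay)
            delay *= 2  # back off: 0.1s, 0.2s, 0.4s, ...

# Simulated downstream that times out twice, then answers.
attempts = {"n": 0}

def sometimes_slow(timeout):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("downstream exceeded %.1fs" % timeout)
    return "ok"

result = call_with_timeout_retry(sometimes_slow, base_delay=0.01)
```

Backoff spreads retries out so a briefly overloaded service is not hammered by synchronized retry storms, which would only deepen the outage.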
Interview Tips
Time: Spend 10 minutes understanding requirements and clarifying synchronous vs asynchronous needs, 20 minutes designing architecture and data flow, 10 minutes discussing scaling and trade-offs, 5 minutes summarizing.
Explain difference between request-response (synchronous) and event-driven (asynchronous) communication.
Discuss when to use each pattern based on latency and coupling requirements.
Highlight components like API Gateway, Message Broker, and Service Discovery.
Describe failure handling with retries, circuit breakers, and dead letter queues.
Address scaling challenges and solutions for high throughput and availability.