
Popular gateways (Kong, AWS API Gateway, Nginx) in Microservices - System Design Exercise

Design: API Gateway System for Microservices
Design the API gateway layer that manages traffic between clients and microservices. Exclude internal microservice design and database details.
Functional Requirements
FR1: Route client requests to appropriate microservices
FR2: Handle authentication and authorization
FR3: Provide rate limiting to prevent abuse
FR4: Enable request and response transformation
FR5: Support logging and monitoring of API calls
FR6: Allow easy addition or removal of microservices
FR7: Ensure high availability and low latency
Non-Functional Requirements
NFR1: Must handle 10,000 concurrent requests
NFR2: API response latency p99 under 200ms
NFR3: 99.9% uptime availability
NFR4: Support REST and WebSocket protocols
Think Before You Design
Questions to Ask
❓ What client types (web, mobile, third-party partners) does the gateway serve, and do they need different authentication schemes?
❓ Are rate limits applied per client, per API key, or per endpoint?
❓ Do any services need protocols beyond REST and WebSocket, such as gRPC or server-sent events?
❓ Is the deployment single-region or multi-region, and where do clients connect from?
❓ How dynamic is the service topology; how often are microservices added or removed?
❓ What happens if the gateway is briefly unavailable; is graceful degradation acceptable?
Key Components
API Gateway server (Kong, AWS API Gateway, or Nginx)
Authentication and authorization module
Rate limiting and throttling component
Load balancer
Service registry or discovery system
Logging and monitoring tools
Configuration management
Design Patterns
Reverse proxy pattern
Circuit breaker pattern
Token-based authentication
Rate limiting algorithms (token bucket, leaky bucket)
Blue-green deployment for gateway updates
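The token bucket algorithm named above can be sketched in a few lines; the capacity and refill rate below are illustrative values, not recommendations:

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`; refills at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)  # 5-request burst, 1 req/s sustained
results = [bucket.allow() for _ in range(6)]
# First five pass immediately; the sixth is throttled until tokens refill.
```

The leaky bucket differs only in that it smooths output to a constant drain rate instead of permitting bursts.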
Reference Architecture
Clients → API Gateway → Load Balancer → Microservices Cluster → Databases
The gateway delegates credential checks to a separate Auth Service before routing.
Components
API Gateway
Kong / AWS API Gateway / Nginx
Entry point for all client requests, routes to microservices, handles protocol translation
Authentication Module
JWT, OAuth plugins or custom middleware
Verify client identity and permissions before forwarding requests
Rate Limiter
Built-in gateway plugins or external service
Limit number of requests per client to prevent abuse
Load Balancer
Nginx or cloud load balancer
Distribute incoming requests evenly across microservice instances
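A minimal round-robin selection sketch with basic health awareness; instance names are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Cycles through instances in order, skipping any marked unhealthy."""

    def __init__(self, instances: list[str]):
        self.instances = instances
        self.healthy = set(instances)
        self._cycle = itertools.cycle(instances)

    def mark_down(self, instance: str) -> None:
        self.healthy.discard(instance)

    def pick(self) -> str:
        # At most one full pass; if nothing is healthy, fail loudly.
        for _ in range(len(self.instances)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy instances available")
```

Production balancers typically add weighted or least-connections strategies, but the skip-unhealthy loop is the core of failover.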
Service Registry
Consul, Eureka, or built-in discovery
Keep track of available microservice instances for routing
Logging and Monitoring
Prometheus, Grafana, ELK stack
Collect metrics and logs for observability and troubleshooting
Request Flow
1. Client sends request to API Gateway
2. API Gateway authenticates request using Authentication Module
3. API Gateway checks rate limits via Rate Limiter
4. If allowed, API Gateway routes request to Load Balancer
5. Load Balancer forwards request to appropriate microservice instance
6. Microservice processes request and sends response back
7. API Gateway logs request and response details
8. API Gateway returns response to client
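The eight steps above can be sketched as a single pipeline with its dependencies injected; all helper names (authenticate, rate_limiter, registry, forward, log) are hypothetical stand-ins for the real components:

```python
def handle(request, *, authenticate, rate_limiter, registry, forward, log):
    """Gateway request pipeline: auth -> rate limit -> route -> log -> respond."""
    claims = authenticate(request["token"])        # step 2: verify identity
    if claims is None:
        return {"status": 401}
    if not rate_limiter(claims["sub"]):            # step 3: enforce limits
        return {"status": 429}
    instance = registry(request["service"])        # steps 4-5: resolve an instance
    response = forward(instance, request)          # step 6: call the microservice
    log(request, response)                         # step 7: record for observability
    return response                                # step 8: return to client
```

Because each dependency is injected, the same pipeline works whether rate limiting is in-process or backed by a shared store, and it is trivially testable with stubs.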
Database Schema
Not applicable: this design covers only the API gateway layer, and each microservice manages its own database.
Scaling Discussion
Bottlenecks
API Gateway becoming a single point of failure under high load
Rate limiter performance degrading with many clients
Authentication module causing latency if external calls are slow
Load balancer unevenly distributing traffic
Logging system overwhelmed by high request volume
Solutions
Deploy multiple API Gateway instances behind a load balancer for high availability
Use distributed rate limiting with in-memory stores like Redis for speed
Cache authentication tokens and use asynchronous verification where possible
Implement health checks and smart load balancing algorithms
Use scalable logging pipelines with batching and sampling
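The distributed rate-limiting idea can be sketched as a fixed-window counter keyed by client and window. Here an in-memory dict stands in for Redis; with Redis, every gateway instance would run the same logic against a shared key using INCR and EXPIRE:

```python
import time

class FixedWindowLimiter:
    """Fixed-window counter: at most `limit` requests per client per window."""

    def __init__(self, limit: int, window_seconds: int, clock=time.monotonic):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counters = {}  # (client, window_index) -> count; a Redis key in practice

    def allow(self, client: str) -> bool:
        window_index = int(self.clock() // self.window)
        key = (client, window_index)
        count = self.counters.get(key, 0) + 1   # with Redis: INCR, then EXPIRE on first hit
        self.counters[key] = count
        return count <= self.limit
```

Fixed windows allow a burst of up to 2x the limit at a window boundary; sliding-window or token-bucket variants trade a little extra bookkeeping for smoother enforcement.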
Interview Tips
Time: 10 minutes for requirements and clarifications, 15 minutes for architecture and data flow, 10 minutes for scaling and trade-offs, 10 minutes for questions
Explain why API Gateway is critical for microservices communication
Discuss trade-offs between Kong, AWS API Gateway, and Nginx
Highlight importance of authentication and rate limiting
Describe how to ensure high availability and low latency
Mention monitoring and logging for operational excellence