HLD - System Design - ~12 mins

Throughput, latency, and availability in HLD - Architecture Diagram

System Overview - Throughput, latency, and availability

This system is designed to handle user requests efficiently by balancing throughput, latency, and availability. It ensures the system can process many requests per unit of time (throughput), respond to each request quickly (low latency), and stay online even if some components fail (high availability).

Architecture Diagram
User
  |
  v
Load Balancer
  |
  v
API Gateway
  |
  v
+-----------------+      +------------+
|    Service A    |<---->|   Cache    |
+-----------------+      +------------+
          |
          v
     Database
Components
- User (client): Sends requests to the system
- Load Balancer (load_balancer): Distributes incoming requests evenly to prevent overload
- API Gateway (api_gateway): Routes requests to appropriate services and handles security
- Service A (service): Processes business logic and handles user requests
- Cache (cache): Stores frequently accessed data to reduce latency and database load
- Database (database): Stores persistent data for the system
Request Flow - 11 Hops
1. User → Load Balancer
2. Load Balancer → API Gateway
3. API Gateway → Cache
4. Cache → API Gateway
5. API Gateway → Service A
6. Service A → Database
7. Database → Service A
8. Service A → Cache
9. Service A → API Gateway
10. API Gateway → Load Balancer
11. Load Balancer → User
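The first hop relies on the load balancer spreading requests evenly across service instances, which keeps per-instance load and queueing latency down as throughput grows. A minimal round-robin sketch, with hypothetical backend names:

```python
import itertools

# Round-robin load balancing sketch: each pick() returns the next backend
# in a fixed rotation, so requests are distributed evenly.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["service-a-1", "service-a-2", "service-a-3"])
targets = [lb.pick() for _ in range(6)]
# Each backend receives exactly two of the six requests.
```

Real load balancers add health checks and weighting, but the even-distribution idea is the same.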
Failure Scenario
Component Fails: Database
Impact: New writes and cache updates fail; reads may still succeed if data is cached
Mitigation: Use database replication for failover; rely on cache for read availability; alert for manual intervention
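The "rely on cache for read availability" mitigation can be sketched as a fallback path: when the database is unreachable, reads are served from cache if possible (accepting possibly stale data), while uncached reads and all writes surface the outage. All names here are hypothetical:

```python
# Degraded-read sketch: fall back to the cache when the database is down.
class DatabaseDown(Exception):
    pass

def read_with_fallback(key, cache, db_read):
    try:
        return db_read(key)          # normal path: authoritative read
    except DatabaseDown:
        if key in cache:             # degraded path: possibly stale data
            return cache[key]
        raise                        # uncached key: surface the outage

def failing_db_read(key):
    raise DatabaseDown("primary unavailable")

cached = {"user:1": "Ada"}
result = read_with_fallback("user:1", cached, failing_db_read)
```

The trade-off is staleness for availability; write paths get no such fallback, matching the impact described above.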
Architecture Quiz - 3 Questions
Test your understanding
Which component helps reduce latency by storing frequently accessed data?
A. Load Balancer
B. Cache
C. API Gateway
D. Database
Design Principle
This architecture balances throughput by using a load balancer, reduces latency with a cache layer before the database, and ensures availability by isolating failures and using replication and caching.
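A back-of-the-envelope check of the replication claim: with N independent replicas, each available with probability a, at least one is up with probability 1 - (1 - a)^N. The numbers below are illustrative, not a guarantee about any particular database:

```python
# Availability of N independent replicas: 1 - (1 - a)**n,
# i.e. the system is down only when every replica is down at once.
def combined_availability(a, n):
    return 1 - (1 - a) ** n

single = combined_availability(0.99, 1)  # 0.99   -> roughly 3.65 days down/yr
pair   = combined_availability(0.99, 2)  # 0.9999 -> roughly 53 minutes down/yr
```

Adding one replica turns "two nines" into "four nines" under the independence assumption, which is why replication is the standard availability mitigation for the database tier.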