
Database decomposition strategy in Microservices - Scalability & System Analysis

Growth Table: Database Decomposition Strategy
| Users | Data Volume | Database Setup | Challenges | Changes Needed |
|---|---|---|---|---|
| 100 | Small (MBs) | Single monolithic database | Simple queries, low latency | None; the simple design works |
| 10,000 | Medium (GBs) | Monolithic DB with read replicas | Read scaling, some write contention | Add read replicas, caching |
| 1,000,000 | Large (TBs) | Decompose DB by service (microservices) | Write bottlenecks, complex joins, latency | Split the DB by domain; use a separate DB per microservice |
| 100,000,000 | Very Large (PBs) | Sharded and decomposed DBs per service | Cross-service data consistency, network latency | Shard databases, async communication, event sourcing |
First Bottleneck

At around 1 million users, the monolithic database becomes the first bottleneck. It struggles with write throughput and complex joins across large tables. This causes slow response times and limits scaling.

Scaling Solutions
  • Database decomposition: Split the monolithic database into multiple smaller databases aligned with microservices. Each service owns its data.
  • Horizontal scaling: Add more database instances for each microservice to distribute load.
  • Caching: Use caches to reduce database reads for frequently accessed data.
  • Sharding: Partition large databases by key ranges or user IDs to spread data across servers.
  • Async communication: Use event-driven patterns to sync data between services without blocking.
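The sharding bullet above can be sketched as a small router that maps a user ID to a shard. This is a minimal illustration, assuming four shards and made-up connection strings; a production system would use consistent hashing so shards can be added without remapping most keys.

```python
import hashlib

# Hypothetical shard list; the DSN strings are illustrative assumptions.
SHARDS = [
    "postgres://db-shard-0/users",
    "postgres://db-shard-1/users",
    "postgres://db-shard-2/users",
    "postgres://db-shard-3/users",
]

def shard_for(user_id: str) -> str:
    """Pick a shard deterministically by hashing the user ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping depends only on the key, every service instance routes the same user to the same shard without coordination.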
Back-of-Envelope Cost Analysis
  • At 1M users, expect ~10,000 QPS (queries per second) against the database.
  • A single PostgreSQL instance handles roughly 5,000 QPS, so at least two instances are needed.
  • Storage grows to terabytes; decomposed DBs reduce single DB size.
  • Network bandwidth increases due to inter-service communication; plan for 1 Gbps+ links.
  • Caching reduces DB load by 30-50%, saving costs and improving latency.
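The estimates above can be checked with a few lines of arithmetic. This uses the section's own figures (10,000 QPS, ~5,000 QPS per instance, 30-50% cache offload); all numbers are rough planning inputs, not measurements.

```python
import math

qps_total = 10_000          # estimated peak QPS at 1M users
qps_per_instance = 5_000    # rough single-instance PostgreSQL ceiling
cache_hit_ratio = 0.4       # midpoint of the 30-50% cache offload estimate

# QPS that still reaches the database after the cache absorbs hits
db_qps = qps_total * (1 - cache_hit_ratio)          # 6,000 QPS

# Minimum database instances to stay under the per-instance ceiling
instances = math.ceil(db_qps / qps_per_instance)    # 2
```

Note that even with caching, one instance is not enough at this scale, which is why the growth table calls for decomposition at the 1M-user row.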
Interview Tip

Start by explaining the limits of a monolithic database as users grow. Then describe how splitting data by service boundaries helps scale. Discuss trade-offs like data consistency and complexity. Finally, mention caching, sharding, and async communication as further steps.

Self Check

Your database handles 1,000 QPS. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Add read replicas and implement caching to reduce load on the primary database before considering decomposition or sharding.
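The caching half of that answer is usually implemented as the cache-aside pattern. A minimal sketch, assuming a dict-like in-process cache and a hypothetical `query_replica()` helper standing in for a read-replica query (a real deployment would use something like Redis with a TTL):

```python
cache: dict[str, dict] = {}

def query_replica(user_id: str) -> dict:
    # Placeholder for a real read-replica query; returns a fake row here.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    if user_id in cache:              # cache hit: no database round trip
        return cache[user_id]
    row = query_replica(user_id)      # cache miss: read from a replica
    cache[user_id] = row              # populate for subsequent reads
    return row
```

Reads after the first are served from memory, which is where the 30-50% load reduction cited above comes from for read-heavy workloads.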

Key Result
Database decomposition becomes essential at around 1 million users to overcome monolithic DB bottlenecks by splitting data per microservice, enabling horizontal scaling and reducing latency.