
Identifying service boundaries in Microservices - Scalability & System Analysis

Scalability Analysis - Identifying service boundaries
Growth Table: Identifying Service Boundaries
100 users
  • System changes: Monolithic or a few services handle requests easily. Low latency, simple deployment.
  • Boundary impact: Boundaries may be coarse or unclear. Services might be combined for simplicity.

10,000 users
  • System changes: Increased load stresses single services. Latency and failures start to appear.
  • Boundary impact: Need for clear service separation grows. Services split by business capabilities. Clear APIs and data ownership needed. Boundaries help isolate failures.

1,000,000 users
  • System changes: High traffic demands horizontal scaling. Database and network become bottlenecks. Service communication overhead increases.
  • Boundary impact: Fine-grained services with well-defined boundaries. Event-driven or async communication preferred. Data partitioning per service.

100,000,000 users
  • System changes: Massive scale requires global distribution. Latency optimization becomes critical. Data consistency challenges emerge.
  • Boundary impact: Microservices deployed regionally. Strong boundary enforcement to reduce coupling. Use of API gateways and service meshes.
First Bottleneck: Service Boundary Challenges

At small scale, unclear or broad service boundaries cause tight coupling. This leads to:

  • Difficulty scaling individual parts.
  • An increased failure blast radius.
  • Difficulty deploying or updating services independently.

As users grow, the first bottleneck is the monolithic or poorly separated services that cannot scale or isolate faults well.

Scaling Solutions for Service Boundaries
  • Define clear business capabilities: Split services by distinct functions (e.g., user management, payments).
  • Use domain-driven design: Identify bounded contexts to guide boundaries.
  • Adopt asynchronous communication: Use messaging queues to decouple services.
  • Implement API gateways: Manage service access and routing.
  • Apply service mesh: Control communication, security, and observability.
  • Horizontal scaling: Scale services independently based on load.
  • Data ownership: Each service manages its own database to avoid tight coupling.
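The "asynchronous communication" point above can be sketched with an in-process queue standing in for a message broker. This is a minimal illustration under assumptions: the "orders" and "payments" service names, the event shape, and the sentinel-based shutdown are all invented for the example, not part of any specific system.

```python
import queue
import threading

# Illustrative sketch: the orders service publishes events to a queue instead
# of calling the payments service synchronously, so the two stay decoupled.
events = queue.Queue()
processed = []

def orders_service(order_id: int) -> None:
    # Publish an event; the producer does not wait on the consumer.
    events.put({"type": "order_placed", "order_id": order_id})

def payments_service() -> None:
    # Consume events independently, at the consumer's own pace.
    while True:
        event = events.get()
        if event is None:  # shutdown sentinel for this sketch
            break
        processed.append(event["order_id"])  # e.g. charge the customer

worker = threading.Thread(target=payments_service)
worker.start()
for oid in (1, 2, 3):
    orders_service(oid)
events.put(None)
worker.join()
print(processed)  # [1, 2, 3]
```

In a real deployment the queue would be an external broker (e.g. RabbitMQ or Kafka), which also buffers load spikes and survives consumer restarts; the decoupling idea is the same.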
Back-of-Envelope Cost Analysis
  • Requests per second (RPS):
    At 1M users, assuming 1 request per user per minute -> ~16,700 RPS total.
    Services must handle their share independently.
  • Storage:
    Each service owns data; storage scales with user data size.
    Partitioning reduces single database load.
  • Bandwidth:
    Inter-service communication adds overhead.
    Use efficient protocols (gRPC, HTTP/2) to reduce cost.
  • Operational cost:
    More services mean more deployment and monitoring overhead.
    Automation and orchestration tools reduce manual effort.
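The RPS estimate above is simple arithmetic; a short sketch makes the assumption (1 request per user per minute, a figure taken from the text) explicit and easy to vary:

```python
# Back-of-envelope RPS estimate at 1M users, assuming 1 request
# per user per minute (assumption stated in the text above).
users = 1_000_000
requests_per_user_per_min = 1

rps = users * requests_per_user_per_min / 60
print(round(rps))  # ~16,700 RPS total
```

Doubling either input doubles the result, which is why nailing down the per-user request rate is usually the first step of the estimate.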
Interview Tip: Structuring Scalability Discussion

Start by explaining how you identify service boundaries based on business capabilities and data ownership.

Discuss how boundaries help isolate failures and enable independent scaling.

Explain the challenges of unclear boundaries at scale and how they become bottlenecks.

Describe solutions like domain-driven design, asynchronous communication, and service meshes.

Use examples to show how scaling affects service boundaries and system complexity.

Self-Check Question

Your database handles 1000 QPS. Traffic grows 10x. What do you do first?

Answer: Identify if the database is the bottleneck due to increased load. First, introduce read replicas and caching to reduce direct database queries. Then, consider splitting data ownership by service boundaries to distribute load and enable horizontal scaling.
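The caching step in the answer follows the cache-aside pattern: reads check a cache first and only misses reach the database. Below is a minimal in-process sketch; the `db_query` function, the dict-as-cache, and the hit counter are stand-ins invented for illustration (a real system would use Redis or Memcached with expiry and invalidation).

```python
# Cache-aside sketch: repeated reads are served from the cache,
# so only the first read for a key hits the database.
cache = {}
db_hits = 0

def db_query(user_id: int) -> dict:
    # Stand-in for a real database call; counts how often we reach the DB.
    global db_hits
    db_hits += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int) -> dict:
    if user_id not in cache:        # miss: fall through to the database
        cache[user_id] = db_query(user_id)
    return cache[user_id]           # hit: no database load

for _ in range(10):
    get_user(42)                    # 10 reads for the same user
print(db_hits)  # 1 -> only the first read reached the database
```

With a high cache hit rate, a 10x traffic increase can translate into far less than 10x additional database load, buying time before data has to be split along service boundaries.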

Key Result
Clear and well-defined service boundaries enable independent scaling and fault isolation. Poor boundaries cause bottlenecks early as traffic grows, making it critical to split services by business capabilities and data ownership.