
Anti-patterns to avoid in LLD - Scalability & System Analysis

Scalability Analysis - Anti-patterns to avoid
Growth Table: What Changes at Each Scale
| Users | Common Anti-patterns | Impact | Signs to Watch |
|---|---|---|---|
| 100 | Monolithic design, no caching, tight coupling | System works, but responses slow under load | Slow page loads, high CPU spikes |
| 10,000 | Single database instance, no load balancing, synchronous calls | Database overload, server crashes, slow API responses | Timeouts, rising error rates |
| 1,000,000 | No sharding, no horizontal scaling, ignoring eventual consistency | Database bottleneck, network congestion, risk of data loss | High latency, frequent downtime |
| 100,000,000 | No microservices, no CDN, no caching layers | Massive delays, huge infrastructure costs, poor user experience | System outages, slow global access |
First Bottleneck: What Breaks and Why

At small scale, the database is the first to struggle because it handles all requests directly without caching or replicas.

As users grow, the application server CPU and memory become overloaded due to synchronous processing and lack of load balancing.

At large scale, network bandwidth and data storage become bottlenecks if data partitioning and CDNs are not used.
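Data partitioning at this stage can start as simply as hashing a key to pick a shard. A minimal sketch, assuming a fixed shard count (the `NUM_SHARDS` value and key names are illustrative):

```python
import hashlib

NUM_SHARDS = 4  # illustrative; production systems often use consistent hashing instead


def shard_for(user_id: str) -> int:
    """Map a user ID to a shard index via a stable hash."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS


# The same key always routes to the same shard, so all of a
# user's reads and writes land on one database.
shard = shard_for("user-42")
```

Note the trade-off: with plain modulo hashing, changing `NUM_SHARDS` remaps almost every key, which is why consistent hashing is usually preferred once resharding becomes likely.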

Scaling Solutions to Avoid Anti-patterns
  • Horizontal scaling: Add more servers to distribute load and avoid single points of failure.
  • Caching: Use caches to reduce database hits and speed up responses.
  • Database sharding: Split data across multiple databases to handle large volumes.
  • Load balancing: Distribute incoming traffic evenly across servers.
  • Use CDNs: Deliver static content closer to users to reduce latency.
  • Microservices: Break monoliths into smaller services for better maintainability and scaling.
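The caching bullet above is most often realized as the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. A minimal in-memory sketch, where `db_fetch` and the TTL are stand-ins for a real store such as Redis:

```python
import time

cache: dict[str, tuple[float, str]] = {}  # key -> (expiry timestamp, value)
TTL_SECONDS = 60  # illustrative expiry


def db_fetch(key: str) -> str:
    """Stand-in for a real database query."""
    return f"value-for-{key}"


def get(key: str) -> str:
    entry = cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                      # cache hit: no database call
    value = db_fetch(key)                    # cache miss: query the database
    cache[key] = (time.time() + TTL_SECONDS, value)
    return value
```

After the first call for a key, subsequent calls within the TTL never touch the database, which is exactly how caching cuts direct load.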
Back-of-Envelope Cost Analysis
  • At 10,000 users, expect on the order of ~1000 QPS (queries per second), assuming roughly 0.1 requests per user per second.
  • Database storage grows with user data; plan for GBs to TBs depending on data size.
  • Network bandwidth needs increase; 1 Gbps can handle ~125 MB/s, plan accordingly.
  • A well-tuned cache can absorb a large share of reads (often up to ~70%), cutting database load and cost.
  • Horizontal scaling increases server costs linearly but improves availability.
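The estimates above are quick arithmetic, and it is worth doing them explicitly in an interview. A sketch using the numbers from the list (per-user data size and average response size are assumptions):

```python
users = 10_000
qps = 1_000                       # from the estimate above

bytes_per_user = 100 * 1024       # assume ~100 KB of stored data per user
storage_gb = users * bytes_per_user / 1024**3

avg_response_kb = 50              # assumed average response payload
bandwidth_mbs = qps * avg_response_kb / 1024   # MB/s required

link_capacity_mbs = 125           # 1 Gbps ≈ 125 MB/s
cache_hit_rate = 0.70             # caching absorbing ~70% of reads
db_qps_after_cache = qps * (1 - cache_hit_rate)

print(f"storage ≈ {storage_gb:.2f} GB")
print(f"bandwidth ≈ {bandwidth_mbs:.1f} MB/s of {link_capacity_mbs} MB/s")
print(f"DB load after caching ≈ {db_qps_after_cache:.0f} QPS")
```

With these assumptions the workload needs under 1 GB of storage and roughly 49 MB/s of bandwidth, well within a 1 Gbps link, and caching drops the database from 1000 to ~300 QPS.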
Interview Tip: Structuring Scalability Discussion

Start by identifying current system limits and bottlenecks.

Discuss how load increases affect each component.

Explain anti-patterns and why they cause problems at scale.

Propose clear, practical solutions matching the bottleneck.

Use real numbers to justify your approach.

Self Check Question

Your database handles 1000 QPS. Traffic grows 10x. What do you do first?

Answer: Add read replicas and a caching layer first, since both cut direct database load quickly and cheaply. Turn to vertical scaling or sharding only if that is not enough.
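The replica half of that answer can be sketched as a simple read/write router: writes go to the primary, reads rotate across replicas. The connection strings here are placeholders for real database handles:

```python
import itertools


class ReplicaRouter:
    """Send writes to the primary and spread reads across replicas round-robin."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # endless round-robin iterator

    def for_write(self):
        return self.primary

    def for_read(self):
        return next(self._replicas)


router = ReplicaRouter("primary-db", ["replica-1", "replica-2", "replica-3"])
reads = [router.for_read() for _ in range(3)]  # spread over all replicas
```

Each added replica divides the read load further, which is why replicas plus caching usually buy the most headroom before sharding is needed.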

Key Result
Avoiding anti-patterns such as a monolithic design, missing caching, and a single database instance is crucial. The database is usually the first bottleneck as traffic grows. Solutions include horizontal scaling, caching, sharding, and load balancing to maintain performance and reliability.