
Board and piece hierarchy in LLD - Scalability & System Analysis

Scalability Analysis - Board and piece hierarchy
Growth Table: Board and Piece Hierarchy
| Scale | Number of Boards | Number of Pieces | Memory Usage | Operations per Second | Complexity |
|---|---|---|---|---|---|
| 100 Boards | 100 | 1,600 (16 per board) | Low (MBs) | Low (tens of ops) | Simple object management |
| 10,000 Boards | 10,000 | 160,000 | Moderate (tens of MBs) | Moderate (thousands of ops) | Need efficient data structures |
| 1,000,000 Boards | 1,000,000 | 16,000,000 | High (several GBs) | High (hundreds of thousands of ops) | Requires sharding and caching |
| 100,000,000 Boards | 100,000,000 | 1,600,000,000 | Very High (hundreds of GBs) | Very High (millions of ops) | Distributed system with partitioning |
First Bottleneck

The first bottleneck is memory and CPU on the application server that manages the board and piece objects. As the number of boards and pieces grows, keeping every object in memory and processing moves and state changes becomes expensive. The deep object hierarchy and frequent updates create CPU load and memory pressure long before storage or network limits are reached.
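To make the memory pressure concrete, here is a minimal sketch (not from the lesson) contrasting a per-piece object hierarchy with a compact array-based board. The `Piece` class and the one-byte-per-square encoding are illustrative assumptions, not a prescribed design:

```python
import sys

# Object-heavy representation: one Python object per piece,
# each carrying its own attribute dictionary.
class Piece:
    def __init__(self, kind, color, row, col):
        self.kind = kind      # e.g. "pawn", "rook"
        self.color = color    # "white" or "black"
        self.row = row
        self.col = col

# Compact representation: one byte per square (0 = empty,
# low bits could encode piece kind, a high bit the color).
def compact_board():
    return bytearray(64)

pieces = [Piece("pawn", "white", 1, c) for c in range(8)]
board = compact_board()

# The whole compact board is a single small allocation,
# versus 16 separate objects per board in the class-based form.
print(sys.getsizeof(board))  # well under 200 bytes for the entire board
```

At millions of boards, this kind of representation choice is the difference between fitting in RAM on a few servers and not.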

Scaling Solutions
  • Horizontal scaling: Add more application servers to distribute board and piece management load.
  • Caching: Cache frequently accessed board states or piece positions to reduce recomputation.
  • Sharding: Partition boards across servers by ID ranges or user groups to limit per-server load.
  • Efficient data structures: Use lightweight representations for pieces and boards to reduce memory footprint.
  • Event-driven updates: Process piece moves asynchronously to smooth CPU spikes.
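The sharding idea above can be sketched in a few lines. This is a simplified illustration with made-up server names, using modulo partitioning on the board ID:

```python
# Hypothetical pool of application servers.
SERVERS = ["app-1", "app-2", "app-3", "app-4"]

def server_for_board(board_id: int) -> str:
    # Simple modulo sharding: every board ID maps to exactly one
    # server, so per-server load is roughly 1/N of the total.
    # Consistent hashing would reduce data movement when servers
    # are added or removed.
    return SERVERS[board_id % len(SERVERS)]

print(server_for_board(42))         # app-3
print(server_for_board(1_000_001))  # app-2
```

Sharding by user group instead of ID range works the same way; only the key fed into the partition function changes.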
Back-of-Envelope Cost Analysis
  • Assuming 16 pieces per board, 1 million boards = 16 million pieces.
  • Each piece object ~200 bytes -> 3.2 GB memory for pieces alone.
  • Boards with metadata ~1 KB each -> 1 GB memory for boards.
  • Operations: assuming ~0.1 moves per second per board on average (most boards are idle at any given moment) -> ~100,000 ops/sec at 1 million boards.
  • Network bandwidth depends on update size; small updates (~100 bytes) at that rate -> ~10 MB/s.
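The memory figures in the list above follow directly from the stated per-object sizes. A quick sanity-check script, using the same assumptions (~200 bytes per piece, ~1 KB per board):

```python
# Back-of-envelope memory estimate at the 1-million-board scale,
# using the per-object sizes assumed in the list above.
BOARDS = 1_000_000
PIECES_PER_BOARD = 16
PIECE_BYTES = 200     # assumed size of one piece object
BOARD_BYTES = 1024    # assumed size of one board with metadata

pieces = BOARDS * PIECES_PER_BOARD           # 16,000,000 pieces
piece_mem_gb = pieces * PIECE_BYTES / 1e9    # 3.2 GB for pieces
board_mem_gb = BOARDS * BOARD_BYTES / 1e9    # ~1 GB for boards

print(pieces, round(piece_mem_gb, 1), round(board_mem_gb, 2))
```

Running the same arithmetic at 100 million boards gives roughly 100x these figures, which is why a single server's RAM stops being an option well before that scale.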
Interview Tip

Start by explaining the system components: boards and pieces as objects. Discuss how load grows with the number of boards and pieces. Identify the bottlenecks in memory and CPU. Propose scaling solutions like sharding and caching. Use real numbers to justify your approach. Keep answers structured: growth, bottleneck, solution, cost.

Self Check

Your application server handles 1,000 piece updates per second. Traffic grows 10x to 10,000 updates per second. What do you do first?

Answer: Add horizontal scaling by deploying more application servers behind a load balancer to distribute the update processing load and avoid CPU bottlenecks.

Key Result
The main scalability challenge is managing memory and CPU load for large numbers of board and piece objects; horizontal scaling and sharding are key to handling growth beyond thousands of boards.