
Search and filter design in LLD - Scalability & System Analysis

Scalability Analysis - Search and filter design
Growth Table: Search and Filter Design
| Users / Queries | 100 Users | 10K Users | 1M Users | 100M Users |
|---|---|---|---|---|
| Search Queries per Second (QPS) | 10 QPS | 1,000 QPS | 50,000 QPS | 5,000,000 QPS |
| Data Size (Indexed Items) | 10K items | 1M items | 100M items | 10B items |
| Index Size | Small (few GB) | Medium (hundreds of GB) | Large (TBs) | Very Large (PBs) |
| Latency Expectation | <100 ms | <200 ms | <300 ms | <500 ms |
| Infrastructure | Single server with local index | Cluster with distributed index | Multi-region clusters with replication | Global distributed system with sharding and CDN |
| Filter Complexity | Simple filters (few fields) | Moderate filters (multi-field) | Complex filters with facets and ranges | Highly dynamic filters with personalization |
First Bottleneck

The first bottleneck is the search index and query-processing layer. As users and data grow, the index grows with them, and queries slow down. CPU and memory on the search servers become overwhelmed handling complex filters at high QPS.
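To see why query processing dominates, here is a minimal in-memory inverted index sketch (the `InvertedIndex` class and sample items are illustrative, not a real search engine). Every query term requires walking and intersecting posting lists, which is the CPU and memory work that grows with index size:

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal in-memory inverted index: term -> set of item IDs."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, item_id, text):
        # Index every whitespace-separated term of the item's text.
        for term in text.lower().split():
            self.postings[term].add(item_id)

    def search(self, query):
        # AND semantics: intersect the posting list of every query term.
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

idx = InvertedIndex()
idx.add(1, "red running shoes")
idx.add(2, "blue running shoes")
idx.add(3, "red leather boots")
print(idx.search("red shoes"))  # {1}
```

At 100M+ items, these posting lists no longer fit comfortably on one machine, which is exactly where sharding enters the picture.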

Scaling Solutions
  • Horizontal scaling: Add more search nodes to distribute query load and index shards.
  • Index sharding: Split the index into smaller parts by data ranges or categories to reduce query scope.
  • Caching: Cache frequent queries and filter results to reduce repeated computation.
  • Pre-aggregation: For filters, precompute counts or facets to speed up filtering.
  • Load balancing: Use smart routing to send queries to the least busy nodes.
  • CDN: Serve static filter metadata and autocomplete suggestions from a CDN to reduce backend load.
  • Asynchronous processing: For complex filters, consider background jobs to prepare results.
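The sharding and caching ideas above can be sketched together. This is a hedged illustration, assuming hash-based routing on a category key and an in-process LRU cache; `query_shard`, `shard_for`, `cached_search`, and `NUM_SHARDS` are hypothetical names, and the shard call is a stub standing in for a network request to a search node:

```python
import hashlib
from functools import lru_cache

NUM_SHARDS = 4  # hypothetical cluster size

def shard_for(key: str) -> int:
    """Deterministically map a routing key (e.g. a category) to a shard."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def query_shard(shard_id: int, query: str) -> tuple:
    # Stub standing in for a network call to one search node.
    return (f"results for {query!r} from shard {shard_id}",)

@lru_cache(maxsize=10_000)
def cached_search(query: str, category: str) -> tuple:
    """Cache frequent (query, category) pairs in front of the shard lookup."""
    return query_shard(shard_for(category), query)

first = cached_search("running shoes", "footwear")
second = cached_search("running shoes", "footwear")  # served from the cache
```

In production the cache would live in a shared tier (e.g. Redis) rather than in-process, and routing would use consistent hashing so adding shards moves only a fraction of the keys.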
Back-of-Envelope Cost Analysis

At 10K users generating 1,000 QPS, each query touching ~1MB of index data means ~1GB/s of read throughput. This requires multiple servers with fast SSDs and high network bandwidth.

Raw storage for 1M items at ~100 bytes per item is only ~100MB, but inverted indexes, facet structures, and multiple analyzed fields can inflate this by orders of magnitude, to 100GB+ in practice.

At 1M users and 50,000 QPS, network bandwidth and CPU become critical. Each server can handle ~5,000 QPS, so at least 10 servers are needed just for query handling.
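The arithmetic behind both estimates can be checked directly. A minimal sketch, using the figures above (the ~1MB-per-query and ~5,000-QPS-per-server numbers are the assumptions stated in the text):

```python
# 10K-user tier: 1,000 QPS, each query touching ~1 MB of index data.
qps_small = 1_000
bytes_per_query = 1 * 1024**2            # ~1 MB
read_gb_per_s = qps_small * bytes_per_query / 1e9
print(round(read_gb_per_s, 2))           # ~1.05 GB/s of read throughput

# 1M-user tier: 50,000 QPS at an assumed ~5,000 QPS per server.
qps_large = 50_000
qps_per_server = 5_000
servers = -(-qps_large // qps_per_server)  # ceiling division
print(servers)                             # 10 servers minimum
```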

Interview Tip

Start by clarifying the scale and query patterns. Discuss data size, query complexity, and latency needs. Identify bottlenecks early (index size, CPU, memory). Propose incremental scaling: caching, sharding, horizontal scaling. Mention trade-offs like consistency and freshness of index.

Self Check

Your search database handles 1,000 QPS. Traffic grows 10x to 10,000 QPS. What do you do first?

Answer: Scale horizontally: add more search nodes and shard the index to distribute the query load. Also, cache frequent queries to cut repeated computation.
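One way to size that first step is a quick capacity calculation. A sketch under assumed numbers (~5,000 QPS per node as above, and a hypothetical 70% utilization target so nodes have headroom for spikes):

```python
import math

def nodes_needed(target_qps: int, per_node_qps: int = 5_000,
                 headroom: float = 0.7) -> int:
    """Search nodes required for target_qps, keeping each node at ~70% load."""
    return math.ceil(target_qps / (per_node_qps * headroom))

print(nodes_needed(1_000))   # before growth: 1 node suffices
print(nodes_needed(10_000))  # after 10x growth: 3 nodes
```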

Key Result
Search and filter systems break first at the search index and query-processing layer, where CPU and memory limits are hit. Scaling horizontally with sharding and caching is key to handling growth from thousands to millions of users.