
Search functionality design in LLD - System Design Guide

Problem Statement
Users expect to find relevant information quickly, but a naive search that scans all data sequentially causes slow responses and poor user experience. As data grows, search queries become slower and less scalable, leading to timeouts and frustrated users.
Solution
Search functionality design uses indexing and optimized query processing to quickly locate relevant data without scanning everything. It builds data structures like inverted indexes to map keywords to documents, enabling fast lookups and ranking results by relevance.
Architecture
User Query → Search Service → Index Store → Ranking & Result Sort

This diagram shows a user query sent to the search service, which consults the index store to find matching documents. The results are ranked and sorted before being returned to the user.

Trade-offs
✓ Pros
Significantly faster search responses by avoiding full data scans.
Scalable to large datasets by using efficient indexes.
Improves user experience with relevant, ranked results.
Supports complex queries like phrase search and filters.
✗ Cons
Index building and updating adds complexity and resource use.
Requires careful handling of index consistency with data changes.
Ranking algorithms can be complex and require tuning.
When to use
When the dataset exceeds tens of thousands of records and users require fast, relevant search results with complex query support.
When to avoid
For very small datasets under a few thousand records, where full scans are fast enough and the overhead of index maintenance is unnecessary.
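Matching alone is not enough; results also have to be ordered. As a minimal illustration of the ranking step mentioned above (a deliberately simple heuristic, not the guide's actual algorithm; real systems typically use TF-IDF or BM25), documents can be scored by how often the query terms appear in them:

```python
from collections import Counter

def rank_by_term_frequency(query, documents):
    """Score each document by total occurrences of the query terms.

    A simple relevance heuristic for illustration only; production
    ranking usually uses TF-IDF or BM25 with extensive tuning.
    """
    query_words = query.lower().split()
    scores = {}
    for doc_id, text in documents.items():
        counts = Counter(text.lower().split())
        score = sum(counts[word] for word in query_words)
        if score > 0:
            scores[doc_id] = score
    # Highest score first
    return sorted(scores, key=scores.get, reverse=True)

docs = {1: "cheap red shoes", 2: "red shoes red laces", 3: "blue hat"}
print(rank_by_term_frequency("red shoes", docs))  # → [2, 1]
```

Document 2 ranks first because "red" appears twice, illustrating why ranking needs tuning: raw term frequency favors repetitive text.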
Real World Examples
Amazon
Uses inverted indexes and ranking algorithms to quickly find relevant products from millions of listings based on user search queries.
Google
Builds massive distributed indexes to serve billions of search queries per day with low latency and high relevance.
Airbnb
Implements search with filters and ranking to help users find suitable listings quickly from a large inventory.
Code Example
The "before" code scans every document for the query string, which is slow when there are many documents. The "after" code builds an inverted index mapping each word to the set of documents containing it, enabling fast lookup by intersecting the sets for each query word.
### Before: Naive search scanning all documents
class SearchEngine:
    def __init__(self, documents):
        self.documents = documents

    def search(self, query):
        results = []
        for doc_id, text in self.documents.items():
            if query.lower() in text.lower():
                results.append(doc_id)
        return results


### After: Using inverted index for fast lookup
class SearchEngine:
    def __init__(self, documents):
        self.documents = documents
        self.index = self.build_index(documents)

    def build_index(self, documents):
        index = {}
        for doc_id, text in documents.items():
            for word in text.lower().split():
                index.setdefault(word, set()).add(doc_id)
        return index

    def search(self, query):
        query_words = query.lower().split()
        if not query_words:
            return []
        result_sets = [self.index.get(word, set()) for word in query_words]
        # Intersection of sets to find docs containing all query words
        results = set.intersection(*result_sets) if result_sets else set()
        return list(results)
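A quick usage sketch of the indexed version with a toy corpus (the class is repeated here so the snippet runs standalone; document contents are illustrative):

```python
# Same indexed SearchEngine as the "after" example above.
class SearchEngine:
    def __init__(self, documents):
        self.documents = documents
        self.index = self.build_index(documents)

    def build_index(self, documents):
        # Map each lowercased word to the set of document IDs containing it.
        index = {}
        for doc_id, text in documents.items():
            for word in text.lower().split():
                index.setdefault(word, set()).add(doc_id)
        return index

    def search(self, query):
        query_words = query.lower().split()
        if not query_words:
            return []
        # Intersect the posting sets: only docs containing ALL query words match.
        result_sets = [self.index.get(word, set()) for word in query_words]
        return list(set.intersection(*result_sets))

documents = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick brown dogs and foxes",
}
engine = SearchEngine(documents)

print(sorted(engine.search("quick brown")))  # → [1, 3]
print(engine.search("the dog"))              # → [2]
print(engine.search("missing"))              # → []
```

Note that the index is built on whole words, so "dog" does not match "dogs"; real engines add stemming or tokenization to handle such variants.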
Alternatives
Full Table Scan
Scans all data sequentially without indexes, resulting in slower queries.
Use when: Dataset is very small or queries are infrequent and simple.
Database Text Search
Uses built-in database text search features, which may be less flexible or scalable than dedicated search engines.
Use when: Integration simplicity matters more than advanced search features.
Search as a Service (e.g., Algolia, Elasticsearch Cloud)
Outsources search infrastructure to managed services, reducing operational overhead.
Use when: Rapid development and reduced maintenance are priorities over full control.
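Of the alternatives above, database text search can be sketched with SQLite's FTS5 extension (included in most standard SQLite builds; table and column names here are illustrative):

```python
import sqlite3

# Sketch: database-native text search via SQLite FTS5.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [("cheap red running shoes",),
     ("red leather laces",),
     ("blue winter hat",)],
)

# MATCH performs tokenized search (terms are implicitly ANDed);
# bm25() ranks results by relevance, lower scores first.
rows = conn.execute(
    "SELECT body FROM docs WHERE docs MATCH 'red shoes' ORDER BY bm25(docs)"
).fetchall()
print(rows)  # only the row containing both 'red' and 'shoes'
```

The database maintains the inverted index for you on insert and delete, trading some flexibility in tokenization and ranking for much simpler operations.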
Summary
Naive search scanning all data is slow and unscalable for large datasets.
Search functionality design uses indexes and ranking to deliver fast, relevant results.
Trade-offs include index maintenance complexity versus improved user experience.