to_tsquery for search terms in PostgreSQL - Time & Space Complexity
We want to understand how the time to process search terms using to_tsquery grows as the input size changes.
Specifically, how does the search query processing time increase when we add more words or terms?
Analyze the time complexity of the following PostgreSQL snippet using to_tsquery.
SELECT *
FROM documents
WHERE to_tsvector('english', content) @@ to_tsquery('english', 'search & term & example');
This code searches the documents table for rows where the content matches all the words in the search query.
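To observe this per-row work directly, you can ask PostgreSQL for its execution plan. This is a sketch assuming the `documents` table from the query above exists; without a full-text index, the plan typically shows a sequential scan that evaluates the match condition once per row:

```sql
-- Sketch: assumes a `documents` table with a text column `content`.
-- EXPLAIN ANALYZE runs the query and reports the plan; expect a Seq Scan
-- node evaluating to_tsvector(...) @@ to_tsquery(...) for each row.
EXPLAIN ANALYZE
SELECT *
FROM documents
WHERE to_tsvector('english', content) @@ to_tsquery('english', 'search & term & example');
```

Comparing the reported row count and total time as the table grows is a practical way to confirm the scaling described below.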
Look for repeated work done by the query.
- Primary operation: Checking each document's text vector against the search query.
- How many times: Once per row in the `documents` table.
The time grows roughly with the number of rows and the number of search terms.
| Rows (n) and Terms (m) | Approx. Operations (n × m) |
|---|---|
| 10 rows, 2 terms | About 20 checks |
| 100 rows, 3 terms | About 300 checks |
| 1000 rows, 5 terms | About 5000 checks |
Pattern observation: The total work grows roughly as the product of the row count and the number of search terms.
Time Complexity: O(n * m)
Assuming a sequential scan (no full-text index), the time grows proportionally to the number of rows n multiplied by the number of search terms m.
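This linear-in-rows behavior is why production systems usually pair `to_tsquery` with a GIN index, which replaces the per-row check with an index lookup. A minimal sketch, assuming PostgreSQL 12+ for generated columns; the column and index names here are illustrative, not part of the original query:

```sql
-- Sketch: precompute the tsvector in a stored generated column (PG 12+)
-- and index it with GIN. Names content_tsv / documents_content_tsv_idx
-- are illustrative choices.
ALTER TABLE documents
  ADD COLUMN content_tsv tsvector
  GENERATED ALWAYS AS (to_tsvector('english', content)) STORED;

CREATE INDEX documents_content_tsv_idx
  ON documents USING GIN (content_tsv);

-- The same search can then hit the index instead of scanning every row:
SELECT *
FROM documents
WHERE content_tsv @@ to_tsquery('english', 'search & term & example');
```

With the index in place, matching rows are found through the index structure rather than by checking all n rows, so query time no longer grows linearly with table size.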
[X] Wrong: "Adding more search terms does not affect the query time much because the database just matches words quickly."
[OK] Correct: Each additional search term adds more checks per row, increasing the total work and time needed.
Understanding how search queries scale helps you explain performance in real projects and shows you can think about how data size affects response time.
"What if we used a phrase search instead of multiple AND terms? How would the time complexity change?"
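As a starting point for that question: PostgreSQL supports phrase matching via `phraseto_tsquery` or the `<->` (followed-by) operator. A sketch, reusing the same hypothetical `documents` table:

```sql
-- Phrase search sketch: matches rows where the words appear
-- adjacent and in order, not merely all present.
SELECT *
FROM documents
WHERE to_tsvector('english', content)
      @@ phraseto_tsquery('english', 'search term example');

-- Equivalent query written with the followed-by operator:
--   to_tsquery('english', 'search <-> term <-> example')
```

Without an index this is still one check per row, but each check now also compares word positions, so the per-row constant is somewhat higher than for plain AND terms.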