Analyzing index usage with pg_stat in PostgreSQL - Time & Space Complexity
When we check how often database indexes are used, we want to understand how the work grows as the data grows.
We ask: How does the cost of reading index usage statistics change as the number of indexes grows?
Analyze the time complexity of this query that checks index usage statistics.
```sql
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public';
```
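To see how PostgreSQL actually executes the query against the view, you can ask for the plan. This is only a sketch: the plan output differs across PostgreSQL versions, and the view expands to scans over internal catalogs.

```sql
-- Sketch: inspect the plan for the stats query.
-- Actual plan output varies by PostgreSQL version.
EXPLAIN
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public';
```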
This query reads index usage stats for all indexes in the public schema.
Look for repeated actions in the query.
- Primary operation: Scanning the pg_stat_user_indexes view rows.
- How many times: Once per index in the public schema.
As the number of indexes grows, the query reads more rows.
| Input Size (number of indexes) | Approx. Rows Read |
|---|---|
| 10 | 10 rows |
| 100 | 100 rows |
| 1000 | 1000 rows |
Pattern observation: The work grows directly with the number of indexes.
Time Complexity: O(n), where n is the number of indexes in the schema.
This means the time to get index stats grows linearly with how many indexes exist, not with how much data they cover.
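The linear pattern above can be sketched with a small model. This is hypothetical Python standing in for the view scan, not PostgreSQL internals; `stats_rows` is a made-up list that plays the role of pg_stat_user_indexes.

```python
def read_index_stats(stats_rows, schema="public"):
    """Model of the query: one pass over every row in the view.

    stats_rows: list of (schemaname, indexrelname, idx_scan) tuples,
    a stand-in for pg_stat_user_indexes, not the real view.
    """
    result = []
    for row in stats_rows:          # one visit per index -> O(n)
        if row[0] == schema:
            result.append(row)
    return result

# Growing the number of indexes grows the rows visited in direct proportion.
small = [("public", f"idx_{i}", 0) for i in range(10)]
large = [("public", f"idx_{i}", 0) for i in range(1000)]
print(len(read_index_stats(small)))   # 10
print(len(read_index_stats(large)))   # 1000
```

The loop body runs exactly once per row, which is why the row counts in the table above track the index count one-for-one.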
[X] Wrong: "The query time depends on the size of the table data."
[OK] Correct: This query reads only index stats, not the table rows, so table size does not affect it.
Understanding how system views scale helps you explain database monitoring and performance checks clearly.
"What if we added a filter on index name? How would that change the time complexity?"
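One way to explore that question (a sketch; `my_orders_idx` is a made-up index name): the filter shrinks the result to at most one row, but the view is still read row by row, so the time stays O(n) in the number of indexes.

```sql
-- Sketch: filter on a specific index name ('my_orders_idx' is hypothetical).
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
  AND indexrelname = 'my_orders_idx';
```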