Query history and profiling in Snowflake - Time & Space Complexity
When working with query history and profiling in Snowflake, we want to know how retrieval time changes as we request more records. The central question: how does the effort grow when we ask for more query history entries?
Analyze the time complexity of the following operation sequence.
```sql
SELECT query_id, user_name, start_time, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY start_time DESC
LIMIT 1000;
```
This query fetches the 1000 most recent queries from the past week, ordered by start time, newest first.
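To make the pipeline concrete, here is a minimal Python sketch that mimics the query's scan, filter, sort, and limit steps over synthetic records. The field names mirror the selected columns; the data and the `fetch_recent` helper are illustrative stand-ins, not a Snowflake API:

```python
from datetime import datetime, timedelta
import random

random.seed(0)

# Synthetic stand-in for snowflake.account_usage.query_history:
# each record carries the columns the SQL query selects.
now = datetime(2024, 1, 15, 12, 0, 0)
history = [
    {
        "query_id": f"q{i}",
        "user_name": random.choice(["alice", "bob"]),
        "start_time": now - timedelta(hours=i),
        "total_elapsed_time": random.randint(10, 5000),
    }
    for i in range(500)  # roughly 20 days of hourly queries
]

def fetch_recent(records, days=7, limit=1000):
    """Mimic WHERE start_time > DATEADD(day, -days, now) ORDER BY start_time DESC LIMIT limit."""
    cutoff = now - timedelta(days=days)
    # The filter touches every record: work proportional to the table size.
    matching = [r for r in records if r["start_time"] > cutoff]
    matching.sort(key=lambda r: r["start_time"], reverse=True)
    return matching[:limit]

rows = fetch_recent(history, days=7, limit=100)
print(len(rows))  # → 100
```

Asking for a larger `limit` returns more of the matching rows, so the data returned (and transferred) grows with the request size.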
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: Reading rows from the query_history table in the account_usage schema.
- How many times: The system scans the table, filters to rows from the past 7 days, then returns up to the requested limit (here, 1000 rows).
As you ask for more query history rows, the system reads more data to find and return those rows.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | Reads enough rows to find 10 recent queries. |
| 100 | Reads more rows to find 100 recent queries. |
| 1000 | Reads even more rows to find 1000 recent queries. |
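The table's pattern can be mimicked with a toy scan counter in Python. The fixed 50% match rate below is an arbitrary assumption for illustration, not a measured Snowflake figure:

```python
def count_reads(records_needed):
    """Simulate scanning a result stream until `records_needed`
    matching rows are found; count how many rows had to be read."""
    reads, found = 0, 0
    while found < records_needed:
        reads += 1
        if reads % 2 == 0:  # deterministic stand-in for a 50% match rate
            found += 1
    return reads

for n in (10, 100, 1000):
    print(n, count_reads(n))  # reads = 2 * n at every size: linear growth
```

Requesting 10x more rows requires 10x more reads; that proportionality is exactly what O(n) expresses.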
Pattern observation: The work grows roughly in direct proportion to the number of rows you request.
Time Complexity: O(n), where n is the number of rows requested.
In other words, retrieval time grows linearly: doubling the rows you ask for roughly doubles the data that must be scanned and transferred.
[X] Wrong: "Getting query history is always instant no matter how many rows I ask for."
[OK] Correct: More rows mean more data to scan and transfer, so it takes more time.
Understanding how data retrieval scales helps you design efficient monitoring and troubleshooting tools in real projects.
"What if we added a filter on user_name to only get queries from one user? How would the time complexity change?"
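One way to explore this follow-up question is with a toy cost model (the match rates below are illustrative assumptions, not Snowflake measurements). A selective user_name filter raises the number of rows scanned per returned row, but the growth in n stays linear:

```python
def reads_needed(n_requested, match_rate):
    """Rows that must be scanned to collect n_requested matching rows,
    given the fraction of scanned rows that pass the filter."""
    return round(n_requested / match_rate)

# No user_name filter: every recent row matches (rate 1.0).
# Hypothetical user_name filter: one query in ten belongs to that user (rate 0.1).
for n in (10, 100, 1000):
    print(n, reads_needed(n, 1.0), reads_needed(n, 0.1))
# The filtered column is 10x larger at every n, but both grow linearly: O(n).
```

The filter changes the constant factor (more rows scanned per match), not the complexity class; the time to return n rows is still O(n).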