# Access history and audit logging in Snowflake - Time & Space Complexity
When we check access history and audit logs in Snowflake, we want to know how long it takes to get results as the amount of logged data grows. The guiding question: how does the time to fetch logs change as the number of records increases?
Analyze the time complexity of the following code snippet.
```sql
SELECT event_time, user_name, action
FROM snowflake.account_usage.access_history
WHERE event_time > DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY event_time DESC
LIMIT 1000;
```
This query fetches the 1000 most recent access events from the past week, newest first.
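To see how Snowflake will execute this, you can prepend `EXPLAIN` to get the query plan without running the query (a sketch; the exact plan output depends on your account and data volume):

```sql
-- Show the execution plan without running the query.
-- The plan reveals the table scan, the filter on event_time, and the sort.
EXPLAIN
SELECT event_time, user_name, action
FROM snowflake.account_usage.access_history
WHERE event_time > DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY event_time DESC
LIMIT 1000;
```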
Identify the operations that repeat. A SQL query has no explicit loops, but the engine performs work per row:
- Primary operation: Scanning the access_history table for events in the last 7 days.
- How many times: The database scans all matching rows for the 7-day period before sorting and limiting results.
As the number of access events in the last 7 days grows, the scan and sort take longer.
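To see how large n actually is for your account, you can count the rows in the 7-day window before the limit is applied (a sketch using the article's `event_time` column):

```sql
-- n = number of access events in the last 7 days;
-- this is the row count the query must scan and sort before LIMIT applies.
SELECT COUNT(*) AS rows_in_window
FROM snowflake.account_usage.access_history
WHERE event_time > DATEADD(day, -7, CURRENT_TIMESTAMP());
```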
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 rows scanned and sorted |
| 100 | About 100 rows scanned and sorted |
| 1000 | About 1000 rows scanned and sorted |
Pattern observation: The scan grows in direct proportion to the number of matching rows, and the sort adds a logarithmic factor on top of that.
Time Complexity: O(n log n)
This means the time grows slightly faster than linearly: scanning n rows is O(n), and sorting them adds a log n factor, giving O(n log n) overall.
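You can compute the n · log n estimates for the table above directly in SQL (`LOG(2, n)` is Snowflake's base-2 logarithm; the counts are illustrative approximations, not measurements):

```sql
-- Approximate operation counts: n rows scanned, plus roughly n * log2(n)
-- comparisons to sort them.
SELECT n,
       n                    AS scan_ops,
       ROUND(n * LOG(2, n)) AS sort_ops
FROM (VALUES (10), (100), (1000)) AS t(n);
```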
[X] Wrong: "Fetching audit logs always takes the same time no matter how many records there are."
[OK] Correct: The database must scan and sort all matching rows, so more records mean more work and longer time.
Understanding how log queries scale helps you design better monitoring and troubleshooting tools in real projects.
"What if we added a clustering key on event_time? How would the time complexity change?"
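A hedged sketch of that idea: the `account_usage` views are read-only, so you cannot cluster `access_history` itself, but you could cluster a copy you own (the `my_db.audit` names below are hypothetical). With data physically ordered by `event_time`, Snowflake can prune micro-partitions outside the 7-day window, so the scan touches only recent rows; the sort is still O(n log n), but over a much smaller n.

```sql
-- Hypothetical: copy the audit data into a table you own,
-- since Snowflake's built-in account_usage views cannot be altered.
CREATE TABLE my_db.audit.access_history_copy AS
SELECT event_time, user_name, action
FROM snowflake.account_usage.access_history;

-- Cluster by event_time so time-range filters prune micro-partitions.
ALTER TABLE my_db.audit.access_history_copy
  CLUSTER BY (event_time);
```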