Resource monitors for cost control in Snowflake - Time & Space Complexity
We want to understand how the time to check and enforce resource limits grows as usage increases.
Specifically, how efficiently does Snowflake monitor usage to keep costs under control at scale?
Analyze the time complexity of the following resource monitor checks.
```sql
CREATE RESOURCE MONITOR my_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- A monitor has no effect until it is assigned to a warehouse (or the account)
ALTER WAREHOUSE my_warehouse SET RESOURCE_MONITOR = my_monitor;
```
```sql
-- Periodically check usage
SELECT SUM(CREDITS_USED)
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME >= DATEADD(DAY, -1, CURRENT_TIMESTAMP());

-- If a threshold is reached, trigger the action
ALTER WAREHOUSE my_warehouse SUSPEND;
```
This sequence creates a monitor, checks usage, and triggers actions when limits are reached.
Identify the operations that repeat: API calls, resource provisioning, and data transfers.
- Primary operation: Periodic usage check queries to monitor credit consumption.
- How many times: These checks happen regularly, depending on configuration (e.g., every few minutes).
As the number of warehouses or queries running increases, the monitor must check more usage data.
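The per-resource check loop can be sketched in Python. This is an illustrative model, not Snowflake's internal implementation (which is not public); the warehouse names and threshold percentages are hypothetical:

```python
# Sketch of a monitor interval: one usage check per monitored warehouse,
# so each interval costs O(n) in the number of monitored resources.

def check_usage(warehouses, credit_quota, notify_pct=80, suspend_pct=100):
    """Return the action triggered for each warehouse this interval."""
    actions = {}
    for name, credits_used in warehouses.items():  # O(n) scan over resources
        pct = 100 * credits_used / credit_quota
        if pct >= suspend_pct:
            actions[name] = "SUSPEND"
        elif pct >= notify_pct:
            actions[name] = "NOTIFY"
        else:
            actions[name] = "OK"
    return actions

usage = {"wh_etl": 95.0, "wh_bi": 82.0, "wh_dev": 10.0}
print(check_usage(usage, credit_quota=100))
```

Each additional monitored warehouse adds one more iteration to the loop, which is exactly the linear growth the table below illustrates.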
| Input Size (number of warehouses/queries) | Approx. API Calls / Operations per Interval |
|---|---|
| 10 | 10 usage checks per interval |
| 100 | 100 usage checks per interval |
| 1000 | 1000 usage checks per interval |
Pattern observation: The number of checks grows linearly with the number of monitored resources.
Time Complexity: O(n)
This means the time to monitor usage grows directly with the number of resources being tracked.
[X] Wrong: "Resource monitors check all usage instantly regardless of scale."
[OK] Correct: Each resource's usage must be checked separately, so more resources mean more checks and longer total time.
Understanding how monitoring scales helps you design cost controls that stay efficient as systems grow.
"What if the resource monitor aggregated usage data instead of checking each resource separately? How would the time complexity change?"
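As a starting point for that question, here is a hedged sketch (not Snowflake's actual design): if usage is folded into a running total as metering events arrive, the threshold check no longer scans every resource, making each check O(1) at the cost of an O(1) update per event.

```python
# Aggregated alternative: maintain one running total instead of scanning
# every resource at check time. Check cost drops from O(n) to O(1).

class AggregatedMonitor:
    def __init__(self, credit_quota, notify_pct=80, suspend_pct=100):
        self.credit_quota = credit_quota
        self.notify_pct = notify_pct
        self.suspend_pct = suspend_pct
        self.total_credits = 0.0

    def record(self, credits_used):
        # O(1) per metering event, regardless of how many warehouses exist
        self.total_credits += credits_used

    def check(self):
        # O(1): no per-resource scan
        pct = 100 * self.total_credits / self.credit_quota
        if pct >= self.suspend_pct:
            return "SUSPEND"
        if pct >= self.notify_pct:
            return "NOTIFY"
        return "OK"

monitor = AggregatedMonitor(credit_quota=100)
for credits in (40, 30, 15):  # usage reported by individual warehouses
    monitor.record(credits)
print(monitor.check())  # 85% of quota consumed
```

The trade-off: the aggregate answers "is the account over budget?" in constant time, but identifying *which* warehouse to suspend still requires per-resource information.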