Databases and schemas in Snowflake - Time & Space Complexity
We want to understand how the time to list databases and schemas changes as the number of these objects grows.
How does Snowflake handle these operations when there are many databases and schemas?
Analyze the time complexity of the following operation sequence.
-- List all databases
SHOW DATABASES;

-- For each database, list all its schemas (Snowflake Scripting block).
-- LAST_QUERY_ID() must still refer to the SHOW DATABASES result when the
-- cursor opens; SHOW output column names are lowercase, so quote "name".
EXECUTE IMMEDIATE $$
DECLARE
  db_cursor CURSOR FOR SELECT "name" FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
BEGIN
  FOR db_record IN db_cursor DO
    LET db_name STRING := db_record."name";
    EXECUTE IMMEDIATE 'SHOW SCHEMAS IN DATABASE ' || db_name;
  END FOR;
END;
$$;
This sequence lists all databases, then for each database lists all its schemas.
Identify the API calls, resource provisioning, and data transfers that repeat.
- Primary operation: The repeated call is SHOW SCHEMAS IN DATABASE, issued once per database.
- How many times: Once for each database found by SHOW DATABASES.
As the number of databases grows, the number of schema listing calls grows the same way.
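This call-count pattern can be modeled with a short Python sketch (the function name is hypothetical; it only counts calls and does not connect to Snowflake):

```python
def total_show_calls(num_databases: int) -> int:
    """One SHOW DATABASES call plus one SHOW SCHEMAS call per database."""
    return 1 + num_databases  # O(n) in the number of databases

# Reproduce the counts for 10, 100, and 1000 databases:
for n in (10, 100, 1000):
    print(f"{n} databases -> {total_show_calls(n)} calls")
```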
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 databases | 1 (SHOW DATABASES) + 10 (SHOW SCHEMAS) = 11 calls |
| 100 databases | 1 + 100 = 101 calls |
| 1000 databases | 1 + 1000 = 1001 calls |
Pattern observation: The total calls grow linearly with the number of databases.
Time Complexity: O(n)
This means the time to list all schemas grows directly in proportion to the number of databases.
[X] Wrong: "Listing schemas for all databases takes the same time no matter how many databases exist."
[OK] Correct: Each database requires a separate call to list its schemas, so more databases mean more calls and more time.
Understanding how operations scale with input size helps you design efficient database queries and scripts in real projects.
What if we changed the operation to list all schemas across all databases in a single call? How would the time complexity change?
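One way to attempt this in a single call, assuming the active role can see the relevant objects, is the account-level form of SHOW SCHEMAS:

```sql
-- One metadata call returns schemas across all visible databases
SHOW SCHEMAS IN ACCOUNT;
```

The client then issues O(1) calls regardless of how many databases exist, though the server still does work proportional to the total number of schemas, and the result set itself grows with that total.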