After running a complex SELECT query in Snowflake, you open the query profile. Which of the following best describes what the "Execution Time" metric represents?
Think about what "execution" means in the context of running a query.
The "Execution Time" in the query profile shows how long Snowflake spent actually running the query's operators: scanning tables, joining data, aggregating, and computing results. Parsing and compilation happen before execution, and transferring the result set to the client happens after it, so neither is included in this metric.
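The split between compilation and execution can also be read from the query history. A minimal sketch, assuming the default SNOWFLAKE.ACCOUNT_USAGE share is available in your account:

```sql
-- Compare compilation vs. execution time (both in milliseconds) for
-- recent queries. EXECUTION_TIME is the same phase the profile's
-- "Execution Time" metric reports.
SELECT query_id,
       compilation_time,    -- parse + plan, before execution
       execution_time,      -- actually running the operators
       total_elapsed_time
FROM snowflake.account_usage.query_history
ORDER BY start_time DESC
LIMIT 10;
```

Note that ACCOUNT_USAGE views can lag real time by up to a few hours; for very recent queries the INFORMATION_SCHEMA.QUERY_HISTORY table function is an alternative.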
In Snowflake's query plan, which component is responsible for determining the order and method of joining tables?
Consider which part decides how to best run the query.
The Query Optimizer analyzes the SQL and chooses the most efficient plan: the order in which tables are joined, the join method for each step, and which micro-partitions can be pruned, all with the goal of minimizing resource use and elapsed time. (Snowflake does not use traditional indexes; pruning of micro-partitions plays that role.)
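You can inspect the optimizer's chosen plan, including join order, without running the query. A sketch with placeholder table and column names:

```sql
-- EXPLAIN prints the optimizer's plan: operator tree, join order,
-- and partition-pruning estimates. Nothing is executed.
EXPLAIN
SELECT c.name, SUM(o.amount)
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;
```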
You notice a Snowflake query is running slower than expected. The query profile shows a large percentage of time spent in "Spilling to Disk". What does this indicate and what is the best immediate action?
Spilling to disk usually means memory is insufficient.
Spilling happens when intermediate results exceed the memory available to the warehouse, so Snowflake writes them to local disk and, if that fills up, to remote storage, which is slower still. The best immediate actions are increasing the warehouse size (more memory per node) or rewriting the query to produce smaller intermediate results, for example by filtering earlier or projecting fewer columns.
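A way to confirm and address the spilling, assuming a placeholder warehouse name and the ACCOUNT_USAGE share:

```sql
-- Quantify the spill: both columns should ideally be zero.
SELECT query_id,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE query_id = '<query_id>';   -- the slow query's ID

-- Give each query more memory by resizing the warehouse.
-- MY_WH is a placeholder; sizes double memory at each step.
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```

Remote spilling is far more expensive than local spilling, so a nonzero remote figure is the strongest signal that a larger warehouse is warranted.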
Which part of Snowflake's query profile can help you verify which tables and columns were accessed during a query for auditing purposes?
Look for where Snowflake shows what data was touched.
The "Objects Accessed" section in the query profile lists all tables and columns the query read or modified, which is useful for auditing data access.
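The same audit information is queryable at scale through the ACCESS_HISTORY view. A sketch, assuming Enterprise edition or higher (where this view is populated) and a placeholder query ID:

```sql
-- Each row lists the objects one query touched, as JSON arrays
-- of table and column names.
SELECT query_id,
       direct_objects_accessed,  -- tables/views named in the SQL
       base_objects_accessed,    -- underlying base tables (through views)
       objects_modified          -- targets of writes
FROM snowflake.account_usage.access_history
WHERE query_id = '<query_id>';
```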
You have a query joining two large tables in Snowflake. The query profile shows a "Broadcast Join" was chosen by the optimizer, but the query runs slowly and uses excessive resources. What is the best next step to improve performance?
Broadcast joins send one table to all nodes; think about when this is efficient.
Broadcast joins are efficient only when one side is small enough to copy cheaply to every node. When both tables are large, broadcasting one of them causes heavy network traffic and memory pressure on each node. Steering the optimizer toward a shuffle (repartition) join, which redistributes both tables by join key so each node handles only a slice of the data, spreads the work evenly and can improve performance. In practice this means helping the optimizer estimate sizes correctly, for example by applying filters before the join or keeping table statistics representative.
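A first diagnostic step is to check whether the side the optimizer broadcast is actually small. A sketch with placeholder table names:

```sql
-- Snowflake maintains row counts and sizes automatically; if the
-- broadcast side has millions of rows, broadcasting was a poor fit.
SELECT table_name, row_count, bytes
FROM information_schema.tables
WHERE table_name IN ('ORDERS', 'CUSTOMERS');
```

If both sides turn out large, reducing them before the join (earlier filters, pre-aggregation) gives the optimizer room to pick a repartition strategy instead.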