
Why data sharing eliminates data copies in Snowflake - Performance Analysis

Time Complexity: Why data sharing eliminates data copies
O(1)
Understanding Time Complexity

We want to understand how the amount of work grows when we share data instead of copying it.

Specifically: how many operations are required as the size of the shared data grows?

Scenario Under Consideration

Analyze the time complexity of sharing data instead of copying it.


-- Create a share
CREATE SHARE my_share;

-- Grant the share access to a database
-- (Snowflake adds databases to a share via GRANT, not ALTER SHARE ADD)
GRANT USAGE ON DATABASE my_database TO SHARE my_share;

-- Consumer creates a database from the share
CREATE DATABASE shared_db FROM SHARE provider_account.my_share;

This sequence shares a database by granting access to it: the consumer queries the provider's storage directly, so no data is duplicated.

Identify Repeating Operations

Look at what happens repeatedly when sharing data.

  • Primary operation: Granting access to data via share metadata.
  • How many times: Once per share setup, regardless of data size.
How Execution Grows With Input

Sharing data does not copy data, so operations stay almost the same as data grows.

Input Size (n)    Approx. API Calls/Operations
10                3 (create share, add database, create consumer DB)
100               3 (same operations, no extra copies)
1000              3 (still just setup calls, no data duplication)

Pattern observation: The number of operations stays constant, not growing with data size.
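The constant-operation pattern above can be sketched in Python. This is a hypothetical model, not Snowflake's actual internals: Database, Provider, and Consumer are illustrative names showing that a share is a metadata grant plus a pointer, so the cost is independent of row count.

```python
# A minimal sketch (hypothetical model, not Snowflake's internals) of why
# sharing is O(1): the provider records one grant in metadata, and the
# consumer's "database" is a reference to the same storage. No rows move.

class Database:
    def __init__(self, rows):
        self.rows = rows          # the underlying table data

class Provider:
    def __init__(self, db):
        self.db = db
        self.shares = {}          # share name -> database (metadata only)

    def create_share(self, name):
        self.shares[name] = self.db   # one metadata entry: O(1)

class Consumer:
    def mount(self, provider, share_name):
        # Mounting the share stores a pointer to the provider's storage,
        # so the cost is constant regardless of how many rows exist.
        self.db = provider.shares[share_name]
        return self.db

big = Database(rows=list(range(1_000_000)))
provider = Provider(big)
provider.create_share("my_share")
shared = Consumer().mount(provider, "my_share")
assert shared.rows is big.rows    # same object: zero rows duplicated
```

Whether the table holds ten rows or a million, the same three constant-time steps run: create the share, record the grant, mount the pointer.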

Final Time Complexity

Time Complexity: O(1)

This means the work to share data stays the same no matter how big the data is.

Common Mistake

[X] Wrong: "Sharing data copies all the data behind the scenes."

[OK] Correct: Sharing only grants access pointers, so no data is duplicated or copied.

Interview Connect

Being able to explain that sharing grants access to a single copy of the data, rather than duplicating it, is a useful talking point when discussing efficient data access in cloud systems.

Self-Check

"What if data sharing required copying data to the consumer account? How would the time complexity change?"