Snowflake · Cloud · ~10 mins

COPY INTO command in Snowflake - Step-by-Step Execution

Process Flow - COPY INTO command
Start COPY INTO command
Connect to source data
Read data files
Parse and transform data
Load data into target table
Handle errors or skip bad records
Finish and report status
The COPY INTO command reads data files from a source, parses them, loads the data into a target table, and handles errors before finishing.
Execution Sample
Snowflake
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1)
ON_ERROR = 'CONTINUE';
This command copies CSV data from a stage into a table, skipping the header and continuing on errors.
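ON_ERROR accepts values other than 'CONTINUE'. A brief sketch of the common options, using the same stage and table as above:

```sql
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1)
ON_ERROR = 'SKIP_FILE';         -- skip any file that contains a bad row
-- ON_ERROR = 'ABORT_STATEMENT' -- stop the whole load on the first error (the default)
-- ON_ERROR = 'CONTINUE'        -- skip only the bad rows and keep loading
```

Choosing between these is a trade-off: 'CONTINUE' maximizes loaded rows, while 'ABORT_STATEMENT' guarantees you notice problems immediately.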
Process Table
| Step | Action | Details | Result |
|------|--------|---------|--------|
| 1 | Start COPY INTO | Target: my_table, Source: @my_stage/data/ | Ready to load data |
| 2 | Connect to source | Access files in @my_stage/data/ | Files located |
| 3 | Read data files | Read CSV files, skip first line | Data rows ready |
| 4 | Parse data | Split fields by comma | Parsed rows |
| 5 | Load data | Insert rows into my_table | Rows inserted |
| 6 | Handle errors | On error: CONTINUE | Bad rows skipped, process continues |
| 7 | Finish | Report number of rows loaded and skipped | Load complete |
💡 All files processed; errors handled by skipping bad rows
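After step 7, the reported counts can be verified with Snowflake's COPY_HISTORY table function. A sketch assuming the load ran within the last hour:

```sql
SELECT file_name, row_count, row_parsed, error_count, status
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
    TABLE_NAME => 'my_table',
    START_TIME => DATEADD(hour, -1, CURRENT_TIMESTAMP())));
```

Comparing row_parsed with row_count shows how many rows were skipped per file.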
Status Tracker
| Variable | Start | After Step 3 | After Step 5 | Final |
|----------|-------|--------------|--------------|-------|
| Files to load | List of files in stage | Files read | Files loaded | All files processed |
| Rows read | 0 | Data rows read | Rows inserted | Total rows loaded |
| Errors | 0 | Errors found | Errors skipped | Total errors skipped |
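To see which rows would land in the Errors count before any data is actually loaded, COPY INTO supports a validation-only run. A sketch using the same command:

```sql
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1)
VALIDATION_MODE = 'RETURN_ERRORS';  -- report parse errors, load nothing
```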
Key Moments - 3 Insights
Why does the COPY INTO command skip the first line of the CSV?
Because the FILE_FORMAT option SKIP_HEADER = 1 tells Snowflake to ignore the header row, as shown in step 3 of the Process Table.
What happens when a bad row is encountered during loading?
The ON_ERROR = 'CONTINUE' option causes the command to skip the bad row and continue loading, as explained in step 6 of the Process Table.
How does Snowflake know where to find the data files?
The FROM clause specifies the stage location (@my_stage/data/), which Snowflake accesses in step 2 of the Process Table.
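To confirm that the stage location actually contains files before running the load, the stage can be created and inspected. A sketch (the PUT command runs from a client such as SnowSQL, not a worksheet):

```sql
CREATE STAGE IF NOT EXISTS my_stage;
PUT file:///tmp/data.csv @my_stage/data/;  -- upload a local file via SnowSQL
LIST @my_stage/data/;                      -- list the files Snowflake can see
```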
Visual Quiz - 3 Questions
Test your understanding
Looking at the Process Table, at which step does Snowflake parse the CSV data?
A. Step 3
B. Step 4
C. Step 5
D. Step 6
💡 Hint
Refer to the 'Parse data' action in step 4 of the Process Table.
According to the Status Tracker, what happens to the 'Errors' variable after step 6?
A. It increases and bad rows are skipped
B. It resets to zero
C. It increases and stops the load
D. It decreases
💡 Hint
Check the 'Errors' row in the Status Tracker after step 6 and the 'Handle errors' action in the Process Table.
If we remove SKIP_HEADER = 1 from FILE_FORMAT, what changes in the execution?
A. No data will be loaded
B. The command will fail immediately
C. The header row will be loaded as data
D. Errors will be ignored
💡 Hint
Consider step 3 in the Process Table, where SKIP_HEADER = 1 causes the first line to be skipped.
Concept Snapshot
COPY INTO command loads data files into a Snowflake table.
Specify source location with FROM and file format with FILE_FORMAT.
Use ON_ERROR to control error handling (e.g., CONTINUE to skip bad rows).
SKIP_HEADER option skips header lines in files.
The command reads, parses, loads data, and reports results.
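The points above can be combined into one reusable pattern: define a named file format once, then reference it from COPY INTO. A sketch with hypothetical names:

```sql
CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = 'CSV'
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1;

COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')
ON_ERROR = 'CONTINUE';
```

A named format keeps the parsing rules in one place, so every load of the same file type stays consistent.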
Full Transcript
The COPY INTO command in Snowflake loads data from files into a table. It starts by connecting to the source location, reading the files, parsing the data according to the file format, and then loading the data into the target table. If errors occur, the command can skip bad rows and continue loading based on the ON_ERROR setting. The process finishes by reporting how many rows were loaded and how many were skipped. For example, using SKIP_HEADER=1 skips the first line of CSV files, which is usually the header. This step-by-step flow helps beginners understand how data moves from files into Snowflake tables safely and efficiently.