Apache Spark · Data · ~10 mins

Delta Lake introduction in Apache Spark - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1 · Fill in the blank · Easy

Complete the code to read a Delta Lake table using Spark.

Apache Spark
df = spark.read.format("[1]").load("/delta/events")
A. delta
B. parquet
C. json
D. csv
Common Mistakes
Using 'parquet' or 'csv' instead of 'delta' will not read Delta Lake tables correctly.
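For reference, the completed read wrapped in a small helper (a sketch only; the helper name is illustrative, and it assumes a `SparkSession` configured with the delta-spark package):

```python
# Sketch: assumes a SparkSession already configured for Delta Lake (delta-spark).
def read_delta_events(spark, path="/delta/events"):
    """Read a Delta Lake table. The format string must be "delta";
    "parquet", "json", or "csv" would ignore the Delta transaction log."""
    return spark.read.format("delta").load(path)
```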
Task 2 · Fill in the blank · Medium

Complete the code to write a DataFrame as a Delta Lake table, replacing any existing data at the path.

Apache Spark
df.write.format("delta").mode("[1]").save("/delta/events")
A. overwrite
B. error
C. ignore
D. append
Common Mistakes
Using 'append' will add data instead of replacing it.
Using 'ignore' or 'error' will not overwrite existing data.
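The save modes differ as follows, sketched in a small helper (illustrative name; assumes a DataFrame from a Delta-enabled SparkSession):

```python
# Sketch: Spark save modes for Delta writes.
#   "overwrite"        - replaces existing table data
#   "append"           - adds new rows to existing data
#   "ignore"           - silently does nothing if data already exists
#   "error"/"errorifexists" - raises if data already exists (the default)
def overwrite_delta_events(df, path="/delta/events"):
    """Replace the Delta table's contents with the DataFrame."""
    df.write.format("delta").mode("overwrite").save(path)
```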
Task 3 · Fill in the blank · Hard

Complete the code to read a specific past version of a Delta Lake table (time travel).

Apache Spark
df = spark.read.format("delta").option("[1]", "5").load("/delta/events")
A. timestampAsOf
B. versionAsOf
C. timeTravel
D. history
Common Mistakes
Using 'timestampAsOf' requires a timestamp, not a version number.
Options like 'timeTravel' or 'history' are not valid for this purpose.
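Both valid time-travel options, side by side in helper sketches (names and default values are illustrative; assumes a Delta-enabled SparkSession):

```python
# Sketch: Delta Lake time travel.
def read_delta_version(spark, path="/delta/events", version=5):
    """versionAsOf takes a table version number."""
    return spark.read.format("delta").option("versionAsOf", str(version)).load(path)

def read_delta_as_of(spark, path="/delta/events", ts="2024-01-01"):
    """timestampAsOf takes a timestamp string instead (example value)."""
    return spark.read.format("delta").option("timestampAsOf", ts).load(path)
```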
Task 4 · Fill in the blank · Hard

Fill both blanks to create a Delta Lake table with schema merging (schema evolution) enabled.

Apache Spark
df.write.format("[1]").mode("overwrite").option("[2]", "true").save("/delta/events")
A. delta
B. parquet
C. deltaSchemaEnforcement
D. mergeSchema
Common Mistakes
Using 'parquet' format will not create a Delta Lake table.
Option 'deltaSchemaEnforcement' is not a valid Delta Lake option.
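For reference, a helper sketch of the completed write (illustrative name; assumes a DataFrame from a Delta-enabled SparkSession). Note that schema enforcement is Delta Lake's default behavior and needs no option; mergeSchema relaxes it by allowing new columns:

```python
# Sketch: mergeSchema lets the incoming DataFrame add new columns
# to the table schema; without it, Delta rejects mismatched schemas.
def overwrite_with_merge_schema(df, path="/delta/events"):
    (df.write.format("delta")
        .mode("overwrite")
        .option("mergeSchema", "true")
        .save(path))
```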
Task 5 · Fill in the blank · Hard

Fill all three blanks to create a Delta Lake table, enable schema merge, and read a specific version.

Apache Spark
df.write.format("[1]").mode("append").option("[2]", "true").save("/delta/events")
df2 = spark.read.format("[3]").option("versionAsOf", 3).load("/delta/events")
A. delta
B. mergeSchema
C. parquet
Common Mistakes
Using 'parquet' format will not work for Delta Lake features.
For reading a specific version, the format must be 'delta'.
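Putting the pieces together, a round-trip helper sketch (illustrative name and defaults; assumes a Delta-enabled SparkSession): append with schema merging, then read back an earlier version:

```python
# Sketch: write with schema merging, then time-travel to an earlier version.
def append_then_read_version(df, spark, path="/delta/events", version=3):
    """Append df (allowing new columns), then read the table as of `version`."""
    (df.write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .save(path))
    return spark.read.format("delta").option("versionAsOf", version).load(path)
```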