Complete the code to read a Delta Lake table using Spark.
df = spark.read.format("[1]").load("/delta/events")
Delta Lake tables are read using the format "delta" in Spark.
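Filled in, the card's read looks like this. A minimal sketch: it assumes pyspark and delta-spark are installed and `spark` is a SparkSession already configured with the Delta Lake extensions (not shown on the card).

```python
# Completed read from the card above; assumes a Delta-enabled SparkSession.
def read_delta(spark, path="/delta/events"):
    # "delta" is the data source name that Delta Lake registers with Spark
    return spark.read.format("delta").load(path)
```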
Complete the code to write a DataFrame as a Delta Lake table.
df.write.format("delta").mode("[1]").save("/delta/events")
To replace existing data, use mode "overwrite" when writing to Delta Lake.
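The completed write, as a sketch assuming `df` is an existing Spark DataFrame:

```python
# Completed write from the card above; mode("overwrite") replaces any
# existing data at the target path.
def overwrite_delta(df, path="/delta/events"):
    df.write.format("delta").mode("overwrite").save(path)
```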
Complete the code to read a previous version of a Delta Lake table (time travel).
df = spark.read.format("delta").option("[1]", "5").load("/delta/events")
To read a previous version of a Delta Lake table, use the option "versionAsOf" with the version number.
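The completed time-travel read, sketched under the same assumption of a Delta-enabled SparkSession `spark`:

```python
# Completed time-travel read from the card above; versionAsOf selects an
# earlier snapshot of the table by its version number.
def read_delta_version(spark, version, path="/delta/events"):
    return spark.read.format("delta").option("versionAsOf", version).load(path)
```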
Fill both blanks to create a Delta Lake table and enable schema evolution.
df.write.format("[1]").mode("overwrite").option("[2]", "true").save("/delta/events")
Use format "delta" to write Delta Lake tables and option "mergeSchema" set to "true" to enable schema enforcement and evolution.
Fill all three blanks to create a Delta Lake table, enable schema merge, and read a specific version.
df.write.format("[1]").mode("append").option("[2]", "true").save("/delta/events") df2 = spark.read.format("[3]").option("versionAsOf", 3).load("/delta/events")
Write with format "delta" and option "mergeSchema" to allow schema changes. Read with format "delta" and option "versionAsOf" to time travel to version 3.