Complete the code to read a JSON file into a DataFrame.
df = spark.read.[1]("data.json")
The json method reads JSON files into a DataFrame.
Complete the code to select the nested field 'address.city' from the DataFrame.
df.select("[1]")
address[city] is not valid syntax here. Use dot notation to select nested fields in Spark DataFrames.
Fix the error in the code to explode the nested array field 'phones'.
from pyspark.sql.functions import [1]
df.select(explode(df.phones)).show()
Neither flatten nor collect_list explodes arrays into rows. The explode function expands an array column into one row per element.
Fill both blanks to create a dictionary of word lengths for words longer than 3 characters.
lengths = {word: [1] for word in words if [2]}
Use len(word) for the dictionary values and len(word) > 3 for the filter. The dictionary comprehension maps each word to its length, keeping only words longer than 3 characters.
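The completed comprehension, with a small made-up word list for illustration:

```python
words = ["spark", "is", "fast", "api"]

# Blank [1] is len(word); blank [2] is len(word) > 3.
lengths = {word: len(word) for word in words if len(word) > 3}
# Only "spark" (5) and "fast" (4) pass the filter.
```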
Fill all three blanks to create a dictionary with uppercase keys and values greater than 0.
result = {[1]: [2] for k, v in data.items() if v [3] 0}
k.lower() would give lowercase instead of uppercase keys, and < or == would filter the wrong values. The dictionary comprehension uses k.upper() for uppercase keys, keeps each value v, and the condition v > 0 keeps only values greater than zero.
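The completed comprehension, with a small made-up input dictionary for illustration:

```python
data = {"a": 1, "b": -2, "c": 3}

# Blank [1] is k.upper(), blank [2] is v, blank [3] is >.
result = {k.upper(): v for k, v in data.items() if v > 0}
# "b" is dropped because -2 is not greater than 0.
```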