Complete the code to import the PySpark SQL functions module, which provides window functions.
from pyspark.sql import [1]
We import pyspark.sql.functions as F so we can use window functions such as row_number().
Complete the code to define a window partitioned by 'department' and ordered by 'salary' descending.
from pyspark.sql.window import Window
window_spec = Window.partitionBy('department').[1](F.col('salary').desc())
The correct method to order rows in a window is orderBy.
Complete the code to add a row number column using the window specification.
df = df.withColumn('row_num', F.[1]().over(window_spec))
row_number() assigns a unique row number within the window partition.
Fill both blanks to create a window partitioned by 'team' and ordered by 'date' ascending.
window_spec = Window.[1]('team').[2](F.col('date').asc())
Window specs use partitionBy to group and orderBy to sort rows.
Fill all three blanks to create a dictionary comprehension that maps each word to its length, keeping only words whose length is greater than 3.
lengths = { [1]: [2] for [3] in words if len([3]) > 3 }
The comprehension maps each word to its length using len(word) for words longer than 3 characters.
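A filled-in sketch of the comprehension above, using a hypothetical word list for illustration:

```python
# Hypothetical input list; only words longer than 3 characters are kept.
words = ["sun", "moon", "stars", "sky"]

# word fills blanks [1] and [3]; len(word) fills blank [2].
lengths = {word: len(word) for word in words if len(word) > 3}
```

Here "sun" and "sky" are filtered out, leaving entries for "moon" and "stars".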