Window functions in Apache Spark perform calculations across a set of rows related to the current row. You start with a DataFrame and define a window specification that partitions the data (for example, by department) and orders it within each partition (for example, by salary). You then apply a window function such as rank() over this window. The function computes its result within each partition and appends it as a new column, without removing or collapsing any rows. For example, ranking employees by salary within each department assigns ranks starting at 1 in every department. The original data is preserved; it is simply enriched with a derived column. Key points: partitioning controls where the function resets, ordering is required for ranking functions, and window functions never change the number of rows, unlike groupBy aggregations, which collapse each group into a single row.