This lesson shows how to create DataFrames in Apache Spark by reading files such as CSV, JSON, or Parquet. First, we start a SparkSession, the entry point for DataFrame operations. Then we choose the file type, either by setting the format explicitly or by calling a format-specific read method such as spark.read.csv. We set options such as header=True to tell Spark that the file's first row contains column names, and inferSchema=True to have Spark detect column data types automatically. Next, we pass the file path to the reader; Spark reads the file and returns a DataFrame, which we can then use to analyze or display the data. The execution table traces each step, from starting Spark to showing the DataFrame. Key moments clarify why options like header and inferSchema matter, and the visual quiz tests understanding of each step's role. This walkthrough helps beginners see how Spark reads files into DataFrames for data science tasks.