Complete the code to print the main difference between Hadoop and Spark.
print("Hadoop processes data using [1] computing.")
Hadoop processes data mainly using batch computing: it splits large data sets into chunks and processes them in scheduled jobs, reading from and writing to disk.
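Based on the explanation above, one possible completion fills the blank with "batch":

```python
# Completed statement: Hadoop's core model is batch processing.
answer = "batch"
print(f"Hadoop processes data using {answer} computing.")
```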
Complete the code to show which framework is faster for iterative tasks.
print("Spark is generally [1] than Hadoop for iterative algorithms.")
Spark is faster than Hadoop for iterative tasks because it keeps intermediate data in memory instead of writing it to disk between steps.
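Following the explanation, a possible completion fills the blank with "faster":

```python
# Spark's in-memory caching speeds up iterative algorithms.
answer = "faster"
print(f"Spark is generally {answer} than Hadoop for iterative algorithms.")
```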
Fix the error in the code to correctly compare data processing models.
model = "Spark uses [1] processing, Hadoop uses batch processing."
Spark supports streaming processing, which allows near-real-time data handling, unlike Hadoop's disk-based batch model.
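Per the explanation, the corrected line fills the blank with "streaming":

```python
# Corrected comparison string with the blank filled in.
model = "Spark uses streaming processing, Hadoop uses batch processing."
print(model)
```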
Fill both blanks to create a dictionary comparing Hadoop and Spark features.
comparison = {"Hadoop": "[1] processing", "Spark": "[2] processing"}
Hadoop uses batch processing, while Spark supports streaming processing for real-time data.
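With both blanks filled as the explanation indicates, the completed dictionary might look like:

```python
# Dictionary comparing the two frameworks' processing styles.
comparison = {"Hadoop": "batch processing", "Spark": "streaming processing"}
for framework, feature in comparison.items():
    print(f"{framework}: {feature}")
```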
Fill all three blanks to create a summary dictionary with key features.
summary = {"Speed": "[1]", "Data Model": "[2]", "Memory Usage": "[3]"}
Spark is faster thanks to in-memory computing; Hadoop uses MapReduce as its data model and processes data on disk rather than in memory.
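The prompt does not say which framework the summary describes; one plausible reading, based on the explanation, is that it summarizes Hadoop (an assumption), with the blank values chosen to match:

```python
# Assumed reading: the summary describes Hadoop, per the explanation above.
summary = {
    "Speed": "slower than Spark",  # Spark is faster via in-memory computing
    "Data Model": "MapReduce",     # Hadoop's data processing model
    "Memory Usage": "disk-based",  # Hadoop does not process data in memory
}
for key, value in summary.items():
    print(f"{key}: {value}")
```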