Complete the code to create a Spark session with dynamic allocation enabled.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('App').config('spark.dynamicAllocation.enabled', [1]).getOrCreate()
Dynamic allocation must be set to 'true' to enable auto-scaling of executors in Spark.
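One way the completed snippet might look (a sketch, assuming pyspark is installed; the shuffle-tracking line is an extra assumption, since dynamic allocation also needs an external shuffle service or shuffle tracking to release executors):

```python
from pyspark.sql import SparkSession

# Pass the string 'true' to enable dynamic allocation.
spark = (
    SparkSession.builder
    .appName('App')
    .config('spark.dynamicAllocation.enabled', 'true')
    # Assumed addition: without an external shuffle service, shuffle
    # tracking is needed for executors to be decommissioned safely.
    .config('spark.dynamicAllocation.shuffleTracking.enabled', 'true')
    .getOrCreate()
)
```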
Complete the code to set the minimum number of executors to 2 in Spark configuration.
spark.conf.set('spark.dynamicAllocation.minExecutors', [1])
The value for spark.dynamicAllocation.minExecutors must be a string representing the number of executors, e.g., '2'.
Fix the error in the code to correctly set the maximum number of executors to 10.
spark.conf.set('spark.dynamicAllocation.maxExecutors', [1])
The maximum executors setting requires a string value, so use '10' with quotes.
Fill both blanks to create a dictionary that maps executor IDs to their memory sizes in GB.
executor_memory = { [1]: [2] for [1] in ['exec1', 'exec2', 'exec3'] }
The dictionary comprehension uses executor IDs as keys (strings like 'exec1') and memory sizes as values (integers like 8).
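A filled-in sketch of the comprehension above, using `eid` as the loop variable and 8 GB as the memory size (both are illustrative choices, not fixed by the exercise):

```python
# [1] -> the loop variable `eid`, [2] -> the memory size in GB.
executor_memory = {eid: 8 for eid in ['exec1', 'exec2', 'exec3']}
print(executor_memory)  # {'exec1': 8, 'exec2': 8, 'exec3': 8}
```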
Fill all three blanks to create a filtered dictionary of executors with memory greater than 10 GB.
filtered_memory = { [1]: [2] for [1], [2] in executor_memory.items() if [2] > 10 }
The comprehension iterates over executor_memory.items() with variables executor and mem. It filters for memory values greater than 10.
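A filled-in sketch of the filtering comprehension, with sample memory values assumed so there is something to filter:

```python
# Sample data (the exact sizes are assumptions for illustration).
executor_memory = {'exec1': 8, 'exec2': 16, 'exec3': 12}

# [1] -> executor (the key), [2] -> mem (the value); keep entries over 10 GB.
filtered_memory = {executor: mem
                   for executor, mem in executor_memory.items()
                   if mem > 10}
print(filtered_memory)  # {'exec2': 16, 'exec3': 12}
```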