Complete the code to specify the default replication factor in Hadoop configuration.
conf.set("dfs.replication", [1]);
The default replication factor in HDFS is 3, which ensures data reliability by keeping three copies of each block.
Complete the code to get the block size from Hadoop configuration.
long blockSize = conf.getLong("dfs.blocksize", [1]);
The default block size in Hadoop is 128 MB, which is 134217728 bytes.
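A quick arithmetic check of that byte count, sketched in Python:

```python
# 128 MB expressed in bytes: 128 * 1024 (KB) * 1024 (MB)
block_size = 128 * 1024 * 1024
print(block_size)  # 134217728
```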
Fix the error in the code to correctly retrieve the replication factor as an integer.
int replication = conf.getInt("dfs.replication", [1]);
The getInt method expects an integer default value, not a string.
Fill both blanks to create a dictionary comprehension that maps block IDs to their replication counts, filtering blocks with replication less than 3.
block_replications = {block_id: replication for block_id, replication in blocks.items() if replication [1] [2]
The comprehension keeps only blocks whose replication count is less than 3.
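With the blanks filled in, the comprehension looks like this; the `blocks` dictionary here is hypothetical sample data for illustration only:

```python
# Hypothetical sample data: block IDs mapped to replication counts.
blocks = {"blk_1": 3, "blk_2": 1, "blk_3": 2}

# Keep only under-replicated blocks (replication < 3).
block_replications = {block_id: replication
                      for block_id, replication in blocks.items()
                      if replication < 3}

print(block_replications)  # {'blk_2': 1, 'blk_3': 2}
```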
Fill all three blanks to create a dictionary comprehension that maps block IDs to their sizes for blocks with size greater than 128 MB.
large_blocks = { [1] : [2] for [3] in blocks.items() if size > 134217728 }
The comprehension unpacks each block tuple into block_id and size, then maps block_id to size for large blocks.
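Filled in, the comprehension reads as below; the `blocks` dictionary is again hypothetical sample data, with sizes in bytes:

```python
# Hypothetical sample data: block IDs mapped to sizes in bytes.
# 134217728 bytes = 128 MB; 268435456 bytes = 256 MB.
blocks = {"blk_a": 134217728, "blk_b": 268435456, "blk_c": 1048576}

# Keep only blocks strictly larger than 128 MB.
large_blocks = {block_id: size
                for block_id, size in blocks.items()
                if size > 134217728}

print(large_blocks)  # {'blk_b': 268435456}
```

Note that a block of exactly 128 MB (`blk_a`) is excluded, because the condition uses a strict `>` comparison.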