Practice - 5 Tasks
Answer the questions below
Task 1 - Fill in the blank (easy)
Complete the command to start the decommissioning process for a node in Hadoop.

    hdfs dfsadmin -[1] <node-hostname>

Hint - common mistakes:
- Using 'decommission' instead of the 'refreshNodes' flag.
- Trying to decommission without updating the exclude file.

Answer: The command 'hdfs dfsadmin -refreshNodes' tells Hadoop to reload the list of nodes to decommission.
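For context, the two steps behind this answer can be sketched as a script. The hostname and the /tmp stand-in path are illustrative assumptions, and the hdfs call is left as a comment because it needs a live NameNode:

```shell
# Stand-in for /etc/hadoop/conf/dfs.exclude (the file named by the
# dfs.hosts.exclude property); the /tmp path and hostname are assumptions.
EXCLUDE_FILE=/tmp/dfs.exclude
echo 'worker03.example.com' >> "$EXCLUDE_FILE"   # mark the node for retirement

# With the exclude list updated, tell the NameNode to re-read it and
# begin moving the node's blocks elsewhere:
# hdfs dfsadmin -refreshNodes
```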
Task 2 - Fill in the blank (medium)
Complete the command to add a new node to the Hadoop cluster by updating the include file.

    echo '[1]' >> /etc/hadoop/conf/dfs.include

Hint - common mistakes:
- Adding the NameNode hostname instead of the new node's.
- Editing the exclude file instead of the include file.

Answer: You add the new node's hostname to the dfs.include file to allow it to join the cluster.
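The same step sketched with a /tmp stand-in path and a hypothetical hostname (on a real cluster the include file lives wherever the dfs.hosts property points):

```shell
INCLUDE_FILE=/tmp/dfs.include           # stand-in for /etc/hadoop/conf/dfs.include
NEW_NODE=worker05.example.com           # hypothetical new DataNode hostname

echo "$NEW_NODE" >> "$INCLUDE_FILE"     # permit this host to register with HDFS

# A refresh is still needed before the NameNode honours the new list:
# hdfs dfsadmin -refreshNodes
```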
Task 3 - Fill in the blank (hard)
Fix the error in the command to check the decommission status of nodes.

    hdfs dfsadmin -[1]

Hint - common mistakes:
- Using 'status' or 'check', which are not valid dfsadmin flags.
- Trying to list nodes with an incorrect flag.

Answer: The 'hdfs dfsadmin -report' command shows the status of all nodes, including decommissioning progress.
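The report is plain text, so decommission progress can be filtered with grep. The sketch below uses a simulated report excerpt, since the real command needs a running cluster; addresses and hostnames are assumptions:

```shell
# On a real cluster:  report=$(hdfs dfsadmin -report)
# Simulated excerpt of that output for illustration:
report='Name: 10.0.0.5:9866 (worker03.example.com)
Decommission Status : Decommission in progress
Name: 10.0.0.6:9866 (worker04.example.com)
Decommission Status : Normal'

# Show only the decommission state of each DataNode:
printf '%s\n' "$report" | grep 'Decommission Status'
```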
Task 4 - Fill in the blank (hard)
Fill both blanks to update the exclude file and refresh nodes to decommission a node.

    echo '[1]' >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -[2]

Hint - common mistakes:
- Using wrong flags like 'decommissionNode' instead of 'refreshNodes'.
- Not updating the exclude file before refreshing nodes.

Answer: You add the node's hostname to dfs.exclude and then run 'hdfs dfsadmin -refreshNodes' to start decommissioning.
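Putting both blanks together, a fuller sketch of the flow, including polling the report until the node is fully retired. The hdfs calls are comments and the hostname and path are assumptions:

```shell
NODE=worker03.example.com                # hypothetical node to decommission
EXCLUDE_FILE=/tmp/dfs.exclude            # stand-in for /etc/hadoop/conf/dfs.exclude

echo "$NODE" >> "$EXCLUDE_FILE"          # step 1: list the node in the exclude file
# hdfs dfsadmin -refreshNodes            # step 2: NameNode starts draining its blocks

# Optional: wait until the report shows the node as fully decommissioned.
# until hdfs dfsadmin -report | grep -A 1 "$NODE" | grep -q 'Decommissioned'; do
#   sleep 30
# done
```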
Task 5 - Fill in the blank (hard)
Fill all three blanks to start the service on a new node, add it to the include file, and refresh nodes to scale the cluster.

    ssh [1] 'sudo systemctl start hadoop-datanode'
    echo '[2]' >> /etc/hadoop/conf/dfs.include
    hdfs dfsadmin -[3]

Hint - common mistakes:
- Using 'startNode' instead of 'refreshNodes' to apply changes.
- Not starting the DataNode service before adding the node to the include file.

Answer: You start the DataNode service on the new node, add it to dfs.include, then refresh nodes to scale.
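The three steps can be sketched as one script. The hostname, service name, and paths are illustrative, and the ssh/hdfs lines are comments since they need a reachable cluster:

```shell
NEW_NODE=worker06.example.com            # hypothetical host being added
INCLUDE_FILE=/tmp/dfs.include            # stand-in for /etc/hadoop/conf/dfs.include

# Step 1: bring up the DataNode service on the new host.
# ssh "$NEW_NODE" 'sudo systemctl start hadoop-datanode'

# Step 2: allow the host to register with the NameNode.
echo "$NEW_NODE" >> "$INCLUDE_FILE"

# Step 3: make the NameNode re-read the include list.
# hdfs dfsadmin -refreshNodes
```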