Hadoop · ~10 mins

Node decommissioning and scaling in Hadoop - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the command that starts the decommissioning process for a node already listed in the exclude file.

Hadoop
hdfs dfsadmin -[1]
A. decommission
B. refreshNodes
C. startDecommission
D. stopNode
Common Mistakes
Using 'decommission' instead of 'refreshNodes' flag.
Trying to decommission without updating the exclude file.
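For reference, the decommission trigger is a cluster-wide refresh, not a per-node flag: there is no `-decommission` subcommand in `hdfs dfsadmin`. A minimal sketch, guarded so it degrades to printing the command when no cluster is available:

```shell
# refreshNodes takes no hostname argument; the NameNode re-reads its
# include/exclude files and begins decommissioning every node the
# exclude file lists.
CMD='hdfs dfsadmin -refreshNodes'
if command -v hdfs >/dev/null 2>&1; then
  $CMD
else
  echo "hdfs not on PATH; on the NameNode host run: $CMD"
fi
```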
Task 2: fill in the blank (medium)

Complete the code to add a new node to the Hadoop cluster by updating the include file.

Hadoop
echo '[1]' >> /etc/hadoop/conf/dfs.include
A. new-node-hostname
B. namenode-host
C. old-node-hostname
D. datanode-host
Common Mistakes
Adding the namenode hostname instead of the new node.
Editing the exclude file instead of the include file.
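The completed step, as a sketch. The include-file path is an assumption: `dfs.hosts` in hdfs-site.xml names the real file, and the sketch defaults to the current directory so it runs without a cluster:

```shell
# Path is an assumption; check dfs.hosts in hdfs-site.xml for the
# actual include-file location on a real cluster.
CONF_DIR="${HADOOP_CONF_DIR:-.}"

# Admit the new worker by listing its hostname in the include file.
echo 'new-node-hostname' >> "$CONF_DIR/dfs.include"

# The NameNode only picks the change up after a refresh.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -refreshNodes
fi
```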
Task 3: fill in the blank (hard)

Complete the command to check the decommission status of cluster nodes.

Hadoop
hdfs dfsadmin -[1]
A. check
B. status
C. report
D. list
Common Mistakes
Using 'status' or 'check' which are not valid dfsadmin flags.
Trying to list nodes with an incorrect flag.
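A sketch of the answer. `-report` is the valid flag; the real command also accepts filters such as `-decommissioning` to narrow the output to nodes mid-decommission. Guarded so it degrades to printing the command without a cluster:

```shell
# -report is the valid dfsadmin flag here; -status, -check, and -list
# are distractors that dfsadmin does not accept.
CMD='hdfs dfsadmin -report -decommissioning'
if command -v hdfs >/dev/null 2>&1; then
  $CMD   # prints per-DataNode stats including "Decommission Status" lines
else
  echo "hdfs not on PATH; on a cluster node run: $CMD"
fi
```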
Task 4: fill in the blank (hard)

Fill both blanks to update the exclude file and refresh nodes to decommission a node.

Hadoop
echo '[1]' >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -[2]
A. node-to-remove-hostname
B. refreshNodes
C. decommissionNode
D. stopNode
Common Mistakes
Using wrong flags like 'decommissionNode' instead of 'refreshNodes'.
Not updating the exclude file before refreshing nodes.
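Both blanks assembled, plus an optional completion check. The conf path, the hostname, and the grep window over the report output are assumptions; decommissioning finishes only after the node's blocks are re-replicated:

```shell
CONF_DIR="${HADOOP_CONF_DIR:-.}"   # assumed layout; defaults to cwd for demo
NODE='node-to-remove-hostname'

# [1] Name the node in the exclude file ...
echo "$NODE" >> "$CONF_DIR/dfs.exclude"

if command -v hdfs >/dev/null 2>&1; then
  # [2] ... then refreshNodes makes the NameNode act on it.
  hdfs dfsadmin -refreshNodes

  # Decommissioning is asynchronous: the node stays "Decommission In
  # Progress" until its blocks are re-replicated elsewhere. Poll the
  # report until its status flips to "Decommissioned".
  until hdfs dfsadmin -report | grep -A 5 "$NODE" | grep -q 'Decommissioned'; do
    sleep 30
  done
fi
```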
Task 5: fill in the blank (hard)

Fill all three blanks to start the DataNode service on a new node, add it to the include file, and refresh nodes to scale the cluster.

Hadoop
ssh [1] 'sudo systemctl start hadoop-datanode'
echo '[2]' >> /etc/hadoop/conf/dfs.include
hdfs dfsadmin -[3]
A. new-node-hostname
C. refreshNodes
D. startNode
Common Mistakes
Using 'startNode' instead of 'refreshNodes' to apply changes.
Not starting the datanode service before adding to include file.
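The three blanks in order, as a guarded sketch. The service name `hadoop-datanode` and the conf path are assumptions; package names vary by distribution (some ship `hadoop-hdfs-datanode`), and the sketch defaults the conf dir to the current directory so it runs without a cluster:

```shell
CONF_DIR="${HADOOP_CONF_DIR:-.}"   # assumed path; defaults to cwd for demo
NODE='new-node-hostname'

if command -v hdfs >/dev/null 2>&1; then
  # [1] Bring the DataNode process up on the new host (service name
  #     is distro-specific; hadoop-datanode is an assumption).
  ssh "$NODE" 'sudo systemctl start hadoop-datanode'
fi

# [2] List the node in the include file so the NameNode will admit it.
echo "$NODE" >> "$CONF_DIR/dfs.include"

# [3] Make the NameNode re-read the include file.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -refreshNodes
fi
```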