
Node decommissioning and scaling in Hadoop - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is node decommissioning in Hadoop?
Node decommissioning is the process of safely removing a node from a Hadoop cluster without losing data or interrupting running jobs. It ensures data blocks are replicated elsewhere before the node is taken offline.
beginner
Why is node decommissioning important before scaling down a Hadoop cluster?
Decommissioning ensures that data stored on the node is copied to other nodes, preventing data loss. It also allows running tasks to finish or move, maintaining cluster stability during scaling down.
intermediate
What happens during the node decommissioning process in Hadoop?
The node is marked for decommissioning, the cluster replicates its data blocks to other nodes, running tasks are completed or moved, and finally the node is removed from the cluster.
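On a typical HDFS deployment, the steps above map to a handful of administrator commands. This is a sketch only; the hostname and file path are illustrative, and the exclude-file path must match whatever `dfs.hosts.exclude` points to in your configuration.

```shell
# 1. Mark the node for decommissioning by listing it in the exclude file
#    (path is an example; it must match dfs.hosts.exclude in hdfs-site.xml)
echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read the include/exclude files
hdfs dfsadmin -refreshNodes

# 3. Watch progress: the node shows "Decommission In Progress" until all of
#    its blocks are replicated elsewhere, then "Decommissioned"
hdfs dfsadmin -report

# 4. Only after the node reports Decommissioned is it safe to stop its
#    DataNode daemon (run on that node; Hadoop 3.x syntax shown)
hdfs --daemon stop datanode
```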
beginner
How does Hadoop handle scaling up the cluster?
Scaling up means adding new nodes to the cluster. Once a new node's daemons start, it registers with the NameNode and ResourceManager and becomes available for storage and processing. Note that existing data is not redistributed automatically; an administrator typically runs the HDFS balancer to spread blocks onto the new node.
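A minimal scale-up sketch, assuming a Hadoop 3.x cluster (hostname and config path are illustrative):

```shell
# Register the new worker (the file is named "workers" in Hadoop 3.x,
# "slaves" in 2.x; path is an example)
echo "datanode09.example.com" >> /etc/hadoop/conf/workers

# On the new node, start the daemons so it joins the cluster
hdfs --daemon start datanode
yarn --daemon start nodemanager

# Rebalance: move blocks until no DataNode's disk usage deviates from
# the cluster average by more than the given threshold (percent)
hdfs balancer -threshold 10
```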
intermediate
What is the role of the 'exclude' file in Hadoop node decommissioning?
The 'exclude' file lists nodes to be decommissioned. Hadoop reads this file to know which nodes to remove safely from the cluster during decommissioning.
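The exclude file's location is given to the NameNode via the `dfs.hosts.exclude` property. A minimal config fragment (the file path is illustrative):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

After editing the exclude file, run `hdfs dfsadmin -refreshNodes` so the NameNode picks up the change without a restart.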
What is the first step in safely removing a node from a Hadoop cluster?
A. Delete data on the node
B. Turn off the node immediately
C. Mark the node for decommissioning
D. Add new nodes to the cluster
Answer: C
Which file in Hadoop specifies nodes to be decommissioned?
A. include file
B. exclude file
C. config file
D. hosts file
Answer: B
What does scaling up a Hadoop cluster involve?
A. Adding nodes
B. Deleting data
C. Stopping all jobs
D. Removing nodes
Answer: A
Why must data be replicated before node decommissioning?
A. To reduce cluster size
B. To speed up the node
C. To delete old data
D. To prevent data loss
Answer: D
What happens to running tasks on a node during decommissioning?
A. They finish or move to other nodes
B. They are immediately stopped
C. They continue running without change
D. They are deleted
Answer: A
Explain the process and importance of node decommissioning in a Hadoop cluster.
Think about what happens before a node is removed to keep data safe and jobs running.
Describe how scaling up and scaling down work in Hadoop clusters.
Consider what happens when you want to grow or shrink your cluster.