
What is DataNode in Hadoop: Definition and Usage

In Hadoop, a DataNode is a worker node that stores actual data blocks in the Hadoop Distributed File System (HDFS). It manages data storage on local disks and communicates with the NameNode to report the status of stored data.
⚙️

How It Works

Think of Hadoop's DataNode as a warehouse worker who stores and manages boxes of data on shelves. Each DataNode stores parts of files called blocks on its local disk. When you save a file in Hadoop, it is split into blocks and distributed across many DataNodes.

The NameNode acts like the warehouse manager, keeping track of where each block is stored but not storing the data itself. The DataNodes regularly send heartbeats to the NameNode to confirm they are working and report the health of their stored blocks. If a DataNode fails, the system knows to replicate its blocks elsewhere to keep data safe.
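The block-splitting and placement described above can be sketched in a few lines of Python. This is a toy illustration, not Hadoop's actual code: the block size, node names, and round-robin placement are simplified assumptions (real HDFS uses 128 MB blocks and rack-aware placement).

```python
# Toy sketch (not Hadoop code): split a "file" into fixed-size blocks and
# assign each block to several DataNodes, mimicking HDFS placement.
BLOCK_SIZE = 128   # HDFS defaults to 128 MB; 128 bytes here for the demo
REPLICATION = 3    # HDFS default replication factor

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks, like HDFS does with files."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(num_blocks: int, datanodes: list, replication: int = REPLICATION):
    """Assign each block to `replication` distinct DataNodes (round-robin)."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [datanodes[(b + r) % len(datanodes)] for r in range(replication)]
    return placement

data = b"x" * 300  # a 300-byte "file" becomes 3 blocks of at most 128 bytes
blocks = split_into_blocks(data)
nodes = ["datanode1", "datanode2", "datanode3", "datanode4"]
placement = place_blocks(len(blocks), nodes)
```

The key idea the sketch captures is that no single node holds the whole file, and every block lives on several nodes at once, which is what makes the failure handling described above possible.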

💻

Example

This example shows how to list the status of DataNodes in a Hadoop cluster using the Hadoop command line interface.
bash
hdfs dfsadmin -report
Output
Configured Capacity: 1000 GB
DFS Used: 400 GB
Non DFS Used: 100 GB
DFS Remaining: 500 GB
DFS Used%: 40%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 192.168.1.101:50010
Hostname: datanode1
Decommission Status : Normal
Configured Capacity: 333 GB
DFS Used: 133 GB
Non DFS Used: 33 GB
DFS Remaining: 167 GB
DFS Used%: 40%

Name: 192.168.1.102:50010
Hostname: datanode2
Decommission Status : Normal
Configured Capacity: 333 GB
DFS Used: 133 GB
Non DFS Used: 33 GB
DFS Remaining: 167 GB
DFS Used%: 40%

Name: 192.168.1.103:50010
Hostname: datanode3
Decommission Status : Normal
Configured Capacity: 334 GB
DFS Used: 134 GB
Non DFS Used: 34 GB
DFS Remaining: 166 GB
DFS Used%: 40%
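If you want to monitor DataNode usage from a script rather than by eye, one approach is to parse the report text. The sketch below is a hedged example: the field names match the report shown above, but real output can vary between Hadoop versions, and `parse_datanode_usage` is a hypothetical helper, not part of any Hadoop API.

```python
# Hedged sketch: extract per-DataNode disk usage from the text output of
# `hdfs dfsadmin -report`. Only the "Hostname:" and "DFS Used%:" fields
# shown in the example report are assumed to be present.
def parse_datanode_usage(report: str):
    """Return {hostname: dfs_used_percent} from a dfsadmin-style report."""
    usage, hostname = {}, None
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            hostname = line.split(":", 1)[1].strip()
        elif line.startswith("DFS Used%:") and hostname:
            usage[hostname] = float(line.split(":", 1)[1].strip().rstrip("%"))
            hostname = None  # only pair each usage line with one hostname
    return usage

sample = """Hostname: datanode1
DFS Used%: 40%
Hostname: datanode2
DFS Used%: 40%"""
print(parse_datanode_usage(sample))  # {'datanode1': 40.0, 'datanode2': 40.0}
```

In practice you would feed the parser the live output of the command, e.g. `subprocess.run(["hdfs", "dfsadmin", "-report"], capture_output=True)`.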
🎯

When to Use

Use DataNodes whenever you need to store large amounts of data distributed across many machines for fault tolerance and scalability. They are essential in big data environments where files are too big for a single computer.

For example, companies processing huge logs, videos, or sensor data use DataNodes to store data reliably. If one DataNode fails, Hadoop automatically recovers data from other nodes, ensuring no data loss.
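The automatic recovery just described can be sketched as a small simulation. This is a toy model under stated assumptions, not Hadoop's actual re-replication logic: `rereplicate` and the node names are hypothetical, and real HDFS also weighs rack placement and node load when choosing targets.

```python
# Toy sketch of HDFS-style re-replication (not actual Hadoop code): when a
# DataNode stops sending heartbeats, its lost replicas are copied onto
# healthy nodes so every block keeps its target replica count.
def rereplicate(placement: dict, dead_node: str, live_nodes: list, target: int = 3):
    """Re-home replicas lost with `dead_node` onto nodes lacking a copy."""
    for block, replicas in placement.items():
        replicas = [n for n in replicas if n != dead_node]  # drop the dead node
        for node in live_nodes:
            if len(replicas) >= target:
                break
            if node not in replicas:
                replicas.append(node)  # copy the block to a new healthy node
        placement[block] = replicas
    return placement

placement = {0: ["datanode1", "datanode2", "datanode3"]}
live = ["datanode2", "datanode3", "datanode4"]
print(rereplicate(placement, "datanode1", live))
# {0: ['datanode2', 'datanode3', 'datanode4']}
```

Because every block already has live copies elsewhere, recovery is just copying, which is why a single DataNode failure causes no data loss.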

Key Points

  • DataNode stores actual data blocks on local disks.
  • It communicates with the NameNode to report health and block status.
  • Multiple DataNodes work together to provide fault tolerance.
  • They enable Hadoop to scale storage across many machines.

Key Takeaways

  • A DataNode stores and manages data blocks in Hadoop's HDFS.
  • DataNodes report their status to the NameNode regularly.
  • They provide fault tolerance by replicating data across nodes.
  • DataNodes enable Hadoop to handle very large datasets efficiently.