What Is Ingest Node in Elasticsearch: Simple Explanation and Example
An ingest node in Elasticsearch is a special node that preprocesses documents before they are indexed. It uses ingest pipelines to transform, enrich, or modify data, such as adding fields or parsing logs, making indexing more flexible and efficient.
How It Works
Think of an ingest node as a smart mail sorter in a post office. When letters (documents) arrive, this sorter can open them, add stamps, or organize them before sending them to their final destination (the index).
In Elasticsearch, the ingest node receives data and runs it through a series of steps called an ingest pipeline. Each step can change the data, like extracting parts of a log message or adding a timestamp. This happens automatically before the data is stored, so the index only gets clean, ready-to-use documents.
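For instance, several steps can be chained in a single pipeline definition. The sketch below (the pipeline name, field names, and values are illustrative) combines a lowercase processor with a set processor; the processors run in the order listed:

PUT _ingest/pipeline/example_pipeline
{
  "description": "Lowercase the message field, then tag the document",
  "processors": [
    {
      "lowercase": {
        "field": "message"
      }
    },
    {
      "set": {
        "field": "environment",
        "value": "production"
      }
    }
  ]
}

Each document sent through this pipeline would have its message field lowercased and an environment field added before it is stored.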
This process helps keep your data organized and searchable without needing to change your original data source or add extra processing outside Elasticsearch.
Example
This example shows how to create an ingest pipeline that adds a field called ingested_at with the current timestamp to each document, then how to index a document using this pipeline.
PUT _ingest/pipeline/add_timestamp
{
  "description": "Add ingestion timestamp",
  "processors": [
    {
      "set": {
        "field": "ingested_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
PUT my-index/_doc/1?pipeline=add_timestamp
{
"message": "Hello, Elasticsearch!"
}
GET my-index/_doc/1
When to Use
Use ingest nodes when you want to prepare or enrich data as it enters Elasticsearch without changing your original data source. For example:
- Parsing log files to extract fields like IP addresses or error codes.
- Adding timestamps or metadata automatically to documents.
- Converting data formats or cleaning data before indexing.
- Enriching data with information from other sources.
This is helpful in real-time data processing, log analytics, and monitoring systems where quick, automatic data transformation is needed.
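As a concrete sketch of the log-parsing use case, a grok processor can split a raw log line into structured fields. The pattern and field names below are illustrative, not a fixed convention:

PUT _ingest/pipeline/parse_access_log
{
  "description": "Extract client IP, HTTP method, and request path from a log line",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IP:client_ip} %{WORD:http_method} %{URIPATHPARAM:request_path}"]
      }
    }
  ]
}

With this pipeline, a log line such as "192.168.0.1 GET /index.html" would be indexed with separate client_ip, http_method, and request_path fields, making each one searchable on its own.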
Key Points
- An ingest node preprocesses documents using pipelines before indexing.
- Ingest pipelines consist of processors that modify or enrich data.
- They help keep data clean and structured without external processing.
- Useful for log parsing, metadata addition, and data transformation.