Elasticsearch query · ~10 mins

Log management pipeline in Elasticsearch - Step-by-Step Execution

Concept Flow - Log management pipeline
1. Log Generated by Application
2. Log Shipper (e.g., Filebeat)
3. Log Ingested into Elasticsearch
4. Log Processed by Ingest Pipeline
5. Log Stored in Elasticsearch Index
6. Log Visualized in Kibana
Logs flow from the application through a shipper, get processed and stored in Elasticsearch, then visualized in Kibana.
Execution Sample
Elasticsearch
PUT _ingest/pipeline/log_pipeline
{
  "processors": [
    {"grok": {"field": "message", "patterns": ["%{COMMONAPACHELOG}"]}}
  ]
}
Defines an ingest pipeline that parses log messages using a grok pattern.
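Before attaching the pipeline to real traffic, it can be dry-run with Elasticsearch's simulate API; a minimal sketch using the same sample log line as the Execution Table:

```json
POST _ingest/pipeline/log_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"
      }
    }
  ]
}
```

The response shows each document as it would look after the processors run, without indexing anything, which makes it a convenient way to debug grok patterns.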
Execution Table
Step 1: Receive log from shipper
  Input Log: {"message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
  Processor Applied: None
  Output Document: {"message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
Step 2: Apply grok processor
  Input Log: {"message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
  Processor Applied: grok parsing COMMONAPACHELOG
  Output Document: {"clientip": "127.0.0.1", "ident": "-", "auth": "-", "timestamp": "10/Oct/2023:13:55:36 +0000", "verb": "GET", "request": "/index.html", "httpversion": "1.1", "response": "200", "bytes": "2326", "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
Step 3: Store document in index
  Input Log: {"clientip": "127.0.0.1", "request": "/index.html"}
  Processor Applied: None
  Output Document: Document stored in Elasticsearch index
Step 4: Visualize in Kibana
  Input Log: Stored document
  Processor Applied: None
  Output Document: Log entry visible in Kibana dashboard
💡 All logs processed and stored; pipeline completes successfully.
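The steps above can be reproduced against a live cluster by indexing the raw log line through the pipeline (the index name `logs` is illustrative):

```json
POST logs/_doc?pipeline=log_pipeline
{
  "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"
}
```

Elasticsearch runs the grok processor before storing the document, so the stored `_source` contains the parsed fields alongside the original message.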
Variable Tracker
Variable: log_document
  Start: {}
  After Step 1: {"message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
  After Step 2: {"clientip": "127.0.0.1", "ident": "-", "auth": "-", "timestamp": "10/Oct/2023:13:55:36 +0000", "verb": "GET", "request": "/index.html", "httpversion": "1.1", "response": "200", "bytes": "2326", "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 2326"}
  After Step 3: {"clientip": "127.0.0.1", "request": "/index.html"}
  Final: Stored in Elasticsearch index
Key Moments - 3 Insights
Why does the log document have both the original message and the parsed fields after processing?
Because the grok processor extracts new fields while leaving the original message field intact, as shown at step 2 of the Execution Table, where both are present.
What happens if the grok pattern does not match the log message?
By default the grok processor raises an error and the document fails to index entirely; to keep the unparsed message instead, add an on_failure handler or set ignore_failure: true on the processor.
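One way to guard against a pattern mismatch is an on_failure block on the grok processor, so unparsable lines are still stored and can be found later. A sketch (the tag value `grok_failed` is illustrative):

```json
PUT _ingest/pipeline/log_pipeline
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMMONAPACHELOG}"],
        "on_failure": [
          {"set": {"field": "tags", "value": ["grok_failed"]}}
        ]
      }
    }
  ]
}
```

With this in place, a non-matching log line is indexed with only its original message plus the failure tag, rather than being rejected.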
Why do we need a shipper like Filebeat before logs reach Elasticsearch?
The shipper collects logs from their sources and forwards them to Elasticsearch reliably, buffering and retrying on failure, as shown at step 2 of the Concept Flow, before ingestion.
Visual Quiz - 3 Questions
Test your understanding
Looking at the Execution Table at step 2, which field contains the client's IP address after grok processing?
A. request
B. message
C. clientip
D. timestamp
💡 Hint
Check the 'Output Document' column at step 2 of the Execution Table.
At which step is the log document stored in the Elasticsearch index?
A. Step 1
B. Step 3
C. Step 2
D. Step 4
💡 Hint
Look for the action 'Store document in index' in the Execution Table.
If the grok processor was removed, how would the 'log_document' variable change after step 2?
A. It would remain the original message only
B. It would be empty
C. It would contain parsed fields
D. It would cause an error
💡 Hint
Refer to the Variable Tracker and the Key Moments insights about grok processor effects.
Concept Snapshot
Log management pipeline flow:
1. Logs generated by apps
2. Sent by shipper (Filebeat)
3. Ingested into Elasticsearch
4. Processed by ingest pipeline (e.g., grok parsing)
5. Stored in index
6. Visualized in Kibana
Use ingest pipelines to parse and enrich logs before storage.
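A common way to apply such a pipeline automatically is to set it as the index's default pipeline, so every document indexed there is processed without callers passing a pipeline parameter. A sketch (the index name `logs` is illustrative):

```json
PUT logs
{
  "settings": {
    "index.default_pipeline": "log_pipeline"
  }
}
```

After this, a plain `POST logs/_doc` with a raw message field will still run through the grok processor before storage.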
Full Transcript
This visual execution shows how logs move from an application through a shipper to Elasticsearch. The ingest pipeline applies processors like grok to parse log messages into fields. The execution table traces each step: receiving the log, applying grok to extract fields, storing the document, and visualizing it in Kibana. Variables track the log document's state as it gains parsed fields. Key moments clarify why original messages remain and what happens if parsing fails. The quiz tests understanding of fields, storage steps, and processor effects. The snapshot summarizes the pipeline stages and purpose.