How to Use ELK Stack for Microservices Logging Effectively
Use the ELK stack by shipping logs from each microservice to Logstash (directly or via Beats), which processes and forwards them to Elasticsearch for storage and indexing. Use Kibana to search, visualize, and analyze the logs centrally, so you can monitor and debug microservices efficiently.

Syntax
The ELK stack consists of three main components:
- Elasticsearch: Stores and indexes logs for fast search.
- Logstash: Collects, processes, and forwards logs.
- Kibana: Visualizes logs and creates dashboards.
Microservices send logs via Beats or directly to Logstash. Logstash uses pipelines with input, filter, and output sections to process logs before sending them to Elasticsearch.
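As an illustration, a microservice can emit one JSON object per line to a file that Filebeat tails. This is a minimal Python sketch; the field names (`timestamp`, `level`, `service`, `message`) mirror the sample log later in this article, and the helper name is our own, not a fixed API:

```python
import json
from datetime import datetime, timezone

def make_log_line(level, service, message, **extra):
    """Build a single structured (JSON) log line, one object per line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "service": service,
        "message": message,
    }
    record.update(extra)  # optional context fields, e.g. userId
    return json.dumps(record)

# A line like this, appended to /var/log/microservice/app.log,
# is what Filebeat would ship to Logstash.
print(make_log_line("INFO", "user-service", "User created successfully", userId=123))
```

Because each line is valid JSON, the Logstash `json` filter shown below can parse it without custom grok patterns.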
```logstash
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMMONAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "microservices-logs-%{+YYYY.MM.dd}"
}
}
```

Example
This example shows a simple setup where a microservice sends JSON logs using Filebeat to Logstash, which processes and forwards them to Elasticsearch. Kibana then visualizes the logs.
The Filebeat configuration:

```yaml
# Filebeat configuration snippet to send logs to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/microservice/*.log

output.logstash:
  hosts: ["localhost:5044"]
```

The Logstash pipeline configuration:

```logstash
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "microservices-logs-%{+YYYY.MM.dd}"
  }
}
```

A sample microservice log line (JSON format):

```json
{"timestamp":"2024-06-01T12:00:00Z","level":"INFO","service":"user-service","message":"User created successfully","userId":123}
```
Output
Logs are indexed in Elasticsearch under the index `microservices-logs-2024.06.01`.
A Kibana dashboard shows the user-service logs, filterable by level and timestamp.
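The daily index name comes from Logstash's `%{+YYYY.MM.dd}` date math applied to each event's timestamp. The same naming scheme can be sketched in Python (the helper name is ours, for illustration only):

```python
from datetime import date

def daily_index(prefix: str, day: date) -> str:
    """Mirror Logstash's %{+YYYY.MM.dd} sprintf date format for index names."""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

# The sample log above (2024-06-01) lands in:
print(daily_index("microservices-logs", date(2024, 6, 1)))
# microservices-logs-2024.06.01
```

Daily indices like this make retention simple: old days can be deleted or rolled over as whole indices rather than by filtering documents.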
Common Pitfalls
Common mistakes when using ELK for microservices logging include:
- Not standardizing log formats, making parsing difficult.
- Sending overly verbose logs, causing storage bloat and slow queries.
- Not securing Elasticsearch, exposing sensitive data.
- Ignoring log rotation and retention policies, leading to disk space issues.
- Misconfiguring Logstash pipelines, causing data loss or errors.
Always use structured logs (like JSON), set proper filters, and monitor ELK resource usage.
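Structured logs can be enforced in application code rather than left to convention. A sketch using Python's standard `logging` module with a custom JSON formatter (the service name and field set are assumptions matching the sample log in this article):

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render every log record as one structured JSON line."""

    def __init__(self, service: str):
        super().__init__()
        self.service = service

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": self.service,
            "message": record.getMessage(),
        })

logger = logging.getLogger("user-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter("user-service"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User created successfully")  # emits one JSON line
```

With this in place, every record already carries the timestamp, level, and service fields that the Logstash `json` filter expects, so no grok parsing is needed downstream.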
Wrong way (a log line with no timestamp, level, or service context):

```json
{"message": "User created successfully"}
```

Right way (structured JSON logs with full context):

```json
{"timestamp":"2024-06-01T12:00:00Z","level":"INFO","service":"user-service","message":"User created successfully","userId":123}
```

Quick Reference
| Component | Role | Common Configuration Tips |
|---|---|---|
| Elasticsearch | Stores and indexes logs | Use index patterns, enable security, monitor cluster health |
| Logstash | Processes and forwards logs | Use grok/json filters, avoid heavy processing, configure pipelines carefully |
| Kibana | Visualizes logs | Create dashboards, use filters, set alerts |
| Beats (Filebeat) | Lightweight log shipper | Use for log forwarding, configure multiline logs if needed |
Key Takeaways
- Ship structured logs from microservices to Logstash, directly or via Beats, for easy parsing.
- Use Logstash pipelines to filter and transform logs before storing them in Elasticsearch.
- Visualize and analyze logs in Kibana to monitor microservices health and troubleshoot issues.
- Standardize log formats and manage log retention to keep ELK efficient and scalable.
- Secure Elasticsearch and monitor resource usage to avoid data loss and performance problems.