How to Log in Microservices: Best Practices and Examples
To log in microservices, use a centralized logging system that collects logs from all services, with structured logging and correlation IDs to trace requests across service boundaries. Each microservice should log meaningful events locally and ship them to a central store for searching and monitoring.
Syntax
Logging in microservices typically involves these parts:
- Logger setup: Initialize a logger in each service.
- Structured logs: Use JSON or key-value pairs for easy parsing.
- Correlation ID: Attach a unique ID to each request to trace it across services.
- Log levels: Use levels like DEBUG, INFO, WARN, ERROR to filter logs.
- Centralized logging: Send logs to a central system like ELK or Splunk.
```python
import json
import logging

logger = logging.getLogger('microservice')
logger.setLevel(logging.INFO)

correlation_id = '12345-abcde'
log_entry = {
    'level': 'INFO',
    'message': 'User created successfully',
    'correlation_id': correlation_id,
    'service': 'user-service',
}
logger.info(json.dumps(log_entry))
```
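Instead of building a JSON string at every call site, the formatting can live in one place. Here is a minimal sketch of a custom `logging.Formatter` subclass that renders each record as a JSON line; the `JsonFormatter` class name, the `user-service` name, and the use of the `extra` argument to carry the correlation ID are illustrative choices, not a fixed API.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        entry = {
            'level': record.levelname,
            'message': record.getMessage(),
            'service': record.name,
        }
        # Pick up a correlation_id attached via logging's `extra` mechanism, if any.
        cid = getattr(record, 'correlation_id', None)
        if cid is not None:
            entry['correlation_id'] = cid
        return json.dumps(entry)

logger = logging.getLogger('user-service')
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('User created successfully', extra={'correlation_id': '12345-abcde'})
```

With this in place, call sites stay as plain `logger.info(...)` calls and every record still comes out as structured JSON.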
Example
This example shows a simple Python microservice logging a request with a correlation ID and sending structured logs.
```python
import json
import logging
import uuid

# Set up the logger for this service
logger = logging.getLogger('order-service')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
logger.addHandler(handler)

# Function to simulate handling a request
def handle_order_request(user_id):
    correlation_id = str(uuid.uuid4())
    log_entry_start = {
        'level': 'INFO',
        'message': 'Start processing order',
        'correlation_id': correlation_id,
        'user_id': user_id,
        'service': 'order-service',
    }
    logger.info(json.dumps(log_entry_start))
    # Simulate order processing
    # ...
    log_entry_end = {
        'level': 'INFO',
        'message': 'Order processed successfully',
        'correlation_id': correlation_id,
        'user_id': user_id,
        'service': 'order-service',
    }
    logger.info(json.dumps(log_entry_end))

# Run example
handle_order_request('user123')
Output
{"level": "INFO", "message": "Start processing order", "correlation_id": "<uuid>", "user_id": "user123", "service": "order-service"}
{"level": "INFO", "message": "Order processed successfully", "correlation_id": "<uuid>", "user_id": "user123", "service": "order-service"}
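The correlation ID is only useful for tracing if it travels with the request from one service to the next. A common convention is to pass it in an HTTP header; the sketch below uses `X-Correlation-ID`, but the header name is a project choice, and the helper functions are hypothetical names for illustration.

```python
import uuid

CORRELATION_HEADER = 'X-Correlation-ID'  # common convention; the name is a project choice

def get_or_create_correlation_id(incoming_headers):
    """Reuse the caller's correlation ID, or mint one at the edge of the system."""
    return incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())

def outgoing_headers(correlation_id):
    """Headers to attach when this service calls the next one downstream."""
    return {CORRELATION_HEADER: correlation_id}

# First service in the chain: no incoming ID, so a new one is generated.
cid = get_or_create_correlation_id({})

# Downstream service: the same ID arrives in the request headers.
assert get_or_create_correlation_id(outgoing_headers(cid)) == cid
```

Every service in the chain logs the same ID, so a search for that ID in the central log store reconstructs the full request path.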
Common Pitfalls
Common mistakes when logging in microservices include:
- Not using correlation IDs, making it hard to trace requests across services.
- Logging unstructured plain text, which is difficult to search and analyze.
- Logging sensitive data like passwords or personal info.
- Logging too much at DEBUG level in production, causing noise and performance issues.
- Not centralizing logs, so logs are scattered and hard to monitor.
```python
import json
import logging

logger = logging.getLogger('bad-logging')
logger.setLevel(logging.INFO)

# Wrong: logging plain text without structure or a correlation ID
logger.info('User logged in')

# Right: structured log with a correlation ID
correlation_id = 'abc-123'
log_entry = {
    'level': 'INFO',
    'message': 'User logged in',
    'correlation_id': correlation_id,
    'service': 'auth-service',
}
logger.info(json.dumps(log_entry))
```
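For the sensitive-data pitfall, one defensive pattern is to mask known secret fields before an entry is serialized. This is a minimal sketch; the `SENSITIVE_KEYS` set and the `redact` helper are illustrative names, and a real service would extend the set for its own domain.

```python
SENSITIVE_KEYS = {'password', 'token', 'ssn'}  # extend for your domain

def redact(entry):
    """Return a copy of a log entry dict with sensitive values masked."""
    return {key: ('***' if key in SENSITIVE_KEYS else value)
            for key, value in entry.items()}

safe = redact({'message': 'User logged in', 'user': 'alice', 'password': 'hunter2'})
# safe == {'message': 'User logged in', 'user': 'alice', 'password': '***'}
```

Redacting at one choke point, just before `json.dumps`, is more reliable than trusting every call site to remember what not to log.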
Quick Reference
- Use structured logging: JSON format is best for parsing.
- Always include correlation IDs: Helps trace requests across services.
- Set appropriate log levels: DEBUG for development, INFO/WARN/ERROR for production.
- Centralize logs: Use tools like ELK stack, Splunk, or cloud logging services.
- Protect sensitive data: Never log passwords or personal info.
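Threading a correlation ID through every function call gets tedious. In Python, one way to avoid that is to store the ID in a `contextvars.ContextVar` and stamp it onto every record with a `logging.Filter`. This is a sketch under those assumptions; the `payment-service` name and the `CorrelationFilter` class are illustrative.

```python
import contextvars
import logging

# Holds the current request's correlation ID without passing it through every call.
correlation_id_var = contextvars.ContextVar('correlation_id', default='-')

class CorrelationFilter(logging.Filter):
    """Stamp every record with the correlation ID from the current context."""
    def filter(self, record):
        record.correlation_id = correlation_id_var.get()
        return True

logger = logging.getLogger('payment-service')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s %(correlation_id)s %(message)s'))
handler.addFilter(CorrelationFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Set once at the start of a request; every log call below picks it up.
correlation_id_var.set('req-42')
logger.info('Charging card')
```

Because `ContextVar` values are isolated per task, this pattern also behaves correctly under asyncio, where one process interleaves many requests.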
Key Takeaways
Use structured logging with JSON to make logs easy to search and analyze.
Include correlation IDs in every log to trace requests across multiple microservices.
Centralize logs in a single system for monitoring and troubleshooting.
Avoid logging sensitive information to protect user privacy and security.
Set log levels properly to balance detail and performance.