Kubernetes · DevOps · ~30 mins

Centralized logging (EFK stack) in Kubernetes - Mini Project: Build & Apply

📖 Scenario: You are managing a Kubernetes cluster for a small company. You want to collect logs from all running applications in one place so you can easily search and analyze them. To do this, you will set up the EFK stack: Elasticsearch to store logs, Fluentd to collect and forward logs, and Kibana to view logs in a web interface.
🎯 Goal: Build a simple EFK stack on Kubernetes that collects logs from all pods and allows viewing them in Kibana.
📋 What You'll Learn
Create a Kubernetes namespace called logging
Deploy Elasticsearch StatefulSet with 1 replica in logging namespace
Deploy Fluentd DaemonSet in logging namespace to collect logs from all nodes
Deploy Kibana Deployment with 1 replica in logging namespace
Expose Kibana with a ClusterIP service
Verify logs are collected and visible in Kibana
💡 Why This Matters
🌍 Real World
Centralized logging helps teams monitor and troubleshoot applications by collecting logs from many sources into one place.
💼 Career
DevOps engineers often set up logging stacks like EFK on Kubernetes clusters to improve observability and support incident response.
1
Create the logging namespace
Write a YAML manifest to create a Kubernetes namespace called logging. Use kubectl apply -f to apply it.
Kubernetes
Need a hint?

A namespace manifest uses apiVersion: v1 and kind: Namespace. The name goes under metadata.
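A minimal manifest for this step might look like the following:

```yaml
# namespace.yaml — creates the logging namespace
apiVersion: v1
kind: Namespace
metadata:
  name: logging
```

Apply it with `kubectl apply -f namespace.yaml` (the filename is just an example).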

2
Deploy Elasticsearch StatefulSet in logging namespace
Write a YAML manifest to deploy Elasticsearch as a StatefulSet with 1 replica in the logging namespace. Use the image docker.elastic.co/elasticsearch/elasticsearch:7.17.0. Set environment variable discovery.type to single-node. Use port 9200. Apply the manifest with kubectl apply -f.
Kubernetes
Need a hint?

Use kind: StatefulSet with replicas: 1. Set environment variable discovery.type: single-node for Elasticsearch single node mode.
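A sketch of the StatefulSet manifest for this step, assuming the label `app: elasticsearch` and the resource name `elasticsearch` (both are illustrative choices):

```yaml
# elasticsearch.yaml — single-node Elasticsearch StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
          env:
            # Run Elasticsearch in single-node mode (no cluster discovery)
            - name: discovery.type
              value: single-node
          ports:
            - containerPort: 9200
```

In a production setup you would also configure persistent storage via `volumeClaimTemplates`; for this exercise a single replica without persistence is enough.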

3
Deploy Fluentd DaemonSet to collect logs
Write a YAML manifest to deploy Fluentd as a DaemonSet in the logging namespace. Use the image fluent/fluentd:v1.14-debian-1. Mount the host's /var/log directory to /var/log inside the container. Apply the manifest with kubectl apply -f.
Kubernetes
Need a hint?

Use kind: DaemonSet to run Fluentd on all nodes. Mount /var/log from host to container.
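A sketch of the DaemonSet manifest, assuming the label `app: fluentd` and the volume name `varlog` (both illustrative):

```yaml
# fluentd.yaml — runs one Fluentd pod per node to read host logs
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.14-debian-1
          volumeMounts:
            # Mount the node's log directory read-only into the container
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

Because it is a DaemonSet, Kubernetes schedules one Fluentd pod on every node, which is what lets it collect logs cluster-wide.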

4
Deploy Kibana and expose it with a ClusterIP service
Write a YAML manifest to deploy Kibana as a Deployment with 1 replica in the logging namespace. Use the image docker.elastic.co/kibana/kibana:7.17.0 and expose port 5601. Create a ClusterIP service named kibana in the logging namespace exposing port 5601. Apply the manifest with kubectl apply -f, then run kubectl get pods -n logging and kubectl get svc -n logging to verify that the pods and service are running.
Kubernetes
Need a hint?

Deploy Kibana as a Deployment with 1 replica. Expose it with a ClusterIP service on port 5601.
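A sketch of the Deployment plus Service manifest, assuming the label `app: kibana` (illustrative); both objects can live in one file separated by `---`:

```yaml
# kibana.yaml — Kibana Deployment and its ClusterIP service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.17.0
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  type: ClusterIP            # internal-only access
  selector:
    app: kibana              # must match the Deployment's pod labels
  ports:
    - port: 5601
      targetPort: 5601
```

Since ClusterIP services are not reachable from outside the cluster, you can open the UI locally with `kubectl port-forward svc/kibana 5601:5601 -n logging` and browse to http://localhost:5601.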