Kubernetes · DevOps · ~10 min read

Centralized logging (EFK stack) in Kubernetes - Commands & Configuration

Introduction
When you run many applications on Kubernetes, logs are scattered across many places. Centralized logging collects all logs in one place so you can easily search and analyze them. The EFK stack uses Elasticsearch to store logs, Fluentd to collect and send logs, and Kibana to view logs in a friendly way.
Centralized logging with EFK is a good fit:
- When you want to see logs from all your Kubernetes pods in one dashboard.
- When you need to quickly find errors or issues across multiple containers.
- When you want to keep logs for a long time and search them efficiently.
- When you want to monitor your applications without logging into each pod.
- When you want to share logs with your team through a web interface.
Config File - efk-stack.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.6.3
        ports:
        - containerPort: 9200
        env:
        - name: discovery.type
          value: single-node
        # Elasticsearch 8.x enables TLS and authentication by default;
        # disable security for this single-node demo so Kibana and Fluentd
        # can connect over plain http
        - name: xpack.security.enabled
          value: "false"
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  ports:
  - port: 9200
    targetPort: 9200
  selector:
    app: elasticsearch
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # The stock fluent/fluentd image ships without the Elasticsearch output
        # plugin and ignores the FLUENT_ELASTICSEARCH_* variables below; the
        # fluentd-kubernetes-daemonset image includes both
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: elasticsearch.logging.svc.cluster.local
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:8.6.3
        ports:
        - containerPort: 5601
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch.logging.svc.cluster.local:9200
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  ports:
  - port: 5601
    targetPort: 5601
  selector:
    app: kibana

This file creates a logging namespace to keep logging components separate. It deploys Elasticsearch as a single-node cluster to store logs. A Fluentd DaemonSet runs on every node to collect logs from containers and send them to Elasticsearch. Kibana is deployed to provide a web interface to search and view logs. Services expose Elasticsearch and Kibana inside the cluster.
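Once the stack is applied, you can sanity-check Elasticsearch through its REST API: run `kubectl port-forward svc/elasticsearch 9200:9200 -n logging`, then inspect `http://localhost:9200/_cluster/health`. A minimal sketch of that check in Python (the JSON below is an illustrative sample response, not live output):

```python
import json

# Sample _cluster/health response (illustrative; fetch the real one from
# http://localhost:9200/_cluster/health while the port-forward is running)
sample = '{"cluster_name": "docker-cluster", "status": "yellow", "number_of_nodes": 1}'

health = json.loads(sample)
# "green" or "yellow" is fine for a single-node demo cluster; "yellow" just
# means replica shards are unassigned because there is no second node
assert health["status"] in ("green", "yellow"), f"cluster unhealthy: {health['status']}"
print(f"{health['cluster_name']}: {health['status']} with {health['number_of_nodes']} node(s)")
```

A "red" status would mean primary shards are missing and logs are not being stored.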

Commands
This command creates all the EFK stack components in Kubernetes: Elasticsearch, Fluentd, and Kibana with their services and namespace.
Terminal
kubectl apply -f efk-stack.yaml
Expected Output
namespace/logging created
deployment.apps/elasticsearch created
service/elasticsearch created
daemonset.apps/fluentd created
deployment.apps/kibana created
service/kibana created
Check that all pods for Elasticsearch, Fluentd, and Kibana are running in the logging namespace.
Terminal
kubectl get pods -n logging
Expected Output
NAME                             READY   STATUS    RESTARTS   AGE
elasticsearch-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
fluentd-xxxxx                    1/1     Running   0          1m
kibana-xxxxxxxxxx-xxxxx         1/1     Running   0          1m
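If you prefer to script this readiness check, `kubectl get pods -n logging -o json` emits a pod list you can parse. A hedged sketch against a trimmed sample (the field names match the real Pod API, but the pod names are placeholders and live output carries many more fields):

```python
import json

# Trimmed sample of `kubectl get pods -n logging -o json` output
# (placeholder pod names; illustrative only)
sample = json.loads("""
{"items": [
  {"metadata": {"name": "elasticsearch-abc"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "fluentd-xyz"},       "status": {"phase": "Running"}},
  {"metadata": {"name": "kibana-def"},        "status": {"phase": "Pending"}}
]}
""")

# Collect any pod not yet in the Running phase
not_running = [p["metadata"]["name"] for p in sample["items"]
               if p["status"]["phase"] != "Running"]
print("all pods Running" if not not_running else f"waiting on: {not_running}")
```

For ad-hoc use, `kubectl wait --for=condition=Ready pod -l app=kibana -n logging --timeout=120s` does the same job without any scripting.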
Forward local port 5601 to the Kibana service so you can open the Kibana dashboard in your browser at http://localhost:5601.
Terminal
kubectl port-forward svc/kibana 5601:5601 -n logging
Expected Output
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
-n - Specify the namespace where the Kibana service is running
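With the port-forward running, Kibana reports its own health at http://localhost:5601/api/status before you open the dashboard. A sketch that parses a sample response (the status.overall.level field is how Kibana 8.x reports health; the JSON here is illustrative, not live output):

```python
import json

# Sample Kibana /api/status payload (illustrative; fetch the real one with
# curl http://localhost:5601/api/status while the port-forward runs)
sample = '{"name": "kibana", "status": {"overall": {"level": "available"}}}'

level = json.loads(sample)["status"]["overall"]["level"]
# Kibana 8.x reports available / degraded / unavailable / critical
assert level == "available", f"Kibana not ready: {level}"
print(f"Kibana status: {level}")
```

If Kibana stays "unavailable", check its pod logs for Elasticsearch connection errors first.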
View the last 10 log lines from Fluentd pods to verify it is collecting and forwarding logs.
Terminal
kubectl logs -l app=fluentd -n logging --tail=10
Expected Output
[2024-06-01 12:00:00] Fluentd started
[2024-06-01 12:00:05] Sending logs to Elasticsearch
[2024-06-01 12:00:10] Successfully sent batch of logs
-l - Select pods by label
--tail - Show only last N lines
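Kibana is essentially a UI over the Elasticsearch search API, so you can confirm logs arrived by issuing the same query directly. The sketch below builds a query body for recent error lines; note that logstash-* is the fluentd-kubernetes-daemonset's default index pattern and "log" is an assumed field name, so adjust both to your setup. POST the body to http://localhost:9200/logstash-*/_search while Elasticsearch is port-forwarded.

```python
import json

# Elasticsearch _search query body: last hour of entries whose "log" field
# mentions "error" ("log" is an assumed field name; logstash-* is the
# fluentd daemonset image's default index pattern)
query = {
    "query": {
        "bool": {
            "must": [{"match": {"log": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    "size": 20,
    "sort": [{"@timestamp": {"order": "desc"}}],
}

print(json.dumps(query, indent=2))
```

This is the same bool/range query Kibana's Discover view generates when you filter by time and search for "error".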
Key Concept

If you remember nothing else from this pattern, remember: Fluentd collects logs from all nodes and sends them to Elasticsearch, where Kibana lets you search and view them easily.

Common Mistakes
Mistake: Not creating the logging namespace before applying the EFK stack.
Result: Resources fail to create because the namespace does not exist.
Fix: Apply the full efk-stack.yaml, which includes the namespace, or create it first with kubectl create namespace logging.

Mistake: Trying to access Kibana without port-forwarding or exposing the service.
Result: Kibana is not reachable from outside the cluster by default, so the browser cannot connect.
Fix: Use kubectl port-forward to access Kibana locally, or expose it with an Ingress or a LoadBalancer service.

Mistake: Not mounting the correct log directories in the Fluentd DaemonSet.
Result: Fluentd cannot read container logs, so nothing is collected or sent.
Fix: Mount /var/log and /var/lib/docker/containers as shown in the config so Fluentd can read the logs.
Summary
1. Apply the efk-stack.yaml file to deploy Elasticsearch, Fluentd, and Kibana in the logging namespace.
2. Verify the pods are running with kubectl get pods -n logging.
3. Use kubectl port-forward to open the Kibana dashboard locally.
4. Check the Fluentd logs to confirm logs are being collected and forwarded.