Kubernetes · DevOps · ~20 mins

Centralized logging (EFK stack) in Kubernetes - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ EFK Stack Mastery: answer all five challenges correctly to earn this badge. Test your skills under time pressure!
💻 Command Output
intermediate
Time limit: 2:00
Check Elasticsearch Pod Status
You run the command kubectl get pods -n logging to check the status of Elasticsearch pods in your EFK stack. What is the expected output if Elasticsearch is running correctly?
Kubernetes
kubectl get pods -n logging
A
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-0           1/1     Completed   0          10m
B
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-0           1/1     Running   0          10m
C
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-0           1/1     Pending   0          10m
D
NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-0           0/1     CrashLoopBackOff   3          10m
💡 Hint
Look for a pod with STATUS Running and a full READY count (e.g. 1/1).
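This check can also be scripted by parsing the `kubectl` output. The namespace (`logging`) matches the question; the awk logic is an illustrative sketch, not the only way to do it:

```shell
# Sketch: flag any pod in the logging namespace that is not fully
# READY (e.g. 0/1) or whose STATUS is not "Running".
kubectl get pods -n logging --no-headers | awk '
  { split($2, ready, "/") }
  ready[1] != ready[2] || $3 != "Running" { bad = 1; print "not healthy:", $1 }
  END { exit bad }'
```

The script exits non-zero if any pod is unhealthy, so it can gate a CI step or a readiness script.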
Configuration
intermediate
Time limit: 2:30
Fluentd Configuration for Kubernetes Logs
Which Fluentd configuration snippet correctly collects logs from all Kubernetes pods and sends them to Elasticsearch in the EFK stack?
A
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
</source>
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local
  port 9200
  logstash_format true
</match>
B
<source>
  @type syslog
  port 5140
</source>
<match **>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local
  port 9200
</match>
C
<source>
  @type tail
  path /var/log/messages
  pos_file /var/log/fluentd-messages.log.pos
  tag system
  format none
</source>
<match system>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local
  port 9200
</match>
D
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format none
</source>
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local
  port 9200
</match>
💡 Hint
Look for the JSON log format and the correct path for Kubernetes container logs (/var/log/containers/).
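Worth noting for option A: `format json` is the legacy (v0.12) directive. In Fluentd v1 the parser moves into a nested `<parse>` section. A sketch of the same source block in v1 syntax, with the path, pos_file, and tag unchanged from the options above:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>
```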
Troubleshoot
advanced
Time limit: 3:00
Kibana Dashboard Not Showing Logs
You notice Kibana dashboards are empty even though Elasticsearch and Fluentd pods are running. Which is the most likely cause?
A
Fluentd is not able to connect to Elasticsearch due to a wrong service name or port.
B
The Elasticsearch index is corrupted and needs manual deletion.
C
The Kibana pod is in CrashLoopBackOff state due to insufficient memory.
D
Kubernetes nodes are not labeled correctly for Fluentd to run.
💡 Hint
Check Fluentd logs for connection errors to Elasticsearch.
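A hedged example of the first checks to run for this scenario. The namespace (`logging`), DaemonSet name (`fluentd`), and grep pattern are assumptions to adapt to your deployment:

```shell
# Scan recent Fluentd logs for connection errors or retries against Elasticsearch.
kubectl logs -n logging daemonset/fluentd --tail=100 | grep -iE 'error|warn|retry'

# From inside a Fluentd pod, confirm the Elasticsearch service name resolves
# and answers on port 9200 (pod name is a placeholder).
kubectl exec -n logging <fluentd-pod> -- \
  curl -s http://elasticsearch.logging.svc.cluster.local:9200/_cluster/health
```

If the curl fails here, the cause is the service name/port (option A territory) rather than Kibana itself.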
🔀 Workflow
advanced
Time limit: 3:00
Steps to Upgrade EFK Stack Components
What is the correct order of steps to safely upgrade Elasticsearch, Fluentd, and Kibana in a Kubernetes EFK stack?
A
3, 1, 2, 4, 5
B
2, 1, 3, 4, 5
C
1, 3, 2, 4, 5
D
1, 2, 3, 4, 5
💡 Hint
Start with safely handling Elasticsearch pods before upgrading Fluentd and Kibana.
Best Practice
expert
Time limit: 3:00
Optimizing Elasticsearch Index Management
Which Elasticsearch index management strategy is best to keep the EFK stack performant and storage efficient?
A
Manually delete indices weekly using kubectl exec into the Elasticsearch pod.
B
Keep all logs in a single large index to simplify queries and avoid overhead.
C
Use index lifecycle management (ILM) to roll over indices and delete old data automatically.
D
Disable index sharding to reduce resource usage on small clusters.
💡 Hint
Automate index rollover and deletion to maintain cluster health.
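For context on option C: an ILM policy is a JSON document PUT to `_ilm/policy/<name>` in Elasticsearch. The thresholds below (roll over at 10 GB or 1 day, delete after 7 days) are illustrative assumptions, not recommendations:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "10gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Once an index template references the policy, Elasticsearch applies the rollover and deletion phases automatically, with no manual index cleanup needed.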