Kubernetes · DevOps · ~15 min read

DaemonSets for per-node workloads in Kubernetes - Deep Dive

Overview - DaemonSets for per-node workloads
What is it?
A DaemonSet is a Kubernetes object that ensures a copy of a specific pod runs on every node in a cluster or on selected nodes. It is used to deploy per-node workloads like monitoring agents, log collectors, or network tools. When new nodes join the cluster, the DaemonSet automatically adds pods to them. When nodes leave, the pods are cleaned up.
Why it matters
Without DaemonSets, you would have to manually deploy and manage pods on each node, which is error-prone and inefficient. DaemonSets solve the problem of running essential services uniformly across all nodes, ensuring consistent monitoring, logging, or networking. This uniformity is critical for cluster health and security.
Where it fits
Before learning DaemonSets, you should understand basic Kubernetes concepts like pods, nodes, and deployments. After mastering DaemonSets, you can explore advanced topics like StatefulSets, Operators, and custom controllers that manage complex workloads.
Mental Model
Core Idea
A DaemonSet automatically runs one pod copy on every node to provide node-level services consistently across the cluster.
Think of it like...
Imagine a hotel where every room needs a smoke detector installed. Instead of installing them one by one, a DaemonSet is like a maintenance team that automatically places a smoke detector in every room as soon as it is built.
┌─────────────┐       ┌─────────────┐
│   Node 1    │──────▶│ Pod (Daemon)│
│  (Worker)   │       └─────────────┘
└─────────────┘
┌─────────────┐       ┌─────────────┐
│   Node 2    │──────▶│ Pod (Daemon)│
│  (Worker)   │       └─────────────┘
└─────────────┘
┌─────────────┐       ┌─────────────┐
│   Node 3    │──────▶│ Pod (Daemon)│
│  (Worker)   │       └─────────────┘
└─────────────┘
Build-Up - 7 Steps
1
Foundation: Understanding Kubernetes Nodes and Pods
Concept: Learn what nodes and pods are in Kubernetes and how pods run on nodes.
A Kubernetes cluster has machines called nodes. Each node runs pods, which are the smallest units that hold containers. Pods run your applications or services. Nodes can be physical or virtual machines.
Result
You know that pods run on nodes and that nodes make up the cluster.
Understanding nodes and pods is essential because DaemonSets control how pods are placed on nodes.
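For orientation, here is a minimal Pod manifest of the kind a DaemonSet later stamps onto each node (the names and image choice are illustrative):

```yaml
# A minimal standalone Pod: one container that just sleeps.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: busybox
    command: ["sleep", "3600"]
```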
2
Foundation: What is a DaemonSet in Kubernetes?
Concept: Introduce the DaemonSet object and its purpose to run pods on all or selected nodes.
A DaemonSet ensures that a pod runs on every node or a subset of nodes. It automatically adds pods to new nodes and removes them from deleted nodes. This is useful for running services that must be on every node, like log collectors or monitoring agents.
Result
You understand that DaemonSets automate pod placement per node.
Knowing that DaemonSets automate uniform pod deployment helps you manage node-level services efficiently.
3
Intermediate: Creating a Basic DaemonSet Manifest
🤔 Before reading on: do you think a DaemonSet manifest looks like a Deployment manifest or something completely different? Commit to your answer.
Concept: Learn the YAML structure to define a DaemonSet and how it differs from other controllers.
A DaemonSet manifest includes apiVersion: apps/v1, kind: DaemonSet, metadata, and a pod template under spec.template. Unlike Deployments, DaemonSets have no replicas field because they run one pod per node automatically. Example:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset
spec:
  selector:
    matchLabels:
      name: example-pod
  template:
    metadata:
      labels:
        name: example-pod
    spec:
      containers:
      - name: example-container
        image: busybox
        command: ["sleep", "3600"]
Result
You can write a basic DaemonSet YAML to deploy a pod on all nodes.
Understanding the manifest structure lets you customize DaemonSets for your needs.
4
Intermediate: Controlling DaemonSet Pod Placement
🤔 Before reading on: do you think DaemonSets run pods on all nodes always, or can you limit which nodes get pods? Commit to your answer.
Concept: Learn how to use node selectors, tolerations, and affinity to control where DaemonSet pods run.
You can control DaemonSet pod placement using:
- nodeSelector: select nodes by label
- tolerations: allow pods onto tainted nodes
- affinity: define more complex rules for node selection

Example snippet (inside spec.template.spec):

nodeSelector:
  disktype: ssd
tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"
Result
You can restrict DaemonSet pods to run only on specific nodes.
Knowing placement controls prevents resource waste and ensures pods run where needed.
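Of the placement mechanisms, affinity is the most expressive. A hedged sketch using the standard well-known kubernetes.io/os node label (the rule itself is illustrative):

```yaml
# Inside spec.template.spec: a hard scheduling rule that restricts
# DaemonSet pods to Linux nodes. "required...IgnoredDuringExecution"
# means the rule is enforced at scheduling time only.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
```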
5
Intermediate: Updating DaemonSets Safely
🤔 Before reading on: do you think DaemonSet updates restart all pods at once or one by one? Commit to your answer.
Concept: Understand how rolling updates work for DaemonSets and how to configure update strategies.
DaemonSets support rolling updates by default, replacing pods one at a time to avoid downtime. You configure the update strategy with spec.updateStrategy, choosing RollingUpdate or OnDelete: RollingUpdate replaces pods gradually, while OnDelete waits for you to delete each pod manually before recreating it. Example:

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
Result
You can update DaemonSet pods without disrupting all nodes simultaneously.
Understanding update strategies helps maintain cluster stability during changes.
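For contrast, the manual strategy is a one-line change:

```yaml
# With OnDelete, existing pods keep running the old template;
# the controller creates a replacement only after a pod is deleted.
updateStrategy:
  type: OnDelete
```

After applying a template change, roll one node at a time with kubectl delete pod <pod-name>; the controller recreates each deleted pod from the new template.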
6
Advanced: Handling DaemonSets on Cluster Scaling
🤔 Before reading on: when a new node joins, does the DaemonSet pod start automatically or require manual intervention? Commit to your answer.
Concept: Learn how DaemonSets react to nodes joining or leaving the cluster and how this affects workload consistency.
When a new node joins, the DaemonSet controller automatically creates the pod on that node. When a node leaves or is removed, the pod is deleted. This keeps the workload present on every eligible node. However, if node labels or taints change, pods may not be scheduled until conditions match again. You can watch pods appear and disappear with:

kubectl get pods -o wide --watch
Result
DaemonSet pods dynamically adjust to cluster size changes without manual steps.
Knowing this dynamic behavior ensures you trust DaemonSets for consistent node-level services.
7
Expert: DaemonSet Controller Internals and Edge Cases
🤔 Before reading on: do you think DaemonSet pods can run multiple copies per node or only one? Commit to your answer.
Concept: Explore how the DaemonSet controller manages pod lifecycle, handles node conditions, and deals with edge cases like node drain or taints.
The DaemonSet controller watches nodes and pods continuously and ensures exactly one pod per eligible node, respecting node taints and tolerations so pods are not scheduled onto unsuitable nodes. Note that kubectl drain does not evict DaemonSet pods: it refuses to proceed unless you pass --ignore-daemonsets, and even then it leaves them running, because the controller would immediately recreate them. A single DaemonSet never runs multiple pods per node; to get more than one per node you need additional DaemonSets or a different controller. Edge cases include:
- pods stuck in a Terminating state when finalizers block deletion
- scheduling delays after node labels change
- conflicts with other controllers managing pods on the same nodes
Result
You understand the internal logic and limitations of DaemonSets in production.
Knowing controller internals helps troubleshoot complex issues and optimize DaemonSet usage.
Under the Hood
The DaemonSet controller is a Kubernetes control loop that continuously reconciles cluster state. It lists all nodes and ensures a pod exists on each node matching the DaemonSet's node selectors and tolerations, creating pods for newly joined nodes and deleting pods for removed ones. It does not track a replica count; instead it enforces one pod per eligible node. Since Kubernetes 1.12, the default scheduler places DaemonSet pods, with the controller pinning each pod to its target node through a node-affinity term. The controller also handles updates by deleting and recreating pods according to the update strategy.
Why designed this way?
DaemonSets were designed to solve the problem of running node-level services uniformly without manual intervention. The one-pod-per-node model simplifies management and ensures consistency. Alternatives like Deployments do not guarantee per-node placement. The controller's design balances automation with flexibility through selectors and tolerations, allowing it to adapt to diverse cluster environments.
┌──────────────────────────────────────────┐
│           DaemonSet Controller           │
├──────────────────────────────────────────┤
│ Watches DaemonSet spec, pods, and nodes  │
├──────────────────────────────────────────┤
│ For each node:                           │
│  ├─ Check node labels & taints           │
│  ├─ Eligible & no pod → create pod       │
│  ├─ Pod on removed node → delete pod     │
│  └─ Manage pod updates                   │
└──────────────────────────────────────────┘
Myth Busters - 4 Common Misconceptions
Quick: Does a DaemonSet run multiple pods per node by default? Commit to yes or no.
Common Belief: A DaemonSet runs multiple pods on each node to handle load.
Reality: A DaemonSet runs exactly one pod per eligible node.
Why it matters: Believing multiple pods run per node can lead to resource overcommitment and confusion about pod counts.
Quick: Can a DaemonSet pod run on a node that is tainted with NoSchedule without tolerations? Commit to yes or no.
Common Belief: DaemonSet pods ignore node taints and run everywhere.
Reality: DaemonSet pods respect node taints and will not run on tainted nodes unless they have matching tolerations.
Why it matters: Ignoring taints can cause pods to remain unscheduled, leaving some nodes without their node-level services.
Quick: When updating a DaemonSet, do all pods restart simultaneously? Commit to yes or no.
Common Belief: DaemonSet updates restart all pods at once, causing downtime.
Reality: DaemonSets use rolling updates by default, updating pods one at a time to avoid downtime.
Why it matters: Misunderstanding update behavior can cause unnecessary fear of downtime and poor update planning.
Quick: Does a DaemonSet automatically run pods on nodes that do not match its nodeSelector? Commit to yes or no.
Common Belief: DaemonSets run pods on every node regardless of labels or selectors.
Reality: DaemonSets only run pods on nodes that match their nodeSelector and tolerations.
Why it matters: Assuming pods run everywhere can cause missing services on some nodes and troubleshooting confusion.
Expert Zone
1
DaemonSets can be combined with Pod Security Admission (or the removed-in-1.25 PodSecurityPolicy) to enforce security on node-level pods, which is often overlooked.
2
The interaction between DaemonSets and node auto-scaling groups requires careful label and taint management to avoid scheduling issues.
3
DaemonSets do not support horizontal scaling per node; to run multiple pods per node, multiple DaemonSets or other controllers are needed.
When NOT to use
DaemonSets are not suitable for workloads that require scaling by demand or that do not need to run on every node. For such cases, use Deployments or StatefulSets. Also, avoid DaemonSets for batch jobs or ephemeral workloads that do not need persistent presence on nodes.
Production Patterns
In production, DaemonSets are commonly used for logging agents like Fluentd, monitoring agents like Prometheus Node Exporter, and network plugins like Calico or Weave. They are often combined with node labels and taints to target specific node pools, and use rolling updates with maxUnavailable set to 1 to maintain cluster stability.
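Putting these patterns together, here is a sketch of a production-style logging DaemonSet. The image, names, and namespace are illustrative placeholders, not taken from any specific product:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # illustrative name
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # disrupt at most one node at a time
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule   # also collect logs from control-plane nodes
      containers:
      - name: agent
        image: example.com/log-agent:1.0   # illustrative image
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true     # read host logs without modifying them
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```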
Connections
Kubernetes Deployments
complementary controllers with different pod placement models
Understanding DaemonSets alongside Deployments clarifies how Kubernetes manages both per-node and scalable application workloads.
Operating System Daemons
conceptual similarity in running background services on every machine
Knowing OS daemons helps grasp why DaemonSets exist: to run essential background services uniformly on all nodes.
Distributed Systems Consistency
DaemonSets ensure consistent service presence across distributed nodes
Recognizing DaemonSets as a pattern for uniform service deployment aids understanding of consistency challenges in distributed systems.
Common Pitfalls
#1 Expecting DaemonSet pods to run on all nodes without considering node labels or taints.
Wrong approach:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: myimage
      nodeSelector:
        disktype: ssd

Correct approach: Remove or adjust nodeSelector so it matches labels your nodes actually carry, or add tolerations for taints. Example:

nodeSelector:
  disktype: ssd        # only if nodes really have this label
tolerations:
- operator: "Exists"   # empty key with Exists tolerates every taint
  effect: "NoSchedule"

Root cause: Not realizing that nodeSelector and missing tolerations restrict pod scheduling leads to pods not running on the expected nodes.
#2 Changing a DaemonSet's pod template without an explicit update strategy and hoping the rollout pace is safe.
Wrong approach:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  template:
    spec:
      containers:
      - name: mycontainer
        image: myimage:v2

Correct approach: Specify the rolling update strategy explicitly so pods are replaced one at a time:

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1

Root cause: Relying on implicit defaults leaves the rollout pace undocumented; stating updateStrategy explicitly protects cluster stability during changes.
#3 Trying to run multiple pods per node with a single DaemonSet, expecting it to scale horizontally per node.
Wrong approach: Adding a replicas field to a DaemonSet (the field does not exist for this kind) or expecting multiple pods per node automatically.
Correct approach: Create multiple DaemonSets, or use a Deployment with node affinity, to run multiple pods per node.
Root cause: Misunderstanding the DaemonSet's one-pod-per-node model leads to incorrect scaling expectations.
Key Takeaways
DaemonSets ensure one pod runs on each eligible node, automating node-level service deployment.
They use node selectors, tolerations, and affinity to control pod placement precisely.
DaemonSets support rolling updates to maintain cluster stability during changes.
Understanding DaemonSet internals helps troubleshoot scheduling and update issues effectively.
DaemonSets are essential for consistent monitoring, logging, and networking services across Kubernetes clusters.