Azure Cloud · ~15 mins

AKS networking (kubenet, Azure CNI) - Deep Dive

Overview - AKS networking (kubenet, Azure CNI)
What is it?
AKS networking is how Azure Kubernetes Service connects and manages communication between containers, nodes, and external networks. It uses two main models: kubenet and Azure CNI. Kubenet is a basic network plugin that assigns IP addresses to nodes and uses network address translation for pods. Azure CNI assigns IP addresses directly to pods from the Azure virtual network, allowing better integration and scalability.
Why it matters
Networking in AKS controls how your applications talk to each other and the outside world. Without proper networking, containers can't communicate, causing failures or security risks. Choosing the right networking model affects performance, scalability, and how easily you manage your cluster. Without AKS networking, your apps would be isolated or insecure, making cloud deployments unreliable.
Where it fits
Before learning AKS networking, you should understand basic Kubernetes concepts like pods, nodes, and services. After this, you can explore advanced networking topics like network policies, ingress controllers, and service meshes to control traffic and security in your cluster.
Mental Model
Core Idea
AKS networking connects containers inside Kubernetes clusters to each other and the outside world using either simple node-based IP translation or direct pod IP assignment from Azure's network.
Think of it like...
Imagine a large apartment building where each apartment (pod) needs a phone line. Kubenet is like sharing a few phone lines for the whole building and forwarding calls inside, while Azure CNI gives each apartment its own direct phone line from the city network.
┌─────────────────────────────┐
│        Azure VNet           │
│ ┌───────────────┐           │
│ │   AKS Cluster │           │
│ │ ┌───────────┐ │           │
│ │ │  Nodes    │ │           │
│ │ │ ┌───────┐ │ │           │
│ │ │ │ Pods  │ │ │           │
│ │ │ └───────┘ │ │           │
│ │ └───────────┘ │           │
│ └───────────────┘           │
└─────────────────────────────┘

Kubenet: Pods share node IPs, use NAT to reach outside.
Azure CNI: Pods get IPs directly from Azure VNet.
Build-Up - 7 Steps
1
Foundation: Basics of AKS and Pod Networking
🤔
Concept: Understand what AKS is and how pods communicate inside a cluster.
AKS is a managed Kubernetes service on Azure. Pods are the smallest units running containers. For pods to work together, they need networking to send messages and data. AKS provides networking so pods can find and talk to each other and external services.
Result
You know that pods need network connections and AKS provides this automatically.
Understanding that pods are isolated units that need networking is the foundation for grasping how AKS networking works.
2
Foundation: Introduction to Network Plugins in AKS
🤔
Concept: Learn that AKS uses network plugins to manage pod networking.
Network plugins are software that control how pods get IP addresses and connect. AKS supports two main plugins: kubenet and Azure CNI. These plugins decide how pods get IPs and how traffic flows inside and outside the cluster.
Result
You understand that network plugins are the key to pod communication in AKS.
Knowing that network plugins shape pod connectivity helps you see why choosing the right one matters.
3
Intermediate: How Kubenet Networking Works
🤔 Before reading on: do you think pods get their own IPs directly from the Azure VNet in kubenet? Commit to your answer.
Concept: Explore kubenet's approach to pod IP addressing and traffic routing.
Kubenet assigns VNet IP addresses only to nodes, not pods. Pods get IPs from a separate, cluster-internal range (the pod CIDR). Traffic between pods on different nodes is routed through an Azure route table, and traffic leaving the cluster is translated (NAT) to the node's IP. This means pods share the node's IP for external traffic.
Result
Pods can communicate inside the cluster and reach outside, but their IPs are hidden behind node IPs.
Understanding kubenet's NAT approach explains why pod IPs are not visible outside and why this limits some network features.
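A kubenet cluster's pod range is chosen at creation time. A minimal sketch with the Azure CLI (resource names and CIDRs here are placeholder assumptions):

```shell
# Nodes draw IPs from the VNet subnet; pods draw from --pod-cidr,
# an internal range that is invisible to the rest of the VNet.
az aks create \
  --resource-group myRG \
  --name myKubenetAKS \
  --network-plugin kubenet \
  --pod-cidr 10.244.0.0/16 \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10
```

The pod CIDR must not overlap with the VNet's address space or the service CIDR, or routing breaks.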
4
Intermediate: How Azure CNI Networking Works
🤔 Before reading on: do you think Azure CNI assigns pod IPs from the Azure VNet directly? Commit to your answer.
Concept: Learn that Azure CNI assigns IPs to pods directly from the Azure virtual network.
Azure CNI integrates deeply with Azure networking. Each pod gets its own IP address from the Azure VNet subnet, just like a virtual machine. This allows pods to communicate directly with other Azure resources and external networks without NAT. It supports larger clusters and advanced networking features.
Result
Pods have unique IPs visible in Azure VNet, enabling direct communication and better network control.
Knowing Azure CNI assigns real VNet IPs to pods helps understand its advantages in scalability and integration.
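With Azure CNI, the cluster is pointed at an existing VNet subnet from which pods draw their IPs. A sketch, assuming the subnet already exists (names and the subscription ID are placeholders):

```shell
# Each node pre-reserves --max-pods IP addresses from the subnet
# for the pods it may schedule, plus one IP for itself.
az aks create \
  --resource-group myRG \
  --name myCniAKS \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aksSubnet" \
  --max-pods 30
```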
5
Intermediate: Comparing Kubenet and Azure CNI
🤔 Before reading on: which networking model do you think supports larger clusters better? Kubenet or Azure CNI? Commit to your answer.
Concept: Compare the strengths and limitations of kubenet and Azure CNI.
Kubenet is simpler and consumes far fewer VNet IPs, but it caps cluster size (its route table supports at most 400 routes, hence at most 400 nodes) and rules out some network features. Azure CNI uses more IPs but supports larger clusters, advanced features such as Azure network policies, and direct access from other Azure resources. Kubenet depends on route tables that may need manual management in custom VNets, while Azure CNI needs no per-node routes.
Result
You can choose the right networking model based on cluster size, IP availability, and feature needs.
Understanding trade-offs between kubenet and Azure CNI guides better network design decisions.
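The IP-consumption difference can be made concrete with back-of-the-envelope arithmetic (the node and pod counts below are illustrative assumptions, not Azure limits):

```shell
#!/bin/sh
# Illustrative figures:
NODES=50
MAX_PODS_PER_NODE=30

# kubenet: only nodes consume VNet IPs; pods live in the internal pod CIDR.
KUBENET_VNET_IPS=$NODES

# Azure CNI: each node takes one VNet IP for itself
# plus one per potential pod.
CNI_VNET_IPS=$((NODES * (MAX_PODS_PER_NODE + 1)))

echo "kubenet VNet IPs:   $KUBENET_VNET_IPS"    # 50
echo "Azure CNI VNet IPs: $CNI_VNET_IPS"        # 1550
```

The same 50-node cluster needs roughly 30x more VNet addresses under Azure CNI, which is the core trade-off behind the two models.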
6
Advanced: Configuring AKS Networking Options
🤔 Before reading on: do you think you can switch networking models after cluster creation easily? Commit to your answer.
Concept: Learn how to select and configure networking models during AKS cluster setup.
When creating an AKS cluster, you specify the network plugin: kubenet or Azure CNI. This choice affects IP allocation and network behavior. Switching between them after creation is not supported and requires cluster recreation. You also configure subnet sizes and IP ranges to ensure enough addresses for pods and nodes.
Result
You know how to plan and set up AKS networking correctly from the start.
Knowing that networking choice is fixed at creation prevents costly mistakes and downtime.
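Because the plugin cannot be changed later, it is worth confirming what an existing cluster uses before planning any changes. A sketch with placeholder resource names:

```shell
# Prints "kubenet" or "azure" for the cluster's network plugin.
az aks show \
  --resource-group myRG \
  --name myAKS \
  --query networkProfile.networkPlugin \
  --output tsv
```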
7
Expert: Advanced Networking Challenges and Solutions
🤔 Before reading on: do you think Azure CNI always uses more IPs than kubenet? Commit to your answer.
Concept: Explore complex scenarios like IP exhaustion, network policies, and hybrid networking in AKS.
Azure CNI can exhaust IP addresses if subnets are sized too small, requiring careful planning or subnet expansion. Kubenet depends on Azure route tables for cross-node pod communication; AKS maintains them automatically in standard deployments, but custom VNets or bring-your-own route tables need manual route management. Network policies work better with Azure CNI due to direct pod IPs (Azure's own network policy engine requires Azure CNI, while Calico also supports kubenet). Hybrid setups combining on-premises and Azure networks require VPNs or ExpressRoute with proper routing, and with kubenet the on-premises side cannot reach pod IPs directly.
Result
You understand how to handle real-world networking issues in AKS clusters.
Recognizing these challenges helps design resilient, scalable AKS networks and avoid common pitfalls.
Under the Hood
Kubenet works by assigning IP addresses only to nodes and creating a separate internal network for pods. It uses Linux bridge and iptables to route pod traffic through nodes, applying NAT for external communication. Azure CNI integrates with Azure's virtual network, assigning each pod an IP from the VNet subnet. It uses Azure's networking stack to route traffic directly, avoiding NAT and enabling pods to appear as first-class network citizens.
Why designed this way?
Kubenet was designed for simplicity and compatibility with Kubernetes defaults, minimizing IP usage and complexity. Azure CNI was created to leverage Azure's powerful networking features, allowing better scalability, security, and integration with Azure services. The tradeoff is between simplicity and advanced capabilities, giving users options based on their needs.
┌───────────────┐            ┌───────────────┐
│  Azure VNet   │            │  Azure VNet   │
│               │            │               │
│ ┌───────────┐ │            │ ┌───────────┐ │
│ │   Node    │ │            │ │   Node    │ │
│ │ (kubenet) │ │            │ │(Azure CNI)│ │
│ └───────────┘ │            │ └───────────┘ │
│       │       │            │       │       │
│   Pod IPs     │            │   Pod IPs     │
│  (internal)   │            │ (VNet range)  │
│       │       │            │       │       │
│ NAT via node  │            │ Direct routing│
└───────────────┘            └───────────────┘
Myth Busters - 4 Common Misconceptions
Quick: Do pods in kubenet get IP addresses from the Azure VNet directly? Commit to yes or no.
Common Belief: Pods in kubenet get IP addresses directly from the Azure virtual network.
Reality: In kubenet, pods get IPs from a separate internal range, not from the Azure VNet. Only nodes have VNet IPs.
Why it matters: Assuming pods have VNet IPs can lead to misconfigured routing and security rules, causing communication failures.
Quick: Can you change the AKS network plugin from kubenet to Azure CNI after cluster creation? Commit to yes or no.
Common Belief: You can switch between kubenet and Azure CNI networking anytime after creating the AKS cluster.
Reality: The network plugin is fixed at cluster creation and cannot be changed without recreating the cluster.
Why it matters: Trying to switch plugins later wastes time and resources, and can cause downtime.
Quick: Does Azure CNI always use more IP addresses than kubenet? Commit to yes or no.
Common Belief: Azure CNI always uses more IP addresses than kubenet, making it less efficient.
Reality: Azure CNI uses more IPs because each pod gets a VNet IP, but this enables better scalability and network features. Kubenet uses fewer IPs but limits cluster size and features.
Why it matters: Misunderstanding this trade-off can lead to choosing a network model that doesn't fit your cluster size or feature needs.
Quick: Does kubenet require manual routing setup for cross-node pod communication? Commit to yes or no.
Common Belief: Kubenet automatically handles all routing between pods on different nodes without extra setup.
Reality: Kubenet depends on an Azure route table with one route per node's pod CIDR. AKS creates and maintains this table in standard deployments, but custom VNets or bring-your-own route tables require manual route management, and the table caps out at 400 routes (and therefore 400 nodes).
Why it matters: Missing or stale routes cause pods on different nodes to fail to communicate, breaking applications.
Expert Zone
1
Azure CNI's direct pod IP assignment allows seamless integration with Azure Network Security Groups, enabling fine-grained security controls at the pod level.
2
Kubenet's use of NAT can complicate troubleshooting network issues because pod IPs are hidden behind node IPs, making logs and monitoring less clear.
3
Subnet sizing for Azure CNI must consider maximum pods per node times number of nodes, requiring careful IP address management to avoid exhaustion.
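That sizing rule can be checked with simple arithmetic before creating the cluster. A sketch, assuming a /24 subnet, 10 nodes, and 30 max pods per node (illustrative values), and using the fact that Azure reserves 5 addresses in every subnet:

```shell
#!/bin/sh
# Assumed inputs for illustration:
SUBNET_PREFIX=24   # e.g. a /24 subnet
NODES=10
MAX_PODS=30

# Azure reserves 5 addresses in every subnet.
USABLE=$(( (1 << (32 - SUBNET_PREFIX)) - 5 ))   # /24 -> 256 - 5 = 251
NEEDED=$(( NODES * (MAX_PODS + 1) ))            # 10 * 31 = 310

if [ "$NEEDED" -gt "$USABLE" ]; then
  echo "Subnet too small: need $NEEDED IPs, have $USABLE"
else
  echo "Subnet OK: need $NEEDED IPs, have $USABLE"
fi
```

Here a /24 falls short, so the cluster would need a larger subnet (a /23 yields 507 usable addresses) or fewer max pods per node.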
When NOT to use
Avoid Azure CNI if your Azure VNet has limited IP address space or if you want a simpler setup with fewer IPs used. Kubenet is better for small clusters or when IP scarcity is a concern. For advanced network policies or large-scale clusters, Azure CNI is preferred. If you need to integrate with on-premises networks tightly, Azure CNI offers better options.
Production Patterns
In production, teams often use Azure CNI for clusters with many nodes and pods to leverage direct IP assignment and network policies. Kubenet is common in development or small clusters to save IPs. Network policies are applied with Azure CNI to enforce security. Hybrid cloud setups use Azure CNI with VPN or ExpressRoute for seamless connectivity.
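Enabling network policies happens at creation time, alongside the plugin choice. A sketch with placeholder names (Azure's own network policy engine requires the Azure CNI plugin):

```shell
# Create a production cluster with Azure CNI and Azure network policies.
az aks create \
  --resource-group myRG \
  --name myProdAKS \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/prodSubnet" \
  --network-policy azure
```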
Connections
Virtual Private Network (VPN)
Builds-on
Understanding AKS networking helps grasp how VPNs connect cloud networks securely, as both involve routing and IP management.
Operating System Network Stack
Same pattern
AKS networking plugins like kubenet use OS-level features like bridges and iptables, so knowing OS networking clarifies how pod traffic is managed.
Telephone Network Systems
Analogy-based
The way kubenet uses NAT is similar to how phone systems share lines, showing how resource sharing and address translation work in networks.
Common Pitfalls
#1 Choosing Azure CNI without enough IP addresses in the subnet.
Wrong approach: az aks create --name myAKS --resource-group myRG --network-plugin azure --vnet-subnet-id /subscriptions/.../subnets/smallSubnet
Correct approach: az aks create --name myAKS --resource-group myRG --network-plugin azure --vnet-subnet-id /subscriptions/.../subnets/largeEnoughSubnet
Root cause: Not planning subnet size leads to IP exhaustion, causing pod creation failures.
#2 Trying to switch network plugin after cluster creation.
Wrong approach: az aks update --name myAKS --resource-group myRG --network-plugin azure
Correct approach: Delete and recreate the AKS cluster with the desired network plugin specified at creation.
Root cause: Not realizing that the network plugin is fixed at cluster creation.
#3 Missing routes for kubenet cross-node pod communication.
Wrong approach: Attach a custom route table to the AKS subnet and assume cross-node pod traffic still flows without per-node pod CIDR routes.
Correct approach: Let AKS manage its own route table, or, with a bring-your-own route table, ensure it holds a route for each node's pod CIDR.
Root cause: Kubenet routes cross-node pod traffic through Azure route tables; if those routes are absent or stale, pod-to-pod communication fails.
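For a bring-your-own route table, a per-node route can be added with the Azure CLI. A sketch in which every name, CIDR, and IP is a placeholder:

```shell
# One route per node: point the node's pod CIDR at the node's private IP.
az network route-table route create \
  --resource-group myRG \
  --route-table-name myRouteTable \
  --name node-0-pods \
  --address-prefix 10.244.0.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.240.0.4
```

This must be repeated (and kept current) for every node, which is why AKS-managed route tables are the simpler default.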
Key Takeaways
AKS networking connects pods inside clusters and to external networks using kubenet or Azure CNI plugins.
Kubenet uses node IPs with NAT for pods, saving IPs but limiting scalability and features.
Azure CNI assigns pods IPs directly from Azure VNet, enabling better integration and larger clusters.
Choosing the right network plugin at cluster creation is critical because it cannot be changed later.
Proper subnet sizing and routing configuration are essential to avoid network failures in AKS.