
EKS networking with VPC CNI in AWS - Commands & Configuration

Introduction
When you run Kubernetes on AWS using EKS, your pods need to talk to each other and the internet. The VPC CNI plugin assigns each pod a routable IP address from your VPC, so pods communicate with other pods and with AWS services over native AWS networking, without an overlay.
When to Use
When you want your Kubernetes pods to have direct IP addresses in your AWS VPC for better network performance.
When you need your pods to communicate with AWS services securely without extra network translation.
When you want to control pod networking using AWS VPC features like security groups and routing.
When you want to scale your Kubernetes cluster and keep pod networking consistent and reliable.
When you want to avoid complex overlay networks and use native AWS networking for your pods.
Config File - aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKSNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
  mapAccounts: |
This ConfigMap allows your EKS worker nodes to join the cluster and use the VPC CNI plugin. The mapRoles section maps the IAM role of your nodes to Kubernetes system groups, enabling networking and pod management.
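A common failure mode is a typo in the role ARN in mapRoles, which silently prevents nodes from joining. The sketch below is a hypothetical pre-flight check (not part of any AWS tooling) that validates an ARN string against the standard IAM role ARN shape before you apply the ConfigMap:

```python
import re

# Hypothetical helper: checks that a mapRoles entry looks like a valid IAM
# role ARN (arn:aws:iam::<12-digit account id>:role/<role name>) before the
# aws-auth ConfigMap is applied. The allowed role-name characters follow the
# IAM naming rules (alphanumeric plus +=,.@-_).
ROLE_ARN_PATTERN = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@-]+$")

def is_valid_role_arn(arn: str) -> bool:
    """Return True if the string matches the IAM role ARN format."""
    return bool(ROLE_ARN_PATTERN.match(arn))

print(is_valid_role_arn("arn:aws:iam::123456789012:role/EKSNodeRole"))  # True
print(is_valid_role_arn("arn:aws:iam::1234:role/EKSNodeRole"))          # False
```

A check like this catches the most frequent mistake (a truncated or malformed account ID) before the node group ever tries to register.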
Commands
This command installs (or updates) the AWS VPC CNI plugin in your EKS cluster, enabling pods to receive IP addresses from your VPC. Note that new EKS clusters ship with the plugin preinstalled, so applying the manifest typically upgrades the existing installation.
Terminal
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.13.3/config/v1.13/aws-k8s-cni.yaml
Expected Output
configmap/aws-node created
serviceaccount/aws-node created
clusterrole.rbac.authorization.k8s.io/aws-node created
clusterrolebinding.rbac.authorization.k8s.io/aws-node created
daemonset.apps/aws-node created
This command checks that the aws-node pods, which run the VPC CNI plugin, are running correctly in the kube-system namespace.
Terminal
kubectl get pods -n kube-system -l k8s-app=aws-node
Expected Output
NAME             READY   STATUS    RESTARTS   AGE
aws-node-abcde   1/1     Running   0          2m
aws-node-fghij   1/1     Running   0          2m
-n kube-system - Specifies the namespace where the aws-node pods run
-l k8s-app=aws-node - Filters pods by label to show only aws-node pods
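If you script this health check, you can parse the tabular output above rather than eyeballing it. The following is a minimal sketch, assuming the plain-text column layout shown in the expected output (the function name and sample data are illustrative, not from any AWS tool):

```python
# Sketch: flag aws-node pods that are not fully Ready and Running, given the
# plain-text output of `kubectl get pods -n kube-system -l k8s-app=aws-node`.
def unhealthy_aws_node_pods(kubectl_output: str) -> list[str]:
    problems = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the NAME READY STATUS ... header row
        name, ready, status = line.split()[:3]
        up, total = ready.split("/")  # e.g. "1/1" -> ("1", "1")
        if status != "Running" or up != total:
            problems.append(name)
    return problems

# Illustrative output with one healthy and one failing pod
sample = """\
NAME             READY   STATUS             RESTARTS   AGE
aws-node-abcde   1/1     Running            0          2m
aws-node-fghij   0/1     CrashLoopBackOff   5          2m
"""
print(unhealthy_aws_node_pods(sample))  # ['aws-node-fghij']
```

In practice you would pipe the live command output in (e.g. via subprocess); the parsing logic stays the same.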
This command shows detailed information about the aws-node daemonset to verify its configuration and status.
Terminal
kubectl describe daemonset aws-node -n kube-system
Expected Output
Name:           aws-node
Namespace:      kube-system
Selector:       k8s-app=aws-node
Labels:         k8s-app=aws-node
Annotations:    <none>
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  k8s-app=aws-node
  Containers:
   aws-node:
    Image:  602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.13.3
    Ports:  <none>
    Environment Variables:
      AWS_VPC_K8S_CNI_LOGLEVEL:           DEBUG
      AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER: true
    Mounts:  <none>
-n kube-system - Shows the daemonset in the kube-system namespace
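The key health signal in that output is that "Desired Number of Nodes Scheduled" equals "Current Number of Nodes Scheduled". A small sketch of that comparison, assuming the label format shown in the describe output (the function and sample text are illustrative):

```python
# Sketch: given `kubectl describe daemonset` text, check that the daemonset
# is scheduled on every node it should be (Desired == Current).
def daemonset_fully_scheduled(describe_output: str) -> bool:
    counts = {}
    for line in describe_output.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key in ("Desired Number of Nodes Scheduled",
                   "Current Number of Nodes Scheduled"):
            counts[key] = int(value.strip())
    return (len(counts) == 2
            and counts["Desired Number of Nodes Scheduled"]
            == counts["Current Number of Nodes Scheduled"])

# Illustrative excerpt of the describe output
sample = """\
Name:           aws-node
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
"""
print(daemonset_fully_scheduled(sample))  # True
```

A mismatch between the two counts usually points at node taints, IAM role problems, or nodes that failed to join the cluster.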
Key Concept

If you remember nothing else from this pattern, remember: the VPC CNI plugin lets your Kubernetes pods get real AWS VPC IP addresses for fast and secure networking.

Common Mistakes
Not applying the aws-auth ConfigMap with correct IAM roles before installing the VPC CNI plugin
Without proper IAM role mapping, worker nodes cannot join the cluster or manage pod networking, causing failures.
Create and apply the aws-auth ConfigMap with the correct IAM role ARN for your worker nodes before installing the VPC CNI plugin.
Using an outdated version of the aws-k8s-cni.yaml manifest
Older versions may lack important fixes or features, leading to networking issues or incompatibility with your EKS version.
Always use the latest stable version of the aws-k8s-cni.yaml manifest from the official AWS GitHub repository.
Not verifying that aws-node pods are running after installation
If the aws-node pods are not running, pod networking will fail and your cluster will not function properly.
Run kubectl get pods -n kube-system -l k8s-app=aws-node to confirm the plugin pods are running and healthy.
Summary
Apply the aws-auth ConfigMap to map IAM roles for your EKS worker nodes.
Install the AWS VPC CNI plugin using the official aws-k8s-cni.yaml manifest.
Verify the aws-node pods are running in the kube-system namespace to ensure pod networking is active.