
EKS cluster creation in AWS - Commands & Configuration

Introduction
Creating an EKS cluster gives you a managed environment for running containerized applications on AWS. It removes the work of standing up Kubernetes infrastructure yourself: AWS operates the control plane, and eksctl automates provisioning of the worker nodes.
  • When you want to deploy containerized applications on AWS with Kubernetes without managing the control plane yourself
  • When you need a scalable and secure environment for running microservices
  • When you want to integrate your Kubernetes workloads with other AWS services such as IAM and CloudWatch
  • When you want to avoid the complexity of setting up Kubernetes masters and focus on your applications
  • When you want a managed Kubernetes service that handles upgrades and availability automatically
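Before creating a cluster, it is worth confirming that the required CLIs are installed. A minimal sketch, assuming eksctl, kubectl, and the AWS CLI are the tools you intend to use:

```shell
# Check that each required CLI is on PATH before starting.
# (eksctl, kubectl, and the AWS CLI are assumed prerequisites.)
for tool in eksctl kubectl aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before proceeding"
  fi
done
```

Each missing tool is reported rather than failing the whole check, so you can install everything in one pass.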
Config File - cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: example-key

This file defines the EKS cluster configuration using eksctl format.

  • metadata.name: The cluster name.
  • metadata.region: AWS region to create the cluster.
  • nodeGroups: Defines worker nodes with instance type and count.
  • ssh: Allows SSH access to nodes using the specified key.
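The config above can be checked before any AWS resources are created. A sketch, assuming a recent eksctl release that supports the --dry-run flag:

```shell
# Write the cluster config shown above, then preview the fully
# expanded configuration without provisioning any AWS resources.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    ssh:
      allow: true
      publicKeyName: example-key
EOF

# --dry-run prints the resolved config instead of creating resources.
# Guarded so the snippet is safe to run where eksctl is absent.
if command -v eksctl >/dev/null 2>&1; then
  eksctl create cluster -f cluster.yaml --dry-run
else
  echo "eksctl not installed; skipping dry run"
fi
```

The dry run surfaces defaulted fields (VPC layout, Kubernetes version, IAM settings) so you can review them before the real create.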
Commands
This command creates the EKS cluster and worker nodes defined in cluster.yaml. Under the hood, eksctl provisions the control plane and each node group as CloudFormation stacks.
Terminal
eksctl create cluster -f cluster.yaml
Expected Output
[ℹ] eksctl version 0.140.0
[ℹ] using region us-east-1
[ℹ] setting availability zones to [us-east-1a us-east-1b us-east-1c]
[ℹ] nodegroup "ng-1" will use "ami-0c55b159cbfafe1f0" [AmazonLinux2/1.21]
[ℹ] creating EKS cluster "example-cluster" in "us-east-1" region
[ℹ] 1 nodegroup(s) (ng-1) were included (based on the include/exclude rules)
[ℹ] will create a CloudFormation stack for cluster itself and 1 nodegroup(s)
[✔] all cluster resources for "example-cluster" created
[ℹ] nodegroup "ng-1" has 2 node(s)
[✔] EKS cluster "example-cluster" in "us-east-1" region is ready
-f - Specifies the cluster configuration file to use
This command verifies that the worker nodes are registered and ready in the Kubernetes cluster.
Terminal
kubectl get nodes
Expected Output
NAME                                         STATUS   ROLES    AGE   VERSION
ip-192-168-10-1.us-east-1.compute.internal   Ready    <none>   5m    v1.21.2-eks-0389ca3
ip-192-168-10-2.us-east-1.compute.internal   Ready    <none>   5m    v1.21.2-eks-0389ca3
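Nodes can take a few minutes to register, so a single `kubectl get nodes` may catch them mid-startup. One way to block until every node is Ready (a sketch; the guard is only there so the snippet is safe where kubectl is absent):

```shell
# Wait until all nodes report the Ready condition, or time out.
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready nodes --all --timeout=300s \
    || echo "nodes not Ready within 5 minutes (or no cluster reachable)"
else
  echo "kubectl not installed; skipping readiness wait"
fi
```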
This command updates your local kubeconfig file so kubectl can connect to the new EKS cluster. eksctl writes this entry automatically at creation time, so you mainly need it on another machine or after the context has been removed.
Terminal
aws eks update-kubeconfig --name example-cluster --region us-east-1
Expected Output
Added new context arn:aws:eks:us-east-1:123456789012:cluster/example-cluster to /home/user/.kube/config
--name - Specifies the EKS cluster name
--region - Specifies the AWS region of the cluster
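After updating the kubeconfig, you can confirm which cluster kubectl is pointing at. A small sketch (guarded so it runs safely where kubectl or a kubeconfig is absent):

```shell
# Show the context kubectl will use for subsequent commands.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config current-context || echo "no current context set"
else
  echo "kubectl not installed; nothing to verify"
fi
```

For the cluster above, the context name should contain arn:aws:eks:us-east-1 and example-cluster.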
Key Concept

If you remember nothing else from this pattern, remember: eksctl simplifies EKS cluster creation by automating control plane and node setup with a simple config file.

Common Mistakes
  • Not updating kubeconfig after cluster creation. kubectl commands will fail because they don't know how to connect to the new cluster. Fix: run 'aws eks update-kubeconfig --name example-cluster --region us-east-1' to configure kubectl.
  • Using an incorrect or missing SSH key name in the config file. You won't be able to SSH into worker nodes when needed. Fix: ensure the SSH key exists in your AWS account and matches 'publicKeyName' in the config.
  • Skipping verification of node status with 'kubectl get nodes'. You won't know whether worker nodes are ready and connected to the cluster. Fix: always run 'kubectl get nodes' and confirm every node reports Ready status.
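When you are finished, the same config file can drive teardown: eksctl deletes the CloudFormation stacks it created for the control plane and node groups. A sketch (guarded so the snippet is safe to run where eksctl is absent):

```shell
# Tear down the cluster and node groups created from cluster.yaml.
if command -v eksctl >/dev/null 2>&1; then
  eksctl delete cluster -f cluster.yaml
else
  echo "eksctl not installed; nothing to delete"
fi
```

Deleting the cluster avoids ongoing charges for the control plane and the t3.medium worker nodes.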
Summary
  • Create an EKS cluster using eksctl with a YAML config file defining cluster and node details.
  • Update your local kubeconfig to connect kubectl to the new cluster.
  • Verify worker nodes are ready using 'kubectl get nodes' to ensure the cluster is operational.