How to Create an EKS Cluster with Terraform: Step-by-Step Guide
To create an Amazon EKS cluster with Terraform, define the aws_eks_cluster resource along with the necessary IAM roles and networking. Use the aws_eks_node_group resource to add worker nodes. Apply the configuration with terraform init, terraform plan, and terraform apply.

Syntax
The main Terraform resources to create an EKS cluster are:
- aws_eks_cluster: Defines the EKS control plane.
- aws_iam_role: Creates IAM roles for the cluster and nodes.
- aws_eks_node_group: Adds worker nodes to the cluster.
- aws_vpc, aws_subnet: Define networking for the cluster.
Each resource requires specific arguments like name, role_arn, and subnet_ids. The cluster needs a VPC with subnets for networking.
```terraform
# Data source for availability zones (referenced by the subnets below).
data "aws_availability_zones" "available" {}

resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "eks_subnet" {
  count             = 2
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = cidrsubnet(aws_vpc.eks_vpc.cidr_block, 8, count.index)
  availability_zone = element(data.aws_availability_zones.available.names, count.index)
}

resource "aws_iam_role" "eks_cluster_role" {
  name = "eks_cluster_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect    = "Allow",
      Principal = { Service = "eks.amazonaws.com" },
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_eks_cluster" "example" {
  name     = "example-eks-cluster"
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }
}
```
Example
This example creates a simple EKS cluster with a VPC, two subnets, an IAM role, and a managed node group with two worker nodes.
```terraform
provider "aws" {
  region = "us-west-2"
}

data "aws_availability_zones" "available" {}

resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "eks_subnet" {
  count             = 2
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = cidrsubnet(aws_vpc.eks_vpc.cidr_block, 8, count.index)
  availability_zone = element(data.aws_availability_zones.available.names, count.index)
}

resource "aws_iam_role" "eks_cluster_role" {
  name = "eks_cluster_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect    = "Allow",
      Principal = { Service = "eks.amazonaws.com" },
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_eks_cluster" "example" {
  name     = "example-eks-cluster"
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }

  depends_on = [aws_iam_role_policy_attachment.eks_cluster_policy]
}

resource "aws_iam_role" "eks_node_role" {
  name = "eks_node_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect    = "Allow",
      Principal = { Service = "ec2.amazonaws.com" },
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
  role       = aws_iam_role.eks_node_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  role       = aws_iam_role.eks_node_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ec2_container_registry_read_only" {
  role       = aws_iam_role.eks_node_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_eks_node_group" "example_nodes" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "example-node-group"
  node_role_arn   = aws_iam_role.eks_node_role.arn
  subnet_ids      = aws_subnet.eks_subnet[*].id

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_worker_node_policy,
    aws_iam_role_policy_attachment.eks_cni_policy,
    aws_iam_role_policy_attachment.ec2_container_registry_read_only
  ]
}
```
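Once the cluster is applied, it helps to surface the values you need for kubectl access. The output names below are illustrative additions, not part of the original example:

```terraform
output "cluster_name" {
  value = aws_eks_cluster.example.name
}

output "cluster_endpoint" {
  value = aws_eks_cluster.example.endpoint
}

# Handy one-liner for configuring kubectl after apply.
output "update_kubeconfig_command" {
  value = "aws eks update-kubeconfig --region us-west-2 --name ${aws_eks_cluster.example.name}"
}
```

After terraform apply finishes, terraform output update_kubeconfig_command prints the command that points kubectl at the new cluster.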
Output
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Common Pitfalls
Common mistakes when creating an EKS cluster with Terraform include:
- Not attaching required IAM policies to roles, causing permission errors.
- Missing or incorrect subnet IDs in vpc_config, leading to cluster creation failure.
- Forgetting to wait for IAM role policy attachments before creating the cluster.
- Using incompatible AWS provider versions or missing provider configuration.
Always check AWS IAM policies and subnet configurations carefully.
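The provider-version pitfall can be avoided by pinning versions explicitly. A minimal sketch; the specific version constraints below are illustrative assumptions, not requirements from this guide:

```terraform
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```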
```terraform
/* Wrong: Missing IAM policy attachment */
resource "aws_iam_role" "eks_cluster_role" {
  name = "eks_cluster_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect    = "Allow",
      Principal = { Service = "eks.amazonaws.com" },
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_eks_cluster" "example" {
  name     = "example-eks-cluster"
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids = ["subnet-12345678"]
  }
}

/* Right: Attach required policy */
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
```
Quick Reference
Key points to remember when creating an EKS cluster with Terraform:
- Define a VPC and subnets for networking.
- Create IAM roles with correct trust policies and attach AWS managed policies.
- Use aws_eks_cluster for the control plane and aws_eks_node_group for worker nodes.
- Run terraform init, then terraform plan, and finally terraform apply to deploy.
- Check the AWS console or CLI to verify cluster status after deployment.
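The verification step above can also be done inside Terraform itself, by reading the cluster back with a data source. A sketch; the `verify` data source name is an assumption:

```terraform
# Reads the cluster after creation; status should be "ACTIVE" once ready.
data "aws_eks_cluster" "verify" {
  name = aws_eks_cluster.example.name
}

output "cluster_status" {
  value = data.aws_eks_cluster.verify.status
}
```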
Key Takeaways
- Always attach required IAM policies to roles before creating the EKS cluster.
- Define a VPC with subnets and provide subnet IDs in the cluster's VPC config.
- Use aws_eks_node_group to add worker nodes to your cluster.
- Run terraform init, plan, and apply in order to deploy your infrastructure.
- Verify cluster creation success via the AWS console or CLI after deployment.