
Deploying workloads on EKS in AWS - Commands & Configuration

Introduction
Deploying workloads on EKS means running your applications inside a managed Kubernetes cluster on AWS. AWS operates the Kubernetes control plane for you, so you can focus on running and scaling your apps instead of managing cluster infrastructure.
When you want to run a web app that can handle many users and scale automatically.
When you need to deploy microservices that communicate with each other inside a secure environment.
When you want to update your app without downtime using rolling updates.
When you want to use Kubernetes tools but avoid managing the control plane.
When you want to run containerized apps with AWS integrations like load balancers and storage.
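The zero-downtime rolling update mentioned above is built into kubectl. A minimal sketch, assuming the Deployment and container names used in the manifest below (example-deployment / example-container) and that nginx:1.24 is the image you want to roll out to:

```shell
# Point the container at a new image; Kubernetes swaps pods in gradually
kubectl set image deployment/example-deployment example-container=nginx:1.24

# Watch the rollout until every replica is updated and healthy
kubectl rollout status deployment/example-deployment

# If the new version misbehaves, revert to the previous revision
kubectl rollout undo deployment/example-deployment
```

Because a Deployment keeps old pods serving traffic until their replacements are Ready, users see no downtime during the swap.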
Config File - deployment.yaml
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-container
        image: nginx:1.23
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

This file creates a Deployment and a Service in EKS.

Deployment: Runs 2 replicas (copies) of an Nginx container, each labeled 'app: example-app'. It handles rolling updates and restarts pods that fail.

Service: Exposes the Deployment's pods to the internet on port 80. On EKS, type LoadBalancer provisions an AWS load balancer automatically.

Commands
This command tells Kubernetes to create the Deployment and Service defined in the file. It starts running your app on the EKS cluster.
Terminal
kubectl apply -f deployment.yaml
Expected Output
deployment.apps/example-deployment created
service/example-service created
This command lists the running pods (app copies) to check if your app is running correctly.
Terminal
kubectl get pods
Expected Output
NAME                                  READY   STATUS    RESTARTS   AGE
example-deployment-6d4cfb7b7f-abcde   1/1     Running   0          30s
example-deployment-6d4cfb7b7f-fghij   1/1     Running   0          30s
This command shows the external IP address assigned to your app by the LoadBalancer so you can access it from the internet.
Terminal
kubectl get service example-service
Expected Output
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP                                PORT(S)        AGE
example-service   LoadBalancer   10.100.200.50   a1b2c3d4e5f6.us-east-1.elb.amazonaws.com   80:31234/TCP   1m
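The EXTERNAL-IP column reads <pending> until AWS finishes provisioning the load balancer, usually a minute or two. Once a hostname appears, you can test the app from your machine; a sketch that reads the hostname straight from the Service status rather than copying it by hand:

```shell
# Capture the load balancer hostname AWS assigned to the Service
EXTERNAL_IP=$(kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Request the Nginx welcome page through the load balancer
curl "http://${EXTERNAL_IP}/"
```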
Key Concept

If you remember nothing else from this pattern, remember: a Deployment manages your app copies and a Service exposes them to the internet.

Common Mistakes
Not labeling pods correctly in the Deployment spec.
The Service uses labels to find pods. If labels don't match, the Service won't route traffic to your app.
Ensure the pod template labels match the Service selector exactly.
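A quick way to confirm the labels line up, assuming the names from the manifest above: the Service's Endpoints object lists the pod IPs it will route to, so an empty list means the selector matched nothing.

```shell
# List only the pods carrying the label the Service selects on
kubectl get pods -l app=example-app

# An empty ENDPOINTS column means the selector matched no pods
kubectl get endpoints example-service
```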
Forgetting to wait for pods to be in Running status before accessing the app.
If pods are not ready, your app won't respond to requests.
Use 'kubectl get pods' and wait until pods show STATUS as Running.
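Instead of re-running 'kubectl get pods' by hand, you can block until the pods are ready. A sketch using kubectl's built-in wait, with the label from the manifest above:

```shell
# Block (up to 2 minutes) until all matching pods report Ready
kubectl wait --for=condition=Ready pod -l app=example-app --timeout=120s

# Or wait on the Deployment's rollout as a whole
kubectl rollout status deployment/example-deployment
```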
Using 'kubectl apply' without the correct kubeconfig context set to your EKS cluster.
Commands will run against the wrong cluster or fail.
Set the kubeconfig context to your EKS cluster before running commands.
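The usual way to point kubectl at an EKS cluster is through the AWS CLI. A sketch; the region and cluster name here are placeholders for your own:

```shell
# Write/refresh kubeconfig credentials for the cluster (names are examples)
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

# Verify which cluster kubectl will now talk to
kubectl config current-context
```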
Summary
Use 'kubectl apply -f deployment.yaml' to create your app Deployment and Service on EKS.
Check pod status with 'kubectl get pods' to ensure your app is running.
Find your app's external address with 'kubectl get service example-service' to access it.