
Auto Scaling with ELB integration in AWS - Commands & Configuration

Introduction
When your app gets busy, you want more servers added automatically to handle the load. Auto Scaling with ELB integration adds or removes servers based on demand, while the Elastic Load Balancer (ELB) spreads user requests evenly across them. This pattern is a good fit:
When your website traffic changes a lot during the day and you want to save money by running only the servers you need.
When you want to keep your app available even if some servers fail.
When you want new servers to start receiving user traffic only after they pass health checks.
When you want user requests balanced across multiple servers automatically.
When you want to avoid the manual work of adding or removing servers as demand changes.
Config File - main.tf
provider "aws" {
  region = "us-east-1"
}

# Note: launch configurations are legacy; for new setups AWS recommends
# launch templates (aws_launch_template) instead.
resource "aws_launch_configuration" "example" {
  name_prefix     = "example-launch-config-"
  image_id        = "ami-0c94855ba95c71c99"
  instance_type   = "t2.micro"
  security_groups = ["sg-0123456789abcdef0"]

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "example" {
  name                      = "example-asg"
  max_size                  = 3
  min_size                  = 1
  desired_capacity          = 1
  launch_configuration      = aws_launch_configuration.example.name
  vpc_zone_identifier       = ["subnet-0123456789abcdef0"]
  health_check_type         = "ELB"
  health_check_grace_period = 300
  target_group_arns         = [aws_lb_target_group.example.arn]
  tag {
    key                 = "Name"
    value               = "example-instance"
    propagate_at_launch = true
  }
}

resource "aws_lb" "example" {
  name               = "example-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = ["sg-0123456789abcdef0"]
  subnets            = ["subnet-0123456789abcdef0"]
}

resource "aws_lb_target_group" "example" {
  name     = "example-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "vpc-0123456789abcdef0"
  health_check {
    path                = "/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 5
    unhealthy_threshold = 2
    matcher             = "200-299"
  }
}

resource "aws_lb_listener" "example" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.example.arn
  }
}

This Terraform file creates an AWS Auto Scaling Group linked to an Application Load Balancer (ALB).

aws_launch_configuration: Defines the template for each server, including the AMI, instance type, and security groups.

aws_autoscaling_group: Manages how many servers run (between min_size and max_size), registers them with the load balancer's target group, and uses the ELB health checks to decide which instances are healthy.

aws_lb: Creates the Application Load Balancer.

aws_lb_target_group: Groups the servers the load balancer sends traffic to, and defines the HTTP health check.

aws_lb_listener: Listens on port 80 and forwards incoming traffic to the target group.
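The config above sets minimum and maximum capacity but no rule for when to scale. A target-tracking scaling policy can be attached to the same group; the sketch below is a hypothetical addition (the resource name and 50% target are assumptions, not part of the original file) that keeps average CPU utilization near a target:

```hcl
# Hypothetical addition to main.tf: a target-tracking policy that scales
# the example ASG to keep average CPU utilization near 50%.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "example-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.example.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```

With target tracking, AWS creates and manages the CloudWatch alarms for you, so you do not have to define separate scale-out and scale-in steps.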

Commands
This command sets up Terraform in the current folder, downloading necessary plugins to talk to AWS.
Terminal
terraform init
Expected Output
Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.0.0...
- Installed hashicorp/aws v4.0.0 (signed by HashiCorp)

Terraform has been successfully initialized!
This command shows what Terraform will create or change in AWS without making any changes yet.
Terminal
terraform plan
Expected Output
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_launch_configuration.example will be created
  + resource "aws_launch_configuration" "example" {
      + id = (known after apply)
      ...
    }

  # aws_autoscaling_group.example will be created
  + resource "aws_autoscaling_group" "example" {
      + id = (known after apply)
      ...
    }

  # aws_lb.example will be created
  + resource "aws_lb" "example" {
      + id = (known after apply)
      ...
    }

  # aws_lb_target_group.example will be created
  + resource "aws_lb_target_group" "example" {
      + id = (known after apply)
      ...
    }

  # aws_lb_listener.example will be created
  + resource "aws_lb_listener" "example" {
      + id = (known after apply)
      ...
    }

Plan: 5 to add, 0 to change, 0 to destroy.
This command creates all the resources in AWS as defined in the config file, without asking for confirmation.
Terminal
terraform apply -auto-approve
Expected Output
aws_launch_configuration.example: Creating...
aws_launch_configuration.example: Creation complete after 2s [id=example-launch-config-202406]
aws_lb.example: Creating...
aws_lb_target_group.example: Creating...
aws_lb_target_group.example: Creation complete after 3s [id=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/abcdef123456]
aws_lb.example: Creation complete after 5s [id=arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-lb/abcdef123456]
aws_lb_listener.example: Creating...
aws_lb_listener.example: Creation complete after 2s [id=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example-lb/abcdef123456/123456abcdef]
aws_autoscaling_group.example: Creating...
aws_autoscaling_group.example: Creation complete after 4s [id=example-asg]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
-auto-approve - Skip confirmation prompt to apply changes immediately
This AWS CLI command checks the status of the Auto Scaling Group to confirm it is active and linked to the load balancer.
Terminal
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names example-asg
Expected Output
{
  "AutoScalingGroups": [
    {
      "AutoScalingGroupName": "example-asg",
      "DesiredCapacity": 1,
      "MinSize": 1,
      "MaxSize": 3,
      "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/abcdef123456"
      ],
      "Instances": [
        {
          "InstanceId": "i-0123456789abcdef0",
          "LifecycleState": "InService",
          "HealthStatus": "Healthy"
        }
      ]
    }
  ]
}
--auto-scaling-group-names - Specify the name of the Auto Scaling Group to describe
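To confirm the instances are not only launched but also passing the load balancer's health checks, you can query the target group directly. The ARN below is a placeholder; substitute the TargetGroupARNs value returned by the describe command above:

```shell
# Check target health for the ASG's target group (ARN is a placeholder).
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/abcdef123456
```

A healthy instance shows "State": "healthy"; instances still warming up show "initial" until the health check thresholds are met.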
Key Concept

If you remember nothing else from this pattern, remember: Auto Scaling automatically adjusts server count based on demand, and ELB ensures traffic goes only to healthy servers.

Common Mistakes
Not linking the Auto Scaling Group to the ELB target group
Without this link, new servers won't receive user traffic, defeating the purpose of scaling.
Always specify the target_group_arns in the Auto Scaling Group configuration.
Setting health_check_type to 'EC2' instead of 'ELB'
The Auto Scaling Group won't consider the load balancer's health checks, so unhealthy instances might still get traffic.
Set health_check_type to 'ELB' to use the load balancer's health checks.
Using incompatible subnets or security groups for the load balancer and instances
This causes network or permission issues, preventing traffic flow or instance registration.
Ensure subnets and security groups allow communication between the load balancer and instances.
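The third mistake can be avoided by wiring the security groups explicitly. Below is a minimal HCL sketch (the group names and VPC ID are assumptions matching the placeholders in main.tf): the ALB accepts HTTP from the internet, and the instances accept HTTP only from the ALB's security group:

```hcl
# Hypothetical security groups: the ALB accepts HTTP from anywhere,
# and instances accept HTTP only from the ALB's security group.
resource "aws_security_group" "alb" {
  name   = "example-alb-sg"
  vpc_id = "vpc-0123456789abcdef0"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "instances" {
  name   = "example-instance-sg"
  vpc_id = "vpc-0123456789abcdef0"

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Referencing the ALB group by ID in the instance group's ingress rule ensures only load balancer traffic reaches the servers, which also lets the ELB health checks through.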
Summary
Use terraform init to prepare Terraform with AWS plugins.
Use terraform plan to preview resource changes before applying.
Use terraform apply to create the Auto Scaling Group and ELB resources.
Verify the Auto Scaling Group status and ELB integration with AWS CLI.
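When you are done experimenting, tear everything down so the load balancer and instances stop accruing charges:

```shell
# Destroy all resources defined in main.tf without a confirmation prompt.
terraform destroy -auto-approve
```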