Imagine a popular online store uses a Network Load Balancer (NLB) to distribute incoming customer requests. Suddenly, a flash sale causes a huge spike in traffic. How does the NLB manage this sudden increase?
Think about how NLBs are designed to handle large volumes of traffic efficiently.
Network Load Balancers scale automatically to handle millions of requests per second and sudden, volatile traffic spikes, with no manual intervention or pre-warming required. Rather than queueing or dropping requests, the NLB distributes each new connection across its healthy targets.
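One way to picture that distribution: an NLB is flow-based, selecting a target per TCP connection using a hash over the connection's attributes (protocol, source IP/port, destination IP/port), so every packet of a flow sticks to one target while new flows spread out. A minimal sketch of that idea, with made-up target IPs:

```python
import hashlib

# Illustrative targets only; a real NLB registers these in a target group.
TARGETS = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]

def pick_target(proto, src_ip, src_port, dst_ip, dst_port, targets=TARGETS):
    """Deterministically map a connection 5-tuple to one backend target."""
    flow = f"{proto}:{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    idx = int.from_bytes(hashlib.sha256(flow).digest()[:4], "big") % len(targets)
    return targets[idx]

# The same flow always lands on the same target; different flows spread out.
a = pick_target("tcp", "203.0.113.5", 50000, "198.51.100.1", 443)
b = pick_target("tcp", "203.0.113.5", 50000, "198.51.100.1", 443)
assert a == b and a in TARGETS
```

Because the mapping is per flow rather than per request, a flash sale's many independent connections fan out across all healthy targets without any queueing at the load balancer.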
You are designing a system using an AWS Network Load Balancer. You have two options for targets: IP addresses or instance IDs. Which choice allows you to load balance traffic to resources outside your VPC?
Consider which target type can represent resources outside the AWS environment.
IP address targets let the NLB route traffic to any reachable IP address, including on-premises servers (for example, over AWS Direct Connect or a VPN) and other resources outside the VPC. Instance ID targets can only refer to EC2 instances in the load balancer's VPC.
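As a rough sketch, registering an out-of-VPC IP target with the AWS CLI looks like the following. All names, IDs, and the ARN are placeholders, and the on-premises IP must be reachable from the VPC (e.g. via Direct Connect or VPN):

```shell
# Create a target group that targets IP addresses rather than instances.
aws elbv2 create-target-group \
  --name onprem-targets \
  --protocol TCP --port 443 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip

# Register an on-premises server's IP (ARN below is a placeholder).
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/onprem-targets/abc123 \
  --targets Id=192.168.10.5,Port=443
```

With `--target-type instance`, `register-targets` would instead take EC2 instance IDs, which restricts targets to the VPC.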
Your application expects 10 million TCP connections per minute. You plan to use an NLB to distribute this load. Which of the following is the best estimate of the NLB's capacity to handle this traffic?
Recall the NLB's design for high throughput and connection handling.
Network Load Balancers are designed to handle millions of requests per second. Ten million TCP connections per minute works out to roughly 167,000 new connections per second, which is well within a single NLB's capacity.
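The back-of-envelope conversion, assuming a steady arrival rate:

```python
# Convert the stated per-minute load into a per-second rate.
connections_per_minute = 10_000_000
connections_per_second = connections_per_minute / 60
print(round(connections_per_second))  # prints 166667
```

About 167,000 connections per second is orders of magnitude below the "millions per second" scale the NLB is built for.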
You need to choose between an AWS Network Load Balancer (NLB) and an Application Load Balancer (ALB) for your service. Which statement correctly describes a key tradeoff?
Think about the OSI layers where each load balancer operates and their feature sets.
The NLB operates at layer 4 (transport) and is optimized for high-performance, low-latency TCP/UDP traffic. The ALB operates at layer 7 (application) and supports advanced HTTP routing, such as path- and host-based rules, at the cost of slightly higher latency.
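The layer difference can be sketched as two routing functions: an ALB-style rule can inspect HTTP attributes like the request path, while an NLB-style decision only ever sees the connection's addresses and ports. Rule contents and target-group names below are invented for illustration:

```python
# Hypothetical layer-7 rules: (path prefix, target group).
ALB_RULES = [
    ("/api/", "api-target-group"),
    ("/static/", "static-target-group"),
]

def alb_route(path, rules=ALB_RULES, default="default-target-group"):
    """Layer-7 routing: match on the HTTP request path."""
    for prefix, group in rules:
        if path.startswith(prefix):
            return group
    return default

def nlb_route(src_ip, src_port, dst_port, targets):
    """Layer-4 routing: only the flow's addresses/ports are visible."""
    return targets[hash((src_ip, src_port, dst_port)) % len(targets)]

assert alb_route("/api/v1/users") == "api-target-group"
assert nlb_route("203.0.113.5", 50000, 443, ["t1", "t2"]) in ("t1", "t2")
```

The extra inspection is exactly why the ALB can route more intelligently and why the NLB can stay faster: it never parses the payload at all.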
Consider an NLB deployed across three Availability Zones (AZs) with backend instances in each AZ. A client sends a TCP request. Which sequence best describes the request flow from client to backend instance?
Think about how NLB preserves client IP and routes within AZs.
The client resolves the NLB's DNS name and sends the request to one of the static IPs the NLB exposes, one per AZ. With cross-zone load balancing disabled (the default for NLBs), that node forwards the request to a healthy backend instance in its own AZ, which keeps latency low. The backend processes the request, and the response returns through the NLB node to the client.
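The AZ-local step can be sketched as a lookup that only considers healthy targets in the receiving node's own zone. AZ names, IPs, and health states here are illustrative, and real target selection uses per-flow hashing rather than first-healthy:

```python
# (ip, healthy) pairs per AZ; all values are made up for illustration.
BACKENDS = {
    "us-east-1a": [("10.0.1.10", True), ("10.0.1.11", False)],
    "us-east-1b": [("10.0.2.10", True)],
    "us-east-1c": [("10.0.3.10", True)],
}

def route_in_az(node_az, backends=BACKENDS):
    """With cross-zone load balancing off, the NLB node in an AZ
    forwards only to healthy targets registered in that same AZ."""
    healthy = [ip for ip, ok in backends.get(node_az, []) if ok]
    if not healthy:
        raise RuntimeError(f"no healthy targets in {node_az}")
    return healthy[0]  # stand-in for per-flow target selection

assert route_in_az("us-east-1a") == "10.0.1.10"  # skips the unhealthy target
```

Enabling cross-zone load balancing would widen the candidate list to healthy targets in every AZ, trading a little latency for more even distribution.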