Imagine you have a web app behind an Application Load Balancer (ALB). How does the ALB decide which server to send a user's request to?
Think about how ALB can inspect the request details before forwarding.
ALB can inspect request attributes such as the URL path or the Host header and route traffic to different target groups based on listener rules, which are evaluated in priority order. This enables content-based routing from a single load balancer.
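To make the rule-evaluation idea concrete, here is a minimal Python sketch of content-based routing. The rule table, field names, and target-group names are invented for illustration; they are not the AWS API, but the first-match-by-priority behavior mirrors how ALB listener rules work.

```python
# Hypothetical listener rules, highest priority first. A rule with no
# host/path condition acts as the default rule, matching everything.
RULES = [
    {"host": "api.example.com", "path": None,       "target_group": "api-servers"},
    {"host": None,              "path": "/images/", "target_group": "static-servers"},
    {"host": None,              "path": None,       "target_group": "web-servers"},  # default
]

def route(host: str, path: str) -> str:
    """Return the target group for a request, ALB-listener style: first match wins."""
    for rule in RULES:
        if rule["host"] and rule["host"] != host:
            continue
        if rule["path"] and not path.startswith(rule["path"]):
            continue
        return rule["target_group"]
    raise RuntimeError("no default rule configured")

print(route("api.example.com", "/v1/users"))      # api-servers
print(route("www.example.com", "/images/x.png"))  # static-servers
print(route("www.example.com", "/home"))          # web-servers
```

Note the default rule at the lowest priority: a real ALB listener also requires a default action so no request falls through unmatched.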
In the architecture of an Application Load Balancer, which of these components is NOT involved?
Consider which components ALB directly manages versus external services.
DNS servers are external to ALB. An ALB is built from listeners and target groups and commonly integrates with Auto Scaling groups, but it does not perform DNS resolution itself; clients reach the ALB through its DNS name, which a service such as Route 53 resolves.
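One hedged way to picture the boundary is as a toy Python structure. The field names below are illustrative only, not boto3/AWS API attributes:

```python
# Toy model of the quiz answer: what an ALB manages directly, what it
# integrates with, and what stays external. Names are illustrative.
alb = {
    "listeners": [{"port": 443, "protocol": "HTTPS"}],  # evaluate rules, route requests
    "target_groups": ["web-servers", "api-servers"],    # registered EC2/IP/Lambda targets
    "integrations": ["Auto Scaling group"],             # adds/removes targets with demand
}
external = ["DNS resolution"]  # e.g. Route 53 resolves the ALB's DNS name

print(sorted(alb))   # components the ALB itself owns
print(external)      # the odd one out in the quiz question
```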
Your web app behind an ALB suddenly gets a huge spike in traffic. How does ALB help maintain performance?
Think about how ALB works with other AWS services to handle load.
ALB distributes requests across healthy targets (health checks remove failing targets from rotation) and integrates with Auto Scaling to add or remove servers based on demand, helping maintain performance during spikes.
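As a rough illustration (not AWS internals), the following Python sketch round-robins requests over healthy targets only, then applies a simple scale-out rule like one an Auto Scaling policy might use. The target names and the threshold are invented for the example:

```python
from itertools import cycle

targets = ["web-1", "web-2", "web-3"]
healthy = {"web-1": True, "web-2": False, "web-3": True}  # web-2 failed health checks

def healthy_targets():
    """Only targets passing health checks receive traffic."""
    return [t for t in targets if healthy.get(t)]

def dispatch(n_requests):
    """Spread requests round-robin across healthy targets only."""
    pool = cycle(healthy_targets())
    counts = {}
    for _ in range(n_requests):
        t = next(pool)
        counts[t] = counts.get(t, 0) + 1
    return counts

def maybe_scale_out(requests_per_target, threshold=400):
    """Crude stand-in for an Auto Scaling policy: add a server when per-target load is high."""
    if requests_per_target > threshold:
        new = f"web-{len(targets) + 1}"
        targets.append(new)
        healthy[new] = True

print(dispatch(1000))                           # {'web-1': 500, 'web-3': 500}
maybe_scale_out(1000 / len(healthy_targets()))  # 500 req/target > 400 -> scale out
print(healthy_targets())                        # ['web-1', 'web-3', 'web-4']
```

The two mechanisms are complementary: the dispatcher keeps traffic off unhealthy servers immediately, while scaling adds capacity on a slower timescale.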
Sticky sessions (session affinity) keep a user's requests on the same server. What is a downside of enabling sticky sessions on ALB?
Consider how sticking users to one server affects load balance.
Sticky sessions can distribute load unevenly: because each user is pinned to one server, a few servers may accumulate many (or unusually heavy) users while others sit underutilized, leading to hot spots and potential performance issues.
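A toy Python comparison makes the skew visible. The user names, request counts, and crc32 pinning below are illustrative; a real ALB pins sessions via a stickiness cookie, but the effect is the same once one pinned user is much heavier than the rest:

```python
from zlib import crc32

servers = ["srv-a", "srv-b", "srv-c"]

# Requests per user: one heavy user (e.g. a bot or power user) dominates.
requests = {"alice": 10, "bob": 12, "carol": 8, "heavy-user": 300}

def sticky_counts():
    """All of a user's requests go to the server their session is pinned to."""
    counts = {s: 0 for s in servers}
    for user, n in requests.items():
        pinned = servers[crc32(user.encode()) % len(servers)]
        counts[pinned] += n
    return counts

def round_robin_counts():
    """Without stickiness, requests spread evenly regardless of sender."""
    counts = {s: 0 for s in servers}
    total = sum(requests.values())
    for i in range(total):
        counts[servers[i % len(servers)]] += 1
    return counts

print(sticky_counts())       # one server absorbs the heavy user's 300 requests
print(round_robin_counts())  # {'srv-a': 110, 'srv-b': 110, 'srv-c': 110}
```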
You plan to deploy an ALB for a high-traffic app expecting 100,000 concurrent users. What is a reasonable estimate of ALB's concurrent connection capacity per load balancer?
Think about how AWS documents ALB capacity: elastic scaling rather than a fixed connection quota.
AWS does not publish a fixed per-load-balancer concurrent-connection quota for ALB; each load balancer scales elastically with traffic, and a single ALB can sustain well over 100,000 concurrent connections. Because capacity ramps up gradually, for a sudden, very large spike it can be worth contacting AWS Support in advance rather than relying on a quota increase, which applies to limits like targets and rules, not connections.