NGINX can distribute incoming web traffic to multiple backend servers. What is the main reason it does this?
Think about why spreading work helps a busy restaurant kitchen.
Load balancing spreads user requests across servers to avoid overloading one server, which helps keep the website fast and reliable.
Given this NGINX config snippet, what will happen when users access the site?
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
Look at how multiple servers are listed under upstream.
NGINX uses the upstream block to define a group of backend servers and, by default, distributes requests among them in round-robin fashion, sending each new request to the next server in turn.
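As a sketch of how the default behavior can be tuned, the optional weight parameter biases the round-robin rotation (the hostnames are placeholders from the question above):

upstream backend {
    # No balancing method specified, so NGINX uses round-robin.
    # weight=2 sends backend1 roughly twice as many requests as backend2.
    server backend1.example.com weight=2;
    server backend2.example.com;
}

Without any weight, each server receives an equal share of requests.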
NGINX is configured with two backend servers in the upstream block, but all traffic goes to only one server. What is a likely cause?
Sticky sessions keep users connected to the same server.
The ip_hash method makes NGINX send all requests from the same client IP to the same backend server. If most traffic arrives from a single IP, for example clients behind a shared NAT gateway or proxy, it can all land on one server.
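A minimal sketch of an ip_hash configuration that would produce this behavior (hostnames are placeholders):

upstream backend {
    ip_hash;  # a hash of the client IP selects the backend, so one IP always maps to one server
    server backend1.example.com;
    server backend2.example.com;
}

Removing the ip_hash line restores the default round-robin distribution.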
Put these steps in the correct order for how NGINX handles incoming requests with load balancing.
Think about the natural flow from receiving to responding.
NGINX first gets the request, chooses a backend, sends the request there, then returns the backend's response to the client.
NGINX supports several load balancing methods. Which method helps avoid sending too many requests to a slow backend server?
Think about which method balances load based on current server usage.
The least_conn method directs traffic to the backend with the fewest active connections, helping avoid overloading slower servers.
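A minimal sketch showing where least_conn goes in the upstream block (hostnames are placeholders):

upstream backend {
    least_conn;  # route each request to the backend with the fewest active connections
    server backend1.example.com;
    server backend2.example.com;
}

A slow server accumulates open connections, so least_conn naturally steers new requests toward the faster backends.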