Imagine you have 4 servers and a load balancer using the round robin algorithm. How does it assign incoming requests?
Think about a simple way to share requests evenly without checking server load.
Round robin cycles through servers one by one, sending each new request to the next server in the list and wrapping back to the first after the last. This balances load evenly if all servers have similar capacity.
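A minimal sketch of that cycling behavior, using hypothetical server names:

```python
from itertools import cycle

servers = ["s1", "s2", "s3", "s4"]  # hypothetical server names
rr = cycle(servers)                 # endlessly repeats the list in order

# Assign 8 incoming requests: each server receives every 4th request.
assignments = [next(rr) for _ in range(8)]
print(assignments)  # ['s1', 's2', 's3', 's4', 's1', 's2', 's3', 's4']
```

Note that the balancer needs no information about the servers at all, only the order of the list.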
You have 3 servers with different current loads. Which algorithm helps distribute new requests to the least busy server?
Consider which algorithm checks server load before assigning requests.
The least connections algorithm sends each new request to the server with the fewest active connections, helping balance uneven workloads.
Consider a system with 2 servers: one powerful and one weaker. What problem can round robin cause?
Think about how equal distribution affects servers with different power.
Round robin does not consider server capacity, so it can overload weaker servers by sending them the same number of requests as stronger ones.
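A small simulation illustrates the problem, with made-up capacities for the two servers:

```python
from itertools import cycle

# Hypothetical capacities: the strong server handles 100 concurrent
# requests, the weak one only 10.
capacity = {"strong": 100, "weak": 10}
counts = {"strong": 0, "weak": 0}

rr = cycle(counts)        # round robin over the server names
for _ in range(40):       # 40 concurrent requests, split evenly
    counts[next(rr)] += 1

print(counts)             # {'strong': 20, 'weak': 20}
overloaded = [s for s, n in counts.items() if n > capacity[s]]
print(overloaded)         # ['weak']
```

Both servers receive 20 requests, which is well within the strong server's capacity but double what the weak one can handle.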
Least connections balances load better but has a cost. What is a downside compared to round robin?
Think about what extra information least connections needs to work.
Least connections needs to monitor active connections on each server, which adds complexity and overhead compared to simple round robin.
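The bookkeeping cost shows up in a sketch like the one below: every connection open *and* close must update shared state, whereas round robin only advances an index. The class and method names are illustrative, not a real library API.

```python
class LeastConnBalancer:
    """Sketch of the per-server state least connections requires."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def acquire(self):
        # Pick the least-loaded server and record the new connection.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Connection closes must be reported back, or the counts drift.
        self.active[server] -= 1

lb = LeastConnBalancer(["s1", "s2"])
a = lb.acquire()   # s1 (tie broken by list order)
b = lb.acquire()   # s2
lb.release(a)      # s1 is now the least loaded again
```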
Given servers with known individual capacities, and assuming the load balancer perfectly distributes connections using the least connections algorithm, what is the maximum total number of concurrent connections supported?
Think about how least connections distributes load across all servers.
Least connections balances load so that every server can be filled up to its capacity, so the maximum total is the sum of all servers' capacities.
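As a worked example with hypothetical per-server capacities:

```python
# Hypothetical capacities of three servers; under ideal least-connections
# balancing, total concurrent capacity is simply their sum.
capacities = [300, 500, 200]
print(sum(capacities))  # 1000
```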