Nginx · DevOps · ~10 mins

Upstream blocks in Nginx - Step-by-Step Execution

Process Flow - Upstream blocks
Define upstream block
List backend servers
Configure load balancing method
Use upstream name in server block
Client request arrives
Proxy request to one backend server
Receive response and send to client
The upstream block defines backend servers and load balancing. The server block uses this name to proxy client requests to those backends.
Execution Sample
Nginx
upstream backend {
    server 192.168.1.10;   # first backend server
    server 192.168.1.11;   # second backend server
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # forward requests to the upstream group
    }
}
Defines two backend servers in an upstream block and proxies client requests to them using load balancing.
Process Table
| Step | Action | Details | Result |
| --- | --- | --- | --- |
| 1 | Read upstream block | Identify 'backend' with servers 192.168.1.10 and 192.168.1.11 | Upstream 'backend' created with 2 servers |
| 2 | Read server block | Location '/' proxies to 'http://backend' | Proxy set up to use upstream 'backend' |
| 3 | Client sends request | Request arrives at nginx server | Request received |
| 4 | Select backend server | Load balancing chooses 192.168.1.10 (default round-robin) | Request forwarded to 192.168.1.10 |
| 5 | Backend responds | 192.168.1.10 sends response | Response received by nginx |
| 6 | Send response to client | Nginx forwards backend response | Client receives response |
| 7 | Next client request | Load balancing chooses 192.168.1.11 | Request forwarded to 192.168.1.11 |
| 8 | Repeat response forwarding | Backend 192.168.1.11 responds | Client receives response |
| 9 | No more requests | End of trace | Execution stops |
💡 No more client requests to proxy, execution ends
Status Tracker
| Variable | Start | After 1 | After 2 | Final |
| --- | --- | --- | --- | --- |
| upstream 'backend' servers | empty | [192.168.1.10, 192.168.1.11] | [192.168.1.10, 192.168.1.11] | [192.168.1.10, 192.168.1.11] |
| current backend server | none | 192.168.1.10 | 192.168.1.11 | none |
| client request count | 0 | 1 | 2 | 2 |
Key Moments - 3 Insights
Why does nginx choose 192.168.1.10 first and then 192.168.1.11?
Nginx uses round-robin load balancing by default, so it cycles through the servers in order, as shown in steps 4 and 7 of the Process Table.
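The default round-robin behavior can be adjusted inside the upstream block. A minimal sketch (the directives are standard nginx; the weight value is illustrative):

```nginx
upstream backend {
    least_conn;                     # pick the server with the fewest active connections
    server 192.168.1.10 weight=2;   # given proportionally more traffic than the others
    server 192.168.1.11;
}
```

With `least_conn` omitted, the weighted round-robin default applies, and `weight=2` alone would send roughly two requests to 192.168.1.10 for every one to 192.168.1.11.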
What happens if one backend server is down?
Nginx skips a server it considers unavailable and sends requests to the remaining ones. This is not shown in the current trace, but it is part of upstream block behavior.
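Failure handling can be tuned per server. A sketch using standard upstream server parameters (the third address, 192.168.1.12, is an illustrative addition, not part of the original example):

```nginx
upstream backend {
    server 192.168.1.10 max_fails=3 fail_timeout=30s;  # mark down after 3 failed attempts, retry after 30s
    server 192.168.1.11 max_fails=3 fail_timeout=30s;
    server 192.168.1.12 backup;                        # only used when the primary servers are unavailable
}
```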
Why do we use an upstream block instead of directly proxying to an IP?
The upstream block groups multiple servers behind one name, enabling load balancing and easier management, as seen in steps 1 and 2 of the Process Table.
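For contrast, a sketch of direct proxying without an upstream block: each location repeats the address, and there is no load balancing or failover.

```nginx
# Without an upstream block: one fixed backend, repeated wherever it is proxied
server {
    listen 80;
    location / {
        proxy_pass http://192.168.1.10;   # single hard-coded backend, no load balancing
    }
}
```

With an upstream block, adding or removing a backend is a one-line change in one place, and every `proxy_pass http://backend;` picks it up automatically.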
Visual Quiz - 3 Questions
Test your understanding
Looking at the Process Table, which backend server handles the first client request?
A. 192.168.1.11
B. Both servers simultaneously
C. 192.168.1.10
D. No backend server
💡 Hint
Check step 4 in the Process Table, where the first backend server is selected.
At which step does nginx forward the second client request to the backend?
A. Step 5
B. Step 7
C. Step 3
D. Step 9
💡 Hint
Look at the Process Table row describing the next client request.
If a third server 192.168.1.12 is added to the upstream, how would upstream 'backend' servers change after step 1?
A. [192.168.1.10, 192.168.1.11, 192.168.1.12]
B. [192.168.1.12]
C. [192.168.1.10, 192.168.1.11]
D. No change
💡 Hint
Refer to the Status Tracker row for upstream 'backend' servers after step 1.
Concept Snapshot
upstream backend {
    server IP1;
    server IP2;
}

Use proxy_pass http://backend; in the server block.
Nginx load balances requests to backend servers.
Default method is round-robin.
Upstream groups servers for easier management.
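The default method can be swapped by naming a different one at the top of the upstream block. A sketch using the standard ip_hash directive:

```nginx
upstream backend {
    ip_hash;               # same client IP always maps to the same backend (session stickiness)
    server 192.168.1.10;
    server 192.168.1.11;
}
```

This is useful when backends keep per-client state in memory, since round-robin would otherwise spread one client's requests across servers.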
Full Transcript
An upstream block in nginx defines a group of backend servers. The server block uses the upstream name to proxy client requests. When a client sends a request, nginx selects a backend server using load balancing (default round-robin) and forwards the request. The backend responds, and nginx sends the response back to the client. This process repeats for each client request, cycling through the backend servers. The upstream block simplifies managing multiple servers and enables load balancing automatically.