NGinx is an asynchronous, event-driven web server which has become immensely popular in recent years for its performance advantages. NGinx can also be used as a reverse proxy to load balance HTTP requests among back-end servers. Here is how this can be achieved on a standard Ubuntu server:
Setup
                                             ------------------------------
                                        .----| Back-end Server 1 (BES1)   |
                                        |    ------------------------------
  Internet    ----------------------    |    ------------------------------
--------------| Front-end server   |----+----| Back-end Server 2 (BES2)   |
              | Running NGinx      |    |    ------------------------------
              | (FES)              |    |    ------------------------------
              ----------------------    '----| Back-end Server 3 (BES3)   |
                                             ------------------------------
In this sample setup, we have a front-end server (FES) on which NGinx is installed and listening on the external interface, and three back-end servers (BES1, BES2 and BES3) on which our web application is hosted by any standard HTTP server listening on the internal interface. The front-end server is configured to forward HTTP requests arriving on its external interface to the three back-end servers, acting as both a reverse proxy and an HTTP load balancer.
- Front-end server (FES)
  NIC1 -> eth0 with public IP xxx.xxx.xxx.xxx
  NIC2 -> eth1 with internal IP 192.168.1.2
  Runs NGinx listening on xxx.xxx.xxx.xxx:80
- Back-end server 1 (BES1)
  NIC1 -> eth1 with internal IP 192.168.1.3
  Runs any HTTP server (e.g. apache2) listening on 192.168.1.3:8080
- Back-end server 2 (BES2)
  NIC1 -> eth1 with internal IP 192.168.1.4
  Runs any HTTP server (e.g. apache2) listening on 192.168.1.4:8080
- Back-end server 3 (BES3)
  NIC1 -> eth1 with internal IP 192.168.1.5
  Runs any HTTP server (e.g. apache2) listening on 192.168.1.5:8080
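On the back-end side, the only requirement is that the HTTP server listens on the internal interface on port 8080. As a rough sketch for BES1 using apache2 (the DocumentRoot and the site file location below are placeholders, not something prescribed by this setup):

# /etc/apache2/ports.conf -- bind apache2 to the internal interface only
Listen 192.168.1.3:8080

# A minimal virtual host, e.g. in /etc/apache2/sites-enabled/000-default
<VirtualHost 192.168.1.3:8080>
    ServerName my.domain-name.com
    DocumentRoot /var/www
</VirtualHost>

The same applies to BES2 and BES3 with their respective internal IPs (192.168.1.4 and 192.168.1.5).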
Configuration
- Install NGinx on the front-end server (FES):
apt-get install nginx
- Edit the file /etc/nginx/sites-enabled/default to look like the following:
upstream lb_units {
    server 192.168.1.3:8080 weight=10 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES1
    server 192.168.1.4:8080 weight=10 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES2
    server 192.168.1.5:8080 weight=10 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES3
}

server {
    listen xxx.xxx.xxx.xxx:80;        # Listen on the external interface
    server_name my.domain-name.com;   # The server name

    access_log /var/log/nginx/nginx.access.log;
    error_log  /var/log/nginx/nginx_error.log debug;

    location / {
        proxy_pass http://lb_units;   # Load balance the location "/" across the upstream lb_units
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/nginx-default;
    }
}
- Reload NGinx:
/etc/init.d/nginx reload
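If the reload fails, NGinx's built-in test switch checks the configuration for syntax errors and points at the offending file and line:

nginx -t    # parses /etc/nginx/nginx.conf and everything it includes, and reports syntax errors

A failed reload keeps the previously working configuration running, so the site stays up while the mistake is fixed.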
That's it; my.domain-name.com is now load balanced by the FES across the back-end servers BES1, BES2 and BES3.
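With equal weights, requests for "/" are distributed round-robin across BES1, BES2 and BES3, and a server that fails max_fails times within fail_timeout is temporarily taken out of rotation. A quick way to watch the balancing from any machine that can reach the FES (a sketch, assuming curl is installed and each back-end serves a page that identifies it):

# Send a handful of requests through the load balancer; the responses
# should cycle through BES1, BES2 and BES3.
for i in 1 2 3 4 5 6; do
    curl -s -H "Host: my.domain-name.com" http://xxx.xxx.xxx.xxx/
done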
Hope this “How To” was helpful.
Replies on “[How To] Using NGinx as a Load Balancer”
Shame that I will need another server to properly implement this.
Hi guys,
I'm using this configuration, and mine is EXACTLY the same, but it keeps giving me this error:
[emerg] invalid number of arguments in “proxy_pass” directive in /etc/nginx/sites-enabled/default.save:35
Do you guys know what's wrong?
This is my configuration:
http://puu.sh/337K2/eb75e14073.png
Thanks !