Deploying a load balancer from scratch and adding backend servers


Load balancing is a key part of IT infrastructure because the high availability and efficient performance of web servers are very important. The purpose of load balancing is to prevent any single server from being overloaded and to keep servers healthy by distributing incoming requests across them; this is what makes a web application available, reliable, and scalable. In this tutorial, I will deploy one HAProxy server and two backend servers in a lab environment. In a subsequent post, I will show how to configure, deploy, and troubleshoot an ELB on cloud platforms.
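As a concrete illustration of the idea (not HAProxy's actual implementation), round-robin balancing simply hands each incoming request to the next server in a fixed rotation. A minimal Python sketch, using the same backend names this tutorial uses later:

```python
from itertools import cycle

# Pool of backend servers; names are illustrative.
backends = cycle(["nginx-web1", "nginx-web2"])

def route(request_id):
    """Assign the next backend in the rotation to an incoming request."""
    return next(backends)

# Six requests alternate evenly between the two backends.
assignments = [route(i) for i in range(6)]
print(assignments)
# -> ['nginx-web1', 'nginx-web2', 'nginx-web1', 'nginx-web2', 'nginx-web1', 'nginx-web2']
```

No backend ever receives two consecutive requests while the other is idle, which is exactly the overload-prevention property described above.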

NOTE: All deployments are done on CentOS 8.

Installing and configuring HAProxy

yum -y install haproxy

NOTE: Back up the default config file so that we can revert should anything go wrong.

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-bak

Create a new HAProxy config file and paste the following:

vim /etc/haproxy/haproxy.cfg

global
    log local2
    chroot /var/lib/haproxy
    pidfile /var/run/
    maxconn 4000
    user haproxy
    group haproxy
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

# main frontend which proxies to the backends
frontend haproxy_balancer            # name of the frontend
    bind                             # IP address of the HAProxy server
    option http-server-close
    option forwardfor
    stats uri /haproxy?stats         # the HAProxy status page
    default_backend webservers

# round-robin balancing between the various backends
backend webservers                   # name identifying the application
    mode http
    balance roundrobin               # round-robin load-balancing scheduling algorithm
    option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost
    server nginx-web1 check          # IP address of the first backend server
    server nginx-web2 check          # IP address of the second backend server

NOTE: Change the mode and the server IPs to match your own configuration.
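The `option httpchk HEAD /` line makes HAProxy probe each backend with an HTTP HEAD request and mark the server down if the probe fails. The same kind of probe can be sketched in Python against a throwaway local web server standing in for a backend (the port is chosen automatically here and is illustrative):

```python
import http.client
import http.server
import threading

# Throwaway local web server standing in for a backend; port 0 = pick a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def head_check(host, port, path="/"):
    """Send an HTTP HEAD request, as HAProxy's httpchk does, and report health."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=2)
        conn.request("HEAD", path)
        status = conn.getresponse().status
        conn.close()
        return status < 400
    except OSError:
        return False

healthy = head_check("127.0.0.1", port)
print("backend healthy:", healthy)
server.shutdown()
```

A backend that answers the HEAD probe with a 2xx/3xx status is considered healthy; a connection failure or 4xx/5xx response would take it out of the rotation.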

Test the configuration for errors

haproxy -c -f /etc/haproxy/haproxy.cfg

Start and enable the HAProxy service

systemctl start haproxy
systemctl enable haproxy

Check the HAProxy status

systemctl status haproxy

Open the web URL of your frontend server: http://IP_address/


Check the statistics and status of HAProxy: http://IP_address/haproxy?stats


Now you can keep adding backend servers.

NOTE: When deploying a load balancer from scratch, ensure that the backend servers have Nginx installed. Additionally, confirm that the backend port is reachable and permitted through the firewall.
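To confirm that a backend port is reachable, a quick TCP connect test is enough. This Python sketch (host and port values are illustrative; it binds a local listener to stand in for a backend) mirrors what a tool like `nc -z` would tell you:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a locally bound port, standing in for a backend's web port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print("backend port reachable:", port_open("127.0.0.1", port))
listener.close()
```

If this check fails against a real backend, verify that Nginx is listening on the expected port and that the firewall on the backend host allows the traffic.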

I hope you found this blog post on deploying a load balancer from scratch helpful. Please let me know in the comment section if you have any questions.
