Example Load Balancing with HAProxy on CentOS

In this example we have three servers: we install HAProxy on one server, and Nginx (listening on port 80) on the other two.

Install HAProxy on the main server (the load balancer)

Step 1:

yum install haproxy -y


[root@load-balancer-1 ~]# yum install haproxy -y
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirror.alpha-labs.net
 * epel: epel.mirror.wearetriple.com
 * extras: mirror.plusserver.com
 * updates: mirror.plusserver.com
Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package           Arch             Version                Repository      Size
 haproxy           x86_64           1.5.18-8.el7           base           834 k

Transaction Summary
Install  1 Package

Total download size: 834 k
Installed size: 2.6 M
Downloading packages:
haproxy-1.5.18-8.el7.x86_64.rpm                            | 834 kB   00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : haproxy-1.5.18-8.el7.x86_64                                  1/1
  Verifying  : haproxy-1.5.18-8.el7.x86_64                                  1/1

Installed:
  haproxy.x86_64 0:1.5.18-8.el7

Complete!

Step 2:
Default HAProxy Configuration:

[root@load-balancer-1 ~]# cat /etc/haproxy/haproxy.cfg
# Example configuration for a possible web application.  See the
# full configuration options online.
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

# Global settings
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #    local2.*                       /var/log/haproxy.log
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# main frontend which proxys to the backends
frontend  main *:5000
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

# static backend for serving up images, stylesheets and such
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

# round robin balancing between the various backends
backend app
    balance     roundrobin
    server  app1 127.0.0.1:5001 check
    server  app2 127.0.0.1:5002 check
    server  app3 127.0.0.1:5003 check
    server  app4 127.0.0.1:5004 check

Load Balancing Configuration
To start balancing traffic between our two Nginx servers, we need to set some options within HAProxy:

frontend – where HAProxy listens for incoming connections
backend – where HAProxy sends incoming connections
stats – optionally, sets up the HAProxy web tool for monitoring the load balancer and its nodes
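The stats page can be enabled with a small listen section of its own. Here is a sketch; the port (9000), URI (/stats), and credentials are assumptions you should change for your setup:

```
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
    stats realm HAProxy\ Statistics
    stats auth admin:password
```

With this in place, browsing to port 9000 at /stats on the load balancer (after authenticating) shows the state of each frontend, backend, and server.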
Here’s an example frontend:

frontend localnginx
    bind *:80
    mode http
    default_backend nginx

bind *:80 – I’ve bound this frontend to all network interfaces on port 80. HAProxy will listen on port 80 on every available interface for new HTTP connections.

mode http – This listens for HTTP connections. HAProxy can also handle lower-level TCP connections, which is useful for load balancing things like MySQL read databases, if you set up database replication.
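For example, a TCP-mode section for balancing MySQL read replicas might look like the following sketch (the section name, port, and placeholder addresses are assumptions; plain check only verifies that the TCP port accepts connections):

```
listen mysql-readers
    bind *:3306
    mode tcp
    balance roundrobin
    server db01 xx.xx.xx.xx:3306 check
    server db02 xx.xx.xx.xx:3306 check
```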

default_backend nginx – This frontend should use the backend named nginx, which we’ll see next.

Example backend configuration:

backend nginx
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 xx.xx.xx.xx:80 check
    server web02 xx.xx.xx.xx:80 check


mode http – This will pass HTTP requests to the servers listed
balance roundrobin – Use the roundrobin strategy for distributing load amongst the servers

Some other examples of balance strategies:

balance url_param userid
balance url_param session_id check_post 64
balance hdr(User-Agent)
balance hdr(host)
balance hdr(Host) use_domain_only
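To illustrate the difference, here is a minimal Python sketch (not HAProxy code) of how roundrobin and a header-hash strategy like balance hdr(Host) choose a server:

```python
import itertools
import zlib

servers = ["web01", "web02"]

# roundrobin: cycle through the servers in order
rr = itertools.cycle(servers)
def pick_roundrobin():
    return next(rr)

# hdr(Host)-style: hash a header value so the same Host always
# maps to the same server (HAProxy's real hash function differs)
def pick_by_header(host):
    return servers[zlib.crc32(host.encode()) % len(servers)]

for _ in range(4):
    print(pick_roundrobin())   # alternates web01, web02, web01, web02
print(pick_by_header("example.com") == pick_by_header("example.com"))  # True
```

Round robin spreads requests evenly; hash-based strategies trade evenness for consistency, which is what makes them useful for crude session affinity.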

option forwardfor – Adds the X-Forwarded-For header so our applications can get the client’s actual IP address. Without this, our application would see every incoming request as coming from the load balancer’s IP address.
http-request set-header X-Forwarded-Port %[dst_port] – We manually add the X-Forwarded-Port header so that our application knows what port to use when redirecting/generating URLs.
* Note that we use the dst_port (“destination port”) variable, which is the destination port of the client’s HTTP request (the port the client connected to).
option httpchk HEAD / HTTP/1.1\r\nHost:localhost – Sets the health check HAProxy uses to test whether the web servers are still responding. If a server fails to respond, or responds with an error, it is removed from the pool of servers HAProxy balances between. The check sends a HEAD request with HTTP/1.1 and the Host header set, which may be needed if your web server uses virtual hosts to decide which site receives the traffic.
http-request add-header X-Forwarded-Proto https if { ssl_fc } – We add the X-Forwarded-Proto header and set it to “https” if the “https” scheme is used instead of “http” (detected via ssl_fc). Like the forwarded-port header, this helps our web applications determine which scheme to use when building URLs and sending redirects (Location headers).
server web01 xx.xx.xx.xx:80 check – These two lines add the web servers for HAProxy to balance traffic between. Each line arbitrarily names a server (web01, web02), sets its IP address and port, and adds the check directive to tell HAProxy to health check the server.
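The httpchk probe is just an HTTP HEAD request. A minimal Python sketch of an equivalent check (illustrative only; this is not HAProxy’s implementation):

```python
import http.client

def backend_is_healthy(host, port, timeout=2.0):
    """Send HEAD / with a Host header and treat a 2xx/3xx status
    as healthy, mirroring HAProxy's default httpchk behaviour."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/", headers={"Host": "localhost"})
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        # connection refused or timed out: the server is down
        return False
```

HAProxy repeats this probe on an interval and only routes traffic to servers whose recent checks pass.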

Other example backend configuration:

backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 xx.xx.xx.xx:80 cookie 01 check
    server web02 xx.xx.xx.xx:80 cookie 02 check

This means that if your server sends a cookie named SRV_ID, HAProxy will prefix its value with the server identifier, so you will receive a response header like:

Set-Cookie: SRV_ID=02~

with the server’s original cookie value following the ~.

If you want HAProxy to overwrite the value of the server’s cookie entirely, you can use:

cookie SRV_ID rewrite

You will receive a response header:

Set-Cookie: SRV_ID=01

If you use insert mode, your server doesn’t need to send a SRV_ID cookie at all; HAProxy inserts the cookie itself:

cookie SRV_ID insert

You will receive a response header:

Set-Cookie: SRV_ID=02; path=/

Step 3:
service haproxy restart

Done! You can now access your site through the load balancer’s address.
