Building a Load Balancer with LVS - Linux Virtual Server

In previous blogs I spent some time setting up load balancers using HAProxy, Pound and Nginx. What they have in common is that they all act as Layer 7 reverse proxies. Their common disadvantage is that they are not very efficient at distributing Layer 4 traffic: they suffer from lots of context switching between user space and kernel space, which introduces delays, especially under heavy traffic with many short-lived connections.
A better solution that runs entirely in kernel space is LVS [1]. Linux Virtual Server has been around since 1998; the code is very mature and stable, and has been compiled into the kernel since the 2.4.23 branch.
Layer 4 switching determines the path of packets based on information available at layer 4 of the OSI 7-layer protocol stack. This means that the IP address and port are available, as is the underlying protocol, TCP/IP or UDP/IP.
There are five forwarding types in LVS - LVS-NAT, LVS-DR, LVS-Tun, LVS-FullNAT and LVS-SYNPROXY:
  • LVS-NAT, as the name implies, uses NAT from the Load Balancer (or the Director in LVS speak) to the back-end servers (or the Real Servers). The Director uses the ability of the Linux kernel to change the network IP addresses and ports as packets pass through the kernel. There used to be a significant overhead when using this method, but not anymore. I'll demonstrate how to set this up later in this article.
  • LVS-DR stands for direct routing. The Director forwards all incoming requests to the nodes inside the cluster, but the nodes inside the cluster send their replies directly back to client computers. 
  • LVS-Tun uses IPIP tunneling. IP tunneling can be used to forward packets from one subnet or virtual LAN to another subnet or VLAN, even when the packets must pass through another network or the Internet. Building on the IP tunneling capability that is part of the Linux kernel, the LVS-Tun forwarding method allows you to place cluster nodes on a cluster network that is not on the same network segment as the Director.
  • LVS-FullNAT is a relatively new module that introduces a local IP address (an IDC-internal IP address, lip). IPVS translates cip-vip to/from lip-rip, where lip and rip are both IDC-internal IP addresses, so the LVS load balancer and the real servers can be in different VLANs and the real servers only need access to the internal network.
  • LVS-SYNPROXY is based on TCP SYN cookies.
Please note that FullNAT and SYNPROXY have limited testing at the time of writing this article.
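With ipvsadm, the user-space tool we will install below, the forwarding method is selected per real server: -m selects NAT (masquerading), -g selects direct routing (gatewaying) and -i selects IPIP tunneling. A quick sketch with illustrative addresses, adding a real server to a virtual service using direct routing:

[root@host1 ~]# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.101:80 -g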

Now that we have the basics covered, let's create a load balancer that listens on port 80 and distributes TCP connections in a round-robin fashion to two back-end nodes using NAT.

First let's install the user-space tools used to manage LVS:
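On a CentOS/RHEL host (the same kind of host used for mon later in this article), something like the following should do; ipvsadm is the user-space administration tool for IPVS:

[root@host1 ~]# yum install -y ipvsadm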
Then let's describe the topology:
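A sketch of the three commands, assuming a virtual IP of 192.168.1.100 and real servers at 192.168.1.101 and 192.168.1.102 (adjust the addresses to your own topology):

[root@host1 ~]# ipvsadm -A -t 192.168.1.100:80 -s rr
[root@host1 ~]# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.101:80 -m
[root@host1 ~]# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.102:80 -m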
The first command adds a TCP virtual service on port 80, using the round-robin algorithm. This is your Director, or load balancer.
The second and third commands add two real servers (back-end nodes running Apache) to the virtual service created by the first command; the -m flag selects NAT (masquerading).

To list the current configuration and the various stats, run:
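For example (-n skips DNS lookups):

[root@host1 ~]# ipvsadm -L -n --stats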
To save the current configuration use:
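A sketch, writing the rules to a file (the path is just an example):

[root@host1 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm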
To restore previously saved config execute:
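For example, reading back the file saved above:

[root@host1 ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm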
To clean the current setup run:
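The -C flag clears the whole virtual server table:

[root@host1 ~]# ipvsadm -C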
To test the configuration just connect to the load balancer using curl or nc:
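Using the illustrative virtual IP from the sketch above:

[root@host1 ~]# curl http://192.168.1.100/
[root@host1 ~]# nc -v 192.168.1.100 80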
And that's all it takes to configure a TCP load balancer that distributes connections to two real servers listening on port 80.

One thing to keep in mind is that LVS does not know when a real server (back-end node) is down and will still send traffic to it. LVS blindly forwards packets based on the configured rules and this is all it does. This, of course, is not very useful in production environments.

To solve this problem we need some monitoring in place that will remove real servers from the LVS configuration if they are no longer able to accept connections.

There are many tools out there that do just that, but in this example I am going to use mon. I am not going to go into great detail about how mon works, but in a nutshell it's a daemon that runs custom tests (in this case I'll use the http test) and, depending on whether the test fails or passes, executes a script that does something. It's extremely extensible and you can write your own monitoring or action scripts.

Let's first install it:


[root@host1 ~]# yum install -y mon
The configuration file is in /etc/mon. Here's a rundown of an example configuration for the two real servers configured earlier; a sketch of the file follows below.
The hostgroup directive defines a hostgroup, which consists of our real servers to be monitored.
The interval directive sets how often the monitor should run.
The monitor directive sets up the monitor type. You can see all monitors that come with the mon package in /usr/lib64/mon/mon.d/
The alert line specifies what script to execute when the test fails.
The upalert line defines what script to run when the test succeeds after a failure. You can see all alert scripts that come with the mon package in /usr/lib64/mon/alert.d/
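A minimal sketch of such a configuration (the hostgroup name, interval and real server addresses are illustrative; adjust them to your environment):

hostgroup web_servers 192.168.1.101 192.168.1.102

watch web_servers
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert test.alert
            upalert test.alert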

Let's create our own test.alert script that will add and remove real servers from LVS:


[root@host1 ~]# vi /usr/lib64/mon/alert.d/test.alert

#!/bin/sh
# $Id: test.alert,v 2004/06/09 05:18:07 trockij Exp $
#echo "`date` $*" >> /tmp/test.alert.log

# Address of the virtual service (placeholder; use your own VIP)
VIP=192.168.1.100

if [ "$9" = "-u" ]
then
   echo "`date` Real Server $6 is UP" >> /tmp/test.alert.log
   ipvsadm -a -t $VIP:80 -r $6:80 -m
else
   echo "`date` Real Server $6 is DOWN" >> /tmp/test.alert.log
   ipvsadm -d -t $VIP:80 -r $6:80
fi
With everything in place, let's start the service; a sketch of the commands is shown below. When Apache is no longer accessible on port 80 on the first real server, mon will log a message in /tmp/test.alert.log and remove the node from LVS. When Apache is accessible again (-u will be passed from mon to the test.alert script as argument $9), the test.alert script will add the node back into LVS.
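On a CentOS/RHEL host with the mon package installed above, something along these lines (the init-script name is an assumption):

[root@host1 ~]# service mon start
[root@host1 ~]# chkconfig mon on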



  1. Really good tutorial, thanks a lot :)

  2. Great tutorial, works like a champ, thank you very much

  3. Great!
    Does it work with other services, e.g. RDP?


    1. LVS works at Layer 4, so most other protocols on top of that should work, unless they require some special handshakes, like MySQL for example.

  4. Can I add a backend with a public IP?!

  5. What changes do we need in order to make it work for HTTPS (443) load balancing? It doesn't work for 443, mostly due to a certificate issue. Can we solve this somehow?

    1. Try this:
      ipvsadm -A -t <VIP>:443 -s rr
      ipvsadm -a -t <VIP>:443 -r <real-server>:443 -m

  6. We have a question about load balancing the load balancer... We currently have 2 LVS machines load balanced in an active/passive configuration with keepalived. We want to introduce an L7 load balancer (HAProxy) in an active/active configuration, so we have not only an HA configuration but also a load-balanced configuration of the load balancers. We think we can do that by using the two active/passive LVS machines to load balance requests onto 2 HAProxy machines, correctly using persistence (LVS) and stickiness (HAProxy) so the application/session behaves as expected. We have not found such a solution on the Internet; do you think this is a bad design?

    1. Sounds overly complicated to me. If you get to the point where you need to load balance the load balancers, you might be better off using round-robin DNS for the load balancers (just let DNS distribute the load between a set of load balancers), or look into scaling geographically with Geo DNS load balancing. What you are proposing would probably work, but the complexity will introduce fragility. The best designs, in my opinion, are the ones that are very simple.

  7. I found that the latest Press News on the official web page of LVS was posted on Wednesday, August 8, 2012, three years ago. Is LVS still an active project? I'd like to know its status now. Thank you.

    1. I don't think new features are being developed and no outstanding bugs are present, since the project is pretty mature.

  8. Hi, I am trying to configure LVS on Ubuntu 14.04 for SNMP traffic over UDP (port 162). I followed the steps as mentioned above.
    But I do not see the VIP forwarding the packets to the real servers. Can you point to a detailed configuration on the LVS server and real server for UDP?

    1. This is the output:
      ubuntu@ip-172-31-12-46:~$ sudo ipvsadm -L --stats
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
      -> RemoteAddress:Port
      UDP 2 34 0 10738 0
      -> 2 34 0 10738 0

  9. Do we need to enable IP forwarding or anything similar on the load balancer host? I followed the tutorial exactly, but it doesn't seem to work:

    On the load balancer server:
    # ipvsadm -L -n --stats
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
    -> RemoteAddress:Port
    TCP 4 5 0 300 0
    -> 4 5 0 300 0

    From the client trying to send request:
    # curl
    curl: (7) couldn't connect to host

    If connecting directly, it is okay
    # curl
    I am new!

    Did I miss out anything?

    Many thanks,


  10. Can I use this LVS load balancer for WebSocket?
    Because some articles say that WebSocket needs a 'persistent/sticky' session.


    1. I would rather use HAProxy with WebSocket. It offers better flexibility and control, plus WebSocket is somewhat of a Layer 7 implementation.

  11. How can we run IPVS on AWS? If we assign the EIP to an instance, it's not on the instance, so how do we create the IPVS load balancer? I am a little confused on that part, and also about whether I could do SSL with it.