Network Card Bonding and Bridging On CentOS and Debian

Network Bonding

In the following I will use the word bonding because in practice we bond several interfaces together as one. Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three one-megabit ports into a three-megabit trunk port, which is equivalent to having one interface with a speed of three megabits.

Where should I use bonding?

You can use it wherever you need redundant links, fault tolerance or load balancing. It is the best way to build a highly available network segment. Bonding is especially useful in combination with 802.1q VLAN support (your network equipment must implement the 802.1q protocol).

The different modes of bonding:

mode=1 (active-backup)

Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)

XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
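As a rough illustration of that formula (the MAC byte values below are made up for the example), the slave index the driver would pick can be reproduced with shell arithmetic:

```shell
# Made-up values standing in for the last byte of each MAC address:
src=0x1a      # source MAC, last byte
dst=0x2b      # destination MAC, last byte
slaves=2      # number of slaves in the bond

# balance-xor selects slave index (src XOR dst) modulo slave count:
echo $(( (src ^ dst) % slaves ))   # prints 1 for these values
```

Because the result depends only on the source and destination addresses, all traffic between a given pair of hosts always travels over the same slave.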

mode=3 (broadcast)

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings, and utilizes all slaves in the active aggregator according to the 802.3ad specification.

Prerequisites:

* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
You can also use multiple bond interfaces, but for that you must load the bonding module once for each bond you need.


In the /etc/modprobe.conf file add the following:
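For example, to load the driver in active-backup mode with a 100 ms link-monitoring interval (adjust mode= to whichever policy you chose above):

```
alias bond0 bonding
options bond0 mode=1 miimon=100
```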

In the /etc/sysconfig/network-scripts/ directory create ifcfg-bond0:
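A minimal static configuration might look like this (the address is only a placeholder; substitute your own):

```
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```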

Change the ifcfg-eth0 to:
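The interface becomes a slave of bond0 and keeps no address of its own:

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
```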

Change the ifcfg-eth1 to:
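The same as eth0, with only the device name changed:

```
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
```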

Restart the network subsystem:
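On CentOS this is typically done with the network init script (run as root):

```
service network restart
```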

Check the bonding:
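The bonding driver exposes its status through /proc; the output lists the bonding mode, the MII status and each slave interface:

```
cat /proc/net/bonding/bond0
```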

Alternatively you can check:
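For instance, the kernel log records slaves being added and links changing state (the exact messages vary by driver version):

```
dmesg | grep bond0
```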

To change the active interface run:
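With active-backup mode, ifenslave can switch the active slave (assuming eth1 is the slave you want to promote):

```
ifenslave -c bond0 eth1
```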

You can also setup bonding on the fly by executing:
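A sketch of such an ad-hoc setup (the address is a placeholder, and these settings are lost at reboot):

```
modprobe bonding mode=1 miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```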

Network Bridging

To create a bridge on top of the bond (KVM virtual servers, for example, use a network bridge) you can create the following config files:
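For example (the address is a placeholder; the bond itself carries no address and is attached to the bridge with the BRIDGE= directive):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```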

Restart the network subsystem:
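As before, on CentOS:

```
service network restart
```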

You can also build a bridge with the brctl command that is part of the bridge-utils package:
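For a quick manual setup (not persistent across reboots):

```
brctl addbr br0
brctl addif br0 bond0
brctl show
```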

Here's an example of bonding, bridging and VLAN tagging in Debian:
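A sketch of what /etc/network/interfaces might contain (the address, VLAN ID 10 and interface names are placeholders; the ifenslave, vlan and bridge-utils packages must be installed):

```
# Bond eth0 and eth1 in active-backup mode
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

# Tag VLAN 10 on top of the bond
auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0

# Bridge carrying the IP address, with the tagged interface as its port
auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports bond0.10
    bridge_stp off
    bridge_fd 0
```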


  1. Hi, I see your bridge bonding configuration is using mode 1, would it be possible with mode 5?

    1. I am not 100% sure, but in theory I don't see why not: the packets leaving the bond and going to the bridge never actually leave the kernel, so it should work. I suggest you test it first, though.

    2. FWIW...I'm not sure if you're using this for virtualization, but according to the RHEL documentation, modes 1, 2, or 4 can only be used on the guest virtual machine.

  2. In your example, I notice that bond0 has an IP address while br0 also has an IP address. Is one on the external network while the other is on the internal network?
    I am a little unclear why a bridge has an IP address. Bridges work at the Ethernet level (level 2) while IP addresses work at the IP level (level 3).
    Nice write-up, by the way. By following your instructions, I was able to put together a configuration that doesn't generate any kernel panics or error messages.

    1. The two examples are completely unrelated and the IPs are just random addresses from the private range. They can be either public or private depending on what you are doing. It is somewhat confusing to put an IP on the bridge, but it makes sense in a case where all the other bridge ports are virtual machines.

  3. mode=5 works better for load balancing. It worked perfectly fine on my IBM machine with virtualisation inside.
    My setup is:
    (Nic0+Nic1) bond0 --> br0 bridge (the IP address is here)

  4. Hello guys, I ended up here searching for answers. I bonded my interfaces (eth0+eth1) --> bond0 (mode=1) --> br0.
    I have installed Xen and spun up a VM. I am trying to assign an IP to the VM so that it appears on the network, but it's not working out; any help is appreciated. --Vj

  5. I have bonded my two interfaces eth0 and eth1 into bond0 (mode=1). I also created the br0 bridge interface, which holds the IP address. When one of the interfaces (eth0 or eth1) goes down, there are 3 noticeable consecutive ping failures before the other interface starts replying. How can I avoid these ping failures when one of the interfaces goes down?