Deploying DRBD on Linux

DRBD stands for Distributed Replicated Block Device and refers to block devices designed as a building block for high-availability clusters. This is done by mirroring a whole block device over a dedicated network link. DRBD can be understood as network-based RAID-1.

DRBD works on top of existing block devices, i.e., hard disk partitions or LVM logical volumes. Every data block written to disk is mirrored to the peer node.

What follows is a detailed walkthrough of installing and configuring DRBD on two server nodes. For more information, refer to [1].

1. Install the kernel headers, gcc and flex if not already present on the system, then download and extract the DRBD source code:
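A sketch of this step, assuming an RPM-based distribution; the package names, download URL and DRBD version shown are examples, so substitute the release you actually intend to deploy:

```shell
# Install build prerequisites (package names assumed for RHEL/CentOS)
yum install -y kernel-devel kernel-headers gcc flex make

# Download and extract the source tarball (example version/URL - adjust as needed)
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
tar xzf drbd-8.4.4.tar.gz
cd drbd-8.4.4
```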

2. Configure and compile:
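A minimal build sequence, assuming the source tree from the previous step:

```shell
# Build and install the userland tools plus the kernel module
./configure --with-km
make
make install
```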

The --with-km option will compile the kernel module as well.

3. Load the kernel module and make drbd execute after system reboot:
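For example, on a SysV-init system (use `systemctl enable drbd` instead on a systemd-based distribution):

```shell
# Load the module now and verify it is present
modprobe drbd
lsmod | grep drbd

# Start DRBD automatically after reboot
chkconfig drbd on
```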

4. You can use almost any block device with DRBD. For this demo we'll use an LVM logical volume for the data and metadata.
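For example, a logical volume could be created as follows; the volume group name, LV name and size below are assumptions - adjust them to your environment:

```shell
# Create a 10 GB logical volume in volume group vg0 (names/size are examples)
lvcreate --name lv_drbd --size 10G vg0
```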

5. Create the global configuration file drbd.conf with the following configuration in /etc:
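A minimal /etc/drbd.conf simply pulls in the shared settings and the per-resource files:

```shell
cat > /etc/drbd.conf << 'EOF'
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
EOF
```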

6. Create /etc/drbd.d/, which will contain the file with settings shared by all resources - global_common.conf:
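A minimal example of shared settings; protocol C (fully synchronous replication) is a common choice, but tune these options for your setup:

```shell
mkdir -p /etc/drbd.d
cat > /etc/drbd.d/global_common.conf << 'EOF'
global {
    usage-count no;
}
common {
    protocol C;    # synchronous replication
}
EOF
```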

7. Create a resource named r0 that will provide a DRBD device on top of the logical volume we created earlier:
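A sketch of the resource file; the backing LV path and IP addresses are assumptions, and the `on` hostnames must match each node's `uname -n` output:

```shell
cat > /etc/drbd.d/r0.res << 'EOF'
resource r0 {
    device    /dev/drbd1;
    disk      /dev/vg0/lv_drbd;    # backing logical volume (name assumed)
    meta-disk internal;
    on drbd1 {
        address 192.168.1.101:7789;    # example IP - use your node's address
    }
    on drbd2 {
        address 192.168.1.102:7789;
    }
}
EOF
```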

For more information on the options in this file, refer to [1].

8. Repeat the above steps on the second node, ensuring the configuration files are identical on both nodes.

9. Create device metadata - this step must be completed only on initial device creation. It initializes DRBD's metadata and needs to be run on both nodes:
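On both nodes:

```shell
drbdadm create-md r0
```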

10. Enable the resource - this step associates the resource with its backing device, sets replication parameters, and connects the resource to its peer. Run this on both nodes:
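On both nodes:

```shell
drbdadm up r0
```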

11. Initial device synchronization - select a node to be the initial sync source, in our case drbd1, and run:
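Assuming DRBD 8.4 syntax (on 8.3 the equivalent is `drbdadm -- --overwrite-data-of-peer primary r0`):

```shell
# On drbd1 only - forces drbd1 to become Primary and the sync source
drbdadm primary --force r0
```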

This will sync all data from /dev/drbd1 on node drbd1 to /dev/drbd1 on node drbd2, erasing any existing data on drbd2.
At that point the DRBD device is fully operational, even before the initial synchronization has completed (albeit with slightly reduced performance). You may now create a filesystem on the device, use it as a raw block device, mount it, and perform any other operation you would with an accessible block device, but keep in mind that you can only do this on the Primary node.
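For example, to put a filesystem on the device (the filesystem type and mount point below are arbitrary choices):

```shell
# On the Primary node only
mkfs.ext4 /dev/drbd1
mkdir -p /mnt/drbd
mount /dev/drbd1 /mnt/drbd
```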

12. Checking the status of DRBD:
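Either of the following shows the connection state, roles and sync progress:

```shell
cat /proc/drbd
# or, via the init script:
service drbd status
```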

13. Switch Primary and Secondary nodes - you can make the Primary node Secondary and vice-versa with the following:

On node drbd1
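```shell
# On drbd1 (the current Primary): unmount the device first if it is mounted
umount /dev/drbd1
drbdadm secondary r0
```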

On node drbd2
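```shell
# On drbd2: promote it to Primary
drbdadm primary r0
```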

14. Dealing with node failure - if a node that currently has a resource in the Secondary role fails temporarily, no further intervention is necessary: the two nodes will simply re-establish connectivity upon system start-up.
After this, DRBD replicates all modifications made on the Primary node in the meantime to the Secondary node. When the failed node is repaired and returns to the cluster, it does so in the Secondary role. If the failed node was the Primary, DRBD does not promote the surviving node to the Primary role; that is the cluster management application's responsibility, or it can be done manually as described in step 13.

15. Split brain recovery - if your nodes fail to reconnect to each other after a crash recovery, check the logs for the following signs of split-brain condition:
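The kernel log on the affected node will contain a message similar to the one shown below (exact wording may vary between DRBD versions):

```shell
grep -i "split-brain" /var/log/messages
# Split-Brain detected but unresolved, dropping connection!
```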

After split brain has been detected, one node will always have the resource in a StandAlone connection state. The other might either also be in the StandAlone state (if both nodes detected the split brain simultaneously), or in WFConnection (if the peer tore down the connection before the other node had a chance to detect split brain).
At this point, unless you configured DRBD to automatically recover from split brain, you must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim).

On the victim node run:
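Assuming DRBD 8.4 syntax (on 8.3 the second command is written `drbdadm -- --discard-my-data connect r0`):

```shell
# Demote the victim and reconnect, discarding its local modifications
drbdadm secondary r0
drbdadm connect --discard-my-data r0
```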

On the other node (the split brain survivor) run:
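```shell
# Only needed if the survivor is also in the StandAlone state;
# a node in WFConnection will reconnect on its own
drbdadm connect r0
```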

This should reconnect your nodes and resolve the split-brain condition.

Now that DRBD is set up, you can integrate it with an HA solution such as Red Hat Cluster or LVS. For more information, refer to [2].



  1. Great tutorial! I'm going to try this out as soon as I can.
    One question: Can this tutorial be used for an existing LVM volume (formatted and with data in it, instead of starting with a blank volume)?

  2. Good question, I haven't really tried this, but I am pretty sure once you enable the resource with "drbdadm up r0" any data on the specified logical volume will be lost. Let me know if you test this and I am wrong.

  3. So excited forgot to say thanks! Thank you...can't wait to try it ;)

  4. Very good tutorial for understanding DRBD. We have a DRBD cluster with Corosync. Nowadays we often see split brain issues and are unable to find out the reason. Can you please advise some tips that will help us identify the root cause?

    1. Making sure both nodes can communicate with each other at all times is the first thing I would check.

  5. Can you please tell us how to recover from split brain issue automatically.

    1. Even though you can script around what's in the logs, I would advise against automating this task, since a person has to make the decision about which of the two nodes is not authoritative. The whole idea behind the split brain scenario is the fact that the cluster does not have quorum and is unable to determine which node should be the authoritative one. You should look into why your cluster is getting into this state to begin with.