As stated in the OVS documentation, hypervisors need the ability to bridge traffic between VMs and the outside world. On Linux-based hypervisors, this used to mean using the built-in L2 switch, the Linux bridge.
Open vSwitch is targeted at multi-server virtualization deployments where VM mobility and network dynamics are important.
Open vSwitch supports a number of features that allow a network control system to respond and adapt as the environment changes. This includes simple accounting and visibility support such as NetFlow and sFlow. But perhaps more useful, Open vSwitch supports a network state database (OVSDB) that supports remote triggers. Therefore, a piece of orchestration software can "watch" various aspects of the network and respond if/when they change. This is used heavily today, for example, to respond to and track VM migrations.
Open vSwitch also supports OpenFlow as a method of exporting remote access to control traffic. There are a number of uses for this including global network discovery through inspection of discovery or link-state traffic (e.g. LLDP, CDP, OSPF, etc.).
The goal with Open vSwitch is to keep the in-kernel code as small as possible (as is necessary for performance) and to re-use existing subsystems when applicable (for example Open vSwitch uses the existing QoS stack).
For more information, refer to the Open vSwitch documentation.
Most Linux distributions now come with the OVS user-space tools and the kernel module, but I prefer to get the latest code and compile it manually (the source code also comes with spec files for building rpm or deb packages).
First, let's download and compile the code.
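A typical source build might look like the following; the release version and URL are illustrative, so check the OVS download page for the current one:

```shell
# Download and unpack the Open vSwitch source (version is illustrative)
wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz
tar -xzf openvswitch-1.7.1.tar.gz
cd openvswitch-1.7.1

# Configure against the running kernel so the kernel modules are built too
./configure --with-linux=/lib/modules/$(uname -r)/build

# Build and install the user-space tools and kernel modules
make
sudo make install
```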
Alternatively, you can build the kernel modules (openvswitch.ko and brcompat.ko) and the user-space tools as packages and install them.

If the bridge module is loaded, remove it before loading the openvswitch one.
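On an RPM-based system, building and installing the packages from the spec files shipped in the source tree might look roughly like this (the spec file names and paths are assumptions; check the rhel/ directory of your source tree):

```shell
# Build binary RPMs from the spec files bundled with the source
# (names/paths are assumptions; adjust to your tree and RHEL version)
rpmbuild -bb rhel/openvswitch.spec
rpmbuild -bb rhel/openvswitch-kmod-rhel6.spec

# Install the resulting packages
rpm -ivh ~/rpmbuild/RPMS/x86_64/openvswitch-*.rpm
```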
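Unloading the stock Linux bridge module first avoids a conflict with the OVS modules:

```shell
# Unload the in-kernel Linux bridge module if it is currently loaded
lsmod | grep -q '^bridge' && rmmod bridge
```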
Load the openvswitch module:
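Loading the module is a single modprobe (note that some older builds name the module openvswitch_mod instead):

```shell
# Load the Open vSwitch datapath module
modprobe openvswitch
```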
As of this writing, I was only able to start my LXC container by first loading the brcompat compatibility module; without it, the container failed to start with a bridging error.

Next, load the OVS modules at boot and blacklist the bridge module (on RHEL), and initialize the configuration database using ovsdb-tool.
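Loading the compatibility module is again a single modprobe:

```shell
# Load the bridge-compatibility module so tools that expect
# the Linux bridge interface keep working
modprobe brcompat
```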
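On RHEL, one way to load the modules at boot and keep the stock bridge module out is a modules file plus a blacklist entry (the file names here are just a convention):

```shell
# Load the OVS modules at boot (RHEL runs /etc/sysconfig/modules/*.modules)
cat > /etc/sysconfig/modules/openvswitch.modules <<'EOF'
#!/bin/sh
modprobe openvswitch
modprobe brcompat
EOF
chmod +x /etc/sysconfig/modules/openvswitch.modules

# Prevent the stock Linux bridge module from loading
echo "blacklist bridge" > /etc/modprobe.d/bridge-blacklist.conf
```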
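Creating the database from the schema shipped with OVS might look like this (the schema path depends on your install prefix):

```shell
# Create the OVSDB database from the vswitch schema
ovsdb-tool create /etc/openvswitch/conf.db \
    /usr/share/openvswitch/vswitch.ovsschema
```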
Before starting ovs-vswitchd itself, you need to start its configuration database, ovsdb-server. Each machine on which Open vSwitch is installed should run its own copy of ovsdb-server.
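Starting the database server typically follows the upstream install instructions; the socket path and the exact db: column references can vary slightly between OVS versions:

```shell
# Start the OVSDB server, listening on a Unix domain socket
ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --private-key=db:Open_vSwitch,SSL,private_key \
    --certificate=db:Open_vSwitch,SSL,certificate \
    --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
    --pidfile --detach
```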
If you built Open vSwitch without SSL support, then omit --private-key, --certificate, and --bootstrap-ca-cert.
Then initialize the database using ovs-vsctl. This is only necessary the first time after you create the database with ovsdb-tool (but running it at any time is harmless):
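The initialization is a single command; --no-wait is needed because ovs-vswitchd is not running yet:

```shell
# One-time database initialization (harmless to re-run)
ovs-vsctl --no-wait init
```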
Then start the main Open vSwitch daemon, telling it to connect to the same Unix domain socket:
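Assuming the same socket path used when starting ovsdb-server above:

```shell
# Start the switch daemon, pointing it at the database socket
ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
```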
If you built the Open vSwitch user-space tools from the packages as shown above, you can alternatively start OVS from the init script.

Now it's time to create the bridge and add ports to it.
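With the packages installed, the init script route is simply (the service name may differ by distribution):

```shell
# Start Open vSwitch via the init script shipped with the packages
service openvswitch start
```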
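The bridge and ports can be created with ovs-vsctl (br0 is an assumed bridge name):

```shell
# Create the bridge and attach the physical and container interfaces
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 veth0
```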
This will add eth2, which is my public interface, and veth0, a virtual interface belonging to one of my LXC containers, to the OVS bridge.
To connect your VM directly to the bridge without using NAT, make sure your VM network definition looks similar to this:
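A legacy-LXC network section along these lines would produce the veth0 interface used above (the interface name and address are assumptions; note the veth pair is attached to the OVS bridge manually with ovs-vsctl rather than via lxc.network.link, since the legacy lxc tools only know how to attach to a Linux bridge):

```
# Sketch of a legacy-LXC container network section (values are examples)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.veth.pair = veth0
lxc.network.ipv4 = 192.168.1.10/24
```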
Now your VM is connected to the OVS bridge and can talk to whatever network is connected to eth2, as long as it's configured in the same subnet.
To see the ports that are connected to the bridge run:
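Either of the following will do (br0 is the bridge created earlier):

```shell
# List just the ports attached to the bridge
ovs-vsctl list-ports br0

# Or show the full switch configuration
ovs-vsctl show
```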
The network configuration should look like the following (on RHEL). Note the last two lines, which specify what kernel module to use.
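A sketch of the bridge's ifcfg file (device name and addresses are examples); in this sketch the final TYPE/DEVICETYPE lines are what tell the RHEL network scripts to hand the device to Open vSwitch:

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (values are examples)
DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
TYPE=OVSBridge
DEVICETYPE=ovs
```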