Using LXC with OpenStack

In this post we are going to explore a fully automated way of provisioning LXC containers on a set of servers, using OpenStack.

OpenStack is a cloud operating system that allows for the provisioning of virtual machines, LXC containers, load balancers, databases, and storage and network resources in a centralized, yet modular and extensible way. It’s ideal for managing a set of compute resources (servers) and selecting the best candidate target to provision services on, based on criteria such as CPU load, memory utilization, and VM/container density, to name just a few.
In this blog we are going to deploy the following OpenStack components and services:

· Deploy the Keystone identity service that will provide a central directory of users and services and a simple way to authenticate using tokens.
· Install the Nova compute controller, which will manage a pool of servers and provision LXC containers on them.
· Configure the Glance image repository, which will store the LXC images.
· Provision the Neutron networking service that will manage DHCP, DNS and the network bridging on the compute hosts.
· And finally, we are going to provision an LXC container using the libvirt OpenStack driver.
Deploying OpenStack with LXC support on Ubuntu

An OpenStack deployment may consist of multiple components that interact with each other through exposed APIs, or a message bus like RabbitMQ.

We are going to deploy a minimum set of those components – Keystone, Glance, Nova and Neutron – which will be sufficient to provision LXC containers and still take advantage of the scheduler logic and scalable networking that OpenStack provides.

For this tutorial we are going to be using Ubuntu Xenial and, as of the time of this writing, the latest OpenStack release, Newton.

Preparing the host

To simplify things, we are going to use a single server to host all services. In production environments it’s a common approach to separate each service onto its own set of servers for scalability and high availability. By following the steps in this post, you can easily deploy on multiple hosts by replacing the IP addresses and hostnames as needed.

If using multiple servers, you need to make sure the time is synchronized on all hosts by using services like ntpd.

Let's begin by ensuring we have the latest packages and installing the repository that contains the Newton OpenStack release:
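root@controller:~# apt update && apt upgrade
root@controller:~# apt install software-properties-common
root@controller:~# add-apt-repository cloud-archive:newton
root@controller:~# apt update && apt dist-upgrade

Also install the OpenStack command-line client, which we'll use throughout:
root@controller:~# apt install python-openstackclient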


Make sure to add the name of the server, in this example “controller” to /etc/hosts.
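For example, assuming the server's IP address is 192.168.1.117:

192.168.1.117 controller
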
Installing the database service

The services we are going to deploy all use a database as their back-end store. We are going to use MariaDB for this example. Install it by running:
root@controller:~# apt install mariadb-server python-pymysql

A minimal configuration file should look like the following:
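A sketch of /etc/mysql/mariadb.conf.d/99-openstack.cnf, using the example IP from earlier:

[mysqld]
bind-address = 192.168.1.117
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8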


Replace the IP address the service binds to with whatever is on your server, then start the service and run the script that will secure the installation:
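root@controller:~# service mysql restart
root@controller:~# mysql_secure_installation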


The command above will prompt for a new root password. For simplicity we are going to use “lxcpassword” as a password for all services, for the rest of the post.
Installing the message queue service

OpenStack supports the following message queues – RabbitMQ, Qpid and ZeroMQ – which facilitate inter-process communication between services. We are going to use RabbitMQ:
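root@controller:~# apt install rabbitmq-server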


Add a new user and a password:
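root@controller:~# rabbitmqctl add_user openstack lxcpassword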


And grant permissions for that user:
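root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"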


Installing the caching service

The identity service Keystone caches authentication tokens using Memcached. To install it execute:
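root@controller:~# apt install memcached python-memcache
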
Replace the localhost address with the IP address of your server:


The config file should look similar to the following:
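A sketch of /etc/memcached.conf, with the example IP substituted for 127.0.0.1:

-d
logfile /var/log/memcached.log
-m 64
-p 11211
-u memcache
-l 192.168.1.117

Restart the service for the change to take effect:
root@controller:~# service memcached restart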


Installing and configuring Identity service

The Keystone identity service provides a centralized point for managing authentication and authorization for the rest of the OpenStack components. Keystone also keeps a catalog of services and the endpoints they provide, which the user can discover by querying it.
To deploy Keystone, first create a database and grant permissions to the keystone user:
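root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'lxcpassword';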


Next install the identity service components:
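root@controller:~# apt install keystone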


The following is a minimal working configuration for Keystone:
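A minimal sketch of /etc/keystone/keystone.conf, leaving the other options at their packaged defaults:

[database]
connection = mysql+pymysql://keystone:lxcpassword@controller/keystone

[token]
provider = fernet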


If you are using the same hostname and password as in this tutorial, no changes are required.
Next, populate the Keystone database by running:
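root@controller:~# su -s /bin/sh -c "keystone-manage db_sync" keystone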


Keystone uses tokens to authenticate and authorize users and services. There are different token formats available such as UUID, PKI and Fernet tokens. For this example deployment we are going to use the Fernet tokens, which unlike the other types do not need to be persisted in a back end. To initialize the Fernet key repositories run:
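root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
root@controller:~# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone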


For more information on the available identity tokens refer to http://docs.openstack.org/admin-guide/identity-tokens.html

Perform the basic bootstrap process by executing:
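root@controller:~# keystone-manage bootstrap --bootstrap-password lxcpassword --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:35357/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne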


We are going to use Apache with the WSGI module to drive Keystone. Add the following stanza to the Apache config file and restart it:
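For example, in /etc/apache2/apache2.conf:

ServerName controller

root@controller:~# service apache2 restart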

Delete the default SQLite database that Keystone ships with:
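root@controller:~# rm -f /var/lib/keystone/keystone.db

Let's create the administrative account by defining the following environment variables, matching the bootstrap values from above:

export OS_USERNAME=admin
export OS_PASSWORD=lxcpassword
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
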
Time to create our first project in Keystone. Projects represent a unit of ownership, where all resources are owned by a project. The “service” project we are going to create next will be used by all the services we are going to deploy in this post.
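root@controller:~# openstack project create --domain default --description "LXC Project" service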


To list the available projects run:
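root@controller:~# openstack project list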


Let's create an unprivileged project and user that can be used by regular users instead of the OpenStack services:
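root@controller:~# openstack project create --domain default --description "LXC User Project" lxc
root@controller:~# openstack user create --domain default --password-prompt lxc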


Next, create a user role and associate it with the lxc project and user we created in the previous two steps:
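root@controller:~# openstack role create user
root@controller:~# openstack role add --project lxc --user lxc user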


Use the following file to configure the Web Server Gateway Interface (WSGI) middleware pipeline for Keystone:


Let's test the configuration so far by requesting a token for the admin and the lxc users:
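root@controller:~# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
root@controller:~# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name lxc --os-username lxc token issue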


We can create two files that will contain the admin and user credentials we configured earlier:
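For example (the file names rc.admin and rc.lxc are arbitrary):

root@controller:~# cat rc.admin
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=lxcpassword
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

root@controller:~# cat rc.lxc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=lxc
export OS_USERNAME=lxc
export OS_PASSWORD=lxcpassword
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2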


To use the admin user for example, source the file as follows:
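root@controller:~# . rc.admin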


Notice the new environment variables:
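root@controller:~# env | grep ^OS_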


With the admin credentials loaded, let's request an authentication token that we can use later with the other OpenStack services:
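root@controller:~# openstack token issue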


Installing and configuring Image service

The image service provides an API for users to discover, register and obtain images for virtual machines, or images that can be used as the root filesystem for LXC containers. Glance supports multiple storage backends, but for simplicity we are going to use the file store, which keeps the LXC image directly on the file system.

To deploy Glance, first create a database and a user, like we did for Keystone:
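root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'lxcpassword';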


Next, create the glance user and add it to the admin role:
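root@controller:~# openstack user create --domain default --password-prompt glance
root@controller:~# openstack role add --project service --user glance admin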


Time to create the Glance service record:
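root@controller:~# openstack service create --name glance --description "OpenStack Image" image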


Create the Glance API endpoints in Keystone:
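root@controller:~# openstack endpoint create --region RegionOne image public http://controller:9292
root@controller:~# openstack endpoint create --region RegionOne image internal http://controller:9292
root@controller:~# openstack endpoint create --region RegionOne image admin http://controller:9292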


OpenStack supports multi-region deployments for achieving high availability. For simplicity, however, we are going to deploy all services in the same region.


Now that Keystone knows about the Glance service, let's install it:
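root@controller:~# apt install glance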


Use the following two minimal configuration files, replacing the password and hostname as needed:
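A sketch of the relevant sections in /etc/glance/glance-api.conf:

[database]
connection = mysql+pymysql://glance:lxcpassword@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lxcpassword

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

The /etc/glance/glance-registry.conf file takes the same [database], [keystone_authtoken] and [paste_deploy] sections, without the [glance_store] section.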


Populate the Glance database by running:
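root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance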


Start the Glance services:
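root@controller:~# service glance-registry restart
root@controller:~# service glance-api restart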


We can build an image for the LXC containers by hand, or download a pre-built image from an Ubuntu repository. Let's download an image and extract it:
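For example, using the Xenial cloud image (one possible source):

root@controller:~# wget http://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64.tar.gz
root@controller:~# tar xvfz ubuntu-16.04-server-cloudimg-amd64.tar.gz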


The file that contains the rootfs has the .img extension. Let's add it to the Image service:
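For example, assuming the extracted rootfs file is named as below (the image name lxc_ubuntu_16.04 is arbitrary):

root@controller:~# openstack image create "lxc_ubuntu_16.04" --file xenial-server-cloudimg-amd64.img --disk-format raw --container-format bare --public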


Please note that LXC uses the “raw” disk format and the “bare” container format.

The image is now stored at the location defined in the glance-api.conf as the filesystem_store_datadir parameter, as we saw in the configuration example above:
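root@controller:~# ls -la /var/lib/glance/images/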


Let's list the available images in Glance:
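root@controller:~# openstack image list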


Installing and configuring Compute service

The OpenStack Compute service manages a pool of compute resources (servers) and the virtual machines or containers running on those resources. It provides a scheduler service that takes a request for a new VM or container from the queue and decides on which compute host to create and start it.

For more information on the various Nova services, refer to: http://docs.openstack.org/developer/nova/

Let's begin by creating the nova database and user:
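root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'lxcpassword';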


Once the database is created and the user permissions granted, create the nova user and add it to the admin role in the Identity service:
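root@controller:~# openstack user create --domain default --password-prompt nova
root@controller:~# openstack role add --project service --user nova admin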


Next, create the nova service and endpoints:
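root@controller:~# openstack service create --name nova --description "OpenStack Compute" compute
root@controller:~# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
root@controller:~# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
root@controller:~# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s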


Time to install the nova packages that will provide the API, the conductor, the console and the scheduler services:
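root@controller:~# apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler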


The Nova packages we just installed provide the following services:
· The nova-api service accepts and responds to user requests through a RESTful API. We use it to create, run, and stop instances, among other operations.
· The nova-conductor service sits between the nova database we created earlier and the nova-compute service, which runs on the compute nodes and creates the VMs and containers. We are going to install that service later in this post.
· The nova-consoleauth service authorizes tokens for users that want to use various consoles to connect to the VMs or containers.
· The nova-novncproxy grants access to running instances through VNC.
· The nova-scheduler, as mentioned earlier, decides where to provision a VM or LXC container.

The following is a minimal functioning Nova configuration:
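A sketch of /etc/nova/nova.conf, using the example IP from earlier:

[DEFAULT]
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
my_ip = 192.168.1.117
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova_api

[database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lxcpassword

[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp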


With the config file in place we can now populate the Nova database:
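root@controller:~# su -s /bin/sh -c "nova-manage api_db sync" nova
root@controller:~# su -s /bin/sh -c "nova-manage db sync" nova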


And finally, start the Compute services:
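root@controller:~# service nova-api restart
root@controller:~# service nova-consoleauth restart
root@controller:~# service nova-scheduler restart
root@controller:~# service nova-conductor restart
root@controller:~# service nova-novncproxy restart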


Since we are going to use a single node for this OpenStack deployment, we need to install the nova-compute service. In production we usually have a pool of compute servers that run only that service.
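root@controller:~# apt install nova-compute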


Use the following minimal configuration file that will allow running nova-compute and the rest of the nova services on the same server:
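A sketch of the additional sections in /etc/nova/nova.conf:

[vnc]
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[libvirt]
virt_type = lxc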


Notice under the libvirt section how we specify LXC as the default virtualization type we are going to use. To enable LXC support in Nova install the following package:
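root@controller:~# apt install nova-compute-lxc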


The package provides the following configuration file:
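root@controller:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = lxc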


Restart the nova-compute service and list all available Nova services:
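root@controller:~# service nova-compute restart
root@controller:~# openstack compute service list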


With all the Nova services configured and running, time to move to the networking part of the deployment.

Installing and configuring Networking service

The networking component of OpenStack, codenamed Neutron, manages networks, IP addresses, software bridging and routing. In the previous posts we had to create the Linux Bridge, add ports to it, configure DHCP to assign IPs to the containers, etc. Neutron exposes all of these functionalities through a convenient API and libraries that we can use.

Let's start by creating the database, user and permissions:
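root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'lxcpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'lxcpassword';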


Next, create the neutron user and add it to the admin role in Keystone:
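root@controller:~# openstack user create --domain default --password-prompt neutron
root@controller:~# openstack role add --project service --user neutron admin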


Create the neutron service and endpoints:
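root@controller:~# openstack service create --name neutron --description "OpenStack Networking" network
root@controller:~# openstack endpoint create --region RegionOne network public http://controller:9696
root@controller:~# openstack endpoint create --region RegionOne network internal http://controller:9696
root@controller:~# openstack endpoint create --region RegionOne network admin http://controller:9696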


With all the services and endpoints defined in the Identity service, install the following packages:
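root@controller:~# apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
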
The Neutron packages that we installed above provide the following services:
· The neutron-server provides an API to dynamically request and configure virtual networks.
· The neutron-plugin-ml2 is a framework that enables the use of various network technologies such as the Linux Bridge, Open vSwitch, GRE and VXLAN.
· The neutron-linuxbridge-agent provides the Linux bridge plugin agent.
· The neutron-l3-agent performs forwarding and NAT functionality between software defined networks, by creating virtual routers.
· The neutron-dhcp-agent controls the DHCP service that assigns IP addresses to the instances running on the compute nodes.
· The neutron-metadata-agent is a service that proxies metadata requests from the instances to Nova.
The following is a minimal working configuration file for Neutron:
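A sketch of /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:lxcpassword@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = lxcpassword

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = lxcpassword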


We need to define the network extensions we are going to support and the network type. All of this information will be used when creating the LXC container and its configuration file, as we’ll see later:
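One possible /etc/neutron/plugins/ml2/ml2_conf.ini, using a flat provider network:

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true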


Define the interface that will be added to the software bridge and the IP the bridge will be bound to. In this case we are using the eth1 interface and its IP address:
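A sketch of /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver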


We specify the bridge driver for the L3 agent as follows:
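In /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver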

The configuration file for the DHCP agent, /etc/neutron/dhcp_agent.ini, should look similar to this:
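[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

And finally, the configuration for the metadata agent, /etc/neutron/metadata_agent.ini, looks like the following:

[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = lxcpassword
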
We need to update the configuration file for the Nova services. The relevant new section should look like this; replace the IP address as needed:
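A sketch of the new [neutron] section in /etc/nova/nova.conf:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lxcpassword
service_metadata_proxy = true
metadata_proxy_shared_secret = lxcpassword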


Populate the Neutron database:
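root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron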

Finally, start all Networking services and restart nova-compute:
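root@controller:~# service neutron-server restart
root@controller:~# service neutron-linuxbridge-agent restart
root@controller:~# service neutron-dhcp-agent restart
root@controller:~# service neutron-metadata-agent restart
root@controller:~# service neutron-l3-agent restart
root@controller:~# service nova-compute restart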

Let's verify the Neutron services are running:
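root@controller:~# openstack network agent list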



Defining the LXC instance flavor, generating a key pair and creating security groups


Before we can create an LXC instance, we need to define its flavor – CPU, memory and disk size. The following creates a flavor named lxc.medium with 1 virtual CPU, 1GB RAM and 5GB disk:
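root@controller:~# openstack flavor create --vcpus 1 --ram 1024 --disk 5 lxc.medium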


If we don’t want the SSH keys baked into the actual image, we can have OpenStack manage and install them during instance provisioning. To generate an SSH key pair and add it to OpenStack, run:
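For example (the key pair name lxckey is arbitrary):

root@controller:~# ssh-keygen -q -N ""
root@controller:~# openstack keypair create --public-key ~/.ssh/id_rsa.pub lxckey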


To list the new key pair we just added execute:
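root@controller:~# openstack keypair list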


By default, once a new LXC container is provisioned, iptables will disallow access to it. Let's create two security group rules that will allow ICMP and SSH, so we can test connectivity and connect to the instance:
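root@controller:~# openstack security group rule create --proto icmp default
root@controller:~# openstack security group rule create --proto tcp --dst-port 22 default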



Creating the networks

Let's start by creating a new network in Neutron called “nat”:
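One possible form, assuming the flat provider network defined in the ML2 configuration above:

root@controller:~# openstack network create --share --provider-network-type flat --provider-physical-network provider nat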


Next, define the DNS server, the default gateway and the subnet range that will be assigned to the LXC container:
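A sketch, assuming the 192.168.0.0/24 range (the subnet name nat_subnet is arbitrary):

root@controller:~# openstack subnet create --network nat --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 --subnet-range 192.168.0.0/24 nat_subnet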


Update the subnet's information in Neutron:
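For example, to add another DNS server:

root@controller:~# openstack subnet set --dns-nameserver 8.8.4.4 nat_subnet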


As the lxc user, create a new software router:
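For example (the router name router is arbitrary; rc.lxc is the credentials file from earlier):

root@controller:~# . rc.lxc
root@controller:~# openstack router create router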


As the admin user, add the subnet we created earlier as an interface to the router:
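root@controller:~# . rc.admin
root@controller:~# openstack router add subnet router nat_subnet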


Let's list the network namespaces that were created:
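root@controller:~# ip netns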


To show the ports on the software router and the default gateway for the LXC containers, run:
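root@controller:~# neutron router-port-list router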


Provisioning an LXC container with OpenStack

Before we launch our LXC container with OpenStack, let's double-check we have all the requirements in place.
Start by listing the available networks:
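root@controller:~# openstack network list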


Display the compute flavors we can choose from:
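root@controller:~# openstack flavor list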


Next, list the available images:
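root@controller:~# openstack image list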


And display the default security group we created earlier:
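root@controller:~# openstack security group list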


Time to load the Network Block Device kernel module, as Nova expects it:
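root@controller:~# modprobe nbd
root@controller:~# lsmod | grep nbd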


Finally, to provision the LXC container with OpenStack, we execute:
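For example (the instance name lxc_instance is arbitrary; substitute the network ID from your environment):

root@controller:~# openstack server create --flavor lxc.medium --image lxc_ubuntu_16.04 --nic net-id=<nat-network-id> --security-group default --key-name lxckey lxc_instance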


Notice how we specified the instance flavor, the image name, the ID of the network, the security group, the key pair name, and the name of the instance.

Make sure to replace the IDs with the output returned on your system.

To list the LXC container, its status and assigned IP address, run:
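root@controller:~# openstack server list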


As we saw earlier in the post, OpenStack uses the libvirt driver to provision LXC containers. We can use the virsh command to list the LXC containers on the host:
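root@controller:~# virsh --connect lxc:// list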


If we list the processes on the host, we can see that the libvirt_lxc parent process has spawned the init process for the container:
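root@controller:~# ps -ef | grep libvirt_lxc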


The container's configuration file and disk are located at:
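root@controller:~# ls -la /var/lib/nova/instances/<instance-uuid>/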


Let's examine the container's configuration file:
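root@controller:~# cat /var/lib/nova/instances/<instance-uuid>/libvirt.xml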


With the networking managed by Neutron, we should see the bridge and the container's interface added as a port:
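root@controller:~# brctl show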


Let's configure an IP address on the bridge interface and allow NAT connectivity to the container:
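A sketch, assuming the subnet from earlier and eth0 as the host's uplink interface; substitute the bridge name from the brctl output:

root@controller:~# ip addr add 192.168.0.1/24 dev <bridge-name>
root@controller:~# iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE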


To connect to the LXC container using SSH and the key pair we generated earlier, execute:
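For example, substituting the instance's IP from the server list output:

root@controller:~# ssh -i ~/.ssh/id_rsa ubuntu@192.168.0.3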


Finally, to delete the LXC container using OpenStack run:
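root@controller:~# openstack server delete lxc_instance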