Virtualization using KVM with libvirt on RHEL

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM is included in mainline Linux, as of 2.6.20.
KVM merges the hypervisor with the kernel, thus reducing redundancy and speeding up execution times. A KVM driver communicates with the kernel and acts as an interface for a userspace virtual machine. Scheduling of processes and memory management are handled by the kernel itself. A small Linux kernel module introduces the guest mode, sets up page tables for the guest, and emulates certain key instructions. Current versions of KVM come with a modified version of the QEMU emulator, which manages I/O and acts as a virtual home for the guest system. The guest system runs within QEMU, and QEMU runs as an ordinary process in user space.
Each guest consists of two parts: the userspace part (QEMU) and the guest part (the guest itself). The guest's physical memory is mapped into the task's virtual memory space, so guests can be swapped as well. Virtual processors within a virtual machine are simply threads in the host process.
Paravirtualization support is also available for Linux and Windows guests using the VirtIO framework; this includes a paravirtual Ethernet card, a disk I/O controller and a balloon device for adjusting guest memory usage. Note the bus='virtio' setting in the configuration examples further down this post.
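For example (a sketch only, not a full domain definition), the paravirtual network card shows up in a guest's libvirt XML as an interface with model type='virtio'; the br0 bridge here matches the virt-install examples below:

   <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
   </interface>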

Why KVM and not Xen?

Xen is an external hypervisor; it assumes control of the machine and divides resources among guests. KVM, on the other hand, is part of Linux and uses the regular Linux scheduler and memory management. This means that KVM is much smaller and simpler to use; it is also more featureful: for example, KVM can swap guests to disk in order to free RAM.
KVM only runs on processors that support x86 HVM (the VT/SVM instruction sets), whereas Xen also allows running modified operating systems on non-HVM x86 processors using a technique called paravirtualization. KVM does not support paravirtualization for the CPU, but it does support paravirtualization for device drivers (VirtIO) to improve I/O performance.

What is the difference between QEMU and KVM?

QEMU is a generic and open source machine emulator and virtualizer, while KVM uses processor extensions (HVM) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or when using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.

What is libvirt and why is it useful?

Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management. These software pieces include an API library, a daemon (libvirtd), and a command line utility (virsh).
A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors. For example, the command 'virsh list --all' can be used to list the existing virtual machines for any supported hypervisor (KVM, Xen, VMware ESX, etc.). No need to learn the hypervisor-specific tools!
It supports the Xen and KVM hypervisors and the QEMU emulator.
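As an illustration, the output looks roughly like this (the guest names and IDs here are hypothetical):

[root@host]# virsh list --all
 Id Name                 State
----------------------------------
  1 testbox              running
  - backupbox            shut off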
Some of the major libvirt features are:
  • VM management: Various domain lifecycle operations such as start, stop, pause, save, restore, and migrate. Hotplug operations for many device types including disk and network interfaces, memory, and CPUs.
  • Remote machine support: All libvirt functionality is accessible on any machine running the libvirt daemon, including remote machines. A variety of network transports are supported for connecting remotely, with the simplest being SSH, which requires no extra explicit configuration. If example.com is running libvirtd and SSH access is allowed, the following command will provide access to all virsh commands on the remote host for qemu/kvm:

    [root@host]# virsh --connect qemu+ssh://root@example.com/system
    

    For more info, see: http://libvirt.org/remote.html
  • Storage management: Any host running the libvirt daemon can be used to manage various types of storage: create file images of various formats (qcow2, vmdk, raw, ...), mount NFS shares, enumerate existing LVM volume groups, create new LVM volume groups and logical volumes, partition raw disk devices, mount iSCSI shares, and much more. Since libvirt works remotely, all these options are available for remote hosts as well (see the short example after this list).

    For more info, see: http://libvirt.org/storage.html
  • Network interface management: Any host running the libvirt daemon can be used to manage physical and logical network interfaces. Enumerate existing interfaces, as well as configure (and create) interfaces, bridges, VLANs, and bond devices. This is done with the help of netcf.

    For more info, see: https://fedorahosted.org/netcf/
  • Virtual NAT and route-based networking: Any host running the libvirt daemon can manage and create virtual networks. Libvirt virtual networks use firewall rules to act as a router, providing VMs transparent access to the host machine's network.

    For more info, see: http://libvirt.org/archnetwork.html
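As a short example of the storage management piece (a sketch; the 'default' pool and the volume name here are assumptions), a qcow2 volume can be created and listed like this:

[root@host]# virsh pool-list --all
[root@host]# virsh vol-create-as default testbox-data.qcow2 10G --format qcow2
[root@host]# virsh vol-list default
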
Installing KVM and libvirt

To install kvm and libvirt run:

[root@host]# yum install kvm
[root@host]# yum install virt-manager libvirt libvirt-python python-virtinst kvm-tools
[root@host]# service libvirtd start
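To check that the KVM modules are loaded and that libvirtd is answering (and, optionally, to make libvirtd start on boot), you can run something like:

[root@host]# lsmod | grep kvm
[root@host]# virsh version
[root@host]# chkconfig libvirtd on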

Installing a virtualized RHEL guest on a dedicated block device /dev/mapper/VGkvmimages-LVstorage using kickstart

You can use virt-install to install the new guest using VNC by running:

[root@host]# virt-install -n testbox --vcpus=4 -r 4096 --os-type=linux  --os-variant=rhel5 --accelerate --location=http://192.168.1.1/rhel/Server/5.5-x64/extract/ --extra-args=ks=http://192.168.1.1/ks/testbox.ks --disk path=/dev/mapper/VGkvmimages-LVstorage,device=block,bus=virtio --network=bridge:br0 --noreboot --vnc

or by connecting to the serial console instead of using the graphical interface:

[root@host]# virt-install -n $GUEST --vcpus=4 -r 10000 --os-type=linux --os-variant=rhel5 --accelerate --location=http://10.25.5.80/rhel/Server/5.5-x64/extract/ --extra-args="ks=http://10.25.5.80/ks/$GUEST text console=tty0 utf8 console=ttyS0,115200" --disk path=/dev/mapper/mpd-$GUEST,device=disk,bus=virtio --network=bridge:br0 --nographics
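If you went with the VNC variant, you can attach to the guest's graphical console during the installation with virt-viewer (assuming the virt-viewer package is installed) or from virt-manager:

[root@host]# virt-viewer testbox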

Editing of the guest configuration file

There are two ways to edit and make changes to the guest configuration.
You can dump the config file by executing:

[root@host]# virsh dumpxml testbox > testbox.xml

Then edit the file, save and run:

[root@host]# virsh define testbox.xml

For the changes to take effect you have to destroy and create the guest:

[root@host]# virsh destroy testbox
[root@host]# virsh create testbox.xml
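As an example of a typical edit (a sketch; libvirt expects these values in KiB, so 4194304 corresponds to 4 GB), you could raise the guest memory by changing these elements in testbox.xml before redefining it:

   <memory>4194304</memory>
   <currentMemory>4194304</currentMemory>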

Alternatively you can use:

[root@host]# virsh edit testbox

This will dump the configuration file, open an editor, and define the file after you save the changes.

Making a guest start after a host reboot

If you want your guest OS to start automatically after you reboot the host, run:

[root@host]# virsh autostart testbox
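To turn autostart off again, virsh has a --disable flag:

[root@host]# virsh autostart --disable testbox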

Hot adding a block device to a guest

There are two ways to add a device (block, file or network) without destroying and recreating the guest:

First load the acpiphp module if it is not already loaded on the guest. This allows for hot insertion of PCI devices:

[root@testbox]# modprobe acpiphp

Then on the host create an xml file describing the device you want to hot-add, in this case a block device /dev/mapper/VGkvmimages-LVstorage:

[root@host]# cat newdisk.xml
   <disk type='block' device='disk'>
      <driver name='qemu' cache='none'/>
      <source dev='/dev/mapper/VGkvmimages-LVstorage'/>
      <target dev='vdb' bus='virtio'/>
   </disk>

Save the file and run:

[root@host]# virsh attach-device testbox newdisk.xml

You can also use:

[root@host]# virsh attach-disk testbox /dev/mapper/VGkvmimages-LVstorage vdb

Make sure vdb is not in use on the guest machine.
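Inside the guest the new disk should show up as /dev/vdb, which you can confirm with, for example:

[root@testbox]# fdisk -l /dev/vdb
[root@testbox]# dmesg | tail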

To hot-remove the device use:

[root@host]# virsh detach-device testbox newdisk.xml

or

[root@host]# virsh detach-disk testbox vdb

Live KVM migration

A guest can be migrated to another host with the virsh command. First make sure that the guest is running:

[root@host]# virsh list

To migrate the guest execute:

[root@host]# virsh migrate --live testbox qemu+ssh://remotelinuxbox/system

This will keep all the connections to the migrated guest alive after the migration, with no perceived outage.
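Once the migration completes, you can confirm that the guest is now running on the destination host:

[root@host]# virsh --connect qemu+ssh://remotelinuxbox/system list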

Adding console access to KVM guest

If you want to be able to log in to a KVM guest through the serial console for diagnostic purposes, in case you are unable to SSH in or use VNC, make the following changes inside the guest:

1. Add these two lines before the kernel definition in grub.conf (above title):

[root@host]# vi /boot/grub/grub.conf
serial --unit=0 --speed=115200
terminal --timeout=5 serial console

2. Append console=ttyS0,115200 at the end of the kernel line like so (all on one line):

[root@host]# vi /boot/grub/grub.conf
kernel /vmlinuz-2.6.18-194.17.1.el5 ro root=/dev/VolGroup00/LVroot rhgb quiet console=ttyS0,115200


3. Finally allow root to connect to the serial console by adding ttyS0 to /etc/securetty:

[root@host]# vi /etc/securetty
ttyS0
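For virsh console to work, the guest also needs a serial device in its libvirt XML. Guests created with virt-install normally get one by default, but if it is missing you can add something like the following with virsh edit (a standard pty-backed serial console definition):

   <serial type='pty'>
      <target port='0'/>
   </serial>
   <console type='pty'>
      <target type='serial' port='0'/>
   </console>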

Now you should be able to connect to the console from the KVM host by running:

[root@host]# virsh console testbox

On a Debian guest make the following changes instead:

[root@host]# vi /etc/inittab

T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

[root@host]# vi /etc/default/grub

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,9600n8"

[root@host]# update-grub 

And on an Ubuntu guest:

[root@host]# vi /etc/init/ttyS0.conf

# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345] and (
            not-container or
            container CONTAINER=lxc or
            container CONTAINER=lxc-libvirt)

stop on runlevel [!2345]

respawn
exec /sbin/getty -L ttyS0 9600 vt100
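After saving the file, the new getty can be started without a reboot (assuming upstart is in use):

[root@host]# start ttyS0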

Terminology

Virtualization - Virtualization is a broad computing term for running software, usually operating systems, concurrently and isolated from other programs on one system. Most existing implementations of virtualization use a hypervisor, a software layer on top of an operating system, to abstract hardware. The hypervisor allows multiple operating systems to run on the same physical system by giving the guest operating system virtualized hardware. There are various methods for virtualizing operating systems:
• Hardware-assisted virtualization is the technique used for full virtualization with Xen and KVM
• Para-virtualization is a technique used by Xen to run Linux guests
• Software virtualization or emulation. Software virtualization uses binary translation and other emulation techniques to run unmodified operating systems. Software virtualization is significantly slower than hardware-assisted virtualization or para-virtualization. Software virtualization, in the form of QEMU, is unsupported by Red Hat Enterprise Linux.

Para-virtualization - Para-virtualization uses a special kernel, sometimes referred to as the Xen kernel or the kernel-xen package. Para-virtualized guest kernels are run concurrently on the host while using the host's libraries and devices. A para-virtualized installation can have complete access to all devices on the system which can be limited with security settings (SELinux and file controls). Para-virtualization is faster than full virtualization. Para-virtualization can effectively be used for load balancing, provisioning, security and consolidation advantages. As of Fedora 9 a special kernel will no longer be needed. Once this patch is accepted into the main Linux tree all Linux kernels after that version will have para-virtualization enabled or available.

Full virtualization - Xen and KVM can use full virtualization. Full virtualization uses hardware features of the processor to provide total abstraction of the underlying physical system (Bare-metal) and create a new virtual machine in which the guest operating systems can run. No modifications are needed in the guest operating system. The guest operating system and any applications on the guest are not aware of the virtualized environment and run normally. Para-virtualization requires a modified version of the Linux operating system.

For more information you can read a Linux Magazine article on KVM here