Intro#
In short, QEMU is an emulator and KVM is a special mode of QEMU that supports hardware virtualization.
KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module.
libvirt is a middleware library that provides various functions for many hypervisors. Virt-Manager is a GUI for libvirt and so allows us to manage KVM VMs.
Unlike other virtualization programs such as VirtualBox and VMware, QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).
Libvirt provides a convenient way to manage QEMU virtual machines. See list of libvirt clients for available front-ends.
Requirements#
Check the KVM wiki page to see if you meet the requirements.
Installation#
Install all the dependencies for our setup.
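A plausible package set on Arch Linux; exact names may vary with your distro and needs:

```sh
# qemu-desktop is the full-system emulation split package on Arch;
# dnsmasq is used by libvirt for NAT network DHCP/DNS
sudo pacman -S qemu-desktop libvirt virt-manager dnsmasq
```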
Configuring KVM#
To use KVM as a normal user without root, we need to configure libvirt: set the UNIX domain socket ownership to the `libvirt` group and the UNIX socket permissions to read/write. So uncomment the following lines in `/etc/libvirt/libvirtd.conf` (`sudoedit /etc/libvirt/libvirtd.conf`):
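These two lines ship commented out with exactly these values:

```
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
```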
Then we need to add our user to the libvirt user group.
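For example:

```sh
sudo usermod -aG libvirt "$USER"
```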
To take this change into account, we either need to restart or run `newgrp libvirt`.
We also need to add our user to `/etc/libvirt/qemu.conf`.
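Something like the following, replacing `noraj` with your username (the `user`/`group` keys already exist commented out):

```
user = "noraj"
group = "noraj"
```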
Activating KVM#
Let's enable auto-start for KVM and start it:
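With systemd:

```sh
# enable at boot and start immediately
sudo systemctl enable --now libvirtd.service
```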
Upon opening Virt-Manager, it will default to the system variant (root) of the QEMU connection. This can be changed to the user connection by going to `File > Add Connection`. Now select `QEMU/KVM User session` as the Hypervisor and click OK. This will now auto-connect to the user session. You can now disconnect and remove the system connection if desired.
Guest support and transition from VirtualBox#
If we want to reuse a disk from a VirtualBox VM, we'll need to convert the VDI disk to qcow2.
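With `qemu-img`, using hypothetical file names:

```sh
# convert a VirtualBox VDI image to qcow2 (file names are examples)
qemu-img convert -f vdi -O qcow2 MyVM.vdi MyVM.qcow2
```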
Then to take advantage of the VirtIO devices we'll need to enable some kernel modules.
To be able to launch my VM the first time in KVM, I had to add the disk as a SATA device rather than a VirtIO device. Then I started my VM and made the following changes.
To boot from a virtio disk, the initial ramdisk must contain the necessary modules. I'm not sure the mkinitcpio `autodetect` hook does it automatically in my case, so I went with the manual approach to include the necessary modules.
/etc/mkinitcpio.conf
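A plausible module list (`virtio_scsi` only matters if you use a SCSI controller):

```
MODULES=(virtio virtio_blk virtio_pci virtio_net virtio_scsi)
```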
Then we can force rebuild the init ramdisk:
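On Arch, regenerating all presets:

```sh
sudo mkinitcpio -P
```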
The dev name of the disk will change from `/dev/sda` with SATA to `/dev/vda` with virtio. I have no changes to make in `/etc/fstab` or `/efi/refind_linux.conf` because I'm using UUIDs to reference disks, but otherwise you'll have to make the changes there.
Now we should be able to use virtio devices.
We can remove the old guest additions from VirtualBox:
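Assuming they were installed from the official repository:

```sh
sudo pacman -Rns virtualbox-guest-utils
```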
We'll install QEMU guest additions instead:
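On Arch:

```sh
# the qemu-guest-agent service should start automatically
# when the virtio serial device is present
sudo pacman -S qemu-guest-agent
```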
Since the VM will be using SPICE, we can improve the user experience by installing a few packages (install command below):
- `spice-vdagent`: to enable shared clipboard
- `xf86-video-qxl`: appropriate video driver (if using QXL rather than virtio)
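On Arch that would be:

```sh
sudo pacman -S spice-vdagent xf86-video-qxl
```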
Optionally remove old video drivers (but it may be better to keep them as a fallback):
Shared folder#
Unfortunately, virtiofs shared folders are not possible in session mode, despite being nicely integrated in virt-manager.
It is possible to launch virtiofsd externally, but this requires manual configuration, is not flexible, and is not integrated in virt-manager, so it would require manual XML configuration, which is not ideal.
SPICE WebDAV#
There is the SPICE shared folder option, but while we can create the spice-webdav channel, there is no SPICE WebDAV integration in the client (virt-manager), so we can't mount the shared folder in the VM.
But there is still a way using the `virt-viewer` client.
In virt-manager VM settings:
- Hit the `Add Hardware` button
- Select a `Channel` device
- Choose `org.spice-space.webdav.0` in Name, `Spice Port (spiceport)` in Device Type, and keep `org.spice-space.webdav.0` in Channel.
Install virt-viewer: `pacman -S virt-viewer`.
Optionally, we can check our VM name with `virsh`:
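For example:

```sh
# as a regular user, virsh defaults to the qemu:///session URI
virsh list --all
```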
Then connect to it:
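Assuming, for illustration, the VM is named `Archlinux`:

```sh
# replace Archlinux with your VM name from `virsh list --all`
virt-viewer --connect qemu:///session Archlinux
```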
In the virt-viewer interface, go into the `Preferences` menu, which will show a `Spice` tab where there is an option for folder sharing. Alternatively, you can add the `--spice-shared-dir=/home/noraj/Share` option to `virt-viewer`, but you'll still have to enable it in the preferences since it's always disabled by default.
Then on the guest, we have to install a WebDAV server.
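The SPICE WebDAV daemon comes from the phodav project; if I recall correctly, on an Arch guest the package is `phodav` (Debian-based guests call it `spice-webdavd`):

```sh
sudo pacman -S phodav  # provides the spice-webdavd daemon
```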
Then start it and optionally enable auto-start:
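The unit should be named after the daemon (worth verifying with `systemctl list-unit-files | grep webdav`):

```sh
sudo systemctl enable --now spice-webdavd.service
```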
Finally, we need a WebDAV client to mount the share, for example Dolphin. We can specify `127.0.0.1` as the server and `9843` as the port when creating a Network Folder, or browse `webdav://localhost:9843/` directly.
It's also possible to use Cockpit, but its remote viewer just uses `virt-viewer`, so it adds an extra layer that is not necessary.
virtio-9p#
Alternatively, if we want to stay with virt-manager all along, we can still use the older `virtio-9p`, which is way slower than virtiofs but works in session mode.
In virt-manager VM settings:
- Hit the Add Hardware button
- Choose a Filesystem device
- Select virtio-9p as Driver
- Define the source path (folder to share from the host) and target path (a mount tag)
And in the guest, mount the share:
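The generic form, where the mount tag is the target path defined above:

```sh
# <mount_tag> and <mount_point> are placeholders
mount -t 9p -o trans=virtio,version=9p2000.L <mount_tag> <mount_point>
```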
For example:
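Using a hypothetical `Share` tag and mount point:

```sh
sudo mount -t 9p -o trans=virtio,version=9p2000.L Share /home/noraj/Share
```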
Note: it's possible to tweak the `msize` mount option to increase performance.
It's also possible to add it to `/etc/fstab` to mount it automatically.
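For example, with the same hypothetical tag and mount point as above:

```
# /etc/fstab
Share /home/noraj/Share 9p trans=virtio,version=9p2000.L,rw,_netdev,nofail 0 0
```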
Bridged interface#
As we run QEMU/libvirt as a user (`qemu:///session`), we can't use a bridged interface out of the box, nor use the Virtual Networks options in the Connection Details of virt-manager, which would require QEMU/libvirt to be run as root (`qemu:///system`).
So by default our VM will use NAT, e.g. these Virtual Network Interface settings in virt-manager:
- Network source: `Usermode networking`
- Device model: `virtio`
But with NAT the interface is isolated.
Let's see how we can use a bridged device instead so that the VM can have an interface exposed to other machines.
On the host we need to specify in `/etc/qemu/bridge.conf` the bridge interface that QEMU will be authorized to use.
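Since the bridge will be named `virbr0` in both options below:

```
# /etc/qemu/bridge.conf
allow virbr0
```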
Both of the following options use TAP networking, and virt-manager will use qemu-bridge-helper.
Option 1: create a NAT bridge with virsh#
The NAT bridge can be used so that the VM can interact with the host or other VMs on the same bridge, but it will not be exposed to the LAN. DHCP will be managed by libvirt.
We'll create an XML file for the network: `$EDITOR virbr0.xml`.
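A sketch of such a definition; the subnet is an arbitrary choice and the MAC is a placeholder, as explained just below:

```xml
<network>
  <name>virbr0</name>
  <forward mode="nat"/>
  <bridge name="virbr0" stp="off"/>
  <!-- replace with a real locally administered MAC -->
  <mac address="52:54:00:XX:XX:XX"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.2" end="192.168.100.254"/>
    </dhcp>
  </ip>
</network>
```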
The bridge interface virbr0 will have Spanning Tree Protocol (STP) disabled and the traffic will be forwarded via NAT. Of course, you need to replace the MAC address with a proper value. You are also free to change the bridge address or the DHCP range.
To import a network from a XML file:
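Networks can't live in the session instance, so define it on the system one (running virsh as root defaults to `qemu:///system`):

```sh
sudo virsh net-define virbr0.xml
```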
Then we can start the network:
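With the network name from the XML:

```sh
sudo virsh net-start virbr0
```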
To enable (auto-start) it:
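Likewise:

```sh
sudo virsh net-autostart virbr0
```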
We can check the last two commands were accepted by listing all the networks:
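The network should show as active and autostarted:

```sh
sudo virsh net-list --all
```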
Then we can confirm the bridge interface is up:
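For instance with iproute2:

```sh
ip -br addr show dev virbr0
```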
Now the only thing we need is to change the NIC settings of the VM to:
- Network source: `Bridge device`
- Device name: `virbr0`
- Device model: `virtio`
We can run `ip link show master virbr0` to see which interfaces are attached to the bridge.
No interface will be output when no VMs are running. One TAP interface will be created per machine while VMs are running. No physical interface should ever be displayed in the bridge since it uses NAT to provide connectivity.
Option 2: create a full bridge with nmcli and virsh#
The full bridge can be used so that the VM can interact with every machine on the LAN. The DHCP will be managed by the DHCP server on the LAN.
We will create the bridge interface with `nmcli`. Why? Because iproute2 configuration is volatile; persistent configuration requires a network manager, and I'm using NetworkManager. Also, `brctl` from bridge-utils is deprecated. If you use another network manager instead of NetworkManager, an Arch Linux user can use netctl, or systemd-networkd for other setups.
In theory, on KDE (`plasma-nm`) you could configure a bridge interface from the GUI in the NetworkManager settings; while it's possible to create the interface, I haven't been able to make it work that way. So instead I created it from the CLI.
Create the bridge interface with STP off:
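Keeping the `virbr0` name used throughout this guide:

```sh
nmcli connection add type bridge ifname virbr0 con-name virbr0 bridge.stp no
```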
Making our physical interface a slave to the bridge:
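Replace `eth0` with your actual interface name (see `ip link`); NetworkManager will auto-name the connection `bridge-slave-eth0`:

```sh
nmcli connection add type bridge-slave ifname eth0 master virbr0
```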
Then we will disable the existing connection (you can get its name with `nmcli connection show --active`); doing so will cut you off from the internet:
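For example, with a hypothetical default profile name:

```sh
nmcli connection down "Wired connection 1"
```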
Starting the new bridge (and its slave):
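The slave may come up with its master; if not, bring it up too:

```sh
nmcli connection up virbr0
nmcli connection up bridge-slave-eth0  # auto-generated slave name
```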
We can keep a dynamic IP on the bridge interface so it will use DHCP to get an address:
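Setting the IPv4 method to `auto`:

```sh
nmcli connection modify virbr0 ipv4.method auto
```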
But I don't want to use the hardcoded DNS server from my ISP box, so I discard the DNS received from DHCP:
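That's the `ignore-auto-dns` property:

```sh
nmcli connection modify virbr0 ipv4.ignore-auto-dns yes
```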
And I set some others (for example Cloudflare + FreeNom):
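Cloudflare's `1.1.1.1` plus Freenom World's `80.80.80.80`, as an example:

```sh
nmcli connection modify virbr0 ipv4.dns "1.1.1.1 80.80.80.80"
```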
Apply the changes:
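Re-activating the connection applies the new settings:

```sh
nmcli connection up virbr0
```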
Then we can confirm the bridge interface is up:
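Same check as before:

```sh
ip -br addr show dev virbr0
```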
We can run `ip link show master virbr0` to see which interfaces are attached to the bridge.
The physical interface will be output even when no VMs are running. In addition, one TAP interface will be created per machine while VMs are running.
But we are not done yet. We still need to create the virtual network that will use the bridge:
We'll create an XML file for the network: `$EDITOR virbr0.xml`.
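This time the network simply delegates to the existing bridge; a minimal sketch:

```xml
<network>
  <name>virbr0</name>
  <forward mode="bridge"/>
  <bridge name="virbr0"/>
</network>
```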
To import a network from a XML file:
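As before, against the system instance:

```sh
sudo virsh net-define virbr0.xml
```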
Then we can start the network:
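Same as in option 1:

```sh
sudo virsh net-start virbr0
```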
To enable (auto-start) it:
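And:

```sh
sudo virsh net-autostart virbr0
```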
We can check the last two commands were accepted by listing all the networks:
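The network should show as active and autostarted:

```sh
sudo virsh net-list --all
```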
Now the only thing we need is to change the NIC settings of the VM to:
- Network source: `Bridge device`
- Device name: `virbr0`
- Device model: `virtio`
PS: Bridge interfaces created with nmcli will appear in the NetworkManager UI, but it won't be possible to modify them that way, and the slave interface won't appear in the settings.