Install QEMU/KVM/Virt-Manager on ArchLinux

Intro#

In short, QEMU is an emulator and KVM is a special operating mode of QEMU that supports hardware-assisted virtualization.

KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module.

Ref.

libvirt is a middleware library that provides a common management API for many hypervisors. Virt-Manager is a GUI for libvirt and so allows us to manage KVM VMs.

Unlike other virtualization programs such as VirtualBox and VMware, QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).
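
For example, launching a VM by hand looks something like this (a minimal sketch; the image name and sizing are illustrative):

qemu-system-x86_64 \
  -enable-kvm \
  -m 4G \
  -smp 2 \
  -drive file=archlinux.qcow2,format=qcow2,if=virtio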

Libvirt provides a convenient way to manage QEMU virtual machines. See list of libvirt clients for available front-ends.

Ref.

Requirements#

Check the KVM wiki page to see if you meet the requirements.

Installation#

Install all the dependencies for our setup.

sudo pacman -S virt-manager qemu-desktop libvirt edk2-ovmf dnsmasq iptables-nft

Configuring KVM#

To use KVM as a normal user without root, we need to configure libvirt: set the UNIX domain socket group ownership to libvirt and the UNIX socket permissions to read-write.

So uncomment the following lines.

sudoedit /etc/libvirt/libvirtd.conf

...
unix_sock_group = 'libvirt'
...
unix_sock_rw_perms = '0770'
...

Then we need to add our user to the libvirt user group.

sudo usermod -a -G libvirt noraj

For this change to take effect, either log out and back in (or reboot) or run newgrp libvirt.
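
You can verify the new group membership (the username is mine, adapt it):

groups noraj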

We also need to set our user in /etc/libvirt/qemu.conf.

# Some examples of valid values are:
#
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#
user = "noraj"

# The group for QEMU processes run by the system instance. It can be
# specified in a similar way to user.
group = "noraj"

Activating KVM#

Let's enable auto-start for KVM and start it:

sudo systemctl enable libvirtd.service
sudo systemctl start libvirtd.service
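
Alternatively, both steps can be combined into one command:

sudo systemctl enable --now libvirtd.service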

Upon opening Virt-Manager, it will default to the system variant (root) of the QEMU connection. This can be changed to the user connection by going to: File > Add Connection.

Now select QEMU/KVM user session as the Hypervisor and click OK. This will auto-connect to the user session. You can then disconnect and remove the system connection if desired.

Ref.

Guest support and transition from VirtualBox#

If we want to reuse a disk from a VirtualBox VM, we'll need to convert the VDI disk to qcow2.
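
A sketch of the conversion with qemu-img (file names are examples):

qemu-img convert -f vdi -O qcow2 MyVM.vdi MyVM.qcow2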

Then to take advantage of the VirtIO devices we'll need to enable some kernel modules.

To be able to launch my VM in KVM the first time, I had to add the disk as a SATA device rather than a VirtIO device. Then I started the VM and made the following changes.

To boot from a VirtIO disk, the initial ramdisk must contain the necessary modules. I'm not sure the mkinitcpio autodetect hook handles this automatically in my case, so I went with the manual approach and listed the modules explicitly.

/etc/mkinitcpio.conf

MODULES=(virtio virtio_blk virtio_pci virtio_net)

Then we can force rebuild the init ramdisk:

sudo mkinitcpio -p linux

With virtio instead of SATA, the disk's device name will change from /dev/sda to /dev/vda. I had no changes to make in /etc/fstab or /efi/refind_linux.conf because I reference disks by UUID, but otherwise you'll have to update the device names there.
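
If you do reference devices by name, you can switch to UUIDs instead; list them with lsblk -f (or sudo blkid) and use them in /etc/fstab, e.g. (the UUID is illustrative):

UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /  ext4  rw,relatime  0 2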

Now we should be able to use virtio devices.

We can remove the old guest additions from VirtualBox:

sudo pacman -Rns virtualbox-guest-utils

We'll install QEMU guest additions instead:

sudo pacman -S qemu-guest-agent
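
The agent should be started automatically on the guest when the VM exposes the virtio serial channel; if in doubt, you can check it with:

systemctl status qemu-guest-agent.service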

Since the VM will be using SPICE we can improve the user experience by installing a few packages:

  • spice-vdagent: to enable shared clipboard
  • xf86-video-qxl: appropriate video driver (if using QXL rather than virtio)

sudo pacman -S spice-vdagent xf86-video-qxl

Optionally remove the old drivers (though it may be better to keep them as a fallback):

sudo pacman -Rns xf86-video-vmware

Shared folder#

Unfortunately, the virtiofs shared folder, despite being nicely integrated in virt-manager, is not possible in session mode.

It is possible to launch virtiofsd externally, but this requires manual setup, is not flexible, and is not integrated in virt-manager, so it would require manual XML configuration, which is not ideal.
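
For reference, under a system connection (qemu:///system) the virtiofs share is defined with a few lines of domain XML; a sketch, assuming an example share path and mount tag:

<!-- in the <domain> section -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
<!-- in the <devices> section -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/home/noraj/Share'/>
  <target dir='hostshare'/>
</filesystem>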

SPICE WebDAV#

There is the SPICE shared folder option, but while we can create the spice-webdav channel, there is no SPICE WebDAV integration in the client (virt-manager), so we can't mount the shared folder in the VM.

But there is still a way using the virt-viewer client.

In virt-manager VM settings:

  1. Hit the Add Hardware button
  2. Select a Channel device
  3. Choose org.spice-space.webdav.0 as Name, Spice port (spiceport) as Device Type, and keep org.spice-space.webdav.0 as Channel.

Install virt-viewer: sudo pacman -S virt-viewer.

Optionally we can check our VM name with virsh:

virsh list --all
 Id   Name      State
-------------------------
 2    DevArch   running

Then connect to it:

virt-viewer --connect qemu:///session DevArch

In the virt-viewer interface, go into the Preferences menu, which shows a Spice tab with an option for folder sharing. Alternatively you can pass the --spice-shared-dir=/home/noraj/Share option to virt-viewer, but you'll still have to enable sharing in the preferences since it's always disabled by default.
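
For example, everything combined (the shared directory path is mine, adapt it):

virt-viewer --connect qemu:///session --spice-shared-dir=/home/noraj/Share DevArch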

Then on the guest, we have to install a WebDAV server.

sudo pacman -S phodav

Then start it and optionally enable auto-start:

sudo systemctl start spice-webdavd.service
sudo systemctl enable spice-webdavd.service

Finally, we need a WebDAV client to mount the share, for example Dolphin.

We can specify 127.0.0.1 as the server and 9843 as the port when creating a Network Folder, or browse webdav://localhost:9843/ directly.

It's also possible to use Cockpit, but its remote viewer just uses virt-viewer, so it adds an unnecessary extra layer.

virtio-9p#

Alternatively, if we want to stay within virt-manager all along, we can use the older virtio-9p, which is much slower than virtiofs but works in session mode.

In virt-manager VM settings:

  1. Hit the Add Hardware button
  2. Choose a Filesystem device
  3. Select virtio-9p as Driver
  4. Define the source path (folder to share from the host) and target path (a mount tag)

And in the guest, mount the share:

sudo mount -t 9p -o trans=virtio <mount-tag> <mount-point>

For example:

sudo mount -t 9p -o trans=virtio hostshare /home/noraj/Share

Note: it's possible to tweak the msize option to increase performance.
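
For example (the value is just an example; benchmark what suits your workload):

sudo mount -t 9p -o trans=virtio,msize=512000 hostshare /home/noraj/Share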

It's also possible to add it to /etc/fstab to mount it automatically.

# 9P shared folder
hostshare /home/noraj/Share 9p  trans=virtio,rw,_netdev 0 0
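
We can then test the entry without rebooting:

sudo mount -a
findmnt /home/noraj/Share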

Bridged interface#

As we run QEMU/libvirt as a user (qemu:///session), we can't use a bridged interface out of the box, nor the Virtual Networks options in the Connection Details of virt-manager, as those would require QEMU/libvirt to run as root (qemu:///system).

So by default our VM will use NAT, e.g. these Virtual Network Interface settings in virt-manager:

  • Network source: Usermode networking
  • Device model: virtio

But with this usermode NAT, the VM's interface is isolated from other machines.

Let's see how we can use a bridged device instead so that the VM can have an interface exposed to other machines.

On the host we need to specify in /etc/qemu/bridge.conf the bridge interface that QEMU will be authorized to use.

allow virbr0
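
If the file doesn't exist yet, a one-liner can create it:

echo 'allow virbr0' | sudo tee -a /etc/qemu/bridge.conf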

Both of the following options use TAP networking, and virt-manager will use qemu-bridge-helper.

Option 1: create a NAT bridge with virsh#

The NAT bridge can be used so that the VM can interact with the host and other VMs on the same bridge, but it will not be exposed to the LAN. DHCP will be managed by libvirt.

We'll create an XML file for the network: $EDITOR virbr0.xml.

<network>
  <name>NAT-bridge</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='off' delay='0'/>
  <mac address='ff:ff:ff:ff:ff:ff'/>
  <ip address='192.168.142.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.142.2' end='192.168.142.254'/>
    </dhcp>
  </ip>
</network>

The bridge interface virbr0 will have Spanning Tree Protocol (STP) disabled and its traffic will be forwarded via NAT. Of course, you need to replace the placeholder MAC with a proper value. You are also free to change the bridge address or the DHCP range.
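
If you need to generate one, a quick sketch that picks a random MAC under the 52:54:00 prefix conventionally used for KVM guests:

printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))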

To import the network from the XML file:

$ sudo virsh net-define virbr0.xml

Then we can start the network:

$ sudo virsh net-start NAT-bridge

To enable (auto-start) it:

$ sudo virsh net-autostart NAT-bridge

We can check that the last two commands were accepted by listing all networks:

$ sudo virsh net-list --all

Then we can confirm the bridge interface is up:

$ ip addr show dev virbr0

Now the only thing we need is to change the NIC settings of the VM to:

  • Network source: Bridge device
  • Device name: virbr0
  • Device model: virtio

We can run ip link show master virbr0 to see which interfaces are attached to the bridge.

No interfaces will be listed while no VMs are running. One TAP interface will be created per running VM. No physical interface should ever appear in the bridge since it uses NAT to provide connectivity.

Option 2: create a full bridge with nmcli and virsh#

The full bridge can be used so that the VM can interact with every machine on the LAN. DHCP will be handled by the LAN's DHCP server.

We will create the bridge interface with nmcli. Why? Because iproute2 configuration is volatile, persistent configuration requires a network manager, and I'm using NetworkManager. Also, brctl from bridge-utils is deprecated. If you use another network manager, use its own tooling instead of nmcli, e.g. netctl or systemd-networkd on Arch Linux.

In theory, on KDE (plasma-nm) you could configure a bridge interface from the GUI in the NetworkManager settings; while it's possible to create the interface there, I haven't been able to make it work that way. So instead I created it from the CLI.

Create the bridge interface with STP off:

$ nmcli connection add type bridge ifname virbr0 stp no

Making our physical interface a slave to the bridge:

$ nmcli connection add type bridge-slave ifname enp9s0 master virbr0

Then we will disable the existing connection (you can find its name with nmcli connection show --active); be aware that doing so will cut your internet access:

$ nmcli connection down "Wired connection 1"

Starting the new bridge (and its slave):

$ nmcli connection up bridge-virbr0
$ nmcli connection up bridge-slave-enp9s0

We can keep a dynamic IP on the bridge interface so it will use DHCP to get an address:

$ nmcli connection modify bridge-virbr0 ipv4.method auto

But I don't want to use the hardcoded DNS server from my ISP box, so I discard the DNS received from DHCP:

$ nmcli connection modify bridge-virbr0 ipv4.ignore-auto-dns true

And I set some others (for example Quad9 + Freenom):

$ nmcli connection modify bridge-virbr0 ipv4.dns 9.9.9.9,149.112.112.112,80.80.80.80,80.80.81.81

Apply the changes:

$ nmcli connection up bridge-virbr0

Then we can confirm the bridge interface is up:

$ ip addr show dev virbr0

We can run ip link show master virbr0 to see which interfaces are attached to the bridge.

The physical interface should be listed even when no VMs are running. In addition, one TAP interface will be created per running VM.

But we are not done yet. We still need to create the virtual network that will use the bridge:

We'll create an XML file for the network: $EDITOR virbr0.xml.

<network>
    <name>full-bridge</name>
    <forward mode="bridge" />
    <bridge name="virbr0" />
</network>

To import the network from the XML file:

$ sudo virsh net-define virbr0.xml

Then we can start the network:

$ sudo virsh net-start full-bridge

To enable (auto-start) it:

$ sudo virsh net-autostart full-bridge

We can check that the last two commands were accepted by listing all networks:

$ sudo virsh net-list --all

Now the only thing we need is to change the NIC settings of the VM to:

  • Network source: Bridge device
  • Device name: virbr0
  • Device model: virtio

PS: Bridge interfaces created with nmcli will appear in the NetworkManager UI, but it won't be possible to modify them that way, and the slave interface won't appear in the settings.
