Getting Started With KubeVirt
When I first got into homelab as a hobby, I had a single server. Of course, that’s how it usually starts for those of us in the homelab community; but for most of us, it doesn’t end there. I know this, because I am most of us.
Instead of a single server, my homelab now consists of 1-3 Minisforum UM480XT mini PCs. I say “1-3” because I switch things up a lot depending on my mood and what I feel like maintaining.
My interests eventually led me to learning about containerization and orchestration, the latter being an essential concept in providing highly available infrastructure for applications in spite of hardware failure.
I opted to use my homelab to get hands on experience with the orchestration platform I wanted to learn, Kubernetes. Maybe you’ve already heard of it.
Almost everything I currently run in my homelab is well-suited to containerization and is a good fit for deploying with Kubernetes. There are a couple of exceptions to this.
- My managed switches are UniFi switches, so I run the UniFi controller application on the network management VLAN. I use Glenn R’s UniFi installation scripts for this, since the UniFi controller application is all packaged up for Ubuntu.
- I occasionally spin up VMs for the traditional VPS experience, for learning in a sandbox, experimenting with operating systems, or for running stateful components like databases (or anything that comes packaged with a database.)
Those two exceptions don’t make up the bulk of my software deployments, so I’d prefer to optimize for the common case, which is deploying stateless applications on Kubernetes. To that end, I’ve chosen to deploy Kubernetes on one of my mini PCs, and I figured I’d bring the virtual machines along for the ride in case I settle on Kubernetes for the longer term in my homelab.
Up until now, I’ve relied on the traditional Linux KVM virtualization stack, which is KVM, QEMU, libvirt, and Virt Tools.
Luckily, I don’t have to leave it behind! KubeVirt bridges the traditional Linux KVM stack to an orchestration-based paradigm using Kubernetes.
Requirements
- Virtual machines need to be on their designated VLANs (especially the UniFi controller, as mentioned above.)
- There shouldn’t be a network stack in between the virtual machine and its storage devices. I don’t want the CPU, memory, or network overhead of highly available CSI implementations, and if any migrations need to happen, offline migrations are perfectly acceptable (KubeVirt only supports live migration if the PersistentVolume’s access mode is ReadWriteMany, which local NVMe storage won’t be.)
Host networking
To start, I did a fresh installation of Ubuntu 24.04 with LVM on one of my idle mini PCs.
For the VLAN requirement, let’s start with setting up networking on the host. I always do this step first because I don’t have a remote management interface for these mini PCs and messing up the networking over SSH means I have to get up and plug the computer into a monitor and keyboard anyway.
# /etc/netplan/net.yaml
network:
  version: 2
  ethernets:
    eno1: {}
  vlans:
    vlan1:
      id: 1
      link: eno1
    vlan10:
      id: 10
      link: eno1
  bridges:
    br1:
      interfaces:
        - vlan1
      dhcp4: no
    br10:
      interfaces:
        - vlan10
      dhcp4: yes
This is fairly straightforward. “vlan1” is my network management VLAN. “vlan10” is my regular LAN. I want my Kubernetes node on the regular VLAN, which is why dhcp4 is set to yes for br10 and no for br1. br1 is only there for the UniFi controller VM to be on the network management VLAN.
I configured a DHCP static lease on my router after running sudo netplan apply and was off to the races.
At this point, I was able to do the rest of the setup from the comfort of my couch over SSH.
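Before moving on to storage, I like to sanity-check that the bridges came up the way I expect. These are just standard iproute2 and systemd-networkd commands, so nothing here is specific to this setup:
$ networkctl status br10       # should be "routable" with a DHCP lease
$ ip link show master br10     # vlan10 should be enslaved to the bridge
$ ip addr show br1             # no IPv4 address here, by design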
Host storage
Now for the storage requirement. Ubuntu’s installer doesn’t consume the entire volume group when it sets up LVM, so it offers a lot of flexibility for what I want to do with the rest of the space, even though I just have the single NVMe drive installed.
$ sudo lvcreate --type thin-pool -L 250G ubuntu-vg -n workload
It’s not necessary to create an LV for each base image, especially if you use the KubeVirt Containerized Data Importer, but I’m not currently using it.
$ sudo lvcreate -V 5G -T ubuntu-vg/workload -n ubuntu-24.04_amd64
$ curl -LkO https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
$ sudo qemu-img convert -O raw noble-server-cloudimg-amd64.img /dev/ubuntu-vg/ubuntu-24.04_amd64
The qemu-img step is just because Ubuntu offers the image as a QCOW2 image, but the LV is a raw block device, so QCOW2 won’t work here.
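If you want to double-check the result, qemu-img and LVM will both tell you what happened; none of this is required, it’s just reassuring:
$ qemu-img info noble-server-cloudimg-amd64.img            # file format: qcow2
$ sudo qemu-img info /dev/ubuntu-vg/ubuntu-24.04_amd64     # file format: raw
$ sudo lvs ubuntu-vg                                       # thin pool Data% ticks up after the convert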
Kubernetes
At this point, I’m ready to install Kubernetes. I’ll be installing K3s, which is a lightweight Kubernetes distribution. I might move on to kubeadm or something more manual in the future, but I was excited to get started on this project, so K3s it is!
I won’t be overly opinionated here; the only exceptions are disabling the local path provisioner and passing an argument to the kubelet so that it’s possible to deploy privileged pods (which is necessary for the KubeVirt pods to have access to /dev/kvm for hardware-accelerated virtualization.)
# /etc/rancher/k3s/config.yaml
---
write-kubeconfig-mode: "0644"
tls-san:
  - "metalk8s.local.connorkuehl.net"
disable:
  - local-storage
kubelet-arg:
  - "allow-privileged=true"
node-ip: "192.168.88.147"
bind-address: "192.168.88.147"
cluster-init: true
Note that the mode I chose for write-kubeconfig-mode is overly permissive and not acceptable for production.
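For reference, the install itself is just the standard script from the K3s docs; as long as the config file above is in place first, K3s picks it up automatically:
$ curl -sfL https://get.k3s.io | sh -
$ kubectl get nodes     # the single node should report Ready after a minute or so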
I chose to use Multus to connect the virtual machine pods to their respective bridge devices on the host.
Something I really like about K3s is that if you lay down Kubernetes manifests under a well-known directory, a K3s controller will apply them. Even Helm charts:
# /var/lib/rancher/k3s/server/manifests/multus-cni.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: multus
  namespace: kube-system
spec:
  repo: https://rke2-charts.rancher.io
  chart: rke2-multus
  targetNamespace: kube-system
  valuesContent: |-
    config:
      fullnameOverride: multus
      cni_conf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        binDir: /var/lib/rancher/k3s/data/cni/
        kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
        multusAutoconfigDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
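K3s turns that file into a HelmChart object and runs the install as a job in kube-system, so it’s easy to confirm the chart actually went through:
$ kubectl -n kube-system get helmchart multus
$ kubectl -n kube-system get pods | grep -i multus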
With Multus installed, let’s teach it about the host bridges:
# /var/lib/rancher/k3s/server/manifests/vlan1.yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: "vlan1"
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br1",
      "vlan": 1,
      "ipam": {}
    }'
# /var/lib/rancher/k3s/server/manifests/vlan10.yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: "vlan10"
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br10",
      "vlan": 10,
      "ipam": {}
    }'
Note 1: on a multi-node setup, this will not work unless the host networking is configured identically on every node. For example, if br10 bridges to VLAN 10 on node A, but to VLAN 11 (or doesn’t exist) on node B, that would be a problem.
Note 2: the IPAM block is disabled because I will only use these NetworkAttachmentDefinitions with KubeVirt VMs whose guest operating systems will DHCP to get an IP address from my router.
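Multus installs the NetworkAttachmentDefinition CRD, so once K3s applies those manifests it’s easy to confirm they exist:
$ kubectl get network-attachment-definitions
$ kubectl get net-attach-def     # same thing, using the CRD’s short name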
Alright, what about the LVM thin pool I set up?
I chose to use local-static-provisioner to automatically expose the LVs that I pre-create on the node. All I have to do is create the LV and drop a symlink to it in a discovery directory, and the controller will create the PersistentVolume object in the kube-apiserver for me. The alternative would be writing a bunch of YAML like this after setting up each LV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-9c25bcdb
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /mnt/disks/nvme/hellokubevirt
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme
  volumeMode: Block
I’d rather not do so much typing.
Let’s set up the discovery directory:
$ sudo mkdir -p /mnt/disks/nvme
I’ll install local-static-provisioner with Helm, so here are the values for the chart:
# local-static-provisioner.values.yaml
classes:
  - name: local-nvme
    hostDir: /mnt/disks/nvme
    volumeMode: Block
    storageClass:
      reclaimPolicy: Retain
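One assumption in the command below: the chart repo has already been added under the local-static-provisioner alias, something along these lines (double-check the URL against the project’s README):
$ helm repo add local-static-provisioner https://kubernetes-sigs.github.io/sig-storage-local-static-provisioner
$ helm repo update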
$ helm template \
    --debug local-static-provisioner/local-static-provisioner \
    --version v2.8.0 \
    --namespace local-static-provisioner \
    -f local-static-provisioner.values.yaml | sudo tee /var/lib/rancher/k3s/server/manifests/local-static-provisioner.generated.yaml
For some reason, I couldn’t get this working as a HelmChart Addon like I did with Multus above.
Lastly, I followed the KubeVirt installation guide.
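At the time of writing, the guide boils down to roughly the following; check the guide itself for the current instructions:
$ export VERSION=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
$ kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
$ kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
$ kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
virtctl, which shows up further down, comes from the same releases page (or via the krew plugin).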
Creating a test VM
First, I’ll create a copy of the base image:
$ sudo lvcreate -V 10G -T ubuntu-vg/workload -n hellokubevirt
$ sudo qemu-img convert -O raw /dev/ubuntu-vg/ubuntu-24.04_amd64 /dev/ubuntu-vg/hellokubevirt
And expose it to the local-static-provisioner:
$ sudo ln -s /dev/ubuntu-vg/hellokubevirt /mnt/disks/nvme/
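After a few seconds, the provisioner discovers the new block device and creates a PersistentVolume for it; the generated name is what I reference in the PVC below:
$ kubectl get pv
# Expect an entry named something like local-pv-<hash> with the local-nvme storage class.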
And finally… a VM.
I’ll concede it’s a bit of a shock in its final form all at once, but I’m too lazy to type up the progression that resulted from running kubectl explain ... for just about everything.
# hellokubevirt.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hellokubevirt
  namespace: default
spec:
  volumeName: local-pv-9c25bcdb
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  storageClassName: local-nvme
  resources:
    requests:
      storage: 10G
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: hellokubevirt
  namespace: default
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1024M
        cpu:
          cores: 1
        devices:
          disks:
            - name: disk0
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: net0
              bridge: {}
      networks:
        - name: net0
          multus:
            networkName: default/vlan10
      volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: hellokubevirt
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              hostname: hellokubevirt
              users:
                - name: root
                  ssh_import_id: ['gh:connorkuehl']
$ kubectl create -f hellokubevirt.yaml
$ virtctl console hellokubevirt
# ...
# Output omitted for brevity.
[ OK ] Reached target cloud-init.target - Cloud-init target.
Ubuntu 24.04.3 LTS hellokubevirt ttyS0
hellokubevirt login:
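The console confirms the guest booted; from the Kubernetes side, the VMI object reports status too:
$ kubectl get vmi hellokubevirt
# Columns include PHASE, IP, and NODENAME (the IP may be blank unless the guest runs the qemu-guest-agent).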
One last wrinkle
If I stop and then start the VM, networking breaks. Astute readers might have noticed that I did not specify a MAC address in the VirtualMachine spec. Each time the VM is stopped and started, KubeVirt generates a new VMI from the VirtualMachine spec, and that includes generating a fresh MAC address if one was omitted. This makes netplan in the guest OS very upset, since the config cloud-init generated matches the interface by MAC address.
To fix it, I just include a MAC address in the spec (or reuse the one that was generated first):
$ kubectl get vmi hellokubevirt -o=jsonpath='{.spec.domain.devices.interfaces[0].macAddress}'
b6:12:29:88:50:59
$ kubectl edit vm hellokubevirt
interfaces:
  - bridge: {}
    macAddress: b6:12:29:88:50:59
    name: net0
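Editing the VirtualMachine object doesn’t change the running VMI, so the pinned MAC address only takes effect on the next stop/start cycle:
$ virtctl stop hellokubevirt
$ virtctl start hellokubevirt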
That’s sufficient for me.
I did find an alternative workaround in Xe Iaso’s blog post, but I chose not to use it because I need a stable MAC address anyway; I prefer to configure static DHCP leases at my router.
Conclusion
Overall, I’m quite pleased with how easy it was to get started with KubeVirt.
Will I stick with it? Only time will tell. I do miss virt-install, and I’m a big fan of how simple plain old Linux with libvirt is.