Five years ago I wrote this blog entry about KVM; at that time I needed to run a Microsoft Windows 7 VM for some tests at work.
This time I need a lab with a couple of VMs to run a Kubernetes cluster on my laptop, and I've decided to use KVM to create the required infrastructure.
Checking Requirements
First of all, let's check whether virtualization is enabled at the BIOS level by running:
# dmesg | grep -i kvm
[    3.614335] kvm: disabled by bios
As the log shows, virtualization is disabled, so we need to enter the BIOS menu and enable the VT-x options. With this change, the next boot log shows:
# dmesg | grep -i virt
[    1.017892] DMAR: Intel(R) Virtualization Technology for Directed I/O
In addition, we also need to check that our CPU supports hardware virtualization; we are going to include an encryption check as well, running this command:
$ egrep -wo 'vmx|lm|aes' /proc/cpuinfo | sort | uniq \
  | sed -e 's/aes/Hardware encryption=Yes (&)/g' \
        -e 's/lm/64 bit cpu=Yes (&)/g' \
        -e 's/vmx/Intel hardware virtualization=Yes (&)/g'
Hardware encryption=Yes (aes)
64 bit cpu=Yes (lm)
Intel hardware virtualization=Yes (vmx)
Reviewing the CPU information from /proc/cpuinfo, we can find these flags:
lm – this flag means your system has a 64 bit CPU (Intel or AMD)
vmx – Intel VT-x, virtualization support
aes – AES/AES-NI advanced encryption support
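As an alternative sanity check, the kvm-ok tool from the cpu-checker package (available in Debian) reports whether KVM acceleration can actually be used:
# apt install cpu-checker
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used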
Installing software
Let’s install all needed packages:
# apt install qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils
The virtinst package is required if you want to use the virt-install command.
To connect to the local libvirt daemon:
# adduser <your-user> libvirt   # add your regular user to the libvirt group
$ virsh list --all
 Id   Name   State
--------------------
We are going to use libvirt, an open-source API, daemon and management tool for platform virtualization, to make managing our KVM environment easier.
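Note that for unprivileged users virsh typically talks to the session instance (qemu:///session); you can check which instance you are connected to and target the system one explicitly:
$ virsh uri
qemu:///session
$ virsh -c qemu:///system list --all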
Setting up our network (host)
As a requirement, our VMs need to be able to reach each other, since they are going to be part of a cluster, so let's set up bridged networking:
# ip link add name br0 type bridge
# ip link set dev br0 up
# ip link set dev enp0s31f6 master br0
$ ip a s
...
5: br0: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6e:7b:73:a6:a3:4c brd ff:ff:ff:ff:ff:ff
In my case enp0s31f6 is the host Ethernet interface to be connected to the bridge; you can find yours using this command:
$ ip -f inet address show
# Or in a short way
$ ip -f inet a s
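If you are not sure which one is your main interface, the default route usually points at it (the gateway shown here is the one from my network, adjust as needed):
$ ip route show default
default via 192.168.1.1 dev enp0s31f6 ...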
We're making our bridge persistent using a separate file located at /etc/network/interfaces.d/br0:
iface br0 inet static
    address 192.168.1.2
    broadcast 192.168.1.255
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports enp0s31f6
    bridge_stp off       # disable Spanning Tree Protocol
    bridge_waitport 0    # no delay before a port becomes available
    bridge_fd 0          # no forwarding delay
Let's check our new configuration by restarting the network-manager service (first check whether your network setup is actually controlled by NetworkManager):
# systemctl restart network-manager
# systemctl status network-manager
$ ip a s br0
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:7b:73:a6:a3:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global br0
       valid_lft forever preferred_lft forever
Edit the /etc/sysctl.conf file, adding the following options:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
net.ipv4.ip_forward = 1
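These settings can also be applied on the fly without waiting for the reboot; note that the bridge-nf-call keys only exist once the br_netfilter module is loaded:
# modprobe br_netfilter
# sysctl -p /etc/sysctl.conf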
A reboot is recommended at this point. Afterwards, let's check the bridge setup within KVM:
$ virsh net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------
 default   inactive   no          yes
To get the br0 MAC address, run:
$ ip address show dev br0 | awk '$1=="link/ether" {print $2}'
If we want to modify the network definition, for example to set up the DHCP range, it can be done like this:
$ export LIBVIRT_DEFAULT_URI='qemu:///system'
$ virsh net-edit default
...
$ virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>XXXXXXXX</uuid>
  <forward mode='nat'/>
  <bridge name='br0' stp='off' delay='0'/>
  <mac address='6e:7b:73:a6:a3:4c'/>
  <ip address='192.168.1.2' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.3' end='192.168.1.8'/>
    </dhcp>
  </ip>
</network>
$ virsh net-start default
Network default started
$ virsh net-autostart default
$ virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
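Once VMs start taking addresses from that range, you can check which leases the libvirt network has handed out:
$ virsh net-dhcp-leases default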
Managing VMs from the command line
Some of the most common commands to work with VMs:
$ export LIBVIRT_DEFAULT_URI='qemu:///system'
$ virsh start k8s-master      # Boot a VM
$ virsh shutdown k8s-master   # Stop a VM
$ virsh suspend k8s-master    # Suspend a VM
## Delete a VM (destroy and undefine):
$ virsh destroy k8s-master
$ virsh undefine k8s-master
Domain k8s-master has been undefined
### Remember to remove the image disk file to release space on your host
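Since the VMs we create below are configured with a serial console, virsh can also attach to them directly, which is handy when networking is broken (exit the console with Ctrl+]):
$ virsh console k8s-master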
With our bridge correctly configured, we can set up subsequent VM deployments using this flag:
$ virt-install --network bridge=br0...
Let’s script VM creation
Let's define a script to create our VMs (memory, disk and vcpus will depend on your host hardware):
#!/bin/sh
if [ -z "$1" ] ; then
    echo "Specify a VM name."
    exit 1
fi
virt-install \
    --name "$1" \
    --ram 4096 \
    --disk path=/your/path/images/$1.img,size=10 \
    --vcpus 2 \
    --os-type linux \
    --os-variant debian10 \
    --network bridge:br0,model=virtio \
    --graphics none \
    --console pty,target_type=serial \
    --location /your/path/images/debian-10.1.0-amd64-netinst.iso \
    --extra-args 'console=ttyS0,115200n8 serial'
Let’s create our first VM:
$ chmod +x create_vm.sh
$ ./create_vm.sh k8s-master
In this tutorial I'm not going to describe how to set up a Debian GNU/Linux VM, but some minimal requirements in our case could be:
- Do NOT set up a swap partition (use the manual partitioning option to create just one partition)
- If you already have swap enabled, you can disable it with this command:
# swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
- Install the OpenSSH server and Standard system utilities (deselect the Desktop environment and Print server tasks)
- Set up a static IP configuration (see the example below)
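As an illustration, a minimal static configuration in the VM's /etc/network/interfaces could look like this; the interface name and addresses are just example values, adjust them to your bridge subnet and keep them outside the DHCP range defined earlier:
# Example /etc/network/interfaces stanza inside the VM (example values)
auto enp1s0
iface enp1s0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1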
Configuring our master node (control plane)
Once we have our VMs set up, it's time to install the Kubernetes cluster software. Steps on the MASTER NODE:
# apt install -y docker.io curl sudo gnupg
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# mkdir -p /etc/systemd/system/docker.service.d
# systemctl daemon-reload
# systemctl restart docker
$ curl -sL https://gist.githubusercontent.com/neklaf/92ff1b15b26c8bd42feb5755586d1af5/raw/c786c40959303e7d51485c271535cbb9bdd6a638/config.sh | sudo sh
# curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml -O
# vim calico.yaml
...
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/24"
...
# kubeadm init --pod-network-cidr 192.168.0.0/24
...
Your Kubernetes control-plane has initialized successfully!
...
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl apply -f calico.yaml
# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   7m58s   v1.15.0
# source <(kubectl completion bash)
# echo "source <(kubectl completion bash)" >> ~/.bashrc
# OPTIONAL: allow workloads to be scheduled on the master node
# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
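Before joining workers it's worth making sure the control-plane and Calico pods actually come up:
# kubectl get pods -n kube-system   # everything should eventually reach Running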
Let's create a Discovery Token CA Hash on the master node to ensure that a worker joins the cluster in a secure way. To generate this hash:
# openssl x509 -pubkey \
  -in /etc/kubernetes/pki/ca.crt | openssl rsa \
  -pubin -outform der 2>/dev/null | openssl dgst \
  -sha256 -hex | sed 's/^.* //'
06dfd9ea6a48fe5a985f7041dd28ae567d797b6140cab2e10360ebe95f0bd58b
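Alternatively, kubeadm can print a ready-to-use join command, with a fresh token and this hash already filled in:
# kubeadm token create --print-join-command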
All static Pod manifest files are kept in this directory: /etc/kubernetes/manifests
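On a kubeadm master you should find the static Pod definitions of the control-plane components there:
# ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml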
Setting up a worker node
Steps on a WORKER NODE:
# apt install -y docker.io curl sudo gpg
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# mkdir -p /etc/systemd/system/docker.service.d
# systemctl daemon-reload
# systemctl restart docker
$ curl -sL https://gist.githubusercontent.com/neklaf/92ff1b15b26c8bd42feb5755586d1af5/raw/412c31c364fd0cc87e49fac530d150dc6bcf2a06/config.sh | sudo sh
# kubeadm join --token qziqar.oou2omii1bdwi64e <MASTER-IP>:6443 \
    --discovery-token-ca-cert-hash sha256:06dfd9ea6a48fe5a985f7041dd28ae567d797b6140cab2e10360ebe95f0bd58b
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
master# scp -r ~/.kube root@<WORKER-IP>:
If your token has expired, follow these steps:
master# kubeadm token create
gmovhu.m6dw6x3meehusbr1
master# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
gmovhu.m6dw6x3meehusbr1   23h   2020-01-29T18:56:56+01:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
node2:~# kubeadm join --token gmovhu.m6dw6x3meehusbr1 master:6443 \
    --discovery-token-ca-cert-hash sha256:06dfd9ea6a48fe5a985f7041dd28ae567d797b6140cab2e10360ebe95f0bd58b
master# scp ~/.kube/config node2:.kube/config
node2:~# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   64d     v1.15.0
node1    Ready    <none>   64d     v1.15.0
node2    Ready    <none>   3m17s   v1.15.0
Let's check from the master node that the new node has been added to the cluster (the same check works from any cluster node):
master# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   58m     v1.15.0
node1    Ready    <none>   2m17s   v1.15.0
master# kubectl run nginx --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
worker# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-7bb7cd8db5-ghmwv   1/1     Running   0          40m   192.168.0.129   node1   <none>           <none>
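As a final smoke test you could expose that deployment as a service and reach it from any node; the ClusterIP below is a placeholder, use the one reported by kubectl:
master# kubectl expose deployment nginx --port 80
service/nginx exposed
master# kubectl get svc nginx
master# curl -s http://<CLUSTER-IP> | grep title
<title>Welcome to nginx!</title>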
Both master and worker nodes should show these network interfaces; you can run this command for a quick check:
# ip a | grep -e dock -e cali -e tun
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.0.128/32 brd 192.168.0.128 scope global tunl0
7: cali9ee1dd76ccc@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
Note that if something went wrong during the installation process, the quickest fix is often a fresh start: destroy the VM and begin again.
Enjoy your fresh new k8s cluster!
—
First they ignore you,
then they laugh at you,
then they try to copy you.
Then you change the world.
— Elizabeth Holmes