Create a Kubernetes cluster locally using Kubespray and libvirt



This post will show a simple tool to create a Kubernetes cluster using libvirt and Kubespray. Libvirt needs no introduction; if you work in a Linux environment, you are likely a fan of libvirt already. I have written a set of playbooks that leverage libvirt to spawn a cluster of virtual machines, optionally expose them via load balancers, and use Kubespray to install Kubernetes on those virtual machines.

The playbooks in this repo can create anything from a single-node cluster to a multi-node cluster with multiple controller nodes and load balancers exposing the control plane. If your workflow involves repeatedly creating a cluster, working on it, and dumping it, this is a suitable tool to try.

You can find the git repo with all the code at https://github.com/technekey/ha-kubernetes-cluster.

Important considerations

  • This tool is only tested on the default libvirt network.
  • Several configuration options are provided in the host vars, such as the number of controller nodes, worker nodes, and load balancer nodes, and the resources for each node type (a sketch on inspecting them follows this list).
  • I highly recommend reviewing the host vars before using the playbook.
  • The provisioning playbook (the part that runs before Kubespray is triggered) was developed on Ubuntu 22.04, so your mileage may vary if you use a different OS.
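As a quick sanity check before provisioning, you can skim the variable files for the sizing and HA-related settings once the repo is cloned (step-1 below). This is only a sketch: the directory names (host_vars/, group_vars/) and the grep patterns are assumptions for illustration, so check the repo for the actual layout; ha_enable is the flag referenced later in this post.

# Sketch: search the variable files for the cluster sizing and HA settings.
# Paths and patterns are illustrative; adjust them to the repo's actual layout.
cd ha-kubernetes-cluster/
grep -RniE 'controller|worker|loadbalancer|ha_enable|memory|cpu' host_vars/ group_vars/ 2>/dev/null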

Step-1: Clone the repo

git clone https://github.com/technekey/ha-kubernetes-cluster


Step-2: For information only; the playbook will install the following tools if they are not already present

- virt-install
- virsh
- virt-ls
- virt-cat
- qemu-img
- cloud-localds
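If you want to see what is already present on your machine before running anything, a quick check like the one below works; the playbook will handle installing whatever is missing.

# Check which of the required tools are already installed
for tool in virt-install virsh virt-ls virt-cat qemu-img cloud-localds; do
    command -v "$tool" >/dev/null 2>&1 && echo "$tool: found" || echo "$tool: missing"
done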


Step-3: Install the required Ansible collections

- ansible-galaxy collection install community.libvirt
- ansible-galaxy collection install community.crypto
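You can confirm both collections are visible to Ansible afterwards:

# List installed collections and filter for the two required ones
ansible-galaxy collection list | grep -E 'community\.(libvirt|crypto)'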


Step-4: Create the cluster nodes

cd ha-kubernetes-cluster/
ansible-playbook cluster-provisioner.yml -e cluster_name=development


The above command creates a set of virtual machines on your host machine and configures the load balancers if ha_enable is set to true in the host vars file. E.g.:

virsh list
 Id   Name                              State
-------------------------------------------------
 15   development-kube-controller-1     running
 16   development-kube-controller-2     running
 17   development-kube-worker-1         running
 18   development-kube-worker-2         running
 19   development-kube-worker-3         running
 20   development-kube-loadbalancer-1   running
 21   development-kube-loadbalancer-2   running
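Because the VMs are attached to the default libvirt network, you can also check which IP address each one received; the hostnames in the lease table should match the VM names above.

# Show the DHCP leases handed out on the default libvirt network
virsh net-dhcp-leases default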


Step-5: Trigger Kubespray to do the magic

cd development/kubespray
ansible-playbook -i inventory/development/hosts.yaml --become --become-user=root cluster.yml -u technekey  --private-key ../id_ssh_rsa
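If Kubespray fails early, the usual culprit is SSH connectivity. A quick ad-hoc ping against the generated inventory, using the same user and private key as above, confirms every node is reachable before re-running cluster.yml (this is just a sanity check, not part of the repo's documented flow):

# Sanity check: confirm Ansible can reach every node in the generated inventory
ansible -i inventory/development/hosts.yaml all -m ping -u technekey --private-key ../id_ssh_rsa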


Step-6: Validate the cluster; installation is DONE!

kubectl cluster-info --kubeconfig inventory/development/artifacts/admin.conf 
Kubernetes control plane is running at https://192.168.122.211:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get node -owide --kubeconfig inventory/development/artifacts/admin.conf 
NAME                            STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
development-kube-controller-1   Ready    control-plane   3m51s   v1.24.3   192.168.122.240   <none>        Ubuntu 22.04 LTS   5.15.0-41-generic   containerd://1.6.6
development-kube-controller-2   Ready    control-plane   3m25s   v1.24.3   192.168.122.35    <none>        Ubuntu 22.04 LTS   5.15.0-41-generic   containerd://1.6.6
development-kube-worker-1       Ready    <none>          2m29s   v1.24.3   192.168.122.24    <none>        Ubuntu 22.04 LTS   5.15.0-41-generic   containerd://1.6.6
development-kube-worker-2       Ready    <none>          2m29s   v1.24.3   192.168.122.106   <none>        Ubuntu 22.04 LTS   5.15.0-41-generic   containerd://1.6.6
development-kube-worker-3       Ready    <none>          2m29s   v1.24.3   192.168.122.75    <none>        Ubuntu 22.04 LTS   5.15.0-41-generic   containerd://1.6.6
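To avoid passing --kubeconfig on every command, point KUBECONFIG at the generated admin.conf. The nginx deployment below is only an illustrative smoke test, not something the playbooks create for you.

# Use the kubeconfig generated by Kubespray for the rest of the session
export KUBECONFIG=$PWD/inventory/development/artifacts/admin.conf

# Optional smoke test: run a throwaway nginx deployment, check it, then clean up
kubectl create deployment nginx --image=nginx
kubectl get pods -l app=nginx
kubectl delete deployment nginx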


Step-7: Additional playbooks are provided to stop, start, and delete the cluster

#delete the cluster, but keep the VM disks
ansible-playbook cluster-delete.yml -e cluster_to_delete=development

#delete the VMs in the cluster along with their disks
ansible-playbook cluster-delete.yml -e cluster_to_delete=development -e delete_disk=true

#shut down all the VMs in the cluster
ansible-playbook cluster-stop.yml -e cluster_to_stop=development

#start all the VMs in the cluster
ansible-playbook cluster-start.yml -e cluster_to_start=development
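After stopping or deleting a cluster, virsh confirms the result; stopped VMs show up as "shut off", while deleted VMs disappear from the list entirely.

# List all VMs, including those that are shut off
virsh list --all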


Summary:


If you are not familiar with high availability, check this page. HA in Kubernetes is discussed here; essentially, the concepts discussed there are what the playbooks on this page automate.
