How to create a Kubernetes cluster using kURL

In this post, we will walk through one of the simplest ways to set up a Kubernetes cluster using a tool called kURL. Note that kURL is listed as one of the “Certified Kubernetes - Installers” for setting up a Kubernetes cluster; you can see the whole list of certified installers on the CNCF software conformance page (linked in the references below).

kURL is a very flexible tool that lets us design the cluster and choose what goes into it with a few mouse clicks. After designing the cluster by selecting what we want from the kURL dashboard, it generates a bash script for the cluster creation. This tool is fantastic especially for air-gapped installations of a Kubernetes cluster, as it also provides a packaged bundle for installing the entire cluster inside an air-gapped environment.

Making selections in the kURL dashboard generates a custom script per our requirements. We can run that script on our nodes to take care of the installation.

Prerequisites


You must have your nodes (or virtual machines) up; you can use virt-manager, KVM/libvirt, Vagrant, or any other tool of your choice. For this post, I will create a three-node cluster, so I brought up three virtual machines: one node will be the controller node, and the other two will be worker nodes. Make sure your VMs satisfy the system requirements laid out by kURL:

  • 4 AMD64 CPUs or equivalent per machine
  • 8 GB of RAM per machine
  • 40 GB of Disk Space per machine.
    • Note: 10GB of the total 40GB should be available to /var/lib/rook. For more information see Rook
  • TCP ports 2379, 2380, 6443, 6783, 10250, 10251 and 10252 open between cluster nodes
  • UDP ports 6783 and 6784 open between cluster nodes
Here are the three VMs I am using in this post:

192.168.122.46      kurl-kube-controller-1   running
192.168.122.226     kurl-kube-worker-1       running
192.168.122.130     kurl-kube-worker-2       running
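Before running the installer, it is worth checking each VM against the minimums listed above. Here is a small preflight sketch of mine (not an official kURL tool) that compares the local machine's CPUs, RAM, and free disk against those numbers:

```shell
#!/usr/bin/env bash
# Hypothetical preflight sketch: compare this machine against the kURL minimums.
min_cpus=4; min_ram_gb=8; min_disk_gb=40

check() {  # check NAME ACTUAL MINIMUM
  if [ "$2" -ge "$3" ]; then
    echo "OK   $1: $2 (need >= $3)"
  else
    echo "FAIL $1: $2 (need >= $3)"
  fi
}

check cpus "$(nproc)" "$min_cpus"
check ram_gb "$(( $(awk '/MemTotal/{print $2}' /proc/meminfo) / 1024 / 1024 ))" "$min_ram_gb"
check disk_gb "$(df -BG --output=avail / | tail -1 | tr -dc '0-9')" "$min_disk_gb"

# The port requirements (TCP 2379, 2380, 6443, 10250-10252; UDP 6783-6784)
# still need to be verified between the nodes, e.g. with nc, once they are up.
```

Run it on every node; anything marked FAIL should be fixed before starting the installation.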


How to use kURL?


The kURL dashboard lets us design our cluster and its contents with only a few mouse clicks. Visit the official website at https://kurl.sh; it provides a form listing the Kubernetes version, add-ons, etc. Select the desired tools to generate the installer configuration file. Finally, kURL provides a custom bash script per the generated configuration file; we can run that script to take care of installing the Kubernetes nodes.

For example, the following image shows how you can select your cluster’s Kubernetes distribution and version: kubeadm, RKE2 (not supported on Ubuntu nodes at the time of writing), or k3s.



To select the CRI, you have the choice of Docker or containerd.


Similarly, you can select monitoring and logging solutions like Fluentd, Prometheus, metrics server, etc.


Example installer configuration file (see the image below):

The configuration file is updated to reflect your selection whenever you change anything, and the installation URL changes with it. Try selecting any add-on and observe the installation URL (shown below). If you simply curl https://kurl.sh/d14c7c5, it returns a shell script that does all the heavy lifting of the Kubernetes node installation.

Installation


Once you select the addons, you will have the Installation URL ready to start the installation.

Option-1: For installation on nodes with internet access, run the following on the MASTER NODE

Step-1: Download the script generated by your installer configuration and run it.
curl https://kurl.sh/f9e7264 -o kubernetes_installation_kurl.sh
sudo bash kubernetes_installation_kurl.sh

Example:

# run the following commands on the master node(s).

curl https://kurl.sh/f9e7264 -o kubernetes_installation_kurl.sh

#inspect the file kubernetes_installation_kurl.sh to ensure it isn't doing anything nasty


#for me the command took 11 minutes to complete
time sudo bash kubernetes_installation_kurl.sh
...
........
trimmed....
................
		Installation
		  Complete ✔


The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.

To access Grafana use the generated user:password of admin:e5oIQV0MF .



To access the cluster with kubectl, reload your shell:

    bash -l



Node join commands expire after 24 hours.

To generate new node join commands, run curl -fsSL https://kurl.sh/version/v2022.08.25-0/f9e7264/tasks.sh | sudo bash -s join_token on this node.

To add worker nodes to this installation, run the following script on your other nodes:
    curl -fsSL https://kurl.sh/version/v2022.08.25-0/f9e7264/join.sh | sudo bash -s kubernetes-master-address=192.168.122.46:6443 kubeadm-token=2hi5tj.ynkrm5t811nr5tto kubeadm-token-ca-hash=sha256:7afec58fd9361b112f4e3305152d1c4bf50adf871af7fccc718a5919129e9ffc kubernetes-version=1.23.10 docker-registry-ip=10.96.2.219 primary-host=192.168.122.46



real	11m38.917s
user	0m0.018s
sys	0m0.115s

technekey@kurl-kube-controller-1:~$ bash -l
technekey@kurl-kube-controller-1:~$ kubectl get pod
No resources found in default namespace.
technekey@kurl-kube-controller-1:~$ kubectl get node
NAME                     STATUS   ROLES                  AGE   VERSION
kurl-kube-controller-1   Ready    control-plane,master   54m   v1.23.10
technekey@kurl-kube-controller-1:~$ 
technekey@kurl-kube-controller-1:~$ 
technekey@kurl-kube-controller-1:~$ kubectl get pod -A
NAMESPACE         NAME                                             READY   STATUS      RESTARTS   AGE
kube-system       coredns-64897985d-cw4b7                          1/1     Running     0          54m
kube-system       coredns-64897985d-pfmjp                          1/1     Running     0          54m
kube-system       etcd-kurl-kube-controller-1                      1/1     Running     0          54m
kube-system       kube-apiserver-kurl-kube-controller-1            1/1     Running     0          54m
kube-system       kube-controller-manager-kurl-kube-controller-1   1/1     Running     0          54m
kube-system       kube-proxy-ps57b                                 1/1     Running     0          54m
kube-system       kube-scheduler-kurl-kube-controller-1            1/1     Running     0          54m
kube-system       weave-net-46s56                                  2/2     Running     0          54m
kurl              registry-7cdc86c77b-d6q9n                        2/2     Running     0          52m
kurl              registry-7cdc86c77b-xk5fg                        2/2     Running     0          52m
logging           fluentd-gcqdc                                    1/1     Running     0          51m
longhorn-system   csi-attacher-6bc5b8f794-f8hpt                    1/1     Running     0          53m
longhorn-system   csi-attacher-6bc5b8f794-pfk5d                    1/1     Running     0          53m
longhorn-system   csi-attacher-6bc5b8f794-q9d7t                    1/1     Running     0          53m
longhorn-system   csi-provisioner-8699678f7c-fnggw                 1/1     Running     0          53m
longhorn-system   csi-provisioner-8699678f7c-sf9rk                 1/1     Running     0          53m
longhorn-system   csi-provisioner-8699678f7c-snp9d                 1/1     Running     0          53m
longhorn-system   csi-resizer-66cc6b8585-df5bq                     1/1     Running     0          53m
longhorn-system   csi-resizer-66cc6b8585-pvmn6                     1/1     Running     0          53m
longhorn-system   csi-resizer-66cc6b8585-wnrxn                     1/1     Running     0          53m
longhorn-system   csi-snapshotter-7c64b69d99-854fd                 1/1     Running     0          53m
longhorn-system   csi-snapshotter-7c64b69d99-h75gc                 1/1     Running     0          53m
longhorn-system   csi-snapshotter-7c64b69d99-sgrx4                 1/1     Running     0          53m
longhorn-system   engine-image-ei-766a591b-tqvss                   1/1     Running     0          53m
longhorn-system   instance-manager-e-038666a0                      1/1     Running     0          53m
longhorn-system   instance-manager-r-45fe4a4f                      1/1     Running     0          53m
longhorn-system   longhorn-admission-webhook-747db784cb-7j5p9      1/1     Running     0          53m
longhorn-system   longhorn-admission-webhook-747db784cb-mhg55      1/1     Running     0          53m
longhorn-system   longhorn-conversion-webhook-86844666d7-msztq     1/1     Running     0          53m
longhorn-system   longhorn-conversion-webhook-86844666d7-vff7r     1/1     Running     0          53m
longhorn-system   longhorn-csi-plugin-xqnvh                        2/2     Running     0          53m
longhorn-system   longhorn-driver-deployer-5cf45fc669-qsdxr        1/1     Running     0          53m
longhorn-system   longhorn-manager-wgqk5                           1/1     Running     0          53m
minio             minio-656f9cd984-pcqj6                           1/1     Running     0          53m
monitoring        alertmanager-prometheus-alertmanager-0           2/2     Running     0          51m
monitoring        alertmanager-prometheus-alertmanager-1           0/2     Pending     0          51m
monitoring        alertmanager-prometheus-alertmanager-2           0/2     Pending     0          51m
monitoring        grafana-6f64bc8b4b-9pjfk                         3/3     Running     0          51m
monitoring        kube-state-metrics-69f4fdd5bf-tvkn6              1/1     Running     0          51m
monitoring        prometheus-adapter-6489568dbf-vt84d              1/1     Running     0          51m
monitoring        prometheus-k8s-0                                 2/2     Running     0          51m
monitoring        prometheus-k8s-1                                 0/2     Pending     0          51m
monitoring        prometheus-node-exporter-fsjvb                   1/1     Running     0          51m
monitoring        prometheus-operator-7b45b488b4-vlw7b             1/1     Running     0          51m
projectcontour    contour-5648bd5684-6vgdx                         1/1     Running     0          52m
projectcontour    contour-5648bd5684-r655n                         1/1     Running     0          52m
projectcontour    contour-certgen-v1.22.0-n2qgq                    0/1     Completed   0          52m
projectcontour    envoy-kp662                                      2/2     Running     0          52m
technekey@kurl-kube-controller-1:~$ 
Step-2: Join the worker nodes by running the command produced in the previous step


Run the cluster join command on all the worker nodes; this command was printed on stdout in the previous step.

curl -fsSL https://kurl.sh/version/v2022.08.25-0/f9e7264/join.sh | sudo bash -s kubernetes-master-address=192.168.122.46:6443 kubeadm-token=2hi5tj.ynkrm5t811nr5tto kubeadm-token-ca-hash=sha256:7afec58fd9361b112f4e3305152d1c4bf50adf871af7fccc718a5919129e9ffc kubernetes-version=1.23.10 docker-registry-ip=10.96.2.219 primary-host=192.168.122.46
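For reference, here is my reading of what each argument passed to join.sh carries (the names are taken verbatim from the generated command; the descriptions are my understanding, not kURL documentation):

```shell
# Breakdown of the generated join.sh arguments (values are specific to this cluster):
#   kubernetes-master-address  API server endpoint of the controller node (IP:6443)
#   kubeadm-token              kubeadm bootstrap token; expires after 24 hours
#   kubeadm-token-ca-hash      hash of the cluster CA key, letting the worker verify the control plane
#   kubernetes-version         Kubernetes version the cluster was installed with
#   docker-registry-ip         ClusterIP of the in-cluster registry that kURL deploys
#   primary-host               address of the primary control-plane node
```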


Example: executing the cluster join command on both worker nodes

technekey@kurl-kube-worker-1:~$ curl -fsSL https://kurl.sh/version/v2022.08.25-0/f9e7264/join.sh |time  sudo bash -s kubernetes-master-address=192.168.122.46:6443 kubeadm-token=2hi5tj.ynkrm5t811nr5tto kubeadm-token-ca-hash=sha256:7afec58fd9361b112f4e3305152d1c4bf50adf871af7fccc718a5919129e9ffc kubernetes-version=1.23.10 docker-registry-ip=10.96.2.219 primary-host=192.168.122.46
technekey@kurl-kube-worker-2:~$ curl -fsSL https://kurl.sh/version/v2022.08.25-0/f9e7264/join.sh |time  sudo bash -s kubernetes-master-address=192.168.122.46:6443 kubeadm-token=2hi5tj.ynkrm5t811nr5tto kubeadm-token-ca-hash=sha256:7afec58fd9361b112f4e3305152d1c4bf50adf871af7fccc718a5919129e9ffc kubernetes-version=1.23.10 docker-registry-ip=10.96.2.219 primary-host=192.168.122.46


You will get a response like the one below:

✔ Node joined successfully

		Installation
		  Complete ✔


Now verify the node list for worker nodes’ presence.

technekey@kurl-kube-controller-1:~$ kubectl get node
NAME                     STATUS   ROLES                  AGE    VERSION
kurl-kube-controller-1   Ready    control-plane,master   125m   v1.23.10
kurl-kube-worker-1       Ready    <none>                 14m    v1.23.10
kurl-kube-worker-2       Ready    <none>                 14m    v1.23.10
technekey@kurl-kube-controller-1:~$ 
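If a freshly joined worker still shows NotReady, you can poll until every node settles. A small sketch, where the `all_ready` helper is mine (not part of kURL or kubectl):

```shell
# all_ready: succeed only when every line of `kubectl get nodes --no-headers`
# piped into it has exactly "Ready" in the STATUS (second) column.
all_ready() {
  ! awk '{print $2}' | grep -qv '^Ready$'
}

# Usage on the controller node:
#   until kubectl get nodes --no-headers | all_ready; do sleep 5; done
#   kubectl get nodes
```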

Option-2: For air-gapped installations


You can click on “Download airgap installer” and follow the instructions shown in the image below:
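Per the kURL docs, the air-gapped flow amounts to downloading the bundle on a machine with internet access, copying it across, and running the installer with the airgap flag. A minimal sketch, assuming the installer id used earlier in this post:

```shell
# Build the airgap bundle URL from the installer id shown in the dashboard.
installer_id="f9e7264"
bundle_url="https://kurl.sh/bundle/${installer_id}.tar.gz"
echo "$bundle_url"

# On a machine WITH internet access:
#   curl -LO "$bundle_url"
# Copy the tarball to the air-gapped node, then on that node:
#   tar -xzvf "${installer_id}.tar.gz"
#   cat install.sh | sudo bash -s airgap
```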

Reference:

  • https://kurl.sh/docs/install-with-kurl/
  • https://www.cncf.io/certification/software-conformance/