k8s via proxmox (2) - worker node & master node setup (en)
2022. 10. 03.
This post continues from k8s via proxmox (1) - Creating an Ubuntu server template.
1. Creating k8s-ctrlr & k8s-node
Using the template created earlier, create the k8s-ctrlr and k8s-node VMs.
Wait a moment after booting until the cloud-init configuration is complete.
2. Setting up static IPs
Set static IPs for k8s-ctrlr and k8s-node. As always, use netplan.
If you're running a Proxmox server behind a typical home router, your setup will likely be similar.
Before editing, make a backup just in case:
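The backup can be a simple copy, for example:

```shell
sudo cp /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak
```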
Edit the /etc/netplan/50-cloud-init.yaml file as follows:
In my case, I'm running a local DNS server (Pi-hole) on 192.168.0.120, so I added it as a nameserver.
Set the static IPs to 192.168.0.180 for k8s-ctrlr and 192.168.0.185 for k8s-node.
Apply the changes with:
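```shell
sudo netplan apply
```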
3. Installing containerd
Install containerd on both k8s-ctrlr and k8s-node.
3.1. Configuring containerd
Save the default configuration with the following command:
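A common way to do this is to write containerd's built-in defaults out to the config file:

```shell
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```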
Then, edit the /etc/containerd/config.toml file, changing SystemdCgroup = false to SystemdCgroup = true.
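For reference, the relevant section of the file looks like this after the change:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```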
4. Additional Network Settings
Open the /etc/sysctl.conf file and uncomment the net.ipv4.ip_forward=1 line by removing the leading #. This enables IP forwarding, which Kubernetes networking requires.
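Once the comment is removed, the line reads:

```
net.ipv4.ip_forward=1
```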
Add the following to the /etc/modules-load.d/k8s.conf file:
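These are the kernel modules Kubernetes container networking relies on:

```
overlay
br_netfilter
```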
Reboot to apply the updated network settings.
5. Kubernetes Installation
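The commands were roughly as follows. The exact repository URL and version change over time (the apt.kubernetes.io repository in use in 2022 has since been replaced by pkgs.k8s.io), so treat this as a sketch; v1.28 below is a placeholder version:

```shell
# fetch the Kubernetes apt repository signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# add the repository itself
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# install the Kubernetes packages and pin their versions
sudo apt update
sudo apt install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl
```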
The above commands fetch the key, add the Kubernetes apt repository, and install Kubernetes packages.
Check the official documentation for updates and modify accordingly if needed.
6. Creating a k8s-node Template
Let’s create a k8s-node template to generate as many workers as needed.
The following commands resolve the issue where cloned VMs have the same IP due to duplicate machine IDs.
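The usual fix is to reset the machine ID so that each clone generates its own on first boot:

```shell
# empty the machine ID; systemd regenerates it at boot
sudo truncate -s 0 /etc/machine-id
# make the D-Bus machine ID a symlink to the same file
sudo rm /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
```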
Then, power off the VM and click Convert to Template in the web UI.
Once converted to a template, you can create as many workers as needed.
Here, we will create two workers: k8s-node-1 and k8s-node-2.
8. Modifying VM Specifications
Adjust the specifications of k8s-ctrlr, k8s-node-1, and k8s-node-2.
Power off all VMs and modify their specifications in the web UI.
Depending on the server running Proxmox, the recommended specs are:
k8s-ctrlr: 2 CPUs, 4 GB RAM
k8s-node-1, k8s-node-2: 2 CPUs, 2 GB RAM
If the server lacks sufficient resources, set all to 2 CPUs and 2 GB RAM, which are the minimum requirements.
Even if the server has resources to spare, adding more workers is generally preferable to giving each node more RAM.
Once done, power on all VMs.
For VMs cloned from a template, the static IP may have reverted to DHCP; change it back to a static IP if needed.
9. Creating a Cluster
Log in to k8s-ctrlr and create a cluster using kubeadm.
For example, <vm-ip> could be 192.168.0.180, and <vm-hostname> could be k8s-ctrlr.
Next, run the following commands:
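These are the standard post-init steps that kubeadm prints, copying the admin kubeconfig so kubectl works as a regular user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```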
Then, copy the join command from the kubeadm output and run it on the other nodes.
If that command is no longer available, run the following on k8s-ctrlr to generate a new one:
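```shell
kubeadm token create --print-join-command
```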
9.1. Creating a Pod Network
On k8s-ctrlr, execute the following command:
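Given the 10.244.x.x pod addresses used later, the pod network add-on here is presumably Flannel, which can be installed from its published manifest:

```shell
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```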
Finally, verify that all nodes are successfully created:
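For example:

```shell
kubectl get nodes
```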
If all nodes show as Ready, the setup is complete.
The cluster creation process ends here.
Using Kubernetes
We've created a cluster with some effort, but what should we do now? Let's deploy a web server using Kubernetes.
1. Deploying nginx pod
Let's create and deploy a single nginx pod. On k8s-ctrlr, create a pod.yaml file with the following content:
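A minimal pod.yaml matching this description might look like the following; the app: nginx label is an assumption, added so that a service can select the pod later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx            # assumed label, used later by the service selector
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```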
Then deploy it using the following command:
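```shell
kubectl apply -f pod.yaml
```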
Let's verify if the pod was created successfully:
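The -o wide flag also shows each pod's IP and the node it runs on:

```shell
kubectl get pods -o wide
```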
On k8s-node-1, we can confirm that nginx is running on the pod network at 10.244.1.2:80.
You can verify the connection from k8s-ctrlr, k8s-node-1, and k8s-node-2 using the following command:
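For example:

```shell
curl 10.244.1.2:80
```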
2. Creating nginx service
However, directly accessing pods is not recommended. This is because pods get new IPs when they restart or are deleted. Also, while cluster internal access is possible from any node, external access is not. To solve these issues, we create a service. On k8s-ctrlr, create a service-nodeport.yaml file with the following content:
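A service-nodeport.yaml matching this description might look like the following; the nodePort value 30080 is an assumption (any port in the 30000-32767 range works), and the selector must match the label on the nginx pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx            # must match the pod's label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080     # assumed port in the allowed 30000-32767 range
```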
Then deploy it using the following command:
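```shell
kubectl apply -f service-nodeport.yaml
```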
Let's verify if the service was created successfully:
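```shell
kubectl get services
```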
If the service is listed with a NodePort mapping, it was created successfully.
You can verify the connection from k8s-ctrlr, k8s-node-1, and k8s-node-2 using the following command:
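For example, substituting the NodePort that kubectl get services reports for the service:

```shell
curl 192.168.0.180:<node-port>
```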
Additionally, you can access it externally using the IP of any cluster member:
On AWS you could use a LoadBalancer-type service to expose the cluster through a single address, but since that type isn't available out of the box in a local setup, we used a NodePort service here.
Conclusion
In this post, we learned how to install Kubernetes and deploy nginx. In the next post, we'll explore other Kubernetes features. We'll look at ingress, configmap, secret, volume, deployment, statefulset, etc. Also, if possible, we'll try to set up CI/CD using Jenkins.