Deploy containerized applications with Kubernetes on Vagrant machines from scratch

Shawn Wong
Aug 22, 2021

This is a brief note I took while learning how to use Kubernetes.

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery.

Prerequisites

Turn on two (or more) Vagrant machines running CentOS 7.

Kubernetes requires each machine to have at least 2 GB of memory and at least 2 CPU cores. Edit the Vagrantfile in the folder where you ran vagrant init, and add the configuration below.

config.vm.provider "virtualbox" do |vb|
  vb.memory = "2048"
  vb.cpus = 2
end

In Vagrant, eth0 is always a NAT interface, so on a multi-VM Vagrant environment every machine gets the same eth0 IP address, which causes conflicts.

Defining a separate NAT network for every single VM would avoid the IP conflict, but with that trick the Vagrant machines cannot communicate with each other.

One solution here is the “Multi-Machine” feature of Vagrant.

Use “config.vm.define” to define multiple machines in the same project Vagrantfile.

Note that we only set a public IP address for the node machine, because the application we are going to deploy with K8s will only be hosted on the node machine.
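Putting the pieces together, here is a sketch of such a Multi-Machine Vagrantfile. The box name, the node’s private IP, and the node’s public IP are assumptions for this setup; the master IP matches the address passed to “kubeadm init” later in this article.

```ruby
Vagrant.configure("2") do |config|
  # Assumed CentOS 7 box
  config.vm.box = "centos/7"

  # Resources Kubernetes requires on every machine
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end

  config.vm.define "master" do |master|
    master.vm.hostname = "master"
    # Private IP used later as the kubeadm advertise address
    master.vm.network "private_network", ip: "10.0.2.10"
  end

  config.vm.define "node" do |node|
    node.vm.hostname = "node"
    # Assumed private IP so the node can reach the master
    node.vm.network "private_network", ip: "10.0.2.11"
    # Public (bridged) IP so the deployed application is reachable from outside
    node.vm.network "public_network", ip: "192.168.1.33"
  end
end
```

The private network gives the two VMs a stable way to talk to each other, while the public network on the node is only there so the final application can be reached from a browser on the host.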

Then turn on the machines with “vagrant up”. Use “vagrant ssh master” to connect to the master VM and “vagrant ssh node” to connect to the node VM. In the VirtualBox Manager interface, check that both machines are running and that the memory and processor settings are correct.

Below are operations to be run on both machines:

Disable Swap:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

Disable Firewall:

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Disable selinux:

Edit the /etc/selinux/config file and set SELINUX=disabled.
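This edit can also be done non-interactively; a sketch, assuming the stock CentOS 7 config where SELinux starts out as “enforcing”:

```shell
# Turn SELinux off for the current session, then persist the change across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```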

Add host entries for both machines in /etc/hosts.
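A sketch of the entries, assuming the hostnames “master” and “node”; the master IP matches the address used for “kubeadm init” later, and the node IP is an assumption for this setup:

```shell
# Append host entries on both machines (IP/hostname pairs are assumptions for this setup)
cat <<EOF | sudo tee -a /etc/hosts
10.0.2.10    master
10.0.2.11    node
EOF
```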

Set netbridge configuration in /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then run “sudo sysctl --system”
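The file can be written and the settings applied in one go:

```shell
# Write the bridge settings to /etc/sysctl.d/k8s.conf and reload all sysctl configuration
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```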

Set the date/time via NTP

sudo yum install ntpdate
sudo ntpdate time.windows.com

Reboot for the settings to take effect.

Install Kubernetes

Install Docker/kubeadm/kubelet at both master and node machines.

Install Docker:

sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
docker -v

Configure Docker

cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF

Start and Enable Docker

sudo systemctl enable docker.service
sudo systemctl start docker.service

Install K8s

Setup repo in /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Then yum install:

sudo yum install kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4 -y
sudo systemctl enable kubelet.service

Check if the installation is correct:

yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl

In which:

[kubelet] is the primary “node agent” that runs on each node;

[kubeadm] is a deployment tool. Using kubeadm, we can create a minimum viable Kubernetes cluster that conforms to best practices. It’s a simple way to try out Kubernetes for the first time.

[kubectl] is the command-line tool of Kubernetes.

Initiate the cluster at the Master machine

At master machine:

Run “kubeadm init”

sudo kubeadm init --apiserver-advertise-address="10.0.2.10" --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

This command may take a while, because it needs to pull the Docker images that K8s requires. You can run “sudo kubeadm config images pull” in advance to make sure the images are pulled locally, and you can also open a new console and use “docker images” to check the progress of the image pulling.

If the init doesn’t succeed on the first run (which is highly likely, as happened to me :-P), then on subsequent runs there may be a variety of errors such as “The connection to the server was refused” or “error execution phase mark-control-plane: timed out waiting for the condition”. Use “kubeadm reset” and the commands below to revert the changes made by “kubeadm init”.

sudo kubeadm reset
sudo ifconfig cni0 down && sudo ip link delete cni0
sudo ifconfig flannel.1 down && sudo ip link delete flannel.1
sudo rm -rf /var/lib/cni/
rm -rf $HOME/.kube

If it initiated successfully, the console output ends with a “kubeadm join” command, which we will need later when joining the node.

Follow the instructions in the output and run the commands below:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then we can use kubectl to check the status of the nodes:

kubectl get nodes

The master node will be listed; its status may show as NotReady until the network plugin is installed later.

If you meet the error “The connection to the server … was refused”, try “sudo systemctl restart kubelet”.

Join node into Kubernetes cluster

Copy the command from the last part of the successful init output and run it on the node machine:

sudo kubeadm join 10.0.2.10:6443 --token ptuzb0.h3zf5i9qx9ai85sy \
--discovery-token-ca-cert-hash sha256:925be000674fd2f08c3a1e8ee5abdf6ab570c03afbba0d5dfebc310a1321bfed

If the join command runs successfully, the output will report that the node has joined the cluster.

Run “kubectl get nodes” at the master VM to check:

You can see the node VM has joined the Kubernetes cluster.

Then download and deploy the flannel network plugin:

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

If everything goes correctly, the status of the nodes will be Ready now.

If the node status becomes “NotReady” after rebooting the machine, try “systemctl restart kubelet” to solve it.

Use “kubectl get pods -n kube-system” to check whether everything is running properly:

At this point, the K8s environment is ready to go.

Deploy a microservice (Nginx) in Kubernetes cluster

Create a deployment of “nginx” with the command below. This operation will pull the Docker image of “nginx” and run it on one of the node machines (here we have only one node).

kubectl create deployment nginx --image=nginx

Use “kubectl get pod” to check whether the creation succeeded.

The pod with the name “nginx-6799fc88d8-x49mt” is now running. Note that the actual Docker image was pulled and is running on the node machine, even though we created the deployment on the master machine.

Check the Docker containers running on the master machine: no result.

Check the Docker containers running on the node machine: there is a Docker container of nginx running.
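A sketch of that check, run on each machine (sudo may not be needed depending on your Docker group setup):

```shell
# On the master: empty list, since the nginx pod was not scheduled here
sudo docker ps --filter "name=nginx"
# On the node: the kubelet has started the nginx container on this machine
sudo docker ps --filter "name=nginx"
```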

At master machine, use the below command to expose the service:

kubectl expose deployment nginx --port=80 --type=NodePort

Use “kubectl get services” to check if the service is on:

From the output info, we can see a service named “nginx” is running. Two ports are listed for this service: 80 is the internal port for communication within the K8s cluster, while the latter one, 32740, is the port for public access (a NodePort, assigned from the 30000–32767 range by default).

Access the microservice(Nginx) from browser

Use the public IP address we set for the node machine in the Vagrantfile: 192.168.1.33

And the port for public access to the nginx container: 32740

Enter the IP address and port in the browser’s address bar:

Successfully reached the nginx default landing page!
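The same check can be done from a terminal on the host; a sketch, assuming the node IP and NodePort shown above:

```shell
# Fetch the nginx default page via the node's public IP and the NodePort
curl http://192.168.1.33:32740
```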

Mission accomplished!

Ref Links:

Vagrant multiple machines

Kubernetes 101

Bootstrapping clusters with kubeadm


Shawn Wong

Ruby on Rails developer, AWS cloud practitioner, GeoBlacklight, Omeka practitioner.