Saturday 18 May 2024

Provisioning multi-node cluster on the local machine using Kubeadm and Vagrant

This article extends my notes from the Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators. 

The previous article in the series was Kubernetes on Cloud | My Public Notepad

In previous articles we used Minikube, which launches a single VM running a single node that acts as both master and worker. That setup is suitable only for local tests. In production we have multiple nodes, e.g. at least two master nodes in the control plane (usually 3 to 5 - see How Many Nodes for Your Kubernetes Control Plane? - The New Stack) and anywhere from a few to a few hundred or even thousands of worker nodes (see Hey all, how many nodes do you run? : r/kubernetes).

To emulate a minimal production environment, let's assume we want to provision a Kubernetes cluster with one master and two worker nodes. In one of the past articles, Introduction to Kubernetes | My Public Notepad, we learned which components need to be installed on the master and which on the worker nodes. 

To provision this cluster, we will be using a special tool called kubeadm. [Kubeadm | Kubernetes]

The kubeadm tool:
  • bootstraps a Kubernetes cluster by installing all of the necessary components on the right nodes in the right order
  • by design, cares only about bootstrapping, not about provisioning machines
  • is built to provide kubeadm init and kubeadm join as Kubernetes best-practice "fast paths" for creating multi-node Kubernetes clusters
  • performs the actions necessary to get a minimum viable cluster up and running
  • takes care of the security and certificate requirements needed for communication between all of the components
  • installs all of these various components individually across different nodes
  • modifies all of the necessary configuration files to make sure all the components point to each other
  • sets up certificates

Let's review the tools that have similar names and are involved in provisioning and managing a Kubernetes cluster:
  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

Here are the steps to set up a Kubernetes cluster using the kubeadm tool at a high level:
  • provision desired number of nodes for Kubernetes cluster
    • can be physical or virtual machines
  • designate one as the master and the rest as worker nodes
  • install a container runtime on the hosts, on all nodes (master and worker)
    • it needs to support the Container Runtime Interface (CRI) - an API for integration with kubelet
    • example: containerd
  • install the kubeadm tool on all the nodes
  • initialize the master server
    • this ensures all of the required components are installed and configured on the master server
  • ensure that the network prerequisites are met
    • normal network connectivity is not enough for this
    • Kubernetes requires a special networking solution between the master and worker nodes called the Pod Network
  • have the worker nodes join the cluster (i.e. join the master node)
  • deploy our application onto the Kubernetes environment


Exercise: set up a Kubernetes cluster using the kubeadm tool in the local environment and deploy an Nginx web server application in this cluster. Test it by accessing the application from the local browser.

Tools: Vagrant and VirtualBox (to provision the node VMs), containerd (container runtime), kubeadm, kubelet and kubectl.



Setting up node host machines



Cluster node machines can be bare-metal computers, VMs, EC2 instances in AWS, etc. We'll be using local VMs. 
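
The Vagrantfile and provisioning scripts used below come from the course's companion repository; assuming that is the public kodekloudhub/certified-kubernetes-administrator-course repository on GitHub (which the paths in the prompts below suggest), getting the environment could look like this:

$ git clone https://github.com/kodekloudhub/certified-kubernetes-administrator-course.git
$ cd certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox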

Initially, none of our 3 VMs is created:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              not created (virtualbox)
node01                    not created (virtualbox)
node02                    not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Let's create all 3 VMs:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant up
Bringing machine 'controlplane' up with 'virtualbox' provider...
Bringing machine 'node01' up with 'virtualbox' provider...
Bringing machine 'node02' up with 'virtualbox' provider...
==> controlplane: Importing base box 'ubuntu/jammy64'...
==> controlplane: Matching MAC address for NAT networking...
==> controlplane: Setting the name of the VM: controlplane
==> controlplane: Clearing any previously set network interfaces...
==> controlplane: Preparing network interfaces based on configuration...
    controlplane: Adapter 1: nat
    controlplane: Adapter 2: bridged
==> controlplane: Forwarding ports...
    controlplane: 22 (guest) => 2222 (host) (adapter 1)
==> controlplane: Running 'pre-boot' VM customizations...
==> controlplane: Booting VM...
==> controlplane: Waiting for machine to boot. This may take a few minutes...
    controlplane: SSH address: 127.0.0.1:2222
    controlplane: SSH username: vagrant
    controlplane: SSH auth method: private key
    controlplane: Warning: Remote connection disconnect. Retrying...
    controlplane: Warning: Connection reset. Retrying...
    controlplane: 
    controlplane: Vagrant insecure key detected. Vagrant will automatically replace
    controlplane: this with a newly generated keypair for better security.
    controlplane: 
    controlplane: Inserting generated public key within guest...
    controlplane: Removing insecure key from the guest if it's present...
    controlplane: Key inserted! Disconnecting and reconnecting using new SSH key...
==> controlplane: Machine booted and ready!
==> controlplane: Checking for guest additions in VM...
    controlplane: The guest additions on this VM do not match the installed version of
    controlplane: VirtualBox! In most cases this is fine, but in rare cases it can
    controlplane: prevent things such as shared folders from working properly. If you see
    controlplane: shared folder errors, please make sure the guest additions within the
    controlplane: virtual machine match the version of VirtualBox you have installed on
    controlplane: your host and reload your VM.
    controlplane: 
    controlplane: Guest Additions Version: 6.0.0 r127566
    controlplane: VirtualBox Version: 7.0
==> controlplane: Setting hostname...
==> controlplane: Configuring and enabling network interfaces...
==> controlplane: Mounting shared folders...
    controlplane: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> controlplane: Running provisioner: setup-hosts (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-2rpsff.sh
==> controlplane: Running provisioner: setup-dns (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-ufm04a.sh
==> controlplane: Running provisioner: setup-ssh (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-zynnbo.sh
==> controlplane: Running provisioner: file...
    controlplane: ./ubuntu/tmux.conf => $HOME/.tmux.conf
==> controlplane: Running provisioner: file...
    controlplane: ./ubuntu/vimrc => $HOME/.vimrc
==> controlplane: Running action triggers after up ...
==> controlplane: Running trigger: Post provisioner...
    Nothing to do here
==> node01: Importing base box 'ubuntu/jammy64'...
==> node01: Matching MAC address for NAT networking...
==> node01: Setting the name of the VM: node01
==> node01: Fixed port collision for 22 => 2222. Now on port 2200.
==> node01: Clearing any previously set network interfaces...
==> node01: Preparing network interfaces based on configuration...
    node01: Adapter 1: nat
    node01: Adapter 2: bridged
==> node01: Forwarding ports...
    node01: 22 (guest) => 2200 (host) (adapter 1)
==> node01: Running 'pre-boot' VM customizations...
==> node01: Booting VM...
==> node01: Waiting for machine to boot. This may take a few minutes...
    node01: SSH address: 127.0.0.1:2200
    node01: SSH username: vagrant
    node01: SSH auth method: private key
    node01: Warning: Remote connection disconnect. Retrying...
    node01: Warning: Connection reset. Retrying...
    node01: Warning: Connection reset. Retrying...
    node01: 
    node01: Vagrant insecure key detected. Vagrant will automatically replace
    node01: this with a newly generated keypair for better security.
    node01: 
    node01: Inserting generated public key within guest...
    node01: Removing insecure key from the guest if it's present...
    node01: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node01: Machine booted and ready!
==> node01: Checking for guest additions in VM...
    node01: The guest additions on this VM do not match the installed version of
    node01: VirtualBox! In most cases this is fine, but in rare cases it can
    node01: prevent things such as shared folders from working properly. If you see
    node01: shared folder errors, please make sure the guest additions within the
    node01: virtual machine match the version of VirtualBox you have installed on
    node01: your host and reload your VM.
    node01: 
    node01: Guest Additions Version: 6.0.0 r127566
    node01: VirtualBox Version: 7.0
==> node01: Setting hostname...
==> node01: Configuring and enabling network interfaces...
==> node01: Mounting shared folders...
    node01: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> node01: Running provisioner: setup-hosts (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-js33wi.sh
==> node01: Running provisioner: setup-dns (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-ql2nuy.sh
==> node01: Running provisioner: setup-ssh (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-8e61q9.sh
==> node01: Running action triggers after up ...
==> node01: Running trigger: Post provisioner...
    Nothing to do here
==> node02: Importing base box 'ubuntu/jammy64'...
==> node02: Matching MAC address for NAT networking...
==> node02: Setting the name of the VM: node02
==> node02: Fixed port collision for 22 => 2222. Now on port 2201.
==> node02: Clearing any previously set network interfaces...
==> node02: Preparing network interfaces based on configuration...
    node02: Adapter 1: nat
    node02: Adapter 2: bridged
==> node02: Forwarding ports...
    node02: 22 (guest) => 2201 (host) (adapter 1)
==> node02: Running 'pre-boot' VM customizations...
==> node02: Booting VM...
==> node02: Waiting for machine to boot. This may take a few minutes...
    node02: SSH address: 127.0.0.1:2201
    node02: SSH username: vagrant
    node02: SSH auth method: private key
    node02: Warning: Connection reset. Retrying...
    node02: Warning: Remote connection disconnect. Retrying...
    node02: 
    node02: Vagrant insecure key detected. Vagrant will automatically replace
    node02: this with a newly generated keypair for better security.
    node02: 
    node02: Inserting generated public key within guest...
    node02: Removing insecure key from the guest if it's present...
    node02: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node02: Machine booted and ready!
==> node02: Checking for guest additions in VM...
    node02: The guest additions on this VM do not match the installed version of
    node02: VirtualBox! In most cases this is fine, but in rare cases it can
    node02: prevent things such as shared folders from working properly. If you see
    node02: shared folder errors, please make sure the guest additions within the
    node02: virtual machine match the version of VirtualBox you have installed on
    node02: your host and reload your VM.
    node02: 
    node02: Guest Additions Version: 6.0.0 r127566
    node02: VirtualBox Version: 7.0
==> node02: Setting hostname...
==> node02: Configuring and enabling network interfaces...
==> node02: Mounting shared folders...
    node02: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> node02: Running provisioner: setup-hosts (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-kpm0j5.sh
==> node02: Running provisioner: setup-dns (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-d6xikj.sh
==> node02: Running provisioner: setup-ssh (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-obnj0e.sh
==> node02: Running action triggers after up ...
==> node02: Running trigger: Post provisioner...
    Gathering IP addresses of nodes...
    Setting /etc/hosts on nodes...
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02

VM build complete!

Use either of the following to access any NodePort services you create from your browser
replacing "port_number" with the number of your NodePort.

  http://192.168.0.21:port_number
  http://192.168.0.22:port_number


Let's check the current status of the Vagrant machines:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Let's now ssh to each of these machines:

$ vagrant ssh controlplane
...
 IPv4 address for enp0s3: 10.0.2.15
...

vagrant@controlplane:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86109sec preferred_lft 86109sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 86112sec preferred_lft 86112sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever

Let's check that we can ping the other two VMs (similar tests should be done from both worker nodes):

vagrant@controlplane:~$ ping node01
PING node01 (192.168.0.21) 56(84) bytes of data.
64 bytes from node01 (192.168.0.21): icmp_seq=1 ttl=64 time=3.99 ms
64 bytes from node01 (192.168.0.21): icmp_seq=2 ttl=64 time=0.640 ms
64 bytes from node01 (192.168.0.21): icmp_seq=3 ttl=64 time=1.07 ms
64 bytes from node01 (192.168.0.21): icmp_seq=4 ttl=64 time=1.08 ms
^C
--- node01 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3124ms
rtt min/avg/max/mdev = 0.640/1.695/3.988/1.335 ms

vagrant@controlplane:~$ ping node02
PING node02 (192.168.0.22) 56(84) bytes of data.
64 bytes from node02 (192.168.0.22): icmp_seq=1 ttl=64 time=1.59 ms
64 bytes from node02 (192.168.0.22): icmp_seq=2 ttl=64 time=0.493 ms
64 bytes from node02 (192.168.0.22): icmp_seq=3 ttl=64 time=0.997 ms
^C
--- node02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2029ms
rtt min/avg/max/mdev = 0.493/1.026/1.588/0.447 ms


$ vagrant ssh node01
# ping controlplane and node02

$ vagrant ssh node02
# ping controlplane and node01
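
The same connectivity checks can also be run non-interactively from the host; a quick sketch using vagrant ssh -c:

$ vagrant ssh node01 -c "ping -c 3 controlplane && ping -c 3 node02"
$ vagrant ssh node02 -c "ping -c 3 controlplane && ping -c 3 node01"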


Let's create an SSH key pair and copy the public key to the worker nodes so we can SSH from the master to them:

vagrant@controlplane:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/vagrant/.ssh/id_rsa
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:1Tkdo+7JZttTP42vA9eHfgJbFM5ZJiYb5fj0i9R0oEY vagrant@controlplane
The key's randomart image is:
+---[RSA 3072]----+
|            E.=  |
|           ooB++o|
|          . OB+B.|
|         . o.+B..|
|        S   .o.+.|
|           o+.= =|
|            *O ++|
|           o.o*.+|
|            . oBo|
+----[SHA256]-----+

vagrant@controlplane:~$ ssh-copy-id -o StrictHostKeyChecking=no vagrant@node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
(vagrant@node01) Password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'vagrant@node01'"
and check to make sure that only the key(s) you wanted were added.

vagrant@controlplane:~$ ssh-copy-id -o StrictHostKeyChecking=no vagrant@node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
(vagrant@node02) Password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'vagrant@node02'"
and check to make sure that only the key(s) you wanted were added.
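
To quickly confirm that passwordless SSH now works from the master, we can run a command on each worker; each should print the worker's hostname without prompting for a password:

vagrant@controlplane:~$ ssh node01 hostname
vagrant@controlplane:~$ ssh node02 hostname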


Initial setup on all nodes



The next step is to install the kubeadm tool on each of these nodes. 

Before that, we need to have a container runtime installed which implements the Container Runtime Interface (CRI), since Kubernetes (here v1.30) requires a CRI-conformant runtime that kubelet talks to in order to run containers in pods.

Container Runtimes | Kubernetes lists several options for container runtime:
  • containerd - we'll install this one
  • CRI-O
  • Docker Engine
    • Docker Engine does not implement the CRI so an additional service cri-dockerd has to be installed.
  • Mirantis Container Runtime
Regardless of the chosen container runtime, we need to configure prerequisites (https://kubernetes.io/docs/setup/production-environment/container-runtimes/#install-and-configure-prerequisites).

First we need to update the apt index and install the packages used by Kubernetes:

vagrant@controlplane:~$ {
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
}

Then we need to load the required kernel modules and make that configuration persistent:

vagrant@controlplane:~$ {
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter
}
overlay
br_netfilter
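
To verify that both modules are loaded (the Kubernetes container-runtime prerequisites page suggests the same check):

vagrant@controlplane:~$ lsmod | grep overlay
vagrant@controlplane:~$ lsmod | grep br_netfilter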


On both master and worker nodes we'll enable IPv4 packet forwarding [https://kubernetes.io/docs/setup/production-environment/container-runtimes/#prerequisite-ipv4-forwarding-optional] by executing:

vagrant@controlplane:~$ {
    # sysctl params required by setup, params persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
    # Apply sysctl params without reboot
    sudo sysctl --system
}
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...


The above is the output from the master node. Let's verify that IPv4 forwarding is enabled:

vagrant@controlplane:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
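
The bridge-related parameters can be verified the same way:

vagrant@controlplane:~$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables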



We are running Ubuntu Jammy Jellyfish on all nodes:

vagrant@controlplane:~$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

There are several options for installing containerd but we'll use the apt package manager. 

Containerd is an open-source container runtime originally developed by Docker. The containerd.io packages in DEB and RPM formats are distributed by Docker (not by the containerd project). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements. [containerd vs. Docker | Docker]

containerd is a container runtime and Docker is a container engine [containerd vs. Docker: What’s the Difference? | Pure Storage Blog]

Docker uses Containerd as its runtime for creating Containers from images. Essentially, it acts as an interface (API) that allows users to use Containerd to perform low-level functionality. Simply put, when you run Docker commands in the terminal, Docker relays those commands to its low-level runtime (Containerd) that carries out all the necessary procedures. [Docker vs Containerd: A Detailed Comparison]


We can now install containerd on all nodes. Here is the output from the master:

vagrant@controlplane:~$ sudo apt-get install -y containerd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  runc
The following NEW packages will be installed:
  containerd runc
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 40.3 MB of archives.
After this operation, 152 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 runc amd64 1.1.7-0ubuntu1~22.04.2 [4267 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 containerd amd64 1.7.2-0ubuntu1~22.04.1 [36.0 MB]
Fetched 40.3 MB in 13s (3111 kB/s)                                                                                                                                                                      
Selecting previously unselected package runc.
(Reading database ... 64026 files and directories currently installed.)
Preparing to unpack .../runc_1.1.7-0ubuntu1~22.04.2_amd64.deb ...
Unpacking runc (1.1.7-0ubuntu1~22.04.2) ...
Selecting previously unselected package containerd.
Preparing to unpack .../containerd_1.7.2-0ubuntu1~22.04.1_amd64.deb ...
Unpacking containerd (1.7.2-0ubuntu1~22.04.1) ...
Setting up runc (1.1.7-0ubuntu1~22.04.2) ...
Setting up containerd (1.7.2-0ubuntu1~22.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                                                                                                    
Scanning linux images...                                                                                                                                                                                 

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.


We can check the status of the containerd service:

$ systemctl status containerd


https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers explains why systemd should be used as the cgroup driver for the kubelet and the container runtime when systemd is the selected init system. (We need to do that on all 3 nodes)

Let's first confirm which init system the node uses:

$ ps -p 1
    PID TTY          TIME CMD
      1 ?        00:00:04 systemd


https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd shows how to set systemd as the cgroup driver for containerd.

vagrant@controlplane:~$ {
    sudo mkdir -p /etc/containerd
    containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
}
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    drain_exec_sync_io_timeout = "0s"
    enable_cdi = false
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    image_pull_progress_timeout = "1m0s"
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.8"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
      setup_serially = false

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_blockio_not_enabled_errors = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          privileged_without_host_devices_all_devices_allowed = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
          sandbox_mode = "podsandbox"
          snapshotter = ""

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.nri.v1.nri"]
    disable = true
    disable_connections = false
    plugin_config_path = "/etc/nri/conf.d"
    plugin_path = "/opt/nri/plugins"
    plugin_registration_timeout = "5s"
    plugin_request_timeout = "2s"
    socket_path = "/var/run/nri/nri.sock"

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    blockio_config_file = ""
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

  [plugins."io.containerd.transfer.v1.local"]
    config_path = ""
    max_concurrent_downloads = 3
    max_concurrent_uploaded_layers = 3

    [[plugins."io.containerd.transfer.v1.local".unpack_config]]
      differ = ""
      platform = "linux/amd64"
      snapshotter = "overlayfs"

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.metrics.shimstats" = "2s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0



After we apply this change, we need to restart containerd:

$ sudo systemctl restart containerd
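
To confirm that the substitution actually landed in the generated config (we expect SystemdCgroup = true, as seen in the output above):

vagrant@controlplane:~$ grep SystemdCgroup /etc/containerd/config.toml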

We need to repeat these steps on all 3 nodes. 

After that is done, we have the container runtime installed.

The next step is installing the kubeadm, kubelet and kubectl packages on all of our machines. Let's recall what these tools are:
  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.
Here is the output of the commands from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl applied to worker node 1 (these commands need to be executed on all 3 nodes: the master and both workers):

vagrant@node01:~$ sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Hit:1 https://download.docker.com/linux/ubuntu jammy InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease
Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Hit:5 http://archive.ubuntu.com/ubuntu jammy-backports InRelease    
Get:6 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1472 kB]
Get:7 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [253 kB]
Get:8 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [1876 kB]
Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [318 kB]
Fetched 4148 kB in 2s (1960 kB/s)                          
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311ubuntu0.22.04.1).
curl is already the newest version (7.81.0-1ubuntu1.16).
gpg is already the newest version (2.2.27-3ubuntu2.1).
gpg set to manually installed.
The following NEW packages will be installed:
  apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 1510 B of archives.
After this operation, 170 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.12 [1510 B]
Fetched 1510 B in 0s (32.3 kB/s)        
Selecting previously unselected package apt-transport-https.
(Reading database ... 64038 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.4.12_all.deb ...
Unpacking apt-transport-https (2.4.12) ...
Setting up apt-transport-https (2.4.12) ...
Scanning processes...                                                                                                               
Scanning linux images...                                                                                                            

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.


Let's determine the latest version of Kubernetes: 

vagrant@controlplane:~$ KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')

$ echo ${KUBE_LATEST}
v1.30

As we will add the Kubernetes apt repository to the list of apt repositories, we first need to download its public signing key:

vagrant@controlplane:~$ {
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
}

Now we can add the Kubernetes apt repository:

vagrant@controlplane:~$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /

Let's now install Kubernetes tools:

vagrant@controlplane:~$ {
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
}
Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease              
Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease      
Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease    
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  InRelease [1186 B]
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  Packages [3957 B]
Fetched 5143 B in 1s (7439 B/s)     
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 1 not upgraded.
Need to get 93.9 MB of archives.
After this operation, 343 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]      
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  cri-tools 1.30.0-1.1 [21.3 MB]
Get:8 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]                 
Get:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubeadm 1.30.1-1.1 [10.4 MB]
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubectl 1.30.1-1.1 [10.8 MB]                                                                              
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubernetes-cni 1.4.0-1.1 [32.9 MB]                                                                        
Get:7 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubelet 1.30.1-1.1 [18.1 MB]                                                                              
Fetched 93.9 MB in 15s (6378 kB/s)                                                                                                                                                                      
Selecting previously unselected package conntrack.
(Reading database ... 64087 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
Unpacking conntrack (1:1.4.6-2build2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.30.0-1.1_amd64.deb ...
Unpacking cri-tools (1.30.0-1.1) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
Unpacking ebtables (2.0.11-4build2) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../3-kubeadm_1.30.1-1.1_amd64.deb ...
Unpacking kubeadm (1.30.1-1.1) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../4-kubectl_1.30.1-1.1_amd64.deb ...
Unpacking kubectl (1.30.1-1.1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../5-kubernetes-cni_1.4.0-1.1_amd64.deb ...
Unpacking kubernetes-cni (1.4.0-1.1) ...
Selecting previously unselected package socat.
Preparing to unpack .../6-socat_1.7.4.1-3ubuntu4_amd64.deb ...
Unpacking socat (1.7.4.1-3ubuntu4) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../7-kubelet_1.30.1-1.1_amd64.deb ...
Unpacking kubelet (1.30.1-1.1) ...
Setting up conntrack (1:1.4.6-2build2) ...
Setting up kubectl (1.30.1-1.1) ...
Setting up ebtables (2.0.11-4build2) ...
Setting up socat (1.7.4.1-3ubuntu4) ...
Setting up cri-tools (1.30.0-1.1) ...
Setting up kubernetes-cni (1.4.0-1.1) ...
Setting up kubeadm (1.30.1-1.1) ...
Setting up kubelet (1.30.1-1.1) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                                                                                                    
Scanning linux images...                                                                                                                                                                                 

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
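
A quick sanity check that the pinned packages are in place (to be run on each node):

vagrant@node01:~$ kubeadm version
vagrant@node01:~$ kubectl version --client
vagrant@node01:~$ kubelet --version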


crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

Let's set up crictl in case we need to examine containers:

vagrant@controlplane:~$ sudo crictl config \
    --set runtime-endpoint=unix:///run/containerd/containerd.sock \
    --set image-endpoint=unix:///run/containerd/containerd.sock
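
With these endpoints configured, crictl can later be used to list the images and containers managed by containerd, e.g.:

vagrant@controlplane:~$ sudo crictl images
vagrant@controlplane:~$ sudo crictl ps -a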


Creating a cluster (from control plane)



We are now ready to create a cluster with kubeadm: Creating a cluster with kubeadm | Kubernetes. This is done from the master node only.

On the master node, let's check the IP address:

vagrant@controlplane:~$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 69914sec preferred_lft 69914sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 66347sec preferred_lft 66347sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever

We want kubelet to use the bridged network (the node's primary IP address) rather than the default choice, which is the first non-loopback network adapter and in this Vagrant setup would be the NAT adapter: 

vagrant@controlplane:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.19'

vagrant@node01:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.21'

vagrant@node02:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.22'
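
These heredocs assume PRIMARY_IP is already exported in each node's shell (presumably by the course provisioning scripts); if it weren't, it could be derived from the bridged interface, e.g.:

vagrant@controlplane:~$ PRIMARY_IP=$(ip -o -4 addr show enp0s8 | awk '{print $4}' | cut -d/ -f1)
vagrant@controlplane:~$ echo $PRIMARY_IP
192.168.0.19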



When we run the kubeadm init command, we have to specify --pod-network-cidr to define the pod network address range. We'll use 10.244.0.0/16 for the pod network and 10.96.0.0/16 for the service network:

vagrant@controlplane:~$ POD_CIDR=10.244.0.0/16
vagrant@controlplane:~$ SERVICE_CIDR=10.96.0.0/16

We'll also use --apiserver-advertise-address to specify the address the API server will advertise and listen on; we'll set it to the static IP of the master node, which makes the API server reachable from all worker nodes.

Let's now provision the cluster:

vagrant@controlplane:~$ sudo kubeadm init --pod-network-cidr $POD_CIDR --service-cidr $SERVICE_CIDR --apiserver-advertise-address $PRIMARY_IP
[init] Using Kubernetes version: v1.30.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0523 00:26:36.688123    4302 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [192.168.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [192.168.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.013183546s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 7.505965178s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 7dorxi.kf6ttb6y0bpiku7t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.19:6443 --token 7dorxi.kf6ttb6y0bpiku7t \
        --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2 


The command above creates a kubeconfig file (/etc/kubernetes/admin.conf) which we need to copy to the location where kubectl will be looking for it.

Let's run the suggested kubeconfig setup commands from above, on the master node only:

vagrant@controlplane:~$ {

    mkdir ~/.kube

    sudo cp /etc/kubernetes/admin.conf ~/.kube/config

    sudo chown $(id -u):$(id -g) ~/.kube/config

    chmod 600 ~/.kube/config

}


Let's check the kubectl config:

$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...K
    server: https://192.168.0.19:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0t...S0K
    client-key-data: LS0tL...LQo=


Let's test that kubectl works by checking the pods created in the kube-system namespace (at this point there are no pods in the default namespace!):

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7db6d8ff4d-d4fkw               0/1     Pending   0          13s
coredns-7db6d8ff4d-xk6t8               0/1     Pending   0          13s
etcd-controlplane                      1/1     Running   0          30s
kube-apiserver-controlplane            1/1     Running   0          30s
kube-controller-manager-controlplane   1/1     Running   0          30s
kube-proxy-shnrp                       1/1     Running   0          14s
kube-scheduler-controlplane            1/1     Running   0          30s
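
At this point kubectl get nodes would likely show the controlplane node as NotReady (and the CoreDNS pods as Pending, as above), because no pod network add-on has been deployed yet:

vagrant@controlplane:~$ kubectl get nodes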


Let's check the connections to the API server on port 6443 (the command below lists established connections):

vagrant@controlplane:~$ sudo ss | grep 6443
tcp   ESTAB  0      0                                                                            192.168.0.19:45602              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34290              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:45556              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34272              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34256              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:53434              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34290       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34256       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:53434       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34272       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:45602       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:45556       
tcp   ESTAB  0      0                                                                                   [::1]:6443                      [::1]:51252       
tcp   ESTAB  0      0                                                                                   [::1]:51252                     [::1]:6443        
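
The listening socket itself can be checked explicitly, e.g.:

vagrant@controlplane:~$ sudo ss -tlnp | grep 6443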


Let's now follow the next suggestion from the kubeadm init output - to deploy a pod network to the cluster. 
https://kubernetes.io/docs/concepts/cluster-administration/addons/ lists a few options and we'll be using rajch/weave: Simple, resilient multi-host containers networking and more. 
 
To install Weave networking:

vagrant@controlplane:~$ kubectl apply -f "https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

This adds a new DaemonSet to the kube-system namespace:

vagrant@controlplane:~$ kubectl get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   39h
kube-system   weave-net    3         3         3       3            3           <none>                   38h

To check its configuration (optional):

$ kubectl edit ds weave-net -n kube-system
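Optionally, we can also wait until the Weave DaemonSet has finished rolling out (just a convenience check):

$ kubectl rollout status daemonset weave-net -n kube-system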


Let's see network interfaces with Weave networking in place:

vagrant@controlplane:~$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 69914sec preferred_lft 69914sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 66347sec preferred_lft 66347sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b6:44:ba:8d:e9:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b444:baff:fe8d:e95b/64 scope link 
       valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 66:62:1f:35:75:ec brd ff:ff:ff:ff:ff:ff
    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::6462:1fff:fe35:75ec/64 scope link 
       valid_lft forever preferred_lft forever
8: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default 
    link/ether 5e:be:b5:c8:7b:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::90ef:e2ff:fe04:ae5f/64 scope link 
       valid_lft forever preferred_lft forever
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether 7a:e2:29:f8:dd:d9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::74c2:c5ff:fe4a:fbd4/64 scope link 
       valid_lft forever preferred_lft forever
10: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether ee:23:4d:40:2d:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4876:bbff:feb7:8f1a/64 scope link 
       valid_lft forever preferred_lft forever
12: vethwepl83832e9@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether 52:6f:59:3d:a1:ff brd ff:ff:ff:ff:ff:ff link-netns cni-92fbdf18-5b4d-840b-a941-0c7acac23d91
    inet6 fe80::506f:59ff:fe3d:a1ff/64 scope link 
       valid_lft forever preferred_lft forever
14: vethwepleb4ffea@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether be:d2:26:fa:30:00 brd ff:ff:ff:ff:ff:ff link-netns cni-435511a3-4edb-8689-0dd8-11e729008d34
    inet6 fe80::bcd2:26ff:fefa:3000/64 scope link 
       valid_lft forever preferred_lft forever


Let's check if all pods are running now:

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS              RESTARTS      AGE
coredns-7db6d8ff4d-d4fkw               1/1     Running             0             3m35s
coredns-7db6d8ff4d-xk6t8               0/1     ContainerCreating   0             3m35s
etcd-controlplane                      1/1     Running             0             3m52s
kube-apiserver-controlplane            1/1     Running             0             3m52s
kube-controller-manager-controlplane   1/1     Running             0             3m52s
kube-proxy-shnrp                       1/1     Running             0             3m36s
kube-scheduler-controlplane            1/1     Running             0             3m52s
weave-net-tgnrf                        2/2     Running             1 (17s ago)   35s


And a little bit later, all are running:

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS      AGE
coredns-7db6d8ff4d-d4fkw               1/1     Running   0             37h
coredns-7db6d8ff4d-xk6t8               1/1     Running   0             37h
etcd-controlplane                      1/1     Running   0             37h
kube-apiserver-controlplane            1/1     Running   0             37h
kube-controller-manager-controlplane   1/1     Running   0             37h
kube-proxy-shnrp                       1/1     Running   0             37h
kube-scheduler-controlplane            1/1     Running   0             37h
weave-net-tgnrf                        2/2     Running   1 (37h ago)   37h


If you get:

vagrant@controlplane:~$ kubectl get pod
E0522 14:37:24.977070   19903 memcache.go:265] couldn't get current server API group list: Get "https://192.168.0.16:6443/api?timeout=32s": dial tcp 192.168.0.16:6443: connect: connection refused
The connection to the server 192.168.0.16:6443 was refused - did you specify the right host or port?

...that might be a sign that systemd has not been set as the cgroup driver for containerd.
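If that is the case, a typical fix (a sketch; paths assume the default containerd package layout) is to switch containerd to the systemd cgroup driver and restart the services:

# regenerate the default config, enable SystemdCgroup, then restart containerd and kubelet
$ containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd kubelet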


Joining worker nodes to the cluster



Let's get the command for joining the cluster (it was printed in the kubeadm init output; kubeadm token create issues a fresh token, which is handy if the original one has already expired):

vagrant@controlplane:~$ kubeadm token create --print-join-command
kubeadm join 192.168.0.19:6443 --token 2jb2ri.qnm1ktzb49ovm9f1 --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2 

On each worker node:

Become root (or run the following commands with sudo):

vagrant@node01:~$ sudo -i
root@node01:~# 

Join the worker node to the cluster:

root@node01:~# kubeadm join 192.168.0.19:6443 --token 2jb2ri.qnm1ktzb49ovm9f1 --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.025845416s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on the control plane, let's check that all nodes are visible:

vagrant@controlplane:~$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   37h   v1.30.1
node01         Ready    <none>          88s   v1.30.1
node02         Ready    <none>          21s   v1.30.1
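The worker nodes show <none> under ROLES because kubeadm only labels the control plane node. If we want that column populated, we can optionally add the (purely cosmetic) role label ourselves:

$ kubectl label node node01 node-role.kubernetes.io/worker=
$ kubectl label node node02 node-role.kubernetes.io/worker=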

Current pods:

vagrant@controlplane:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS        AGE
kube-system   coredns-7db6d8ff4d-d4fkw               1/1     Running   0               37h
kube-system   coredns-7db6d8ff4d-xk6t8               1/1     Running   0               37h
kube-system   etcd-controlplane                      1/1     Running   0               37h
kube-system   kube-apiserver-controlplane            1/1     Running   0               37h
kube-system   kube-controller-manager-controlplane   1/1     Running   0               37h
kube-system   kube-proxy-fdg79                       1/1     Running   0               3m20s
kube-system   kube-proxy-shnrp                       1/1     Running   0               37h
kube-system   kube-proxy-xbnw8                       1/1     Running   0               4m28s
kube-system   kube-scheduler-controlplane            1/1     Running   0               37h
kube-system   weave-net-55l5f                        2/2     Running   0               4m28s
kube-system   weave-net-p5m45                        2/2     Running   1 (2m37s ago)   3m20s
kube-system   weave-net-tgnrf                        2/2     Running   1 (37h ago)     37h


Current replicasets:

vagrant@controlplane:~$ kubectl get replicasets --all-namespaces
NAMESPACE     NAME                 DESIRED   CURRENT   READY   AGE
kube-system   coredns-7db6d8ff4d   2         2         2       37h

Current deployments:

vagrant@controlplane:~$ kubectl get deployments --all-namespaces
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2/2     2            2           37h

Current services:

vagrant@controlplane:~$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  37h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   37h

Let's quickly test that we can create a pod in the cluster:

vagrant@controlplane:~$ kubectl run nginx --image=nginx
pod/nginx created

vagrant@controlplane:~$ kubectl get pod
NAME  READY   STATUS    RESTARTS        AGE
nginx 1/1     Running   0               10s                                                

vagrant@controlplane:~$ kubectl delete pod nginx
pod "nginx" deleted

vagrant@controlplane:~$ kubectl get pod
No resources found in default namespace.



Application deployment on the cluster



Let's now create our own deployment of the Nginx web server.

On master:

vagrant@controlplane:~$ kubectl create deployment nginx --image nginx:alpine
deployment.apps/nginx created
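For reference, the imperative command above corresponds roughly to applying a manifest like this (a sketch of what kubectl create deployment generates):

$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
EOF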

Let's check Kubernetes objects:

vagrant@controlplane:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
default       pod/nginx-6f564d4fd9-jp898                 1/1     Running   0             114s
kube-system   pod/coredns-7db6d8ff4d-d4fkw               1/1     Running   0             38h
kube-system   pod/coredns-7db6d8ff4d-xk6t8               1/1     Running   0             38h
kube-system   pod/etcd-controlplane                      1/1     Running   0             38h
kube-system   pod/kube-apiserver-controlplane            1/1     Running   0             38h
kube-system   pod/kube-controller-manager-controlplane   1/1     Running   0             38h
kube-system   pod/kube-proxy-fdg79                       1/1     Running   0             27m
kube-system   pod/kube-proxy-shnrp                       1/1     Running   0             38h
kube-system   pod/kube-proxy-xbnw8                       1/1     Running   0             28m
kube-system   pod/kube-scheduler-controlplane            1/1     Running   0             38h
kube-system   pod/weave-net-55l5f                        2/2     Running   0             28m
kube-system   pod/weave-net-p5m45                        2/2     Running   1 (27m ago)   27m
kube-system   pod/weave-net-tgnrf                        2/2     Running   1 (38h ago)   38h

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  38h
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   38h

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   38h
kube-system   daemonset.apps/weave-net    3         3         3       3            3           <none>                   38h

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx     1/1     1            1           114s
kube-system   deployment.apps/coredns   2/2     2            2           38h

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-6f564d4fd9     1         1         1       114s
kube-system   replicaset.apps/coredns-7db6d8ff4d   2         2         2       38h


Our own objects are all in the default namespace, so we can shorten this:

vagrant@controlplane:~$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6f564d4fd9-jp898   1/1     Running   0          2m22s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   38h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           2m22s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6f564d4fd9   1         1         1       2m22s

We can also check which pods are running on which nodes:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS        AGE     IP             NODE           NOMINATED NODE   READINESS GATES
default       nginx-6f564d4fd9-jp898                 1/1     Running   0               30h     10.44.0.1      node01         <none>           <none>
kube-system   coredns-7db6d8ff4d-d4fkw               1/1     Running   0               2d20h   10.32.0.2      controlplane   <none>           <none>
kube-system   coredns-7db6d8ff4d-xk6t8               1/1     Running   0               2d20h   10.32.0.3      controlplane   <none>           <none>
kube-system   etcd-controlplane                      1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-apiserver-controlplane            1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-controller-manager-controlplane   1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-proxy-fdg79                       1/1     Running   0               31h     192.168.0.22   node02         <none>           <none>
kube-system   kube-proxy-shnrp                       1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-proxy-xbnw8                       1/1     Running   0               31h     192.168.0.21   node01         <none>           <none>
kube-system   kube-scheduler-controlplane            1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   weave-net-55l5f                        2/2     Running   0               31h     192.168.0.21   node01         <none>           <none>
kube-system   weave-net-p5m45                        2/2     Running   1 (31h ago)     31h     192.168.0.22   node02         <none>           <none>
kube-system   weave-net-tgnrf                        2/2     Running   1 (2d20h ago)   2d20h   192.168.0.19   controlplane   <none>           <none>


Let's check that the nginx pod is indeed listening on port 80:

$ kubectl exec nginx-6f564d4fd9-jp898 -- netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/nginx: master pro
tcp        0      0 :::80                   :::*                    LISTEN      1/nginx: master pro


Let's expose the deployment's port 80 via a NodePort service:

vagrant@controlplane:~$ kubectl expose deploy nginx --type=NodePort --port 80
service/nginx exposed

The new NodePort service now appears in the list of services:

vagrant@controlplane:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        38h
nginx        NodePort    10.96.89.115   <none>        80:30961/TCP   30s 
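For reference, kubectl expose generates a service roughly equivalent to this manifest (a sketch; the nodePort itself is auto-assigned from the 30000-32767 range, so it is normally omitted):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF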

Let's extract the port number:

vagrant@controlplane:~$ PORT_NUMBER=$(kubectl get service -l app=nginx -o jsonpath="{.items[0].spec.ports[0].nodePort}")
echo -e "\n\nService exposed on NodePort $PORT_NUMBER"

Service exposed on NodePort 30961

Verification:

vagrant@controlplane:~$ echo $PORT_NUMBER
30961
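An equivalent query that goes straight to the service by name (same jsonpath mechanism, no label selector needed):

$ kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'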

Here is the full diagram of the system:

For larger image click on https://drive.google.com/file/d/1LRTVSPn4gKoGWxJMuOlXweO9Gje4uhT9/view?usp=sharing




Testing the application deployed on the Kubernetes cluster



We can now access our application (Nginx) from the controlplane node.

Reply when hitting the NodePort on the first worker node:

vagrant@controlplane:~$ curl http://node01:$PORT_NUMBER
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Reply when hitting the NodePort on the second worker node (the NodePort is open on every node; kube-proxy forwards the request to the nginx pod, which in our case runs on node01):

vagrant@controlplane:~$ curl http://node02:$PORT_NUMBER
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


Test from the browser:

vagrant@controlplane:~$ echo "http://$(dig +short node01):$PORT_NUMBER"
http://192.168.0.21:30961
vagrant@controlplane:~$ echo "http://$(dig +short node02):$PORT_NUMBER"
http://192.168.0.22:30961
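Assuming the node IPs are reachable from the host (they are here, since the VMs sit on a bridged network), the same URLs can also be tested from the host terminal before opening the browser; the IPs and port are the ones printed above, so they will differ per setup:

$ curl -I http://192.168.0.21:30961
$ curl -I http://192.168.0.22:30961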




Stopping the Vagrant Virtual Machines



../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant halt
==> node02: Attempting graceful shutdown of VM...
==> node01: Attempting graceful shutdown of VM...
==> controlplane: Attempting graceful shutdown of VM...


We can see in VirtualBox that all 3 VMs are powered off:
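To bring the cluster back later, or to remove the VMs entirely once finished:

$ vagrant up          # boots all three VMs again; the cluster normally comes back up on its own
$ vagrant destroy -f  # deletes the VMs completely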



