
Sunday, 19 May 2024

Introduction to Vagrant

 


Vagrant:
  • a modular framework for working with Virtual Machines (VMs) and containers 
  • enables the creation and configuration of lightweight, reproducible, and portable development environments (Vagrant Machines)
  • cross-platform (Win, MacOS, Linux)
  • has networking with VMs set up out of the box
  • the directory which contains the Vagrantfile automatically gets synced to the VM
  • important concept: Provisioners and Providers
    • they can be mixed and matched
  • designed to work with various virtualization platforms, such as VirtualBox, VMware, Hyper-V, and more, making it a versatile tool for developers

Vagrant Machine

  • lightweight, reproducible, and portable development environment
  • any virtual machine that was created by Vagrant as the result of an initial vagrant up command
  • different from a standard/full virtual machine in that it is managed from a terminal on the host computer (but under the bonnet it does run a VM, e.g. a VirtualBox VM, which can be verified by looking at the VirtualBox UI)
  • we write programs on the host OS but run them inside a Vagrant machine which runs a different OS yet can work with our files and folders on the host OS
  • we need to install a virtualization provider, e.g. VirtualBox, and Vagrant (and on Windows also Cygwin)
  • we can then download a Vagrant machine with the desired guest OS and either fill it with our custom software or download a ready-made machine
  • A Vagrant machine is compiled from a box. It can be a virtual machine, a container or a remote server from a cloud service.

Box

  • a package that can be used to create Vagrant machines
  • a file that we can download from a registry (e.g. Hashicorp Vagrant Cloud - https://app.vagrantup.com/boxes/search) and install on our host to get access to a complete computer (VM) running another OS 
  • We can download boxes from app.vagrantup.com, or we can build a new box from a Vagrantfile.
  • A box can be used as a base for another box. 
  • The base boxes are usually operating system boxes downloaded from app.vagrantup.com.
  • Vagrant Boxes | Vagrant | HashiCorp Developer

Providers

  • provide virtualization support
  • responsible for providing the virtualization technology that will run our machine
  • they bring VMs into existence (see the configuration sketch after this list)
  • 2 types:
    • Local
      • VirtualBox, libvirt, VMWare, Docker
    • Remote
      • OpenStack, Digital Ocean, AWS
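
The provider for a machine is usually selected and tuned in the Vagrantfile, and it can also be forced on the command line with vagrant up --provider=PROVIDER (see the help output later in this post). A minimal sketch, assuming VirtualBox is installed on the host (the memory and CPU values are only illustrative):

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # fine-tune the VirtualBox provider for this machine
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"   # RAM in MB
    vb.cpus   = 2        # number of virtual CPUs
  end
end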

Provisioners

  • Provision the VM
  • responsible for installing and configuring the necessary software on a newly created Vagrant machine.
  • they do shared, repeatable configuration against the VM (see the sketch after this list)
  • types:
    • Simple on-ramp
      • Shell (inline commands, shell scripts)
    • More powerful
      • Ansible, Puppet, Chef, Salt
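
The shell provisioner is demonstrated later in this post; as a sketch of one of the "more powerful" options, both kinds can be declared in the Vagrantfile like this (assuming Ansible is installed on the host and playbook.yml is a hypothetical playbook sitting next to the Vagrantfile):

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # simple on-ramp: run a command inside the guest
  config.vm.provision "shell", inline: "yum install -y git"

  # more powerful: hand configuration over to Ansible
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end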

Vagrantfiles

  • a file that describes how to create one or more Vagrant machines. Vagrantfiles use the Ruby language, as well as objects provided by Vagrant itself.

How to start with Vagrant?


Let's create a test directory for Vagrant:

$ mkdir vagrant-test 
$ cd vagrant-test

Let's check out the options:

$ vagrant --help
Usage: vagrant [options] <command> [<args>]

    -h, --help                       Print this help.

Common commands:
     autocomplete    manages autocomplete installation on host
     box             manages boxes: installation, removal, etc.
     cloud           manages everything related to Vagrant Cloud
     destroy         stops and deletes all traces of the vagrant machine
     global-status   outputs status Vagrant environments for this user
     halt            stops the vagrant machine
     help            shows the help for a subcommand
     init            initializes a new Vagrant environment by creating a Vagrantfile
     login           
     package         packages a running vagrant environment into a box
     plugin          manages plugins: install, uninstall, update, etc.
     port            displays information about guest port mappings
     powershell      connects to machine via powershell remoting
     provision       provisions the vagrant machine
     push            deploys code in this environment to a configured destination
     rdp             connects to machine via RDP
     reload          restarts vagrant machine, loads new Vagrantfile configuration
     resume          resume a suspended vagrant machine
     serve           start Vagrant server
     snapshot        manages snapshots: saving, restoring, etc.
     ssh             connects to machine via SSH
     ssh-config      outputs OpenSSH valid configuration to connect to the machine
     status          outputs status of the vagrant machine
     suspend         suspends the machine
     up              starts and provisions the vagrant environment
     upload          upload to machine via communicator
     validate        validates the Vagrantfile
     version         prints current and latest Vagrant version
     winrm           executes commands on a machine via WinRM
     winrm-config    outputs WinRM configuration to connect to the machine

For help on any individual command run `vagrant COMMAND -h`

Additional subcommands are available, but are either more advanced
or not commonly used. To see all subcommands, run the command
`vagrant list-commands`.
        --[no-]color                 Enable or disable color output
        --machine-readable           Enable machine readable output
    -v, --version                    Display Vagrant version
        --debug                      Enable debug output
        --timestamp                  Enable timestamps on log output
        --debug-timestamp            Enable debug output with timestamps
        --no-tty                     Enable non-interactive output

Let's check the current status of the Vagrant machine (which does not exist yet, as we haven't created a Vagrantfile and haven't started/provisioned a new Vagrant environment from it):

$ vagrant status
A Vagrant environment or target machine is required to run this
command. Run `vagrant init` to create a new Vagrant environment. Or,
get an ID of a target machine from `vagrant global-status` to run
this command on. A final option is to change to a directory with a
Vagrantfile and to try again.

Let's check out the options of vagrant init [vagrant init - Command-Line Interface | Vagrant | HashiCorp Developer]:

$ vagrant init --help
Usage: vagrant init [options] [name [url]]

Options:

        --box-version VERSION        Version of the box to add
    -f, --force                      Overwrite existing Vagrantfile
    -m, --minimal                    Use minimal Vagrantfile template (no help comments). Ignored with --template
        --output FILE                Output path for the box. '-' for stdout
        --template FILE              Path to custom Vagrantfile template
        --[no-]color                 Enable or disable color output
        --machine-readable           Enable machine readable output
    -v, --version                    Display Vagrant version
        --debug                      Enable debug output
        --timestamp                  Enable timestamps on log output
        --debug-timestamp            Enable debug output with timestamps
        --no-tty                     Enable non-interactive output
    -h, --help                       Print this help

Let's run vagrant init with no options:

$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Let's check out the created Vagrantfile:

$ cat Vagrantfile 
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "base"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Disable the default share of the current code directory. Doing this
  # provides improved isolation between the vagrant box and your host
  # by making sure your Vagrantfile isn't accessible to the vagrant box.
  # If you use this you may want to enable additional shared subfolders as
  # shown above.
  # config.vm.synced_folder ".", "/vagrant", disabled: true

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

If we try to initialize Vagrant in a directory which already contains Vagrantfile, we'll get an error:

$ vagrant init
`Vagrantfile` already exists in this directory. Remove it before
running `vagrant init`.

Now that we have a Vagrantfile, let's run the vagrant up command, which creates and configures guest machines according to the Vagrantfile [see vagrant up - Command-Line Interface | Vagrant | HashiCorp Developer]. Before that, let's check the vagrant up options:

$ vagrant up --help
Usage: vagrant up [options] [name|id]

Options:

        --[no-]provision             Enable or disable provisioning
        --provision-with x,y,z       Enable only certain provisioners, by type or by name.
        --[no-]destroy-on-error      Destroy machine if any fatal error happens (default to true)
        --[no-]parallel              Enable or disable parallelism if provider supports it
        --provider PROVIDER          Back the machine with a specific provider
        --[no-]install-provider      If possible, install the provider if it isn't installed
        --[no-]color                 Enable or disable color output
        --machine-readable           Enable machine readable output
    -v, --version                    Display Vagrant version
        --debug                      Enable debug output
        --timestamp                  Enable timestamps on log output
        --debug-timestamp            Enable debug output with timestamps
        --no-tty                     Enable non-interactive output
    -h, --help                       Print this help


name - Name of machine defined in Vagrantfile. Using name to specify the Vagrant machine to act on must be done from within a Vagrant project (directory where the Vagrantfile exists).


Let's see what happens if we don't specify the name: 

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'base' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'base' (v0) for provider: virtualbox
    default: Downloading: base
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Couldn't open file /home/bojan/vagrant-test/base

The placeholder box name "base" from the generated Vagrantfile obviously cannot be found or downloaded, so we first need to add a real box. Let's check out the vagrant box command:

$ vagrant box --help
Usage: vagrant box <subcommand> [<args>]

Available subcommands:
     add
     list
     outdated
     prune
     remove
     repackage
     update

For help on any individual subcommand run `vagrant box <subcommand> -h`
        --[no-]color                 Enable or disable color output
        --machine-readable           Enable machine readable output
    -v, --version                    Display Vagrant version
        --debug                      Enable debug output
        --timestamp                  Enable timestamps on log output
        --debug-timestamp            Enable debug output with timestamps
        --no-tty                     Enable non-interactive output

Let's list installed boxes:

$ vagrant box list
There are no installed boxes! Use `vagrant box add` to add some.


Let's go to https://app.vagrantup.com/boxes/search and pick some box to download and use:




$ vagrant box add centos/7
==> box: Loading metadata for box 'centos/7'
    box: URL: https://vagrantcloud.com/api/v2/vagrant/centos/7
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.

1) hyperv
2) libvirt
3) virtualbox
4) vmware_desktop

Enter your choice: 3
==> box: Adding box 'centos/7' (v2004.01) for provider: virtualbox
    box: Downloading: https://vagrantcloud.com/centos/boxes/7/versions/2004.01/providers/virtualbox/unknown/vagrant.box
Download redirected to host: cloud.centos.org
    box: Calculating and comparing box checksum...
==> box: Successfully added box 'centos/7' (v2004.01) for 'virtualbox'!


$ vagrant box list 
centos/7 (virtualbox, 2004.01)


Let's set this box in our Vagrantfile:

$ vi Vagrantfile 
...
config.vm.box = "centos/7"
...

Let's now boot the centos/7 VM:

$ vagrant up 
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'centos/7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'centos/7' version '2004.01' is up to date...
==> default: Setting the name of the VM: vagrant-test_default_1716141840424_31412
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: No guest additions were detected on the base box for this VM! Guest
    default: additions are required for forwarded ports, shared folders, host only
    default: networking, and more. If SSH fails on this machine, please install
    default: the guest additions and repackage the box to continue.
    default: 
    default: This is not an error message; everything may continue to work properly,
    default: in which case you may ignore this message.
==> default: Rsyncing folder: /home/bojan/vagrant-test/ => /vagrant

vagrant up triggers the following:
  • base box gets downloaded unless it's cached
  • VM boots
  • cross-OS networking gets set up
  • cross-OS shared directory gets set up (see the example below)
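
A quick way to verify the shared directory (the file name is just an example): for the centos/7 box the project folder is rsynced into the guest at boot, as the output above shows, so later changes on the host can be pushed with vagrant rsync:

$ echo "hello from the host" > hello.txt     # create a test file next to the Vagrantfile
$ vagrant rsync                              # re-sync the project folder into the guest
$ vagrant ssh -c "cat /vagrant/hello.txt"    # read the same file from inside the guest
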
Let's now ssh to the guest VM:

bojan@host:~/vagrant-test$ vagrant ssh
[vagrant@localhost ~]$ pwd
/home/vagrant
[vagrant@localhost ~]$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

[vagrant@localhost ~]$ whoami
vagrant
[vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ pwd
/home/vagrant
[vagrant@localhost ~]$ exit
logout
bojan@host:~/vagrant-test$ ls 


We were in the terminal of the Vagrant machine (VM)!

Let's check the status of the Vagrant machine:

$ vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.


$ vagrant halt --help
Usage: vagrant halt [options] [name|id]

Options:

    -f, --force                      Force shut down (equivalent of pulling power)
        --[no-]color                 Enable or disable color output
        --machine-readable           Enable machine readable output
    -v, --version                    Display Vagrant version
        --debug                      Enable debug output
        --timestamp                  Enable timestamps on log output
        --debug-timestamp            Enable debug output with timestamps
        --no-tty                     Enable non-interactive output
    -h, --help                       Print this help


Let's stop this VM:

$ vagrant halt default
==> default: Attempting graceful shutdown of VM...

Let's now verify that it's stopped:

$ vagrant status
Current machine states:

default                   poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`
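
vagrant halt only powers the machine off; the VM and its disk stay on the host. If we wanted to remove the machine completely, we could run vagrant destroy (-f skips the confirmation prompt):

$ vagrant destroy -f

The downloaded box itself stays cached and can be removed separately with vagrant box remove centos/7.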


How to use Shell as Provisioner


Let's create another Vagrant machine on the same host. This time let's use a different base OS, e.g. Fedora, and use a shell command to perform provisioning, e.g. installing a package such as mediawriter.

We'll create another test directory and in it a new Vagrantfile:

$ mkdir vagrant-test-2
$ cd vagrant-test-2

Let's create a Vagrantfile:

$ vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Let's edit the Vagrantfile:

$ vi Vagrantfile 
...
config.vm.box = "boxcutter/fedora22"
...
config.vm.provision "shell", inline: "dnf install -y mediawriter"


$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'boxcutter/fedora22' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
The box 'boxcutter/fedora22' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Vagrant Cloud, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:

URL: ["https://vagrantcloud.com/boxcutter/fedora22"]
Error: The requested URL returned error: 404

Let's use a more recent Fedora box:

$ vi Vagrantfile 
...
config.vm.box = "generic/fedora28"
...
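
Putting the two edits together, the relevant (uncommented) part of the Vagrantfile now looks roughly like this (a minimal sketch, with the generated comments omitted):

Vagrant.configure("2") do |config|
  config.vm.box = "generic/fedora28"

  # shell provisioner: runs after the machine boots for the first time
  config.vm.provision "shell", inline: "dnf install -y mediawriter"
end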

Let's try again to start the machine:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'generic/fedora28' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'generic/fedora28'
    default: URL: https://vagrantcloud.com/api/v2/vagrant/generic/fedora28
==> default: Adding box 'generic/fedora28' (v4.3.12) for provider: virtualbox (amd64)
    default: Downloading: https://vagrantcloud.com/generic/boxes/fedora28/versions/4.3.12/providers/virtualbox/amd64/vagrant.box
    default: Calculating and comparing box checksum...
==> default: Successfully added box 'generic/fedora28' (v4.3.12) for 'virtualbox (amd64)'!
==> default: Importing base box 'generic/fedora28'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'generic/fedora28' version '4.3.12' is up to date...
==> default: Setting the name of the VM: vagrant-test-2_default_1716153560668_498
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default: 
    default: Guest Additions Version: 6.0.6
    default: VirtualBox Version: 7.0
==> default: Running provisioner: shell...
    default: Running: inline script
    default: Fedora 28 - x86_64 - Updates                    3.4 MB/s |  30 MB     00:08
    default: Fedora 28 - x86_64                              2.3 MB/s |  60 MB     00:26
    default: Last metadata expiration check: 0:00:18 ago on Sun 19 May 2024 09:20:16 PM UTC.
    default: Dependencies resolved.
    default: ================================================================================
    default:  Package                       Arch    Version                   Repository
    default:                                                                            Size
    default: ================================================================================
    default: Installing:
    default:  mediawriter                   x86_64  4.1.4-1.fc28              updates  3.6 M
    ...
    default: Installing weak dependencies:
    default:  mesa-dri-drivers              x86_64  18.0.5-4.fc28             updates   12 M
    default:  ntfs-3g-system-compression    x86_64  1.0-1.fc28                updates   27 k
    default: 
    default: Transaction Summary
    default: ================================================================================
    default: Install  75 Packages
    default: 
    default: Total download size: 65 M
    default: Installed size: 209 M
    default: Downloading Packages:
    default: (1/75): qt5-qtbase-common-5.11.3-1.fc28.noarch. 135 kB/s |  39 kB     00:00
    ...
    default: (74/75): mesa-dri-drivers-18.0.5-4.fc28.x86_64. 2.0 MB/s |  12 MB     00:06
    default: (75/75): llvm-libs-6.0.1-8.fc28.x86_64.rpm      2.5 MB/s |  15 MB     00:06
    default: --------------------------------------------------------------------------------
    default: Total                                           2.9 MB/s |  65 MB     00:22
    default: Running transaction check
    default: Transaction check succeeded.
    default: Running transaction test
    default: Transaction test succeeded.
    default: Running transaction
    default:   Preparing        :                                                        1/1
    default:   Installing       : libblockdev-utils-2.16-2.fc28.x86_64                  1/75
    ...
    default:   Verifying        : ntfs-3g-system-compression-1.0-1.fc28.x86_64         74/75
    default:   Verifying        : llvm-libs-6.0.1-8.fc28.x86_64                        75/75
    default: 
    default: Installed:
    default:   mediawriter.x86_64 4.1.4-1.fc28
    default:   mesa-dri-drivers.x86_64 18.0.5-4.fc28
    ...
    default:   xcb-util-renderutil.x86_64 0.3.9-10.fc28
    default:   xcb-util-wm.x86_64 0.4.1-12.fc28
    default: 
    default: Complete!

When we run vagrant up with a shell provisioner specified, the following steps are triggered:
  • networking set up
  • mount points set up
  • shell provisioner runs, some system dependencies are installed
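
Provisioners normally run only on the first vagrant up. To run them again against an existing machine, we can use any of the following:

$ vagrant provision               # run provisioners on the running machine
$ vagrant up --provision          # start the machine and force provisioning
$ vagrant reload --provision      # restart the machine, reload the Vagrantfile and provision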

When Vagrant is running a machine, it is actually running a VM through the selected provider. In our case that is VirtualBox. We can see that VM in the VirtualBox UI:



If we click "Show" button, we'll get the terminal of the guest OS where we can type and execute commands:



It is possible to define and provision multiple Vagrant machines in a single Vagrantfile. Example: https://github.com/kodekloudhub/certified-kubernetes-administrator-course/blob/master/kubeadm-clusters/virtualbox/Vagrantfile
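
A multi-machine setup boils down to several config.vm.define blocks in a single Vagrantfile. A stripped-down sketch of the idea (the box matches the one used below, but the hostnames and private-network IPs are only illustrative; the linked Vagrantfile is far more complete):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  config.vm.define "controlplane" do |node|
    node.vm.hostname = "controlplane"
    node.vm.network "private_network", ip: "192.168.56.10"
  end

  ["node01", "node02"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.#{11 + i}"
    end
  end
end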

Their status can look, for example, like this:

$ vagrant status
Current machine states:

controlplane              running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

If we want to SSH into one of these machines, we need to specify its name:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant ssh controlplane
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-107-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Sun May 19 23:17:23 UTC 2024

  System load:  0.0               Processes:               100
  Usage of /:   3.7% of 38.70GB   Users logged in:         0
  Memory usage: 10%               IPv4 address for enp0s3: 10.0.2.15
  Swap usage:   0%


Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


Last login: Sun May 19 22:59:32 2024 from 10.0.2.2
vagrant@controlplane:~$ 

References:


https://developer.hashicorp.com/vagrant/docs/cli/up

Saturday, 18 May 2024

Provisioning multi-node cluster on the local machine using Kubeadm and Vagrant

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators. 

The previous article in the series was Kubernetes on Cloud | My Public Notepad

In previous articles we used Minikube, which launches a single VM running a single node that acts as both master and worker. This setup is used only for local tests. In production, we have multiple nodes, e.g. at least two master nodes (usually 3 to 5 - see How Many Nodes for Your Kubernetes Control Plane? - The New Stack) in a control plane and then a few (or a few hundred or thousand! - see Hey all, how many nodes do you run? : r/kubernetes) worker nodes.

To emulate a minimal production environment, let's assume we want to provision a Kubernetes cluster with one master and two worker nodes. In one of the past articles, Introduction to Kubernetes | My Public Notepad, we learned which components need to be installed on master nodes and which on worker nodes. 

To provision this cluster, we will be using a special tool called kubeadm. [Kubeadm | Kubernetes]

The kubeadm tool is used to:
  • bootstrap a Kubernetes cluster by installing all of the necessary components on the right nodes in the right order
    • by design, it cares only about bootstrapping, not about provisioning machines
  • provide kubeadm init and kubeadm join as Kubernetes best-practice "fast paths" for creating multi-node Kubernetes clusters
  • perform the actions necessary to get a minimum viable cluster up and running
  • take care of requirements around security and certificates to enable communication between all of the components
  • install all of these various components individually across different nodes
  • modify all of the necessary configuration files to make sure all the components point to each other
  • set up certificates

Let's revise the tools that have similar names and which are involved in provisioning and managing a Kubernetes cluster:
  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.
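
Once they are installed, each of them can be quickly checked from the shell, e.g.:

$ kubeadm version
$ kubectl version --client
$ kubelet --version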

Here are the steps to set up a Kubernetes cluster using the kubeadm tool at a high level:
  • provision desired number of nodes for Kubernetes cluster
    • can be physical or virtual machines
  • designate one as the master and the rest as worker nodes
  • install a container runtime on the hosts, on all nodes (master and worker)
    • it needs to support the Container Runtime Interface (CRI) - an API for integration with kubelet
    • example: containerd
  • install the kubeadm tool on all the nodes
  • initialize the master server
    • this ensures all of the required components are installed and configured on the master server
  • ensure that the network prerequisites are met
    • normal network connectivity is not enough for this
    • Kubernetes requires a special networking solution between the master and worker nodes called the Pod Network
  • have the worker nodes join the cluster (i.e. join the master node) - a command-level sketch of these steps follows this list
  • deploy our application onto the Kubernetes environment
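
At the command level, the last few steps roughly correspond to the following sketch (assuming kubeadm is already installed on all nodes; the advertise address matches the controlplane IP used later in this post, the pod network CIDR is only an example, and the exact kubeadm join command with a real token and hash is printed by kubeadm init):

# on the master node
sudo kubeadm init --apiserver-advertise-address=192.168.0.19 --pod-network-cidr=10.244.0.0/16

# make kubectl usable for the current user (kubeadm init prints these commands too)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a Pod network add-on of choice, then on each worker node:
sudo kubeadm join 192.168.0.19:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>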


Exercise: set up a Kubernetes cluster using the kubeadm tool on the local environment and deploy Nginx web server application in this cluster. Test it by accessing the application from the local browser.

Tools: 



Setting up node host machines



Cluster node machines can be bare metal computers, VMs, EC2 instances in AWS, etc. We'll be using local VMs. 

Initially, none of our 3 VMs is created:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              not created (virtualbox)
node01                    not created (virtualbox)
node02                    not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Let's create all 3 VMs:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant up
Bringing machine 'controlplane' up with 'virtualbox' provider...
Bringing machine 'node01' up with 'virtualbox' provider...
Bringing machine 'node02' up with 'virtualbox' provider...
==> controlplane: Importing base box 'ubuntu/jammy64'...
==> controlplane: Matching MAC address for NAT networking...
==> controlplane: Setting the name of the VM: controlplane
==> controlplane: Clearing any previously set network interfaces...
==> controlplane: Preparing network interfaces based on configuration...
    controlplane: Adapter 1: nat
    controlplane: Adapter 2: bridged
==> controlplane: Forwarding ports...
    controlplane: 22 (guest) => 2222 (host) (adapter 1)
==> controlplane: Running 'pre-boot' VM customizations...
==> controlplane: Booting VM...
==> controlplane: Waiting for machine to boot. This may take a few minutes...
    controlplane: SSH address: 127.0.0.1:2222
    controlplane: SSH username: vagrant
    controlplane: SSH auth method: private key
    controlplane: Warning: Remote connection disconnect. Retrying...
    controlplane: Warning: Connection reset. Retrying...
    controlplane: 
    controlplane: Vagrant insecure key detected. Vagrant will automatically replace
    controlplane: this with a newly generated keypair for better security.
    controlplane: 
    controlplane: Inserting generated public key within guest...
    controlplane: Removing insecure key from the guest if it's present...
    controlplane: Key inserted! Disconnecting and reconnecting using new SSH key...
==> controlplane: Machine booted and ready!
==> controlplane: Checking for guest additions in VM...
    controlplane: The guest additions on this VM do not match the installed version of
    controlplane: VirtualBox! In most cases this is fine, but in rare cases it can
    controlplane: prevent things such as shared folders from working properly. If you see
    controlplane: shared folder errors, please make sure the guest additions within the
    controlplane: virtual machine match the version of VirtualBox you have installed on
    controlplane: your host and reload your VM.
    controlplane: 
    controlplane: Guest Additions Version: 6.0.0 r127566
    controlplane: VirtualBox Version: 7.0
==> controlplane: Setting hostname...
==> controlplane: Configuring and enabling network interfaces...
==> controlplane: Mounting shared folders...
    controlplane: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> controlplane: Running provisioner: setup-hosts (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-2rpsff.sh
==> controlplane: Running provisioner: setup-dns (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-ufm04a.sh
==> controlplane: Running provisioner: setup-ssh (shell)...
    controlplane: Running: /tmp/vagrant-shell20240523-380226-zynnbo.sh
==> controlplane: Running provisioner: file...
    controlplane: ./ubuntu/tmux.conf => $HOME/.tmux.conf
==> controlplane: Running provisioner: file...
    controlplane: ./ubuntu/vimrc => $HOME/.vimrc
==> controlplane: Running action triggers after up ...
==> controlplane: Running trigger: Post provisioner...
    Nothing to do here
==> node01: Importing base box 'ubuntu/jammy64'...
==> node01: Matching MAC address for NAT networking...
==> node01: Setting the name of the VM: node01
==> node01: Fixed port collision for 22 => 2222. Now on port 2200.
==> node01: Clearing any previously set network interfaces...
==> node01: Preparing network interfaces based on configuration...
    node01: Adapter 1: nat
    node01: Adapter 2: bridged
==> node01: Forwarding ports...
    node01: 22 (guest) => 2200 (host) (adapter 1)
==> node01: Running 'pre-boot' VM customizations...
==> node01: Booting VM...
==> node01: Waiting for machine to boot. This may take a few minutes...
    node01: SSH address: 127.0.0.1:2200
    node01: SSH username: vagrant
    node01: SSH auth method: private key
    node01: Warning: Remote connection disconnect. Retrying...
    node01: Warning: Connection reset. Retrying...
    node01: Warning: Connection reset. Retrying...
    node01: 
    node01: Vagrant insecure key detected. Vagrant will automatically replace
    node01: this with a newly generated keypair for better security.
    node01: 
    node01: Inserting generated public key within guest...
    node01: Removing insecure key from the guest if it's present...
    node01: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node01: Machine booted and ready!
==> node01: Checking for guest additions in VM...
    node01: The guest additions on this VM do not match the installed version of
    node01: VirtualBox! In most cases this is fine, but in rare cases it can
    node01: prevent things such as shared folders from working properly. If you see
    node01: shared folder errors, please make sure the guest additions within the
    node01: virtual machine match the version of VirtualBox you have installed on
    node01: your host and reload your VM.
    node01: 
    node01: Guest Additions Version: 6.0.0 r127566
    node01: VirtualBox Version: 7.0
==> node01: Setting hostname...
==> node01: Configuring and enabling network interfaces...
==> node01: Mounting shared folders...
    node01: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> node01: Running provisioner: setup-hosts (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-js33wi.sh
==> node01: Running provisioner: setup-dns (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-ql2nuy.sh
==> node01: Running provisioner: setup-ssh (shell)...
    node01: Running: /tmp/vagrant-shell20240523-380226-8e61q9.sh
==> node01: Running action triggers after up ...
==> node01: Running trigger: Post provisioner...
    Nothing to do here
==> node02: Importing base box 'ubuntu/jammy64'...
==> node02: Matching MAC address for NAT networking...
==> node02: Setting the name of the VM: node02
==> node02: Fixed port collision for 22 => 2222. Now on port 2201.
==> node02: Clearing any previously set network interfaces...
==> node02: Preparing network interfaces based on configuration...
    node02: Adapter 1: nat
    node02: Adapter 2: bridged
==> node02: Forwarding ports...
    node02: 22 (guest) => 2201 (host) (adapter 1)
==> node02: Running 'pre-boot' VM customizations...
==> node02: Booting VM...
==> node02: Waiting for machine to boot. This may take a few minutes...
    node02: SSH address: 127.0.0.1:2201
    node02: SSH username: vagrant
    node02: SSH auth method: private key
    node02: Warning: Connection reset. Retrying...
    node02: Warning: Remote connection disconnect. Retrying...
    node02: 
    node02: Vagrant insecure key detected. Vagrant will automatically replace
    node02: this with a newly generated keypair for better security.
    node02: 
    node02: Inserting generated public key within guest...
    node02: Removing insecure key from the guest if it's present...
    node02: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node02: Machine booted and ready!
==> node02: Checking for guest additions in VM...
    node02: The guest additions on this VM do not match the installed version of
    node02: VirtualBox! In most cases this is fine, but in rare cases it can
    node02: prevent things such as shared folders from working properly. If you see
    node02: shared folder errors, please make sure the guest additions within the
    node02: virtual machine match the version of VirtualBox you have installed on
    node02: your host and reload your VM.
    node02: 
    node02: Guest Additions Version: 6.0.0 r127566
    node02: VirtualBox Version: 7.0
==> node02: Setting hostname...
==> node02: Configuring and enabling network interfaces...
==> node02: Mounting shared folders...
    node02: /vagrant => /home/bojan/dev/github/certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox
==> node02: Running provisioner: setup-hosts (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-kpm0j5.sh
==> node02: Running provisioner: setup-dns (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-d6xikj.sh
==> node02: Running provisioner: setup-ssh (shell)...
    node02: Running: /tmp/vagrant-shell20240523-380226-obnj0e.sh
==> node02: Running action triggers after up ...
==> node02: Running trigger: Post provisioner...
    Gathering IP addresses of nodes...
    Setting /etc/hosts on nodes...
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02
Uploading hosts.tmp to /tmp/hosts.tmp
Upload has completed successfully!

  Source: hosts.tmp
  Destination: /tmp/hosts.tmp
192.168.0.19  controlplane
192.168.0.21  node01
192.168.0.22  node02

VM build complete!

Use either of the following to access any NodePort services you create from your browser
replacing "port_number" with the number of your NodePort.

  http://192.168.0.21:port_number
  http://192.168.0.22:port_number


Let's check the current status of the Vagrant machines:

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Let's now ssh to each of these machines:

$ vagrant ssh controlplane
...
 IPv4 address for enp0s3: 10.0.2.15
...

vagrant@controlplane:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86109sec preferred_lft 86109sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 86112sec preferred_lft 86112sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever

Let's check that we can ping the other two VMs (similar tests should be done from both worker nodes):

vagrant@controlplane:~$ ping node01
PING node01 (192.168.0.21) 56(84) bytes of data.
64 bytes from node01 (192.168.0.21): icmp_seq=1 ttl=64 time=3.99 ms
64 bytes from node01 (192.168.0.21): icmp_seq=2 ttl=64 time=0.640 ms
64 bytes from node01 (192.168.0.21): icmp_seq=3 ttl=64 time=1.07 ms
64 bytes from node01 (192.168.0.21): icmp_seq=4 ttl=64 time=1.08 ms
^C
--- node01 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3124ms
rtt min/avg/max/mdev = 0.640/1.695/3.988/1.335 ms

vagrant@controlplane:~$ ping node02
PING node02 (192.168.0.22) 56(84) bytes of data.
64 bytes from node02 (192.168.0.22): icmp_seq=1 ttl=64 time=1.59 ms
64 bytes from node02 (192.168.0.22): icmp_seq=2 ttl=64 time=0.493 ms
64 bytes from node02 (192.168.0.22): icmp_seq=3 ttl=64 time=0.997 ms
^C
--- node02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2029ms
rtt min/avg/max/mdev = 0.493/1.026/1.588/0.447 ms


$ vagrant ssh node01
// ping controlplane and node02

$ vagrant ssh node02
// ping controlplane and node01


Let's create an SSH key pair and copy the public key to the worker nodes so we can SSH from the master to them:

vagrant@controlplane:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/vagrant/.ssh/id_rsa
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:1Tkdo+7JZttTP42vA9eHfgJbFM5ZJiYb5fj0i9R0oEY vagrant@controlplane
The key's randomart image is:
+---[RSA 3072]----+
|            E.=  |
|           ooB++o|
|          . OB+B.|
|         . o.+B..|
|        S   .o.+.|
|           o+.= =|
|            *O ++|
|           o.o*.+|
|            . oBo|
+----[SHA256]-----+

vagrant@controlplane:~$ ssh-copy-id -o StrictHostKeyChecking=no vagrant@node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
(vagrant@node01) Password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'vagrant@node01'"
and check to make sure that only the key(s) you wanted were added.

vagrant@controlplane:~$ ssh-copy-id -o StrictHostKeyChecking=no vagrant@node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
(vagrant@node02) Password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'vagrant@node02'"
and check to make sure that only the key(s) you wanted were added.


Initial setup on all nodes



The next step is to install the kubeadm tool on each of these nodes. 

We need to have a container runtime installed which implements the Container Runtime Interface (CRI), as Kubernetes (v1.30 at the time of writing) uses the CRI to talk to the container runtime (which runs containers in pods).

Container Runtimes | Kubernetes lists several options for container runtime:
  • containerd - we'll install this one
  • CRI-O
  • Docker Engine
    • Docker Engine does not implement the CRI so an additional service cri-dockerd has to be installed.
  • Mirantis Container Runtime
Regardless of the chosen container runtime, we need to configure prerequisites (https://kubernetes.io/docs/setup/production-environment/container-runtimes/#install-and-configure-prerequisites).

First we need to update the apt index and install packages used by Kubernetes:

vagrant@controlplane:~$ {
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
}

Then we need to configure kernel modules and make them persistent:

vagrant@controlplane:~$ {
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter
}
overlay
br_netfilter
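
We can verify that both modules are loaded, as suggested in the Kubernetes container runtime docs:

$ lsmod | grep overlay        # should print an overlay line
$ lsmod | grep br_netfilter   # should print a br_netfilter line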


On both master and worker nodes we'll enable IPv4 packet forwarding [https://kubernetes.io/docs/setup/production-environment/container-runtimes/#prerequisite-ipv4-forwarding-optional] by executing:

vagrant@controlplane:~$ {
    # sysctl params required by setup, params persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
    # Apply sysctl params without reboot
    sudo sysctl --system
}
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...


The above is the output from the master node. Let's verify IPv4 forwarding:

vagrant@controlplane:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1



We are running Ubuntu Jammy Jellyfish on all nodes:

vagrant@controlplane:~$ cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

There are several options for installing containerd but we'll use the apt package manager. 

Containerd is an open-source container runtime originally developed by Docker. The containerd.io packages in DEB and RPM formats are distributed by Docker (not by the containerd project). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements. [containerd vs. Docker | Docker]

containerd is a container runtime and Docker is a container engine [containerd vs. Docker: What’s the Difference? | Pure Storage Blog]

Docker uses Containerd as its runtime for creating Containers from images. Essentially, it acts as an interface (API) that allows users to use Containerd to perform low-level functionality. Simply put, when you run Docker commands in the terminal, Docker relays those commands to its low-level runtime (Containerd) that carries out all the necessary procedures. [Docker vs Containerd: A Detailed Comparison]


We can now install containerd on all nodes. Here is the output from the master:

vagrant@controlplane:~$ sudo apt-get install -y containerd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  runc
The following NEW packages will be installed:
  containerd runc
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 40.3 MB of archives.
After this operation, 152 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 runc amd64 1.1.7-0ubuntu1~22.04.2 [4267 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 containerd amd64 1.7.2-0ubuntu1~22.04.1 [36.0 MB]
Fetched 40.3 MB in 13s (3111 kB/s)                                                                                                                                                                      
Selecting previously unselected package runc.
(Reading database ... 64026 files and directories currently installed.)
Preparing to unpack .../runc_1.1.7-0ubuntu1~22.04.2_amd64.deb ...
Unpacking runc (1.1.7-0ubuntu1~22.04.2) ...
Selecting previously unselected package containerd.
Preparing to unpack .../containerd_1.7.2-0ubuntu1~22.04.1_amd64.deb ...
Unpacking containerd (1.7.2-0ubuntu1~22.04.1) ...
Setting up runc (1.1.7-0ubuntu1~22.04.2) ...
Setting up containerd (1.7.2-0ubuntu1~22.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                                                                                                    
Scanning linux images...                                                                                                                                                                                 

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.


We can check the status of the containerd service:

$ systemctl status containerd


https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers explains why systemd should be used as the cgroup driver for the kubelet and the container runtime when systemd is the selected init system. (We need to do that on all 3 nodes)

Let's first confirm which init system the node uses:

$ ps -p 1
    PID TTY          TIME CMD
      1 ?        00:00:04 systemd


https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd shows how to set systemd as the cgroup driver for containerd.

vagrant@controlplane:~$ {
    sudo mkdir -p /etc/containerd
    containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
}
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    drain_exec_sync_io_timeout = "0s"
    enable_cdi = false
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    image_pull_progress_timeout = "1m0s"
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.8"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
      setup_serially = false

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_blockio_not_enabled_errors = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          privileged_without_host_devices_all_devices_allowed = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
          sandbox_mode = "podsandbox"
          snapshotter = ""

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
        sandbox_mode = ""
        snapshotter = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.nri.v1.nri"]
    disable = true
    disable_connections = false
    plugin_config_path = "/etc/nri/conf.d"
    plugin_path = "/opt/nri/plugins"
    plugin_registration_timeout = "5s"
    plugin_request_timeout = "2s"
    socket_path = "/var/run/nri/nri.sock"

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    blockio_config_file = ""
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

  [plugins."io.containerd.transfer.v1.local"]
    config_path = ""
    max_concurrent_downloads = 3
    max_concurrent_uploaded_layers = 3

    [[plugins."io.containerd.transfer.v1.local".unpack_config]]
      differ = ""
      platform = "linux/amd64"
      snapshotter = "overlayfs"

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.metrics.shimstats" = "2s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0



After we apply this change, we need to restart containerd:

$ sudo systemctl restart containerd
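
To double-check that the change actually landed in the generated file, a simple grep (not part of the original session) should show the flag set to true:

$ grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = true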

We need to repeat these steps on all 3 nodes. 

Once that is done, we have the container runtime installed on all nodes.

The next step is installing the kubeadm, kubelet and kubectl packages on all of our machines. Let's recall what these tools are:
  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.
Here is the output of the commands from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl applied to worker node 1 (they need to be executed on all 3 nodes: the master and both workers):

vagrant@node01:~$ sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Hit:1 https://download.docker.com/linux/ubuntu jammy InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease
Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Hit:5 http://archive.ubuntu.com/ubuntu jammy-backports InRelease    
Get:6 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1472 kB]
Get:7 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [253 kB]
Get:8 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [1876 kB]
Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [318 kB]
Fetched 4148 kB in 2s (1960 kB/s)                          
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311ubuntu0.22.04.1).
curl is already the newest version (7.81.0-1ubuntu1.16).
gpg is already the newest version (2.2.27-3ubuntu2.1).
gpg set to manually installed.
The following NEW packages will be installed:
  apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 1510 B of archives.
After this operation, 170 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.12 [1510 B]
Fetched 1510 B in 0s (32.3 kB/s)        
Selecting previously unselected package apt-transport-https.
(Reading database ... 64038 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.4.12_all.deb ...
Unpacking apt-transport-https (2.4.12) ...
Setting up apt-transport-https (2.4.12) ...
Scanning processes...                                                                                                               
Scanning linux images...                                                                                                            

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.


Let's determine the latest minor version of Kubernetes:

vagrant@controlplane:~$ KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')

$ echo ${KUBE_LATEST}
v1.30
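
For reference, stable.txt returns the full version string (presumably v1.30.1 at the time, matching the package versions installed below); the awk expression keeps only the major and minor parts:

$ curl -L -s https://dl.k8s.io/release/stable.txt
v1.30.1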

As we will add the Kubernetes apt repository to the list of apt repositories, we first need to download its public signing key:

vagrant@controlplane:~$ {
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
}

Now we can add the Kubernetes apt repository:

vagrant@controlplane:~$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /

Let's now install the Kubernetes tools and put them on hold so apt doesn't upgrade them unexpectedly:

vagrant@controlplane:~$ {
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
}
Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease              
Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease      
Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease    
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  InRelease [1186 B]
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  Packages [3957 B]
Fetched 5143 B in 1s (7439 B/s)     
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 1 not upgraded.
Need to get 93.9 MB of archives.
After this operation, 343 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]      
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  cri-tools 1.30.0-1.1 [21.3 MB]
Get:8 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]                 
Get:3 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubeadm 1.30.1-1.1 [10.4 MB]
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubectl 1.30.1-1.1 [10.8 MB]                                                                              
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubernetes-cni 1.4.0-1.1 [32.9 MB]                                                                        
Get:7 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb  kubelet 1.30.1-1.1 [18.1 MB]                                                                              
Fetched 93.9 MB in 15s (6378 kB/s)                                                                                                                                                                      
Selecting previously unselected package conntrack.
(Reading database ... 64087 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
Unpacking conntrack (1:1.4.6-2build2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.30.0-1.1_amd64.deb ...
Unpacking cri-tools (1.30.0-1.1) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
Unpacking ebtables (2.0.11-4build2) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../3-kubeadm_1.30.1-1.1_amd64.deb ...
Unpacking kubeadm (1.30.1-1.1) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../4-kubectl_1.30.1-1.1_amd64.deb ...
Unpacking kubectl (1.30.1-1.1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../5-kubernetes-cni_1.4.0-1.1_amd64.deb ...
Unpacking kubernetes-cni (1.4.0-1.1) ...
Selecting previously unselected package socat.
Preparing to unpack .../6-socat_1.7.4.1-3ubuntu4_amd64.deb ...
Unpacking socat (1.7.4.1-3ubuntu4) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../7-kubelet_1.30.1-1.1_amd64.deb ...
Unpacking kubelet (1.30.1-1.1) ...
Setting up conntrack (1:1.4.6-2build2) ...
Setting up kubectl (1.30.1-1.1) ...
Setting up ebtables (2.0.11-4build2) ...
Setting up socat (1.7.4.1-3ubuntu4) ...
Setting up cri-tools (1.30.0-1.1) ...
Setting up kubernetes-cni (1.4.0-1.1) ...
Setting up kubeadm (1.30.1-1.1) ...
Setting up kubelet (1.30.1-1.1) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                                                                                                    
Scanning linux images...                                                                                                                                                                                 

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.


crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

Let's point crictl at the containerd socket in case we need to examine containers:

vagrant@controlplane:~$ sudo crictl config \
    --set runtime-endpoint=unix:///run/containerd/containerd.sock \
    --set image-endpoint=unix:///run/containerd/containerd.sock
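
Once cluster components are up, crictl can be used, for example, to list the pod sandboxes and containers that containerd manages:

$ sudo crictl pods
$ sudo crictl ps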


Creating a cluster (from the control plane)



We are now set to create a cluster with kubeadm (see Creating a cluster with kubeadm | Kubernetes). This is done from the master node only.

On the master node, let's check the IP address:

vagrant@controlplane:~$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 69914sec preferred_lft 69914sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 66347sec preferred_lft 66347sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever

We want to make the kubelet listen on the bridged network (the one with the primary IP address) rather than on the interface chosen by default, which is the first non-loopback network adapter and, on these VMs, is the NAT adapter:

vagrant@controlplane:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.19'

vagrant@node01:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.21'

vagrant@node02:~$ cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS='--node-ip ${PRIMARY_IP}'
EOF
KUBELET_EXTRA_ARGS='--node-ip 192.168.0.22'
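
The ${PRIMARY_IP} variable used above is assumed to have been set earlier (e.g. by the node provisioning script). If it isn't set, one way to derive it from the bridged interface (enp0s8 in the ip output above) would be:

$ PRIMARY_IP=$(ip -4 addr show enp0s8 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
$ echo $PRIMARY_IP
192.168.0.19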



When we run the kubeadm init command, we'll have to specify --pod-network-cidr to define the pod network range. We'll use 10.244.0.0/16:

vagrant@controlplane:~$ POD_CIDR=10.244.0.0/16
vagrant@controlplane:~$ SERVICE_CIDR=10.96.0.0/16

We'll also use --apiserver-advertise-address to specify the address the API server will advertise, and we'll set it to the static IP we assigned to the master node. This makes the API server reachable from all worker nodes.

Let's now provision the cluster:

vagrant@controlplane:~$ sudo kubeadm init --pod-network-cidr $POD_CIDR --service-cidr $SERVICE_CIDR --apiserver-advertise-address $PRIMARY_IP
[init] Using Kubernetes version: v1.30.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0523 00:26:36.688123    4302 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [192.168.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [192.168.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.013183546s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 7.505965178s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 7dorxi.kf6ttb6y0bpiku7t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.19:6443 --token 7dorxi.kf6ttb6y0bpiku7t \
        --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2 


The command above creates a kubeconfig file (/etc/kubernetes/admin.conf), which we need to copy to the location where kubectl looks for it.
 
Let's run the suggested kubeconfig setup commands from above, on the master node only:

vagrant@controlplane:~$ {
    mkdir ~/.kube
    sudo cp /etc/kubernetes/admin.conf ~/.kube/config
    sudo chown $(id -u):$(id -g) ~/.kube/config
    chmod 600 ~/.kube/config
}


Let's check the kubectl config:

$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...K
    server: https://192.168.0.19:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0t...S0K
    client-key-data: LS0tL...LQo=
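
As a quick connectivity check, kubectl cluster-info should report the API server endpoint (https://192.168.0.19:6443):

$ kubectl cluster-info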


Let's test that kubectl works by checking the pods created in the Kubernetes system namespace (at this point there are no pods in the default namespace!):

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7db6d8ff4d-d4fkw               0/1     Pending   0          13s
coredns-7db6d8ff4d-xk6t8               0/1     Pending   0          13s
etcd-controlplane                      1/1     Running   0          30s
kube-apiserver-controlplane            1/1     Running   0          30s
kube-controller-manager-controlplane   1/1     Running   0          30s
kube-proxy-shnrp                       1/1     Running   0          14s
kube-scheduler-controlplane            1/1     Running   0          30s


Let's check if the API server is listening on port 6443:

vagrant@controlplane:~$ sudo ss | grep 6443
tcp   ESTAB  0      0                                                                            192.168.0.19:45602              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34290              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:45556              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34272              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:34256              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                            192.168.0.19:53434              192.168.0.19:6443        
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34290       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34256       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:53434       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:34272       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:45602       
tcp   ESTAB  0      0                                                                   [::ffff:192.168.0.19]:6443      [::ffff:192.168.0.19]:45556       
tcp   ESTAB  0      0                                                                                   [::1]:6443                      [::1]:51252       
tcp   ESTAB  0      0                                                                                   [::1]:51252                     [::1]:6443        
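
Note that ss without options lists only established connections; to see the listening socket itself, we could run:

$ sudo ss -tlnp | grep 6443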


Let's now follow the next suggestion from the kubeadm init output: deploying a pod network to the cluster.
https://kubernetes.io/docs/concepts/cluster-administration/addons/ lists a few options; we'll be using Weave Net (rajch/weave: Simple, resilient multi-host containers networking and more).
 
To install Weave networking:

vagrant@controlplane:~$ kubectl apply -f "https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

This adds a new daemonset to the kube-system namespace:

vagrant@controlplane:~$ kubectl get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   39h
kube-system   weave-net    3         3         3       3            3           <none>                   38h

To check its configuration (optional):

$ kubectl edit ds weave-net -n kube-system
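
We could also wait for the daemonset rollout to finish on all nodes:

$ kubectl rollout status ds/weave-net -n kube-system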


Let's look at the network interfaces again, now with Weave networking in place:

vagrant@controlplane:~$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:92:d8:07:a3:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 69914sec preferred_lft 69914sec
    inet6 fe80::92:d8ff:fe07:a3a8/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:a2:bf:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.19/24 metric 100 brd 192.168.0.255 scope global dynamic enp0s8
       valid_lft 66347sec preferred_lft 66347sec
    inet6 fe80::a00:27ff:fea2:bf35/64 scope link 
       valid_lft forever preferred_lft forever
4: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether b6:44:ba:8d:e9:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b444:baff:fe8d:e95b/64 scope link 
       valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 66:62:1f:35:75:ec brd ff:ff:ff:ff:ff:ff
    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::6462:1fff:fe35:75ec/64 scope link 
       valid_lft forever preferred_lft forever
8: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default 
    link/ether 5e:be:b5:c8:7b:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::90ef:e2ff:fe04:ae5f/64 scope link 
       valid_lft forever preferred_lft forever
9: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether 7a:e2:29:f8:dd:d9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::74c2:c5ff:fe4a:fbd4/64 scope link 
       valid_lft forever preferred_lft forever
10: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether ee:23:4d:40:2d:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4876:bbff:feb7:8f1a/64 scope link 
       valid_lft forever preferred_lft forever
12: vethwepl83832e9@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether 52:6f:59:3d:a1:ff brd ff:ff:ff:ff:ff:ff link-netns cni-92fbdf18-5b4d-840b-a941-0c7acac23d91
    inet6 fe80::506f:59ff:fe3d:a1ff/64 scope link 
       valid_lft forever preferred_lft forever
14: vethwepleb4ffea@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default 
    link/ether be:d2:26:fa:30:00 brd ff:ff:ff:ff:ff:ff link-netns cni-435511a3-4edb-8689-0dd8-11e729008d34
    inet6 fe80::bcd2:26ff:fefa:3000/64 scope link 
       valid_lft forever preferred_lft forever


Let's check if all pods are running now:

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS              RESTARTS      AGE
coredns-7db6d8ff4d-d4fkw               1/1     Running             0             3m35s
coredns-7db6d8ff4d-xk6t8               0/1     ContainerCreating   0             3m35s
etcd-controlplane                      1/1     Running             0             3m52s
kube-apiserver-controlplane            1/1     Running             0             3m52s
kube-controller-manager-controlplane   1/1     Running             0             3m52s
kube-proxy-shnrp                       1/1     Running             0             3m36s
kube-scheduler-controlplane            1/1     Running             0             3m52s
weave-net-tgnrf                        2/2     Running             1 (17s ago)   35s


And a little bit later, all are running:

vagrant@controlplane:~$ kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS      AGE
coredns-7db6d8ff4d-d4fkw               1/1     Running   0             37h
coredns-7db6d8ff4d-xk6t8               1/1     Running   0             37h
etcd-controlplane                      1/1     Running   0             37h
kube-apiserver-controlplane            1/1     Running   0             37h
kube-controller-manager-controlplane   1/1     Running   0             37h
kube-proxy-shnrp                       1/1     Running   0             37h
kube-scheduler-controlplane            1/1     Running   0             37h
weave-net-tgnrf                        2/2     Running   1 (37h ago)   37h


If you get:

vagrant@controlplane:~$ kubectl get pod
E0522 14:37:24.977070   19903 memcache.go:265] couldn't get current server API group list: Get "https://192.168.0.16:6443/api?timeout=32s": dial tcp 192.168.0.16:6443: connect: connection refused
The connection to the server 192.168.0.16:6443 was refused - did you specify the right host or port?

...that might be a sign that systemd has not been set as the cgroup driver for containerd.
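
In that case, a quick way to inspect the node would be to re-check the containerd setting and look at the kubelet logs, for example:

$ grep SystemdCgroup /etc/containerd/config.toml
$ sudo journalctl -u kubelet --no-pager | tail -n 20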


Joining worker nodes to the cluster



Let's get the command for joining the cluster (a join command was printed in the kubeadm init output; kubeadm token create --print-join-command produces an equivalent command with a fresh token):

vagrant@controlplane:~$ kubeadm token create --print-join-command
kubeadm join 192.168.0.19:6443 --token 2jb2ri.qnm1ktzb49ovm9f1 --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2 

On each worker node:

Become root (or run the following command with sudo):

vagrant@node01:~$ sudo -i
root@node01:~# 

Make a worker node join the master:

root@node01:~# kubeadm join 192.168.0.19:6443 --token 2jb2ri.qnm1ktzb49ovm9f1 --discovery-token-ca-cert-hash sha256:be00e778c55e32fdf0dec5057a0dacffbcb32aeb7cfa91cb5e7edbed853536b2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.025845416s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Let's check if all nodes are visible:

vagrant@controlplane:~$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   37h   v1.30.1
node01         Ready    <none>          88s   v1.30.1
node02         Ready    <none>          21s   v1.30.1
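
The worker nodes show ROLES as <none>; if we want them labelled, we can optionally add a role label (purely cosmetic):

$ kubectl label node node01 node-role.kubernetes.io/worker=worker
$ kubectl label node node02 node-role.kubernetes.io/worker=worker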

Current pods:

vagrant@controlplane:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS        AGE
kube-system   coredns-7db6d8ff4d-d4fkw               1/1     Running   0               37h
kube-system   coredns-7db6d8ff4d-xk6t8               1/1     Running   0               37h
kube-system   etcd-controlplane                      1/1     Running   0               37h
kube-system   kube-apiserver-controlplane            1/1     Running   0               37h
kube-system   kube-controller-manager-controlplane   1/1     Running   0               37h
kube-system   kube-proxy-fdg79                       1/1     Running   0               3m20s
kube-system   kube-proxy-shnrp                       1/1     Running   0               37h
kube-system   kube-proxy-xbnw8                       1/1     Running   0               4m28s
kube-system   kube-scheduler-controlplane            1/1     Running   0               37h
kube-system   weave-net-55l5f                        2/2     Running   0               4m28s
kube-system   weave-net-p5m45                        2/2     Running   1 (2m37s ago)   3m20s
kube-system   weave-net-tgnrf                        2/2     Running   1 (37h ago)     37h


Current replicasets:

vagrant@controlplane:~$ kubectl get replicasets --all-namespaces
NAMESPACE     NAME                 DESIRED   CURRENT   READY   AGE
kube-system   coredns-7db6d8ff4d   2         2         2       37h

Current deployments:

vagrant@controlplane:~$ kubectl get deployments --all-namespaces
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2/2     2            2           37h

Current services:

vagrant@controlplane:~$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  37h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   37h

We can quickly check that we are able to create a pod in the cluster:

vagrant@controlplane:~$ kubectl run nginx --image=nginx
pod/nginx created

vagrant@controlplane:~$ kubectl get pod
NAME  READY   STATUS    RESTARTS        AGE
nginx 1/1     Running   0               10s                                                

vagrant@controlplane:~$ kubectl delete pod nginx
pod "nginx" deleted

vagrant@controlplane:~$ kubectl get pod
No resources found in default namespace.



Application deployment on the cluster



Let's now create our own deployment of the Nginx web server.

On the master node:

vagrant@controlplane:~$ kubectl create deployment nginx --image nginx:alpine
deployment.apps/nginx created

Let's check Kubernetes objects:

vagrant@controlplane:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
default       pod/nginx-6f564d4fd9-jp898                 1/1     Running   0             114s
kube-system   pod/coredns-7db6d8ff4d-d4fkw               1/1     Running   0             38h
kube-system   pod/coredns-7db6d8ff4d-xk6t8               1/1     Running   0             38h
kube-system   pod/etcd-controlplane                      1/1     Running   0             38h
kube-system   pod/kube-apiserver-controlplane            1/1     Running   0             38h
kube-system   pod/kube-controller-manager-controlplane   1/1     Running   0             38h
kube-system   pod/kube-proxy-fdg79                       1/1     Running   0             27m
kube-system   pod/kube-proxy-shnrp                       1/1     Running   0             38h
kube-system   pod/kube-proxy-xbnw8                       1/1     Running   0             28m
kube-system   pod/kube-scheduler-controlplane            1/1     Running   0             38h
kube-system   pod/weave-net-55l5f                        2/2     Running   0             28m
kube-system   pod/weave-net-p5m45                        2/2     Running   1 (27m ago)   27m
kube-system   pod/weave-net-tgnrf                        2/2     Running   1 (38h ago)   38h

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  38h
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   38h

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   38h
kube-system   daemonset.apps/weave-net    3         3         3       3            3           <none>                   38h

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx     1/1     1            1           114s
kube-system   deployment.apps/coredns   2/2     2            2           38h

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-6f564d4fd9     1         1         1       114s
kube-system   replicaset.apps/coredns-7db6d8ff4d   2         2         2       38h


Our own objects are all in the default namespace, so we can shorten this:

vagrant@controlplane:~$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6f564d4fd9-jp898   1/1     Running   0          2m22s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   38h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           2m22s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6f564d4fd9   1         1         1       2m22s

We can also check which pods are running on which nodes:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS        AGE     IP             NODE           NOMINATED NODE   READINESS GATES
default       nginx-6f564d4fd9-jp898                 1/1     Running   0               30h     10.44.0.1      node01         <none>           <none>
kube-system   coredns-7db6d8ff4d-d4fkw               1/1     Running   0               2d20h   10.32.0.2      controlplane   <none>           <none>
kube-system   coredns-7db6d8ff4d-xk6t8               1/1     Running   0               2d20h   10.32.0.3      controlplane   <none>           <none>
kube-system   etcd-controlplane                      1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-apiserver-controlplane            1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-controller-manager-controlplane   1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-proxy-fdg79                       1/1     Running   0               31h     192.168.0.22   node02         <none>           <none>
kube-system   kube-proxy-shnrp                       1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   kube-proxy-xbnw8                       1/1     Running   0               31h     192.168.0.21   node01         <none>           <none>
kube-system   kube-scheduler-controlplane            1/1     Running   0               2d20h   192.168.0.19   controlplane   <none>           <none>
kube-system   weave-net-55l5f                        2/2     Running   0               31h     192.168.0.21   node01         <none>           <none>
kube-system   weave-net-p5m45                        2/2     Running   1 (31h ago)     31h     192.168.0.22   node02         <none>           <none>
kube-system   weave-net-tgnrf                        2/2     Running   1 (2d20h ago)   2d20h   192.168.0.19   controlplane   <none>           <none>


Let's check if the nginx pod is indeed listening on port 80:

$ kubectl exec nginx-6f564d4fd9-jp898 -- netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/nginx: master pro
tcp        0      0 :::80                   :::*                    LISTEN      1/nginx: master pro


Let's expose the pod's port via a NodePort service:

vagrant@controlplane:~$ kubectl expose deploy nginx --type=NodePort --port 80
service/nginx exposed

The new NodePort service now appears in the list of services:

vagrant@controlplane:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        38h
nginx        NodePort    10.96.89.115   <none>        80:30961/TCP   30s 

Let's extract the port number:

vagrant@controlplane:~$ PORT_NUMBER=$(kubectl get service -l app=nginx -o jsonpath="{.items[0].spec.ports[0].nodePort}")
echo -e "\n\nService exposed on NodePort $PORT_NUMBER"

Service exposed on NodePort 30961

Verification:

vagrant@controlplane:~$ echo $PORT_NUMBER
30961
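
An equivalent query against the service by name (the same jsonpath mechanism) would print the same port:

$ kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'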

Here is the full diagram of the system:

For a larger image, click https://drive.google.com/file/d/1LRTVSPn4gKoGWxJMuOlXweO9Gje4uhT9/view?usp=sharing




Testing the application deployed on the Kubernetes cluster



We can now access our application (Nginx) from the controlplane node:

Reply when we hit the NodePort on the first worker node:

vagrant@controlplane:~$ curl http://node01:$PORT_NUMBER
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Reply when we hit the NodePort on the second worker node (the service forwards the request to the nginx pod, which runs on node01):

vagrant@controlplane:~$ curl http://node02:$PORT_NUMBER
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


To test from a browser on the host, let's print the URLs:

vagrant@controlplane:~$ echo "http://$(dig +short node01):$PORT_NUMBER"
http://192.168.0.21:30961
vagrant@controlplane:~$ echo "http://$(dig +short node02):$PORT_NUMBER"
http://192.168.0.22:30961




Stopping the Vagrant Virtual Machines



../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant status
Current machine states:

controlplane              running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

../certified-kubernetes-administrator-course/kubeadm-clusters/virtualbox$ vagrant halt
==> node02: Attempting graceful shutdown of VM...
==> node01: Attempting graceful shutdown of VM...
==> controlplane: Attempting graceful shutdown of VM...


We can see in VirtualBox that all 3 VMs are powered off:
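
To power the machines back on later, we can simply run vagrant up again from the same directory:

$ vagrant up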