Saturday 18 May 2024

Useful vi editor commands


vi editor:
  • text editor for Linux and Unix
  • used to create, edit and manage text files

The vim editor (Vi IMproved) is an enhanced version of the vi editor.

Modes


Four modes in vi:
  • Command
  • Insert
  • Visual
  • Last Line (Escape)

Command Mode


This is the default mode, the one vi is in when the editor starts.
Characters are interpreted as commands and are not displayed.

In this mode we can:
  • move through a file
  • delete
  • copy
  • paste

To delete a line, move the cursor with the Up and Down keys to the desired line and then press dd.
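
Some other frequently used command-mode keystrokes (standard vi commands, listed here for reference):

h / j / k / l - move the cursor left / down / up / right
yy - copy (yank) the current line
dd - delete (cut) the current line
p - paste the last yanked or deleted line below the cursor
u - undo the last change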


Insert Mode


To get into this mode, press the i key while in Command Mode. Press Esc to return to Command Mode.


Visual Mode


In this mode we can:
  • select and delete consecutive lines of text

To enter this mode from the command mode, go first to the desired line, e.g. the first or last line of the section we want to delete, and then press the v key.

To make a selection, extend it up and down with the Up and Down keys.

To delete the selection, press x or the Delete key.


Last Line Mode (Escape Mode)


To get into this mode from Command mode, press colon (:).

In this mode we can:
  • save the file
  • execute commands
q - quit
q! - force quit - without saving changes
wq - write (save) changes and quit
w - write (save) changes to the current file
w <filename> - write (save) to another file ("Save As...")
w! - force write - overwrite the file



Provisioning multi-node cluster using Kubeadm

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 

The previous article in the series was Kubernetes on Cloud | My Public Notepad


Let's assume we want to provision a Kubernetes cluster with one master and two worker nodes:
  • Master
    • kube-apiserver
    • etcd
    • node-controller
    • replica-controller
  • Worker node 1
    • kubelet
    • container runtime (e.g. Docker)
  • Worker node 2
    • kubelet
    • container runtime (e.g. Docker)

To provision this cluster, we will be using a special tool called kubeadm. [Kubeadm | Kubernetes]

The kubeadm tool:
  • bootstraps a Kubernetes cluster by installing all of the necessary components on the right nodes in the right order
  • by design, cares only about bootstrapping, not about provisioning machines
  • is built to provide kubeadm init and kubeadm join as Kubernetes best-practice "fast paths" for creating multi-node Kubernetes clusters
  • performs the actions necessary to get a minimum viable cluster up and running
  • takes care of the requirements around security and certificates to enable communication between all of the components
  • installs all of these various components individually across different nodes
  • modifies all of the necessary configuration files to make sure all the components point to each other
  • sets up certificates


Let's review the tools that have similar names and are involved in provisioning and managing a Kubernetes cluster:
  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command-line utility to talk to your cluster.


Here are the steps, at a high level, to set up a Kubernetes cluster using the kubeadm tool (a command-level sketch follows the list):
  • provision desired number of nodes for Kubernetes cluster
    • can be physical or virtual machines
  • designate one as the master and the rest as worker nodes
  • install a container runtime on the hosts, on all nodes (master and worker)
    • it needs to support Container Runtime Interface (CRI) - an API for integration with kubelet
    • example: containerd
  • install the kubeadm tool on all the nodes
  • initialize the master server
    • this ensures all of the required components are installed and configured on the master server
  • ensure that the network prerequisites are met
    • normal network connectivity is not enough for this
    • Kubernetes requires a special networking solution between the master and worker nodes called the Pod Network
  • have the worker nodes join the cluster (i.e. join the master node)
  • deploy our application onto the Kubernetes environment


Exercise: set up a Kubernetes cluster using the kubeadm tool on the local environment

Tuesday 14 May 2024

Kubernetes on Cloud

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 



Deploying a production Kubernetes cluster in the cloud can be done in environments that are either:
  • private
  • public 
In either environment we have two types of solutions:
  • self-hosted or turnkey solution
    • we provision the required VMs
    • we use some kind of tools or scripts to configure the Kubernetes cluster on them
    • we are responsible for maintaining those VMs, patching and upgrading them
    • provisioning the cluster itself and managing the lifecycle of the cluster are mostly made easy using certain tools and scripts
      • for example, deploying a Kubernetes cluster on AWS can be made easier using tools like kops, KubeOne or kubeadm
  • hosted or managed solutions
    • more like Kubernetes-as-a-Service solution
    • provider:
      • provisions VMs
      • installs Kubernetes
      • deploys the cluster (in the context of a hosted solution, a Kubernetes cluster is a managed group of VM instances for running containerized applications)
      • configures Kubernetes
      • maintains VMs
    • For example, the Google Kubernetes Engine (GKE, formerly known as Google Container Engine) lets us provision a Kubernetes cluster in a matter of minutes with just a few clicks, without having to perform any kind of configuration ourselves.
    • In these environments, we most likely won't have access to the master nodes or VMs to perform any kind of configuration changes on them
    • The version of Kubernetes and the master nodes are all managed by the provider
    • Most popular solutions:
      • Google Kubernetes Engine (GKE)
      • Azure Kubernetes Service (AKS)
      • Amazon Elastic Kubernetes Service (EKS)
    • When provisioning and managing a Kubernetes cluster here, we can still reuse the Deployment and Service definition files that were mentioned in previous articles


Generic Process of deploying Microservices Application to Managed Cloud 


The following are the common steps in the process of deploying a Microservices Application to GKE, AKS and EKS:

  • Create Kubernetes cluster
    • in Web Console, we can select:
      • cluster size - how many VMs we want to have
      • VM spec - number of cores, memory 
  • Connect to the cluster via a terminal
    • Open a Cloud Shell terminal or a local terminal
    • Configure kubectl access by running the cloud provider's CLI tool (gcloud, aws, az), which writes the cluster credentials to ~/.kube/config so that the kubectl command-line tool (installed and available in Cloud Shell by default) talks to the right cluster; see the example commands after this list
  • Clone our repo with the Microservices App Kubernetes project (a bunch of Kubernetes object definition files)
  • For each Kubernetes object in the repo, execute kubectl create -f path/to/definition_file.yaml
  • Use kubectl commands for checking the state of the created Kubernetes objects as usual
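
As an illustration, the connection step on each provider typically looks like this (the cluster, zone/region and resource-group names below are placeholders, not values from the course):

$ gcloud container clusters get-credentials my-cluster --zone europe-west2-a    # GKE
$ aws eks update-kubeconfig --name my-cluster --region eu-west-2                # EKS
$ az aks get-credentials --resource-group my-rg --name my-cluster               # AKS

After that, the usual kubectl workflow applies:

$ kubectl create -f path/to/definition_file.yaml
$ kubectl get all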


Deploying Microservices Application on Kubernetes

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 

The previous article in the series was Microservices Application Stack on Docker | My Public Notepad.

In the previous article in the Kubernetes series we deployed the voting application on Docker. In this article we'll see how to deploy it on Kubernetes.

The goal is to:
  • deploy these applications as containers on a Kubernetes cluster
  • enable connectivity between the containers so that the applications can access each other and the databases
  • enable external access for the external-facing applications, the Voting and the Result apps, so that users can access them from a web browser
The smallest object that we can create on a Kubernetes cluster is a pod, so we must first deploy these applications as pods on our Kubernetes cluster; alternatively, we could deploy them as ReplicaSets or Deployments. We can first deploy the containers as pods directly and later convert this to using Deployments.

Once the pods are deployed, the next step is to enable connectivity between the services. Let's review the connectivity requirements (which application requires access to which services and which application needs to be accessed externally).
  • Redis database
    • accessed by the Voting app and the Worker app. The Voting app saves the vote to the Redis database, and the worker app reads the vote from the Redis database.
    • has a service that listens on Port 6379
  • PostgreSQL database
    • accessed by the Worker app to update it with the total count of votes, and it's also accessed by the Result app to read the total count of votes to be displayed in the resulting web page in the browser.
    • has a service that listens on Port 5432
  • The Voting app
    • is accessed by the external users, the voters
    • has a Python web server that listens on port 80
  • The Result app
    • accessed by the external users to view the results
    • has a Node.js based server that listens on Port 80
  • The Worker app 
    • not accessed by anyone - none of the other components or external users ever access the Worker app; it simply reads new votes from the Redis database and updates the total count of votes in the PostgreSQL database
    • has no service because it's just a worker and it's not accessed by any other service or external users.
How to make one component accessible by another? 

For example, how to make the Redis database accessible by the Voting app? The voting app should not use the IP address of the Redis pod because the IP of the pod can change if the pod restarts. 

In Kubernetes ClusterIP Service | My Public Notepad we learned that the right way is to use a ClusterIP service which is used to expose an application to other applications and in Kubernetes NodePort Service | My Public Notepad we learned that we should use NodePort service to expose an application to users for external access.

Let's first create ClusterIP services for those applications that should not be accessed outside the cluster (a sketch of such a service definition follows this list):
  • a service for Redis so that it can be accessed by the Voting app and the Worker app
    • we will name it redis and it will be accessible anywhere within the cluster by the name of the service, redis. The source code of the Voting app and the Worker app is hardcoded to point to a Redis database running on a host named redis, so it's important to name our service redis.
  • a service for the PostgreSQL pod so that the PostgreSQL DB can be accessed by the Worker and the Result app
    • named db because in the source code of the Result app and the Worker app, they are looking for a database at the address db 
    • while connecting to the database, the Worker and the Result apps pass a username and password, both of which are set to postgres, so when we deploy the Postgres DB pod we must make sure we set these as its initial credentials while creating the database.
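
A minimal sketch of the ClusterIP service for Redis, assuming the Redis pod is labeled app: redis (the label and file name are illustrative, not taken from the course repo):

redis-service-definition.yaml:

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis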
Let's now create NodePort services for those applications for which we need to enable external access:
  • a services for Voting app 
  • a service for the Result app
When we create these services we can decide which port to make them available on; it will be a high port, in the 30000-32767 range (a sketch of such a definition follows).
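
A similar sketch of the NodePort service for the Voting app, assuming the voting pod is labeled app: voting-app and serves on port 80 (label, file name and nodePort value are illustrative):

voting-app-service-definition.yaml:

apiVersion: v1
kind: Service
metadata:
  name: voting-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
  selector:
    app: voting-app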

We will be deploying five pods in total and we have four services. We have no service for the Worker pod and this is because it is not running any service that must be accessed by another application or external users. It is just a worker process that reads from one database and updates another so it does not require a service. A service is only required if the application has some kind of process or database service or web service that needs to be exposed, that needs to be accessed by others.


Deploying Microservices Application stack via Pods and Services

 
Here is the code of the Kubernetes project which deploys Voting app via Pods and Services: kubernetes-demo/minikube/voting-app at main · BojanKomazec/kubernetes-demo

Images used in the project above were derived from these:

And here is a similar implementation, via Docker only: https://github.com/dockersamples/example-voting-app

Deploying applications directly as pods has drawbacks, as the application cannot scale easily. If we want to add more instances of a particular service, or update the application (e.g. the image that it uses), the application has to be taken down while the new pod is created, which may result in downtime.

So the right approach is to use Deployments to deploy an application.


Deploying Microservices Application stack via Deployments and Services


We choose Deployments over ReplicaSets as Deployments:
  • automatically create ReplicaSets as required
  • help us perform rolling updates and rollbacks, and maintain a record of revisions and the cause of each change (some related kubectl commands are sketched below)
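
A quick sketch of the rollout-related kubectl commands this enables (the Deployment name, container name and image tag are illustrative placeholders):

$ kubectl set image deployment/voting-app-deployment voting-app=voting-app:v2
$ kubectl rollout status deployment/voting-app-deployment
$ kubectl rollout history deployment/voting-app-deployment
$ kubectl rollout undo deployment/voting-app-deployment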
Project repo from the Udemy course: kodekloudhub/example-voting-app-kubernetes

My project repo: kubernetes-demo/minikube/voting-app-via-deployments at main · BojanKomazec/kubernetes-demo - it also contains instructions on how to scale the Deployment of the Voting web app.

---

Introduction to Grafana

 



What is Grafana?

  • Web application for:
    • analytics
    • interactive visualization  - often a component in monitoring stacks in combination with:
      • time series databases:
        • InfluxDB
        • Prometheus
        • Graphite
      • monitoring platforms:
        • Sensu
        • Icinga
        • Checkmk
        • Zabbix
        • Netdata
        • PRTG
      • SIEMs (Security Information and Event Management - systems that collect logs and events, normalizing this data for further analysis that can manifest as visualizations, alerts, searches, reports, and more):
        • Elasticsearch
        • Splunk
      • other data sources.
  • Produces charts, graphs, and alerts for the web when connected to supported data sources
  • Multi-platform
    • Microsoft Windows
    • Linux
    • macOS
  • Licenses:
    • open source
    • licensed Grafana Enterprise
      • additional capabilities
      • sold as a self-hosted installation or through an account on the Grafana Labs cloud service
  • Expandable through a plug-in system (see the example command after this list)
  • Complex monitoring dashboards can be built via interactive query builders
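
For a self-hosted installation, plugins can typically be installed from the command line with grafana-cli (the plugin ID below is just an example) and activated by restarting the Grafana server:

$ grafana-cli plugins install grafana-clock-panel
$ sudo systemctl restart grafana-server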

How to start with Grafana Web Application?


Grafana web app shows a list of:
  • Dashboards
    • for data visualization
    • can be grouped into folders
  • Playlists
    • groups of dashboards that are displayed in a sequence
    • they can be used to cycle dashboards on TVs without user control 
  • Snapshots
    • interactive, publicly available, point-in-time representations of dashboards
  • Library panels
    • Reusable panels that can be added to multiple dashboards



How to create a new Dashboard?

We can add a visualisation by selecting a data source and then querying and visualising data with charts, stats and tables, or by creating lists, markdown and other widgets.


There is also a drop-down menu in the context of the dashboard, with the same content.

Adding a visualization actually adds a new panel.

We can toggle a Table view and see data points as rows in a table instead of the graph.

In the right-hand side panel we can choose the Visualisation type, for example a Bar chart.

The Suggestions tab shows thumbnails for various visualisations.

Related panels can be grouped into rows.


---

Microservices Application Stack on Docker

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 

The previous article in the series was Kubernetes LoadBalancer Service | My Public Notepad.




Let's consider a simple application stack running on Docker.

This is a simple voting application with following components:
  • voting app
    • web application developed in Python
    • provides the user with an interface to choose between two options: a cat and a dog
  • in-memory DB
    • Redis
    • when user makes a selection, the vote is stored in Redis
  • worker
    • an application which processes the vote, written in .NET
    • takes the new vote and updates the persistent database - e.g. it increments the number of votes for cats if the vote was for cats
  • persistent database
    • PostgreSQL
    • has a table with a number of votes for each category: cats and dogs
  • vote results display app
    • an interface to show the results; it reads the count of votes from the PostgreSQL database and displays it to the user
    • web application, developed in Node.js
This application is built from a combination of different services, development tools and development platforms such as Python, Node.js and .NET.

It is easy to set up an entire application stack consisting of diverse components in Docker. Let's see how to put together this application stack on a single Docker engine using docker run commands.

Let us assume that all images of applications are already built and are available on Docker Repository.

We start with the data layer by starting an instance of Redis:

$ docker run -d --name=redis redis

-d, --detach = Run container in background and print container ID.
--name = name the container (important here!)

Next we will deploy the PostgreSQL database:

$ docker run -d --name=db postgres:9.4

To start with the application services we will deploy a front-end app for voting interface. Since this is a web server, it has a web UI instance running on port 80. We will publish that port to 5000 on the host system so we can access it from a browser.

$ docker run -d --name=vote -p 5000:80 voting-app

To deploy the web application that shows the results to the user:

$ docker run -d --name=result -p 5001:80 result-app

Finally, we deploy the worker by running an instance of the worker image:

$ docker run -d --name=worker worker

If we now try to load the voting app at e.g. http://192.168.56.101:5000, we'll get an Internal Server Error. 

This is because, although all the instances are running on the host in different containers, we haven't actually linked them together. We haven't told the voting web application to use this particular Redis instance - there could be multiple Redis instances running.

We haven't told the worker and the result app to use this particular PostgreSQL database that we ran. We can use links. --link <container_name>:<host_name> is a command-line option which can be used to link two containers together. For example, the voting app web service depends on the Redis service when the web server starts, as we can see in this web server code snippet:

from flask import g
from redis import Redis

def get_redis():
    if not hasattr(g, 'redis'):
        # 'redis' is the host name the container must be able to resolve
        g.redis = Redis(host='redis', db=0, socket_timeout=5)
    return g.redis


Web app looks for a Redis service running on host redis, but the voting app container cannot resolve a host by the name redis. To make the voting app aware of the Redis service we'll add a --link option to running the voting app container to link it to the Redis container:

$ docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

As --link uses the container name, this is why we need to name the containers. --link creates an entry in the /etc/hosts file of the voting app container, mapping the host name redis to the internal IP of the redis container:

/etc/hosts:
...
172.17.0.2 redis
..

Similarly, we need to add a link for the result app to communicate with the Postgres database:

$ docker run -d --name=result -p 5001:80 --link db:db result-app

 
Finally, the worker application requires access to both the Redis as well as the Postgres database:

$ docker run -d --name=worker --link redis:redis --link db:db worker

Using links is deprecated and support may be removed in a future Docker release. This is because newer concepts in Docker - Swarm and networking - support better ways of achieving what we just did here with links (see the sketch below).
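
For illustration, a minimal sketch of the same wiring done with a user-defined bridge network instead of links (the network name voting-net is illustrative); containers attached to the same user-defined network can resolve each other by container name, so no --link options are needed:

$ docker network create voting-net
$ docker run -d --name=redis --network voting-net redis
$ docker run -d --name=db --network voting-net postgres:9.4
$ docker run -d --name=vote -p 5000:80 --network voting-net voting-app
$ docker run -d --name=result -p 5001:80 --network voting-net result-app
$ docker run -d --name=worker --network voting-net worker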

Monday 13 May 2024

Kubernetes LoadBalancer Service

This article extends my notes from a Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 

The previous article in the series was Kubernetes ClusterIP Service | My Public Notepad


A NodePort service helps us make an external-facing application available on a port on the worker nodes.


Let's assume we have the following tiers in our software product and the following pod distribution across the worker nodes in our cluster:
  • front-end application AppA
    • Deployment:
      • worker node 1: 192.168.56.70
        • pod
      • worker node 2: 192.168.56.71
        • 2 pods
      • worker node 3: 192.168.56.72
        • 0 pods
      • worker node 4: 192.168.56.73
        • 0 pods
    • NodePort service: 
      • nodePort: 30035 
  • front-end application AppB
    • Deployment:
      • worker node 1: 192.168.56.70
        • 0 pods
      • worker node 2: 192.168.56.71
        • 0 pods
      • worker node 3: 192.168.56.72
        • 2 pods
      • worker node 4: 192.168.56.73
        • 1 pod
    • NodePort service: 
      • nodePort: 31061
  • Redis 
    • Deployment:
      • worker node 5
  • DB
    • Deployment:
      • worker node 6
  • Worker
For each of these tiers we created a Deployment, so their instances run in multiple pods within those Deployments. These pods are hosted across the worker nodes in the cluster. 

Let's focus only on front-end app tiers and let's say we have a four node cluster (worker nodes 1 to 4) and pod distribution is as in the list above.

To make the applications accessible to external users, we create the services of type NodePort which help in receiving traffic on the ports on the nodes and routing the traffic to the respective pods. 

But what URL should be given to end users to access the applications?

Users could access either of these two applications using the IP of any of the nodes and the port the respective NodePort service is externally exposed on.

For AppA those combinations would be:

http://192.168.56.70:30035
http://192.168.56.71:30035
http://192.168.56.72:30035
http://192.168.56.73:30035

For AppB those combinations would be:

http://192.168.56.70:31061
http://192.168.56.71:31061
http://192.168.56.72:31061
http://192.168.56.73:31061

Note that even if pods are only hosted on two of the four nodes, they will still be accessible via the IPs of all four nodes in the cluster. For example, if the pods for AppA are only deployed on the nodes with IPs ending in 70 and 71, they would still be accessible on the ports of all the nodes in the cluster. This is because a NodePort service exposes the port on every node in the cluster (it effectively abstracts the cluster and does not care which nodes host the pods that match its selector).

Obviously, end users need a single URL like http://appA.com or http://appB.com to access the applications.

One way to achieve this is to create a new VM for load-balancing purposes, install and configure a suitable load balancer on it, such as HAProxy or Nginx, and then configure the load balancer to route traffic to the underlying nodes.

Setting up all of that external load balancing and then maintaining and managing it can be a tedious task. If our solution is deployed on a supported cloud platform like GCP, AWS or Azure, we can leverage the native load balancer of that cloud platform. Kubernetes has support for integrating with the native load balancers of certain cloud providers and configuring them for us. 

So all we need to do is set the service type for the front-end services to LoadBalancer instead of
NodePort.

loadbalancer-service-definition.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008
  selector:
    app: appA
    type: front-end

If we set the type of service to LoadBalancer in an unsupported environment, like VirtualBox or other local environments, it has the same effect as setting it to NodePort: the services are exposed on a high port on the nodes. It just won't do any external load balancer configuration.

---