Sunday 5 May 2024

Kubernetes Controllers

These are custom notes that extend my notes from an Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to course creators. 

This is the third article in the series, the previous one being Managing Pods In A Minikube Cluster | My Public Notepad.




Controllers are the processes that monitor Kubernetes objects and respond accordingly. One of them is the Replication Controller.

Let's consider a case where we have a single pod running our application. There are two main reasons why we would want to have pod replicas running.

1) High Availability


If the application crashes and the pod fails, users will no longer be able to access our application. To prevent this we need to have more than one instance of the pod (pod replicas) running at the same time. If one fails, our application keeps running on the other one.

Replication Controller helps us run multiple instances of a single pod in the Kubernetes cluster, thus providing high availability. Even if we want to have a single pod, the Replication Controller can help by automatically bringing up a new pod when the existing one fails.

Replication Controller ensures that the specified number of pods are running at all times, no matter if it's one or more pods.

2) Load Balancing 


So we have a single pod serving a set of users. When the number of users increases, we deploy an additional pod to share and balance the load across the two pods. If the demand increases further and we run out of resources on the first node, we can deploy additional pods across the other nodes in the cluster.

The Replication Controller spans multiple nodes in the cluster. It helps us balance the load across multiple pods on different nodes as well as scale our application when the demand increases.

Replication Controller vs Replica Set


Replication Controller and Replica Set are similar terms and serve the same purpose, but they are not the same.

Replication Controller is the older technology that is being replaced by Replica Set, which is the new recommended way to set up replication. There are minor differences in the way each works.

How to create a Replication Controller?


Just as we used definition files to create pod objects (see Managing Pods In A Minikube Cluster | My Public Notepad), we will use a Replication Controller definition file which, as any Kubernetes definition file, has four sections: apiVersion, kind, metadata and spec.

rc-definition.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3 


Replication Controller is supported in Kubernetes API version v1.

In any Kubernetes definition file the spec section defines what's inside the object we are creating.

Replication Controller needs to create multiple instances of a pod. We need to specify a blueprint of those pod instances so we'll create a template section under spec to provide a pod template to be used by the Replication Controller to create replicas.

For pod template definition we can reuse the contents of the pod definition file pod-definition.yml that we created in Managing Pods In A Minikube Cluster | My Public Notepad so we'll simply move all the contents of the pod definition file into the template section of the replication controller, except for the apiVersion (it is already specified) and kind (no need to specify kind as it can only be a pod). Make sure that metadata and spec are children of the template and are properly indented.
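For reference, the pod definition we are reusing looked roughly like this (a sketch reconstructed from the template above; see the previous article for the original pod-definition.yml):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx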

To specify how many replicas we want, we use the replicas property under spec (at the same level as template).

To create this Kubernetes object let's first start minikube (which starts a single-node cluster on the local host and configures kubectl to talk to that minikube cluster):

$ minikube start
😄  minikube v1.33.0 on Ubuntu 22.04
✨  Using the docker driver based on existing profile
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.43 ...
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

We can check kubectl config:

$ cat  ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/bojan/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 05 May 2024 12:54:52 BST
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Sun, 05 May 2024 12:54:52 BST
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/bojan/.minikube/profiles/minikube/client.crt
    client-key: /home/bojan/.minikube/profiles/minikube/client.key


To find more details about the ReplicaSet Kubernetes object we can use:

$ kubectl explain replicaset
GROUP:      apps
KIND:       ReplicaSet
VERSION:    v1

DESCRIPTION:
    ReplicaSet ensures that a specified number of pod replicas are running at
    any given time.
    
FIELDS:
  apiVersion    <string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata      <ObjectMeta>
    If the Labels of a ReplicaSet are empty, they are defaulted to be the same
    as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More
    info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  <ReplicaSetSpec>
    Spec defines the specification of the desired behavior of the ReplicaSet.
    More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

  status        <ReplicaSetStatus>
    Status is the most recently observed status of the ReplicaSet. This data may
    be out of date by some window of time. Populated by the system. Read-only.
    More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
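kubectl explain can also drill into nested fields, which is handy when writing the spec, selector and template sections:

$ kubectl explain replicaset.spec
$ kubectl explain replicaset.spec.selector
$ kubectl explain replicaset.spec.template.spec.containers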


Let's now create the replication controller based on its definition file:

$ kubectl create -f rc-definition.yaml 
replicationcontroller/myapp-rc created


When the replication controller is created, it first creates the pods using the pod definition template, as many as required, which is 3 in our case.

To view the list of created replication controllers:

$ kubectl get replicationcontrollers
NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   3         3         3       6m51s


The output also shows the number of desired, current and ready pod replicas.

To see all the pods and among them the pods that were created by the replication controller:

$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
myapp-rc-4mcjh   1/1     Running   0          9m58s
myapp-rc-5jrm2   1/1     Running   0          9m58s
myapp-rc-rg6jm   1/1     Running   0          9m58s

Unlike standalone, independent pods, all pods created automatically by the replication controller have names that start with the name of the replication controller (myapp-rc in our case), indicating that they were all created automatically by it.
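
To confirm which controller owns a given pod, we can check the Controlled By field in its description (a quick check; output abbreviated here):

$ kubectl describe pod myapp-rc-4mcjh | grep "Controlled By"
Controlled By:  ReplicationController/myapp-rc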

To delete objects defined in a definition file:

$ kubectl delete -f ./minikube/rc-definition.yaml 
replicationcontroller "myapp-rc" deleted

$ kubectl get pods
No resources found in default namespace.

How to create a Replica Set?


Let's create a Replica Set definition file, which is very similar to the one above for the replication controller:

replicaset-definition.yaml:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end


Kubernetes API version v1 has no support for Replica Sets, but apps/v1 does. ReplicaSets moved to apps/v1 in Kubernetes version 1.9.

One major difference between a replication controller and a replica set is that a replica set requires a selector definition.

The selector section helps the replica set identify which pods fall under it. We need to specify it, even though the template already describes the pods, because a replica set can also manage pods that were not created as part of the replica set creation.

There might be pods created before the creation of the replica set that match the labels specified in the selector. The replica set will also take (the number of) those pods into consideration when creating the replicas.

The selector is not a required field in the case of a replication controller, but it is still available. When we skip it, as we did in rc-definition.yaml, Kubernetes assumes it to be the same as the labels provided in the pod template.
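
If we did want to state it explicitly, the replication controller selector is a plain map of labels (no matchLabels wrapper). A sketch of what the spec section would look like in that case:

spec:
  replicas: 3
  selector:           # plain key/value pairs, equality-based only
    app: myapp
    type: front-end
  template:
    metadata:
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx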

In the case of a replica set, the selector needs to be stated explicitly in the definition file.

The matchLabels selector simply matches the labels specified under it to the labels on the pods.

The replica set selector also provides many other options for matching labels that were not available in a replication controller.
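
For example, besides matchLabels, a replica set selector can use matchExpressions with operators such as In, NotIn, Exists and DoesNotExist. A sketch (not tied to the files above; back-end is just an illustrative second value):

spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: type
      operator: In
      values:
      - front-end
      - back-end          # hypothetical second label value, for illustration only
  template:
    metadata:
      labels:
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx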

To create the replica set and then verify it, as well as the number of pods:

$ kubectl create -f replicaset-definition.yaml 
replicaset.apps/myapp-replicaset created

$ kubectl get replicasets
NAME               DESIRED   CURRENT   READY   AGE
myapp-replicaset   3         3         3       9s

There is also a shorter version of the same command, using the alias rs instead of replicaset(s):

$ kubectl get rs
$ kubectl delete rs my-replicaset
 
Let's check the pods:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-replicaset-47xr4   1/1     Running   0          13s
myapp-replicaset-9bq9l   1/1     Running   0          13s
myapp-replicaset-p6q7m   1/1     Running   0          13s


To find out more details about ReplicaSets, e.g. the name of the container running in the pods and which Docker image has been used for pod creation, we can use:

$ kubectl get replicasets -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS          IMAGES       SELECTOR
new-replica-set   4         4         0       57s   busybox-container   busybox777   name=busybox-pod

(The example above is not related to the execution of the previous commands; it's just a generic example.)

In the example above, we see that there are 4 current pods but none of them is READY. To find out the reason we can check the pod events with:

$ kubectl describe pods

or, if we want to check the specific pod:

$ kubectl get pods
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-2kp8n   0/1     ImagePullBackOff   0          6m19s
new-replica-set-4mrf4   0/1     ImagePullBackOff   0          6m19s
new-replica-set-nxggw   0/1     ImagePullBackOff   0          6m19s
new-replica-set-hh4j9   0/1     ImagePullBackOff   0          6m19s


$ kubectl describe pod new-replica-set-2kp8n
Name:             new-replica-set-2kp8n
Namespace:        default
Priority:         0
Service Account:  default
Node:             controlplane/192.21.178.6
Start Time:       Tue, 07 May 2024 11:05:48 +0000
Labels:           name=busybox-pod
Annotations:      <none>
Status:           Pending
IP:               10.42.0.9
IPs:
  IP:           10.42.0.9
Controlled By:  ReplicaSet/new-replica-set
Containers:
  busybox-container:
    Container ID:  
    Image:         busybox777
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo Hello Kubernetes! && sleep 3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lgtt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8lgtt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/new-replica-set-2kp8n to controlplane
  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "busybox777"
  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "busybox777": failed to pull and unpack image "docker.io/library/busybox777:latest": failed to resolve reference "docker.io/library/busybox777:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m22s (x42 over 13m)  kubelet            Back-off pulling image "busybox777"


In this case, the specified Docker image was wrong (non-existent).

To fix the replica set so that it uses the correct Docker image, we need to update the ReplicaSet definition, after which we can:
  • either delete and recreate the ReplicaSet
  • or delete all the pods, so new ones with the correct image will be created
Simply fixing/editing the ReplicaSet definition will not cause new pods (using the updated information) to be created automatically; the existing pods have to be replaced, as shown below.
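
A sketch of both options (the definition file name here is assumed):

# Option 1: edit the live ReplicaSet to fix the image, then delete the broken pods
#           so they get recreated from the updated template
$ kubectl edit replicaset new-replica-set      # change image: busybox777 -> busybox
$ kubectl delete pods -l name=busybox-pod

# Option 2: fix the image in the definition file, then delete and recreate the ReplicaSet
$ kubectl delete replicaset new-replica-set
$ kubectl create -f new-replica-set-definition.yaml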


To check events for all ReplicaSets:

$ kubectl describe replicasets

or, for specific one:

$ kubectl describe replicaset new-replica-set
Name:         new-replica-set
Namespace:    default
Selector:     name=busybox-pod
Labels:       <none>
Annotations:  <none>
Replicas:     4 current / 4 desired
Pods Status:  0 Running / 4 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=busybox-pod
  Containers:
   busybox-container:
    Image:      busybox777
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      echo Hello Kubernetes! && sleep 3600
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  7m46s  replicaset-controller  Created pod: new-replica-set-hh4j9
  Normal  SuccessfulCreate  7m46s  replicaset-controller  Created pod: new-replica-set-2kp8n
  Normal  SuccessfulCreate  7m46s  replicaset-controller  Created pod: new-replica-set-nxggw
  Normal  SuccessfulCreate  7m46s  replicaset-controller  Created pod: new-replica-set-4mrf4



Scaling In Action


Case #1: There are fewer pods than specified in replicas


Let's delete one of the pods that are managed by ReplicaSet:

$ kubectl delete pod myapp-replicaset-47xr4
pod "myapp-replicaset-47xr4" deleted


As a side note, we can delete multiple pods with:

$ kubectl delete pods my-replica-set-4mrf4 my-replica-set-hh4j9 my-replica-set-nxggw my-replica-set-tgs9d 
pod "my-replica-set-4mrf4" deleted
pod "my-replica-set-hh4j9" deleted
pod "my-replica-set-nxggw" deleted
pod "my-replica-set-tgs9d" deleted

But let's assume we only deleted that one pod.

If we now check the pods, we'll see that we still have 3 pods, but one has a new name. That is the pod that the ReplicaSet created once it realized that one pod went down:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-replicaset-ne54m   1/1     Running   0          12s
myapp-replicaset-9bq9l   1/1     Running   0          1m23s
myapp-replicaset-p6q7m   1/1     Running   0          1m23s


All events related to the specific ReplicaSet can be seen in the output of this command:

$ kubectl describe replicaset myapp-replicaset
Name:         myapp-replicaset
Namespace:    default
Selector:     env=production
Labels:       app=myapp
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  env=production
  Containers:
   nginx:
    Image:         nginx
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  17m    replicaset-controller  Created pod: myapp-replicaset-47xr4
  Normal  SuccessfulCreate  17m    replicaset-controller  Created pod: myapp-replicaset-9bq9l
  Normal  SuccessfulCreate  17m    replicaset-controller  Created pod: myapp-replicaset-p6q7m
  Normal  SuccessfulCreate  6m33s  replicaset-controller  Created pod: myapp-replicaset-ne54m


Case #2: There are more pods than specified in replicas


Let's consider the situation where we have more pods, all with the same label as specified in the ReplicaSet's matchLabels selector. Let's test what happens if we create one extra such pod.
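
nginx.yaml is not shown in these notes; assuming it defines a minimal pod carrying the same type: front-end label that the ReplicaSet selects on, it would look something like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    type: front-end     # matches the ReplicaSet's matchLabels selector
spec:
  containers:
  - name: nginx
    image: nginx

Let's create it: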

$ kubectl create -f nginx.yaml 
pod/nginx created

We might now expect to have 4 pods running, but we actually have 3, as the ReplicaSet terminated the extra pod:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-replicaset-jnm6z   1/1     Running   0          28m
myapp-replicaset-nk5n9   1/1     Running   0          18m
myapp-replicaset-nwwmh   1/1     Running   0          28m

If we had executed the last command as soon as the new pod was created, we would have seen all four pods here, but the last one would be in the Terminating state.

Labels and Selectors


What is the use case for labels and selectors? Why do we label our pods and objects in Kubernetes?

Let's assume we deployed three instances of our front end web application as three pods. We would like to create a replication controller or replica set to ensure that we have three active pods at any time.

We can use a replica set to monitor these existing pods if we have them already created, as in this example. If they were not created, the replica set will create them for us.

The role of the replica set is to monitor the pods and, if any of them were to fail, deploy new ones.
The replica set is in fact a process that monitors the pods.

Now how does the replica set know what pods to monitor? There could be hundreds of other pods in the cluster running different applications. This is where labelling our pods during creation comes in handy. We could now provide these labels as a filter for replica set. Under the selector section we use the matchLabels filter and provide the same label that we used while creating the pods. This way the replica set knows which pods to monitor.

What happens if the label in the template does not match the label in matchLabels? kubectl create fails and reports an error about the mismatched labels.
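
A sketch of such a mismatch, which kubectl create would reject (back-end here is just an illustrative, non-matching label):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        type: back-end     # does not match the selector above, so creation fails
    spec:
      containers:
      - name: nginx-container
        image: nginx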

Why are labels not enough and why do we need templates?



Let's assume we have three existing pods that were already created, and we need to create a replica set to monitor them and ensure a minimum of three are running at all times. When the replica set is created, it is not going to deploy a new instance of the pod, as three pods with matching labels already exist. However, we still need to provide a template section in the replica set specification, even though we are not expecting the replica set to create a new pod on deployment: if one of the pods were to fail in the future, the replica set needs to create a new one to maintain the desired number of pods, and for that the template definition section is required.


How to scale the Replica Set?


Let's assume we started with three replicas and then we decide to scale to six. There are multiple ways to update our replica set to scale to six replicas.

1) Update the number of replicas in the definition file to six:

replicaset-definition.yaml:

...
  replicas: 6
...


Then run:

$ kubectl replace -f replicaset-definition.yaml


2) Run kubectl scale command by specifying replica set definition file:

$ kubectl scale --replicas=6 -f replicaset-definition.yaml

or by specifying object type name (replicaset) and object name (myapp-replicaset):

$ kubectl scale --replicas=6 replicaset myapp-replicaset
replicaset.apps/myapp-replicaset scaled


Using the file name as input will not result in the number of replicas being updated automatically in the file. The number of replicas in the replica set definition file will still be three, even though we scaled our replica set to have six replicas using the kubectl scale command and the file as input.
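
We can confirm that drift between the live object and the file, for example (output shown for illustration; this assumes the file still contains the original value):

$ kubectl get replicaset myapp-replicaset -o jsonpath='{.spec.replicas}'
6
$ grep 'replicas:' replicaset-definition.yaml
  replicas: 3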

3) Run:

$ kubectl edit replicaset myapp-replicaset

This command opens a copy of the ReplicaSet definition in a text editor (e.g. vi, vim):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  creationTimestamp: "2024-05-05T22:47:22Z"
  generation: 1
  labels:
    app: myapp
    type: front-end
  name: myapp-replicaset
  namespace: default
  resourceVersion: "138523"
  uid: 6237c310-29f9-4c13-8f03-34fa70aa4307
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      creationTimestamp: null
      labels:
        type: front-end
      name: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  fullyLabeledReplicas: 3
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
"/tmp/kubectl-edit-352613877.yaml" 45L, 1135B    

As we can see, the file is stored in the /tmp directory. Changes made in this file are applied to the ReplicaSet as soon as the file is saved and the editor is closed.

If we change number of replicas from 3 to 4 and save this file:

$ kubectl edit replicaset myapp-replicaset
replicaset.apps/myapp-replicaset edited

...number of pods will automatically be increased to 4:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-replicaset-5r8vj   1/1     Running   0          4s
myapp-replicaset-jnm6z   1/1     Running   0          7h25m
myapp-replicaset-nk5n9   1/1     Running   0          7h15m
myapp-replicaset-nwwmh   1/1     Running   0  

The same approach can be used to scale down.

We can use kubectl edit replicaset to change any of the attributes in its definition, not just the number of replicas. That can be e.g. the name of the Docker image in the template.

There are also options available for automatically scaling the replica set based on load.
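
For example, a Horizontal Pod Autoscaler can keep the replica count between a minimum and a maximum based on CPU usage (a sketch; it assumes the metrics server is available in the cluster and the pod template declares CPU resource requests):

$ kubectl autoscale replicaset myapp-replicaset --min=3 --max=10 --cpu-percent=80
$ kubectl get hpa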


How to delete a Replica Set?



To delete a replica set (and its underlying pods):

$ kubectl delete replicaset myapp-replicaset

It is possible to specify multiple ReplicaSets for deletion:

$ kubectl delete replicaset replicaset-1 replicaset-2
replicaset.apps "replicaset-1" deleted
replicaset.apps "replicaset-2" deleted
---
