Tuesday, 7 May 2024

Kubernetes Deployments

This article extends my notes from the Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators.

This is the fourth article in the series, the previous one being Kubernetes Controllers | My Public Notepad

Let's look at the process of deploying our application in a production environment. 

Let's assume we have a web server that needs to be deployed in a production environment. For high availability reasons, we need multiple instances of the web server running.

Also, whenever newer versions of application builds become available on the Docker registry, we would like to upgrade our Docker instances seamlessly. When upgrading the instances, we do not want to upgrade all of them at once, as this may impact users accessing our application; instead, we want to upgrade them one after the other. This kind of upgrade is known as a rolling update.

If one of the upgrades resulted in an unexpected error, we'd need to undo the recent change; we should be able to roll back the changes that were recently carried out.

We might want to make multiple changes to our environment, such as upgrading the underlying web server version, scaling the environment, and modifying resource allocations. We don't want to apply each change immediately after its command is run; instead, we'd like to pause our environment, make the changes, and then resume, so that all the changes are rolled out together.

All of these capabilities are available with Kubernetes Deployments.

A pod deploys a single instance of our application, such as the web application; each container is encapsulated in a pod. Multiple pods are deployed using Replication Controllers or ReplicaSets, and above those in the hierarchy sits the Deployment, another Kubernetes object.

Deployment provides us with the capability to:
  • upgrade the underlying instances seamlessly using rolling updates
  • roll back (undo) changes
  • pause and resume changes as required

How to create a deployment?


As with pods and replica sets, for this new Kubernetes object type we first create a deployment definition file. Its contents are the same as those of a replica set definition file, except for the kind, which is now Deployment.

deployment_definition.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: my-app
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.0
  replicas: 3
  selector:
    matchLabels:
      type: front-end


template contains a pod definition (well, only the metadata and spec attributes and their content; we can copy them directly from the pod definition file).

To create deployment:

$ kubectl create -f deployment_definition.yaml 
deployment.apps/myapp-deployment created


If the number of deployment definition attributes is small, we can create a deployment without a full-blown definition file:

$ kubectl create deployment <deployment_name> \
    --image=<image_name> \
    --replicas=<replicas_count>
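
For example, a hypothetical run with the values used in this article (note that the --replicas flag requires a reasonably recent kubectl version):

$ kubectl create deployment myapp-deployment --image=nginx --replicas=3
deployment.apps/myapp-deployment created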


If we try to create a deployment with the same name as an already existing deployment, we'll get an error:

$ kubectl create -f ./minikube/deployment/deployment.yaml 
Error from server (AlreadyExists): error when creating "./minikube/deployment/deployment.yaml": deployments.apps "myapp-deployment" already exists

This applies to any other Kubernetes object type.

The Kubernetes object type that we put as the kind value is case-sensitive. If we use a name that does not start with a capital letter, we'll get an error like this:

$ kubectl create -f /root/deployment-definition-1.yaml 
Error from server (BadRequest): error when creating "/root/deployment-definition-1.yaml": deployment in version "v1" cannot be handled as a Deployment: no kind "deployment" is registered for version "apps/v1" in scheme "k8s.io/apimachinery@v1.29.0-k3s1/pkg/runtime/scheme.go:100"

To see created deployments:

$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     3            3           105s

or, the version with the plural:

$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     3            3           107s


A deployment automatically creates a replica set:

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
myapp-deployment-6bddbfd569   3         3         3       3m35s


A deployment contains a single pod template and generates one ReplicaSet per revision.

To get more details about ReplicaSet:

$ kubectl get rs -o wide
NAME                          DESIRED   CURRENT   READY   AGE     CONTAINERS        IMAGES   SELECTOR
myapp-deployment-6bddbfd569   3         3         3       3m58s   nginx-container   nginx    pod-template-hash=6bddbfd569,type=front-end


The replica set, in turn, creates the pods:

$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-6bddbfd569-n7j4z   1/1     Running   0          6m6s
myapp-deployment-6bddbfd569-qzpgl   1/1     Running   0          6m6s
myapp-deployment-6bddbfd569-xlnz8   1/1     Running   0          6m6s

or 

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-6bddbfd569-n7j4z   1/1     Running   0          6m8s
myapp-deployment-6bddbfd569-qzpgl   1/1     Running   0          6m8s
myapp-deployment-6bddbfd569-xlnz8   1/1     Running   0          6m8s


Note that:

replica_set_name = <deployment_name>-<pod_template_hash>

pod_name = <replica_set_name>-<arbitrary_id> = <deployment_name>-<pod_template_hash>-<arbitrary_id>
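
For example, for the pod myapp-deployment-6bddbfd569-n7j4z from the outputs above:
  • deployment name: myapp-deployment
  • pod template hash: 6bddbfd569
  • arbitrary pod id: n7j4z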

To see all the created Kubernetes objects at once run:

$ kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/myapp-deployment-6bddbfd569-n7j4z   1/1     Running   0          14m
pod/myapp-deployment-6bddbfd569-qzpgl   1/1     Running   0          14m
pod/myapp-deployment-6bddbfd569-xlnz8   1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19d

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myapp-deployment   3/3     3            3           14m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/myapp-deployment-6bddbfd569   3         3         3       14m


In the output above, observe how the naming of Kubernetes objects reveals their hierarchy:
deployment >> 
    replicaset >> 
        pods

To get extensive information about the deployment:

$ kubectl describe deployment myapp-deployment
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Tue, 07 May 2024 23:44:32 +0100
Labels:                 app=myapp
                        type=front-end
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               type=front-end
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=my-app
           type=front-end
  Containers:
   nginx-container:
    Image:         nginx
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deployment-6bddbfd569 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  6s    deployment-controller  Scaled up replica set myapp-deployment-6bddbfd569 to 3



Updates and rollbacks in a deployment



Before we look at how we upgrade our application, let's try to understand rollouts and versioning in a deployment.


Rollouts and Versioning


After a deployment is created, it triggers a (first) rollout: the creation of the desired number of pod replicas. A new rollout creates a new deployment revision. Let's call it Revision 1.

When the application is upgraded (when the container version is updated to a new one), a new rollout is triggered and a new deployment revision is created, named Revision 2.

This helps us keep track of the changes made to our deployment and enables us to roll back to a previous version of deployment if necessary.

We can see the status of the rollout by running:

$ kubectl rollout status deployment/<deployment_name>

For example:

$ kubectl rollout status deployment/myapp-deployment
deployment "myapp-deployment" successfully rolled out

Note that it's mandatory to include the name of the Kubernetes object type - deployment in this case. If we don't include it, we'll get an error:

$ kubectl rollout status myapp-deployment
error: the server doesn't have a resource type "myapp-deployment"

If we issue the kubectl rollout status command as soon as we create a deployment, its output will show the progress of pod replica creation:

$ kubectl create -f ./minikube/deployment/deployment2.yaml 
deployment.apps/myapp-deployment created

$ kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 0 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 1 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 2 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 3 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 4 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 5 of 6 updated replicas are available...
deployment "myapp-deployment" successfully rolled out

Note how pods are brought up one at a time. Kubernetes will consider the deployment successful only if all pods have been deployed successfully.

To see the revisions and history of rollouts, run:

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
1         <none>

We can also use the format where the object type is specified separately:

$ kubectl rollout history deployment myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
1         <none>

CHANGE-CAUSE is empty because we didn't specifically ask kubectl to record the cause of the change (i.e. the command that created the deployment).

If we want this information to be recorded, we need to use the --record option with the kubectl create command. It instructs Kubernetes to record the cause of the change.

$ kubectl create -f ./minikube/deployment/deployment2.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/myapp-deployment created

$ kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 0 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 1 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 2 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 3 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 4 of 6 updated replicas are available...
Waiting for deployment "myapp-deployment" rollout to finish: 5 of 6 updated replicas are available...
deployment "myapp-deployment" successfully rolled out

$ kubectl rollout history deployment myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
1         kubectl create --filename=./minikube/deployment/deployment2.yaml --record=true

The same information will also appear in the Annotations attribute of the kubectl describe output:

$ kubectl describe deployment myapp-deployment
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Thu, 09 May 2024 23:11:09 +0100
Labels:                 tier=frontend
Annotations:            deployment.kubernetes.io/revision: 1
                        kubernetes.io/change-cause: kubectl create --filename=./minikube/deployment/deployment2.yaml --record=true
Selector:               app=myapp
Replicas:               6 desired | 6 updated | 6 total | 6 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx:
    Image:         nginx
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deployment-7b8958bfff (6/6 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  4m7s  deployment-controller  Scaled up replica set myapp-deployment-7b8958bfff to 6
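
Since --record is deprecated (as the warning above shows), one alternative (assuming we want the same effect) is to set the kubernetes.io/change-cause annotation on the deployment directly; kubectl rollout history reads the CHANGE-CAUSE column from this annotation:

$ kubectl annotate deployment myapp-deployment kubernetes.io/change-cause="upgraded nginx image"
deployment.apps/myapp-deployment annotated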


To see details about a specific revision:

$ kubectl rollout history deployment/myapp-deployment --revision=6
deployment.apps/myapp-deployment with revision #6
Pod Template:
  Labels:       app=myapp
        pod-template-hash=6bf7c4cbf
  Annotations:  kubernetes.io/change-cause: kubectl set image deployment/myapp-deployment nginx=nginx:1.18-perl --record=true
  Containers:
   nginx:
    Image:      nginx:1.18-perl
    Port:       <none>
    Host Port:  <none>
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>


NOTE: In my tests I found some issues with --record and revisions:

1) If the command fails (e.g. we specify a wrong container name), the last revision's change cause still gets updated to the failed command:

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
3         kubectl edit deployment myapp-deployment --record=true
4         kubectl edit deployment myapp-deployment --record=true
5         kubectl edit deployment myapp-deployment --record=true

$ kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/myapp-deployment image updated
error: unable to find container named "nginx-container"

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
3         kubectl edit deployment myapp-deployment --record=true
4         kubectl edit deployment myapp-deployment --record=true
5         kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record=true

2) If we don't specify --record, the previous revision's command is used as the cause:

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
3         kubectl edit deployment myapp-deployment --record=true
4         kubectl edit deployment myapp-deployment --record=true
5         kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record=true
6         kubectl set image deployment/myapp-deployment nginx=nginx:1.18-perl --record=true
7         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
8         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true

$ kubectl set image deployment/myapp-deployment nginx=nginx:1.15 
deployment.apps/myapp-deployment image updated

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
3         kubectl edit deployment myapp-deployment --record=true
4         kubectl edit deployment myapp-deployment --record=true
5         kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record=true
6         kubectl set image deployment/myapp-deployment nginx=nginx:1.18-perl --record=true
7         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
8         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
9         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true


Some revisions are seemingly missing:

$ kubectl rollout history deployment/myapp-deployment --revision=1
error: unable to find the specified revision

$ kubectl rollout history deployment/myapp-deployment --revision=2
error: unable to find the specified revision
     
This is actually by design: if a new revision N returns the deployment to the state of an older revision M, that older revision M is renumbered to N, and M is no longer shown in the history.

Deployment Strategies


There are two types of deployment strategies: 
  • recreate
  • rolling update

Let's assume that we have five replicas of our web application instance deployed.

One way to upgrade these to a newer version is to destroy all of them and then create newer instances: first destroy the five running instances, then deploy five instances of the new application version. The problem with this, as you can imagine, is that during the period after the older versions are down and before any newer version is up, the application is down and inaccessible to users. This strategy is known as the recreate strategy; thankfully, it is not the default deployment strategy.

The second strategy is where we do not destroy all of them at once. Instead, we take down the older version and bring up a newer version one by one. This way the application never goes down and the upgrade is seamless. This strategy is called rolling update.

If we do not specify a strategy while creating the deployment, it is assumed to be rolling update (rolling update is the default deployment strategy).

Here is a snippet from a deployment.yaml which shows how to specify the deployment strategy:

...
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
...
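
In this snippet, maxUnavailable: 0 means no pod may be taken down before its replacement is ready, and maxSurge: 1 allows one extra pod above the desired replica count during the update. For comparison, here is a minimal sketch of how the recreate strategy would be specified (the rollingUpdate parameters don't apply to it):

...
spec:
  replicas: 1
  strategy:
    type: Recreate
...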



How to update a deployment?


An update can mean updating our application version, updating the version of the Docker containers used, updating their labels, or updating the number of replicas, etc.

We modify a deployment definition file (e.g. we change the version/tag of the container image) and then run:

$ kubectl apply -f deployment_definition.yaml

Running this command applies the changes: a new rollout is triggered and a new revision of the deployment is created.
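
For example, assuming the definition file shown earlier, we could change the image line in the pod template and re-apply the file (expected output shown, assuming the deployment already exists):

...
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.1    # was nginx:1.7.0
...

$ kubectl apply -f deployment_definition.yaml
deployment.apps/myapp-deployment configured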

But there is another way to do the same thing:

$ kubectl set image deployment/myapp-deployment nginx-container=nginx:1.7.1

We can use this command to update the image of our application, but doing it this way means the live deployment no longer matches the configuration in our deployment definition file. (!)
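
To check how the live deployment has drifted from our definition file, we can use kubectl diff (available in newer kubectl versions):

$ kubectl diff -f deployment_definition.yaml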

We can also record the change cause:

$ kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record

We can also modify the deployment via the kubectl edit command, which edits the live object configuration stored in the cluster (not our local definition file). Let's assume we want to use some previous version of the nginx image, say 1.18. We also want to use --record so the cause of the change is tracked in the deployment revision history.

$ kubectl edit deployment myapp-deployment --record

// This command opens an editor (like vi or vim)

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubernetes.io/change-cause: kubectl create --filename=./minikube/deployment/deployment2.yaml
      --record=true
  creationTimestamp: "2024-05-09T22:11:09Z"
  generation: 1
  labels:
    tier: frontend
  name: myapp-deployment
  namespace: default
  resourceVersion: "255642"
  uid: 7c75b270-a725-4f71-971b-49d84a9d67f0
spec:
  progressDeadlineSeconds: 600
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp
      name: nginx-2
    spec:
      containers:
      - image: nginx:1.18
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 6
  conditions:
  - lastTransitionTime: "2024-05-09T22:11:17Z"
    lastUpdateTime: "2024-05-09T22:11:17Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-05-09T22:11:09Z"
    lastUpdateTime: "2024-05-09T22:11:18Z"
    message: ReplicaSet "myapp-deployment-7b8958bfff" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updatedReplicas: 6

// After we save the change and close the editor:

deployment.apps/myapp-deployment edited

How does a deployment perform an upgrade (update)?


The difference between the recreate and rolling update strategies can be seen when we look at the deployments in detail:

$ kubectl describe deployment myapp-deployment

When the recreate strategy was used, the events indicate that the old replica set was scaled down to zero first and then the new replica set scaled up to five.

However, when the rolling update strategy was used, the old replica set was scaled down one at a time, simultaneously scaling up the new replica set one at a time.

We can see that during the update we have two replica sets, the old one and the new one. kubectl describe deployment lists them in two fields of the output (as in the example above):
  • OldReplicaSets
  • NewReplicaSet

When a brand new deployment is created, say, to deploy five replicas, it first creates an initial replica set automatically, which in turn creates the number of pods required to meet the number of replicas. 

When we upgrade our application, the Kubernetes deployment object creates a new replica set under the hood and starts deploying containers there, while at the same time taking down pods in the old replica set, following the rolling update strategy.

This can be seen when we list the replica sets:

$ kubectl get replicasets 


Here we'd see the old replica set with zero pods and the new replica set with five pods.
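
Illustrative output (with a hypothetical hash for the new replica set):

$ kubectl get replicasets
NAME                          DESIRED   CURRENT   READY   AGE
myapp-deployment-6bddbfd569   0         0         0       20m
myapp-deployment-7d9f8c6b5d   5         5         5       2m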

In the previous chapter we showed how to edit the deployment. Changes take immediate effect, so if we now quickly run kubectl rollout status we'll see that the pods are being updated to the new image:

$ kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 3 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 4 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 4 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 4 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 4 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 5 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 5 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 5 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 5 out of 6 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 5 of 6 updated replicas are available...
deployment "myapp-deployment" successfully rolled out

This shows how the default update strategy, RollingUpdate, works in practice: new replicas are created first and old ones are then terminated, so users have access to the Nginx app without downtime.

$ kubectl describe deployment myapp-deployment
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Thu, 09 May 2024 23:33:10 +0100
Labels:                 tier=frontend
Annotations:            deployment.kubernetes.io/revision: 5
                        kubernetes.io/change-cause: kubectl edit deployment myapp-deployment --record=true
Selector:               app=myapp
Replicas:               6 desired | 6 updated | 6 total | 6 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx:
    Image:         nginx:1.17
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  myapp-deployment-7b8958bfff (0/0 replicas created), myapp-deployment-b6c557d47 (0/0 replicas created)
NewReplicaSet:   myapp-deployment-6866d9c964 (6/6 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  51m                 deployment-controller  Scaled up replica set myapp-deployment-7b8958bfff to 6
  Normal  ScalingReplicaSet  50m                 deployment-controller  Scaled up replica set myapp-deployment-b6c557d47 to 2
  Normal  ScalingReplicaSet  50m                 deployment-controller  Scaled down replica set myapp-deployment-7b8958bfff to 5 from 6
  Normal  ScalingReplicaSet  50m                 deployment-controller  Scaled up replica set myapp-deployment-b6c557d47 to 3 from 2
  Normal  ScalingReplicaSet  49m                 deployment-controller  Scaled down replica set myapp-deployment-7b8958bfff to 4 from 5
  Normal  ScalingReplicaSet  49m                 deployment-controller  Scaled up replica set myapp-deployment-b6c557d47 to 4 from 3
  Normal  ScalingReplicaSet  49m                 deployment-controller  Scaled down replica set myapp-deployment-7b8958bfff to 3 from 4
  Normal  ScalingReplicaSet  49m                 deployment-controller  Scaled up replica set myapp-deployment-b6c557d47 to 5 from 4
  Normal  ScalingReplicaSet  49m                 deployment-controller  Scaled down replica set myapp-deployment-7b8958bfff to 2 from 3
  Normal  ScalingReplicaSet  43m (x17 over 49m)  deployment-controller  (combined from similar events): Scaled up replica set myapp-deployment-b6c557d47 to 3 from 2

The old replica set is scaled down and the new replica set is scaled up.


What happens if deploying some pods fails during the rollout?


Let's say we change the image name to a non-existing one:

$ kubectl edit deployment myapp-deployment

We'll see that the rollout is stuck:

$ kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 3 out of 6 new replicas have been updated...

The deployment has 5 out of 6 pods ready (which is good - users still have access to our app):

$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   5/6     3            5           7h5m

If we check the pods, we'll see the reason:

$ kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
myapp-deployment-759b778ddf-gxrb7   0/1     ImagePullBackOff   0          89s
myapp-deployment-759b778ddf-kblkl   0/1     ImagePullBackOff   0          89s
myapp-deployment-759b778ddf-rwqn8   0/1     ImagePullBackOff   0          89s
myapp-deployment-88c4d7667-7gktq    1/1     Running            0          22m
myapp-deployment-88c4d7667-hsg4h    1/1     Running            0          22m
myapp-deployment-88c4d7667-j6t7q    1/1     Running            0          22m
myapp-deployment-88c4d7667-qpf7k    1/1     Running            0          22m
myapp-deployment-88c4d7667-vhsrd    1/1     Running            0          22m

Kubernetes scales the old ReplicaSet down to 0 pods only once the new ReplicaSet has successfully scaled up to the desired number of replicas. That didn't happen in this case: Kubernetes terminated only 1 of the 6 pods from the old ReplicaSet, leaving the other 5 up and running, so users still have access to the app.
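
This behaviour follows from the RollingUpdateStrategy shown in the describe outputs above (25% max unavailable, 25% max surge). With 6 desired replicas:

maxUnavailable = 25% of 6 = 1.5, rounded down = 1  (so at least 5 pods must stay available)
maxSurge       = 25% of 6 = 1.5, rounded up   = 2  (so at most 6 + 2 = 8 pods may exist)

This matches the pod listing above: 5 old pods Running plus 3 new pods in ImagePullBackOff makes 8 pods in total, 5 of them available.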

One way of rectifying this would be a rollback.

How does a deployment perform a rollback?



Let's assume that once we upgrade our application, we realize something isn't right: something's wrong with the new build we used for the upgrade, so we would like to roll back our update.

Kubernetes deployments allow us to roll back to a previous revision.

To undo a change run:

$ kubectl rollout undo deployment/myapp-deployment

The deployment will then destroy the pods in the new replica set and bring the older ones up in the old replica set. The new replica set is scaled down one at a time while the old replica set is simultaneously scaled up one at a time; this is the RollingUpdate strategy applied in reverse, from Version 2 back to Version 1. (An alternative strategy would be to fully scale the old replica set up first and only then scale the new one down.)
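
We can also roll back to a specific revision from the rollout history, using the --to-revision flag:

$ kubectl rollout undo deployment/myapp-deployment --to-revision=3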

When we compare the output of the kubectl get replicasets command before and after the rollback, we can see this difference: before the rollback, the old replica set had zero pods and the new replica set had five pods, and this is reversed once the rollback is finished.


Example: let's undo the changes to fix the issue from the previous chapter, where the new deployment got stuck because of a wrong image name:

$ kubectl rollout undo deployment/myapp-deployment
deployment.apps/myapp-deployment rolled back

$ kubectl rollout status deployment/myapp-deployment
deployment "myapp-deployment" successfully rolled out

$ kubectl rollout history deployment/myapp-deployment
deployment.apps/myapp-deployment 
REVISION  CHANGE-CAUSE
3         kubectl edit deployment myapp-deployment --record=true
4         kubectl edit deployment myapp-deployment --record=true
5         kubectl set image deployment/myapp-deployment nginx-container=nginx:1.18-perl --record=true
6         kubectl set image deployment/myapp-deployment nginx=nginx:1.18-perl --record=true
7         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
9         kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
11        kubectl edit deployment myapp-deployment --record=true
12        kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
myapp-deployment-88c4d7667-76nhb   1/1     Running   0          12s
myapp-deployment-88c4d7667-7gktq   1/1     Running   0          30m
myapp-deployment-88c4d7667-hsg4h   1/1     Running   0          30m
myapp-deployment-88c4d7667-j6t7q   1/1     Running   0          30m
myapp-deployment-88c4d7667-qpf7k   1/1     Running   0          30m
myapp-deployment-88c4d7667-vhsrd   1/1     Running   0          30m

$ kubectl describe deployment myapp-deployment
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Thu, 09 May 2024 23:33:10 +0100
Labels:                 tier=frontend
Annotations:            deployment.kubernetes.io/revision: 12
                        kubernetes.io/change-cause: kubectl set image deployment/myapp-deployment nginx=nginx:1.17-perl --record=true
Selector:               app=myapp
Replicas:               6 desired | 6 updated | 6 total | 6 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx:
    Image:         nginx:1.16-perl
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  myapp-deployment-7b8958bfff (0/0 replicas created), myapp-deployment-b6c557d47 (0/0 replicas created), myapp-deployment-6866d9c964 (0/0 replicas created), myapp-deployment-6bf7c4cbf (0/0 replicas created), myapp-deployment-7b5bcfbfc6 (0/0 replicas created), myapp-deployment-75d76f4c78 (0/0 replicas created), myapp-deployment-759b778ddf (0/0 replicas created)
NewReplicaSet:   myapp-deployment-88c4d7667 (6/6 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled up replica set myapp-deployment-6bf7c4cbf to 2
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled down replica set myapp-deployment-6866d9c964 to 5 from 6
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled up replica set myapp-deployment-6bf7c4cbf to 3 from 2
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled down replica set myapp-deployment-6866d9c964 to 4 from 5
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled up replica set myapp-deployment-6bf7c4cbf to 4 from 3
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled up replica set myapp-deployment-6bf7c4cbf to 5 from 4
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled down replica set myapp-deployment-6866d9c964 to 3 from 4
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled down replica set myapp-deployment-6866d9c964 to 2 from 3
  Normal  ScalingReplicaSet  53m                   deployment-controller  Scaled up replica set myapp-deployment-6bf7c4cbf to 6 from 5
  Normal  ScalingReplicaSet  53m (x37 over 7h14m)  deployment-controller  (combined from similar events): Scaled up replica set myapp-deployment-7b5bcfbfc6 to 2
  Normal  ScalingReplicaSet  34m                   deployment-controller  Scaled up replica set myapp-deployment-88c4d7667 to 2 from 0
  Normal  ScalingReplicaSet  34m                   deployment-controller  Scaled down replica set myapp-deployment-75d76f4c78 to 5 from 6
  Normal  ScalingReplicaSet  34m                   deployment-controller  Scaled up replica set myapp-deployment-88c4d7667 to 3 from 2
  Normal  ScalingReplicaSet  34m                   deployment-controller  Scaled down replica set myapp-deployment-75d76f4c78 to 4 from 5
  Normal  ScalingReplicaSet  12m                   deployment-controller  Scaled up replica set myapp-deployment-759b778ddf to 2
  Normal  ScalingReplicaSet  12m                   deployment-controller  Scaled down replica set myapp-deployment-88c4d7667 to 5 from 6
  Normal  ScalingReplicaSet  12m                   deployment-controller  Scaled up replica set myapp-deployment-759b778ddf to 3 from 2
  Normal  ScalingReplicaSet  3m49s (x2 over 34m)   deployment-controller  Scaled up replica set myapp-deployment-88c4d7667 to 6 from 5
  Normal  ScalingReplicaSet  3m49s                 deployment-controller  Scaled down replica set myapp-deployment-759b778ddf to 0 from 3

How to delete a deployment?

Deleting a deployment will also delete all pods created by it (via its ReplicaSet).

$ kubectl delete deployment myapp-deployment
deployment.apps "myapp-deployment" deleted

We can also use resource/name format:

$ kubectl delete deployment/myapp-deployment
deployment.apps "myapp-deployment" deleted

Specifying the resource type twice will produce an error:

$ kubectl delete deployment deployment/myapp-deployment
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'kubectl get resource/<resource_name>' instead of 'kubectl get resource resource/<resource_name>'

Commands Summary


kubectl create - to create a deployment
kubectl get deployments - to list deployments
kubectl apply - to update a deployment via its definition file
kubectl set image - to update a deployment's container image
kubectl rollout status - to see the status of a rollout
kubectl rollout history - to see the history of rollouts
kubectl rollout undo - to roll back a deployment
kubectl delete deployment - to delete a deployment
