Monday 13 May 2024

Kubernetes LoadBalancer Service

This article extends my notes from a Udemy course, "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators.

The previous article in the series was Kubernetes ClusterIP Service | My Public Notepad

A NodePort service helps us make an externally facing application available on a port on the worker nodes.

Let's assume we have the following tiers in our Software Product, spread across the worker nodes of a cluster:
  • front-end application AppA
    • Deployment:
      • worker node 1:
        • pod
      • worker node 2:
        • 2 pods
      • worker node 3:
        • 0 pods
      • worker node 4:
        • 0 pods
    • NodePort service: 
      • nodePort: 30035 
  • front-end application AppB
    • Deployment:
      • worker node 1:
        • 0 pods
      • worker node 2:
        • 0 pods
      • worker node 3:
        • 2 pods
      • worker node 4:
        • 1 pod
    • NodePort service: 
      • nodePort: 31061
  • Redis 
    • Deployment:
      • worker node 5
  • DB
    • Deployment:
      • worker node 6
  • Worker
For each of these tiers we created a Deployment, so their instances run in multiple pods. These pods are hosted across the worker nodes in the cluster.

Let's focus only on front-end app tiers and let's say we have a four node cluster (worker nodes 1 to 4) and pod distribution is as in the list above.

To make the applications accessible to external users, we create the services of type NodePort which help in receiving traffic on the ports on the nodes and routing the traffic to the respective pods. 
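As a sketch, the NodePort service for AppA might look like the manifest below (the service name, labels and container port are assumed; the nodePort value is taken from the list above):

```yaml
# Hypothetical NodePort service for AppA
apiVersion: v1
kind: Service
metadata:
  name: appa-service
spec:
  type: NodePort
  ports:
    - targetPort: 80   # port the pod's container listens on (assumed)
      port: 80         # cluster-internal port of the service
      nodePort: 30035  # port opened on every node in the cluster
  selector:
    app: appA
    type: front-end
```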

But what URL should be given to end users to access the applications?

Users could access either of these two applications using the IP of any of the nodes and the port its NodePort service is externally exposed on.

For AppA those combinations would be http://&lt;node-ip&gt;:30035, for any of the four node IPs.

For AppB those combinations would be http://&lt;node-ip&gt;:31061, again for any of the four node IPs.

Note that even if the pods are hosted on only two of the four nodes, they will still be accessible via the IP of all four nodes in the cluster. For example, if the pods for AppA are deployed only on the nodes with IPs ending in 70 and 71, they are still accessible on the corresponding port of every node in the cluster. This is because NodePort opens the port on each node in the cluster and does not consider which nodes actually host the pods matching its selector; traffic arriving at any node is forwarded to a matching pod wherever it runs.
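To illustrate this, the reachable endpoint combinations can be enumerated; the node IPs below are hypothetical placeholders, while the ports come from the NodePort services above:

```python
# Every node forwards traffic arriving on a service's nodePort,
# regardless of whether a matching pod runs on that node.
node_ips = ["192.168.1.70", "192.168.1.71", "192.168.1.72", "192.168.1.73"]  # hypothetical

node_ports = {"AppA": 30035, "AppB": 31061}  # from the NodePort services above

# Each app is reachable at every (node IP, nodePort) combination.
urls = {
    app: [f"http://{ip}:{port}" for ip in node_ips]
    for app, port in node_ports.items()
}

for app, endpoints in urls.items():
    print(app, endpoints)
```

Even though AppA's pods run only on two nodes, all four URLs in its list work.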

Obviously, end users need a single URL per application to access it.

One way to achieve this is to create a new VM for load-balancing purposes, install and configure a suitable load balancer on it, such as HAProxy or Nginx, and then configure it to route traffic to the underlying nodes.
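As a sketch, an Nginx configuration along these lines would balance AppA traffic across all four nodes (the node IPs are hypothetical placeholders; the port is AppA's nodePort from above):

```nginx
# Hypothetical nginx.conf fragment: balance AppA traffic across all nodes
upstream appa_nodes {
    server 192.168.1.70:30035;
    server 192.168.1.71:30035;
    server 192.168.1.72:30035;
    server 192.168.1.73:30035;
}

server {
    listen 80;
    location / {
        proxy_pass http://appa_nodes;
    }
}
```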

Setting up all of that external load balancing and then maintaining and managing it can be a tedious task. If our solution is deployed on a supported cloud platform like GCP, AWS or Azure, we can leverage the native load balancer of that platform. Kubernetes supports integrating with the native load balancers of certain cloud providers and configures them for us.

So all we need to do is set the service type for the front-end services to LoadBalancer instead of NodePort:


apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008
  selector:
    app: appA
    type: front-end

If we set the service type to LoadBalancer in an unsupported environment like VirtualBox, it has the same effect as setting it to NodePort: the service is exposed on a high port (30000-32767 by default) on the nodes. It just won't do any kind of external load balancer configuration.

