Monday, 13 May 2024

Kubernetes LoadBalancer Service

This article extends my notes from the Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators.

The previous article in the series was Kubernetes ClusterIP Service.


A NodePort service helps us make an external-facing application available on a port on the worker nodes.


Let's assume we have the following tiers in our software product, deployed across a cluster of worker nodes:
  • front-end application AppA
    • Deployment:
      • worker node 1: 192.168.56.70
        • 1 pod
      • worker node 2: 192.168.56.71
        • 2 pods
      • worker node 3: 192.168.56.72
        • 0 pods
      • worker node 4: 192.168.56.73
        • 0 pods
    • NodePort service: 
      • nodePort: 30035 
  • front-end application AppB
    • Deployment:
      • worker node 1: 192.168.56.70
        • 0 pods
      • worker node 2: 192.168.56.71
        • 0 pods
      • worker node 3: 192.168.56.72
        • 2 pods
      • worker node 4: 192.168.56.73
        • 1 pod
    • NodePort service: 
      • nodePort: 31061
  • Redis 
    • Deployment:
      • worker node 5
  • DB
    • Deployment:
      • worker node 6
  • Worker
For each of these tiers we created a deployment, so their instances run in multiple pods within their respective deployments. These pods are hosted across the worker nodes in the cluster.

Let's focus only on the front-end app tiers and say we have a four-node cluster (worker nodes 1 to 4), with the pod distribution as in the list above.

To make the applications accessible to external users, we create services of type NodePort, which receive traffic on ports on the nodes and route it to the respective pods.
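As a sketch, the NodePort service for AppA might look like the definition below. Only the nodePort value 30035 comes from the example above; the service name, selector labels, and targetPort are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: appa-service        # hypothetical name
spec:
  type: NodePort
  ports:
    - port: 80              # port the service exposes inside the cluster
      targetPort: 80        # assumed port the AppA pods listen on
      nodePort: 30035       # port opened on every node, from the example above
  selector:
    app: appA               # assumed pod label
```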

But what URL should be given to end users to access the applications?

Users could access either of these two applications using the IP address of any of the nodes and the port the NodePort service exposes externally.

For AppA those combinations would be:

http://192.168.56.70:30035
http://192.168.56.71:30035
http://192.168.56.72:30035
http://192.168.56.73:30035

For AppB those combinations would be:

http://192.168.56.70:31061
http://192.168.56.71:31061
http://192.168.56.72:31061
http://192.168.56.73:31061

Note that even if pods are hosted on only two of the four nodes, the application will still be accessible on the IPs of all four nodes in the cluster. For example, if the pods for AppA are deployed only on the nodes with IPs ending in 70 and 71, AppA would still be reachable on the corresponding port of every node. This is because a NodePort service exposes the same port on each node in the cluster; it effectively abstracts the cluster away and does not care which nodes host the pods matching its selector.

Obviously, end users need a single URL like http://appA.com or http://appB.com to access the applications.

One way to achieve this is to create a new VM for load-balancing purposes, install and configure
a suitable load balancer on it, such as HAProxy or NGINX, and then configure it to route
traffic to the underlying nodes.

Setting up all of that external load balancing, and then maintaining and managing it, can be a tedious task. If our solution is deployed on a supported cloud platform such as GCP, AWS, or Azure, we can leverage the native load balancer of that platform. Kubernetes supports integrating with the native load balancers of certain cloud providers and configuring them for us.

A LoadBalancer service automatically provisions the cloud provider's load balancer, which distributes traffic across the appropriate nodes in the cluster. This provides a robust, scalable, and managed way to handle external traffic.

So all we need to do is set the service type for the front-end services to LoadBalancer instead of
NodePort.

loadbalancer-service-definition.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  ports:
    - targetPort: 80   # port the application listens on inside the pod
      port: 80         # port the service exposes within the cluster
      nodePort: 30008  # optional; port opened on each node (30000-32767 range)
  selector:
    app: appA
    type: front-end


Another example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

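On supported clouds, the provisioned load balancer can often be tuned through provider-specific annotations on the service. As a hedged example, on AWS an annotation along these lines requests a Network Load Balancer instead of the default classic one; the exact keys vary by provider and controller version, so check your provider's documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # provider-specific; this key is read by the AWS cloud controller
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```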
If we set the service type to LoadBalancer in an unsupported environment, such as a local VirtualBox setup, it has the same effect as setting it to NodePort: the service is exposed on a high port (from the 30000-32767 range) on the nodes. It just won't do any external load balancer configuration.
