Saturday 15 June 2024

Scaling in AWS EKS


 

Scaling allows the EKS cluster to dynamically adjust to varying workloads, ensuring efficient resource utilization and cost management.

Kubernetes scaling types:
  • manual
  • automatic

Manual scaling can be performed with the kubectl scale command, which sets a new size for a deployment, replica set, replication controller, or stateful set.
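
For example, a deployment can be resized directly (the deployment name and replica count below are placeholders):

kubectl scale deployment your-deployment --replicas=5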

Auto-scaling options provided in Amazon EKS:
  • native to Kubernetes
    • Cluster Autoscaler
    • Horizontal Pod Autoscaler
  • through AWS-specific features
    • Auto Scaling Groups
    • Fargate
  • third-party AWS-specific solutions
    • Karpenter

Kubernetes Cluster Autoscaler


The Kubernetes Cluster Autoscaler is designed to automatically adjust the number of nodes in your cluster based on the resource requests of the workloads running in the cluster.

Key Features:

  • Node Scaling: Adds nodes when pending pods cannot be scheduled due to insufficient resources, and removes nodes that are underutilized.
  • Pod Scheduling: Ensures that pending pods eventually get scheduled by scaling the cluster up.

Installation and Setup:


To use the Cluster Autoscaler in an EKS cluster, deploy it using a Helm chart or a pre-configured YAML manifest.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Configuration:

  • Ensure the --nodes flag in the deployment specifies the minimum and maximum node counts for your node group (the expected format is min:max:asg-name), or use the --node-group-auto-discovery flag instead.
  • Tag the underlying Auto Scaling Groups with the k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/<cluster-name> tags so the autoscaler can discover and manage them (see the sketch below).
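
A rough sketch of the relevant container arguments in the Cluster Autoscaler deployment, assuming auto-discovery by ASG tags (the cluster name and ASG name are placeholders):

command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/your-cluster-name
  # Alternatively, list node groups explicitly in min:max:name format:
  # - --nodes=1:10:your-asg-name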

Kubernetes Horizontal Pod Autoscaler (HPA)


The Horizontal Pod Autoscaler automatically scales the number of pods in a deployment, replication controller, or replica set based on observed CPU utilization or other select metrics.

Key Features:

  • Pod Scaling: Adjusts the number of pod replicas to match the demand.

Installation and Setup:


To use the HPA, ensure the Metrics Server is installed in your cluster to provide resource metrics.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
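
Once the Metrics Server is running, you can verify that resource metrics are available with kubectl top (a quick sanity check; the namespace below is a placeholder):

kubectl top nodes
kubectl top pods -n your-namespace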

Configuration:


Create an HPA resource for your deployment.

kubectl autoscale deployment your-deployment --cpu-percent=50 --min=1 --max=10
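
The same autoscaling policy can also be expressed declaratively. A minimal sketch of an HPA manifest using the autoscaling/v2 API (the deployment name is a placeholder), which can be applied with kubectl apply -f hpa.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: your-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50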


AWS Auto Scaling Groups


AWS Auto Scaling Groups (ASGs) can also be used to scale the worker nodes in your EKS cluster.

Key Features:

  • EC2 Instance Scaling: Automatically adjusts the number of EC2 instances in the group.

Installation and Setup:


When you create EKS managed node groups, they are automatically backed and managed by ASGs. This is the default scaling mechanism in an EKS cluster: it requires no additional tooling and is provided out of the box when a node group is created.

eksctl create nodegroup --cluster your-cluster-name --name your-nodegroup-name --nodes-min 1 --nodes-max 10
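
An existing managed node group can later be resized with eksctl as well; for example (names and counts are placeholders):

eksctl scale nodegroup --cluster your-cluster-name --name your-nodegroup-name --nodes 3 --nodes-min 1 --nodes-max 10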


AWS Fargate


AWS Fargate allows you to run Kubernetes pods without managing the underlying nodes. It provides serverless compute for containers, eliminating the need to provision and scale EC2 instances.

Key Features:

  • Serverless: No need to manage EC2 instances.
  • Automatic Scaling: Compute is provisioned per pod based on the requested resources, so capacity scales automatically with the number of pods.

Installation and Setup:


Create a Fargate profile for your EKS cluster, specifying which pods should run on Fargate.

eksctl create fargateprofile --cluster your-cluster-name --name your-fargate-profile --namespace your-namespace
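
The same profile can also be declared in an eksctl ClusterConfig file; a minimal sketch, assuming the cluster, region, profile, and namespace names below are placeholders, which can be created with eksctl create fargateprofile -f cluster-config.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: your-cluster-name
  region: eu-west-1
fargateProfiles:
  - name: your-fargate-profile
    selectors:
      - namespace: your-namespace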
