Kubernetes autoscaling is a mechanism that scales cluster resources out and in depending on the current workload. AWS supports two node autoscaling implementations:
- Cluster Autoscaler
- Karpenter
- Karpenter
- flexible, high-performance Kubernetes cluster autoscaler and node provisioner
- helps improve application availability and cluster efficiency
- launches right-sized compute resources (for example, Amazon EC2 instances) in response to changing application load in under a minute
- can provision just-in-time compute resources that precisely meet the requirements of our workload
- automatically provisions new compute resources based on the specific requirements of cluster workloads. These include compute, storage, acceleration, and scheduling requirements.
- creates Kubernetes nodes directly from EC2 instances
- improves the efficiency and cost of running workloads on the cluster
- open-source
Pod Scheduler
- Kubernetes cluster component responsible for determining which node Pods get assigned to
- default Pod scheduler for Kubernetes is kube-scheduler
- logs the reasons Pods can't be scheduled
Unschedulable Pods
A Pod is unschedulable when it's been put into Kubernetes' scheduling queue, but can't be deployed to a node. This can be for a number of reasons, including:
- The cluster not having enough CPU or RAM available to meet the Pod's requirements.
- Pod affinity or anti-affinity rules preventing it from being deployed to available nodes.
- Nodes being cordoned due to updates or restarts.
- The Pod requiring a persistent volume that's unavailable, or bound to an unavailable node.
How to detect unschedulable Pods?
Pods waiting to be scheduled are held in the "Pending" status; if a Pod can't be scheduled, it simply stays there. Since Pods that are being deployed normally also pass through "Pending," the difference comes down to how long a Pod remains in that state.
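A quick way to list the candidates is to filter on phase across all namespaces:

% kubectl get pods --all-namespaces --field-selector=status.phase=Pending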
How to fix unschedulable Pods?
There is no single solution for unschedulable Pods as they have many different causes. However, there are a few things we can try depending on the cause.
- Enable cluster autoscaling
- If we're using a managed Kubernetes service like Amazon EKS or Google Kubernetes Engine (GKE), we can very easily take advantage of autoscaling to increase and decrease cluster capacity on-demand. With autoscaling enabled, Kubernetes' Cluster Autoscaler will trigger our provider to add nodes when needed. As long as we've configured our cluster node pool and it hasn't reached its max node limit, our provider will automatically provision a new node and add it to the pool, making it available to the cluster and to our Pods.
- Increase our node capacity
- Check our Pod requests
- Check our affinity and anti-affinity rules
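For the last three of these, inspecting the pending Pod usually reveals the blocker; its Events section and spec show the scheduler's reasons (the pod name below is a placeholder):

% kubectl describe pod <pending-pod-name>    # Events lists FailedScheduling and the reason
% kubectl get pod <pending-pod-name> -o yaml | grep -B2 -A8 -E 'resources:|affinity:'    # shows requests and (anti-)affinity rules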
In this article we'll show how to enable cluster autoscaling with Karpenter.
How does the regular Kubernetes Autoscaler work in AWS?
When we create a regular Kubernetes cluster in AWS, each node group is backed by an AWS Auto Scaling group. The cluster's native autoscaler adjusts the group's desired size based on the load in the cluster so that all unschedulable pods fit.
HorizontalPodAutoscaler (HPA) is built into Kubernetes and uses metrics like CPU usage, memory usage, or custom metrics we define to decide when to add or remove pod replicas. If our app is receiving more traffic, HPA will kick in and provision additional pods.
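As a quick illustration, a minimal HPA manifest might look like this (the my-app deployment name is a placeholder):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80 # add replicas when average CPU exceeds 80%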
VerticalPodAutoscaler (VPA) can also be installed in the cluster, where it manages resource (CPU and memory) allocation for pods that are already running.
What about when there's not enough capacity to schedule any more pods on the nodes? That's when we need an additional node. We have a pod that needs to be scheduled but nowhere to put it. We could call the AWS API, spin up an additional EC2 instance, and add it to our cluster, or, if we're using managed node groups, bump up the desired size via the Managed Node Group API. The easier approach, though, is to use a cluster autoscaler. There is a mature open-source solution called Cluster Autoscaler (CAS).
CAS was built to handle the hundreds of combinations of node types, zones, and purchase options available in AWS. CAS works directly with managed node groups or with self-managed nodes and Auto Scaling groups, which are AWS constructs that help us manage nodes.
What are the issues with the regular Kubernetes Autoscaler?
Let's say CAS is installed in the cluster and manages one managed node group (MNG). The group is filling up and we have an additional pod that needs to be provisioned, so CAS tells the MNG to bump up the number of nodes, the MNG spins up another one, and the pod can now be scheduled. But this is not ideal: we have a single pod on a node, and we don't need such a big node.
This can be solved by creating a different MNG with a smaller instance type; CAS then recognizes that group and schedules the pod on a more appropriately-sized node.
Unfortunately, we might end up with many MNGs based on the various requirements, which can become a challenge to manage, especially when following best practices for cost efficiency and high availability.
How does Karpenter work?
Karpenter works differently: it doesn't use MNGs or ASGs, and it manages each node directly. Say we have pods of different sizes, and HPA decides we need more of the smaller pods. Karpenter will intelligently pick the right instance type for that workload. If we need to spin up a larger pod, it will again pick the right instance type.
Karpenter picks exactly the right type of node for our workload.
If we're using Spot instances and Spot capacity is not available, Karpenter retries more quickly. Karpenter offers faster, more dynamic, more intelligent compute, applying best practices without the operational overhead of managing nodes ourselves.
How to control how Karpenter operates?
There are many dimensions here. We can set constraints on Karpenter to limit the instance types it may launch, and we can set up taints to isolate workloads to specific types of nodes. Different teams can get isolated access to different node pools; one team might get nodes for billing pods, another GPU-based instances.
Workload Consolidation feature: pods are consolidated onto fewer nodes. Say we have 3 nodes, two at 70% and one at 20% utilization. Karpenter detects this, moves the pods from the underutilized node onto the other two, and shuts down the now-empty node (the instance is terminated). This leads to lower costs.
Karpenter also makes it easier to use Spot and Graviton instances, which can further lower costs.
There is also a feature to keep our nodes up to date: the ttlSecondsUntilExpired parameter (expireAfter in the current NodePool API) tells Karpenter to terminate nodes after a set amount of time. These nodes are automatically replaced with new nodes running the latest AMIs.
Karpenter in short:
1) lower costs
2) higher application availability
3) lower operational overhead
Karpenter needs permissions to create EC2 instances in AWS.
If we use a self-hosted (on bare-metal boxes or EC2 instances), self-managed (we have full control over all aspects of Kubernetes) Kubernetes cluster, for example one built with kOps, we can add additional IAM policies to the existing IAM role attached to the Kubernetes nodes.
If using EKS, the best way to grant AWS access to an in-cluster service is with IAM Roles for Service Accounts (IRSA).
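For illustration, with IRSA the controller's ServiceAccount is annotated with the IAM role it should assume (the account ID and role name below are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpenter
  namespace: kube-system
  annotations:
    # placeholder ARN; point this at the controller role created for our cluster
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/KarpenterControllerRole-my-cluster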
Karpenter's Kubernetes Custom Resources
NodePool
NodePool is the primary Custom Resource (CR) in Karpenter. It defines scheduling constraints and node management policies: how nodes are provisioned and managed over time. The successor to the older Provisioner API, it acts as the "brain" for scheduling decisions, evaluating the requirements of pending pods, matching them to infrastructure constraints, and telling Karpenter which nodes to create and how to handle them.
Core Role of NodePool
- Scheduling Authority: It defines the constraints (instance types, zones, architectures) that determine which nodes can be created.
- Successor to Provisioner: It replaced the older Provisioner API to provide a more scalable and configuration-based approach.
- Management Hub: It handles node lifecycle settings, including disruption policies (consolidation and expiration) and aggregate resource limits (CPU/Memory).
Core Functions
A NodePool manages three primary aspects of our cluster's compute capacity:
- Scheduling Constraints: Restricts which nodes can be provisioned using requirements for instance types, zones, architectures (e.g., x86 vs. ARM), and capacity types (Spot vs. On-Demand).
- Disruption Policies: Governs how Karpenter optimizes the cluster by defining when nodes should be expired or consolidated to save costs.
- Resource Limits: Sets a cap on the total CPU and memory that the NodePool can provision, preventing runaway costs.
Key Components of a NodePool
The specification is divided into several functional areas:
- template: Defines the configuration for the nodes that will be created.
- requirements: Uses well-known Kubernetes labels (e.g., karpenter.sh/capacity-type) to select hardware.
- nodeClassRef: Points to an EC2NodeClass for cloud-provider-specific settings like subnets and security groups.
- disruption: Replaces the older TTL settings with a unified policy built around consolidationPolicy (e.g., WhenEmptyOrUnderutilized in the v1 API) and consolidateAfter; node expiry is configured via expireAfter, which lives under template.spec in v1.
- limits: Defines the maximum aggregate resources (e.g., cpu: 1000) allowed for this pool.
Example v1 Configuration
This example demonstrates a production-ready NodePool that prioritises Spot instances but allows for On-Demand fallback.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["spot", "on-demand"]
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64", "arm64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 days (under template.spec in the v1 API)
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized # v1 name for the v1beta1 WhenUnderutilized
  limits:
    cpu: "500"
    memory: 1000Gi
Comparison with Other Objects
While the NodePool is the central configuration object, it works in a hierarchy with two other key resources:
- NodePool: The Logic. Defines what nodes should look like and how they should behave.
- EC2NodeClass: The Infrastructure. Defines where and with what AWS-specific settings (subnets, AMIs, security groups) nodes launch.
- NodeClaim: The Instance. Represents an individual node currently being provisioned or managed by Karpenter.
Every NodePool must reference at least one EC2NodeClass to successfully provision capacity on AWS.
Useful Commands:
To see all node pools:
% kubectl get nodepools
NAME                NODECLASS           NODES   READY   AGE
clickhouse          clickhouse          0       True    140d
clickhouse-backup   clickhouse-backup   0       True    140d
Cluster user needs to have permission to list resource "nodepools" in API group "karpenter.sh" at the cluster scope.
To debug a specific node pool:
kubectl describe nodepool <nodepool-name>
Cluster user needs to have permission to get resource "nodepools" in API group "karpenter.sh" at the cluster scope.
% kubectl describe nodepool clickhouse
Name: clickhouse
Namespace:
Labels: <none>
Annotations: karpenter.sh/nodepool-hash: 12671849087427876759
karpenter.sh/nodepool-hash-version: v3
API Version: karpenter.sh/v1
Kind: NodePool
Metadata:
Creation Timestamp: 2025-10-22T15:02:58Z
Generation: 2
Resource Version: 1073678
UID: f7869dd3-ac24-4600-98a6-059073645769
Spec:
Disruption:
Budgets:
Nodes: 10%
Consolidate After: 0s
Consolidation Policy: WhenEmptyOrUnderutilized
Template:
Metadata:
Labels:
Karpenter - Node - Pool: clickhouse
Spec:
Expire After: 720h
Node Class Ref:
Group: karpenter.k8s.aws
Kind: EC2NodeClass
Name: clickhouse
Requirements:
Key: node.kubernetes.io/instance-type
Operator: In
Values:
r8g.xlarge
r8g.2xlarge
r8g.4xlarge
r8g.8xlarge
Key: karpenter.sh/capacity-type
Operator: In
Values:
on-demand
spot
Status:
Conditions:
Last Transition Time: 2025-10-22T15:02:59Z
Message:
Observed Generation: 2
Reason: ValidationSucceeded
Status: True
Type: ValidationSucceeded
Last Transition Time: 2025-10-22T15:03:07Z
Message:
Observed Generation: 2
Reason: NodeClassReady
Status: True
Type: NodeClassReady
Last Transition Time: 2025-10-23T17:24:01Z
Message:
Observed Generation: 2
Reason: Ready
Status: True
Type: Ready
Resources:
Cpu: 0
Ephemeral - Storage: 0
Memory: 0
Nodes: 0
Pods: 0
Events: <none>
EC2NodeClass
EC2NodeClass is a Custom Resource (CR) used to define AWS-specific infrastructure configurations for the nodes Karpenter provisions.
While a NodePool handles high-level scheduling constraints (like instance types or taints), the EC2NodeClass dictates the underlying Amazon EC2 settings.
Key Responsibilities
The EC2NodeClass abstracts cloud provider-specific details, including:
- Networking: Selects subnets using subnetSelectorTerms.
- Security: Identifies security groups via securityGroupSelectorTerms.
- Identity: Assigns the IAM role or instance profile for the nodes.
- Storage: Configures blockDeviceMappings for EBS volumes.
- Images: Specifies the Amazon Machine Image (AMI) family (e.g., AL2023, Bottlerocket) or selects specific AMIs.
- Customisation: Includes userData for custom bootstrap scripts.
Relationship with NodePools
A NodePool must reference an EC2NodeClass using the nodeClassRef field. Multiple NodePools can point to the same EC2NodeClass if they share the same infrastructure requirements (e.g., same VPC and IAM role).
Example Configuration
A basic EC2NodeClass manifest typically looks like this:
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: al2023@latest # in the v1 API, amiSelectorTerms is required and replaces the bare amiFamily field
  role: "KarpenterNodeRole-my-cluster"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
Useful Commands:
To see all EC2NodeClasses:
kubectl get ec2nodeclasses
Cluster user needs to have permission to list resource "ec2nodeclasses" in API group "karpenter.k8s.aws" at the cluster scope.
Example:
% kubectl get ec2nodeclasses
NAME READY AGE
clickhouse True 140d
clickhouse-backup True 140d
To debug a specific EC2NodeClass (for example when its nodes aren't coming online):
kubectl describe ec2nodeclass <ec2nodeclass-name>
Cluster user needs to have permission to get resource "ec2nodeclasses" in API group "karpenter.k8s.aws" at the cluster scope.
Example:
% kubectl describe ec2nodeclass clickhouse
Name: clickhouse
Namespace:
Labels: <none>
Annotations: karpenter.k8s.aws/ec2nodeclass-hash: 358699366951558737
karpenter.k8s.aws/ec2nodeclass-hash-version: v4
API Version: karpenter.k8s.aws/v1
Kind: EC2NodeClass
Metadata:
Creation Timestamp: 2025-10-22T15:02:58Z
Finalizers:
karpenter.k8s.aws/termination
Generation: 1
Resource Version: 73323969
UID: 25c663e7-cc29-47b2-8a97-937fb5f39825
Spec:
Ami Family: AL2023
Ami Selector Terms:
Alias: al2023@latest
Detailed Monitoring: true
Metadata Options:
Http Endpoint: enabled
httpProtocolIPv6: disabled
Http Put Response Hop Limit: 1
Http Tokens: required
Role: KarpenterNodeRole-mycorp-prod-clickhouse-k8s
Security Group Selector Terms:
Tags:
karpenter.sh/discovery/mycorp-prod-clickhouse-k8s: true
Subnet Selector Terms:
Tags:
karpenter.sh/discovery: true
private_subnet: true
Tags:
Name: mycorp-prod-clickhouse-k8s-karpenter-clickhouse
karpenter.sh/discovery/mycorp-prod-clickhouse-k8s: true
Status:
Amis:
Id: ami-06ab427136b8ffa61
Name: amazon-eks-node-al2023-x86_64-nvidia-1.33-v20260304
Requirements:
Key: kubernetes.io/arch
Operator: In
Values:
amd64
Key: karpenter.k8s.aws/instance-gpu-count
Operator: Exists
Id: ami-08f492a005f7b8703
Name: amazon-eks-node-al2023-x86_64-neuron-1.33-v20260304
Requirements:
Key: kubernetes.io/arch
Operator: In
Values:
amd64
Key: karpenter.k8s.aws/instance-accelerator-count
Operator: Exists
Id: ami-0023c4931d42779e6
Name: amazon-eks-node-al2023-x86_64-standard-1.33-v20260304
Requirements:
Key: kubernetes.io/arch
Operator: In
Values:
amd64
Key: karpenter.k8s.aws/instance-gpu-count
Operator: DoesNotExist
Key: karpenter.k8s.aws/instance-accelerator-count
Operator: DoesNotExist
Id: ami-061bed77c8a6d03cd
Name: amazon-eks-node-al2023-arm64-standard-1.33-v20260304
Requirements:
Key: kubernetes.io/arch
Operator: In
Values:
arm64
Key: karpenter.k8s.aws/instance-gpu-count
Operator: DoesNotExist
Key: karpenter.k8s.aws/instance-accelerator-count
Operator: DoesNotExist
Conditions:
Last Transition Time: 2025-10-22T15:02:59Z
Message:
Observed Generation: 1
Reason: AMIsReady
Status: True
Type: AMIsReady
Last Transition Time: 2025-10-22T15:02:59Z
Message:
Observed Generation: 1
Reason: SubnetsReady
Status: True
Type: SubnetsReady
Last Transition Time: 2025-10-22T15:02:59Z
Message:
Observed Generation: 1
Reason: SecurityGroupsReady
Status: True
Type: SecurityGroupsReady
Last Transition Time: 2025-10-22T15:02:59Z
Message:
Observed Generation: 1
Reason: InstanceProfileReady
Status: True
Type: InstanceProfileReady
Last Transition Time: 2025-10-22T15:03:07Z
Message:
Observed Generation: 1
Reason: ValidationSucceeded
Status: True
Type: ValidationSucceeded
Last Transition Time: 2025-10-22T15:03:07Z
Message:
Observed Generation: 1
Reason: Ready
Status: True
Type: Ready
Instance Profile: mycorp-prod-clickhouse-k8s_15693974848685646064
Security Groups:
Id: sg-09f3cd41bcef827c0
Name: mycorp-prod-clickhouse-k8s-node-20251020164545608400000006
Subnets:
Id: subnet-04xxxxxxxxxx5d30b
Zone: us-east-1b
Zone ID: use1-az2
Id: subnet-00xxxxxxxxxx08cef
Zone: us-east-1c
Zone ID: use1-az3
Id: subnet-02xxxxxxxxxxx8711
Zone: us-east-1a
Zone ID: use1-az1
Events: <none>
NodeClaim
In Karpenter, a NodeClaim is the Custom Resource (CR) that represents a single, specific instance of compute capacity.
While a NodePool is the template and a NodeClass is the blueprint, the NodeClaim is the actual request sent to the cloud provider to launch a specific node.
Key Characteristics
- 1:1 Relationship: Each NodeClaim typically corresponds to exactly one EC2 instance and its associated Kubernetes Node.
- Immutable: Once created, a NodeClaim cannot be changed. If the requirements for a node change (e.g., due to "drift"), Karpenter deletes the existing NodeClaim and creates a new one.
- Lifecycle Management: It tracks the instance from its initial "launch" request through "registration" with the cluster until it is fully "initialized" and ready to run pods.
Why We Should Monitor NodeClaims
NodeClaims are the best place to look when debugging provisioning failures. We can use them to identify why a node failed to join the cluster:
- Status Conditions: A NodeClaim status will show if an instance failed to launch (e.g., "LaunchFailed" due to AWS service limits) or if it's stuck during initialization.
- Provider Information: It stores the specific providerID (e.g., EC2 instance ID) and the nodeName once the instance successfully joins the cluster.
- Drift Detection: Karpenter adds a Drifted status to a NodeClaim if the underlying instance no longer matches the current NodePool or EC2NodeClass configuration.
How it Fits the Hierarchy:
- NodePool
- Function: Defines scheduling rules and resource limits.
- Analogy: The Foreman
- EC2NodeClass
- Function: Defines AWS-specific infrastructure (subnets, AMIs).
- Analogy: The Blueprint
- NodeClaim
- Function: Represents the actual instance being provisioned.
- Analogy: The Building Material
Useful Commands:
To see all active node claims and their status:
kubectl get nodeclaims
Cluster user needs to have permission to list resource "nodeclaims" in API group "karpenter.sh" at the cluster scope.
To debug a specific node that isn't coming online:
kubectl describe nodeclaim <nodeclaim-name>
Cluster user needs to have permission to get resource "nodeclaims" in API group "karpenter.sh" at the cluster scope.
Example:
% kubectl get nodeclaims
No resources found
How to Debug Misfunctioning Karpenter
Based on the output we've provided above, our NodePool and EC2NodeClass are both in a Ready state and have correctly discovered subnets, security groups, and AMIs. However, we have 0 nodes and 0 NodeClaims. The most common reason for this behavior—where the infrastructure is ready but Karpenter isn't scaling—is a scheduling mismatch between our pending Pods and the NodePool's requirements. Here is a breakdown of why Karpenter might be sitting idle:
1. Missing Architecture Requirement
Our clickhouse NodePool specifies r8g instance types, which are AWS Graviton (ARM64) instances, yet the NodePool spec doesn't explicitly define an architecture requirement. If our Pods don't have a nodeSelector or node affinity for kubernetes.io/arch: arm64, or if they explicitly ask for amd64, Karpenter will not use this NodePool to satisfy them. Check our Pods:
kubectl get pods -A | grep -i pending
kubectl describe pod <pending-pod-name>
Look for NodeSelectors or Affinity rules in the Pod spec. If the Pod doesn't allow arm64, Karpenter won't spin up an r8g node.
2. Missing Taints/Tolerations
ClickHouse workloads often use specialized nodes. If our NodePool (or the nodes it creates) has a Taint that we haven't shown in the snippet, or if our Pods have a Toleration for a taint that doesn't exist on this NodePool, Karpenter will ignore it.
Check if our pending Pods have specific nodeSelector labels that aren't present in the NodePool template.metadata.labels. Our NodePool only has one label: Karpenter - Node - Pool: clickhouse.
3. The "Karpenter Controller" Logs
If the logic seems correct but nothing is happening, the answer is usually in the controller logs. Karpenter will explicitly tell us why it is passing over a Pod. Run this to see the scheduling decisions:
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter | grep -i "scheduling"
Look for messages like:
- no reachable nodeclasses
- no possible pod inventory
- unschedulable, ... did not match requirements
4. Service Linked Role / Permissions
Since our EC2NodeClass is Ready, our basic AWS tags are likely fine. However, double-check that the KarpenterNodeRole-mycorp-prod-clickhouse-k8s role actually exists in IAM and has AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly attached. If the role is missing or misconfigured, the EC2 instance might start but fail to join the cluster, causing Karpenter to terminate it immediately.
Summary Checklist
Potential Issue => Fix/Action
Arch Mismatch => Add kubernetes.io/arch with arm64 to the NodePool requirements or the Pod nodeSelector (see the snippet below).
Pending Pods => Ensure there are actually Pods in the Pending state. Karpenter only scales in response to unschedulable pods.
Instance Availability => r8g instances are relatively new. Ensure they are available in us-east-1 (our region, per the subnets above) for the capacity type (Spot/On-Demand) we requested.
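For the arch-mismatch fix, the NodePool side is one extra entry under spec.template.spec.requirements (a sketch using the same requirements syntax as the examples above):

- key: "kubernetes.io/arch"
  operator: In
  values: ["arm64"] # matches the Graviton r8g instance types this pool launches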
How to install Karpenter in the cluster?
Namespace
Current Karpenter best practices recommend deploying it in the kube-system namespace rather than its own dedicated namespace.
While many early adopters used a separate karpenter namespace, the project shifted toward kube-system starting with version v0.33.0.
Why kube-system is preferred:
- API Priority & Fairness: By default, Kubernetes grants higher priority to requests coming from the kube-system namespace. This ensures the Karpenter controller can still communicate with the API server to provision nodes even during periods of heavy cluster congestion.
- Critical Component Status: Placing Karpenter in kube-system denotes it as a critical cluster component, aligning it with other essential services like kube-proxy or the VPC CNI.
- Reduced Complexity: Using a standard system namespace avoids the need to manually configure custom FlowSchemas or PriorityLevelConfigurations that would otherwise be required to give a custom namespace the same level of reliability.
When to use a separate namespace:
- Legacy Installations: If we installed Karpenter before v0.33.0, it likely lives in a karpenter namespace. Migrating is recommended but requires updating our IAM Roles for Service Accounts (IRSA) trust policy to reflect the new namespace.
- Fargate Isolation: If we run Karpenter on AWS Fargate, we must create a Fargate profile for the specific namespace where Karpenter is deployed.
Labels
While most standard Karpenter installations use the label app.kubernetes.io/name=karpenter for the controller pods, it is not guaranteed for every pod in every environment.
Why it might differ:
- Helm Chart Customisation: If we (or our platform team) overrode the podLabels or nameOverride values during the Karpenter Helm installation, this label will be different.
- Version Variance: Very old versions of Karpenter sometimes used different labelling conventions (e.g., just app=karpenter), though modern versions follow Kubernetes recommended labels.
- Webhook Pods: In some configurations, Karpenter may run separate pods for webhooks that might carry slightly different descriptive labels depending on the deployment strategy.
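If we're unsure which labels our installation uses, a label-agnostic search is a reasonable fallback:

% kubectl get pods -A | grep -i karpenter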
How to configure Karpenter?
We configure Karpenter through its NodePool custom resources (Provisioners in older versions).
How to set up weighted NodePools for multi-tenant isolation?
In Karpenter, Weighted NodePools allow us to control which NodePool is selected when a pod's requirements match multiple pools. This is a powerful tool for multi-tenant isolation, enabling us to prioritize specific hardware or cost models for certain teams while providing a fallback mechanism.
How Weighting Works
- Precedence: Higher weight values indicate higher priority.
- Default: If no weight is specified, it defaults to 0.
- Selection: If a pending pod matches the requirements of multiple NodePools, Karpenter will always select the one with the highest weight first.
Multi-Tenant Strategy: Isolation & Priority
For multi-tenant environments, we can use weights to enforce distinct tiers of service or cost:
- Reserved/Savings Plan Tier (Highest Weight):
- Create a NodePool that specifically includes instance types covered by our Savings Plans or Reserved Instances. By giving this pool a high weight (e.g., 100), Karpenter will prioritize using this pre-paid capacity before launching new nodes.
- Spot Instance Tier (Medium Weight):
- A general-purpose pool for non-critical workloads or "Team A" can be set with a medium weight (e.g., 50) and restricted to spot capacity.
- On-Demand Fallback (Lowest Weight):
- A "catch-all" NodePool with a low weight (e.g., 10) that allows on-demand instances. This ensures that if Spot capacity is unavailable or Savings Plans are exhausted, workloads still have a place to land.
Implementation Example
Below is an example of two overlapping NodePools where the "Premium" pool is prioritized for any workload that could run on it.
# NodePool 1: High Priority (e.g., Reserved Capacity)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: premium-reserved
spec:
  weight: 100 # Higher weight = Higher priority
  template:
    spec:
      requirements:
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["m5.large", "m5.xlarge"] # Specific reserved types
      nodeClassRef:
        group: karpenter.k8s.aws # group and kind are required in the v1 API
        kind: EC2NodeClass
        name: default
---
# NodePool 2: Standard Priority (e.g., Spot)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: standard-spot
spec:
  weight: 50
  template:
    spec:
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
Best Practices for Isolation
- Mutual Exclusivity: While weights handle overlaps, the official Karpenter guidance suggests making NodePools mutually exclusive whenever possible (using taints/tolerations or unique labels) to simplify debugging.
- Resource Limits: Always set spec.limits on tenant-specific pools to prevent one team from consuming the entire cluster's budget.
- Billing Attribution: Use the spec.template.metadata.labels field in each NodePool to add "Team" or "Project" labels. These land on the provisioned nodes; for tags on the EC2 instances themselves, use the EC2NodeClass spec.tags field (visible in the describe output above). Together they make it easy to track costs per tenant (see the sketch below).
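A minimal sketch of both attribution mechanisms (the team-a and Project values are placeholders):

# In the NodePool: labels that land on the Kubernetes nodes
spec:
  template:
    metadata:
      labels:
        team: team-a
# In the EC2NodeClass: AWS tags applied to the launched EC2 instances
spec:
  tags:
    Team: team-a
    Project: clickhouse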
How to implement Taints and Tolerations alongside weights for stricter tenant "hard" isolation?
While weights allow Karpenter to prefer one NodePool over another, Taints and Tolerations are required for hard isolation. They ensure that nodes provisioned for one tenant "repel" pods from all other tenants.
The Isolation Strategy
To achieve strict tenant separation, we combine three elements:
- Taints: Applied to the NodePool to prevent unauthorized pods from scheduling on its nodes.
- Tolerations: Applied to the tenant's pods so they can "bypass" the taint.
- Node Affinity: Applied to the tenant's pods to "attract" them specifically to their dedicated nodes.
1. Dedicated Tenant NodePool
In the NodePool spec, add a taint. Any node Karpenter creates from this pool will automatically carry this "keep out" sign.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: tenant-a-pool
spec:
  weight: 50
  template:
    metadata:
      labels:
        tenant: "team-a" # Used for affinity; labels belong under template.metadata
    spec:
      taints:
        - key: "tenant"
          value: "team-a"
          effect: "NoSchedule" # Only pods with a matching toleration can land here
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
2. Tenant Pod Configuration
For Team A's workloads to run, their pods must explicitly tolerate the taint and prefer (or require) the tenant label.
apiVersion: v1
kind: Pod
metadata:
  name: team-a-app
spec:
  containers:
    - name: app
      image: nginx
  tolerations:
    - key: "tenant"
      operator: "Equal"
      value: "team-a"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "tenant"
                operator: In
                values: ["team-a"]
Why use both?
- Taint + Toleration alone stops other pods from accidentally using Team A's nodes, but it doesn't stop Team A's pods from accidentally landing on "General" nodes.
- Node Affinity ensures Team A's pods only go to their dedicated nodes.
- Weights (e.g., weight: 100) can still be used within a tenant's pool to prioritize Spot vs. On-Demand specifically for that tenant.
Best Practices
- Mutually Exclusive Pools: It is recommended to design NodePools so they do not overlap. If a pod matches multiple pools, Karpenter uses the one with the highest weight.
- NoExecute for Critical Changes: Use the NoExecute effect if we need to evict existing pods immediately when a node becomes inappropriate for them.
- Limit Resources: Always set spec.limits on each tenant pool to prevent a single team's auto-scaling from exhausting the entire AWS account's resources.
How to ensure our cluster has at least 3 nodes spread across 3 different Availability Zones (AZs)?
This is important if we want to implement a highly available architecture. We want nodes to be spread across multiple data centres, and with them the pods which belong to our application.
We can define a NodePool that forces a spread across zones using topology:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Force the nodes to be spread across these zones
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-east-1a", "us-east-1b", "us-east-1c"]
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws # assumes an EC2NodeClass named "default" exists
        kind: EC2NodeClass
        name: default
  # Cap the total resources this pool may provision. Note that Karpenter has no
  # "minimum nodes" setting: the 3-node spread comes from the pod topology
  # spread constraints shown below.
  limits:
    cpu: 1000
BONUS: Forcing Pods to use all 3 Zones
Even if we have 3 nodes in 3 zones, Kubernetes might try to put all our pods on just one of those nodes to be "efficient."
To prevent this, we use Topology Spread Constraints. This is the modern, more powerful version of Anti-Affinity. It ensures our pods are distributed evenly across the zones we just created.
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: DoNotSchedule # Or ScheduleAnyway
      labelSelector:
        matchLabels:
          app: my-app
maxSkew: 1: This means the difference in the number of pods between any two zones can't be more than 1. (e.g., 1-1-1 is fine, 2-1-0 is not).
How to check if Karpenter is deployed and operational in the cluster?
To verify that Karpenter is correctly configured and operational in our EKS cluster, we should follow validation steps described below.
1. Check Controller Health
a) Check Pod Status
Ensure the Karpenter controller pods are running without errors in the dedicated namespace (usually kube-system or karpenter).
We know that its pods should be installed in the kube-system namespace and that they should have the label app.kubernetes.io/name=karpenter, so we can filter pods by these two criteria:
% kubectl get pods -n kube-system -l app.kubernetes.io/name=karpenter
NAME READY STATUS RESTARTS AGE
karpenter-598976645b-96dps 1/1 Running 0 11h
karpenter-598976645b-nxm24 1/1 Running 0 12h
b) Inspect Logs
To watch for successful discovery of our cluster endpoint and region use:
% kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
-f = follow (the command does not return)
-l = select pods with the specified label
-c = only logs from the specified container
To verify successful discovery of our EKS cluster endpoint and region, we should look for specific initialisation and informer messages in the Karpenter controller logs.
Key Success Indicators
When Karpenter starts, it must connect to the AWS EKS API to "describe" the cluster. Look for these signs in the output of kubectl logs -n kube-system -l app.kubernetes.io/name=karpenter -c controller:
- "Starting informers...": This indicates Karpenter has successfully authenticated with the Kubernetes API server and is beginning to watch for unschedulable pods.
- Absence of "DescribeCluster" Errors: If discovery is working, we will not see errors like failed to detect the cluster CIDR or AccessDeniedException: ... eks:DescribeCluster.
- Region and Cluster Verification: In newer versions, Karpenter logs its configuration during startup. Look for a log entry mentioning the cluster name and AWS region we provided in our Helm values.
Example log:
{"level":"DEBUG","time":"2026-03-11T01:10:28.203Z","logger":"controller","caller":"operator/operator.go:132","message":"discovered karpenter version","commit":"1c39126","version":"1.3.2"}
{"level":"DEBUG","time":"2026-03-11T01:10:28.461Z","logger":"controller","caller":"operator/operator.go:124","message":"discovered region","commit":"1c39126","region":"us-east-1"}
{"level":"DEBUG","time":"2026-03-11T01:10:28.749Z","logger":"controller","caller":"operator/operator.go:129","message":"discovered region","commit":"1c39126","region":"us-east-1"}
{"level":"DEBUG","time":"2026-03-11T01:10:28.909Z","logger":"controller","caller":"operator/operator.go:135","message":"discovered cluster endpoint","commit":"1c39126","cluster-endpoint":"https://CA0xxxxxxx5FDD.yxx.us-east-1.eks.amazonaws.com"}
{"level":"DEBUG","time":"2026-03-11T01:10:28.914Z","logger":"controller","caller":"operator/operator.go:143","message":"discovered kube dns","commit":"1c39126","kube-dns-ip":"172.20.0.10"}
{"level":"INFO","time":"2026-03-11T01:10:28.948Z","logger":"controller.controller-runtime.metrics","caller":"server/server.go:208","message":"Starting metrics server","commit":"1c39126"}
{"level":"INFO","time":"2026-03-11T01:10:28.948Z","logger":"controller","caller":"manager/runnable_group.go:226","message":"starting server","commit":"1c39126","name":"health probe","addr":"[::]:8081"}
{"level":"INFO","time":"2026-03-11T01:10:28.950Z","logger":"controller.controller-runtime.metrics","caller":"server/server.go:247","message":"Serving metrics server","commit":"1c39126","bindAddress":":8080","secure":true}
{"level":"INFO","time":"2026-03-11T01:10:29.052Z","logger":"controller","caller":"leaderelection/leaderelection.go:215","message":"attempting to acquire leader lease kube-system/karpenter-leader-election...","commit":"1c39126"}
{"level":"DEBUG","time":"2026-03-11T06:00:19.215Z","logger":"controller","caller":"provisioning/provisioner.go:128","message":"computing scheduling decision for provisionable pod(s)","commit":"1c39126","controller":"provisioner","namespace":"","name":"","reconcileID":"921af0a4-f057-4041-bff5-d1861d9f72d1","pending-pods":1,"deleting-pods":0}
{"level":"DEBUG","time":"2026-03-11T06:00:21.223Z","logger":"controller","caller":"provisioning/provisioner.go:128","message":"computing scheduling decision for provisionable pod(s)","commit":"1c39126","controller":"provisioner","namespace":"","name":"","reconcileID":"9f0c7833-8e01-4661-8728-890f0001a634","pending-pods":1,"deleting-pods":0}
{"level":"INFO","time":"2026-03-11T06:00:29.230Z","logger":"controller","caller":"lifecycle/controller.go:148","message":"initialized nodeclaim","commit":"1c39126","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"xxxx-ms587"},"namespace":"","name":"xxxxx","reconcileID":"35624d4f-833a-4939-9785-24df4c975e0e","provider-id":"aws:///us-east-1c/i-0123456df20484e26","Node":
{"name":"ip-10-1-46-231.us-east-1.compute.internal"},"allocatable":{"cpu":"3920m","ephemeral-storage":"192128045146","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"15147932Ki","pods":"58"}}
{"level":"DEBUG","time":"2026-03-11T06:00:29.741Z","logger":"controller","caller":"disruption/controller.go:99","message":"marking consolidatable","commit":"1c39126","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"xxxx-ms587"},"namespace":"","name":"xxxx-ms587","reconcileID":"8c5c3d20-36eb-4a78-b0e8-792532db530d","Node":{"name":"ip-10-2-45-230.us-east-1.compute.internal"}}
{"level":"INFO","time":"2026-03-11T06:01:46.399Z","logger":"controller","caller":"disruption/controller.go:193","message":"disrupting node(s)","commit":"1c39126","controller":"disruption","namespace":"","name":"","reconcileID":"acc96c52-0cda-475f-b8a9-1251e7a98dc1","command-id":"3fa9d95e-8f45-48a9-b524-94786e1ac91a","reason":"empty","decision":"delete","disrupted-node-count":1,"replacement-node-count":0,"pod-count":0,"disrupted-nodes":[{"Node":{"name":"ip-10-2-45-230.us-east-1.compute.internal"},"NodeClaim":{"name":"xxxx-ms587"},"capacity-type":"on-demand","instance-type":"m5.xlarge"}],"replacement-nodes":[]}
Common Error Patterns to Watch For
If discovery fails, the logs will explicitly mention connectivity or permission issues:
- DNS/Endpoint Issues: Look for i/o timeout or lookup sts.<region>.amazonaws.com. This often means Karpenter can't reach the AWS STS endpoint to get credentials.
- IAM Permission Issues: Messages stating is not authorized to perform: eks:DescribeCluster mean the controller's IAM role (IRSA) is missing the necessary permissions to discover the cluster details.
- Controller Crash/Restart: If the logs show repeated restarts right after "Starting informers", it often points to a mismatch between the provided clusterName and the actual cluster.
Tip: Enable Debug Logging
If we don't see enough detail, we can increase the log verbosity. Update our Helm deployment with --set logLevel=debug or change the LOG_LEVEL environment variable in the deployment to debug.
2. Verify CRD Configurations
Karpenter requires specific Custom Resource Definitions (CRDs) to know how to provision nodes.
(1) List NodePools: Run kubectl get nodepools to ensure our provisioning logic is active.
(2) List EC2NodeClasses: Run kubectl get ec2nodeclasses to confirm AWS-specific settings (like subnets and security groups) are defined.
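If neither resource type is recognised, first confirm the CRDs themselves are installed; we should see entries along these lines:

% kubectl get crds | grep karpenter
ec2nodeclasses.karpenter.k8s.aws
nodeclaims.karpenter.sh
nodepools.karpenter.sh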
3. Perform a Scaling Test ("Inflate" Test)
The standard way to test Karpenter is by deploying a "dummy" workload that exceeds current cluster capacity.
(1) Deploy a test app: Apply a deployment (often called inflate) with high CPU/Memory requests; a sketch follows after this list.
(2) Scale it up: Run:
% kubectl scale deployment inflate --replicas=5
(3) Watch for new nodes: Monitor:
% kubectl get nodes -w
If configured correctly, Karpenter will detect the pending pods and provision a new EC2 instance within about a minute.
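Here is a minimal inflate deployment along the lines of the Karpenter getting-started guide (the pause image does nothing; it only reserves the requested resources, which is all the test needs):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0 # start at zero, then scale up to trigger provisioning
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1 # each replica requests a full vCPU, quickly exceeding spare capacity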
During the inflate scaling test, how to know that a new node was provisioned by karpenter and not cluster autoscaler?
During an inflate scaling test, we can distinguish between nodes provisioned by Karpenter and those from Cluster Autoscaler (CAS) by checking for specific labels, console status, and controller logs.
1. Check for Specific Kubernetes Labels
Karpenter automatically injects unique labels into every node it creates. CAS nodes usually belong to an Auto Scaling Group (ASG) and do not have these specific Karpenter markers.
Run this command to see the labels on our nodes:
kubectl get nodes --show-labels
Look for these Karpenter-exclusive labels:
- karpenter.sh/nodepool: The name of the NodePool that provisioned the node.
- karpenter.sh/capacity-type: Set to spot or on-demand.
- karpenter.k8s.aws/instance-category: (e.g., c, m, r).
Nodes provisioned by Cluster Autoscaler (CAS) don't have a unique "CAS" label. Instead, they carry labels that identify them as members of an Auto Scaling Group (ASG) or an EKS Managed Node Group (MNG).
If we are looking at a node and trying to confirm if it came from CAS, look for these specific markers:
1. Managed Node Group Labels (Most Common)
If we use EKS Managed Node Groups with CAS, the nodes will always have:
- eks.amazonaws.com/nodegroup: The name of the MNG.
- eks.amazonaws.com/nodegroup-image: The AMI ID used.
- eks.amazonaws.com/capacityType: Usually ON_DEMAND or SPOT.
- eks.amazonaws.com/sourceLaunchTemplateId and eks.amazonaws.com/sourceLaunchTemplateVersion: The launch template the node was created from.
2. Auto Scaling Group Labels
Since CAS works by increasing the "Desired Capacity" of an ASG, the underlying EC2 instance is tagged by AWS. Kubernetes reflects these as:
- alpha.eksctl.io/nodegroup-name: (If using eksctl)
- node.kubernetes.io/instance-type: (Standard, but CAS uses this to match ASG definitions)
3. The "Missing" Labels
The easiest way to identify a CAS node during a Karpenter test is by what it doesn't have. A CAS node will NEVER have:
- ❌ karpenter.sh/nodepool
- ❌ karpenter.sh/provisioner-name (deprecated)
- ❌ karpenter.k8s.aws/instance-category
Quick Check Command
Run this to see which nodes belong to Karpenter vs. CAS/MNG:
kubectl get nodes -L karpenter.sh/nodepool -L eks.amazonaws.com/nodegroup
If the karpenter.sh/nodepool column is populated, it's Karpenter.
If the eks.amazonaws.com/nodegroup column is populated, it's CAS/MNG.
2. Identify via "Self-Managed" Status in EKS Console
In the AWS EKS Console under the Compute tab:
- Karpenter Nodes: Appear as "Self-managed" because Karpenter bypasses Auto Scaling Groups to launch instances directly via the EC2 Fleet API.
- Cluster Autoscaler Nodes: Appear as part of a "Managed Node Group" or are tied to a specific ASG.
3. Check for the NodeClaim Object
Karpenter creates a NodeClaim for every node it provisions. Cluster Autoscaler does not use this resource. Run
kubectl get nodeclaims
...during the test. If we see new entries appearing that correspond to our inflate pods, Karpenter is doing the work.
4. Monitor Controller Logs
We can watch Karpenter’s real-time decision-making process by tailing its logs. It will explicitly state when it discovers unschedulable pods and which instance type it is launching.
kubectl logs -n kube-system -l app.kubernetes.io/name=karpenter -f
CAS logs, by contrast, will show it interacting with ASGs and increasing the "desired capacity" of a group.
How to disable Cluster Autoscaler temporarily to ensure Karpenter is the only one responding to our tests?
To ensure Karpenter is the only controller responding to our scaling tests, we can temporarily disable the Cluster Autoscaler (CAS) by scaling its deployment to zero replicas.
1. Identify the CAS Deployment
The Cluster Autoscaler typically runs in the kube-system namespace. Verify its name first:
kubectl get deployments -n kube-system | grep cluster-autoscaler
2. Scale to Zero
Run the following command to stop the CAS from running. This will terminate the pod responsible for monitoring the cluster and scaling our Auto Scaling Groups (ASGs):
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=0
3. Verify the Shutdown
Ensure no CAS pods are running to prevent them from interfering with our inflate test:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-cluster-autoscaler
4. (Optional) Remove ASG Tags
If we want a more permanent "hard" disable without deleting the deployment, we can remove the specific AWS tags from our Auto Scaling Groups that the CAS uses for auto-discovery:
- k8s.io/cluster-autoscaler/enabled
- k8s.io/cluster-autoscaler/<cluster-name>
Without these tags, the CAS will ignore those node groups even if the deployment is scaled back up.
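For example, with the AWS CLI (the ASG and cluster names below are placeholders):

aws autoscaling delete-tags --tags \
  "ResourceId=my-node-group-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled" \
  "ResourceId=my-node-group-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster"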
To Re-enable
Once our tests are complete, we can restore the Cluster Autoscaler by scaling it back to its original replica count:
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=1
To confirm which instance types Karpenter chose during our inflate test, we can watch the controller logs in real-time. Karpenter will log exactly how it batches our pods and which instances it requests from AWS.
1. Tail Karpenter Logs
Run the following command while our inflate pods are in a Pending state:
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter
Note: Some installations use the karpenter namespace instead of kube-system.
2. What to Look For
Karpenter logs its decisions in JSON or text format. Look for INFO messages containing found provisionable pod(s) or created nodeclaim.
A typical log entry looks like this:
2024-03-12T10:00:00.000Z INFO controller.provisioner created nodeclaim {"commit": "...", "nodeclaim": "default-abc12", "nodepool": "general-purpose", "requests": {"cpu":"4","memory":"8Gi"}, "instance-types": "m5.xlarge, m6i.xlarge, c5.2xlarge..."}
- requests: Shows the total CPU/Memory requested by our inflate pods.
- instance-types: Lists the candidates Karpenter passed to the EC2 Fleet API. Karpenter usually sends a diversified list (up to 60 types) to ensure high availability and best pricing.
3. Identify the Winning Instance
Once the node is live, we can confirm the final selected type by checking the NodeClaims; the default output already includes TYPE and ZONE columns:
kubectl get nodeclaims
The TYPE column shows the specific instance type (e.g., m5.large) that AWS actually provisioned for each claim.
4. Enable Debug Logging (Optional)
If we don't see enough detail, we can increase the log verbosity to debug. This will reveal exactly how Karpenter "binpacks" our pods onto different instance options.
kubectl patch deployment karpenter -n kube-system --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": "debug"}]'
(This assumes LOG_LEVEL is the first environment variable in the container spec; adjust the /env/0 index to match our deployment.)
Tip: After our test, remember to set the LOG_LEVEL back to info to avoid excessive log storage costs.
4. Monitor NodeClaims
Karpenter uses NodeClaims to manage the lifecycle of the nodes it creates.
- Check Status: Run kubectl get nodeclaims to see if requests for new nodes are being fulfilled.
- Debug Failures: If a node fails to join, use kubectl describe nodeclaim <name> to check for errors like LaunchFailed or issues with IAM permissions.
5. Verify Tagging
Karpenter discovers infrastructure via tags. Ensure our Subnets and Security Groups have the following tag:
Key: karpenter.sh/discovery
Value: <our-cluster-name>
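For example, tagging hypothetical subnet and security-group IDs with the AWS CLI (replace the IDs and cluster name with our own):

aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=karpenter.sh/discovery,Value=my-cluster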
How to know if node was provisioned by Karpenter?
Karpenter applies labels on nodes it provisions so let's check labels:
% kubectl get nodes --show-labels
If labels like karpenter.sh/nodepool (or the legacy karpenter.sh/provisioner-name on older versions) exist, Karpenter launched the node.
When setting up EKS Managed Node Group, we set desired, min and max number of nodes for ASG. Does that enable Cluster Autoscaler automatically? How does ASG play with Karpenter?
Setting the desired, min, and max size on an EKS Managed Node Group only configures the underlying AWS Auto Scaling Group (ASG).
- What AWS does: If a node crashes, the ASG will see that the "current" count is less than the "min" (or "desired") and spin up a new node to replace it.
- What AWS does NOT do: It will not look at our pending Kubernetes pods and say, "Oh, we need more space, let's increase the count from 3 to 4."
To get that "intelligent" scaling based on pod demand, we must install a separate controller.
Is Cluster Autoscaler (CAS) enabled by default?
No. Kubernetes Cluster Autoscaler is not enabled by default on EKS.
If we want to use it, we must:
- Deploy the Cluster Autoscaler as a Pod in our cluster (usually via Helm).
- Give that Pod an IAM Role (IRSA) that has permission to update our ASG's desired_capacity.
- Add specific tags to our Node Group so the Autoscaler knows which ASG to "manage."
Do we need to disable CAS to use Karpenter?
Yes, absolutely. We should not run Cluster Autoscaler and Karpenter simultaneously on the same nodes.
- The Conflict: CAS tries to scale nodes by changing the "desired capacity" of an ASG. Karpenter works differently—it bypasses ASGs entirely and talks directly to the EC2 Fleet API to launch specific instances.
- The Result of Running Both: They will fight over the cluster. CAS might try to shrink a group while Karpenter is trying to add capacity, leading to "flapping" nodes and unpredictable costs.
If we switch to Karpenter:
- Uninstall/Scale down the Cluster Autoscaler deployment.
- Set our Node Group sizes to fixed values (or migrate to "headless" node groups where Karpenter manages the entire lifecycle).
- Karpenter is the "New Way": Most AWS users are moving toward Karpenter because it is faster (seconds vs minutes) and more efficient at picking the right instance sizes.
Summary Comparison
Feature    ASG (Default)          Cluster Autoscaler (CAS)           Karpenter
---------  ---------------------  ---------------------------------  ----------------------------------
Logic      "Keep X nodes alive"   "Add nodes if Pods are Pending"    "Provision exactly what Pods need"
Speed      Slow (Health-based)    Medium (Polling ASG)               Fast (Direct EC2 API)
Setup      Built-in to EKS        Manual Install + IAM               Manual Install + IAM
Best for   Fixed capacity         Traditional scaling                Cost-optimization & high speed
Updating Kubernetes version on nodes managed by Karpenter
As covered above, setting expireAfter (formerly ttlSecondsUntilExpired) makes Karpenter recycle nodes on a schedule, and with an amiSelectorTerms alias such as al2023@latest the replacement nodes come up on the latest AMI for the cluster's Kubernetes version. Karpenter's drift detection likewise replaces nodes whose AMI no longer matches the current EC2NodeClass after a control-plane upgrade.