terraform-aws-modules/terraform-aws-eks is a popular Terraform module for provisioning Amazon Elastic Kubernetes Service (EKS) clusters.
In this article we explore and break down its key components and their purposes.
A typical usage of the module looks like this:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "21.15.1"
  ...
}
Let's explore this module's attributes.
1. Cluster Configuration
name, version
Set the name and the Kubernetes version of the EKS cluster. Use locals and variables for flexibility.
endpoint_public_access
Controls public (Internet) access to the Kubernetes API endpoint (the one kubectl talks to). Disable it for enhanced security.
endpoint_private_access
Controls private access to the API endpoint, i.e. whether resources within the VPC can reach it. With only private access enabled, the endpoint is reachable solely from within the VPC (Virtual Private Cloud) where your EKS cluster is deployed. There are a few ways to access it:
How to access the Kubernetes API from the VPC:
1. Use a bastion host or EC2 instance in the VPC. Launch an EC2 instance (bastion host or jump box) in a subnet within the same VPC as your EKS cluster. SSH into this instance and, from there, use kubectl to access the cluster. Alternatively, use SSH port forwarding or a VPN to proxy kubectl commands from your local machine through the bastion.
2. Use AWS Systems Manager (SSM) Session Manager. If your EC2 instances have the SSM agent and the necessary IAM permissions, you can use AWS SSM Session Manager to start a shell session on an instance in the VPC, then run kubectl from there.
3. Use a VPN connection. Set up a VPN (such as AWS Client VPN, OpenVPN, or a Site-to-Site VPN for an office LAN) that connects your local network to the VPC. Once connected, your local machine will be able to reach the private endpoint.
4. Use AWS PrivateLink (interface VPC endpoints). For advanced scenarios, you can use AWS PrivateLink to expose the Kubernetes API endpoint privately to other VPCs or on-premises networks.
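As a sketch of the SSM option, a minimal jump host could be provisioned alongside the cluster. This is an assumption-laden illustration, not part of the EKS module: the AMI data source (data.aws_ami.al2023), the companion VPC module, and all names are hypothetical.

# Hypothetical SSM-reachable jump host for a private EKS endpoint.
resource "aws_iam_role" "bastion" {
  name = "eks-bastion" # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Session Manager requires this managed policy on the instance role.
resource "aws_iam_role_policy_attachment" "bastion_ssm" {
  role       = aws_iam_role.bastion.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "bastion" {
  name = "eks-bastion"
  role = aws_iam_role.bastion.name
}

resource "aws_instance" "bastion" {
  ami                  = data.aws_ami.al2023.id          # assumed data source
  instance_type        = "t3.micro"
  subnet_id            = module.vpc.private_subnets[0]   # assumed VPC module
  iam_instance_profile = aws_iam_instance_profile.bastion.name
}

With this in place, aws ssm start-session gives you a shell in the VPC from which kubectl can reach the private endpoint; no SSH key or public IP is needed.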
enable_cluster_creator_admin_permissions
If enabled, grants cluster admin permissions to the IAM identity that creates the cluster.
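Combining the endpoint and admin flags above inside the module block, a hardened private-only setup might look like this (values are illustrative; check the module docs for the exact attribute names in your module version):

  # Private-only API endpoint; the creating identity gets admin access.
  endpoint_public_access                   = false
  endpoint_private_access                  = true
  enable_cluster_creator_admin_permissions = true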
2. Logging and Add-ons
enabled_log_types
Enables logging for various Kubernetes components (API, audit, authenticator, controllerManager, scheduler) for monitoring and troubleshooting.
Example:
enabled_log_types = [
  "api",
  "audit",
  "authenticator",
  "controllerManager",
  "scheduler"
]
addons
A map attribute that installs and configures essential Kubernetes add-ons. Map keys are add-on names such as:
- coredns
- kube-proxy
- aws-ebs-csi-driver
- vpc-cni
Map values are objects whose attributes include:
- most_recent - use the latest available version (set it to false for version pinning)
- version - the add-on version (use it for version pinning)
- before_compute - set to true if the add-on must be installed and configured before the nodes (compute layer) are created
- service_account_role_arn - configure the add-on with IAM Roles for Service Accounts (IRSA), enabling secure integration with AWS services
Example:
addons = {
  ...
  vpc-cni = {
    most_recent              = false
    version                  = "v1.21.1-eksbuild.7"
    before_compute           = true
    service_account_role_arn = module.k8s_default_vpc_cni_irsa.iam_role_arn
  }
  ...
}
VPC CNI (Container Network Interface) is responsible for allocating IP addresses to the Kubernetes nodes and provides networking to pods. The plugin manages elastic network interfaces (ENIs) on the nodes and uses them to assign IP addresses to pods.
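The service_account_role_arn in the example above points at an IRSA role. A sketch of how such a role could be created with the community IAM module follows; the module name k8s_default_vpc_cni_irsa matches the example, while the role_name and other values are illustrative assumptions:

module "k8s_default_vpc_cni_irsa" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name             = "vpc-cni" # illustrative
  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true

  # Trust the cluster's OIDC provider for the aws-node service account.
  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }
}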
3. Networking
We need to integrate the EKS cluster with an existing VPC and its subnets:
vpc_id
The ID of the VPC.
subnet_ids
Subnets in which the worker nodes (EC2 instances) will be created; this is where your workloads run.
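Wiring these attributes to a companion VPC module typically looks like this (the module name vpc is an assumption; any source of subnet IDs works):

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets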
control_plane_subnet_ids
Defines where the EKS control plane places its Elastic Network Interfaces (ENIs).
What it controls:
- The EKS control plane runs in an AWS-managed VPC (you don't see it)
- To communicate with your worker nodes, it creates ENIs in your VPC
- These ENIs are placed in the subnets you specify here
Typical configuration:
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  name   = "my-cluster"

  # Control plane ENIs go here
  control_plane_subnet_ids = [
    "subnet-private-1a",
    "subnet-private-1b",
    "subnet-private-1c"
  ]
}
Best practices:
- Usually private subnets
- Should span multiple AZs for high availability (AWS requires at least 2)
- Minimum of 2 subnets, maximum of 16
- Each subnet needs at least 5 available IP addresses
What these ENIs do:
- Allow the control plane to communicate with worker nodes
- Allow worker nodes to communicate with the API server
- Handle API server endpoint traffic
security_group_additional_rules
Adds custom security group rules for the cluster, such as allowing node-to-node communication and VPN access for kubectl.
node_security_group_additional_rules
Further customizes node security groups, allowing all node-to-node traffic and all outbound traffic.
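A sketch of the node rules just described, following the rule-map shape used by the module (the rule keys and descriptions are illustrative):

node_security_group_additional_rules = {
  # Allow all node-to-node traffic
  ingress_self_all = {
    description = "Node to node all ports/protocols"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "ingress"
    self        = true
  }
  # Allow all outbound traffic
  egress_all = {
    description = "Node all egress"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "egress"
    cidr_blocks = ["0.0.0.0/0"]
  }
}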
Understanding EKS Architecture
An EKS cluster has two main components:
┌─────────────────────────────────────────────────────────┐
│ EKS Cluster │
│ │
│ ┌───────────────────────────────────────┐ │
│ │ Control Plane (AWS Managed) │ │
│ │ - API Server │ │
│ │ - etcd │ │
│ │ - Scheduler │ │
│ │ - Controller Manager │ │
│ │ │ │
│ │ Runs in AWS-managed account │ │
│ └──────────────┬────────────────────────┘ │
│ │ │
│ │ ENIs in your VPC │
│ │ (control_plane_subnet_ids) │
│ ┌──────────────▼────────────────────────┐ │
│ │ Your VPC │ │
│ │ ┌─────────────────────────────┐ │ │
│ │ │ Worker Nodes (subnet_ids) │ │ │
│ │ │ - EC2 instances │ │ │
│ │ │ - Your pods run here │ │ │
│ │ └─────────────────────────────┘ │ │
│ └───────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
ENI: elastic network interface. It is a logical networking component in a VPC that represents a virtual network card.
4. Node Group Configuration
node_security_group_tags
Adds a tag for Karpenter (an open-source Kubernetes node autoscaler) discovery.
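Karpenter discovers which security groups to attach to the nodes it launches by looking for a well-known tag; a minimal sketch (local.cluster_name is an assumed local):

  node_security_group_tags = {
    "karpenter.sh/discovery" = local.cluster_name
  }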
eks_managed_node_group_defaults
Sets default properties for all managed node groups, including:
- Attaching the CNI policy for networking.
- Using a specific SSH key.
- Associating additional security groups.
- Defining block device mappings for EBS volumes.
- Attaching the AmazonSSMManagedInstanceCore policy for SSM access.
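The defaults listed above can be sketched like this; the SSH key name and the extra security group resource are illustrative assumptions:

eks_managed_node_group_defaults = {
  # Attach the CNI policy (networking) and the SSM policy (Session Manager)
  iam_role_additional_policies = {
    AmazonEKS_CNI_Policy         = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
  key_name               = "my-ssh-key"                  # illustrative
  vpc_security_group_ids = [aws_security_group.extra.id] # assumed resource
  block_device_mappings = {
    xvda = {
      device_name = "/dev/xvda"
      ebs = {
        volume_size = 50
        volume_type = "gp3"
      }
    }
  }
}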
eks_managed_node_groups
Defines a default managed node group with:
- A specific AMI type.
- Desired, minimum, and maximum node counts.
- Instance types from a variable.
- On-demand capacity, EBS optimization, and disk size.
- Custom labels for node identification and environment.
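Putting those properties together, a node group definition could look like this sketch (the group key, sizes, and labels are illustrative; var.node_instance_types is an assumed variable):

eks_managed_node_groups = {
  default = {
    ami_type       = "AL2023_x86_64_STANDARD" # illustrative AMI type
    desired_size   = 3
    min_size       = 2
    max_size       = 6
    instance_types = var.node_instance_types  # assumed variable
    capacity_type  = "ON_DEMAND"
    ebs_optimized  = true
    disk_size      = 50
    labels = {
      environment = "production" # illustrative
    }
  }
}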
The gold standard for production environments is explicit pinning: infrastructure changes only when we change the code. To pin the AMI version used in node groups, we need to set two attributes:
- ami_release_version needs to be set. This prevents nodes from cycling unexpectedly during a routine deployment.
- use_latest_ami_release_version needs to be set to false (without this, terraform plan will still propose upgrading the AMI version, even if ami_release_version is set).
Example:
eks_managed_node_groups = {
  "${local.cluster_name}-v1_33" = {
    ...
    ami_release_version            = "1.33.8-20260224"
    use_latest_ami_release_version = false
    ...
  }
}
5. Tagging
tags
Applies custom tags to all AWS resources created by the module, supporting cost allocation and resource management.
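For example (tag keys and values are illustrative; pick a scheme that matches your cost-allocation setup):

  tags = {
    Environment = "production"
    Terraform   = "true"
    Project     = "my-project"
  }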
Summary
Our configuration sets up a secure, private, and production-ready EKS cluster with managed node groups, essential add-ons, robust logging, and fine-grained network and IAM controls. It leverages best practices for security (private endpoints, IAM roles for service accounts), scalability (managed node groups, Karpenter tags), and maintainability (modular, versioned, and tagged infrastructure).