Saturday 1 June 2024

Deploying Microservices Application on the AWS EKS with Terraform and kubectl

In my previous article, Deploying Microservices Application on the AWS EKS with AWS Console and kubectl, I showed how to provision a Kubernetes cluster in AWS EKS via the AWS Console and how to use kubectl to deploy a microservices application to it.

Today I want to demonstrate how to use the AWS Terraform provider for provisioning the EKS cluster infrastructure instead of doing it manually in the AWS Console.

Let's recap which resources were created in Deploying Microservices Application on the AWS EKS with AWS Console and kubectl before we were able to deploy the application, and let's identify the matching resources from the AWS Terraform provider:

  • IAM role for cluster
    • name: eksClusterRole
    • resource "aws_iam_role"
  • EKS cluster
    • name: example-voting-app
    • resource "aws_eks_cluster"
  • IAM role for (worker) nodes
    • name: eksNodeRole
    • resource "aws_iam_role"
  • (worker) node group
    • name: demo-workers
    • resource "aws_eks_node_group"

Cluster IAM role (Cluster service role)

The matching AWS Terraform provider resource is aws_iam_role (see its page in the Terraform Registry). We'll make sure all required properties are set. We'll set:
  • assume_role_policy - (Required) Policy that grants an entity permission to assume the role. This is the cluster trust policy; let's save it as ../policies/eks_cluster_trust_policy.json
  • managed_policy_arns - (Optional) Set of exclusive IAM managed policy ARNs to attach to the IAM role. This will be arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
  • name - (Optional) Friendly name of the role; we used eksClusterRole
  • description - (Optional) Description of the role.
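For reference, here is the standard trust policy that allows the EKS service to assume the cluster role (this is the content we save in the eks_cluster_trust_policy.json file; it comes from the Amazon EKS cluster IAM role documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```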
Here is the Terraform code for provisioning this EKS cluster role:

resource "aws_iam_role" "eks-cluster-role" {
  name = "eksClusterRole"
  description = "Amazon EKS - Cluster role"
  assume_role_policy = file("${path.module}/policies/eks_cluster_trust_policy.json")
  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  ]
}

EKS cluster

The matching AWS Terraform provider resource is aws_eks_cluster (see its page in the Terraform Registry). We'll make sure all required properties are set, which are:
  • name - Name of the cluster. We used example-voting-app.
  • role_arn - ARN of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
  • vpc_config - Configuration block for the VPC associated with your cluster.
We can optionally set the version property, which is the desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
We'll not specify it, so the latest available version is used.
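If we later wanted to pin the control plane version, it would be a one-line change to the resource (the version value below is just an example):

```hcl
resource "aws_eks_cluster" "example-voting-app" {
  # ... other properties unchanged ...
  version = "1.29"
}
```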

Cluster access is set in the access_config configuration block of the aws_eks_cluster resource. As we kept the default options, we won't explicitly add this block to our aws_eks_cluster resource.
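For illustration, if we did want to set cluster access explicitly, the block would look something like this (the authentication_mode value below is just one of the valid values listed in the provider documentation, not necessarily what we'd choose):

```hcl
resource "aws_eks_cluster" "example-voting-app" {
  # ... other properties unchanged ...
  access_config {
    # Valid values: "CONFIG_MAP", "API", "API_AND_CONFIG_MAP"
    authentication_mode = "API_AND_CONFIG_MAP"
  }
}
```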

We used defaults for secrets encryption so we won't add encryption_config configuration block to our aws_eks_cluster resource either.
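For completeness, enabling envelope encryption of Kubernetes secrets would require a KMS key and a block along these lines (the aws_kms_key resource name below is my own; this is a sketch based on the provider documentation, not part of our deployment):

```hcl
# A customer-managed KMS key for encrypting Kubernetes secrets
resource "aws_kms_key" "eks" {
  description = "KMS key for EKS secrets encryption"
}

resource "aws_eks_cluster" "example-voting-app" {
  # ... other properties unchanged ...
  encryption_config {
    provider {
      key_arn = aws_kms_key.eks.arn
    }
    resources = ["secrets"]
  }
}
```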

vpc_config is a required configuration block in aws_eks_cluster and subnet_ids is its required property so we'll specify it.

In our Terraform code, we need to obtain the IDs of subnets of the default VPC:

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

As we're using default values for most of the properties of our EKS cluster, its Terraform code is quite simple:

resource "aws_eks_cluster" "example-voting-app" {
  name     = "example-voting-app"
  role_arn = aws_iam_role.eks-cluster-role.arn

  vpc_config {
    subnet_ids = data.aws_subnets.default.ids
  }
}

IAM role for (worker) nodes

Before provisioning a Managed node group and adding it to the cluster we need to add a new IAM Role, the one which will be used by worker nodes. 

Amazon EKS node IAM role - Amazon EKS (the EKS documentation) states that this role needs to have the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly managed policies attached, and the following trust policy, which allows EC2 instances to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

We named it eksNodeRole:

resource "aws_iam_role" "eks_node_role" {
  name        = "eksNodeRole"
  description = "IAM Role for EKS Node Group"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "eks_node_role-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "eks_node_role-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "eks_node_role-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_role.name
}

Managed node group (worker nodes)

The matching AWS Terraform resource is aws_eks_node_group (see its page in the Terraform Registry). The following arguments are required:
  • cluster_name - Name of the EKS Cluster. 
  • node_role_arn - Amazon Resource Name (ARN) of the IAM Role that provides permissions for the EKS Node Group. 
  • scaling_config - Configuration block with scaling settings. (Auto-scaling group settings)
    • desired_size - (Required) Desired number of worker nodes.
    • max_size - (Required) Maximum number of worker nodes.
    • min_size - (Required) Minimum number of worker nodes.
  • subnet_ids - Identifiers of EC2 Subnets to associate with the EKS Node Group. 
We can optionally set the following properties:
  • node_group_name - Name of the EKS Node Group. We used demo-workers.
  • ami_type - Type of Amazon Machine Image (AMI) associated with the EKS Node Group. The full list of valid values is in the Nodegroup - Amazon EKS API reference. We'll use "AL2_x86_64".
  • instance_types - List of instance types associated with the EKS Node Group. Defaults to ["t3.medium"]. We'll use the cheaper t2.medium (but not micro instances, which are too small for Kubernetes; see "Pod creation in EKS cluster fails with FailedScheduling error" on Stack Overflow).
  • disk_size - Disk size in GiB for worker nodes. Defaults to 50 for Windows, 20 all other node groups. We'll use the default value.

Here is the full code:

resource "aws_eks_node_group" "demo_workers" {
  node_group_name = "demo-workers"
  cluster_name    = aws_eks_cluster.example-voting-app.name
  node_role_arn   = aws_iam_role.eks_node_role.arn

  scaling_config {
    desired_size = 2
    max_size     = 2
    min_size     = 2
  }

  subnet_ids     = data.aws_subnets.default.ids
  ami_type       = "AL2_x86_64"
  instance_types = ["t2.medium"]
  disk_size      = 20

  # Ensure the IAM role permissions are created before, and deleted after,
  # the node group (as recommended in the provider documentation).
  depends_on = [
    aws_iam_role_policy_attachment.eks_node_role-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.eks_node_role-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks_node_role-AmazonEC2ContainerRegistryReadOnly,
  ]
}

The entire Terraform project can be found here: demo-terraform/aws/bk_aws3 at master · BojanKomazec/demo-terraform.

Provisioning the cluster resources

Now run terraform init (if the project hasn't been initialized yet), then use the terraform plan and terraform apply commands to provision these resources. Creating the cluster and the node group can take around 10 minutes.
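A typical session (output omitted) looks like this:

```shell
$ terraform init     # download the AWS provider and initialize the backend
$ terraform plan     # review the resources that will be created
$ terraform apply    # provision the IAM roles, cluster and node group
```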

Deploying and Testing the application
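As in the previous article, once the cluster is up we point kubectl at it and apply the application manifests. A sketch of the steps (the region placeholder and the manifests directory are assumptions; substitute the values from your own setup):

```shell
# Merge the new cluster's credentials into ~/.kube/config
$ aws eks update-kubeconfig --region <region> --name example-voting-app

# Deploy the application manifests (directory name is an example)
$ kubectl apply -f ./k8s-specifications/

# Verify that pods and services come up
$ kubectl get pods,svc
```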

Destroying cost-bearing resources

To avoid being charged for a running cluster once we're done with testing the application, we can destroy the worker node group and then the cluster:

$ terraform destroy -target="aws_eks_node_group.demo_workers"
$ terraform destroy -target="aws_eks_cluster.example-voting-app"

The names of the resources can be copied from the Terraform scripts or from the Terraform state, which is shown in the output of the following command:

$ terraform show

