Tuesday, 31 May 2022

Terraform Operators and Conditional Expressions

 



Numeric Operators


% terraform console
> 1+2
3
> 2-1
1
> 2*3
6
> 8/2
4


Equality & Comparison Operators


> 1 == 1
true
> 1 < 2
true
> 2 < 1
false
> 2 >= 1
true
> 1 == "1"
false
> 1 != "1"
true
>  


Logical Operators


AND (&&), OR (||), NOT (!)

> 1 < 2 && false
false
> 1 < 2 && true
true
> 1 < 2 || false
true
> 1 < 2 || true
true
> !(1 < 2) || false
false
> !(1 < 2)
false


main.tf:

variable flag {
type = bool
default = false
}

variable num_a {
type = number
default = 11
}

variable num_b {
        type = number
        default = 22
}

> var.flag
false
> !var.flag
true
> var.num_a
11
> var.num_b
22
> var.num_b < var.num_a
false

Conditional Expressions


value = condition ? value_if_condition_is_true : value_if_condition_is_false

Example: We want to provision a password generator that creates a random password of the length specified by the user. If the specified length is less than 8 characters, the generator will use the default length of 8.

main.tf:

resource "random_password" "pwd-generator" {
length = var.length < 8 ? 8 : var.length
}

output password {
value = random_password.pwd-generator.result
    sensitive = true
}

variable length {
type = number
}

In terminal:

$ terraform apply -var=length=6 -auto-approve
$ terraform output password
"DIo${L-*"

$ terraform apply -var=length=10 -auto-approve
$ terraform output password
"0Y3}Fh2Na2"

Terraform Functions

 


file() reads data from a file.

content = file("users.json")

 
length() returns the number of elements in a list or map:
 
count = length(var.users)

 
toset() converts a list (duplicates are allowed) into a set (no duplicates).

variable users {
    type = list
    default = [
        "Anne",
        "Anne",
        "Billy",
        "Connor",
    ]
    description = "A list of users"
}

resource "local_file" "users" {
    ...
    for_each = toset(var.users)
}
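
To make this concrete, a complete version of this resource could look as follows (the filename path and content are illustrative assumptions); inside the block, each.value holds the current element of the set:

resource "local_file" "users" {
    for_each = toset(var.users)
    filename = "/tmp/users/${each.value}.txt"
    content = "user: ${each.value}"
}

Since the set contains only unique values, Terraform creates one file each for Anne, Billy and Connor.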

Terraform Interactive Console


It can be used for testing functions and interpolations. To launch it:

$ terraform console

Let's test file function:

$ terraform console
> file("users.txt")
<<EOT
Anne
Anne
Billy
Connor

EOT
>  

> length(var.users)
4

> toset(var.users)
toset([
  "Anne",
  "Billy",
  "Connor",
])


 

Numeric Functions



> max(1, 2, 3)
3
> min(1, 2, 3)
1


variable values {
    type = set(number) 
    default = [1, 2, 3]
}

To pass a set or list variable as the function's arguments, we need to use the expansion symbol (...):

> max(var.values...)
3

> ceil(1.1)
2
> ceil(1.99)
2
> floor(1.01)
1
> floor(1.99)
1




String Functions

 
 variable ami_ids {
    type = string
    default = "ami-000, AMI-001, ami-002, ami-003"
}

> var.ami_ids
"ami-000, AMI-001, ami-002, ami-003"

> split(",", var.ami_ids)
tolist([
  "ami-000",
  " AMI-001",
  " ami-002",
  " ami-003",
])

> lower(var.ami_ids)
"ami-000, ami-001, ami-002, ami-003"
 
> upper(var.ami_ids)
"AMI-000, AMI-001, AMI-002, AMI-003"

 
title() converts the first letter of each word to uppercase:

> title(var.ami_ids)
"Ami-000, AMI-001, Ami-002, Ami-003"

substr() extracts a substring:

> substr(var.ami_ids, 0, 3)
"ami"
> substr(var.ami_ids, 0, 7)
"ami-000"


To get all the characters from the offset to the end of the string, length should be set to -1:
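
For example, with the var.ami_ids value defined above, offset 9 points at the second ID, so everything from there to the end is:

> substr(var.ami_ids, 9, -1)
"AMI-001, ami-002, ami-003"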


> join(".", [192, 168, 0, 1])
"192.168.0.1"
> join(".", ["192", "168", "0", "1"])
"192.168.0.1"

> join(",", var.users)
"Anne,Anne,Billy,Connor"



Collection Functions

 
> length(var.users)
4

 
 
> index(var.users, "Anne")
0
> index(var.users, "Billy")
2

 
 
To return the element at the specified index:
 
> element(var.users, 3)
"Connor"

 
 
> contains(var.users, "Bojan")
false
> contains(var.users, "Billy")
true

 
 
variable "amis" {
    type = map
    default = {
        "eu-west-1" = "ami-000"
        "eu-south-2" = "ami-001"
        "us-east-1" = "ami-002"
    }
}
 
 
> keys(var.amis)
tolist([
  "eu-south-2",
  "eu-west-1",
  "us-east-1",
])

 
 
> values(var.amis)
tolist([
  "ami-001",
  "ami-000",
  "ami-002",
])
 
 
> lookup(var.amis, "us-east-1")
"ami-002"
> lookup(var.amis, "us-east-2")

│ Error: Error in function call

│   on <console-input> line 1:
│   (source code not available)

│ Call to function "lookup" failed: lookup failed to find key "us-east-2".


> lookup(var.amis, "us-east-2", "ami-003")
"ami-003"
 
 

Terraform Modules

 

Terraform treats every .tf file in the configuration directory as a configuration file. This means that we can define all resources in a single .tf file or divide them across multiple .tf files.

In practice, there can be hundreds of resources, and both options above limit reusability.

A Terraform module is any directory that contains configuration files.

The module in which we run Terraform commands is called the root module.

To include a module A (in directory A) in a configuration file in module B (in directory B) we can do the following:

../my-projects/A/
../my-projects/A/main.tf
../my-projects/A/variables.tf
../my-projects/B/
../my-projects/B/main.tf

where ../my-projects/B/main.tf:
 
module "project-B" {
    source = "../A"
}
 
Module A is a child module of module B. project-B is the logical name of the module. source is a required argument in the module block. Its value is a relative or an absolute path to the child module's directory.

In practice, all reusable modules should be stored in a modules directory, grouped by their projects:
 

../my-projects/modules/
../my-projects/modules/A/app_server.tf
../my-projects/modules/A/dynamodb_table.tf
../my-projects/modules/A/s3_bucket.tf
../my-projects/modules/A/variables.tf
 
This example shows the project outline and configuration for provisioning resources for application that needs to be deployed in various AWS regions.
 
../my-projects/modules/
../my-projects/modules/my-app/app_server.tf
../my-projects/modules/my-app/dynamodb_table.tf
../my-projects/modules/my-app/s3_bucket.tf
../my-projects/modules/my-app/variables.tf
 
 
../my-projects/modules/my-app/app_server.tf:
 
resource "aws_instance" "my_app_server" {
    ami = var.ami
    instance_type = "t2.medium" 
    tags = {
        Name = "${var.app_region}-my-app-server"
    }
    depends_on= [
        aws_dynamodb_table.orders_db,
        aws_s3_bucket.products_data
    ]
}
 
../my-projects/modules/my-app/s3_bucket.tf:
 
resource "aws_s3_bucket" "products_data" {
    bucket = "${var.app_region}-${var.bucket}"
}

../my-projects/modules/my-app/dynamodb_table.tf:
 
resource "aws_dynamodb_table" "orders_db" {
    name = "orders_data" 
    billing_mode = "PAY_PER_REQUEST"
    hash_key = "OrderID"
    attribute {
        name = "OrderID" 
        type = "N"
    }
}

../my-projects/modules/my-app/variables.tf:
 
variable "app_region" {
    type = string
}

variable "bucket" {
    default = "product-manuals"
}

variable "ami" {
    type = string
}


If we want to deploy this infrastructure stack to e.g. eu-west-1 region (Ireland) we can create a directory ../my-projects/my-app-ie/ and in it:
 
../my-projects/my-app-ie/provider.tf:
 
provider "aws" {
    region = "eu-west-1"
}
 
../my-projects/my-app-ie/main.tf:
 
module "my_app_ie" {
    source = "../modules/my-app"
    app_region = "eu-west-1"
    ami = "ami-01234567890"
}
 
We can see that there are only two variables that differentiate deployment to each region. To provision this infrastructure stack in this region we just need to cd into ../my-projects/my-app-ie/ and execute:
 
$ terraform init
$ terraform apply

If we want to deploy it in e.g. Brazil, we'll have:
 
../my-projects/my-app-br/provider.tf:
 
provider "aws" {
    region = "sa-east-1"
}


../my-projects/my-app-br/main.tf:
 
module "my_app_br" {
    source = "../modules/my-app"
    app_region = "sa-east-1"
    ami = "ami-3456789012"
}
 
 

Using modules from the public registry

 
Apart from provider plugins, Terraform registry also contains modules:



Modules are grouped by the provider for which they are created. There are two types of modules:

  • verified - tested and maintained by HashiCorp
  • community - not validated by HashiCorp
 
Example of a verified module: the AWS module security-group, used to create EC2-VPC security groups on AWS. 
 

 
 
To use it in our own configuration we can first copy-paste code snippet which can be found under Provision Instructions section:

module "security-group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "4.9.0"
  # insert the 3 required variables here
}


The security-group module has an ssh submodule which can be used to create predefined security groups, like this one which allows inbound SSH:

module "security-group_ssh" {
    source  = "terraform-aws-modules/security-group/aws//modules/ssh"
    version = "4.9.0"
    # insert the 2 required variables here
    vpc_id = "vpc-0123456789" 
    ingress_cidr_blocks = [ "10.11.0.0/16" ]
    name = "ssh-access"
}
 
terraform get only downloads the modules referenced in the configuration from the registry:
 
$ terraform get

When using 3rd party modules, terraform apply might provision additional resources (on top of those we explicitly add to the configuration), as per the module's configuration.

Friday, 27 May 2022

Importing infrastructure in Terraform

 

Importing Resources


Some resources might be provisioned manually, via the AWS Console, or by Ansible. If we want such resources to start being managed by TF, we need to import them. The general syntax of the import command is:

terraform import <resource_type>.<resource_name> <attribute>

<attribute> is a resource attribute that uniquely identifies the resource, such as its ID. 

This command does not update the configuration file; it only tries to update the state file with the details of the infrastructure being imported.

Example:

$ terraform import aws_instance.my-other-server i-0123456789

The first run of this command fails with error:

Error: resource address aws_instance.my-other-server does not exist in the configuration

To fix it, we can manually add it but without filling any details - we keep the resource block empty:

resource "aws_instance" "my-other-server" {
}

terraform import should now run with no errors. This resource is now imported into TF state file.

If we try to run terraform apply now, it will fail with errors about missing required attributes. This is because our resource block is still empty in the configuration file and we need to assign the correct attribute values. 

We can inspect terraform.tfstate and see the values of all attributes that belong to this resource. Alternatively, we can find these details in the AWS Management Console or by using the AWS CLI, e.g.:

$ aws ec2 describe-instances

If we want to find the value of some particular attribute:

$ aws ec2 describe-instances --filters "Name=image-id,Values=ami-0123456789" | jq -r '.Reservations[].Instances[].InstanceId'

We should copy them into the resource configuration, e.g.:

resource "aws_instance" "my-other-server" {
    ami = "ami-0123456789"
    instance_type = "t2.micro"
    key_name = "ws"
    vpc_security_group_ids = [ "sg-0123456789" ]
}

This resource can now be fully managed by usual Terraform workflow including terraform apply.
 
 

Importing EC2 Key Pair

 
Let's assume an EC2 key pair was created manually in the AWS Management Console:
 
 
 
We want to get it under Terraform management (to be part of our Terraform state). 
 
In our root configuration (e.g. the main.tf file) we need to declare this resource and use its AWS Console name as the value of the key_name attribute:

main.tf:

...

resource "aws_key_pair" "ec2--my-app" {
    key_name = "key-pair--ec2--my-app"
}
 
...

We can then perform the import:

$ terraform import aws_key_pair.ec2--my-app key-pair--ec2--my-app
aws_key_pair.ec2--my-app: Importing from ID "key-pair--ec2--my-app"...
aws_key_pair.ec2--my-app: Import prepared!
  Prepared aws_key_pair for import
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.


terraform plan fails now:

$ terraform plan

│ Error: Missing required argument

│   on main.tf line 16, in resource "aws_key_pair" "ec2--my-app":
│   16: resource "aws_key_pair" "ec2--my-app" {

│ The argument "public_key" is required, but no definition was found.

During the manual creation of the EC2 key pair in the AWS Console we downloaded the private key, so we can derive the public key from it:

$ sudo chmod 400 key-pair--ec2--my-app.pem
$ ssh-keygen -y -f key-pair--ec2--my-app.pem > key-pair--ec2--my-app.pub

We can then reference this file in public_key value:
 
 
resource "aws_key_pair" "ec2--my-app" {
    key_name = "key-pair--ec2--my-app"
    public_key = file("./keys/key-pair--ec2--my-app.pub")
}

Now:

$ terraform plan
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

 ...

  # aws_key_pair.ec2--my-app must be replaced
-/+ resource "aws_key_pair" "ec2--my-app" {
      ~ arn             = "arn:aws:ec2:eu-west-1:036201477220:key-pair/key-pair--ec2--my-app" -> (known after apply)
      ~ fingerprint     = "a1:bc:ab:15:7e:87:d3:3b:e9:33:cd:21:8e:24:e7:8b:7b:ad:be:ad" -> (known after apply)
      ~ id              = "key-pair--ec2--my-app" -> (known after apply)
      + key_name_prefix = (known after apply)
      ~ key_pair_id     = "key-0986398ef799fdd42" -> (known after apply)
      + public_key      = "ssh-rsa AAAAB4NzaC1yc2EAAAADAQABAAABAQCmo/In0KJapZmvLFpBWwoOtf7RXrV4iQPjDcddWzG79q8jJlJKVtG1kI3l9XuU8hzmG0eqpyyhy61Hr9pLFtFWFUDa+RqAHYpUwSWV9a4JXRLwA5lEnxvXfIRGIHx7cALTawiVmVDTFJGqkJUfjWD7jHZTaK8NjOBY9k/IX0E51LayxjWxm2jJ1LJ8TTuSr/NYOpsnBDfmojgU9B3ZWAbvrtFwC6JkRJ0dR3YMx392TA9ky9MM/o/ItpZqOWWG64fDcEqNSUeIYPa+oLLlTyZy8aqwTJfLbV554x7G/U0vrd1H3H58GjANEuJAT7oHo94IcyQdmIgSXwlQtyXDEgbB" # forces replacement
      - tags            = {
          - "Description" = "Key pair used for SSH access"
        } -> null
      ~ tags_all        = {
          - "Description" = "Key pair used for SSH access"
        } -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.


So terraform apply will replace the key pair we created, which is not ideal (how would we get hold of the new private key?). The aws_key_pair documentation confirms this limitation when importing a key pair:

The AWS API does not include the public key in the response, so terraform apply will attempt to replace the key pair. There is currently no supported workaround for this limitation.

This brings me to the conclusion that if we want to provision an EC2 instance via Terraform, the best way to manage its SSH key pair is to create the keys on the local machine (via a 3rd party tool like ssh-keygen) and then use the aws_key_pair resource type, rather than create them in the AWS Management Console.
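
A minimal sketch of that approach (resource names here are illustrative), using the tls provider to generate the key pair locally:

resource "tls_private_key" "my-app" {
    algorithm = "RSA"
    rsa_bits  = 4096
}

resource "aws_key_pair" "my-app" {
    key_name   = "key-pair--ec2--my-app"
    public_key = tls_private_key.my-app.public_key_openssh
}

Keep in mind that with tls_private_key the private key is stored in the Terraform state file, so the state must be kept secure.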



How to create PKA key pair using AWS

To SSH to our EC2 instance we need to create a Public Key Authentication (PKA) key pair, which consists of a public and a private key. The public key is stored on the EC2 instance (in ~/.ssh/authorized_keys); this happens on the first boot. The private key needs to be present on the machine from which we want to establish the SSH connection; its path is passed to the SSH command.
 
We can create key pair in multiple ways:

 

The public key needs to be imported to AWS EC2. One way is via Terraform, by using aws_key_pair and passing its id attribute value as the value of the key_name attribute of aws_instance.

 

How to create PKA key pair using AWS Management Console

 
Log in to the AWS Management Console and, in the left-hand menu, find the Key Pairs item in the Network & Security group:
 
 
 
At the beginning we have no key pairs created, so we click the Create key pair button:
 

 
 
This opens a dialog where we can choose the key pair name, encryption type and private key file format:
 

 
When we click the Create key pair button, the private key file (named key-pair--ec2--my-app.pem in this example) is automatically downloaded to our computer and we can see that the new key pair is now listed:
 


 
 
If you want the same key pair to work in multiple AWS regions, make sure the public key is applied to each region.
 

How to password-protect the private key file

 
To password-protect the downloaded .pem file we can use:

$ ssh-keygen -p -f key-pair--ec2--my-app.pem 
 
If the file is readable by anyone, this operation will fail with:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for 'key-pair--ec2--my-app.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Failed to load key key-pair--ec2--my-app.pem: bad permissions


The file was indeed readable by everyone:

$ ls -la key-pair--ec2--my-app.pem
-rw-rw-r-- 1 bojan bojan 1678 May 27 11:34 key-pair--ec2--my-app.pem

 
To rectify the error above, we need to assign read permissions only to the file owner:
 
$ sudo chmod 400 key-pair--ec2--my-app.pem 
 
$ ls -la key-pair--ec2--my-app.pem
-r-------- 1 bojan bojan 1678 May 27 11:34 key-pair--ec2--my-app.pem


We can now set the password on the file:

$ sudo ssh-keygen -p -f key-pair--ec2--my-app.pem
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.

Next time this file is used by ssh, you'll be prompted to enter the password.
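
For example (the EC2 user name and public IP are placeholders):

$ ssh -i key-pair--ec2--my-app.pem ubuntu@<EC2_public_IP>
Enter passphrase for key 'key-pair--ec2--my-app.pem':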
 
 

How to create public key from the private key 



It is not possible to download (or see) the public key in the EC2 Key pairs dashboard (the list seen on the screenshot above). But it is possible to generate it from the private key (.pem file):

$ ssh-keygen -y -f key-pair--ec2--my-app.pem > key-pair--ec2--my-app.pub
Enter passphrase:
 
$ cat key-pair--ec2--my-app.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtkH9hzk...0a+UPwPy
hD


How to create PKA key pair using AWS CLI

 
The following command creates the key pair, automatically uploads the public key to AWS EC2 (it gets listed in the AWS Management Console among EC2 key pairs) and saves the private key on the local machine:
 
$ aws ec2 create-key-pair \
--key-name key-pair--ec2--my-app \
--query 'KeyMaterial' \
--output text > key-pair--ec2--my-app.pem


---

Both approaches shown above provision EC2 key pairs manually. 

If we want to use Terraform, the best way is to use a 3rd party tool (e.g. OpenSSH) to create key pairs locally and then use the aws_key_pair resource in the TF configuration. It is NOT possible to import manually provisioned key pairs into TF state without recreating them. (For more details see Importing infrastructure in Terraform | My Public Notepad)



---

Tuesday, 24 May 2022

Terraform Provisioners

 


Terraform provisioners allow running commands or scripts on provisioned resources or on the local host. To run a bootstrap script once a resource is provisioned, we can use the remote-exec provisioner:

resource "aws_instance" "my-web-server" {
    ...
    provisioner "remote-exec" {
        inline = [
            "sudo apt update",
            "sudo apt -y install nginx",
            "sudo systemctl enable nginx",
            "sudo systemctl start nginx"
        ]
    }

    vpc_security_group_ids = [ aws_security_group.ssh-access.id ]
    key_name = aws_key_pair.my-webserver.id
    ...
}

resource "aws_security_group" "ssh-access" {
    name = "ssh-access"
    description = "Allows SSH connection from anywhere"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_key_pair" "my-webserver" {
    public_key = ...
}

For this to work there must be:
  • Network connectivity between local machine and that remote EC2 instance: SSH for Linux and WinRM for Windows. This can be achieved by using proper security groups while creating remote resources
  • Authentication (SSH private key)
The connection to the resource can be defined in a connection block:

resource "aws_instance" "my-web-server" {
    ...
    connection {
        type = "ssh"
        host = self.public_ip
        user = "ubuntu"
        private_key = file(pathexpand("~/.ssh/my-webserver.pem"))
    }
    ...
}

self.public_ip will contain the public IP address of the provisioned instance.

terraform apply will establish this SSH connection and then execute commands in remote-exec provisioner.

To run tasks on the local machine where Terraform runs, we need to use the local-exec provisioner. This can be useful if e.g. we want to gather some data from the provisioned resource and write it into a local file. local-exec does not require a connection block.

 resource "aws_instance" "my-web-server" {
    ...
    provisioner "local-exec" {
        command = "echo ${aws_instance.my-web-server.public_ip} >> /tmp/ip.txt"
    }
    ...
}

After terraform apply we'll have the file created and populated.

Example #2: Upon provisioning elastic IP resource we want its public_dns to be saved in a local file:

resource "aws_eip" "my-web-server-eip" {
    vpc = true
    instance = aws_instance.my-web-server.id
    provisioner "local-exec" {
        command = "echo ${aws_eip.my-web-server-eip.public_dns} >> /root/my-web-server-eip_public_dns.txt"
    }
}

Example #3: Instead of manually executing the chmod 400 command on a private key created by the TF script, and also adding it to the local keychain, we can use local-exec to automate this:

resource "tls_private_key" "rsa-4096-private-key" {
    algorithm = "RSA"
    rsa_bits  = 4096
}

...

resource "local_file" "ec2-key" {
    content  = tls_private_key.rsa-4096-private-key.private_key_pem
    filename = "${path.module}/temp/ec2-key"
    file_permission = "400"
    provisioner "local-exec" {
        command = "ssh-add ${self.filename}"
    }
}

By default, provisioners run after their resource is created. These are so-called creation-time provisioners.

destroy-time provisioners run before resources are destroyed; a provisioner is made such by setting its when attribute to destroy:

resource "aws_instance" "my-web-server" {
    ...
    provisioner "local-exec" {
        command = "echo Instance ${self.public_ip} created! > /tmp/state.txt"
    }

    provisioner "local-exec" {
        command = "echo Instance ${self.public_ip} removed! > /tmp/state.txt"
        when = destroy
    }
    ...
}

By default, if any of the provisioners' tasks fails, the whole terraform apply fails too. This can be set explicitly by setting the on_failure attribute to fail. If we don't want the success of a provisioner's command to determine the success of provisioning the whole infrastructure, we can set on_failure to continue:

 resource "aws_instance" "my-web-server" {
    ...
    provisioner "local-exec" {
        command = ...
        on_failure = fail
    }

    provisioner "local-exec" {
        command = ...
        on_failure = continue

    }
    ...
}

Provisioners should be used sparingly, as a last resort:
  • Provisioners add to configuration complexity.
  • terraform plan does not show what provisioners will do.
  • A connection block needs to be defined for some provisioners to work. This network connectivity between the local host and the remote resource, and the required authentication, might not always be desirable.

We should first try to use options natively available for the given resource type and provider. E.g. user_data is a native feature of EC2 instances and when using it we don't need to define a connection block.

Here is a list of resources and their native options (attributes) for some infrastructure providers, in the form Provider - Resource - Option: 
  • AWS - aws_instance - user_data
  • Azure - azurerm_virtual_machine - custom_data
  • GCP - google_compute_instance - metadata
  • VMware vSphere - vsphere_virtual_machine - user_data.txt
It is recommended to keep post-provisioning tasks to a minimum. Instead of using an AMI with only the OS installed, we should build custom AMIs in advance that contain the software and configuration for our resources, and then use these AMIs.

Example: we can create a custom AMI which already has Nginx installed. 

Tools like Packer can help create a custom AMI in a declarative way. We specify what we want installed in a template file (e.g. nginx.json) and Packer creates a custom AMI with Nginx. This way we don't need to use provisioners at all.
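
A minimal nginx.json might look roughly like this (the region, source AMI ID and SSH user name are placeholder assumptions):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-0123456789abcdef",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "nginx-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt update", "sudo apt -y install nginx"]
  }]
}

Running packer build nginx.json then outputs the ID of the newly built AMI, which we can use as the ami value in aws_instance.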


user_data 


We can list commands directly in configuration file:

main.tf:
 
resource "aws_instance" "my-instance" {
    ...
    user_data = <<-EOF
        #!/bin/bash
        sudo yum update
        sudo yum install -y htop
    EOF
    ...
}

A better approach is to keep all commands in a bash script which then gets loaded into the TF configuration:

bootstrap.sh:
 
#! /bin/bash
sudo yum update
sudo yum install -y htop

main.tf:

resource "aws_instance" "my-instance" {
    ...
    user_data = file("bootstrap.sh")
    ...
}




---

Resources:


Monday, 23 May 2022

Managing AWS EC2 using Terraform

 

For provisioning EC2 instance, we need to use aws_instance resource. 

provider.tf:

provider "aws" {
    region = "eu-west-1"
}

main.tf:

resource "aws_instance" "my-web-server" {
    ami = "ami-0123456789abcdef"
    instance_type = "t2.micro"
    tags = {
        Name = "my-web-server"
        Description = "My Nginx on Ubuntu server"
    }
    user_data = <<-EOF
                #!/bin/bash
                sudo apt update
                sudo apt -y install nginx
                systemctl enable nginx
                systemctl start nginx
                EOF
}


ami and instance_type are mandatory and tags and user_data are optional attributes. 
 
The same AMI has different IDs in different regions. If an AMI is not available in the chosen region, terraform apply will issue the following error:

Error: creating EC2 Instance: InvalidAMIID.NotFound: The image id '[ami-033b95fb8078dc481]' does not exist
│       status code: 400, request id: 1818bb3d-6455-411a-b3e5-8e2cbca60371


---

We could have also used variables (usually defined in a separate file, e.g. variables.tf) and set attribute values from them:

variable "ami" {
    default = "ami-0123456789abcdef"
}

resource "aws_instance" "my-web-server" {
    ami = var.ami
    ...
}
---
 
Instead of embedding the list of commands in the configuration file, we could have used the file function's output as the value of the user_data attribute:

install-nginx.sh:

#!/bin/bash
sudo apt -y update
sudo apt -y install nginx
sudo systemctl start nginx

main.tf:

resource "aws_instance" "my-web-server" {
    ...
    user_data = file("./install-nginx.sh")
}

If we add user_data after the EC2 instance has already been provisioned, the next terraform apply will destroy this EC2 instance, create a new one and then execute user_data on it.
---
 
Instead of using user_data to install the required software each time an EC2 instance is provisioned, it might be more time- and resource-efficient to build a custom AMI which contains this software and then use that AMI. This way the software is installed only once.
---

terraform apply will now provision this resource and we'll have our server up and running.

At this moment, we cannot access this machine (via SSH) as we don't know its IP address and we haven't specified a key pair.

We can reuse an existing key pair by provisioning aws_key_pair resource in main.tf:

resource "aws_key_pair" "my-webserver" {
    public_key = file(pathexpand("~/.ssh/my-webserver.pub"))
}

public_key is a mandatory argument while key_name, key_name_prefix and tags are optional.

We assumed here that the public key is present on the local machine running Terraform. (The PKA key pair might have been created on this machine or copied from another one, but it is not yet present among AWS EC2 key pairs - the AWS key pair resource is yet to be created!)

We could have also embedded the public key as a string:

resource "aws_key_pair" "my-webserver" {
    public_key = "ssh-rsa ABCD234234....Dcg464wf user@iac-server"
}

We can now refer to this key from aws_instance resource:

 resource "aws_instance" "my-web-server" {
    ...
    key_name = aws_key_pair.my-webserver.id
    ...
}

To provision networking required to access the EC2 instance we need to provision AWS Security Group resource, just like when provisioning EC2 instance manually. aws_security_group block is used for this:

resource "aws_security_group" "ssh-access" {
    name = "ssh-access"
    description = "Allows SSH connection from anywhere"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

---
 
NOTE: There are two ways to define certain attributes (ingress rules in our case above):
  • attribute syntax (ingress = [{...}]) - where values for all options need to be provided
  • block syntax (nested blocks, as above) - we only need to provide values for the options of interest
 
---
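
To illustrate with the ingress rule above (a sketch; the exact set of attributes required by the attribute syntax depends on the AWS provider version):

# attribute syntax: every attribute of the ingress object must be set
ingress = [{
    description = "SSH"
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    ipv6_cidr_blocks = []
    prefix_list_ids = []
    security_groups = []
    self = false
}]

# block syntax: unspecified options get default values
ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
}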
 
We can now reference this security group from our EC2 instance resource:

 resource "aws_instance" "my-web-server" {
    ...
    vpc_security_group_ids = [ aws_security_group.ssh-access.id ]
    ...
}

For a manual SSH connection we also want to know the public IP address that will be assigned to our EC2 instance once it is provisioned. We can use an output variable to capture it:

output public_ip {
    value = aws_instance.my-web-server.public_ip
}

The value of this variable gets displayed in the output of terraform apply.
 
Example terraform apply output snippet:
...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
public_ip = "35.145.57.118"
 
We can then SSH to this server manually:

$ ssh -i ~/.ssh/my-webserver.pem <user_name>@<public_IP>


For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
For an Ubuntu AMI, the user name is ubuntu.
...
 

We can also add the -v option to the ssh command in order to enable verbose (debug) mode:

$ ssh -i ~/.ssh/my-webserver.pem ec2-user@35.145.57.118 -v
 
 
We use the public IPv4 address to access this server. However, when this server is rebooted or recreated, this IP address may change. To fix this, we can create an Elastic IP address by using the aws_eip resource. It is a static IPv4 address which does not change over time.

resource "aws_eip" "my-web-server-eip" {
    instance = aws_instance.my-web-server.id
    vpc = true
}

---

Resources: