Tuesday, 31 May 2022

Terraform Operators and Conditional Expressions

 



Numeric Operators


% terraform console
> 1+2
3
> 2-1
1
> 2*3
6
> 8/2
4


Equality & Comparison Operators


> 1 == 1
true
> 1 < 2
true
> 2 < 1
false
> 2 >= 1
true
> 1 == "1"
false
> 1 != "1"
true
>  


Logical Operators


AND, OR, NOT

> 1 < 2 && false
false
> 1 < 2 && true
true
> 1 < 2 || false
true
> 1 < 2 || true
true
> !(1 < 2) || false
false
> !(1 < 2)
false


main.tf:

variable flag {
    type = bool
    default = false
}

variable num_a {
    type = number
    default = 11
}

variable num_b {
    type = number
    default = 22
}

> var.flag
false
> !var.flag
true
> var.num_a
11
> var.num_b
22
> var.num_b < var.num_a
false

Conditional Expressions


value = condition ? value_if_condition_is_true : value_if_condition_is_false

Example: we want to provision a password generator that creates a random password of the length specified by the user. If the length is less than 8 characters, the generator falls back to the default length of 8.

main.tf:

resource "random_password" "pwd-generator" {
    length = var.length < 8 ? 8 : var.length
}

output password {
    value = random_password.pwd-generator.result
    sensitive = true
}

variable length {
    type = number
}

In terminal:

$ terraform apply -var=length=6 -auto-approve
$ terraform output password
"DIo${L-*"

$ terraform apply -var=length=10 -auto-approve
$ terraform output password
"0Y3}Fh2Na2"

Terraform Functions

 


file() reads data from a file.

content = file("users.json")

 
length() returns the number of elements in a list or map:
 
count = length(var.users)

 
toset() converts a list (duplicates are allowed) into a set (no duplicates).

variable users {
    type = list
    default = [
        "Anne",
        "Anne",
        "Billy",
        "Connor",
    ]
    description = "A list of users"
}

resource "local_file" "users" {
    ...
    for_each = toset(var.users)
}

Terraform Interactive Console


It can be used for testing functions and interpolations. To launch it:

$ terraform console

Let's test file function:

$ terraform console
> file("users.txt")
<<EOT
Anne
Anne
Billy
Connor

EOT
>  

> length(var.users)
4

> toset(var.users)
toset([
  "Anne",
  "Billy",
  "Connor",
])


 

Numeric Functions



> max(1, 2, 3)
3
> min(1, 2, 3)
1


variable values {
    type = set(number) 
    default = [1, 2, 3]
}

To pass a collection variable as a function's arguments, we need to use the expansion symbol (...):

> max(var.values...)
3

> ceil(1.1)
2
> ceil(1.99)
2
> floor(1.01)
1
> floor(1.99)
1




String Functions

 
 variable ami_ids {
    type = string
    default = "ami-000, AMI-001, ami-002, ami-003"
}

> var.ami_ids
"ami-000, AMI-001, ami-002, ami-003"

> split(",", var.ami_ids)
tolist([
  "ami-000",
  " AMI-001",
  " ami-002",
  " ami-003",
])

> lower(var.ami_ids)
"ami-000, ami-001, ami-002, ami-003"
 
> upper(var.ami_ids)
"AMI-000, AMI-001, AMI-002, AMI-003"

 
title() converts the first character of each word to uppercase:

> title(var.ami_ids)
"Ami-000, AMI-001, Ami-002, Ami-003"

substr() extracts a substring:

> substr(var.ami_ids, 0, 3)
"ami"
> substr(var.ami_ids, 0, 7)
"ami-000"


To get all characters from the offset to the end of the string, set length to -1.
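For example, with var.ami_ids from above, taking everything from offset 9 (the start of the second list item) to the end of the string:

> substr(var.ami_ids, 9, -1)
"AMI-001, ami-002, ami-003"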


> join(".", [192, 168, 0, 1])
"192.168.0.1"
> join(".", ["192", "168", "0", "1"])
"192.168.0.1"

> join(",", var.users)
"Anne,Anne,Billy,Connor"



Collection Functions

 
> length(var.users)
4

 
 
> index(var.users, "Anne")
0
> index(var.users, "Billy")
2

 
 
To return the element at the specified index:
 
> element(var.users, 3)
"Connor"

 
 
> contains(var.users, "Bojan")
false
> contains(var.users, "Billy")
true

 
 
variable "amis" {
    type = map
    default = {
        "eu-west-1" = "ami-000"
        "eu-south-2" = "ami-001"
        "us-east-1" = "ami-002"
    }
}
 
 
> keys(var.amis)
tolist([
  "eu-south-2",
  "eu-west-1",
  "us-east-1",
])

 
 
> values(var.amis)
tolist([
  "ami-001",
  "ami-000",
  "ami-002",
])
 
 
> lookup(var.amis, "us-east-1")
"ami-002"
> lookup(var.amis, "us-east-2")

│ Error: Error in function call

│   on <console-input> line 1:
│   (source code not available)

│ Call to function "lookup" failed: lookup failed to find key "us-east-2".


> lookup(var.amis, "us-east-2", "ami-003")
"ami-003"
 
 

Terraform Modules

 

Terraform treats every .tf file in the configuration directory as a configuration file. This means we can define all resources in a single .tf file or divide them across multiple .tf files.

In practice, there can be hundreds of resources, and neither option above promotes reusability.

A Terraform module is a directory containing a set of .tf configuration files.

The module from which we run Terraform commands is called the root module. Every Terraform configuration therefore has a root module.

Terraform commands operate on the configuration files in the root module (the current working directory), but those files can load other modules from local or remote sources via module blocks:

module "child_module_local_name" {
   source = ...
   version = ...
   child_module_input_variable_1 = ...
   child_module_input_variable_2 = ...
   child_module_input_variable_3 = ...
   ...    
}

We say that the root module calls other modules (child modules) to include their resources in the configuration. In the example above, the root module calls a child module and uses child_module_local_name as its local name. It sets the child module's input variables and can later reference ONLY the output values declared in the child module, using the following syntax:

id = module.child_module_local_name.provisioned_resource_id

provided that in the child module, in outputs.tf we have something like:

output "aws_resource_id" {
  description = "The ID of the AWS resource this module creates"
  value       = try(aws_resource.this.id, "")
}


The root module loads a local module if it resides on the local filesystem, and a remote module if it is a remote resource.

source is a mandatory argument, used to specify the local or remote location of the child module.

version is used for modules published in remote repositories.

The other arguments are simply input variables of the child module; setting their values passes data into the child module (much like calling a function in a conventional programming language).

 

Calling local modules

 
Let's see how to load a local module. Let's assume we have the following hierarchy:
 
../my-projects/A/
../my-projects/A/main.tf
../my-projects/A/variables.tf
../my-projects/B/
../my-projects/B/main.tf

To include module A (in directory A) in a configuration file of module B (in directory B), we can do the following in ../my-projects/B/main.tf:
 
module "project-A" {
    source = "../A"
}
 
Module A is a child module of module B. project-A is the local name of the module. source is a required argument in the module block; its value is a relative or absolute path to the child module's directory.

In practice, all reusable modules should be stored in a modules directory, grouped by their projects.

This example shows the project outline and configuration for provisioning resources for an application that needs to be deployed in various AWS regions.

Project outline:

../my-projects/modules/
../my-projects/modules/my-app/app_server.tf
../my-projects/modules/my-app/dynamodb_table.tf
../my-projects/modules/my-app/s3_bucket.tf
../my-projects/modules/my-app/variables.tf
 
../my-projects/modules/my-app/app_server.tf:
 
resource "aws_instance" "my_app_server" {
    ami = var.ami
    instance_type = "t2.medium" 
    tags = {
        Name = "${var.app_region}-my-app-server"
    }
    depends_on = [
        aws_dynamodb_table.orders_db,
        aws_s3_bucket.products_data
    ]
}
 
../my-projects/modules/my-app/s3_bucket.tf:
 
resource "aws_s3_bucket" "products_data" {
    bucket = "${var.app_region}-${var.bucket}"
}

../my-projects/modules/my-app/dynamodb_table.tf:
 
resource "aws_dynamodb_table" "orders_db" {
    name = "orders_data" 
    billing_mode = "PAY_PER_REQUEST"
    hash_key = "OrderID"
    attribute {
        name = "OrderID" 
        type = "N"
    }
}

../my-projects/modules/my-app/variables.tf:
 
variable "app_region" {
    type = string
}

variable "bucket" {
    default = "product-manuals"
}

variable "ami" {
    type = string
}


If we want to deploy this infrastructure stack to e.g. the eu-west-1 region (Ireland), we can create a directory ../my-projects/my-app-ie/ and in it:
 
../my-projects/my-app-ie/provider.tf:
 
provider "aws" {
    region = "eu-west-1"
}
 
../my-projects/my-app-ie/main.tf:
 
module "my_app_ie" {
    source = "../modules/my-app"
    app_region = "eu-west-1"
    ami = "ami-01234567890"
}
 
We can see that only two variables differentiate the deployment for each region. To provision this infrastructure stack in this region, we just need to cd into ../my-projects/my-app-ie/ and execute:
 
$ terraform init
$ terraform apply

If we want to deploy it in e.g. Brazil, we'll have:
 
../my-projects/my-app-br/provider.tf:
 
provider "aws" {
    region = "sa-east-1"
}

../my-projects/my-app-br/main.tf:
 
module "my_app_br" {
    source = "../modules/my-app"
    app_region = "sa-east-1"
    ami = "ami-3456789012"
}
 
The usual practice is to define the same variables at the parent level so they can be used to set the values of the module's variables:
 
 ../my-projects/my-app-br/variables.tf:
 
variable "app_region" {
    type = string
}

variable "bucket" {
    default = "product-manuals"
}

variable "ami" {
    type = string
    default = "ami-123456789"
}

 
 
...and these values are then passed to the module:
 
../my-projects/my-app-br/main.tf:
 
module "my_app_br" {
    source = "../modules/my-app"
    app_region = var.app_region
    ami = var.ami
}
 
We can see that app_region does not have a value set in the code. Variables defined at the parent level can be set on the command line when calling terraform plan or terraform apply:
 
$ terraform apply -var app_region=eu-west-1

If we try to pass a value for a variable that is not defined at the root/parent level, we'll get the following error:
 

│ Error: Value for undeclared variable

│ A variable named "appregion" was assigned on the command line, but the root module does not declare a variable of that name. To use this value, add a
│ "variable" block to the configuration.
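
Alternatively, the variable values can be kept in a .tfvars file and passed in bulk; the file name here is just an illustration:

../my-projects/my-app-br/prod.tfvars:

app_region = "sa-east-1"
ami = "ami-3456789012"

$ terraform apply -var-file=prod.tfvars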



Calling modules from the public registry

 
Apart from provider plugins, the Terraform registry also contains modules:



Modules are grouped by the provider for which they are created. There are two types of modules:

  • verified - tested and maintained by HashiCorp
  • community - not validated by HashiCorp
 
Example of a verified module: the AWS module security-group, used to create EC2-VPC security groups on AWS.
 

 
 
To use it in our own configuration, we can first copy-paste the code snippet found under the Provision Instructions section:

module "security-group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "4.9.0"
}


The security-group module has an ssh submodule which can be used to create predefined security groups, like this one that allows inbound SSH:

module "security-group_ssh" {
    source  = "terraform-aws-modules/security-group/aws//modules/ssh"
    version = "4.9.0"
    vpc_id = "vpc-0123456789" 
    ingress_cidr_blocks = [ "10.11.0.0/16" ]
    name = "ssh-access"
}
 
terraform get only downloads the module from the registry:
 
$ terraform get

When using 3rd-party modules, terraform apply might provision additional resources (on top of those we explicitly add to the configuration), as per the module's configuration.
 

Calling modules from another Git repository

 
It is possible to call modules defined in an arbitrary Git repository. 
 
 There are two different ways to write a Git SSH URL for Terraform:

# "scp-style":
git::username@hostname:path

# "URL-style":
git::ssh://username@hostname/path

 
In both of these cases, Terraform is just taking the portion after the git:: prefix (after also removing any //subdir and ?rev=... portions) and passing it to git clone:

git clone username@hostname:path
git clone ssh://username@hostname/path

 
How the rest of this is interpreted is entirely up to git. Notice that the scp-style string uses a colon to separate the path from the hostname, while the URL style uses a slash, as described in the official git documentation.
 
It is recommended to use the "URL-style" because it's consistent with the other URL forms accepted in module source addresses and thus probably more familiar/intuitive to readers.

If your SSH server is running on a non-standard TCP port (not port 22), you can include a port number, but only with a URL-style address, by introducing a colon after the hostname:

# URL-style with port number
git::ssh://username@hostname:port/path

 
 
Let's assume we have a TF module in the repo ssh://git@git.example.com, in the directory path/to/module/. To call this module we need to use the following value for source:

module "child_module_name"  {
    source = "git::ssh://git@git.example.com/org/repo//path/to/module"
}

If using the HTTPS URL and tag v2.1 on the default branch:

source = "git::https://git.example.com/org/repo.git?ref=v2.1"
 
If using some other branch:
 
source = "git::https://git.example.com/org/repo.git?ref=branch-name"
 
If using a module nested in the repository hierarchy:
 
source = "git::https://git.example.com/org/repo.git//path/to/module?ref=branch-name"

It is also possible to specify a particular commit ID:

source = "git::https://git.example.com/org/repo.git//path/to/module?ref=62d462976d84fdea54b47d80dcabbf680badcad1"

How to reference a TF module from an arbitrary branch on GitHub?

module "my_module" {
    source = "git::https://git.example.com/terraform-modules/my-aws-s3-module.git//modules/s3-bucket?ref=feature/my-branch"
}

modules/s3-bucket is the path to the module in the remote repo.

If we want to reference a git tag, e.g. v0.0.1, on the default branch:

source = "git::https://git.example.com/terraform-modules/my-aws-s3-module.git?ref=v0.0.1"



Friday, 27 May 2022

Importing infrastructure in Terraform

 

Importing Resources


Some resources might be provisioned manually, via the AWS Console, or by Ansible. If we want such resources to be managed by TF, we need to import them. The general syntax of the import command is:

terraform import <resource_type>.<resource_name> <attribute>

<attribute> is a resource attribute which uniquely identifies the resource, such as its ID.

This command does not update the configuration file; it only tries to update the state file with the details of the infrastructure being imported.

Example:

$ terraform import aws_instance.my-other-server i-0123456789

The first run of this command fails with an error:

Error: resource address aws_instance.my-other-server does not exist in the configuration

To fix it, we can manually add the resource, but without filling in any details - we keep the resource block empty:

resource "aws_instance" "my-other-server" {
}

terraform import should now run with no errors. The resource is now imported into the TF state file.

If we try to run terraform apply now, it will complain about missing required attributes. This is because our resource block is still empty in the configuration file, and we need to assign the correct values.

We can inspect terraform.tfstate and see the values of all attributes that belong to this resource.
Alternatively, we can find these details in the AWS Management Console or by using the AWS CLI, e.g.:

$ aws ec2 describe-instances

If we want to find the value of some particular attribute:

$ aws ec2 describe-instances --filters "Name=image-id,Values=ami-0123456789" | jq -r '.Reservations[].Instances[].InstanceId'

We should copy them into the resource configuration, e.g.:

resource "aws_instance" "my-other-server" {
    ami = "ami-0123456789"
    instance_type = "t2.micro"
    key_name = "ws"
    vpc_security_group_ids = [ "sg-0123456789" ]
}

This resource can now be fully managed by usual Terraform workflow including terraform apply.
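
Once the attribute values in the configuration match the real infrastructure, terraform plan is a useful sanity check - it should report that nothing needs to change (output abbreviated):

$ terraform plan
aws_instance.my-other-server: Refreshing state... [id=i-0123456789]

No changes. Your infrastructure matches the configuration.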
 
 

Importing EC2 Key Pair

 
Let's assume an EC2 key pair was created manually in the AWS Management Console:
 
 
 
We want to bring it under Terraform management (make it part of our Terraform state).
 
In our root configuration (e.g. the main.tf file) we need to specify this resource and use its AWS Console name as the value of the key_name attribute:

main.tf:

...

resource "aws_key_pair" "ec2--my-app" {
    key_name = "key-pair--ec2--my-app"
}
 
...

We can then perform the import:

$ terraform import aws_key_pair.ec2--my-app key-pair--ec2--my-app
aws_key_pair.ec2--my-app: Importing from ID "key-pair--ec2--my-app"...
aws_key_pair.ec2--my-app: Import prepared!
  Prepared aws_key_pair for import
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.


terraform plan now fails:

$ terraform plan

│ Error: Missing required argument

│   on main.tf line 16, in resource "aws_key_pair" "ec2--my-app":
│   16: resource "aws_key_pair" "ec2--my-app" {

│ The argument "public_key" is required, but no definition was found.

During the manual creation of the EC2 key pair in the AWS Console we downloaded the private key, so we can derive the public key from it:

$ sudo chmod 400 key-pair--ec2--my-app.pem
$ ssh-keygen -y -f key-pair--ec2--my-app.pem > key-pair--ec2--my-app.pub

We can then reference this file in the public_key value:
 
 
resource "aws_key_pair" "ec2--my-app" {
    key_name = "key-pair--ec2--my-app"
    public_key = file("./keys/key-pair--ec2--my-app.pub")
}

Now:

$ terraform plan
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

 ...

  # aws_key_pair.ec2--my-app must be replaced
-/+ resource "aws_key_pair" "ec2--my-app" {
      ~ arn             = "arn:aws:ec2:eu-west-1:036201477220:key-pair/key-pair--ec2--my-app" -> (known after apply)
      ~ fingerprint     = "a1:bc:ab:15:7e:87:d3:3b:e9:33:cd:21:8e:24:e7:8b:7b:ad:be:ad" -> (known after apply)
      ~ id              = "key-pair--ec2--my-app" -> (known after apply)
      + key_name_prefix = (known after apply)
      ~ key_pair_id     = "key-0986398ef799fdd42" -> (known after apply)
      + public_key      = "ssh-rsa AAAAB4NzaC1yc2EAAAADAQABAAABAQCmo/In0KJapZmvLFpBWwoOtf7RXrV4iQPjDcddWzG79q8jJlJKVtG1kI3l9XuU8hzmG0eqpyyhy61Hr9pLFtFWFUDa+RqAHYpUwSWV9a4JXRLwA5lEnxvXfIRGIHx7cALTawiVmVDTFJGqkJUfjWD7jHZTaK8NjOBY9k/IX0E51LayxjWxm2jJ1LJ8TTuSr/NYOpsnBDfmojgU9B3ZWAbvrtFwC6JkRJ0dR3YMx392TA9ky9MM/o/ItpZqOWWG64fDcEqNSUeIYPa+oLLlTyZy8aqwTJfLbV554x7G/U0vrd1H3H58GjANEuJAT7oHo94IcyQdmIgSXwlQtyXDEgbB" # forces replacement
      - tags            = {
          - "Description" = "Key pair used for SSH access"
        } -> null
      ~ tags_all        = {
          - "Description" = "Key pair used for SSH access"
        } -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.


So terraform apply will replace the key pair we created, which is not ideal (how do we get hold of the new private key?). The aws_key_pair documentation confirms this limitation when importing a key pair:

The AWS API does not include the public key in the response, so terraform apply will attempt to replace the key pair. There is currently no supported workaround for this limitation.

This brings me to the conclusion that if we want to provision an EC2 instance via Terraform, the best way to manage its SSH key pair is to create it on the local machine (via a 3rd-party tool like ssh-keygen, then use the aws_key_pair resource type) rather than create it in the AWS Management Console.


Modern and safer way: using import block


Introduced in Terraform v1.5, the import block is a declarative way to bring existing infrastructure under Terraform management. Unlike the legacy terraform import CLI command, which immediately modifies the state file without a preview, the import block allows you to review the import operation as part of your standard terraform plan workflow before applying any changes. 

Core Syntax


The block requires two primary arguments:
  • to: The resource address where you want to import the infrastructure in your Terraform configuration (e.g., aws_s3_bucket.example).
  • id: The provider-specific unique identifier for the existing resource (e.g., a bucket name, an AWS ARN, or an Azure Resource ID). 

import {
  to = aws_s3_bucket.my_existing_bucket
  id = "my-unique-bucket-name"
}


Key Features

  • Automatic Config Generation: You can run terraform plan -generate-config-out=generated.tf to have Terraform automatically create the HCL resource blocks for the resources you are importing.
  • Bulk Imports: You can include multiple import blocks in your configuration to bring in many resources at once.
  • Iteration with for_each: Starting in Terraform v1.7, import blocks support for_each, allowing you to import collections of similar resources using a single block.
  • Version Control: Because imports are now defined in code, they can be reviewed via Pull Requests and tracked in version control history. 
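
The for_each variant can be sketched as follows (the bucket names and resource addresses are hypothetical):

locals {
    buckets = {
        "assets" = "my-assets-bucket"
        "logs"   = "my-logs-bucket"
    }
}

import {
    for_each = local.buckets
    to       = aws_s3_bucket.this[each.key]
    id       = each.value
}

resource "aws_s3_bucket" "this" {
    for_each = local.buckets
    bucket   = each.value
}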

Standard Workflow

  • Define the Block: Add an import block to your configuration file (often a temporary imports.tf).
  • Generate or Write Config: Either manually write the matching resource block or use the -generate-config-out flag during a plan.
  • Plan: Run terraform plan to preview the import. Terraform will show "1 to import" in the plan summary.
  • Apply: Run terraform apply to execute the import and update your state file.
  • Clean Up: Once the import is successful, it is a best practice to remove the import block from your code, as it is a one-time operation. 

Import Block vs. CLI Command


Feature             import Block (Modern)                terraform import CLI (Legacy)
------------------  -----------------------------------  --------------------------------
Workflow            Declarative (in code)                Imperative (one-off command)
Preview             Yes, via terraform plan              No, modifies state immediately
Config Generation   Built-in via -generate-config-out    Manual (must write code first)
Safety              High (plan/apply cycle)              Low (easy to make state errors)
CI/CD               Pipeline-friendly                    Difficult to automate safely


Always use the import block method. Because it requires an apply to touch the state, it gives you a "safety buffer" where you can verify that your import isn't going to accidentally trigger a replace (destroy/create) due to a typo in your code.

Once the apply is finished and the resource is in your state, you should delete the import block from your code. If you leave it there, Terraform will simply ignore it on future runs because the resource is already in the state.


Example (automatic implementation of terraform resource): importing S3 bucket


To import an existing S3 bucket using the declarative import block, you only need to provide the bucket's name as the unique identifier.

1. Create the Import Block 


Add this to your Terraform configuration (e.g., in a file named imports.tf): 

import {
  to = aws_s3_bucket.my_imported_bucket
  id = "your-actual-bucket-name" # The name as it appears in the AWS Console
}

2. Generate the Configuration Automatically


Instead of writing the resource "aws_s3_bucket" block manually, you can have Terraform generate it for you based on the live settings of the existing bucket. 

Run the following command in your terminal:

terraform plan -generate-config-out=generated.tf

3. Review and Apply


  • Check the new file: Open generated.tf. It will contain a full resource "aws_s3_bucket" "my_imported_bucket" block with all current settings (tags, versioning, etc.).
  • Clean up: It is recommended to remove read-only or default attributes (like arn or hosted_zone_id) from the generated code before proceeding.
  • Execute: Run terraform apply to finalize the import and record the bucket in your state file. 

Why use this over the old way?


Unlike the legacy terraform import aws_s3_bucket.name bucket-name command, the import block allows you to:
  • Preview the import in a standard terraform plan before any state changes occur.
  • Avoid manual coding by using the -generate-config-out flag to capture complex existing configurations.
  • Version control your imports, ensuring your entire team can see what is being brought into management. 
Note: If you are importing a Directory Bucket (S3 Express One Zone), the id format must be the full name: [bucket_name]--[azid]--x-s3


Example of implementing resources code manually: importing IAM IC User


1. Create resources and import block



resource "aws_identitystore_user" "bojan_komazec" {
  identity_store_id = local.identity_store_id

  display_name = "Bojan Komazec"
  user_name    = "bojan@example.com"

  name {
    given_name  = "Bojan"
    family_name = "Komazec"
  }

  emails {
    primary = true
    type   = "work"
    value = "bojan@example.com"
  }
}

resource "aws_identitystore_group_membership" "bojan_komazec_in_devops" {
  identity_store_id = local.identity_store_id
  group_id          = aws_identitystore_group.devops.group_id
  member_id         = aws_identitystore_user.bojan_komazec.user_id
}

import {
  to = aws_identitystore_user.bojan_komazec
  id = "d-xxxxxxxxx/xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
}

import {
  to = aws_identitystore_group_membership.bojan_komazec_in_devops
  id = "d-xxxxxxxxxx/yyyyyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyy"
}


2. Execute terraform plan


Verify that it shows that NO resources will be replaced or deleted and that 2 resources will be imported:

Terraform will perform the following actions:

  # aws_identitystore_group_membership.bojan_komazec_in_devops will be imported
    resource "aws_identitystore_group_membership" "bojan_komazec_in_devops" {
       ...
    }

  # aws_identitystore_user.bojan_komazec will be imported
    resource "aws_identitystore_user" "bojan_komazec" {
       ...
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.

3. Execute terraform apply


4. Remove import blocks


----

How to create PKA key pair using AWS

To SSH into our EC2 instance we need to create a Public Key Authentication (PKA) key pair, which consists of a public and a private key. The public key is stored on the EC2 instance (in ~/.ssh/authorized_keys); this happens on the first boot. The private key needs to be present on the machine from which we want to establish the SSH connection; its path is passed to the SSH connect command.
 
We can create the key pair in multiple ways: via the AWS Management Console, via the AWS CLI, or locally with a 3rd-party tool like ssh-keygen.

The public key needs to be imported into the EC2 instance. One way is via Terraform, by using aws_key_pair and passing its key_name attribute value as the value of the key_name attribute of aws_instance.
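
This wiring can be sketched like this, assuming the key pair was generated locally (the key file path and AMI ID are hypothetical):

resource "aws_key_pair" "my_app" {
    key_name   = "key-pair--ec2--my-app"
    public_key = file("./keys/key-pair--ec2--my-app.pub")
}

resource "aws_instance" "my_app_server" {
    ami           = "ami-0123456789"
    instance_type = "t2.micro"
    key_name      = aws_key_pair.my_app.key_name
}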

 

How to create PKA key pair using AWS Management Console

 
Log in to the AWS Management Console and in the left-hand list find the Key Pairs item in the Network & Security group:
 
 
 
At the beginning we have no key pairs created, so we click the Create key pair button:
 

 
 
This opens a dialog where we can choose the key pair name, encryption type and private key file format:
 

 
When we click the Create key pair button, the private key file (named key-pair--ec2--my-app.pem in this example) gets downloaded to our computer automatically, and we can see that the new key pair is now listed:
 


 
 
If you want the same key pair to work in multiple AWS regions, make sure the public key is imported into each region.
 

How to password-protect the private key file

 
To password-protect the downloaded .pem file we can use:

$ ssh-keygen -p -f key-pair--ec2--my-app.pem 
 
If the file is readable by others, this operation will fail with:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for 'key-pair--ec2--my-app.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Failed to load key key-pair--ec2--my-app.pem: bad permissions


The file was indeed readable by everyone:

$ ls -la key-pair--ec2--my-app.pem
-rw-rw-r-- 1 bojan bojan 1678 May 27 11:34 key-pair--ec2--my-app.pem

 
To rectify the error above, we need to assign read permission only to the file owner:
 
$ sudo chmod 400 key-pair--ec2--my-app.pem 
 
$ ls -la key-pair--ec2--my-app.pem
-r-------- 1 bojan bojan 1678 May 27 11:34 key-pair--ec2--my-app.pem


We can now set the password on the file:

$ sudo ssh-keygen -p -f key-pair--ec2--my-app.pem
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.

The next time this file is used by ssh, you'll be prompted to enter the password.
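
The key can then be used as usual when connecting; the user name depends on the AMI (ec2-user is assumed here) and the address below is a placeholder:

$ ssh -i key-pair--ec2--my-app.pem ec2-user@<ec2-public-ip>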
 
 

How to create public key from the private key 



It is not possible to download (or see) the public key in the EC2 Key pairs dashboard (the list seen in the screenshot above). But it is possible to generate it from the private key (.pem file):

$ ssh-keygen -y -f key-pair--ec2--my-app.pem > key-pair--ec2--my-app.pub
Enter passphrase:
 
$ cat key-pair--ec2--my-app.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtkH9hzk...0a+UPwPy
hD


How to create PKA key pair using AWS CLI

 
The following command creates the key pair, automatically registers it with AWS EC2 (it gets listed in the AWS Management Console among EC2 key pairs), and saves the private key on the local machine:
 
$ aws ec2 create-key-pair \
--key-name key-pair--ec2--my-app \
--query 'KeyMaterial' \
--output text > key-pair--ec2--my-app.pem

 
--query 'KeyMaterial' extracts the private key material from the response
--output text > key-pair--ec2--my-app.pem saves the private key material in a file with the specified extension. The extension can be either .pem or .ppk
 
Additional arguments:
 
--key-type: rsa (default) or ed25519
--key-format: pem (default) or ppk 
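
For example, to create an ED25519 key pair instead of the default RSA one (the key name is just an illustration):

$ aws ec2 create-key-pair \
--key-name key-pair--ec2--my-app-ed25519 \
--key-type ed25519 \
--query 'KeyMaterial' \
--output text > key-pair--ec2--my-app-ed25519.pem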


 

---

Both approaches shown above provision EC2 key pairs manually.

If we want to use Terraform, the best way is to use a 3rd-party tool (e.g. OpenSSH's ssh-keygen) to create key pairs locally and then use the aws_key_pair resource in the TF configuration. It is NOT possible to import manually provisioned key pairs into TF state without recreating them. (For more details see Importing infrastructure in Terraform | My Public Notepad.)


How to list/find key pairs?


Use ec2 describe-key-pairs. To list all key pairs:

$ aws ec2 describe-key-pairs


To list the details of a specific key pair:
     
$ aws ec2 describe-key-pairs --key-names key-pair--ec2--bojan-temp
{
    "KeyPairs": [
        {
            "KeyPairId": "key-0483bced858d885ba",
            "KeyFingerprint": "c2:18:8e:93:ee:52:f9:13:bb:05:9d:94:0c:52:af:9b:ff:6b:d5:3f",
            "KeyName": "key-pair--ec2--bojan-temp",
            "KeyType": "rsa",
            "Tags": []
        }
    ]
}



How to remove a specific key pair?


 
$ aws ec2 delete-key-pair --key-name key_pair_name



---