Importing Resources
Some resources might be provisioned manually, via the AWS Console, or by a tool such as Ansible. If we want Terraform to start managing such resources, we need to import them. The general syntax of the import command is:
terraform import <resource_type>.<resource_name> <id>
<id> is an attribute that uniquely identifies the resource, such as an instance ID.
This command does not update the configuration file; it only updates the state file with the details of the imported infrastructure.
Example:
$ terraform import aws_instance.my-other-server i-0123456789
The first run of this command fails with an error:
Error: resource address aws_instance.my-other-server does not exist in the configuration
To fix it, we can add the resource manually, leaving the resource block empty for now:
resource "aws_instance" "my-other-server" {
}
terraform import should now run without errors, and the resource is imported into the Terraform state file.
If we try to run terraform apply now, it fails because required attributes are not defined: the resource block is still empty in the configuration file, and we need to assign the correct values.
We can inspect terraform.tfstate to see the values of all attributes that belong to this resource.
Alternatively, we can find these details in the AWS Management Console or via the AWS CLI, e.g.:
$ aws ec2 describe-instances
If we want to find the value of some particular attribute:
$ aws ec2 describe-instances --filters "Name=image-id,Values=ami-0123456789" | jq -r '.Reservations[].Instances[].InstanceId'
We then copy the values into the resource configuration, e.g.:
resource "aws_instance" "my-other-server" {
  ami                    = "ami-0123456789"
  instance_type          = "t2.micro"
  key_name               = "ws"
  vpc_security_group_ids = ["sg-0123456789"]
}
This resource can now be fully managed through the usual Terraform workflow, including terraform apply.
Importing EC2 Key Pair
Let's assume an EC2 key pair was created manually in the AWS Management Console.
We want to bring it under Terraform management (make it part of our Terraform state).
In our root configuration (e.g. the main.tf file) we need to declare this resource, using the key pair's name from the AWS Console as the value of the key_name attribute:
main.tf:
...
resource "aws_key_pair" "ec2--my-app" {
  key_name = "key-pair--ec2--my-app"
}
...
We can then perform the import:
$ terraform import aws_key_pair.ec2--my-app key-pair--ec2--my-app
aws_key_pair.ec2--my-app: Importing from ID "key-pair--ec2--my-app"...
aws_key_pair.ec2--my-app: Import prepared!
Prepared aws_key_pair for import
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
terraform plan now fails:
$ terraform plan
╷
│ Error: Missing required argument
│
│ on main.tf line 16, in resource "aws_key_pair" "ec2--my-app":
│ 16: resource "aws_key_pair" "ec2--my-app" {
│
│ The argument "public_key" is required, but no definition was found.
╵
During the manual creation of the EC2 key pair in the AWS Console we downloaded the private key, so we can derive the public key from it:
$ sudo chmod 400 key-pair--ec2--my-app.pem
$ ssh-keygen -y -f key-pair--ec2--my-app.pem > key-pair--ec2--my-app.pub
We can then reference this file in the public_key value:
resource "aws_key_pair" "ec2--my-app" {
  key_name   = "key-pair--ec2--my-app"
  public_key = file("./keys/key-pair--ec2--my-app.pub")
}
Now:
$ terraform plan
aws_key_pair.ec2--my-app: Refreshing state... [id=key-pair--ec2--my-app]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
Terraform will perform the following actions:
...
  # aws_key_pair.ec2--my-app must be replaced
-/+ resource "aws_key_pair" "ec2--my-app" {
      ~ arn             = "arn:aws:ec2:eu-west-1:036201477220:key-pair/key-pair--ec2--my-app" -> (known after apply)
      ~ fingerprint     = "a1:bc:ab:15:7e:87:d3:3b:e9:33:cd:21:8e:24:e7:8b:7b:ad:be:ad" -> (known after apply)
      ~ id              = "key-pair--ec2--my-app" -> (known after apply)
      + key_name_prefix = (known after apply)
      ~ key_pair_id     = "key-0986398ef799fdd42" -> (known after apply)
      + public_key      = "ssh-rsa AAAAB4NzaC1yc2EAAAADAQABAAABAQCmo/In0KJapZmvLFpBWwoOtf7RXrV4iQPjDcddWzG79q8jJlJKVtG1kI3l9XuU8hzmG0eqpyyhy61Hr9pLFtFWFUDa+RqAHYpUwSWV9a4JXRLwA5lEnxvXfIRGIHx7cALTawiVmVDTFJGqkJUfjWD7jHZTaK8NjOBY9k/IX0E51LayxjWxm2jJ1LJ8TTuSr/NYOpsnBDfmojgU9B3ZWAbvrtFwC6JkRJ0dR3YMx392TA9ky9MM/o/ItpZqOWWG64fDcEqNSUeIYPa+oLLlTyZy8aqwTJfLbV554x7G/U0vrd1H3H58GjANEuJAT7oHo94IcyQdmIgSXwlQtyXDEgbB" # forces replacement
      - tags            = {
          - "Description" = "Key pair used for SSH access"
        } -> null
      ~ tags_all        = {
          - "Description" = "Key pair used for SSH access"
        } -> (known after apply)
        # (1 unchanged attribute hidden)
    }
Plan: 2 to add, 0 to change, 1 to destroy.
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
So terraform apply would replace the key pair we created, which is not ideal (how would we get hold of the new private key?). The aws_key_pair documentation confirms this limitation when importing a key pair:
The AWS API does not include the public key in the response, so terraform apply will attempt to replace the key pair. There is currently no supported workaround for this limitation.
This brings me to the conclusion that if we want to provision an EC2 instance via Terraform, the best way to manage its SSH key pair is to create the keys on the local machine (with a third-party tool like ssh-keygen) and use the aws_key_pair resource type, rather than create them in the AWS Management Console.
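A minimal sketch of that locally-managed approach (the file paths and resource names here are hypothetical, not taken from the setup above):

```hcl
# Generate the key pair locally first, e.g.:
#   ssh-keygen -t rsa -b 4096 -f ./keys/ec2--my-app -N ""
# Terraform then only ever needs the public half of the key.
resource "aws_key_pair" "ec2--my-app" {
  key_name   = "key-pair--ec2--my-app"
  public_key = file("./keys/ec2--my-app.pub")
}
```

Because Terraform knows the public_key from the very first apply, the plan stays stable and no replacement is ever forced, and the private key never leaves your machine.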
Modern and safer way: using the import block
Introduced in Terraform v1.5, the import block is a declarative way to bring existing infrastructure under Terraform management. Unlike the legacy terraform import CLI command, which immediately modifies the state file without a preview, the import block allows you to review the import operation as part of your standard terraform plan workflow before applying any changes.
Core Syntax
The block requires two primary arguments:
- to: The resource address where you want to import the infrastructure in your Terraform configuration (e.g., aws_s3_bucket.example).
- id: The provider-specific unique identifier for the existing resource (e.g., a bucket name, an AWS ARN, or an Azure Resource ID).
import {
  to = aws_s3_bucket.my_existing_bucket
  id = "my-unique-bucket-name"
}
Key Features
- Automatic Config Generation: You can run terraform plan -generate-config-out=generated.tf to have Terraform automatically create the HCL resource blocks for the resources you are importing.
- Bulk Imports: You can include multiple import blocks in your configuration to bring in many resources at once.
- Iteration with for_each: Starting in Terraform v1.7, import blocks support for_each, allowing you to import collections of similar resources using a single block.
- Version Control: Because imports are now defined in code, they can be reviewed via Pull Requests and tracked in version control history.
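As a sketch of the for_each variant (requires Terraform v1.7+; the bucket names below are hypothetical):

```hcl
locals {
  # Hypothetical names of buckets that already exist in AWS.
  buckets = toset(["logs-bucket", "assets-bucket"])
}

# One import block covers the whole collection.
import {
  for_each = local.buckets
  to       = aws_s3_bucket.imported[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "imported" {
  for_each = local.buckets
  bucket   = each.key
}
```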
Standard Workflow
- Define the Block: Add an import block to your configuration file (often a temporary imports.tf).
- Generate or Write Config: Either manually write the matching resource block or use the -generate-config-out flag during a plan.
- Plan: Run terraform plan to preview the import. Terraform will show "1 to import" in the plan summary.
- Apply: Run terraform apply to execute the import and update your state file.
- Clean Up: Once the import is successful, it is a best practice to remove the import block from your code, as it is a one-time operation.
Import Block vs. CLI Command
Feature           | import block (modern)              | terraform import CLI (legacy)
------------------|------------------------------------|---------------------------------
Workflow          | Declarative (in code)              | Imperative (one-off command)
Preview           | Yes, via terraform plan            | No, modifies state immediately
Config generation | Built-in via -generate-config-out  | Manual (must write code first)
Safety            | High (plan/apply cycle)            | Low (easy to make state errors)
CI/CD             | Pipeline-friendly                  | Difficult to automate safely
Always use the import block method. Because it requires an apply to touch the state, it gives you a "safety buffer" where you can verify that your import isn't going to accidentally trigger a replace (destroy/create) due to a typo in your code.
Once the apply is finished and the resource is in your state, you should delete the import block from your code. If you leave it there, Terraform will simply ignore it on future runs because the resource is already in the state.
Example (automatic generation of the resource code): importing an S3 bucket
To import an existing S3 bucket using the declarative import block, you only need to provide the bucket's name as the unique identifier.
1. Create the Import Block
Add this to your Terraform configuration (e.g., in a file named imports.tf):
import {
  to = aws_s3_bucket.my_imported_bucket
  id = "your-actual-bucket-name" # The name as it appears in the AWS Console
}
2. Generate the Configuration Automatically
Instead of writing the resource "aws_s3_bucket" block manually, you can have Terraform generate it for you based on the live settings of the existing bucket.
Run the following command in your terminal:
terraform plan -generate-config-out=generated.tf
3. Review and Apply
- Check the new file: Open generated.tf. It will contain a full resource "aws_s3_bucket" "my_imported_bucket" block with all current settings (tags, versioning, etc.).
- Clean up: It is recommended to remove read-only or default attributes (like arn or hosted_zone_id) from the generated code before proceeding.
- Execute: Run terraform apply to finalize the import and record the bucket in your state file.
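After that cleanup, the trimmed generated.tf might end up looking something like this (the tags shown are hypothetical):

```hcl
resource "aws_s3_bucket" "my_imported_bucket" {
  bucket = "your-actual-bucket-name"

  # Keep only the attributes you actually want to manage;
  # read-only values like arn are dropped.
  tags = {
    Environment = "prod" # hypothetical tag
  }
}
```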
Why use this over the old way?
Unlike the legacy terraform import aws_s3_bucket.name bucket-name command, the import block allows you to:
- Preview the import in a standard terraform plan before any state changes occur.
- Avoid manual coding by using the -generate-config-out flag to capture complex existing configurations.
- Version control your imports, ensuring your entire team can see what is being brought into management.
Note: If you are importing a Directory Bucket (S3 Express One Zone), the id format must be the full name: [bucket_name]--[azid]--x-s3
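A hedged sketch of such an import, assuming the aws_s3_directory_bucket resource type from recent AWS provider versions (the bucket name and Availability Zone ID below are hypothetical):

```hcl
import {
  to = aws_s3_directory_bucket.example
  # Directory bucket names follow the [bucket_name]--[azid]--x-s3 format.
  id = "my-app-data--use1-az4--x-s3"
}
```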
Example of writing the resource code manually: importing an IAM Identity Center user
1. Create resources and import block
resource "aws_identitystore_user" "bojan_komazec" {
  identity_store_id = local.identity_store_id
  display_name      = "Bojan Komazec"
  user_name         = "bojan@example.com"

  name {
    given_name  = "Bojan"
    family_name = "Komazec"
  }

  emails {
    primary = true
    type    = "work"
    value   = "bojan@example.com"
  }
}

resource "aws_identitystore_group_membership" "bojan_komazec_in_devops" {
  identity_store_id = local.identity_store_id
  group_id          = aws_identitystore_group.devops.group_id
  member_id         = aws_identitystore_user.bojan_komazec.user_id
}

import {
  to = aws_identitystore_user.bojan_komazec
  id = "d-xxxxxxxxx/xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
}

import {
  to = aws_identitystore_group_membership.bojan_komazec_in_devops
  id = "d-xxxxxxxxxx/yyyyyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyy"
}
2. Execute terraform plan
Verify that it shows that NO resources will be replaced or deleted and that 2 resources will be imported:
Terraform will perform the following actions:

  # aws_identitystore_group_membership.bojan_komazec_in_devops will be imported
    resource "aws_identitystore_group_membership" "bojan_komazec_in_devops" {
        ...
    }

  # aws_identitystore_user.bojan_komazec will be imported
    resource "aws_identitystore_user" "bojan_komazec" {
        ...
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.
3. Execute terraform apply
4. Remove import blocks