
Thursday, 19 May 2022

Using AWS S3 as Terraform Backend

 

A Terraform backend is the place where Terraform stores its state file. By default this is local storage (a file on the local machine), but it can also be remote (AWS S3, Google Cloud Storage, etc.).

In Terraform State | My Public Notepad we discussed why it is better to use a remote Terraform backend than a local one or a version control system (e.g. a Git repository).
 
AWS-based remote backend comprises:
  • S3 bucket which stores TF state file
    • bucket name e.g. tf-state-bucket
    • key (of the stored resource) is the object path where the state file is stored e.g. path/to/terraform.tfstate
    • region e.g. eu-south-1
       
  • DynamoDB table which implements state locking and consistency checks
    • name e.g. tf-state-locking
    • this table must have a primary (hash) key named LockID, of type String
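
The two components above can themselves be provisioned with Terraform, from a separate configuration (the backend must exist before it can be used, so this bootstrap configuration keeps its own local state). A minimal sketch, reusing the example names from the list above:

```hcl
# S3 bucket that will hold the Terraform state file
resource "aws_s3_bucket" "tf_state" {
    bucket = "tf-state-bucket"
}

# DynamoDB table used for state locking;
# the hash key must be named LockID and be of type String
resource "aws_dynamodb_table" "tf_state_locking" {
    name         = "tf-state-locking"
    hash_key     = "LockID"
    billing_mode = "PAY_PER_REQUEST"

    attribute {
        name = "LockID"
        type = "S"
    }
}
```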
 
To configure a remote backend in Terraform, we need to use the terraform block in the configuration file. We already mentioned this block in Terraform Providers | My Public Notepad when we wanted to pin an exact plugin version. There we used the required_providers block; here, to specify the TF backend, we need to use the backend block:

main.tf:
 
resource "local_file" "foo" {
    filename = "/root/foo.txt"
    content = "This is the content of foo.txt."
}


It is a good practice to keep terraform block in a separate file e.g. terraform.tf:

terraform {
    backend "s3"  {
        bucket = "tf-state-bucket"
        key = "path/to/terraform.tfstate"
        region = "eu-south-1"
        dynamodb_table = "tf-state-locking"
    }
}

The "s3" backend block has three mandatory arguments: bucket, key and region. dynamodb_table is an optional argument; without it, state locking is disabled.


If we ran terraform init before switching to the remote backend, terraform apply will issue an error stating that backend reinitialization is required. We simply re-run terraform init, which migrates the pre-existing state from local storage to the new s3 backend (the state file is copied from the local disk into the S3 bucket). After this we can delete the local state file:

$ rm terraform.tfstate

Any future runs of terraform plan or terraform apply will use the state file stored remotely, in the S3 bucket. Pulling and pushing the terraform.tfstate file is automatic. Before each of these operations the state lock is acquired, and afterwards it is released. This preserves the integrity of the remotely stored state file.
 


Monday, 16 May 2022

Managing AWS DynamoDB using Terraform



Provisioning a new DynamoDB table


resource "aws_dynamodb_table" "mobile_phones" {
    name = "mobile_phones"
    hash_key = "IMEI"
    billing_mode = "PAY_PER_REQUEST"
    attribute {
        name = "IMEI"
        type = "N"
    }
}

hash_key is the table's primary key.
type = "N" means the data type is Number; "S" would mean String.
 
If we want to define more key attributes, we add more attribute blocks, e.g.
 
attribute {
    name = "Model"
    type = "S"
}
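
Note that in the AWS provider, every attribute block must correspond to an attribute that is actually used as a key or in an index; non-key attributes are defined implicitly when items are written. A hypothetical variation in which Model is a sort (range) key, so its attribute block is justified:

```hcl
resource "aws_dynamodb_table" "mobile_phones" {
    name         = "mobile_phones"
    hash_key     = "IMEI"   # partition key
    range_key    = "Model"  # sort key
    billing_mode = "PAY_PER_REQUEST"

    attribute {
        name = "IMEI"
        type = "N"
    }

    attribute {
        name = "Model"
        type = "S"
    }
}
```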

Adding new items to table

The item value needs to be in JSON format, and for each attribute we need to specify the data type (S for string, N for number) and the value. Note that in DynamoDB's JSON representation, number values are still passed as quoted strings.

resource "aws_dynamodb_table_item" "mobile_phone_item" {
    table_name = aws_dynamodb_table.mobile_phones.name
    hash_key = aws_dynamodb_table.mobile_phones.hash_key
    item = <<EOF
{
    "Manufacturer": {"S": "Samsung"},
    "Model": {"S": "S80"},
    "Year": {"N": "2017"},
    "IMEI": {"N": "45243582345234632048432"}
}
EOF
}

terraform apply inserts this item into the table. 

This approach is not used for inserting and managing large amounts of data.
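
If we still want to seed more than a handful of items from Terraform, one option is a single aws_dynamodb_table_item resource driven by for_each. A sketch, assuming the mobile_phones table from above (local names and values are illustrative):

```hcl
# Hypothetical seed data, keyed by IMEI (the table's hash key)
locals {
    phones = {
        "45243582345234632048432" = { manufacturer = "Samsung", model = "S80", year = "2017" }
        "75564582345234632048445" = { manufacturer = "Xiaomi", model = "X1", year = "2020" }
    }
}

# One table item per entry in local.phones
resource "aws_dynamodb_table_item" "phones" {
    for_each   = local.phones
    table_name = aws_dynamodb_table.mobile_phones.name
    hash_key   = aws_dynamodb_table.mobile_phones.hash_key
    item = jsonencode({
        IMEI         = { N = each.key }
        Manufacturer = { S = each.value.manufacturer }
        Model        = { S = each.value.model }
        Year         = { N = each.value.year }
    })
}
```

For genuinely large datasets, a dedicated import tool (e.g. the AWS CLI's batch-write-item) is a better fit than Terraform.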

AWS DynamoDB


 


DynamoDB:
  • provided and fully managed by AWS
  • highly scalable DB
  • low-latency access
    • single digit millisecond latency
  • high availability
    • data is automatically replicated across multiple availability zones (and optionally across regions, via global tables)
  • NoSQL DB
    • Data is stored in the form of key-value pairs and documents
    • schema-less DB that only requires a table name and primary key
      • table's primary key is made up of one or two attributes that uniquely identify items, partition the data and sort data within each partition

NoSQL DB Example: 
We want to store info about mobile phones. Each mobile phone (item) has attributes like Manufacturer, Model, Year, IMEI, etc. This data can be represented as:


{
    "Manufacturer": "Samsung",
    "Model": "S80",
    "Year": 2017,
    "IMEI": 45243582345234632048432
}

{
    "Manufacturer": "Xiaomi",
    "Model": "X1",
    "Year": 2020,
    "IMEI": 75564582345234632048445
}

Each item can be uniquely identified by its IMEI and so this attribute is used as Primary Key. When adding new items it is mandatory to provide the value for their primary key while the other attributes are optional and can have a null value.

Provisioning DynamoDB using AWS Management Console

After logging in to the AWS Management Console, go to Services >> Database >> DynamoDB and click the "Create table" button. This opens the Create DynamoDB Table page where we can type in:
  • Table name e.g. mobile_phones
  • Primary key
    • can be of type string, binary or number
    • e.g. imei (number type)
Once we press the "Create" button we get to the mobile_phones page, which contains all details about this table in the form of multiple tabs:
  • Overview
  • Items
    • Clicking the "Create item" button opens a modal dialog where we need to specify the value for the primary key (imei in our case). We can then perform Append/Insert/Remove actions on new/existing items. Only the value of the primary key is mandatory; we don't need to specify values for any other attributes.
    • We can search (Scan) the items by specifying a filter (via "Add filter")
  • Metrics
  • Alarms
  • Capacity
  • Indexes
  • Global Tables
  • Backups
  • Contributor Insights
  • Triggers
  • Access Control
  • Tags