Thursday, 18 May 2023

My PHP notes

 


I've never worked with PHP professionally, but I had to test some code, so here are a few snippets I ran in tehplayground...


Types

Boolean

Boolean false implicitly converts to an empty string when printed. To get the string 'false' when a boolean variable is false, we need to do the conversion manually:

print($myarray_is_empty ? 'true' : 'false');


Arrays


<?php
// example code

// $welcome = file_get_contents('/content/welcome');

// empty array
$my_array = array();

if ($my_array == NULL) {
    print "\$my_array == NULL\n";
}

if ($my_array === NULL) {
    print "\$my_array === NULL\n";
}

if (is_null($my_array)) {
    print "is_null(\$my_array) is true\n";
}

print "my_array = {$my_array}\n"; // line 21

print "var_dump(\$my_array) = \n";
var_dump($my_array);

if (in_array("test", $my_array)) {
    echo "test is in array";
}

if (!in_array("test", $my_array)) {
    echo "test is not in array";
}

print "Unsetting \$my_array";
unset($my_array);

if ($my_array == NULL) {
    print "\$my_array == NULL\n";
}

if ($my_array === NULL) { // line 44
    print "\$my_array === NULL\n";
}

if (in_array("test", $my_array)) { // line 48
    echo "test is in array";
}

?>

 

Output:

 

$my_array == NULL

Warning: Array to string conversion in Standard input code on line 21
my_array = Array
var_dump($my_array) =
array(0) {
}
test is not in arrayUnsetting $my_array
Warning: Undefined variable $my_array in Standard input code on line 40
$my_array == NULL

Warning: Undefined variable $my_array in Standard input code on line 44
$my_array === NULL

Warning: Undefined variable $my_array in Standard input code on line 48

Fatal error: Uncaught TypeError: in_array(): Argument #2 ($haystack) must be of type array, null given in Standard input code:48
Stack trace:
#0 Standard input code(48): in_array('test', NULL)
#1 {main}
  thrown in Standard input code on line 48 

 

What happens if foreach is used on an undefined variable ($myarray):

foreach($myarray as $element) {
   print $element;
}



Output:

Warning: Undefined variable $myarray in Standard input code on line 8

Warning: foreach() argument must be of type array|object, null given in Standard input code on line 8


if ($myarray) { // 12
    print $myarray;
}

Output:

Warning: Undefined variable $myarray in Standard input code on line 12


if ($myarray == null) { // 16

    echo '$myarray == null';
}

 

Output:

Warning: Undefined variable $myarray in Standard input code on line 16
 

 

 

PHP array_diff() Function



The empty() function returns true for an undeclared variable, a null value, and an empty array, and (unlike a direct comparison) it does not emit an Undefined variable warning:

if (empty($not_declared_array)) {
    echo("not_declared_array is empty.\n");
} else {
    echo("not_declared_array is not empty.\n");
}

$null_array = null;

if (empty($null_array)) {
    echo("null_array is empty.\n");
} else {
    echo("null_array is not empty.\n");
}

$array = [];

if (empty($array)) {
    echo("array is empty.\n");
} else {
    echo("array is not empty.\n");
}




Output:

not_declared_array is empty.
null_array is empty.
array is empty.


RegEx

PHP: preg_match - Manual

PHP preg_match(): Regular Expressions (Regex)

 

$input = '--parameter-1=3 --parameter-2=bing.com --parameter-3=0';
preg_match('~parameter-1=(.*?) ~', $input, $output);
echo var_dump($output[1]);

preg_match('~parameter-2=(.*?) ~', $input, $output);
echo var_dump($output[1]);

preg_match('~parameter-3=(.*?)$~', $input, $output);
echo var_dump($output[1]);

Output:
 
string(1) "3"
string(8) "bing.com"
string(1) "0"

---

Friday, 10 March 2023

AWS EFS with Terraform







resource "aws_efs_file_system" "my-app-data-efs" {
  tags = {
    Name = "my-app-data-efs"
  }
}



In AWS Console, we can go to Amazon EFS >> File systems and verify that it's created. Its attributes are:

Name: my-app-data-efs
File system ID: fs-1d130ce4a92769f59
Encrypted: Unencrypted
Total size: 6.00 KiB
Size in Standard / One Zone: 6.00 KiB    
Size in Standard-IA / One Zone-IA: 0 Bytes
Provisioned Throughput (MiB/s):    -     
File system state: Available
Creation time: Thu, 09 Mar 2023 10:41:55 GMT
Availability Zone: Standard
 
Performance mode: General Purpose
Throughput mode: Bursting
Lifecycle management:
Transition into IA: None
Transition out of IA: None
Availability zone: Standard
Automatic backups: Disabled
Encrypted: No
File system state: Available
DNS name: No mount targets available
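
Several of these defaults (Encrypted: No, Lifecycle management: None, Bursting throughput) can also be set explicitly on the Terraform resource. A hedged sketch of what that might look like; these argument values are illustrative and are not what was applied above:

resource "aws_efs_file_system" "my-app-data-efs" {
  encrypted        = true               # encryption at rest (the default is false)
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"  # move files not accessed for 30 days to EFS IA
  }

  tags = {
    Name = "my-app-data-efs"
  }
}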

 
It will have no Access points and no Mount targets defined:



 
To provide a mount target, we need to use aws_efs_mount_target | Resources | hashicorp/aws | Terraform Registry. The required attributes are the EFS file system (for which we want to create the mount target) and the subnet (in which we want this mount target to be):

resource "aws_efs_mount_target" "my-app-data-efs-mt" {
  file_system_id = aws_efs_file_system.my-app-data-efs.id
  subnet_id = "subnet-14321c874d6d35c6a"
}


terraform plan output:

Terraform will perform the following actions:

  # aws_efs_mount_target.my-app-data-efs-mt will be created
  + resource "aws_efs_mount_target" "my-app-data-efs-mt" {
      + availability_zone_id   = (known after apply)
      + availability_zone_name = (known after apply)
      + dns_name               = (known after apply)
      + file_system_arn        = (known after apply)
      + file_system_id         = "fs-1d130ce4a92769f59"
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + mount_target_dns_name  = (known after apply)
      + network_interface_id   = (known after apply)
      + owner_id               = (known after apply)
      + security_groups        = (known after apply)
      + subnet_id              = "subnet-14321c874d6d35c6a"
    }

Plan: 1 to add, 0 to change, 0 to destroy.


After applying this change, we can check the EFS Network settings again, where we'll see that the mount target is now available:

 
 
The next step will be mounting the EFS onto an EC2 instance.
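
Before the instance can actually talk to the file system, the mount target also needs to allow inbound NFS traffic (TCP port 2049) from the EC2 instances. A hedged sketch of the mount target from above, extended with a dedicated security group; the VPC ID and CIDR block are placeholders:

resource "aws_security_group" "efs-nfs-sg" {
  name   = "efs-nfs-sg"
  vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

  ingress {
    description = "NFS from within the VPC"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["172.31.0.0/16"]  # placeholder VPC CIDR block
  }
}

resource "aws_efs_mount_target" "my-app-data-efs-mt" {
  file_system_id  = aws_efs_file_system.my-app-data-efs.id
  subnet_id       = "subnet-14321c874d6d35c6a"
  security_groups = [aws_security_group.efs-nfs-sg.id]
}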
 

Resources:

 

File System Performance Metrics

image source: https://www.dnsstuff.com/latency-throughput-bandwidth

 

 
File system performance is measured by:
  • Latency
    • delay between request and response
    • a measure of the length of time it takes for a single I/O request to be completed from the application's point of view
    • measured separately for read (usually in microseconds) and write (usually in milliseconds) operations
      • If the I/O is a data read, latency is the time it takes for the data to come back. If the I/O is a write, latency is the time for the write acknowledgement to return.
    • affects application's acceleration
  • Throughput / Bandwidth
    • measures how many units of information a system can process in a period of time
    • describes the amount of data able to flow through a point in the data path over a given time
    • throughput and latency are often competing goals - tuning for maximum throughput (e.g. larger I/Os, deeper queues) tends to increase latency, while tuning for the lowest latency tends to limit the achievable throughput
    • measured separately for file system read (usually in GiBps) and file system write (usually in MiBps) operations 
    • typically the best storage metric when measuring data that needs to be streamed rapidly, such as images and video files.
  • Input/Output operations per second (IOPS)
    • number of I/O operations per second
    • measured separately for read and write operations
    • as the number of IOPS requested from the device increases, the latency will also increase
    • affects application's scalability
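
The three metrics are related: for a given average I/O size, throughput is roughly IOPS multiplied by the I/O size (a rule of thumb that ignores queuing effects). For example, 4,000 IOPS at an average I/O size of 256 KiB works out to roughly 1,000 MiB/s of throughput.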

Thursday, 9 March 2023

Amazon Elastic File System (EFS)

 
 

 
Amazon Elastic File System (EFS) is:
  • cloud-native data store
  • shared file storage - can be accessed by multiple computers at the same time
    • can be made available to VPC
      • EC2 instances can then securely mount EFS to store and access data
      • applications running on multiple EC2 instances can access the EFS at the same time
    • EFS can also be mounted on on-premises data center servers when connected to Amazon VPC with AWS Direct Connect or VPN making it easy to:
      • migrate data to EFS
      • enable cloud bursting
      • back up on-premises data to EFS
  • supports low latency applications and also highly-parallelized scale out jobs requiring high throughput (read here what's the difference between latency and throughput: File System Performance Metrics | My Public Notepad)
  • high throughput
    • throughput for a file system scales automatically as capacity grows
    • for workloads with high throughput and low capacity requirements, throughput can be provisioned independent of capacity 
  • there are 2 storage classes: 
    • Standard
    • EFS IA (Infrequent Access) - for less frequently accessed data we can configure EFS to store data in a cost-optimized IA storage class
      • Lifecycle Management automatically and transparently moves files that are accessed less frequently to EFS IA
  • has 2 performance modes so we can tailor EFS to our application needs
    • General Purpose
    • Max I/O
 

Benefits of using EFS

  • file storage system which is:
    • simple - supports the Network File System (NFS) versions 4.0 and 4.1 (NFSv4) protocol. This means that computers can access files on EFS by using standard file system tools and interfaces provided by the OS. This is why nfs is specified as the filesystem type when using the mount command to mount EFS on an EC2 instance (mount -t nfs ...).
    • serverless - no need to provision infrastructure
    • scalable performance - lifecycle management
    • elastic - automatically grow or shrink as we add/remove files
      • can grow to petabytes (PB)
  • fully managed - no need to manage it
  • easy to set up via AWS Management Console, API or CLI
    • "set and forget"
  • cost-effective data store: you pay for the storage you use
  • access data securely, via existing AWS security infrastructure (IAM)
EFS symbol

Drawbacks of EFS

  • supports Linux only (it doesn't support Windows)

 

When to use EFS?

  • when thousands of EC2 instances from multiple availability zones or on-premises servers need to access data concurrently
    • EFS provides concurrent access for tens of thousands of connections for EC2 instances, containers and lambda functions
  • designed for high availability and durability, for storing data redundantly across multiple (3) availability zones
  • ideal for machine learning, analytics, web serving, content management, media storage, DB backups

How to create EFS?


In AWS Console, go to EFS and click on Create file system.


 
 We can then set:
  • Name of our file system
  • VPC where we want EC2 instances to connect to our file system
  • Storage class [EFS storage classes - Amazon Elastic File System]
    • Standard (AWS used to name this Regional) - Stores data redundantly across multiple AZs (recommended)
    • One Zone - Stores data redundantly within a single AZ
      • we need to select desired availability zone

 


 

We can customize File system settings:

 
 



Note that by default Lifecycle management is set so that files that haven't been accessed for 30 days are automatically transferred from Standard to Standard-Infrequent Access storage (which is cheaper, so this is a cost-saving measure).


We can then customize Network access:

Note that EFS is an entity connected to a network. EFS is assigned an IP address in each availability zone. It is the mount target that provides an IP address for an NFSv4 endpoint at which we can mount an Amazon EFS file system.
 
So a mount target provides a network interface (in the selected subnet in the AZ) for the EFS mounted at it.
 
When the mount target state is Available, our EFS file system is reachable through the mount target and can be referred to via its DNS name (or IP address).

This does not yet mean it is accessible from EC2 instances. We still need to mount the EFS onto the EC2 instance. For that we need to specify a mount point (the local directory on the client where the EFS file system is mounted and accessible). This is one of the settings that can be configured when launching EC2 from the AWS Console (the right-hand value in the File system setting).

We can create a security group for EFS and use it everywhere - for each subnet/AZ. This security group can allow e.g. TCP traffic from anywhere (or, more restrictively, only NFS traffic on TCP port 2049).

Finally, we can customize File system policy:
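
The file system policy is an IAM resource policy attached to the file system. Purely as an illustration (not something applied in these notes), a common example is denying unencrypted (non-TLS) client connections; a hedged Terraform sketch, reusing the my-app-data-efs resource from the Terraform notes above:

resource "aws_efs_file_system_policy" "my-app-data-efs-policy" {
  file_system_id = aws_efs_file_system.my-app-data-efs.id

  # Deny any client that does not use TLS (enforces encryption in transit)
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyUnencryptedTransport"
        Effect    = "Deny"
        Principal = { AWS = "*" }
        Action    = "*"
        Resource  = aws_efs_file_system.my-app-data-efs.arn
        Condition = { Bool = { "aws:SecureTransport" = "false" } }
      }
    ]
  })
}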


 

Once EFS is created, it will take some more time for network interfaces to be created.
 

 

How to mount EFS on EC2 instance?

 
When creating EC2, we can select our EFS when setting File systems:
 

 
We also need to add EFS security group to the list of security groups used by this EC2 instance.

Once our EC2 instance is up and running, we can SSH to it and check mounted file systems with df tool:
 
$ df -T -h
 
-T - display file system types (Type column)
-h - display information about disk drives in human-readable format (kilobytes, megabytes, gigabytes and so on)
 




Resources:

 
 
 
 
 
 

Wednesday, 8 March 2023

AWS EC2 Auto Scaling with Terraform

 


aws_autoscaling_group | Resources | hashicorp/aws | Terraform Registry

The minimum implementation that will pass terraform plan checks is:

resource "aws_autoscaling_group" "my_app" {
  min_size = 1
  max_size = 1
}

terraform plan output:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_autoscaling_group.my_app will be created
  + resource "aws_autoscaling_group" "my_app" {
      + arn                       = (known after apply)
      + availability_zones        = (known after apply)
      + default_cooldown          = (known after apply)
      + desired_capacity          = (known after apply)
      + force_delete              = false
      + force_delete_warm_pool    = false
      + health_check_grace_period = 300
      + health_check_type         = (known after apply)
      + id                        = (known after apply)
      + max_size                  = 1
      + metrics_granularity       = "1Minute"
      + min_size                  = 1
      + name                      = (known after apply)
      + protect_from_scale_in     = false
      + service_linked_role_arn   = (known after apply)
      + vpc_zone_identifier       = (known after apply)
      + wait_for_capacity_timeout = "10m"
    }

 

If we try to run terraform apply, we'll get the following error:

Error: One of `launch_configuration`, `launch_template`, or `mixed_instances_policy` must be set for an Auto Scaling Group 

 

Using Launch Configuration for defining EC2

Let's use a launch configuration (despite AWS discouraging the use of launch configurations in favour of launch templates; an example with a launch template is further down in this article).

We need to know the ID of the AMI we want to use. We'll choose the latest Amazon Linux 2 image and the t2.micro instance type, which is free-tier eligible.



If we select it, the next page will show its ID:



Terraform resource we'll use is aws_launch_configuration | Resources | hashicorp/aws | Terraform Registry.

 

# EC2 >> Launch configurations
resource "aws_launch_configuration" "my-app" {
  name          = "my-app"
  image_id      = "ami-006dcf34c09e50022"
  instance_type = "t2.micro"
}

We can now update our auto scaling group:

resource "aws_autoscaling_group" "my-app" {
  min_size = 1
  max_size = 1
  name = "my-app"
  launch_configuration = aws_launch_configuration.my-app.name
}

 

terraform apply still complains:

Error: Error creating Auto Scaling Group: ValidationError: At least one Availability Zone or VPC Subnet is required.
        status code: 400, request id: ad34ea76-a6d5-419a-bc48-0ffb15b4e76f

 

Let's define the subnet which we want our instances to be launched into:

resource "aws_autoscaling_group" "my-app" {
  min_size = 1
  max_size = 1
  name = "my-app"
  launch_configuration = aws_launch_configuration.my-app.name
  vpc_zone_identifier = [ "subnet-14321c874d6d35c6a" ]
}

terraform apply will now create the auto scaling group together with the launch configuration. This can be verified by looking at EC2 >> Auto Scaling groups and EC2 >> Launch configurations. Most importantly, the auto scaling group will launch a new EC2 instance in the subnet we specified in the configuration. This instance can be found in EC2 >> Instances.

 

Using Launch Template for defining EC2

AWS discourages the use of launch configurations in favour of launch templates.

Terraform resource is aws_launch_template | Resources | hashicorp/aws | Terraform Registry. Its description says:

Provides an EC2 launch template resource. Can be used to create instances or auto scaling groups.

Here are the key differences between launch templates (LT) and launch configurations (LC):

  • LT have more EC2 options than LC
  • LT are getting latest features from Amazon EC2
  • LC are still supported but are not getting the latest EC2 features
  • LC is immutable (resource can't be edited; if we want to change it, we need to destroy it first and then re-create it)
  • LT can be edited and updated
  • LT can have multiple versions which allows creation of parameter subsets (With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions. - Partial configuration for reuse and inheritance)
  • LT allows using T2 unlimited burst credit option
  • LT allows provisioning using both On-demand and Spot Instances.
  • LT can be used to launch a standalone instance using AWS Console, SDK and CLI.
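
A minimal Terraform sketch of the launch template variant, assuming the same AMI and subnet as in the launch configuration example above (a sketch only, not verified with terraform apply here):

resource "aws_launch_template" "my-app" {
  name          = "my-app"
  image_id      = "ami-006dcf34c09e50022"
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "my-app" {
  min_size            = 1
  max_size            = 1
  name                = "my-app"
  vpc_zone_identifier = [ "subnet-14321c874d6d35c6a" ]

  launch_template {
    id      = aws_launch_template.my-app.id
    version = "$Latest"  # always use the latest version of the template
  }
}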


...

 

---

Monday, 27 February 2023

AWS NAT Gateway

 


What is NAT?

From AWS documentation:

A Network Address Translation (NAT) gateway is a device that forwards traffic from private subnets to other networks.

There are two types of NAT gateways:

  • Public: Instances in private subnets can connect to the internet but cannot receive unsolicited inbound connections from the internet.
  • Private: Instances in private subnets can connect to other VPCs or your on-premises network.

Each private or public NAT gateway must have a private IPv4 address assigned to it. Each public NAT gateway must also have an elastic IP (EIP) address (which is a static public address associated with your AWS account) associated with it. Choosing a private IPv4 address is optional. If you don't choose a private IPv4 address, one will be automatically assigned to your NAT gateway at random from the subnet that your NAT gateway is in. You can configure a custom private IPv4 address in Additional settings.

After you create the NAT gateway, you must update the route table that’s associated with the subnet you chose for the NAT gateway. If you create a public NAT gateway, you must add a route to the route table that directs traffic destined for the internet to the NAT gateway. If you create a private NAT gateway, you must add a route to the route table that directs traffic destined for another VPC or your on-premises network to the NAT gateway.
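
Keeping with the Terraform style of the other notes here, a public NAT gateway plus the route table update might be sketched like this (the subnet and route table IDs are placeholders):

# Elastic IP for the public NAT gateway
resource "aws_eip" "nat" {
  domain = "vpc"  # on older AWS provider versions this was `vpc = true`
}

# The NAT gateway itself lives in a public subnet
resource "aws_nat_gateway" "my-nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = "subnet-0aaaaaaaaaaaaaaaa"  # placeholder public subnet ID
}

# Route internet-bound traffic from the private subnet's route table via the NAT gateway
resource "aws_route" "private-to-internet" {
  route_table_id         = "rtb-0bbbbbbbbbbbbbbbb"  # placeholder private route table ID
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.my-nat.id
}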

 

When to use NAT?


From AWS documentation:

The instances in the public subnet can send outbound traffic directly to the internet, whereas the instances in the private subnet can't. Instead, the instances in the private subnet can access the internet by using a network address translation (NAT) gateway that resides in the public subnet. The database servers can connect to the internet for software updates using the NAT gateway, but the internet cannot establish connections to the database servers.

 

Note that NAT is required if instances in a private subnet need to send a request (initiate a new connection) to a host on the Internet. If the request reached the private instance from outside (via an Application Load Balancer, for example), then NAT is not required for the response. See: amazon web services - Can a EC2 in the private subnet sends traffic to the internet through ELB without using NAT gateway/instance? - Server Fault

 

How to create NAT?


 

Private NAT gateway traffic can't reach the internet.
 
 
From AWS documentation about Additional settings:
 
When assigning private IPv4 addresses to a NAT gateway, choose how you want to assign them:

  • Auto-assign: AWS automatically chooses a primary private IPv4 address and you choose if you want AWS to assign up to 7 secondary private IPv4 addresses to assign to the NAT gateway. AWS automatically chooses and assigns them for you at random from the subnet that your NAT gateway is in.
  • Custom: Choose the primary private IPv4 address and up to 7 secondary private IPv4 addresses to assign to the NAT gateway.
You can assign up to 8 private IPv4 addresses to your private NAT gateway. The first IPv4 address that you assign will be the primary IPv4 address, and any additional addresses will be considered secondary IPv4 addresses. Choosing private IPv4 addresses is optional. If you don't choose a private IPv4 address, one will be automatically assigned to your NAT gateway. You can configure custom private IPv4 addresses in Additional settings.
Secondary IPv4 addresses are optional and should be assigned or allocated when your workloads that use a NAT gateway exceed 55,000 concurrent connections to a single destination (the same destination IP, destination port, and protocol). Secondary IPv4 addresses increase the number of available ports, and therefore they increase the limit on the number of concurrent connections that your workloads can establish using a NAT gateway.

You can use the NAT gateway CloudWatch metrics ErrorPortAllocation and PacketsDropCount to determine if your NAT gateway is generating port allocation errors or dropping packets. To resolve this issue, add secondary IPv4 addresses to your NAT gateway.

 
Here are some typical architectures that include NAT:
 
Source: https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html

 
 

How to associate instances in private subnets with NATs?

 
The following diagrams show how routing tables are used to associate instances running in private subnets with a NAT gateway created in a public subnet, thus allowing outbound traffic to the Internet.
 
Source: https://www.packetswitch.co.uk/content/images/2020/06/Ghost-3-x-NAT-Gateway.png

 
 
Source: https://serverfault.com/questions/854475/aws-nat-gateway-in-public-subnet-why



 
Source: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html

 
 

References:

 

Thursday, 16 February 2023

DNS (Domain Name System)

 

 
 
 

DNS (Domain Name System)

  • a protocol, part of the Internet Protocol (IP) Suite
  • hierarchical and distributed naming system for computers, services, and other resources in the Internet
  • naming database in which internet domain names are located and translated into Internet Protocol (IP) addresses 
  • translates domain names to IP addresses so browsers can load Internet resources
  • helps Internet users and network devices discover websites using human-readable host names, instead of numeric IP addresses; For humans, domain names are a lot easier to remember than a sequence of numbers.
  • DNS configuration settings of some website are what allows visitors to still access that website even after it gets moved to a new hosting provider (its IP address will change but domain name will not)
 
For a hosted web site we need to specify (usually 2) DNS servers. These could be provided by the hosting provider, but we can specify custom ones, e.g. Cloudflare DNS servers. These DNS servers will be nodes in the distributed DNS database system, providing DNS records about our domains to whoever queries for them. Let's see which DNS records we can set.

Common DNS Records

  • A (A record, Address record,  IPv4 address record)
    • maps a domain name to an IPv4 address
    • used to point the domain name at one or multiple IP addresses
    • also referred to as a host or hostname
  • AAAA (IPv6 address record) maps domain name to IPv6 address
  • CNAME (Canonical Name record)
    • used to create an alias from one hostname to another
    • maps one domain name (an alias) to another (the canonical name)
    • Example: example.com has an A record which points to the IP address. If we say "www.example.com is a CNAME to example.com" and "ftp.example.com is a CNAME to example.com" that means that someone accessing www.example.com or ftp.example.com will be pointed to the same IP address that example.com points to. This is useful so that when your IP address changes, you only have to update example.com’s entry (DNS A record for example.com), and www.example.com and ftp.example.com automatically point to the right place.
    • If you already have an A record, you will not use a CNAME
    • CNAME record tells anyone visiting a subdomain to also use the same DNS records as another domain or subdomain. 
    • This sort of thing is convenient when running multiple services from a single IP address (e.g. FTP server and web server share the same IP address but different port)
    • CNAME records only work for subdomains and must always point to another domain or subdomain and never directly to an IP address.
    • When a DNS resolver encounters a CNAME record while looking for a regular resource record, it will restart the query using the canonical name instead of the original name.
  • MX (Mail eXchanger)
    • allows you to control the delivery of mail for a given domain or subdomain. In our context, MX records can be set on a host-by-host basis to point to other hosts on the Internet (usually with permanent connections) that are set up to accept and/or route mail for your hostname(s). Setting a backup MX makes the entry you specify a secondary mail exchanger. This means that delivery will be attempted to your host first, and then to the backup host you specify if that fails.
  • TXT (TXT records) 
    • used to store information. Common uses include SPF, DKIM, etc.

 
It can take up to 72 hours for new DNS records to take effect - while the change is replicated across all DNS servers on the internet. (see DNS Propagation)
 
The network of DNS servers is hierarchical. Types of DNS servers are:
  • Recursive resolvers (DNS recursors)
    • clients send their DNS queries to them first
    • they are assigned by ISP but can be set manually:
      • Cloudflare 1.1.1.1
      • Google (8.8.8.8 and 8.8.4.4)
    • they respond either with cached data or send the request to a root nameserver, then a TLD nameserver, and finally to the Authoritative nameserver, from which they receive the IP address
    • every recursive resolver knows about 13 (types of) DNS root nameservers
  • Root nameservers
    • when they receive a query about some domain name, e.g. example.com, they return the address of the TLD nameserver which contains information about the domain extension, e.g. .com
    • there are over 600 root nameservers which sync among themselves (anycast routing) and all contain the same data
  • TLD (Top-Level Domain) nameservers
    • they are domain extension-specific - each of them contains the list of authoritative servers for a single domain extension, e.g. .com or .ai
    • they return the address of the authoritative servers
    • Larger TLDs and registrars (like GoDaddy, Namecheap, etc.) use an API call to notify the TLD operator of any new registrations and changes
  • Authoritative nameserver
    • resolver’s last step in the journey for an IP address
    • they are domain-specific - each of them contains the list of IP addresses for a particular domain, e.g. ftp.example.com or www.example.com
    • they return the IP address for a given hostname or, if the domain has a CNAME (alias domain name), the resolver needs to repeat the whole process in order to get the IP address for that alias host name.
    • when you register your web site, the name servers you set for it are the authoritative nameservers
 

Managing the DNS records of a domain, e.g. example.com, means setting DNS records for its root and its subdomains (a Terraform sketch follows the list below). For each record we set:
  • Type: A, CNAME, MX, ...
  • Name: e.g. ftp (for ftp.example.com)
  • Content: this is the value which depends on the type e.g. IPv4 address if A record, alias if CNAME etc....
  • Proxy status: DNS only (proxy disabled) or Proxied (proxy enabled)
  • TTL (Time to Live) - in minutes
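
As a concrete illustration (an assumption for this sketch, not something from these notes): if the zone were hosted on AWS Route 53, the A and CNAME records described above could be managed with Terraform roughly like this. The zone ID and IP address are placeholders, and the Proxy status field is Cloudflare-specific, so it has no equivalent here:

resource "aws_route53_record" "root" {
  zone_id = "Z0123456789ABCDEFGHIJ"  # placeholder hosted zone ID
  name    = "example.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]         # placeholder IPv4 address
}

resource "aws_route53_record" "www" {
  zone_id = "Z0123456789ABCDEFGHIJ"
  name    = "www.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["example.com"]          # the canonical name
}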

Resources:


Tuesday, 7 February 2023

AWS Security Groups

AWS Security Groups control the inbound and outbound traffic for various AWS resources: 
  • EC2 instance
    • running applications e.g. web server
    • running as DNS server
  • RDS - Database server
  • EFS file system
  • Elastic Load Balancer
  • VPC peering rules

 

Security groups are VPC-specific (and therefore region-specific). They can only be used within the VPC they are created in. The exception is when there is a peering connection to another VPC, in which case they can be referenced in the peered VPC.


For Security Group we can set:

  • Name
  • Description
  • VPC. A VPC is region-specific, so the security group is too.
  • Inbound rules
  • Outbound rules

For Security Group Rule (Inbound or Outbound) we can set:
  • Type. The protocol to open to network traffic. You can choose a common protocol, such as SSH (for a Linux instance), RDP (for a Windows instance), and HTTP and HTTPS to allow Internet traffic to reach your instance. You can also manually enter a custom port or port ranges.
  • Protocol. The type of protocol, for example TCP or UDP. Provides an additional selection for ICMP.
  • Port range. For custom rules and protocols, you can manually enter a port number or a port range.
  • Source. Determines the traffic that can reach your instance. Specify a single IP address, or an IP address range in CIDR notation (for example, 203.0.113.5/32). If connecting from behind a firewall, you'll need the IP address range used by the client computers. You can specify the name or ID of another security group in the same region. To specify a security group in another AWS account (EC2-Classic only), prefix it with the account ID and a forward slash, for example: 111122223333/OtherSecurityGroup.
  • Description. A description for a security group rule.
    A description can be up to 255 characters in length.
    Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*.


Example: we have a Node.js application that is receiving traffic on port 8080, only from a Load Balancer that is on the same VPC. This means we need to create an Inbound rule:

  • Type: Custom TCP
  • Protocol: TCP
  • Port range: 8080
  • Source: Custom; CIDR block: 172.0.0.0/16 (in our example we're using a default VPC so we'll put here its private IP address block thus allowing only access from the private network)

 

Terraform Security Group resource

aws_security_group | Resources | hashicorp/aws | Terraform Registry
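
A hedged sketch of the Node.js inbound rule described above, expressed with this resource and an inline ingress rule (the VPC ID is a placeholder):

resource "aws_security_group" "my_app_sg" {
  name        = "my-app-sg"
  description = "Allow app traffic on port 8080 from within the VPC"
  vpc_id      = "vpc-0123456789abcdef0"  # placeholder VPC ID

  ingress {
    description = "App port, private network only"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["172.0.0.0/16"]  # the CIDR block from the example above
  }
}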

 

Terraform Security Group Rule resource

It represents a single ingress or egress group rule, which can be added to external Security Groups: 

aws_security_group_rule | Resources | hashicorp/aws | Terraform Registry

Required arguments: 

  • from_port: start port
  • to_port: end port
  • protocol
    • icmp
    • icmpv6
    • tcp
    • udp
    • all
  • security_group_id: Security group to apply this rule to.
  • type
    • ingress (inbound)
    • egress (outbound) 
Optional arguments:
  • self. Whether the security group itself will be added as a source to this ingress rule.
  • source_security_group_id.  Security group id to allow access to/from, depending on the type.
  • ...


Because the security group rule gets attached to a security group, the rule must be provisioned after that security group. Referencing aws_security_group.my_ec2_sg.id already gives Terraform an implicit dependency; the depends_on meta-argument below just makes the ordering explicit:

resource "aws_security_group_rule" "my_ec2_ssh" {
  type            = "ingress"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  cidr_blocks = var.ssh_ip_range
  security_group_id = aws_security_group.my_ec2_sg.id
  depends_on = [aws_security_group.my_ec2_sg]
}


Resources:

Security group rules for different use cases - Amazon Elastic Compute Cloud

Monday, 6 February 2023

AWS EC2: Auto Scaling




If our application is getting more traffic, we need to scale it: we need to create more virtual machines to run it so it can handle the load, and we need a load balancer to distribute the network traffic to these instances. Instead of doing this manually (creating new instances and e.g. using NGINX as a load balancer), we can use an AWS Auto Scaling Group and an AWS Application Load Balancer.
  • Automatically maintains application performance based on the user requirement at the lowest possible price
  • Service which helps user to monitor applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost
  • Benefits:
    • better fault tolerance
    • better cost management
    • reliability of your service
    • scalability
    • flexibility - changes can be made on the fly
  • Snapshot vs AMI
    • Snapshot
      • used as a backup of a single EBS volume attached to the EC2 instance
      • opt for it when the instance contains multiple static EBS volumes
      • pay only for the storage of the modified data
      • a non-bootable image on EBS volume
    • AMI
      • used as a backup of an EC2 instance
      • widely used to replace a failed EC2 instance
      • pay only for the storage that you use
      • bootable image on EC2 instance
      • creating an AMI image will also create EBS snapshots


How does AWS auto scaling work?

  • Configure single unified scaling policy per application source
  • explore the application
  • choose the service you want to scale 
  • select what to optimize e.g. cost or performance
  • keep track of scaling

Different scaling plans

  • scaling plan helps user to configure a set of instructions for scaling based on software requirement

Launch Template


Let's first explore a tool that can save time when scaling (creating multiple EC2 instances) manually.

How to set up a Launch Template?

EC2 >> Instances >> Launch Template:


We can set:

  • Name
  • Version description
  • AMI. We can create an AMI for each version of our application and name them e.g. ami-my-app-v1, ami-my-app-v2, etc. In the same way, we can create a new version of the launch template and bind the desired AMI (application) version to it.
  • Instance type e.g. t2.micro 
  • Key pair (for secure connection to the instance)
  • Network settings
    • Launch into:
      • VPC
      • Shared network
    • Security group
  • Storage (volumes e.g. Volume 1(AMI Root, 8GB, EBS, General Purpose SSD))
  • Resource tags
  • Network interfaces
  • User data

User data example for setting an environment variable in our application's env config file:

#!/bin/bash -ex
sudo -u ec2-user bash -c 'echo "MY_ENV_VAR=\"My env var value, set from the template\"" > /home/ec2-user/my-app.env'
systemctl restart --now --no-block version-my-app.service

 

Shebang arguments explained:

-e Exit immediately if a command exits with a non-zero status.
-x Print commands and their arguments as they are executed.

 

Launch templates are versioned but we can't manually set the version number. AWS does it automatically by incrementing numbers from version 1.

Launch templates can be used outside Auto-scaling or Load Balancing: whenever we want to launch the instance, we don't need to manually fill details about the new instance, we can just use Launch instance from template.

In our scenario though, we want auto-scaling group to use launch template in order to launch EC2 instances. 

 

Auto scaling group

 

Auto scaling group manages how many EC2 instances will be running in parallel.

How to set up an Auto scaling group?

EC2 >> Auto scaling groups >> Create Auto scaling group

 


  • Choose launch template or configuration
    • Auto scaling group name
    • Launch template. We can select it and also select:
      • Launch template version (we can use this to select the version of our application that we want to be running on these new instances)
  • Choose instance launch options
    • Network
      • VPC
      • Availability Zones and subnets (in the form az_name|subnet_name); We want to list here all of them so we have a large pool of AZs in case something happens to EC2 instances running in some of them. 
    • Instance type requirements - we can override launch template here
  • Configure advanced options
    • Load balancing (optional)
      • No Load Balancer - traffic to auto scaling group will not be fronted by a load balancer
      • Attach to an existing load balancer
        • We'll specify here target groups actually (not load balancer directly)
      • Attach to a new load balancer - quickly create a basic load balancer
    • Health checks (optional)
      • Health Check type
        • EC2 - always enabled
        • ELB - if load balancing is enabled
      • Health check grace period - amount of time until EC2 auto scaling performs the first health check on new instances after they are put into service e.g. 300 seconds
    • Additional settings (optional)
      • Monitoring - enable group metrics collection within CloudWatch
  • Configure group size and scaling policies
    • Group size
      • Desired capacity. How many EC2 instances do we want running simultaneously? Must be between minimum and maximum. E.g. 2
      • Minimum capacity e.g. 2
      • Maximum capacity e.g. 8
    • Scaling policies - choose whether to use a scaling policy to dynamically resize auto scaling group to meet changes in demand
      • Target tracking scaling policy - choose a desired outcome and leave it to the scaling policy to add and remove capacity as needed to achieve that outcome (a Terraform sketch of this policy follows the list below)
        • Name - we can choose an arbitrary name
        • Metric type e.g. Average CPU utilization (average over all EC2 instances running)
        • Target value e.g. 50 (%)
        • Instances need ____ seconds warm up before including in metric
        • Disable scale in to create only a scale-out policy
      • None
    • Instance scale-in protection (optional); if enabled, newly created instances will be protected from scale-in by default
  • Add notifications - send notifications to SNS topics whenever Amazon EC2 auto-scaling launches or terminates the EC2 instances in your auto scaling group
  • Add tags
  • Review
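
The Target tracking scaling policy described above can also be expressed in Terraform. A hedged sketch, reusing the my-app auto scaling group from the Terraform notes above and mirroring the example values here (50% average CPU):

resource "aws_autoscaling_policy" "my-app-cpu-tracking" {
  name                   = "my-app-cpu-tracking"
  autoscaling_group_name = aws_autoscaling_group.my-app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"  # average CPU across all instances in the group
    }
    target_value = 50  # keep average CPU utilization around 50%
  }
}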

As soon as the auto scaling group is created it becomes active. If we set 2 as the desired number of instances, the auto scaling group will immediately create them. If we try to delete them (e.g. manually stop them), they will go into the Terminated state and the auto scaling group will immediately launch 2 new instances. If we want to stop these instances permanently, we first need to delete the auto scaling group and then delete those EC2 instances.

As seen above, Load balancing is optional, so auto scaling (using auto scaling groups) can happen with no Load Balancer involved. But typically we want a load balancer so the load is distributed equally across all running EC2 instances. We can set up an NGINX server as a load balancer or use an AWS (Application) Load Balancer, in which case, in the Load balancing options above, we'll choose to attach the auto scaling group to an existing or newly created Load Balancer. It's worth mentioning that the auto scaling group is associated with the load balancer indirectly, via target groups.
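
In Terraform, that indirect association via a target group might be sketched like this (hedged; the load balancer and its listener are assumed to exist elsewhere, and the VPC ID is a placeholder):

resource "aws_lb_target_group" "my-app" {
  name     = "my-app"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "vpc-0123456789abcdef0"  # placeholder VPC ID
}

# Register the auto scaling group's instances with the target group
resource "aws_autoscaling_attachment" "my-app" {
  autoscaling_group_name = aws_autoscaling_group.my-app.name
  lb_target_group_arn    = aws_lb_target_group.my-app.arn
}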

 

Resources:

App Scaling - AWS Application Auto Scaling - AWS

amazon web services - AWS EC2 Auto Scaling Groups: I get Min and Max, but what's Desired instances limit for? - Stack Overflow

Set capacity limits on your Auto Scaling group - Amazon EC2 Auto Scaling  

 amazon web services - When stopping EC2 instance, it starts again automatically listed separately, previous one changes to terminated - Server Fault