Thursday, 18 May 2023

My PHP notes

 


I've never worked professionally with PHP, but I had to test some code, so here are some snippets I ran in tehplayground...

<?php
// example code

// $welcome = file_get_contents('/content/welcome');

// empty array
$my_array = array();

if ($my_array == NULL) {
    print "\$my_array == NULL\n";
}

if ($my_array === NULL) {
    print "\$my_array === NULL\n";
}

if (is_null($my_array)) {
    print "is_null(\$my_array) is true\n";
}

print "my_array = {$my_array}\n"; // line 21


print "var_dump(\$my_array) = \n";
var_dump($my_array);

if (in_array("test", $my_array)) {
    echo "test is in array";
}


if (!in_array("test", $my_array)) {
    echo "test is not in array";
}

print "Unsetting \$my_array";
unset($my_array);


if ($my_array == NULL) {
    print "\$my_array == NULL\n";
}

if ($my_array === NULL) { // line 44
    print "\$my_array === NULL\n";
}

if (in_array("test", $my_array)) { // line 48
    echo "test is in array";
}

?>

 

Output:

 

$my_array == NULL

Warning: Array to string conversion in Standard input code on line 21
my_array = Array
var_dump($my_array) =
array(0) {
}
test is not in arrayUnsetting $my_array
Warning: Undefined variable $my_array in Standard input code on line 40
$my_array == NULL

Warning: Undefined variable $my_array in Standard input code on line 44
$my_array === NULL

Warning: Undefined variable $my_array in Standard input code on line 48

Fatal error: Uncaught TypeError: in_array(): Argument #2 ($haystack) must be of type array, null given in Standard input code:48
Stack trace:
#0 Standard input code(48): in_array('test', NULL)
#1 {main}
  thrown in Standard input code on line 48 

 



if (empty($not_declared_array)) {
    echo("not_declared_array is empty.\n");
} else {
    echo("not_declared_array is not empty.\n");
}

$null_array = null;

if (empty($null_array)) {
    echo("null_array is empty.\n");
} else {
    echo("null_array is not empty.\n"); // double quotes, so \n is interpreted
}

$array = [];

if (empty($array)) {
    echo("array is empty.\n");
} else {
    echo("array is not empty.\n"); // double quotes, so \n is interpreted
}




Output:

not_declared_array is empty.
null_array is empty.
array is empty.

Friday, 10 March 2023

AWS EFS with Terraform







resource "aws_efs_file_system" "my-app-data-efs" {
  tags = {
    Name = "my-app-data-efs"
  }
}



In AWS Console, we can go to Amazon EFS >> File systems and verify that it's created. Its attributes are:

Name: my-app-data-efs
File system ID: fs-1d130ce4a92769f59
Encrypted: Unencrypted
Total size: 6.00 KiB
Size in Standard / One Zone: 6.00 KiB    
Size in Standard-IA / One Zone-IA: 0 Bytes
Provisioned Throughput (MiB/s):    -     
File system state: Available
Creation time: Thu, 09 Mar 2023 10:41:55 GMT
Availability Zone: Standard
 
Performance mode: General Purpose
Throughput mode: Bursting
Lifecycle management:
Transition into IA: None
Transition out of IA: None
Availability zone: Standard
Automatic backups: Disabled
Encrypted: No
File system state: Available
DNS name: No mount targets available
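The attributes above show the file system is unencrypted. Encryption at rest can be turned on from Terraform via the encrypted argument of aws_efs_file_system (a sketch; note that changing this setting forces re-creation of the file system):

```hcl
resource "aws_efs_file_system" "my-app-data-efs" {
  encrypted = true  # uses the default aws/elasticfilesystem KMS key unless kms_key_id is set

  tags = {
    Name = "my-app-data-efs"
  }
}
```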

 
It will have no Access points and no Mount targets defined:



 
To provide a mount target, we use the aws_efs_mount_target resource (aws_efs_mount_target | Resources | hashicorp/aws | Terraform Registry). Its required attributes are the EFS file system (for which we want to create the mount target) and the subnet (in which we want this mount target to be):

resource "aws_efs_mount_target" "my-app-data-efs-mt" {
  file_system_id = aws_efs_file_system.my-app-data-efs.id
  subnet_id = "subnet-14321c874d6d35c6a"
}


terraform plan output:

Terraform will perform the following actions:

  # aws_efs_mount_target.my-app-data-efs-mt will be created
  + resource "aws_efs_mount_target" "my-app-data-efs-mt" {
      + availability_zone_id   = (known after apply)
      + availability_zone_name = (known after apply)
      + dns_name               = (known after apply)
      + file_system_arn        = (known after apply)
      + file_system_id         = "fs-1d130ce4a92769f59"
      + id                     = (known after apply)
      + ip_address             = (known after apply)
      + mount_target_dns_name  = (known after apply)
      + network_interface_id   = (known after apply)
      + owner_id               = (known after apply)
      + security_groups        = (known after apply)
      + subnet_id              = "subnet-14321c874d6d35c6a"
    }

Plan: 1 to add, 0 to change, 0 to destroy.


After applying this change, we can check the Network settings for the EFS again, where we'll see that the mount target is now available:
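Since no security_groups were specified, the mount target gets the default security group of the subnet's VPC. To control access explicitly, we could attach a security group that allows NFS traffic (TCP port 2049); a sketch, with a hypothetical VPC ID:

```hcl
resource "aws_security_group" "efs-sg" {
  name   = "efs-sg"
  vpc_id = "vpc-0a1b2c3d4e5f67890"  # hypothetical VPC ID

  ingress {
    from_port   = 2049  # NFS
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # tighten this in real deployments
  }
}

resource "aws_efs_mount_target" "my-app-data-efs-mt" {
  file_system_id  = aws_efs_file_system.my-app-data-efs.id
  subnet_id       = "subnet-14321c874d6d35c6a"
  security_groups = [aws_security_group.efs-sg.id]
}
```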

 
 
The next step will be mounting EFS onto EC2 instance.
 

Resources:

 

File System Performance Metrics

image source: https://www.dnsstuff.com/latency-throughput-bandwidth

 

 
File system performance is measured by:
  • Latency
    • delay between request and response
    • a measure of the length of time it takes for a single I/O request to be completed from the application's point of view
    • measured separately for read (usually in microseconds) and write (usually in milliseconds) operations
      • If the I/O is a data read, latency is the time it takes for the data to come back. If the I/O is a write, latency is the time for the write acknowledgement to return.
    • affects application's acceleration
  • Throughput / Bandwidth
    • measures how many units of information a system can process in a period of time
    • describes the amount of data able to flow through a point in the data path over a given time
    • throughput and latency are often competing goals - tuning a system for maximum throughput (e.g. by batching I/O) tends to increase latency, while tuning for minimum latency tends to lower throughput
    • measured separately for file system read (usually in GiBps) and file system write (usually in MiBps) operations 
    • typically the best storage metric when measuring data that needs to be streamed rapidly, such as images and video files.
  • Input/Output operations per second (IOPS)
    • number of I/O operations per second
    • measured separately for read and write operations
    • as the number of IOPS requested from the device increases, the latency will also increase
    • affects application's scalability
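The three metrics are related: as a rough rule of thumb, throughput ≈ IOPS × I/O size. A quick sanity check with shell arithmetic (the numbers are purely illustrative):

```shell
# Rough relation between IOPS, I/O size and throughput.
# 1000 IOPS at 128 KiB per operation (illustrative numbers):
IOPS=1000
IO_SIZE_KIB=128
THROUGHPUT_MIB_S=$(( IOPS * IO_SIZE_KIB / 1024 ))
echo "${THROUGHPUT_MIB_S} MiB/s"   # 125 MiB/s
```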

Thursday, 9 March 2023

Amazon Elastic File System (EFS)

 
 

 
Amazon Elastic File System (EFS) is:
  • cloud-native data store
  • shared file storage - can be accessed by multiple computers at the same time
    • can be made available to VPC
      • EC2 instances can then securely mount EFS to store and access data
      • applications running on multiple EC2 instances can access the EFS at the same time
    • EFS can also be mounted on on-premises data center servers when connected to Amazon VPC with AWS Direct Connect or VPN making it easy to:
      • migrate data to EFS
      • enable cloud bursting
      • back up on-premises data to EFS
  • supports low latency applications and also highly-parallelized scale out jobs requiring high throughput (read here what's the difference between latency and throughput: File System Performance Metrics | My Public Notepad)
  • high throughput
    • throughput for a file system scales automatically as capacity grows
    • for workloads with high throughput and low capacity requirements, throughput can be provisioned independently of capacity 
  • there are 2 storage classes: 
    • Standard
    • EFS IA (Infrequent Access) - for less frequently accessed data we can configure EFS to store data in a cost-optimized IA storage class
      • LifeCycle Management automatically and transparently moves files accessed less frequently to EFS IA
  • has 2 performance modes so we can tailor EFS to our application needs
    • General Purpose
    • Max I/O
 

Benefits of using EFS

  • file storage system which is:
    • simple - supports the Network File System (NFS) versions 4.0 and 4.1 (NFSv4) protocol. This means that computers can access files on EFS using the standard file system tools and interfaces provided by the OS. This is why nfs is specified as the filesystem type supported by the kernel when using the mount command to mount an EFS device on an EC2 instance (mount -t nfs ...).
    • serverless - no need to provision infrastructure
    • scalable performance - lifecycle management
    • elastic - automatically grow or shrink as we add/remove files
      • can grow to petabytes (PB)
  • fully managed - no need to manage it
  • easy to set up via AWS Management Console, API or CLI
    • "set and forget"
  • cost-effective data store: you pay for the storage you use
  • access data securely, via existing AWS security infrastructure (IAM)
EFS symbol

Drawbacks of EFS

  • supports Linux only (it doesn't support Windows)

 

When to use EFS?

  • when thousands of EC2 instances from multiple availability zones, or on-premises servers, need to access data concurrently
    • EFS provides concurrent access for tens of thousands of connections for EC2 instances, containers and lambda functions
  • designed for high availability and durability, for storing data redundantly across multiple (3) availability zones
  • ideal for machine learning, analytics, web serving, content management, media storage, DB backups

How to create EFS?


In AWS Console, go to EFS and click on Create file system.


 
 We can then set:
  • Name of our file system
  • VPC where we want EC2 instances to connect to our file system
  • Storage class [EFS storage classes - Amazon Elastic File System]
    • Standard (AWS used to name this Regional) - Stores data redundantly across multiple AZs (recommended)
    • One Zone - Stores data redundantly within a single AZ
      • we need to select desired availability zone

 


 

We can customize File system settings:

 
 



Note that by default Lifecycle management is set so that files that haven't been accessed for 30 days are automatically transferred from Standard to Standard-Infrequent Access storage (which is cheaper, making this a cost-effective measure).


We can then customize Network access:

Note that EFS is an entity connected to a network. EFS has an IP address assigned in each availability zone. A mount target provides an IP address for an NFSv4 endpoint at which we can mount an Amazon EFS file system.
 
So mount target provides a network interface (in the selected subnet in the AZ) for EFS mounted at it. 
 
When the mount target state is available, our EFS file system is mounted onto the mount target and can be referred to via its URL (or IP address). 

This does not yet mean it is accessible from EC2 instances. We need to mount EFS onto the EC2 instance. For that we need to specify a mount point (the local directory on the client where the EFS file system is mounted and accessible). This is one of the settings that can be set when launching EC2 from the AWS Console (the right-hand value in the File system setting).

We can create a security group for EFS and use it everywhere - for each subnet/AZ. This security group can allow, for example, inbound NFS traffic (TCP port 2049) from anywhere.

Finally, we can customize File system policy:


 

Once EFS is created, it will take some more time for network interfaces to be created.
 

 

How to mount EFS on EC2 instance?

 
When creating EC2, we can select our EFS when setting File systems:
 

 
We also need to add EFS security group to the list of security groups used by this EC2 instance.

Once our EC2 instance is up and running, we can SSH to it and check mounted file systems with df tool:
 
$ df -T -h
 
-T - display file system types (Type column)
-h - display information about disk drives in human-readable format (kilobytes, megabytes, gigabytes and so on)
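The EFS file system is reached over NFS at a DNS name with a fixed format, <file-system-id>.efs.<region>.amazonaws.com. A small sketch (the file system ID is taken from the example above; the region is an assumption):

```shell
# Build the EFS NFS endpoint name; on the instance it would then be mounted with:
#   sudo mount -t nfs4 -o nfsvers=4.1 <dns-name>:/ /mnt/efs
EFS_ID="fs-1d130ce4a92769f59"
REGION="us-east-1"   # assumption - use the region the EFS actually lives in
EFS_DNS="${EFS_ID}.efs.${REGION}.amazonaws.com"
echo "${EFS_DNS}"
```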
 




Resources:

 
 
 
 
 
 

Wednesday, 8 March 2023

AWS EC2 Auto Scaling with Terraform

 


aws_autoscaling_group | Resources | hashicorp/aws | Terraform Registry

The minimum implementation that will pass terraform plan checks is:

resource "aws_autoscaling_group" "my_app" {
  min_size = 1
  max_size = 1
}

terraform plan output:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_autoscaling_group.my_app will be created
  + resource "aws_autoscaling_group" "my_app" {
      + arn                       = (known after apply)
      + availability_zones        = (known after apply)
      + default_cooldown          = (known after apply)
      + desired_capacity          = (known after apply)
      + force_delete              = false
      + force_delete_warm_pool    = false
      + health_check_grace_period = 300
      + health_check_type         = (known after apply)
      + id                        = (known after apply)
      + max_size                  = 1
      + metrics_granularity       = "1Minute"
      + min_size                  = 1
      + name                      = (known after apply)
      + protect_from_scale_in     = false
      + service_linked_role_arn   = (known after apply)
      + vpc_zone_identifier       = (known after apply)
      + wait_for_capacity_timeout = "10m"
    }

 

If we try to run terraform apply, we'll get the following error:

Error: One of `launch_configuration`, `launch_template`, or `mixed_instances_policy` must be set for an Auto Scaling Group 

 

Using Launch Configuration for defining EC2

Let's use a launch configuration (despite AWS discouraging the use of launch configurations in favour of launch templates; an example with a launch template is further down in this article).

We need to know the ID of the AMI we want to use. We'll choose the latest Amazon Linux 2 image, of t2.micro type which allows free tier.



If we select it, the next page will show its ID:



Terraform resource we'll use is aws_launch_configuration | Resources | hashicorp/aws | Terraform Registry.

 

# EC2 >> Launch configurations
resource "aws_launch_configuration" "my-app" {
  name          = "my-app"
  image_id      = "ami-006dcf34c09e50022"
  instance_type = "t2.micro"
}

We can now update our auto scaling group:

resource "aws_autoscaling_group" "my-app" {
  min_size = 1
  max_size = 1
  name = "my-app"
  launch_configuration = aws_launch_configuration.my-app.name
}

 

terraform apply still complains:

Error: Error creating Auto Scaling Group: ValidationError: At least one Availability Zone or VPC Subnet is required.
        status code: 400, request id: ad34ea76-a6d5-419a-bc48-0ffb15b4e76f

 

Let's define the subnet which we want our instances to be launched into:

resource "aws_autoscaling_group" "my-app" {
  min_size = 1
  max_size = 1
  name = "my-app"
  launch_configuration = aws_launch_configuration.my-app.name
  vpc_zone_identifier = [ "subnet-14321c874d6d35c6a" ]
}

terraform apply will now create the auto scaling group together with the launch configuration. This can be verified by looking at EC2 >> Auto Scaling groups and EC2 >> Launch configurations. Most importantly, the auto scaling group will launch a new EC2 instance in the subnet we specified in the configuration. This instance can be found in EC2 >> Instances.

 

Using Launch Template for defining EC2

AWS discourages the use of launch configurations in favour of launch templates.

Terraform resource is aws_launch_template | Resources | hashicorp/aws | Terraform Registry. Its description says:

Provides an EC2 launch template resource. Can be used to create instances or auto scaling groups.

Here are the key differences between launch templates (LT) and launch configurations (LC):

  • LT offer more EC2 options than LC
  • LT receive the latest features from Amazon EC2
  • LC are still supported but no longer get the latest EC2 features
  • LC are immutable (the resource can't be edited; to change it, we have to destroy it and re-create it)
  • LT can be edited and updated
  • LT can have multiple versions, which allows creation of parameter subsets (With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions - partial configuration for reuse and inheritance)
  • LT allow using the T2 unlimited burst credit option
  • LT allow provisioning using both On-Demand and Spot Instances
  • LT can be used to launch a standalone instance using the AWS Console, SDK and CLI
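Based on the differences above, here is a minimal sketch of the launch-template equivalent of the earlier configuration (same AMI, instance type and subnet as in the launch configuration example; aws_launch_template and the launch_template block of the auto scaling group are documented in the Terraform Registry):

```hcl
resource "aws_launch_template" "my-app" {
  name          = "my-app"
  image_id      = "ami-006dcf34c09e50022"
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "my-app" {
  min_size            = 1
  max_size            = 1
  name                = "my-app"
  vpc_zone_identifier = [ "subnet-14321c874d6d35c6a" ]

  launch_template {
    id      = aws_launch_template.my-app.id
    version = "$Latest"  # always use the latest template version
  }
}
```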


...

 

---

Monday, 27 February 2023

AWS NAT Gateway

 


What is NAT?

From AWS documentation:

A Network Address Translation (NAT) gateway is a device that forwards traffic from private subnets to other networks.

There are two types of NAT gateways:

  • Public: Instances in private subnets can connect to the internet but cannot receive unsolicited inbound connections from the internet.
  • Private: Instances in private subnets can connect to other VPCs or your on-premises network.

Each private or public NAT gateway must have a private IPv4 address assigned to it. Each public NAT gateway must also have an elastic IP (EIP) address (a static public address associated with your AWS account) associated with it. Choosing a private IPv4 address is optional. If you don't choose a private IPv4 address, one will be automatically assigned to your NAT gateway at random from the subnet that your NAT gateway is in. You can configure a custom private IPv4 address in Additional settings.

After you create the NAT gateway, you must update the route table that’s associated with the subnet you chose for the NAT gateway. If you create a public NAT gateway, you must add a route to the route table that directs traffic destined for the internet to the NAT gateway. If you create a private NAT gateway, you must add a route to the route table that directs traffic destined for another VPC or your on-premises network to the NAT gateway.
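Expressed with the same Terraform provider used in the other posts here, a public NAT gateway with its elastic IP and the private subnet's default route might look like this (a sketch; the subnet and route table IDs are hypothetical):

```hcl
resource "aws_eip" "nat" {
  domain = "vpc"  # EIP for the public NAT gateway
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = "subnet-0a1b2c3d4e5f67890"  # a public subnet (hypothetical ID)

  tags = {
    Name = "my-nat"
  }
}

# Default route in the private subnet's route table, pointing at the NAT gateway
resource "aws_route" "private_internet" {
  route_table_id         = "rtb-0a1b2c3d4e5f67890"  # private route table (hypothetical ID)
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.nat.id
}
```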

 

When to use NAT?


From AWS documentation:

The instances in the public subnet can send outbound traffic directly to the internet, whereas the instances in the private subnet can't. Instead, the instances in the private subnet can access the internet by using a network address translation (NAT) gateway that resides in the public subnet. The database servers can connect to the internet for software updates using the NAT gateway, but the internet cannot establish connections to the database servers.

 

Note that NAT is required if instances in a private subnet need to send a request (initiate a new connection) to a host on the Internet. If a request has reached the private instance (via an Application Load Balancer, for example), then NAT is not required. See: amazon web services - Can a EC2 in the private subnet sends traffic to the internet through ELB without using NAT gateway/instance? - Server Fault

 

How to create NAT?


 

Private NAT gateway traffic can't reach the internet.
 
 
From AWS documentation about Additional settings:
 
When assigning private IPv4 addresses to a NAT gateway, choose how you want to assign them:

  • Auto-assign: AWS automatically chooses a primary private IPv4 address and you choose if you want AWS to assign up to 7 secondary private IPv4 addresses to assign to the NAT gateway. AWS automatically chooses and assigns them for you at random from the subnet that your NAT gateway is in.
  • Custom: Choose the primary private IPv4 address and up to 7 secondary private IPv4 addresses to assign to the NAT gateway.
You can assign up to 8 private IPv4 addresses to your private NAT gateway. The first IPv4 address that you assign will be the primary IPv4 address, and any additional addresses will be considered secondary IPv4 addresses. Choosing private IPv4 addresses is optional. If you don't choose a private IPv4 address, one will be automatically assigned to your NAT gateway. You can configure custom private IPv4 addresses in Additional settings.
Secondary IPv4 addresses are optional and should be assigned or allocated when your workloads that use a NAT gateway exceed 55,000 concurrent connections to a single destination (the same destination IP, destination port, and protocol). Secondary IPv4 addresses increase the number of available ports, and therefore they increase the limit on the number of concurrent connections that your workloads can establish using a NAT gateway.

You can use the NAT gateway CloudWatch metrics ErrorPortAllocation and PacketsDropCount to determine if your NAT gateway is generating port allocation errors or dropping packets. To resolve this issue, add secondary IPv4 addresses to your NAT gateway.

 
Here are some typical architectures that include NAT:
 
Source: https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html

 
 

How to associate instances in private subnets with NATs?

 
The following diagrams show how routing tables are used to associate instances running in private subnets with NAT gateway created in public subnets thus allowing outbound traffic to Internet.
 
Source: https://www.packetswitch.co.uk/content/images/2020/06/Ghost-3-x-NAT-Gateway.png

 
 
Source: https://serverfault.com/questions/854475/aws-nat-gateway-in-public-subnet-why



 
Source: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html

 
 

References:

 

Thursday, 16 February 2023

DNS (Domain Name System)

 

 
 
 

DNS (Domain Name System)

  • a protocol, part of the Internet Protocol (IP) Suite
  • hierarchical and distributed naming system for computers, services, and other resources in the Internet
  • naming database in which internet domain names are located and translated into Internet Protocol (IP) addresses 
  • translates domain names to IP addresses so browsers can load Internet resources
  • helps Internet users and network devices discover websites using human-readable host names, instead of numeric IP addresses; For humans, domain names are a lot easier to remember than a sequence of numbers.
  • DNS configuration settings of some website are what allows visitors to still access that website even after it gets moved to a new hosting provider (its IP address will change but domain name will not)
 
For a hosted web site we need to specify (usually 2) DNS servers. These could be provided by the hosting provider, but we can specify custom ones, e.g. Cloudflare DNS servers. These DNS servers will be nodes in the DNS distributed database system, providing DNS records about our domains to whoever queries for them. Let's see which DNS records we can set.

Common DNS Records

  • A (A record, Address record,  IPv4 address record)
    • maps a domain name to an IPv4 address
    • used to point the domain name at one or multiple IP addresses
    • also referred to as a host or hostname
  • AAAA (IPv6 address record) maps domain name to IPv6 address
  • CNAME (Canonical Name record)
    • used to create an alias from one hostname to another
    • maps one domain name (an alias) to another (the canonical name)
    • Example: example.com has an A record which points to the IP address. If we say "www.example.com is a CNAME to example.com" and "ftp.example.com is a CNAME to example.com" that means that someone accessing www.example.com or ftp.example.com will be pointed to the same IP address that example.com points to. This is useful so that when your IP address changes, you only have to update example.com’s entry (DNS A record for example.com), and www.example.com and ftp.example.com automatically point to the right place.
    • If you already have an A record, you will not use a CNAME
    • CNAME record tells anyone visiting a subdomain to also use the same DNS records as another domain or subdomain. 
    • This sort of thing is convenient when running multiple services from a single IP address (e.g. FTP server and web server share the same IP address but different port)
    • CNAME records only work for subdomains and must always point to another domain or subdomain and never directly to an IP address.
    • When a DNS resolver encounters a CNAME record while looking for a regular resource record, it will restart the query using the canonical name instead of the original name.
  • MX (Mail eXchanger)
    • allows you to control the delivery of mail for a given domain or subdomain. In our context, MX records can be set on a host-by-host basis to point to other hosts on the Internet (usually with permanent connections) that are set up to accept and/or route mail for your hostname(s). Setting a backup MX makes the entry you specify a secondary mail exchanger. This means that delivery will be attempted to your host first, and then to the backup host you specify if that fails.
  • TXT (TXT records) 
    • used to store information. Common uses include SPF, DKIM, etc.
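The CNAME behaviour described above (the resolver restarting the query with the canonical name) can be sketched with a toy lookup table; the records and IP below are hypothetical, and a real resolver of course queries nameservers over the network rather than a local function:

```shell
# Toy zone data (hypothetical records)
lookup() {
  case "$1" in
    example.com)     echo "A 93.184.216.34" ;;
    www.example.com) echo "CNAME example.com" ;;
    ftp.example.com) echo "CNAME example.com" ;;
    *)               echo "NXDOMAIN" ;;
  esac
}

# Resolve a name, restarting the query when a CNAME is encountered
resolve() {
  rec=$(lookup "$1")
  case "$rec" in
    "CNAME "*) resolve "${rec#CNAME }" ;;
    *)         echo "$rec" ;;
  esac
}

resolve www.example.com   # follows the CNAME and prints the A record of example.com
```

When the A record of example.com changes, only that one record needs updating; both aliases automatically follow it.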

 
It can take up to 72 hours for newly set DNS records to take effect - while the change is replicated across DNS servers on the internet (see DNS Propagation).
 
The network of DNS servers is hierarchical. Types of DNS servers are:
  • Recursive resolvers (DNS recursors)
    • clients first send to them DNS queries
    • they are assigned by ISP but can be set manually:
      • Cloudflare 1.1.1.1
      • Google (8.8.8.8 and 8.8.4.4)
    • they respond either with cached data, or send the request to a root nameserver, then a TLD nameserver, and finally to an authoritative nameserver from which they receive the IP address
    • every recursive resolver knows about 13 (types of) DNS root nameservers
  • Root nameservers
    • when receive query about some domain name e.g. example.com they return the address of the TLD nameserver which contains information about the domain extension e.g. .com 
    • there are over 600 root nameservers which sync among themselves (anycast routing) and all contain the same data
  • TLD (Top-Level Domain) nameservers
    • they are domain extension-specific - each of them contains the list of authoritative servers for a single domain extension, e.g. .com or .ai 
    • they return the address of authoritative servers
    • Larger TLDs and registrars (like GoDaddy, Namecheap etc...) use an API call to notify the TLD operator of any new registrations and changes
  • Authoritative nameserver
    • resolver’s last step in the journey for an IP address
    • they are domain-specific - each of them contains the list of IP addresses for a particular domain e.g. ftp.example.com or www.example.com
    • they return the IP address for a given hostname or, if domain has a CNAME (alias domain name), resolver needs to repeat the whole process in order to get the IP address for that alias host name.
    • when you register your web site, name servers you set for it are authoritative nameservers
 

To manage DNS records of domain e.g. example.com means setting DNS records for its root and subdomains. For each record we set:
  • Type: A, CNAME, MX, ...
  • Name: e.g. ftp (for ftp.example.com)
  • Content: this is the value which depends on the type e.g. IPv4 address if A record, alias if CNAME etc....
  • Proxy status: DNS only (proxy disabled) or Proxied (proxy enabled)
  • TTL (Time to Live) - in minutes

Resources: