
Friday, 5 September 2025

Introduction to Amazon Kinesis Data Streams

 


Amazon Kinesis Data Streams is one of the four Amazon Kinesis services. It makes it easy to stream data at any scale.

  • There are no servers to manage. The on-demand mode (one of two capacity modes, the other being a provisioned one) eliminates the need to provision or manage capacity required for running applications.
  • Automatic provisioning and scaling with the on-demand mode.
  • Pay only for what you use 
  • Built-in integrations with other AWS services to create analytics, serverless, and application integration solutions
  • To ingest and collect terabytes of data per day from application and service logs, clickstream data, sensor data, and in-app user events to power live dashboards, generate metrics, and deliver data into data lakes
  • To build applications for high-frequency event data such as clickstream data, and gain access to insights in seconds, not days, using AWS Lambda or Amazon Managed Service for Apache Flink
  • To pair with AWS Lambda to respond to or adjust immediate occurrences within the event-driven applications in your environment, at any scale.

The producers continually push data to Kinesis Data Streams, and the consumers process the data in real time. A producer can be, for example, CloudWatch Logs, and a consumer can be a Lambda function.
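
For illustration, this is roughly how a producer could write a single record with the AWS CLI; the stream name, region and profile are placeholders, and the --cli-binary-format flag assumes AWS CLI v2 so the data can be passed as plain text:

% aws kinesis put-record \
--stream-name my-stream \
--partition-key user-123 \
--data "hello from a producer" \
--cli-binary-format raw-in-base64-out \
--region us-east-2 \
--profile my-profile

The response contains the ShardId and SequenceNumber assigned to the record.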

Capacity Modes

  • Provisioned
    • data stream capacity is fixed
    • 1 shard has fixed capacities: 
      • Write: Maximum 1 MiB/second or 1,000 records/second 
      • Read: Maximum 2 MiB/second
    • N shards will multiply R/W capacity by N
  • On-demand
    • data stream capacity scales automatically
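
As a sketch (stream names are arbitrary), the capacity mode is chosen when the stream is created:

% aws kinesis create-stream \
--stream-name my-on-demand-stream \
--stream-mode-details StreamMode=ON_DEMAND

% aws kinesis create-stream \
--stream-name my-provisioned-stream \
--shard-count 2

An existing stream can also be switched between the two modes later with aws kinesis update-stream-mode.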

Data Retention Period

A Kinesis data stream stores records for 24 hours by default, and the retention period can be increased up to 8,760 hours (365 days). You can update the retention period via the Kinesis Data Streams console or by using the IncreaseStreamRetentionPeriod and DecreaseStreamRetentionPeriod operations.
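
For example, to extend retention of a hypothetical stream to 7 days (168 hours):

% aws kinesis increase-stream-retention-period \
--stream-name my-stream \
--retention-period-hours 168 \
--region us-east-2 \
--profile my-profile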


Data Stream << Shards << Data Records


A Kinesis data stream is a set of shards. Each shard contains a sequence of data records.

A data record is the unit of data stored in a Kinesis data stream. Data records are composed of:
  • sequence number
    • within the shard
    • assigned by Kinesis Data Streams
  • partition key
  • data blob, which is an immutable sequence of bytes
    • Kinesis Data Streams does not inspect, interpret, or change the data in the blob in any way
    • A data blob can be up to 1 MB.

How to find all shards in a stream?

% aws kinesis list-shards \
--stream-name my-stream \
--region us-east-2 \
--profile my-profile
{
    "Shards": [
        {
            "ShardId": "shardId-000000000000",
            "HashKeyRange": {
                "StartingHashKey": "0",
                "EndingHashKey": "440282366920938463463374607431768211455"
            },
            "SequenceNumberRange": {
                "StartingSequenceNumber": "49663754454378916691333541504734985347376184017408753666"
            }
        }
    ]
}


Note the HashKeyRange - Kinesis hashes each record's partition key and writes the record to the shard whose hash key range contains the resulting value.
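
The hash is the MD5 of the partition key, interpreted as a 128-bit integer. A rough sketch of that calculation in the shell (assuming GNU md5sum and bc are available; on macOS, md5 -q can be used instead of md5sum):

% PARTITION_KEY="54dc991cd70ae6242c35f01972968478"
% HASH_HEX=$(printf '%s' "$PARTITION_KEY" | md5sum | cut -d' ' -f1)
% echo "ibase=16; $(echo "$HASH_HEX" | tr 'a-f' 'A-F')" | bc

The printed decimal value falls within the StartingHashKey-EndingHashKey range of exactly one shard, and that is the shard the record is written to.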

Shard Iterator


Shard iterators are a fundamental tool for consumers to process data in the correct order and without missing records. They provide fine control over the reading position, making it possible to build scalable, reliable stream processing workflows.

A shard iterator is a reference marker that specifies the exact position within a shard from which data reading should begin and continue sequentially. It enables consumers to access, read, and process records from a particular location within a data stream shard.

Different types of shard iterators control where reading starts:
  • AT_SEQUENCE_NUMBER: Reads from the specified sequence number.
  • AFTER_SEQUENCE_NUMBER: Reads from just after the specified sequence number.
  • TRIM_HORIZON: Starts from the oldest available data in the shard.
  • LATEST: Starts from the newest records (records added after the iterator is created).
  • AT_TIMESTAMP: Starts from a specific timestamp in the shard.
When reading repeatedly from a stream, the shard iterator is updated after each GetRecords request (with the NextShardIterator returned by the API).

This mechanism allows applications to seamlessly resume reading from the correct point in the stream.

Proper management of shard iterators is essential to avoid data loss or duplicate reads, especially as data retention policies and processing speeds can affect the availability of data in the stream.


How to get an iterator in a shard?


A shard iterator is obtained using the GetShardIterator API, which marks the spot in the shard to start reading records.

The command aws kinesis get-shard-iterator is used to obtain a pointer (iterator) to a specific position in a shard of a Kinesis stream. We need this iterator to actually read records from the stream using aws kinesis get-records.


% ITERATOR=$(aws kinesis get-shard-iterator \
    --stream-name my-stream \
    --shard-id shardId-000000000000 \
    --shard-iterator-type TRIM_HORIZON \
    --query 'ShardIterator' \
    --output text \
    --region us-east-2 \
    --profile my-profile)

ShardIterator is a token we pass to aws kinesis get-records to actually fetch the data.

% echo $ITERATOR
AAAAAAAAAAHf65JkbV8ZJQ...Exsy8WerU5Z8LKI8wtXm95+blpXljd0UWgDs7Seo9QlpikJI/6U=


We can now read records directly from the stream:

% aws kinesis get-records \
--shard-iterator $ITERATOR \
--limit 50 \
--region us-east-2 \
--profile my-profile
{
    "Records": [
        {
            "SequenceNumber": "49666716197061357389751170868210642185623031110770360322",
            "ApproximateArrivalTimestamp": "2025-09-03T17:37:00.343000+01:00",
            "Data": "H4sIAAAAAAAA/+3YTW/TMBgH8K8S...KaevxIQAA",
            "PartitionKey": "54dc991cd70ae6242c35f01972968478"
        },
        {
            "SequenceNumber": "49666716197061357389751170868211851111442645739945066498",
            "ApproximateArrivalTimestamp": "2025-09-03T17:37:00.343000+01:00",
            "Data": "H4sIAAAAAAAA/+3YS2vcM...889uPBV8BVXceAAA=",
            "PartitionKey": "e4e9ad254c154281a67d05a33fa0ea31"
        },
        ...
   ]
}

The Data field in each record is Base64-encoded. That’s because Kinesis doesn’t know what your producers are sending (it could be JSON, gzipped text, protobuf, etc.), so it just delivers raw bytes.

Why does each record read from the shard have a different partition key?


Each record in a Kinesis stream can have a different PartitionKey because the PartitionKey is chosen by the data producer for each record and is not tied directly to the way records are read from the stream or which iterator type is used. Even when reading from a single shard using TRIM_HORIZON, records within that shard may have different PartitionKeys because the PartitionKey is used to route records to shards at the time of writing—not to group records within a shard during reading.

How Partition Keys and Shards Work


The PartitionKey determines to which shard a record is sent by hashing the key and mapping it to a shard's hash key range. Multiple records with different PartitionKeys can end up in the same shard, especially if the total number of unique partition keys is greater than the number of shards, or if the hash function maps them together. When reading from a shard (with any iterator, including TRIM_HORIZON), the records are read in order of arrival, but each can have any PartitionKey defined at ingest time.

Reading and PartitionKeys


Using TRIM_HORIZON just means starting at the oldest record available in the shard. It does not guarantee all records have the same PartitionKey, only that they are the oldest records remaining for that shard. Records from a single shard will often have various PartitionKeys, all mixed together as per their original ingest. Therefore, it is normal and expected to see a variety of PartitionKeys when reading a batch of records from the same shard with TRIM_HORIZON.


Writing and Reading

Data records written by the same producer can end up in different shards if the producer chooses different PartitionKeys for those records. The distribution of records across shards is determined by the hash of the PartitionKey assigned to each record, not by the producer or the order of writing.

Distribution Across Shards

If a producer sends data with varying PartitionKeys, Kinesis uses those keys' hashes to assign records to shards; thus, even the same producer's records can be spread across multiple shards.
If the producer uses the same PartitionKey for all records, then all its records will go to the same shard, preserving strict ordering for that key within the shard.

Reading N Records: Producer and Sequence

When reading N records from a shard iterator, those records are the next available records in that shard, in the order they arrived in that shard. These records will not necessarily all be from the same producer, nor are they guaranteed to be N consecutive records produced by any single producer.

Records from different producers, as well as from the same producer if it used multiple PartitionKeys, can appear in any sequence within a shard, depending on how PartitionKeys are mapped at write time.
In summary, unless a producer always uses the same PartitionKey, its records may spread across shards, and any batch read from a shard iterator will simply reflect the ordering of records within that shard, including records from multiple producers and PartitionKeys.
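
As an illustration, a single producer can write a batch of records with different partition keys in one call; the PutRecords response shows which shard each record was routed to. The Data values below are simply Base64 for "first" and "second", and the stream name, region and profile are placeholders:

% aws kinesis put-records \
--stream-name my-stream \
--records PartitionKey=user-1,Data=Zmlyc3Q= PartitionKey=user-2,Data=c2Vjb25k \
--region us-east-2 \
--profile my-profile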


How to get records content in a human-readable format?

We need to extract the Data field from a record, Base64-decode it, and then process the data further. In our example the payload is gzip-compressed JSON (as that's what a CloudWatch Logs → Kinesis subscription delivers), so we need to decompress it and parse it as JSON:


% aws kinesis get-records \
--shard-iterator $ITERATOR \
--limit 1 \
--query 'Records[0].Data' \
--output text \
--region us-east-2 \
--profile my-profile \
| base64 --decode \
| gzip -d \
| jq .

{
  "messageType": "DATA_MESSAGE",
  "owner": "123456789999",
  "logGroup": "/aws/lambda/my-lambda",
  "logStream": "2025/09/03/[$LATEST]202865ca545544deb61360d571180d45",
  "subscriptionFilters": [
    "my-lambda-MainSubscriptionFilter-r1YlqNvCVDqk"
  ],
  "logEvents": [
    {
      "id": "39180581104302115620949753249891540826922898325656305664",
      "timestamp": 1756918020250,
      "message": "{\"log.level\":\"info\",\"@timestamp\":\"2025-09-03T16:47:00.250Z\",\"log.origin\":{\"function\":\"github.com/elastic/apm-aws-lambda/app.(*App).Run\",\"file.name\":\"app/run.go\",\"file.line\":98},\"message\":\"Exiting due to shutdown event with reason spindown\",\"ecs.version\":\"1.6.0\"}\n"
    },
    {
      "id": "39180581104302115620949753249891540826922898325656305665",
      "timestamp": 1756918020250,
      "message": "{\"log.level\":\"warn\",\"@timestamp\":\"2025-09-03T16:47:00.250Z\",\"log.origin\":{\"function\":\"github.com/elastic/apm-aws-lambda/apmproxy.(*Client).forwardLambdaData\",\"file.name\":\"apmproxy/apmserver.go\",\"file.line\":357},\"message\":\"Dropping lambda data due to error: metadata is not yet available\",\"ecs.version\":\"1.6.0\"}\n"
    },
    {
      "id": "39180581104302115620949753249891540826922898325656305666",
      "timestamp": 1756918020250,
      "message": "{\"log.level\":\"warn\",\"@timestamp\":\"2025-09-03T16:47:00.250Z\",\"log.origin\":{\"function\":\"github.com/elastic/apm-aws-lambda/apmproxy.(*Client).forwardLambdaData\",\"file.name\":\"apmproxy/apmserver.go\",\"file.line\":357},\"message\":\"Dropping lambda data due to error: metadata is not yet available\",\"ecs.version\":\"1.6.0\"}\n"
    }
  ]
}


Metrics to Observe


A stream has producers and consumers. If the rate of writing new records is higher than the rate of reading them, records will reach their retention age (24 hours by default) and the oldest unread records will start being removed from the stream, lost forever.

To prevent this from happening we can monitor some stream metrics and also set alarms when they reach critical thresholds.


GetRecords.IteratorAgeMilliseconds

It measures how old the oldest record returned by GetRecords is, i.e. how far our consumer lags behind. A very large value means our consumer(s) aren’t keeping up with the incoming write rate.

The age of the last record in all GetRecords calls made against a Kinesis stream, measured over the specified time period. Age is the difference between the current time and when the last record of the GetRecords call was written to the stream. The Minimum and Maximum statistics can be used to track the progress of Kinesis consumer applications. A value of zero indicates that the records being read are completely caught up with the stream. Shard-level metric name: IteratorAgeMilliseconds.

Meaningful Statistics: Minimum, Maximum, Average, Samples

Unit info: Milliseconds

There are 86,400,000 milliseconds in a day, so if this metric climbs above that value (with the default 24-hour retention), some records are already being lost.

A growing iterator age is a classic “consumer is falling behind” symptom.
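
As a sketch, we could alarm when the maximum iterator age crosses, say, half of the default retention (12 hours = 43,200,000 ms); the alarm name, stream name and threshold are placeholders:

% aws cloudwatch put-metric-alarm \
--alarm-name my-stream-iterator-age-high \
--namespace AWS/Kinesis \
--metric-name GetRecords.IteratorAgeMilliseconds \
--dimensions Name=StreamName,Value=my-stream \
--statistic Maximum \
--period 300 \
--evaluation-periods 1 \
--threshold 43200000 \
--comparison-operator GreaterThanThreshold \
--region us-east-2 \
--profile my-profile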


GetRecords.Bytes

The number of bytes retrieved from the Kinesis stream, measured over the specified time period. Minimum, Maximum, and Average statistics represent the bytes in a single GetRecords operation for the stream in the specified time period.  
             
Shard-level metric name: OutgoingBytes
             
Meaningful Statistics: Minimum, Maximum, Average, Sum, Samples
             
Unit info: Bytes



Addressing Bottlenecks


If a Lambda function is the stream consumer, then with just 1 shard only one Lambda invocation can read at a time (per shard). If our log rate exceeds what that shard can handle, the iterator age skyrockets.

It is possible to change the number of shards on a live Kinesis stream:

aws kinesis update-shard-count \
  --stream-name your-stream-name \
  --target-shard-count 4 \
  --scaling-type UNIFORM_SCALING


Each shard = 1 MiB/s write and 2 MiB/s read, so 4 shards = 4 MiB/s write and 8 MiB/s read.
Lambda will then process 4 record batches in parallel (one per shard).
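
Resharding is asynchronous, so after running update-shard-count we can watch the stream status and open shard count (the stream goes through UPDATING and back to ACTIVE):

% aws kinesis describe-stream-summary \
--stream-name your-stream-name \
--query 'StreamDescriptionSummary.[StreamStatus,OpenShardCount]'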



Amazon Kinesis Data Streams Terminology and concepts - Amazon Kinesis Data Streams

Introduction to Amazon Kinesis

 


Amazon Kinesis is a Serverless Streaming Data Service which has 4 Service types:

  • Amazon Kinesis Video Streams - to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing
  • Amazon Kinesis Data Streams - to easily stream data at any scale
  • Amazon Data Firehose - to reliably load real-time streams into data lakes, warehouses, and analytics services
  • Amazon Managed Service for Apache Flink - to transform and analyze streaming data in real time



Friday, 8 August 2025

AWS EKS Cluster Networking





If we select a cluster and go to Networking tab, we'll see the following settings: 
  • VPC
  • Cluster IP address family
  • Service IPv4 range
  • Subnets
  • Cluster security group
  • Additional security groups
  • API server endpoint access

The Manage drop-down groups them into the following:
  • VPC Resources (Network environment)
    • Subnets
    • Additional security groups - optional
  • Endpoint access (API server endpoint access)
  • Remote networks


We'll describe here the meaning and purpose for each of them.
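
The same settings can also be read from the CLI (the cluster name is a placeholder), which is handy for comparing against what the console shows; resourcesVpcConfig holds the VPC, subnets, security groups and endpoint access settings, while kubernetesNetworkConfig holds the cluster IP address family and service CIDR:

% aws eks describe-cluster \
--name my-cluster \
--query 'cluster.{networking:resourcesVpcConfig,ipFamily:kubernetesNetworkConfig}'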


VPC


Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you have defined. This virtual network closely resembles a traditional network that you would operate in your own data center, with the benefits of using the scalable infrastructure of AWS. 

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. A subnet is a range of IP addresses in your VPC. 

Each Managed Node Group requires you to specify one or more subnets that are defined within the VPC used by the Amazon EKS cluster. Nodes are launched into subnets that you provide. The size of your subnets determines the number of nodes and pods that you can run within them. 

You can run nodes across multiple AWS availability zones by providing multiple subnets, each associated with a different availability zone. Nodes are distributed evenly across all of the designated Availability Zones.

If you are using the Kubernetes Cluster Autoscaler and running stateful pods, you should create one Node Group for each availability zone using a single subnet and enable the --balance-similar-node-groups feature in cluster autoscaler.

EKS suggests using Private Subnets for worker nodes.




Cluster IP address family



Select the IP address type that pods and services in your cluster will receive: IPv4 or IPv6.

Amazon EKS does not support dual stack clusters. However, if your worker nodes have an IPv4 address, Amazon EKS will configure IPv6 pod routing so that pods can communicate with cluster-external IPv4 endpoints.


Service IPv4 range


The IP address range from which cluster services will receive IP addresses. Manually configuring this range can help prevent conflicts between Kubernetes services and other networks peered or connected to your VPC.

Service CIDR is only configurable when choosing IPv4 as your cluster IP address family. With IPv6, the service CIDR will be an auto generated unique local address (ULA) range.


Subnets


Choose the subnets in your VPC where the control plane may place elastic network interfaces (ENIs) to facilitate communication with your cluster. The specified subnets must span at least two availability zones.

To control exactly where the ENIs will be placed, specify only two subnets, each from a different AZ, and Amazon EKS will make cross-account ENIs in those subnets. The Amazon EKS control plane creates up to 4 cross-account ENIs in your VPC for each cluster.

You may choose one set of subnets for the control plane that are specified as part of cluster creation, and a different set of subnets for the worker nodes.

EKS suggests using private subnets for worker nodes.

If you select IPv6 cluster address family, the subnets specified as part of cluster creation must contain an IPv6 CIDR block.

Cluster security group & Additional security groups


Amazon VPC Security groups control communications within the Amazon EKS cluster including between the managed Kubernetes control plane and compute resources in your AWS account such as worker nodes and Fargate pods.

The Cluster Security Group is a unified security group that is used to control communications between the Kubernetes control plane and compute resources on the cluster. The cluster security group is applied by default to the Kubernetes control plane managed by Amazon EKS as well as any managed compute resources created by Amazon EKS. 

EKS automatically creates a cluster security group on cluster creation to facilitate communication between worker nodes and the control plane. The description of such an SG is: "EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads". The name of this SG has the form eks-cluster-sg-<cluster_name>-1234567890. It's attached to the same VPC that the cluster is in. Its rules are:
  • Inbound: allow all traffic (all protocols and ports) from itself (see https://stackoverflow.com/questions/66917854/aws-security-group-source-of-inbound-rule-same-as-security-group-name)
  • Outbound: allow all IPv4 and IPv6 traffic

Optionally, choose additional security groups to apply to the EKS-managed Elastic Network Interfaces that are created in your control plane subnets. To create a new security group, go to the corresponding page in the VPC console.

Additional cluster security groups control communications from the Kubernetes control plane to compute resources in your account. Worker node security groups are security groups applied to unmanaged worker nodes that control communications from worker nodes to the Kubernetes control plane.


Example:

  • Description: EKS cluster security group
  • Inbound rules:
    • IP version: IPv4
    • Type: HTTPS 
    • Protocol: TCP
    • Port range: 443
    • Source: 192.168.1.0/24
    • Description: Office LAN CIDR (for access via Site-to-Site VPN)



API server endpoint access


You can limit, or completely disable, public access from the internet to your Kubernetes cluster endpoint.

Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

You can, optionally, limit the CIDR blocks that can access the public endpoint. If you limit access to specific CIDR blocks, then it is recommended that you also enable the private endpoint, or ensure that the CIDR blocks that you specify include the addresses that worker nodes and Fargate pods (if you use them) access the public endpoint from.

You can enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. You can limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.


Cluster endpoint access Info

Configure access to the Kubernetes API server endpoint.
  • Public - The cluster endpoint is accessible from outside of your VPC. Worker node traffic will leave your VPC to connect to the endpoint.
  • Public and private - The cluster endpoint is accessible from outside of your VPC. Worker node traffic to the endpoint will stay within your VPC.
  • Private - The cluster endpoint is only accessible through your VPC. Worker node traffic to the endpoint will stay within your VPC.
If we choose Public or Public and private, we get Advanced settings with an option to add/edit sources for the public access endpoint. We can add up to 40 CIDR blocks here.

Public access endpoint sources - Determines the traffic that can reach the Kubernetes API endpoint of this cluster.

Use CIDR notation to specify an IP address range (for example, 203.0.113.5/32).

If connecting from behind a firewall, you'll need the IP address range used by the client computers.

By default, your public endpoint is accessible from anywhere on the internet (0.0.0.0/0).

If you restrict access to your public endpoint using CIDR blocks, it is strongly recommended to also enable private endpoint access so worker nodes and/or Fargate pods can communicate with the cluster. Without the private endpoint enabled, your public access endpoint CIDR sources must include the egress sources from your VPC. For example, if you have a worker node in a private subnet that communicates to the internet through a NAT Gateway, you will need to add the outbound IP address of the NAT Gateway as part of an allowlisted CIDR block on your public endpoint.
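
These settings can also be changed on an existing cluster from the CLI; the sketch below uses a placeholder cluster name and CIDR:

% aws eks update-cluster-config \
--name my-cluster \
--resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.5/32"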


---

Friday, 1 August 2025

Introduction to AWS IAM Identity Center




IAM Identity Center (formerly AWS Single Sign-On, or AWS SSO) enables you to centrally manage workforce access to multiple AWS accounts and applications via single sign-on.



IAM Identity Center setup


(1) Confirm your identity source


The identity source is where you administer users and groups, and it is the service that authenticates your users. By default, IAM Identity Center creates an Identity Center directory.

(2) Manage permissions for multiple AWS accounts


Give users and groups access to specific AWS accounts in your organization.

(3) Set up application user and group assignments


Give users and groups access to specific applications configured to work with IAM Identity Center.

(4) Register a delegated administrator


Delegate the ability to manage IAM Identity Center to a member account in your AWS organization.



AWS SSO Authentication


When you run aws sso login, it initiates an authentication flow that communicates with your organization's configured IAM Identity Center instance to obtain temporary AWS credentials for CLI use. This command does not interact with the legacy AWS SSO service, but with the current IAM Identity Center (the new, official name as of July 2022). The authentication process exchanges your SSO credentials for tokens that allow you to use other AWS CLI commands with the associated permissions.

aws sso login --profile my_profile

The profile named in aws sso login --profile my_profile must be defined in your AWS CLI configuration file, specifically in ~/.aws/config (on Linux/macOS) or %USERPROFILE%\.aws\config (on Windows).

To define or create an SSO profile, use the interactive command:

aws configure sso --profile my_profile

This command will prompt you for required details such as the SSO start URL, AWS region, account ID, and role name, and then write them into your ~/.aws/config file.

A typical SSO profile configuration in ~/.aws/config might look like:

[profile my_profile]
sso_session = my-sso
sso_account_id = 123456789012
sso_role_name = AdministratorAccess

[sso-session my-sso]
sso_start_url = https://myorg.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access


Never define the profile in ~/.aws/credentials; SSO profiles rely on ~/.aws/config.

After defining it, aws sso login --profile my_profile will use the details in ~/.aws/config to initiate login.
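
A quick way to verify that the profile works after logging in is to check which identity the CLI is using (the output shows the account and the assumed role):

% aws sso login --profile my_profile
% aws sts get-caller-identity --profile my_profile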

The most straightforward method is using aws configure sso with your desired profile name as shown above.

If you already have an SSO session defined, you can reuse it across multiple profiles by referencing the same sso_session.
...

Monday, 21 July 2025

AWS Site-to-Site VPN


How to set up a VPN connection between the office router and AWS VPN?

How to set up an IPsec VPN connection between our office router (e.g. Cisco ASA) and the AWS VPN endpoints?

AWS Virtual Private Network solutions establish secure connections between our on-premises networks, remote offices, client devices, and the AWS global network. 

AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. 

Each service provides a highly-available, managed, and elastic cloud VPN solution to protect our network traffic.

In this article we'll talk about AWS Site-to-Site VPN.


AWS Site-to-Site VPN 


Network diagram:


on-premise LAN: 192.168.0.0/16 
-----------------------------------------
/ \                         / \
 |                           |
 |  active tunnel            |  passive (standby) tunnel
 |                           |
\ /                         \ /
-----------------------------------------
Router1                    Router 2

VGW - Virtual Private Gateway 
VPC: 172.16.0.0/16; Route Table: 192.168.0.0/16 ---> VGW-xxxx


Can VPC CIDR and LAN CIDR overlap?

VPN connection consists of two tunnels:
  • active (up and running)
  • passive (down); if first one goes down, this one will take over

The VPC route table will need to be modified so that traffic destined for 192.168.0.0/16 is routed to VGW-xxxx.
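
As a sketch with placeholder IDs, the static route towards the VGW can be added with the CLI, or, with dynamic routing, route propagation can be enabled on the route table instead:

% aws ec2 create-route \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block 192.168.0.0/16 \
--gateway-id vgw-0123456789abcdef0

% aws ec2 enable-vgw-route-propagation \
--route-table-id rtb-0123456789abcdef0 \
--gateway-id vgw-0123456789abcdef0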


AWS VPN service consists of 3 components:

Creating and configuring a Customer Gateway


A Customer Gateway is a resource that we create in AWS that represents the (customer) gateway device in our on-premises network.

When we create a customer gateway, we provide information about our device to AWS. We or our network administrator must configure the device to work with the site-to-site VPN connection.


We first need to create a Customer Gateway in AWS. We can do that via AWS console or Terraform provider. 



If we click on Create customer gateway, we'll see this form:



Details

  • Name tag
    • optional
    • Creates a tag with a key of 'Name' and a value that we specify.
    • Value must be 256 characters or less in length.
  • BGP ASN
    • The ASN of our customer gateway device.
    • e.g. 65000
    • Value must be in 1 - 4294967294 range.
    • The Border Gateway Protocol (BGP) Autonomous System Number (ASN) in the range of 1 – 4,294,967,294 is supported. We can use an existing public ASN assigned to our network, with the exception of the following:
      • 7224 - Reserved in all Regions
      • 9059 - Reserved in the eu-west-1 Region
      • 10124 - Reserved in the ap-northeast-1 Region
      • 17943 - Reserved in the ap-southeast-1 Region
    • If we don't have a public ASN, we can use a private ASN in the range of 64,512–65,534 or 4,200,000,000 - 4,294,967,294. The default ASN is 65000.
    • It is required if we want to set up dynamic routing. If we want to use static routing, we can use an arbitrary (default) value.
    • Where to find BGP ASN for e.g. UDM Pro?
    • If we want to use IPSec and dynamic routing, then our router device needs to support BGP over IPSec
    • When to use static and when to use dynamic routing?
  • IP address
    • Specify the IP address for our customer gateway device's external interface. This is an internet-routable IP address for our gateway's external interface.
    • The address must be static and can't be behind a device performing Network Address Translation (NAT)
    • If office router is connected to ISP via e.g. WAN1 connection, this is the IP of that WAN connection 
    • Basically, this is the office's public IP address.
  • Certificate ARN
    • optional
    • The ARN of a private certificate provisioned in AWS Certificate Manager (ACM).
    • We can select certificate ARN from a drop-down list
    • How is this certificate used?
    • When to use this certificate?
  • Device
    • optional
    • A name for the customer gateway device.
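
For reference, the same customer gateway can be created from the CLI; the public IP and ASN below are placeholders:

% aws ec2 create-customer-gateway \
--type ipsec.1 \
--public-ip 203.0.113.10 \
--bgp-asn 65000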

Creating and configuring a Virtual private gateway


A virtual private gateway is the VPN concentrator on the Amazon side of the site-to-site VPN connection. We create a virtual private gateway and attach it to the VPC we want to use for the site-to-site VPN connection.


A VPN concentrator is a specialized networking device designed to manage numerous secure connections (VPN tunnels) for remote users or sites accessing a central network. It acts as a central point for establishing, processing, and maintaining these connections, enabling large organizations to securely connect many users simultaneously. 

Key Functions:
  • Multiple VPN Tunnel Management: VPN concentrators handle a large number of encrypted VPN tunnels simultaneously, allowing multiple users to securely connect to the network. 
  • Centralized Security: They provide a central point for managing and enforcing security policies for all remote connections, ensuring consistent protection. 
  • Scalability: VPN concentrators are designed to handle a large number of users and connections, making them suitable for large organizations with many remote workers or sites. 
  • Traffic Encryption: They encrypt all data transmitted between the remote user and the central network, ensuring secure communication and protecting sensitive information. 
  • Enhanced Security Posture: By managing and controlling all VPN connections, they help organizations maintain a strong security posture and minimize risks associated with remote access. 
How it Works:
  • 1. Remote User Connection: Remote users initiate a VPN connection, which is then routed to the VPN concentrator. 
  • 2. Authentication and Authorization: The concentrator authenticates and authorizes the user, verifying their identity and permissions. 
  • 3. Tunnel Establishment: If the user is authorized, the concentrator establishes an encrypted VPN tunnel between the user's device and the central network. 
  • 4. Secure Communication: All data transmitted through the tunnel is encrypted, protecting it from eavesdropping or interception. 
  • 5. Traffic Management: The concentrator manages and prioritizes traffic within the network, ensuring efficient and secure communication. 
Use Cases:
  • Large Enterprises: Companies with numerous remote employees often use VPN concentrators to provide secure access to their internal network. 
  • Extranet VPNs: VPN concentrators are also used in extranet setups, where multiple organizations need to securely share resources and information. 
  • Large Scale Remote Access: They are ideal for organizations that need to provide secure remote access to a large number of users from various locations. 
In essence, a VPN concentrator is a robust and scalable solution for managing secure remote access in larger organizations, providing the necessary infrastructure for secure and efficient communication across the network




If we click on the Create button, we'll get this form to fill in:


If we select Custom ASN:



Upon creation, the VGW will be in a detached state. We want to attach it to a VPC.
We can select which VPC we want to attach it to.
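
A CLI sketch of the same steps (the IDs are placeholders; --amazon-side-asn is only needed when choosing a custom Amazon-side ASN):

% aws ec2 create-vpn-gateway \
--type ipsec.1 \
--amazon-side-asn 64600

% aws ec2 attach-vpn-gateway \
--vpn-gateway-id vgw-0123456789abcdef0 \
--vpc-id vpc-0123456789abcdef0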

Tuesday, 8 July 2025

How to install MongoDB Shell (mongosh) on Mac

 


The main Homebrew repository no longer includes MongoDB due to licensing changes made by MongoDB Inc. So to install MongoDB-related tools (like mongosh, mongodb-community, or mongod), we need to use their own tap (mongodb/brew), which contains these formulas.

Tap is a package source (formula repository).


Let's add tap maintained by MongoDB to our local Homebrew setup:

% brew tap mongodb/brew

To install mongo shell:

% brew install mongosh

Verification:

% mongosh --version 
2.5.3


If we now run it with no arguments, it will try to connect to the local instance:

% mongosh                    
Current Mongosh Log ID: 6853eadee32b5e6cd3cc5d2f
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.5.3
MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017

Let's explore the arguments:

% mongosh --help

  $ mongosh [options] [db address] [file names (ending in .js or .mongodb)]

  Options:

    -h, --help                                 Show this usage information
    -f, --file [arg]                           Load the specified mongosh script
        --host [arg]                           Server to connect to
        --port [arg]                           Port to connect to
        --build-info                           Show build information
        --version                              Show version information
        --quiet                                Silence output from the shell during the connection process
        --shell                                Run the shell after executing files
        --nodb                                 Don't connect to mongod on startup - no 'db address' [arg] expected
        --norc                                 Will not run the '.mongoshrc.js' file on start up
        --eval [arg]                           Evaluate javascript
        --json[=canonical|relaxed]             Print result of --eval as Extended JSON, including errors
        --retryWrites[=true|false]             Automatically retry write operations upon transient network errors (Default: true)

  Authentication Options:

    -u, --username [arg]                       Username for authentication
    -p, --password [arg]                       Password for authentication
        --authenticationDatabase [arg]         User source (defaults to dbname)
        --authenticationMechanism [arg]        Authentication mechanism
        --awsIamSessionToken [arg]             AWS IAM Temporary Session Token ID
        --gssapiServiceName [arg]              Service name to use when authenticating using GSSAPI/Kerberos
        --sspiHostnameCanonicalization [arg]   Specify the SSPI hostname canonicalization (none or forward, available on Windows)
        --sspiRealmOverride [arg]              Specify the SSPI server realm (available on Windows)

  TLS Options:

        --tls                                  Use TLS for all connections
        --tlsCertificateKeyFile [arg]          PEM certificate/key file for TLS
        --tlsCertificateKeyFilePassword [arg]  Password for key in PEM file for TLS
        --tlsCAFile [arg]                      Certificate Authority file for TLS
        --tlsAllowInvalidHostnames             Allow connections to servers with non-matching hostnames
        --tlsAllowInvalidCertificates          Allow connections to servers with invalid certificates
        --tlsCertificateSelector [arg]         TLS Certificate in system store (Windows and macOS only)
        --tlsCRLFile [arg]                     Specifies the .pem file that contains the Certificate Revocation List
        --tlsDisabledProtocols [arg]           Comma separated list of TLS protocols to disable [TLS1_0,TLS1_1,TLS1_2]
        --tlsFIPSMode                          Enable the system TLS library's FIPS mode

  API version options:

        --apiVersion [arg]                     Specifies the API version to connect with
        --apiStrict                            Use strict API version mode
        --apiDeprecationErrors                 Fail deprecated commands for the specified API version

  FLE Options:

        --awsAccessKeyId [arg]                 AWS Access Key for FLE Amazon KMS
        --awsSecretAccessKey [arg]             AWS Secret Key for FLE Amazon KMS
        --awsSessionToken [arg]                Optional AWS Session Token ID
        --keyVaultNamespace [arg]              database.collection to store encrypted FLE parameters
        --kmsURL [arg]                         Test parameter to override the URL of the KMS endpoint

  OIDC auth options:

        --oidcFlows[=auth-code,device-auth]    Supported OIDC auth flows
        --oidcRedirectUri[=url]                Local auth code flow redirect URL [http://localhost:27097/redirect]
        --oidcTrustedEndpoint                  Treat the cluster/database mongosh as a trusted endpoint
        --oidcIdTokenAsAccessToken             Use ID tokens in place of access tokens for auth
        --oidcDumpTokens[=mode]                Debug OIDC by printing tokens to mongosh's output [redacted|include-secrets]
        --oidcNoNonce                          Don't send a nonce argument in the OIDC auth request

  DB Address Examples:

        foo                                    Foo database on local machine
        192.168.0.5/foo                        Foo database on 192.168.0.5 machine
        192.168.0.5:9999/foo                   Foo database on 192.168.0.5 machine on port 9999
        mongodb://192.168.0.5:9999/foo         Connection string URI can also be used

  File Names:

        A list of files to run. Files must end in .js and will exit after unless --shell is specified.

  Examples:

        Start mongosh using 'ships' database on specified connection string:
        $ mongosh mongodb://192.168.0.5:9999/ships

  For more information on usage: https://mongodb.com/docs/mongodb-shell.

To test connection:

% mongosh "mongodb://myuser:pass@mongo0.example.com:27017,mongo1.example.com:27017,mongo2.example.com:27017"
Current Mongosh Log ID: 6853ed1a745676a15bb62b1f
Connecting to: mongodb://<credentials>@mongo0.example.com:27017,mongo1.example.com:27017,mongo2.example.com:27017/?appName=mongosh+2.5.3
Using MongoDB: 8.0.8-3
Using Mongosh: 2.5.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting
   2025-06-11T15:17:19.361+00:00: While invalid X509 certificates may be used to connect to this server, they will not be considered permissible for authentication
------
[mongos] test>

To test if this is a master cluster:

[mongos] test> db.runCommand({ isMaster: 1 })
{
  ismaster: true,
  msg: 'isdbgrid',
  topologyVersion: {
    processId: ObjectId('68499daa6d644d093f3230a7'),
    counter: Long('0')
  },
  maxBsonObjectSize: 16777216,
  maxMessageSizeBytes: 48000000,
  maxWriteBatchSize: 100000,
  localTime: ISODate('2025-06-19T10:58:20.543Z'),
  logicalSessionTimeoutMinutes: 30,
  connectionId: 7122158,
  maxWireVersion: 25,
  minWireVersion: 0,
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1750330700, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('2yaQ2MNlRYg3aXYhfRzQ4jxXIA0=', 0),
      keyId: Long('7511354796578701336')
    }
  },
  operationTime: Timestamp({ t: 1750330700, i: 1 })
}

To issue hello command (which returns a document that describes the role of the mongod instance): 

[mongos] test> db.runCommand({ hello: 1 })
{
  isWritablePrimary: true,
  msg: 'isdbgrid',
  topologyVersion: {
    processId: ObjectId('68499d52d772b382ee78bcc8'),
    counter: Long('0')
  },
  maxBsonObjectSize: 16777216,
  maxMessageSizeBytes: 48000000,
  maxWriteBatchSize: 100000,
  localTime: ISODate('2025-06-19T10:58:43.289Z'),
  logicalSessionTimeoutMinutes: 30,
  connectionId: 7126567,
  maxWireVersion: 25,
  minWireVersion: 0,
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1750330723, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('4en7J3oSF9fRGUUOHmkq4icWsOQ=', 0),
      keyId: Long('7511354796578701336')
    }
  },
  operationTime: Timestamp({ t: 1750330723, i: 1 })
}
[mongos] test> 


Monday, 30 June 2025

Introduction to Amazon API Gateway


 Amazon API Gateway:

  • fully managed service to create, publish, maintain, monitor, and secure APIs at any scale
    • APIs act as the "front door" for applications to access data, business logic, or functionality from our backend services
  • allows creating:
    • RESTful APIs
      • optimized for serverless workloads and HTTP backends using HTTP APIs
        • they act as triggers for Lambda functions
      • HTTP APIs are the best choice for building APIs that only require API proxy functionality
      • Use REST APIs if our APIs require both of the following in a single solution:
        • API proxy functionality 
        • API management features
    • WebSocket APIs that enable real-time two-way communication applications
  • supports:
    • containerized workloads
    • serverless workloads
    • web applications
  • handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including:
    • traffic management
    • CORS support
    • authorization and access control
    • throttling
    • monitoring
    • API version management
  • has no minimum fees or startup costs. We pay for the API calls we receive and the amount of data transferred out and, with the API Gateway tiered pricing model, we can reduce our cost as our API usage scales


RESTful APIs


What is the difference between REST API endpoints (apiGateway) and HTTP API endpoints (httpApi)?

The difference between REST API endpoints (apiGateway) and HTTP API endpoints (httpApi) in Amazon API Gateway primarily comes down to features, performance, cost, and use cases.


REST API endpoints (apiGateway):
  • Older, feature-rich, supports API keys, usage plans, request/response validation, custom authorizers, and more.
  • More configuration options, but higher latency and cost.
  • Defined under the provider.apiGateway section and function events: http.

HTTP API endpoints (httpApi):
  • Newer, simpler, faster, and cheaper.
  • Supports JWT/Lambda authorizers, CORS, and OIDC, but lacks some advanced REST API features.
  • Defined under provider.httpApi and function events: httpApi.
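
Outside the Serverless Framework, the two API types also live in different CLI namespaces, which is a quick way to see the split (the names below are placeholders):

% aws apigatewayv2 create-api \
--name my-http-api \
--protocol-type HTTP

% aws apigateway create-rest-api \
--name my-rest-api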


Friday, 27 June 2025

GitHub Workflows and AWS




A GitHub workflow can communicate with our AWS resources directly (via AWS CLI commands) or indirectly (e.g. via the Terraform AWS provider).

Before running AWS CLI commands, deploying AWS infrastructure with Terraform, or interacting with AWS services in any way, we need to include a step which configures AWS credentials. It ensures that the workflow runner is authenticated with AWS and knows which region to target.

This step should contain configure-aws-credentials action provided by AWS. This action sets up the necessary environment variables so that AWS CLI commands and SDKs can authenticate with AWS services.

The aws-region input sets the default AWS region to us-east-2 (Ohio). All AWS commands run in later steps will use this region unless overridden.

We can use either IAM user or OIDC (temp token) authentication.

IAM User Authentication


If using IAM user authentication, we can store the user's credentials in dedicated GitHub secrets:

env:
    AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_REGION: us-east-2

# Define this step before any steps that access AWS:

- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v2
  with:
    aws-region: ${{ env.AWS_REGION }}

 OpenID Connect (OIDC) Authentication


In this authentication, configure-aws-credentials GitHub Action uses GitHub's OpenID Connect (OIDC) for secure authentication with AWS. It leverages the OIDC token provided by GitHub to request temporary AWS credentials from AWS STS, eliminating the need to store long-lived AWS access keys in GitHub Secrets. 

Note that we now need to grant the workflow run write access to the id-token:
id-token: write allows the workflow to request and use OpenID Connect (OIDC) tokens. The write level is required for actions that need to generate or use OIDC tokens to authenticate with external systems. Granting id-token: write is essential for workflows that use OIDC-based authentication, such as securely assuming AWS IAM roles via GitHub Actions. This enables secure, short-lived authentication to AWS and other cloud providers. This permission is a security best practice for modern CI/CD workflows that use OIDC to authenticate with cloud providers, reducing the need for static secrets.


env:
    AWS_REGION: us-east-2

permissions:
  id-token: write # aws-actions/configure-aws-credentials (OIDC)

...
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-role
    role-session-name: my-app
    aws-region: ${{ env.AWS_REGION }}



Here's how it works: 
  1. GitHub OIDC Provider: GitHub acts as an OIDC provider, issuing signed JWTs (JSON Web Tokens) to workflows that request them.
  2. configure-aws-credentials Action: This action, when invoked in a GitHub Actions workflow, receives the JWT from the OIDC provider.
  3. AWS STS Request: The action then uses the JWT to request temporary security credentials from AWS Security Token Service (STS).
  4. Credential Injection: AWS STS returns temporary credentials (access key ID, secret access key, and session token) which the action injects as environment variables into the workflow's execution environment.
  5. AWS SDKs and CLI: AWS SDKs and the AWS CLI automatically detect and use these environment variables for authenticating with AWS services.

Benefits of using OIDC with configure-aws-credentials:
  • Enhanced Security: Eliminates the need to store long-lived AWS access keys, reducing the risk of compromise.
  • Simplified Credential Management: Automatic retrieval and injection of temporary credentials, simplifying workflow setup and maintenance.
  • Improved Auditing: Provides better traceability of actions performed within AWS, as the identity is linked to the GitHub user or organization. 

Before using the action:
  • Configure an OpenID Connect provider in AWS: We need to establish an OIDC trust relationship between GitHub and our AWS account.
  • Create an IAM role in AWS: Define the permissions for the role that the configure-aws-credentials action will assume.
  • Set up the GitHub workflow: Configure the configure-aws-credentials action with the appropriate parameters, such as the AWS region and the IAM role to assume. 

In an OpenID Connect (OIDC) authentication scenario, the aws-actions/configure-aws-credentials action creates the following environment variables when assuming a role with temporary credentials: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. These variables are used by the AWS SDK and CLI to interact with AWS resources. 

Here's a breakdown:
  • AWS_ACCESS_KEY_ID: This environment variable stores the access key ID of the temporary credentials. 
  • AWS_SECRET_ACCESS_KEY: This environment variable stores the secret access key of the temporary credentials. 
  • AWS_SESSION_TOKEN: This environment variable stores the session token associated with the temporary credentials, which is required for operations with AWS Security Token Service (STS). 

These environment variables are populated by the action after successful authentication with the OIDC provider and assuming the specified IAM role. The action retrieves the temporary credentials from AWS and makes them available to subsequent steps in the workflow. 


Once AWS authentication is done and these env variables are created, the next steps in the workflow can access our AWS resources, e.g. read secrets from AWS Secrets Manager:

- name: Read secrets from AWS Secrets Manager into environment variables
  uses: aws-actions/aws-secretsmanager-get-secrets@v2
  with:
    secret-ids: |
      my-secret
    parse-json-secrets: true

- name: deploy
  run: |
    echo $AWS_ACCESS_KEY_ID
    echo $AWS_SECRET_ACCESS_KEY
  env:
    MY_KEY: ${{ env.MY_SECRET_MY_KEY }}

This example assumes that in AWS secret my-secret we have a key MY_KEY, set to the secret value we want to fetch and use.