Friday, 31 October 2025

Elasticsearch Nodes


Elasticsearch nodes are individual instances of Elasticsearch servers that are part of a cluster. Each node stores data and participates in the cluster’s indexing and search capabilities, playing a critical role in the distributed architecture of Elasticsearch.

Key Points about Elasticsearch Nodes:


A node is a single server or instance running Elasticsearch, identified by a unique name.

Nodes collectively form a cluster, which is a group of Elasticsearch nodes working together.

Nodes can have different roles (a sketch of how roles are assigned follows this list):
  • Master Node: Manages the cluster state and handles cluster-wide actions like adding/removing nodes and creating/deleting indices.
  • Data Node: Stores data and executes data-related operations such as searches and aggregations.
  • Coordinating (Client) Node: Routes requests to the appropriate nodes but does not hold data.
  • Other specialized roles include ingest and machine learning nodes.
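As a minimal sketch, roles are assigned per node in elasticsearch.yml (node.roles is available from Elasticsearch 7.9 onward; the role lists below are illustrative):

# A master-eligible data node:
node.roles: [ master, data ]

# A coordinating-only node declares an empty role list:
node.roles: [ ]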

Nodes communicate through TCP ports (commonly 9200 for REST API and 9300 for node-to-node communication).

Elasticsearch distributes data across nodes using shards, enabling horizontal scalability, fault tolerance, and high availability.

In essence, nodes are the building blocks of an Elasticsearch cluster, with each node running on a server (physical or virtual) and working in coordination to provide fast search and analytics on distributed data.

To list all nodes with their attributes we can run this command in Kibana DevTools:


GET /_cat/nodes?v

Output example:

ip            heap.percent ram.percent cpu load_1m load_5m load_15m    node.role       master name
10.199.43.136           44          61   5    1.69    1.71     1.51 cdfhilmrstw -      default-2
10.199.6.164            38          55   4    0.96    1.40     1.33 cdfhilmrstw -      default-1
10.199.30.70            25          51   9    1.61    1.57     1.06 cdfhilmrstw -      data-0
10.199.38.215           46         100  13    1.69    1.71     1.51 cdfhilmrstw -      data-1
10.199.1.249            81          76  30    0.96    1.40     1.33 cdfhilmrstw *      monitoring-1
10.199.32.134           75         100  27    1.69    1.71     1.51 cdfhilmrstw -      monitoring-0
10.199.23.94            77         100  26    1.61    1.57     1.06 cdfhilmrstw -      monitoring-2
10.199.18.75            23          91  19    1.61    1.57     1.06 cdfhilmrstw -      default-0
10.199.15.193           59          56   5    0.96    1.40     1.33 cdfhilmrstw -      data-2
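Each letter in the node.role column denotes a role the node carries: c (cold data), d (data), f (frozen data), h (hot data), i (ingest), l (machine learning), m (master-eligible), r (remote cluster client), s (content data), t (transform), w (warm data). The master column marks the currently elected master with * (here monitoring-1).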


---

Elasticsearch Indices




An Elasticsearch index is a logical namespace that stores and organizes a collection of related JSON documents, similar to a database table in relational databases but designed for full-text search and analytics. 

Each index is uniquely named and can contain any number of documents, where each document is a set of key-value pairs (fields) representing your data.

Key Features of an Elasticsearch Index


  • Structure: An index comprises one or more shards, which are distributed across nodes in the Elasticsearch cluster for scalability and resilience.
  • Mapping and Search: Indices define mappings that control how document fields are stored and searched.
  • Indexing Process: Data is ingested and stored as JSON documents in the index, and Elasticsearch builds an inverted index to allow for fast searches.
  • Use Case: Indices are used to organize datasets in log analysis, search applications, analytics, or any scenario where rapid search/retrieval is needed.

In summary, an Elasticsearch index is the foundational storage and retrieval structure enabling efficient search and analytics on large datasets.
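For illustration, here is a minimal DevTools sketch that creates an index with a simple mapping and stores one document (the index and field names are made up):

PUT /app-logs
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "level": { "type": "keyword" },
      "message": { "type": "text" }
    }
  }
}

POST /app-logs/_doc
{
  "@timestamp": "2025-10-31T12:00:00Z",
  "level": "ERROR",
  "message": "user login failed"
}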


Index Lifecycle Policy (ILM)


An Index Lifecycle Management (ILM) policy defines what happens to an index as it ages — automatically. It’s a set of rules for retention, rollover, shrink, freeze, and delete.

Example:

PUT _ilm/policy/functionbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50GB" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}


This says:
  • Keep the index hot (actively written to) until it’s 30 days old or 50 GB in size.
  • Then roll over (create a new index and switch writes to it).
  • After 90 days, delete the old index.
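To check which lifecycle phase each managed index is currently in (and when the next action will run), we can query the ILM explain API:

GET functionbeat-*/_ilm/explain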

ILM can be applied to a standard (non-data stream) index: we can attach an ILM policy to any index, not just data streams. However, there’s a big difference:

  • Rollover alias required:
    • Standard index: Yes. We must manually set up an alias to make rollover work!
    • Data stream: No (handled automatically; Elastic manages the alias and the backing indices)
  • Multiple backing indices:
    • Standard index: Optional (via rollover)
    • Data stream: Always (that’s how data streams work)
  • Simplified management:
    • Standard index: Manual setup
    • Data stream: Built-in

Index Rollover vs Data Stream


If we have a continuous stream of documents (e.g. logs) being written to Elasticsearch, we should not write them to a single regular index, as its size will grow over time and we'd need to keep increasing node storage. Instead, we should consider one of the following options (a sketch of option 1 follows the list):

  1. Data Stream
  2. Index with an ILM policy that defines rollover conditions
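As a sketch of option 1 (template and stream names are illustrative): a data stream is defined by an index template that contains a data_stream object, and the stream can then be created explicitly (or implicitly by the first write):

PUT _index_template/logs-app
{
  "index_patterns": ["logs-app-*"],
  "data_stream": {},
  "template": {
    "settings": { "index.lifecycle.name": "logs-policy" }
  }
}

PUT _data_stream/logs-app-default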

What does rollover mean for a standard index?

When a rollover is triggered (by size, age, or document count):

  • Elasticsearch creates a new index whose name follows the same pattern (e.g. functionbeat-000002).
  • The alias used for writes (e.g. functionbeat-write) is moved from the old index to the new one.
  • Functionbeat or Logstash continues writing to the same alias, unaware that a rollover happened.


Example:

# Initially
functionbeat-000001  (write alias: functionbeat-write)

# After rollover
functionbeat-000001  (read-only)
functionbeat-000002  (write alias: functionbeat-write)


This keeps the write flow continuous and allows you to:
  • Manage old data (delete, freeze, move to cold tier)
  • Limit index size for performance
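A rollover can also be triggered (or merely tested) manually against the write alias; when conditions are supplied, Elasticsearch rolls over only if at least one of them is met:

POST /functionbeat-write/_rollover
{
  "conditions": { "max_age": "30d", "max_size": "50GB" }
}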

How to apply ILM to a standard index?

Here’s a minimal configuration (note that PUT _template uses the legacy template API; newer Elasticsearch versions use the composable _index_template API shown in the Index Template section below):

PUT _ilm/policy/functionbeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50GB" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _template/functionbeat
{
  "index_patterns": ["functionbeat-*"],
  "settings": {
    "index.lifecycle.name": "functionbeat",
    "index.lifecycle.rollover_alias": "functionbeat-write"
  }
}


The following command creates a new index called functionbeat-000001 (if it doesn’t already exist); if the index does exist, it updates the aliases section. It creates an alias named functionbeat-write that points to this index. Aliases are like virtual index names: you can send reads or writes to the alias instead of a specific index, and they’re lightweight and flexible.

"is_write_index": true tells Elasticsearch: “When someone writes to this alias, route the write operations to this index.” If you later have functionbeat-000001 and functionbeat-000002, and both share the alias functionbeat-write, then only the one with "is_write_index": true will receive new documents.

PUT functionbeat-000001
{
  "aliases": {
    "functionbeat-write": { "is_write_index": true }
  }
}
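We can verify which indices the alias currently points to, and which of them is the write index:

GET _alias/functionbeat-write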


ILM rollover works by:
  • Watching the alias (functionbeat-write), not a specific index.
  • When rollover conditions are met (e.g. 50 GB or 30 days), Elasticsearch:
    • Creates a new index (functionbeat-000002)
    • Moves "is_write_index": true from 000001 to 000002. From that moment, all new Functionbeat writes go to the new index — automatically.
After rollover:
  • functionbeat-000001 becomes read-only, but still searchable.
  • ILM will later delete it when it ages out (based on your policy).

So that last command effectively bootstraps the first generation of an ILM-managed index family.
  • ILM policy: Automates rollover, delete, etc.
  • Rollover action: Creates a new index and shifts the alias
  • Alias requirement: Required, used for write continuity
  • Data stream alternative: Better option, handles rollover and aliasing for you

Index Template

Index templates do not retroactively apply to existing indices. They only apply automatically to new indices created after the template exists.

When we define an index template like:

PUT _index_template/functionbeat
{
  "index_patterns": ["functionbeat-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "functionbeat"
    }
  }
}


That template becomes part of the index creation logic.

So:

When a new index is created (manually or via rollover),
→ Elasticsearch checks all templates matching the name.
→ The matching template(s) are merged into the new index settings.

Existing indices are not touched or updated.

If we already have an index — e.g. functionbeat-8.7.1 — that matches the template pattern, it won’t automatically get the template settings.

We need to apply those manually, for example:

PUT functionbeat-8.7.1/_settings
{
  "index.lifecycle.name": "functionbeat",
  "index.lifecycle.rollover_alias": "functionbeat-write"
}

Now the existing index is under ILM control (using the same settings the template would have applied if it were created fresh).

Elasticsearch treats index templates as blueprints for new indices, not as live configurations.
This is intentional — applying settings automatically to existing indices could cause:
  • unintended allocation moves,
  • mapping conflicts,
  • or lifecycle phase resets.

We want to keep as little data as possible in Elasticsearch. If the stored data are logs, we want to:
  • make sure apps are sending only meaningful logs
  • make sure we capture repetitive error messages so the app can be fixed and stops emitting them

Shards and Replicas


We can set the number of shards and replicas per index in Elasticsearch when we create the index, and we can dynamically update the number of replicas (but not the number of primary shards) for existing indices.

Setting Shards and Replicas on Index Creation


Specify the desired number in the index settings payload:


PUT /indexName
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}

This creates the index with 6 primary shards and 2 replicas per primary shard.

Adjusting Replicas After Creation


You can adjust the number of replicas for an existing index using the settings API:


PUT /indexName/_settings
{
  "index": {
    "number_of_replicas": 3
  }
}

Replicas can be changed at any time, but the number of primary shards is fixed for the lifetime of the index.

Shard and Replica Principles


  • Each index has a configurable number of primary shards.
  • Each primary shard can have multiple replica shards (copies).
  • Replicas improve fault tolerance and can spread search load.

We should choose shard and replica counts based on data size, node count, and performance needs. Adjusting these settings impacts resource usage and indexing/search performance.


Index Size


To find out the size of each shard of every index, we can use the following Kibana DevTools query:


GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason,node,store&s=store:desc

The output contains the following columns:
  • index - index name
  • shard - ordinal number of a shard. If an index has 2 primary shards and 1 replica configured, we'd have 4 rows: shard=0 for the first two rows (first primary and its replica) and shard=1 for the next two (second primary and its replica)
  • prirep - whether the shard is a primary (p) or a replica (r)
  • state - e.g. STARTED
  • unassigned.reason - why the shard is unassigned, if applicable
  • node - name of the node hosting the shard
  • store - used storage (in gb, mb or kb)


As a rule of thumb, each shard should not be larger than about 50 GB. We can enforce this via an Index Lifecycle Policy, where we can set rollover criteria.
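For a per-index (rather than per-shard) view of storage, sorted by total size:

GET /_cat/indices?v&h=index,pri,rep,store.size,pri.store.size&s=store.size:desc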

Friday, 24 October 2025

Introduction to Kubernetes CoreDNS



CoreDNS is a DNS server that runs inside Kubernetes and is responsible for service discovery — i.e., translating service names (like my-service.default.svc.cluster.local) into IP addresses.


What CoreDNS Does

In a Kubernetes cluster:
  • Every Pod and Service gets its own DNS name.
  • CoreDNS listens for DNS queries from Pods (via /etc/resolv.conf).
  • It looks up the name in the cluster’s internal DNS records and returns the correct ClusterIP or Pod IP.
So if a Pod tries to reach mysql.default.svc.cluster.local, CoreDNS will resolve it to the IP of the mysql service.
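As a quick sanity check (the image below is the dnsutils image used in the Kubernetes DNS debugging docs), we can run a throwaway Pod and resolve a service name from inside the cluster:

kubectl run -it --rm dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- nslookup kubernetes.default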

How It Works

  • Runs as a Deployment in the kube-system namespace.
  • Has a Service called kube-dns (for backward compatibility).
  • Uses a ConfigMap (coredns) to define how DNS queries are processed.
  • Listens on port 53 (UDP/TCP), the standard DNS port.

Example CoreDNS ConfigMap snippet:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

Key Plugins


CoreDNS is modular — it uses plugins for specific functionality:
  • kubernetes: handles DNS for cluster Services/Pods.
  • forward: forwards queries to upstream resolvers for external domains.
  • cache: caches responses for faster resolution.
  • prometheus: exposes metrics for monitoring.
  • health: adds a health endpoint.

Why It Matters


  • Without CoreDNS, Pods can’t resolve service names.
  • It’s essential for communication between microservices.
  • It’s a critical cluster component: if it breaks, DNS resolution fails, and your workloads often fail with it.

Common Commands


Check CoreDNS pods:

kubectl get pods -n kube-system -l k8s-app=kube-dns


View CoreDNS logs:

kubectl logs -n kube-system -l k8s-app=kube-dns


Edit CoreDNS config:

kubectl -n kube-system edit configmap coredns
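The reload plugin in the Corefile above makes CoreDNS pick up ConfigMap changes automatically after a short delay; alternatively, the Deployment can be restarted:

kubectl -n kube-system rollout restart deployment coredns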


---

Thursday, 16 October 2025

AWS Web Application Firewall (WAF)



A web application firewall is in place to protect public-facing web applications from outside threats.
The WAF should be deployed and configured so that it actually blocks malicious traffic.
There should be a single global WAF ACL set.

If WAF is not enabled or there are no rulesets configured:

Recommended Solution:

Enable a Web Application Firewall (WAF) for all public-facing web applications. Configure it with standard security rulesets (e.g., OWASP Core Rule Set) to detect and block common attacks such as SQL injection, XSS, and request smuggling. Regularly update and tune the rules to minimize false positives.

Risks and Side Effects:

Implementing or tightening WAF rules may initially block legitimate traffic or cause application performance degradation if not tuned properly. Continuous monitoring and rule adjustments are required to balance security and usability.

How to Check if Affected Resources Handle Customer Data:

Review application documentation and data flow diagrams to determine which web applications process or serve customer data. Check whether endpoints behind the affected web servers are used for customer authentication, data input forms, or APIs that exchange customer information. If logs or configuration files reference customer databases or CRM integrations, those applications are considered to handle customer data.


What are the resources in aws that should be behind WAF?

In AWS, any internet-facing resource that serves web traffic or APIs should ideally be protected by a Web Application Firewall (WAF). Here’s a breakdown of the main AWS resources that should be behind WAF, depending on how your architecture is set up:

1. Amazon CloudFront Distributions

Why: CloudFront is often the first entry point for global web traffic. Applying WAF at this layer helps block malicious requests before they reach your origin servers.

Use case: Static websites, dynamic web applications, APIs, or content delivery networks (CDNs).

2. Application Load Balancers (ALB)

Why: ALBs route traffic to backend EC2 instances, containers, or Lambda functions. WAF at this level protects internal applications or microservices exposed to the internet.

Use case: Web apps, REST APIs, or services behind an ALB.

3. Amazon API Gateway

Why: API Gateway endpoints are public by default and can expose critical business logic. WAF helps prevent attacks like injection, scraping, and DoS attempts.

Use case: Public or partner-facing APIs.

4. AWS AppSync (GraphQL APIs)

Why: GraphQL APIs are vulnerable to overly complex queries and enumeration attacks. WAF can enforce query depth limits and request validation.

Use case: Mobile or web applications using GraphQL.

5. Amazon Cognito Hosted UI (optional)

Why: If you expose Cognito’s hosted UI for user sign-up or login, WAF can help protect against brute-force or injection attacks.

Use case: Authentication portals for customers or employees.

6. AWS Elastic Beanstalk Environments

Why: Beanstalk apps typically use an ALB or CloudFront. Apply WAF at the load balancer or CloudFront layer to protect the Beanstalk environment.

Use case: Managed web applications deployed through Elastic Beanstalk.

7. Public-Facing EC2 Instances (if not behind ALB)

Why: Directly exposed EC2 web servers are vulnerable entry points. A WAF can protect HTTP/HTTPS traffic through an AWS WAF-enabled CloudFront distribution or ALB placed in front.

Use case: Legacy applications or custom web servers.


Rule of Thumb

If the AWS resource:
  • Accepts HTTP/HTTPS requests directly from the internet, and
  • Handles customer data or serves data to customers
...then it should be behind AWS WAF (via CloudFront, ALB, or API Gateway integration).
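As an illustrative sketch (the ARNs are placeholders), associating an existing WebACL with an ALB looks like this:

aws wafv2 associate-web-acl \
    --web-acl-arn arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/my-web-acl/EXAMPLE-ID \
    --resource-arn arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/my-alb/EXAMPLE-ID

(CloudFront is the exception: its WebACL is set on the distribution itself rather than via associate-web-acl.)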



AWS WAF Coverage Checklist


Here’s a short, practical checklist you can use to verify that all applicable AWS resources are protected by a WAF — either for internal compliance tracking (e.g., Drata) or automated auditing (e.g., AWS Config, Security Hub).


1. Identify Public-Facing Entry Points

  • List all CloudFront distributions, ALBs, API Gateways, and AppSync APIs.

Use the AWS CLI or console:

aws cloudfront list-distributions
aws elbv2 describe-load-balancers
aws apigateway get-rest-apis
aws appsync list-graphql-apis


  • Confirm which ones have a public DNS name or public IP (internet-facing).

2. Check for WAF Associations

  • For each CloudFront distribution, confirm it has an associated WAF WebACL:

aws wafv2 list-web-acls --scope CLOUDFRONT


  • For each Application Load Balancer:

aws wafv2 list-web-acls --scope REGIONAL


  • For each API Gateway or AppSync API:

aws wafv2 list-web-acls --scope REGIONAL


  • Verify that the WebACLs are actively associated with the resources above.
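To list which resources a regional WebACL is actually associated with (the ARN is a placeholder):

aws wafv2 list-resources-for-web-acl \
    --web-acl-arn arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/my-web-acl/EXAMPLE-ID \
    --resource-type APPLICATION_LOAD_BALANCER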

3. Review WAF Configuration

  • Ensure WAF WebACLs use AWS Managed Rules (e.g., AWSManagedRulesCommonRuleSet, AWSManagedRulesSQLiRuleSet).

  • Check for custom rules or rate-based rules to block brute force or scraping.

  • Verify logging is enabled to CloudWatch Logs or S3 for auditability.

4. Confirm Coverage for Customer-Data Applications

  • Identify which web apps/APIs process customer data or serve data to customers (e.g., sign-in pages, dashboards, APIs).

  • Ensure those endpoints are behind a CloudFront distribution or ALB with WAF enabled.

  • For internal-only services, document why WAF protection is not required (for audit traceability).

5. Ongoing Monitoring

  • Enable the AWS Config rule wafv2-webacl-resource-association-check
→ Automatically detects if CloudFront, ALB, or API Gateway resources lack WAF association.

  • Integrate findings into AWS Security Hub or Drata evidence collection for continuous compliance.

Extended Arguments (xargs) Unix command




xargs builds and executes command lines from standard input.

While the pipe operator (|) passes the stdout of the previous command to the stdin of the next command, xargs takes whitespace-separated strings from that stdin and converts them into arguments of the command given to xargs.

Example:

$ echo "/dirA /dirB" | xargs ls

will be converted to:

ls /dirA /dirB

-n1 causes the command specified by xargs to be executed with one argument at a time taken from its input (items can be one per line or simply separated by spaces or tabs), running the command separately for each individual item. The -n1 option means "use at most one argument per command invocation": each item from standard input (for example, a filename from a list) is passed separately to the command, so the command runs as many times as there are items in the input.

Example:

echo -e "a\nb\nc" | xargs -n1 echo

...will run the following commands:

echo a
echo b
echo c

...producing a, b and c, each on its own line. So each invocation receives only one argument from the input.

This is useful when a command should process only one item at a time, such as deleting files one by one, or when handling commands that cannot accept multiple arguments simultaneously.
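A typical example, compressing files one at a time (null-delimited so file names containing spaces are handled safely):

find . -name '*.log' -print0 | xargs -0 -n1 gzip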

-I{} replaces {} in the command with the input item (in the following example, that's the Lambda function name). Note that with -I, xargs takes one input line per invocation (which also makes -n1 unnecessary), so the tab-separated list that aws ... --output text prints on a single line is first converted to one name per line with tr.

In the following example we use -I to replace {} with the incoming argument for xargs, and then we use $ positional parameters to interpolate inputs for sh:

Let's assume we have previously defined variables like...

AWS_PROFILE=...
AWS_REGION=...
SG_ID=...


aws lambda list-functions \
    --profile "$AWS_PROFILE" \
    --region "$AWS_REGION" \
    --query "Functions[].FunctionName" \
    --output text | \
tr '\t' '\n' | \
xargs \
    -I{} \
    sh -c \
        "aws lambda get-function-configuration \
            --profile \"\$1\" \
            --region \"\$2\" \
            --function-name \"\$3\" \
            --query \"VpcConfig.SecurityGroupIds\" \
            --output text 2>/dev/null | \
        grep \
            -w \"\$4\" && \
        echo \
            \"Found in Lambda function: \$3\"" \
    _ "$AWS_PROFILE" "$AWS_REGION" {} "$SG_ID"


The sh -c command allows passing multiple arguments, which are referenced as $1, $2, $3, and $4 inside the shell script.

The underscore (_) is used as a placeholder for the $0 positional parameter inside the sh -c subshell.

When you use sh -c 'script' arg0 arg1 arg2 ..., the first argument after the script (arg0) is assigned to $0 inside the script, and the rest (arg1, arg2, etc.) are assigned to $1, $2, etc.

In this context, _ is a common convention to indicate that the $0 parameter is not used or is irrelevant. It simply fills the required position so that $1, $2, $3, and $4 map correctly to "$AWS_PROFILE", "$AWS_REGION", {} (the function name), and "$SG_ID".



Monday, 13 October 2025

AWS VPC Endpoint

 

A VPC Endpoint is a network component that enables private connectivity between AWS resources in a VPC and supported AWS services, without requiring public IP addresses or traffic to traverse the public internet.

When to Use a VPC Endpoint


Use VPC Endpoints when security and privacy are priorities, as they allow your resources in private subnets to access AWS services (like S3, DynamoDB, or other supported services) without exposure to the internet.

VPC Endpoints can improve performance, reduce latency, and simplify network architecture by removing dependencies on NAT gateways or internet gateways.

They help in scenarios where compliance or regulatory requirements dictate that traffic must remain entirely within the AWS network backbone.

Use them to save on NAT gateway or data transfer costs when large amounts of traffic are sent to or from AWS services.
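As an illustrative sketch (IDs and region are placeholders), a gateway endpoint for S3 can be created with:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234def567890 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.eu-west-1.s3 \
    --route-table-ids rtb-0abc1234def567890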

When Not to Use a VPC Endpoint


They may not be suitable if you require internet access for your workloads (e.g., accessing third-party services).

If your use case does not require private connectivity and your infrastructure already relies on internet/NAT gateways, VPC Endpoints could add unnecessary complexity.

There is an additional cost for interface endpoints, charged per hour and per GB of data transferred, which may be a consideration for cost-sensitive environments.

Service support is not universal: gateway endpoints only work for S3 and DynamoDB, and not all AWS services support PrivateLink/interface endpoints.

Alternatives to VPC Endpoints


NAT Gateway or NAT Instance: Provides private subnets with internet access, but all traffic goes over the public internet and incurs NAT gateway/data transfer costs.

VPN Connection or AWS Direct Connect: Used for private connectivity between on-premises networks and AWS VPCs. These are more suitable for hybrid cloud requirements and broader connectivity scenarios.

Internet Gateway: Needed if your resources require general internet access, though this exposes them to the public internet.

---

Policy types in AWS

 


There are several types of AWS policies, but the primary and most commonly referenced categories are identity-based policies and resource-based policies.

Main AWS Policy Types



Identity-based policies are attached to AWS IAM identities (users, groups, or roles) and define what actions those entities can perform on which resources.
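Identity-based policy example (the bucket name is illustrative), allowing a principal to list one bucket and read its objects:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::yourbucketname",
        "arn:aws:s3:::yourbucketname/*"
      ]
    }
  ]
}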


Resource-based policies are attached directly to AWS resources (such as S3 buckets or SNS topics), specifying which principals (identities or accounts) can access those resources and what actions are permitted.

Resource-based policy example:

{
   "Version": "2012-10-17",
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Principal": "*",
       "Action": "s3:GetObject",
       "Effect": "Allow",
       "Resource": ["arn:aws:s3:::yourbucketname",
                    "arn:aws:s3:::yourbucketname/*"],
       "Condition": {
         "StringEquals": {
           "aws:SourceVpce": "vpce-1a2b3c4d"
         }
       }
     }
   ]
}



Other AWS Policy Types

In addition to the above, AWS supports several other policy types, including:
  • Managed policies (AWS managed and customer managed)
  • Inline policies (directly embedded on a single identity)
  • Permissions boundaries (set maximum permissions for identities)
  • Service Control Policies (SCPs, used in AWS Organizations)
  • Access Control Lists (ACLs, primarily for resources like S3 buckets)
  • Session policies (restrict permissions for sessions created with temporary credentials)

While identity-based and resource-based are the two fundamental categories most often discussed, the broader IAM ecosystem incorporates additional forms for more advanced governance and restrictions.




How do resource-based and identity-based policies differ?


Resource-based and identity-based policies in AWS differ primarily in their attachment location and in how they control access permissions to AWS resources.

Key Differences


Identity-based policies are attached to IAM entities (users, groups, or roles) and specify what actions these identities can perform on which resources. For example, an IAM user can have a policy that permits reading from specific DynamoDB tables or starting EC2 instances.

Resource-based policies are attached directly to AWS resources (such as S3 buckets, SNS topics, or KMS keys). These policies define which principals (users, roles, accounts) can access the resource and what actions they are allowed to perform. Resource-based policies allow for fine-grained control, including granting access to principals outside of the resource owner’s AWS account.

Attachment and Usage


Identity-based policies are managed at the IAM level and generally offer broader access control for multiple resources through one principal.

Resource-based policies are applied specifically to resources and are used when the access control needs to be defined at the resource level, possibly for cross-account or external identity access.

Policy Evaluation


When a request to access a resource is made, AWS evaluates all applicable identity-based and resource-based policies together.

An explicit 'Deny' found in any policy will override any 'Allow' statements, ensuring comprehensive and secure permission management.

Both types can be used separately or together, depending on security architecture and the granularity of control required for identities and resources.


If resource based policy allows access to some user, do we need a separate identity-based policy which allows access to that resource to be attached to that user?


If a resource-based policy allows access to a user, there is generally no need to attach a separate identity-based policy for that specific action—access will be granted solely based on the resource-based policy's allowance.

Details on Policy Evaluation Logic


When a request is made, AWS checks both identity-based and resource-based policies that apply to the request.

If any policy contains an explicit 'Deny', access is denied regardless of any 'Allow' statements elsewhere.

If at least one policy (whether identity-based or resource-based) contains an 'Allow' and there are no explicit Deny statements, the action is permitted.

For resource-based policies, the permission is granted directly to the designated principal (user or role), even if the user or role does not have a corresponding identity-based policy that allows the action.

In cross-account scenarios, both the identity-based policy in the requesting account and the resource-based policy on the target resource must allow the action; otherwise, access is denied.

A resource-based policy alone can therefore grant a user access to that specific resource and action, without a separate identity-based policy: the access works as long as the resource-based policy allows it and there is no explicit deny elsewhere. Identity-based policies become useful when finer-grained or multiple permissions are needed across various resources.

This means a user with no identity-based permission, but with permission in a resource-based policy, can still access that specific resource unless a deny blocks them. However, in cross-account situations, both a corresponding identity-based policy in the user's account and a resource-based policy in the resource owner's account must allow the action for access to succeed.

Are the policies listed under IAM in the AWS Console only identity-based policies?


Yes, the policies listed in the IAM section of the AWS Console are only identity-based policies, specifically managed policies and inline policies that are attached to IAM users, groups, or roles.

IAM policies you see under "Policies" are either AWS managed, customer managed, or inline identity-based policies.

Resource-based policies (such as S3 bucket policies, SNS topic policies, or Lambda resource policies) are not centrally listed in IAM “Policies” in the Console; instead, they are managed from the respective resource consoles (e.g., via the S3 or Lambda management screens).

The IAM Console does not display resource-based policies in the Policies list, since these are stored on resources, not IAM identities.

To summarize, only identity-based policies (managed and inline) are listed in the IAM policies view in the AWS Console. Resource-based policies are managed and reviewed from the console page of each AWS service resource.

---