Serverless Framework is a tool designed to streamline the development and deployment of serverless applications, including functions and infrastructure, by abstracting away the need to manage servers.
We define the desired infrastructure in serverless.yml files and then deploy it by executing:
sls deploy
This command compiles the serverless.yml file into a larger AWS CloudFormation template, which is automatically populated with values from the YAML.
The sls deploy command in the Serverless Framework is effectively idempotent at the infrastructure level, but with important nuances:
How it works:
sls deploy packages our service and deploys it via AWS CloudFormation. CloudFormation itself is designed to be idempotent: if we deploy the same stack with the same configuration and code, AWS will detect no changes and will not modify our resources. If there are changes, only those changes are applied.
What this means:
Repeated runs of sls deploy with no changes will not create duplicate resources or apply unnecessary updates.
If we make changes (to code, configuration, or infrastructure), only the differences are deployed.
Side effects in Lambda code: While infrastructure deployment is idempotent, our Lambda functions themselves must be written to handle repeated invocations safely if we want end-to-end idempotency. The deployment command itself does not guarantee idempotency at the application logic level.
Limitations:
If we use sls deploy function -f <function-name> (to update a single function's code without going through CloudFormation), this command simply swaps out the function code and is also idempotent in the sense that re-uploading the same code does not cause issues.
If we use plugins or custom resources, their behavior may not always be idempotent unless explicitly designed that way.
To conclude:
- sls deploy is idempotent for infrastructure: Re-running it with no changes is safe and does not cause duplicate resources or unintended side effects at the CloudFormation level.
- Application-level idempotency is our responsibility: ensure our Lambda functions and integrations handle repeated events if that is a requirement for our use case (see the sketch below).
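As a sketch of such application-level idempotency, a handler can derive a deduplication key from the event and refuse to process it twice. This is a minimal hypothetical example; the processed-events table, the key scheme, and the use of DynamoDB are assumptions, not part of any config above:

// lambda.js - hypothetical idempotent handler skeleton
const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

exports.handler = async (event) => {
  // derive a deduplication key from the incoming event (key scheme is an assumption)
  const key = event.Records?.[0]?.Sns?.MessageId ?? JSON.stringify(event);
  try {
    // conditional put fails if this event was already processed
    await client.send(new PutItemCommand({
      TableName: "processed-events", // assumed table with partition key "id"
      Item: { id: { S: key } },
      ConditionExpression: "attribute_not_exists(id)",
    }));
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") return; // duplicate event, skip
    throw err;
  }
  // ...actual work goes here...
};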
Serverless Yaml Configuration File
A serverless.yml file defines a serverless service. It is a good idea to break up a serverless project into multiple services, each defined by its own serverless.yml file, rather than putting everything into one big infrastructure stack.
Example:
- database, e.g. DynamoDB
- REST API, e.g. one that handles a submitted web form and stores the data in DynamoDB
- front-end website, e.g. a React app hosted in an S3 bucket
Services can be deployed in multiple regions. (Multi-region architecture is supported)
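A split like the one above could be laid out as one repository with a separate service (and serverless.yml) per concern; the directory names here are hypothetical:

my-app/
  database/serverless.yml      # DynamoDB tables
  rest-api/serverless.yml      # Lambda + API Gateway
  front-end/serverless.yml     # S3 bucket for the React app

Each service is then deployed independently with sls deploy from its own directory.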
serverless.yml example:
service: my-service
frameworkVersion: "3"
useDotenv: true

plugins:
  - serverless-plugin-log-subscription
  - serverless-dotenv-plugin

provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1
  memorySize: 512
  timeout: 900
  deploymentBucket:
    name: my-serverless-deployments
  vpc:
    securityGroupIds:
      - "sg-0123cf34f6c6354cb"
    subnetIds:
      - "subnet-01a23493f9e755207"
      - "subnet-02b234dbd7d66d33c"
      - "subnet-03c234712e99ae1fb"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - lambda:InvokeFunction
          Resource: arn:aws:lambda:us-east-1:123456789099:function:my-database

package:
  patterns:
    - "out/**"
    - "utils.js"
    - "aws-sdk"

functions:
  my-function:
    handler: lambda.handler
    events:
      - schedule:
          name: "my-service-${opt:stage, self:provider.stage}"
          description: "Periodically run my-service lambdas"
          rate: rate(4 hours)
          inputTransformer:
            inputTemplate: '{"Records":[{"EventSource":"aws:rate","EventVersion":"1.0","EventSubscriptionArn":"arn:aws:sns:us-east-1:{{accountId}}:ExampleTopic","Sns":{"Type":"Notification","MessageId":"95df01b4-1234-5678-9903-4c221d41eb5e","TopicArn":"arn:aws:sns:us-east-1:123456789012:ExampleTopic","Subject":"example subject","Message":"example message","Timestamp":"1970-01-01T00:00:00.000Z","SignatureVersion":"1","Signature":"EXAMPLE","SigningCertUrl":"EXAMPLE","UnsubscribeUrl":"EXAMPLE","MessageAttributes":{"type":{"Type":"String","Value":"populate_unsyncronised"},"count":{"Type":"Number","Value":"400"}}}}]}'
      - sns:
          arn: arn:aws:sns:us-east-2:123456789099:trigger-my-service
      - http:
          method: get # placeholder; an http event requires at least a method and path
          path: /my-function

custom:
  dotenv:
    dotenvParser: env.loader.js
  logSubscription:
    enabled: true
    destinationArn: ${env:KINESIS_SUBSCRIPTION_STREAM}
    roleArn: ${env:KINESIS_SUBSCRIPTION_ROLE}
- service: name of the service
- useDotenv: boolean (true|false)
- configValidationMode: error
- frameworkVersion: e.g. "3"
- provider:
  - name: provider name, e.g. aws
  - runtime: e.g. nodejs18.x
  - region: e.g. us-east-1
  - memorySize: how much memory the machine running the Lambda will have, e.g. 1024 (MB). It is good to check the actual memory usage and adjust the required memory size; downsizing can lower the costs!
  - timeout: (number) e.g. 60 [seconds] - the maximum amount of time, in seconds, that a serverless function (such as an AWS Lambda function) is allowed to run before it is forcibly terminated by the AWS platform. This setting ensures that our function does not run indefinitely: if the execution exceeds the limit, the platform stops it and returns a timeout error. The timeout property controls resource usage and prevents runaway executions, which is especially important for functions that interact with external services or perform long-running tasks. If not specified, AWS Lambda's own default is 3 seconds (the Serverless Framework sets 6 seconds by default), and the maximum is 900 seconds (15 minutes).
  - httpApi:
    - id:
  - apiGateway:
    - minimumCompressionSize: 1024
    - shouldStartNameWithService: true
    - restApiId: ""
    - restApiRootResourceId: ""
  - stage: name of the environment, e.g. production
  - iamManagedPolicies: a list of ARNs of policies that will be attached to the Lambda functions' execution role, e.g. a policy which allows access to S3 buckets, etc.
  - lambdaHashingVersion: e.g. 20201221
  - environment: dictionary of environment variable names and values
  - vpc:
    - securityGroupIds: list
    - subnetIds: typically a list of private subnets with a NAT gateway
- functions: a dictionary which defines the AWS Lambda functions that are deployed as part of this Serverless service.
  - <function_name>: string, a logical name of the function (e.g., my-function). This name is used to reference the function within the Serverless Framework and in deployment outputs. The name of the provisioned Lambda function has the format <service_name>-<stage>-<function_name>. Each function entry under functions specifies:
    - handler: tells Serverless which file and exported function to execute as the Lambda entry point (e.g., src/fn/lambda.handler, which points to the handler export in the src/fn/lambda module). When the function is invoked, AWS Lambda executes this handler.
- events - (optional, array) a list of events that trigger this function
- Some triggers:
- schedule, scheduled events: for periodic invocation (cron-like jobs)
- sns: for invocation via an AWS SNS topic
- http / httpApi: for HTTP endpoints (API Gateway)
- s3: for S3 bucket events
- messages from a Kafka topic in an MSK cluster (msk)
- If the array is empty, that means that the function currently has no event sources configured and will not be triggered automatically by any AWS event.
- plugins: a list of serverless plugins e.g.
- serverless-webpack
- serverless-esbuild
- serverless-offline [https://www.serverless.com/plugins/serverless-offline, https://github.com/dherault/serverless-offline]
- emulates AWS Lambda and API Gateway. It starts an HTTP server that handles the request lifecycle like API Gateway does and invokes the handlers.
- sls offline --help
- serverless-plugin-log-subscription
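A plugin can be installed and registered in one step with the plugin command, e.g.:
sls plugin install -n serverless-offline
This adds the package to devDependencies and appends it to the plugins list in serverless.yml.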
- custom: section for serverless plugin settings, e.g. for esbuild, logSubscription, webpack, etc.
  - example: the serverless-plugin-log-subscription plugin has the settings:
    logSubscription: {
      enabled: true,
      destinationArn: process.env.SUBSCRIPTION_STREAM,
      roleArn: process.env.SUBSCRIPTION_ROLE,
    }
  - example: serverless-domain-manager - used to define stage-specific domains:
    domains: {
      production: {
        url: "app.api.example.com",
        certificateArn: "arn:aws:acm:us-east-2:123456789012:certificate/a8f8f8e2-95fe-4934-abf2-19dc08138f1f",
      },
      staging: {
        url: "app.staging.example.com",
        certificateArn: "arn:aws:acm:us-east-2:123456789012:certificate/a32e9708-7aeb-495b-87b1-8532a2592eeb",
      },
      dev: {
        url: "",
        certificateArn: "",
      },
    }
Lambda in VPC
A Lambda function needs to be associated with a subnet only when we configure it to run inside a VPC (Virtual Private Cloud). When we enable VPC access for the Lambda function we must specify:
- at least one subnet (for ENI creation)
- security groups (for network rules)
When a Lambda is in a VPC, AWS creates an Elastic Network Interface (ENI) in the specified subnet. That ENI allows our Lambda to access private resources, such as:
- RDS databases
- EC2 instances
- Internal APIs
- Private endpoints (e.g., S3 via VPC endpoint)
If our Lambda doesn’t need to access private resources, we usually don’t need to assign a VPC.
Assigning a Lambda to a VPC removes its default internet access — unless:
- The subnet is public (with a route to an internet gateway), or
- We have a NAT Gateway in a public subnet, and route private subnets to it
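If only some functions need access to private resources, the vpc block can also be set per function instead of at the provider level (a minimal sketch; the IDs are placeholders):

functions:
  db-writer:
    handler: lambda.handler
    vpc:
      securityGroupIds:
        - sg-00000000000000000
      subnetIds:
        - subnet-00000000000000000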
Handler
The lambda.handler value in the serverless.yml file specifies which code to execute when the Lambda function is triggered.
Specifically, lambda.handler means:
- The Lambda function’s code entry point is in a file named lambda.js (or lambda/index.js if it’s a directory named lambda).
- The exported function named handler inside that file will be invoked by AWS Lambda.
So, AWS Lambda will look for a file called lambda.js (or lambda/index.js), and inside it, the function handler should be exported, for example:
// lambda.js
exports.handler = async (event) => {
// our code here
};
The name of the deployed Lambda function will be constructed using the following pattern:
<service-name>-<stage>-<function-name>
Based on our serverless.yml:
- service name: my-service
- function name: my-function
- stage: If not explicitly defined, it defaults to dev (or whatever is provided at deploy time with --stage).
So, the default deployed Lambda function name will be:
my-service-dev-my-function
If we deploy with a different stage (e.g., --stage production), the name would be:
my-service-production-my-function
We can always override this default by providing a name property under the function's configuration, but with our current config, the above naming applies.
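For example (a minimal sketch; the custom name is arbitrary):

functions:
  my-function:
    name: my-custom-lambda-name-${opt:stage, 'dev'}
    handler: lambda.handler

With a name property set, the <service>-<stage>-<function> convention no longer applies to that function.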
npm i -g serverless       # install the Serverless Framework CLI globally
npm install               # install project dependencies
sls deploy --stage <env>  # deploy the service to the given stage
Triggers
The serverless.ts config does not necessarily show the triggers for the Lambda functions directly.
The triggers (such as HTTP endpoints, S3 events, schedules, etc.) are usually defined inside each function's configuration, typically in the @functions/index imports.
In this file, the functions property imports and spreads { functionA, functionB, ...} from @functions/index:
import { functionA, functionB, ... } from '@functions/index';
To see what triggers each Lambda, we need to look at the definitions in those files.
Common triggers include:
- events: [{ http: ... }] for API Gateway REST endpoints
- events: [{ httpApi: ... }] for HTTP API endpoints
- events: [{ s3: ... }] for S3 events
- events: [{ schedule: ... }] for scheduled (cron) events
Example:
functions/functionA/index.ts:
import { handlerPath } from "@libs/handler-resolver";
export default {
handler: `${handlerPath(__dirname)}/handler.main`,
events: [
{
httpApi: {
method: "get",
path: "/xyz/fnA",
},
},
{
http: {
method: "get",
path: "/xyz/fnA",
private: true
},
},
],
};
In our config, both are referenced, but which is used depends on how each Lambda function's event triggers are defined (http vs httpApi in their event definitions).
Example for post method:
events: [
{
http: {
method: "post",
path: "/",
private: true,
},
},
],
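Other triggers from the list above follow the same shape, e.g. an S3 event and a scheduled event (a sketch; the bucket name is a placeholder):

events: [
  {
    s3: {
      bucket: "my-upload-bucket", // placeholder bucket name
      event: "s3:ObjectCreated:*",
    },
  },
  {
    schedule: "rate(1 hour)", // cron-like periodic trigger
  },
],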
Amazon Managed Streaming for Apache Kafka (MSK) cluster (msk)
events: [
{
msk: {
consumerGroupId: "my-consumer-group-id",
arn: "arn:aws:kafka:us-east-1:12345678901234:cluster/my-cluster/1702ac54-1755-4175-985f-6f3d3193c4e4-5",
batchSize: 10000,
maximumBatchingWindow: 60,
topic: "mongodb.default.streamhatchet.my-topic",
//@ts-ignore
startingPosition: "AT_TIMESTAMP",
startingPositionTimestamp: "1695945601",
},
},
],
This object configures the Lambda to be triggered by messages from a Kafka topic in an MSK cluster. The properties include:
- consumerGroupId: The Kafka consumer group ID used by the Lambda function.
- arn: The Amazon Resource Name of the MSK cluster.
- batchSize: The maximum number of records to retrieve in a single batch (here, up to 10,000).
- maximumBatchingWindow: The maximum amount of time (in seconds) to wait for the batch to fill before invoking the function (here, 60 seconds).
- topic: The Kafka topic to consume messages from.
- startingPosition: (with a @ts-ignore comment to suppress TypeScript errors) Specifies where to start reading in the stream. "AT_TIMESTAMP" means to start at a specific timestamp.
- startingPositionTimestamp: The Unix timestamp (in seconds) indicating the point in time from which to start consuming messages.
This configuration sets up a Lambda function to process large batches of messages from a specific Kafka topic in an MSK cluster, starting from a given timestamp. The use of @ts-ignore suggests that startingPosition or startingPositionTimestamp may not be covered by the official type definitions, but they are required for this use case.
Deploying in different environments
In Serverless Framework, the default value of stage in the provider block is:
stage: dev
So, if we don’t explicitly provide a --stage flag when running sls deploy, and we haven’t set a stage: in our serverless.yml, it defaults to:
provider:
stage: dev
Example Behavior
serverless.yml:
service: my-service
provider:
name: aws
runtime: nodejs18.x
# No stage specified
Deploy Command:
sls deploy
Result: Serverless deploys to the dev stage. Resources are named accordingly (e.g., my-service-dev-functionName).
We can override the stage using:
sls deploy --stage staging
Or by setting it explicitly in serverless.yml:
provider:
stage: staging
Or dynamically:
provider:
stage: ${opt:stage, 'dev'}
This makes 'dev' the default, but lets us override it like this:
sls deploy --stage prod
How to use different AWS regions as different environments?
To deploy our Serverless Framework project to different AWS regions based on the stage (e.g., eu-west-2 for staging, us-east-2 for production), we can dynamically set the provider.region using ${opt:stage} and a simple mapping.
1. Update serverless.ts to use per-stage region mapping
Here’s how to configure it:
import type { AWS } from '@serverless/typescript';
const stage = process.env.STAGE || 'dev'; // fallback if needed
const regionMap: Record<string, string> = {
staging: 'eu-west-2',
prod: 'us-east-2',
};
const serverlessConfiguration: AWS = {
service: 'my-service',
frameworkVersion: '3',
provider: {
name: 'aws',
runtime: 'nodejs18.x',
stage: '${opt:stage, "dev"}',
    region: regionMap[stage] || 'us-west-1', // fallback region when the stage is not in the map
environment: {
STAGE: '${opt:stage, "dev"}',
},
},
functions: {
hello: {
handler: 'src/handler.hello',
events: [
{
http: {
method: 'get',
path: 'hello',
},
},
],
},
},
};
module.exports = serverlessConfiguration;
2. Set the STAGE env var before deployment
Since the region is computed using process.env.STAGE, pass it in like this:
STAGE=staging sls deploy --stage staging
STAGE=prod sls deploy --stage prod
We can also tweak the logic to read the stage from the CLI arguments directly in the TypeScript file:
Alternative: read the stage from the CLI in code
const inputStage = process.env.SLS_STAGE || 'dev'; // fallback
// note: this matches only the `--stage=staging` form, not `--stage staging`
const stageFromCLI = (process.argv.find(arg => arg.startsWith('--stage=')) || '').split('=')[1];
const stage = stageFromCLI || inputStage;
const regionMap: Record<string, string> = {
staging: 'eu-west-2',
prod: 'us-east-2',
};
Or for simpler projects:
const stage = process.env.STAGE || 'dev';
const region = stage === 'prod' ? 'us-east-2' : 'eu-west-2';
Then plug into the config:
provider: {
region,
stage,
...
}
Example Deployment
STAGE=staging sls deploy --stage staging
# Deploys to eu-west-2
STAGE=prod sls deploy --stage prod
# Deploys to us-east-2
Optional: Output region in logs
console.log(`Deploying stage: ${stage}, region: ${region}`);
Serverless State
A serverless.yml file can define outputs, e.g. in ServiceA:
service: service-a
custom:
  env: test
  db_name: my_db_${self:custom.env}
outputs:
  db_name: ${self:custom.db_name}
If we have multiple services within the app (multiple serverless.yml files), we can share the state of each service by adding an app attribute, set to the same value (the app name), to each yaml file. For example, if both ServiceA and ServiceB contain:
app: demo
Then ServiceB can read the state from ServiceA like:
service: service-b
...
${state:service-a.db_name}
...
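ServiceB can then use the shared value like any other variable, for example to inject it into its functions' environment (a sketch reusing the same state reference):

service: service-b
app: demo
provider:
  environment:
    DB_NAME: ${state:service-a.db_name}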
Testing Lambda Function Locally
The command:
sls invoke local --function main --path tests/test_payload.json
...invokes a Serverless function named "main" locally, using the event payload defined in the tests/test_payload.json file. This is a convenient way to test Lambda functions without deploying them to AWS.
- sls invoke local: This invokes the specified function locally, simulating the AWS Lambda environment.
- --function main: Specifies the function to be invoked locally as "main".
- --path tests/test_payload.json: Indicates the path to a JSON file containing the event data to be passed to the function. This event data will be used as input to the function when it runs locally.
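The payload file simply contains the JSON event passed to the handler; its shape depends entirely on the trigger being simulated. A minimal hypothetical tests/test_payload.json could be:

{
  "Records": [
    { "body": "hello from a local test" }
  ]
}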
serverless print (sls print)
The command npx serverless print (or sls print) outputs our fully resolved serverless.yml configuration to the terminal, with all variables and references evaluated and replaced by their actual values. This is useful for debugging and verifying what our final configuration looks like before deployment, especially when using environment variables, custom variables, or complex references.
Setting Environment Variables for Serverless Configuration
In Serverless versions before 2.26.0
Add serverless-dotenv-plugin to dev dependencies in package.json.
.env (in project's root):
MY_ENV_VAR=value
serverless.yaml:
plugins:
  - serverless-dotenv-plugin
provider:
  environment:
    MY_ENV_VAR: ${env:MY_ENV_VAR}
This is optional:
custom:
  dotenv:
    path: .env
    include:
      - MY_ENV_VAR
To find out what the serverless config file looks like after env var interpolation:
% rm -rf .serverless
% npx serverless print
To test the function locally:
% npx serverless invoke local --function myFunction
In Serverless v2.26.0 and later
The useDotenv property was introduced in Serverless Framework v2.26.0 and became the standard way to load .env files for variable resolution in our serverless.yml.
The useDotenv property in our serverless.yml tells the Serverless Framework to automatically load environment variables from .env files, making them available for variable resolution (e.g., ${env:MY_VAR}) inside our serverless.yml configuration.
When we set useDotenv: true at the top level of our serverless.yml, Serverless will look for a .env file (and, if present, a stage-specific file like .env.dev or .env.production) in our service directory and load those variables.
These variables can then be referenced anywhere in our serverless.yml using ${env:VAR_NAME}.
If both a generic .env and a stage-specific .env.{stage} file exist, the stage-specific file takes precedence.
In Serverless v4, we can also set useDotenv to a path to specify a custom location for our .env files.
The variables loaded this way are available for variable resolution in our config file, but are not automatically injected into our Lambda functions’ environment. To do that, we must explicitly map them in provider.environment or functions.environment.
With useDotenv: true, .env files are also automatically excluded from deployment packages to help prevent leaking secrets.
serverless.yaml:
useDotenv: true
provider:
  environment:
    MY_ENV_VAR: ${env:MY_ENV_VAR}
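With stage-specific files, deploying with --stage production prefers .env.production over .env (the values here are hypothetical):

# .env
MY_ENV_VAR=dev-value

# .env.production
MY_ENV_VAR=prod-value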
How to decommission (destroy) Serverless resources?
To decommission (destroy) a Lambda and all associated resources defined in a serverless.yaml, we can use the sls remove command.
sls remove will destroy all resources defined in the serverless.yaml, not just the Lambda function. This includes:
- The Lambda function(s)
- API Gateway endpoints
- CloudWatch Log Groups
- IAM roles/policies created for the service
- S3 buckets created by Serverless (like for deployments)
GitHub Actions Workflow for Decommissioning (Destruction)
Here’s a typical GitHub Actions workflow job that destroys a Serverless service:
destroy:
  name: Destroy Serverless Service
  runs-on: ubuntu-latest
  environment: production # optional, if using GitHub Environments
  steps:
    - name: Checkout repository
      uses: actions/checkout@v3
    - name: Install Serverless
      run: npm install -g serverless
    - name: Install dependencies
      run: yarn install --frozen-lockfile
    - name: Configure AWS credentials
      run: serverless config credentials --provider aws --key ${{ secrets.AWS_ACCESS_KEY_ID }} --secret ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    - name: Remove Serverless service
      run: sls remove
What sls remove Does (and Doesn’t Do)
It removes:
- Lambda function(s)
- API Gateway endpoints
- IAM roles created by the service
- CloudWatch log groups
- EventBridge rules / S3 triggers, etc. defined in serverless.yaml
It does NOT remove:
- Resources not created by Serverless, or added manually in AWS Console
- Resources created in other stacks (e.g., shared VPCs, databases, etc.)
- External secrets in AWS Secrets Manager or SSM
Tips
We need to make sure serverless.yaml still matches the deployed version — if we’ve removed resources from the config after deployment, sls remove won’t know about them.
If we’ve deployed with a custom stage, specify it during removal:
sls remove --stage prod
Consider confirming the AWS region if not default:
sls remove --stage prod --region eu-west-2