
Thursday, 1 August 2024

AWS CodeDeploy

 



The goal of Continuous Delivery is to have seamless, automated iterations of:
  • writing the code (updates)
  • testing
  • releasing/deploying to production - across multiple EC2 instances

AWS CodeDeploy:

  • fully managed deployment service provided by Amazon Web Services (AWS)
  • automates the process of deploying applications to various compute services such as:
    • Amazon EC2 instances
    • AWS Lambda functions
    • on-premises servers
  • helps developers deploy code quickly and efficiently, while avoiding downtime during application updates and maintaining the integrity of the application environment. The same setup is used to release new code to:
    • dev instances for debugging
    • staging instances for testing
    • production - for release to customers
  • helps in maintaining high availability and reliability of applications while simplifying the deployment process, making it an essential tool for DevOps practices and continuous delivery in the AWS ecosystem


Key features and benefits of AWS CodeDeploy:

  • Automated Deployments: Automates the process of deploying applications, making it easier and faster to release new features and updates.
  • Scalability: Can deploy to a single instance or thousands of instances, allowing it to scale according to the needs of the application.
  • Flexibility: Supports a wide range of deployment types, including:
    • in-place deployments
    • blue/green deployments
  • Minimized Downtime: Ensures minimal disruption to services during deployments by allowing for rolling updates and automated rollback on failure.
  • Monitoring and Reporting: Provides detailed monitoring and reporting of deployment status, enabling developers to track the progress and health of deployments.
  • Integration with CI/CD Pipelines: Easily integrates with other AWS services and third-party tools to create a comprehensive continuous integration and continuous delivery (CI/CD) pipeline.
  • Support for Various Application Types: Can be used to deploy a variety of application types, including serverless applications, containerized applications, and traditional server-based applications.
  • On-Premises Support: Allows for deployments not only to AWS resources but also to on-premises servers, enabling hybrid cloud deployments.


The benefits of using CodeDeploy are:

  • The whole process is automated; there is no need to keep track of what has been deployed to which instance.
  • The same revision is deployed consistently to all instances in all environments.
  • The application is kept highly available while rolling updates are performed across all EC2 instances.
  • It prevents downtime.

CodeDeploy >> Deploy >> Applications >> Create application




Application configuration:
  • Application name e.g. echo
  • Compute platform
    • EC2/On-premises - we'll select this one
    • AWS Lambda
    • Amazon ECS
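The same step can also be performed from the AWS CLI; a minimal sketch, assuming the CLI is configured with credentials for the target account:

```shell
# Create a CodeDeploy application for the EC2/On-premises compute platform.
# "echo" is the example application name used above.
aws deploy create-application \
  --application-name echo \
  --compute-platform Server
```

`--compute-platform` accepts Server (EC2/on-premises), Lambda or ECS, matching the three options in the console.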


Once the application is created, we need to create one or more deployment groups in order to be able to deploy it.
We can create multiple deployment groups, e.g. for development, staging and production.




Before this, we need to create a role which allows CodeDeploy to access AWS resources. To do this, go to IAM >> Access Management >> Roles >> Create role, select CodeDeploy, then select one of three use cases:
  • CodeDeploy. Allows CodeDeploy to call AWS services such as Auto Scaling on our behalf
  • CodeDeploy - ECS. Allows CodeDeploy to read S3 objects, invoke Lambda functions, publish to SNS topics and update ECS services on your behalf
  • CodeDeploy for Lambda. Allows CodeDeploy to route traffic to a new version of an AWS Lambda function version on your behalf 



Next, we need to select permissions.



We can filter the policy list to show only those policies related to CodeDeploy.

For EC2/On-Premises deployments, we need to attach the AWSCodeDeployRole policy. It provides the permissions for your service role to:
  • Read the tags on your instances or identify your Amazon EC2 instances by Amazon EC2 Auto Scaling group names.
  • Read, create, update, and delete Amazon EC2 Auto Scaling groups, lifecycle hooks, and scaling policies.
  • Publish information to Amazon SNS topics.
  • Retrieve information about CloudWatch alarms.
  • Read and update Elastic Load Balancing.
AWSCodeDeployRole:
  • Is intended for the CodeDeploy service itself, not for the EC2 instances; instances need their own instance profile (e.g. with S3 read access to the revision bucket).
  • Provides the CodeDeploy service access to expand tags and interact with Auto Scaling on our behalf.



We can then add tags and review created role where we can set role name e.g. MyCodeDeployRole.
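The same role can be created from the CLI; a sketch in which the trust policy allows the CodeDeploy service to assume the role:

```shell
# Trust policy letting the CodeDeploy service assume the role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codedeploy.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name MyCodeDeployRole \
  --assume-role-policy-document file://trust.json

# Attach the AWS-managed policy for EC2/On-premises deployments.
aws iam attach-role-policy \
  --role-name MyCodeDeployRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
```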





We can now proceed with creating a deployment group:

CodeDeploy >> Applications >> select our application (e.g. echo) >> in Deployment groups tab click on Create deployment group

  • Name: e.g. Dev, Beta, Prod
  • Service role. Choose a service role that grants AWS CodeDeploy access to your target instances. We need to type in the service role ARN, which can be copied from the role's page in IAM, e.g. arn:aws:iam::231993119338:role/MyCodeDeployRole
  • Deployment type. Choose how to deploy your application:
    • In-place. Updates the instances in the deployment group with the latest application revisions. During a deployment, each instance will be briefly taken offline for its update
    • Blue/green. Replaces the instances in the deployment group with new instances and deploys the latest application revision to them. After instances in the replacement environment are registered with a load balancer, instances from the original environment are deregistered and can be terminated. 
  • Environment configuration. Select any combination of Amazon EC2 Auto Scaling groups, Amazon EC2 instances and on-premises instances to add to this deployment. 
    • Amazon EC2 Auto Scaling groups. You can select up to 10 Amazon EC2 Auto Scaling groups to deploy your application revision to.
    • Amazon EC2 instances. You can add up to three groups of tags for EC2 instances to this deployment group. We can specify tags and values associated with those EC2 instances we want to be included. e.g. Key = Name and Value = BetaBox or Key = Environment and Value = Beta. 
    • On-premises instances. You can add up to three groups of tags for on-premises instances to this deployment group.

  • Agent configuration with AWS Systems Manager. AWS Systems Manager will install the CodeDeploy agent on all instances and update it based on the configured frequency. Install AWS CodeDeploy Agent:
    • Never
    • Only once
    • Now and schedule updates e.g. every 14 days
  • Deployment settings - Deployment configuration. Choose from a list of default and custom deployment configurations. A deployment configuration is a set of rules that determines how fast an application will be deployed and the success or failure conditions for a deployment. 
    • CodeDeployDefault.AllAtOnce
    • CodeDeployDefault.HalfAtATime
    • CodeDeployDefault.OneAtATime
Here is a more comprehensive list of all Deployment Configurations:




  • Load balancer. Select a load balancer to manage incoming traffic during the deployment process. The load balancer blocks traffic from each instance while it's being deployed to and allows traffic to it again after the deployment succeeds.
    • Enable load balancing - check box. 
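The settings above map directly onto a single CLI call; a sketch using the example names from this section (application echo, group Beta, tag Environment=Beta):

```shell
# Create a deployment group targeting EC2 instances tagged Environment=Beta.
aws deploy create-deployment-group \
  --application-name echo \
  --deployment-group-name Beta \
  --service-role-arn arn:aws:iam::231993119338:role/MyCodeDeployRole \
  --deployment-config-name CodeDeployDefault.HalfAtATime \
  --ec2-tag-filters Key=Environment,Value=Beta,Type=KEY_AND_VALUE
```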


When clicking on Create deployment group, you might get the following error:


To fix this:






Now we need to create a deployment.
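A deployment can be created from the CLI as well; a sketch, assuming the application revision has been zipped and uploaded to S3 (the bucket and key names below are hypothetical):

```shell
# my-deploy-bucket/echo-app.zip is a hypothetical revision location.
aws deploy create-deployment \
  --application-name echo \
  --deployment-group-name Beta \
  --s3-location bucket=my-deploy-bucket,key=echo-app.zip,bundleType=zip
```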

TBC...

 

CodeDeploy agent

 
 
The CodeDeploy agent is an application that needs to be installed and running on every EC2 instance onto which the CodeDeploy service deploys our application.

The agent can already be present on a custom AMI we build, or we can add a command that installs it to the user data configuration before launching the EC2 instance.
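A minimal user-data sketch for Amazon Linux, following the installation steps from the AWS documentation; the region embedded in the S3 bucket name must match the instance's region (us-east-1 is assumed here):

```shell
#!/bin/bash
# EC2 user data: install and start the CodeDeploy agent on Amazon Linux.
# The install bundle bucket name embeds the region; us-east-1 is assumed.
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
systemctl start codedeploy-agent
```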
 

Application specification (AppSpec) file

 
 
During the deployment, the CodeDeploy agent unpacks the revision archive and copies its content to the destination paths specified in the AppSpec file.

Through the AppSpec file's lifecycle event hooks, we can also control what gets executed during the deployment.
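A minimal appspec.yml sketch for an EC2/on-premises deployment; the destination path and script names are hypothetical and the scripts must exist inside the revision archive:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/echo
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
      runas: root
  AfterInstall:
    - location: scripts/start_server.sh
      timeout: 60
      runas: root
```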

https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html
https://stackoverflow.com/questions/53103139/the-codedeploy-agent-did-not-find-an-appspec-file-within-the-unpacked-revision-d
https://cloudacademy.com/blog/how-to-deploy-application-code-from-s3-using-aws-codedeploy/
https://stackoverflow.com/questions/42000069/deployment-getting-failed-in-aws-code-deploy-before-install
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
https://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html
https://stackoverflow.com/questions/47931381/why-wont-my-aws-codedeploy-run-my-script-after-deployment



Resources:


Deployment Strategies

In this article I want to explore some typical deployment strategies:
  • Blue/Green
  • Canary
  • A/B Tests/Experiments
  • In-place


Blue/Green Deployment


Blue/Green Deployment:

  • involves deploying the new version of the application alongside the old version, allowing a smooth switch between the two
  • two production versions run concurrently, but only one receives live traffic
  • e.g. blue is the version that’s live and the latest update (new version) is green
  • since blue and green never serve live traffic at the same time, the latest version does not actually have to be backward compatible
    • This is a problem for shared application dependencies, such as databases, but for them we can also use the blue/green approach
  • Since green is not receiving live traffic yet, we have the chance to conduct real testing in a production environment before moving it live.
  • For trivia about the name origin, see Blue–green deployment - Wikipedia or Blue-Green deployment ($1846041) · Snippets · GitLab.

Process:

  • Blue Environment: The current live environment (e.g., EC2 instances in an Auto Scaling Group) runs the current version of the application.
  • Green Environment: The new version is deployed to a separate environment (e.g., a new set of EC2 instances or a different target group in Elastic Load Balancer).
  • Switch: Once the new version is verified and tested in the green environment, traffic is switched from the blue environment to the green environment.
  • Rollback: If issues arise, traffic can be switched back to the blue environment, allowing for a quick rollback.

Benefits:

  • Zero Downtime: Allows for continuous operation of the application during the deployment since the switch happens between fully operational environments.
  • Reduced Risk: Provides an opportunity to test the new version in a live environment before fully committing.
  • Easy Rollback: If something goes wrong, reverting to the old version is straightforward by switching traffic back to the blue environment.
  • Instant rollout/rollback
  • Avoid versioning problems, the entire state of the application is modified in one go

Drawbacks:

  • Cost: Higher cost due to the need to maintain two separate environments during the deployment.
  • Complexity: More complex to set up and manage, requiring careful coordination of environment management and traffic switching.
  • Proper testing of the entire platform should be conducted prior to release to production.
  • Handling stateful software can be difficult.

Canary Deployment


Canary Deployment:

  • Software release strategy used to reduce risk and validate new features or updates before they are fully rolled out to all users
  • Involves releasing the new version of software to a small, controlled subset of users (the "canary" group) before making it available to the entire user base
    • Traffic is gradually moved from version A to version B
    • Traffic is generally divided on the basis of weight e.g. 90% of requests go to version A, 10% to version B
    • We redirect a small amount of user traffic to the new version and the rest to the existing version
    • We decide which users will be the first to see the new edition, and we can still change which users we want to have in our test group
  • Helps to identify and address potential issues with the new release in a real-world environment while minimizing the impact of any problems that may arise.
  • Before a complete launch, we get the chance to test modifications with a portion of the real traffic
  • By using canary deployments, organizations can enhance their deployment processes, reduce risks, and ensure a smoother transition when introducing new features or updates.

Key Concepts of Canary Deployment:

  • Gradual Rollout:
    • The new version is initially deployed to a small percentage of users or servers.
    • If the canary release performs well, the rollout is progressively expanded to more users or servers.
  • Monitoring and Feedback:
    • Continuous monitoring of the canary release is performed to track metrics such as performance, error rates, and user feedback.
    • This helps to quickly identify any issues or bugs that may have been missed during testing.
  • Rollback Capability:
    • If significant issues are detected, the deployment can be rolled back to the previous stable version.
    • This minimizes the impact on users and allows for quick remediation of problems.
  • Risk Mitigation:
    • By deploying the new version to a limited user base first, the potential negative impact of any issues is contained.
    • This allows for iterative improvements and fixes before a full-scale rollout.

How Canary Deployment Works:

  • Preparation:
    • Infrastructure: Ensure that your infrastructure supports deploying multiple versions of the application concurrently.
    • Testing: Thoroughly test the new version in a staging environment that mirrors the production environment.
  • Deploy Canary Release:
    • Deploy to Canary Group: Release the new version to a small, controlled group of users or a specific subset of servers.
    • Traffic Routing: Use techniques like load balancers or feature flags to direct a portion of the traffic to the canary release.
  • Monitor and Analyze:
    • Collect Metrics: Monitor application performance, error rates, user feedback, and other key metrics.
    • Analyze Results: Compare the canary release metrics with those of the previous version to identify any issues or performance degradation.
  • Expand or Rollback:
    • Expand Rollout: If the canary release performs well, gradually increase the number of users or servers receiving the new version.
    • Rollback: If issues are detected, halt the rollout and roll back to the previous stable version while addressing the problems.
  • Full Deployment:
    • Once the canary release is verified and issues are resolved, proceed with deploying the new version to the entire user base.

Benefits:

  • Early Detection of Issues: 
    • Identifies potential problems with the new release before it affects all users.
  • Reduced Risk: 
    • Limits the impact of deployment issues to a small subset of users.
  • Improved User Experience: 
    • Allows for gradual introduction of new features, minimizing disruption for end users.
  • Iterative Improvement: 
    • Provides the opportunity to make incremental improvements based on real-world feedback.
  • The new edition is released to a subset of users, which is practical for monitoring error rate and performance.
  • Fast rollback

Drawbacks:

  • Slow rollout

Example of Canary Deployment


Let's say you have a web application and you want to deploy a new feature. Here’s how you might use canary deployment:

  • Deploy the New Version: Deploy the new feature to 5% of your production servers or users.
  • Monitor Performance: Track the performance of the new feature on these servers or users.
  • Evaluate Results: If everything is running smoothly and no significant issues are detected, increase the percentage of servers or users receiving the new version.
  • Expand Gradually: Continue to monitor and expand the rollout until 100% of users have the new feature.
  • Rollback if Needed: If you encounter major issues, revert the deployment and address the problems before retrying.

Tools and Techniques for Canary Deployment

  • Load Balancers: Route a percentage of traffic to the canary version.
  • Feature Flags: Enable new features for a subset of users without deploying new code.
  • Deployment Platforms: Many cloud providers and deployment platforms (e.g., Kubernetes, AWS Elastic Beanstalk, Google Cloud Platform) offer built-in support for canary deployments.
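With an Application Load Balancer, the weighted split can be expressed directly on the listener using weighted target groups; a sketch in which the listener and target group ARNs are placeholders supplied via environment variables:

```shell
# Send 90% of traffic to the stable target group and 10% to the canary.
# LISTENER_ARN, STABLE_TG_ARN and CANARY_TG_ARN are placeholder variables.
aws elbv2 modify-listener \
  --listener-arn "$LISTENER_ARN" \
  --default-actions '[{
    "Type": "forward",
    "ForwardConfig": {
      "TargetGroups": [
        {"TargetGroupArn": "'"$STABLE_TG_ARN"'", "Weight": 90},
        {"TargetGroupArn": "'"$CANARY_TG_ARN"'", "Weight": 10}
      ]
    }
  }]'
```

Expanding the rollout is then just a matter of re-running the command with adjusted weights until the canary group receives 100%.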


A/B Testing Deployment


A/B Testing Deployment:
  • Similar to a canary release, but how the traffic is divided is a business decision and is typically more difficult
  • Widely used for experiments - e.g. new features, maybe not yet finished
  • We identify groups based on user habits, location, age, gender, or other variables that will tell us which version has the most positive impact on sales. We can then determine if A or B is ready for all of our users.
  • Here is a list of conditions that can be used to distribute traffic amongst the versions:
    • By browser cookie
    • Query parameters
    • Geolocation
    • Technology support: browser version, screen size, operating system, etc.
    • Language

Benefits:

  • Several variants can run in parallel
  • Full control over traffic distribution

Drawbacks:

  • Needs a smart load balancer
    • Difficult to troubleshoot errors; distributed tracing becomes mandatory for a given session

In-Place Deployments


In-place deployments update the application directly on the existing instances. 

Process:

  • CodeDeploy replaces the application code on the existing instances.
  • It follows a rolling update strategy, where instances are updated one at a time or in batches.
  • During the update, CodeDeploy can stop the application, install the new version, and restart the application.

Advantages:

  • Simplicity: Easier to set up and manage since it involves updating the same set of instances.
  • Cost: Generally lower cost as it doesn’t require additional infrastructure.

Disadvantages:

  • Downtime: Can experience downtime or reduced availability during the deployment if not managed carefully.
  • Risk: If a deployment fails, it can affect all instances since they are updated in place.
  • Rollbacks: Rollbacks can be more complex, as the system may need to revert the entire set of instances.

Use Case:

  • Suitable for applications where brief downtime is acceptable and where the impact of deployment failures is manageable.

References:

3 Best Deployment Strategies for Your Startup | by Praful Dhabekar | DevOps Dudes | Medium
Whats the best practice for Go deployments for a small startup? : r/golang
Canary Deployment: What It Is and Why It Matters
What we can learn from the CrowdStrike outage | by QAComet | Jul, 2024 | Medium