Monday, 12 August 2024

Introduction to GitHub Actions





GitHub Actions (GHA): 
  • Workflow automation service offered by GitHub
  • Allows you to automate all kinds of repository-related processes and actions
  • Offers various automations around the code stored in GitHub repositories
  • Free for public repositories
Two main areas of processes that GitHub Actions can automate:
  • CI/CD processes (Continuous Integration/Continuous Delivery/Continuous Deployment) - methods for automating app development, testing, building and deployment
    • Continuous Integration is all about automatically handling code changes: integrating new code or code changes into an existing code base by building the changed code automatically, testing it automatically, and then merging it into the existing code
    • Continuous Delivery/Deployment is about automatically publishing new versions of your app, package, or website after the code has been tested and integrated
    • Example: After we make a change to the website code, we want to automatically upload and publish a new version of our website
    • GitHub Actions helps setting up, configuring and running such CI/CD workflows
    • It makes it very easy for us to set up processes that automatically build, test, and publish new versions of our app, website, or package whenever we make a code change.
  • Code and repository management - automating:
    • code reviews
    • issue management

Key Elements


  • Workflows
  • Jobs
  • Steps


Workflows:
  • Attached to GitHub repositories
  • We can add as many workflows to a GitHub repository as we wish
  • The first thing we build/create when setting up an automation process with GHA
  • Include one or more jobs
  • Built to set up some automated process that should be executed
  • Not executed all the time, but on assigned triggers or events, which define when a given workflow will be executed. Some examples of events:
    • an event that requires manual activation of a workflow
    • an event that executes a workflow whenever a new commit is pushed to a certain branch
  • Defined in a YAML file at this path: <repo_root>/.github/workflows/<workflow_name>.y[a]ml
Jobs:
  • Contain one or more steps that will be executed in the order in which they're specified
  • Define a runner
    • Execution environment, the machine and operating system that will be used for executing these steps
    • Can either be predefined by GitHub (runners for Linux, macOS, and Windows) or custom, configured by ourselves
  • Steps will be executed in the specified runner environment/machine
  • If we have multiple jobs, they run in parallel by default, but we can also configure them to run sequentially, one job after another
  • We can also set up conditional jobs, which will not always run but instead need a certain condition to be met (see the sketch below)
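
A minimal sketch of sequential and conditional jobs (the job names and step contents are hypothetical; needs and if are standard GitHub Actions keywords):

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Running tests"
  deploy:
    # 'needs' makes this job start only after the 'test' job has succeeded
    needs: test
    # conditional job: runs only for pushes to the main branch
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying"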

Steps:
  • Define the actual things that will be done
  • Example:
    • download the code in the first step
    • install the dependencies in the second step
    • run automated tests in the third step
  • Belong to jobs, and a job can have one or more steps
  • A step is either:
    • a shell script
    • a command in the command line that should be executed (e.g. for simple tasks), or 
    • an action, which is another important building block
      • a predefined script that performs a certain task
      • We can build our own actions or use third party actions
  • Each job must have at least one step
  • Steps are executed in order; they don't run in parallel, but one after another
  • Steps can also be conditional

How to create a Workflow?


A workflow can be created in two ways:
  • directly on the remote, via browser
  • in the local repo, and then pushed to remote
If we use the browser, we need to go to our repo's web page and click on the Actions tab. There we can select a default workflow or choose another template. The default workflow creates the following file:

my-repo/.github/workflows/blank.yml:

# This is a basic workflow to help you get started with Actions

name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "main" branch
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v4

      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo Hello, world!

      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project.


---

Sunday, 11 August 2024

Introduction to Microservices


Components of Microservices Architecture


Microservices architecture breaks down applications into smaller, independent services. Here's a rundown of the 10 key components in this architecture:


1. Client

These are the end-users who interact with the application via different interfaces like web, mobile, or PC.


2. CDN (Content Delivery Network)

CDNs deliver static content like images, stylesheets, and JavaScript files efficiently by caching them closer to the user's location, reducing load times.


3. Load Balancer

It distributes incoming network traffic across multiple servers, ensuring no single server becomes a bottleneck and improving the application's availability and reliability.


4. API Gateway

An API Gateway acts as an entry point for all clients, handling tasks like request routing, composition, and protocol translation, which helps manage multiple microservices behind the scenes.


5. Microservices

Each microservice is a small, independent service that performs a specific business function. They communicate with each other via APIs. 


6. Message Broker

A message broker facilitates communication between microservices by sending messages between them, ensuring they remain decoupled and can function independently.


7. Databases

Each microservice typically has its own database to ensure loose coupling. This can mean different databases for different microservices.


8. Identity Provider

This component handles user authentication and authorization, ensuring secure access to services.


9. Service Registry and Discovery

This system keeps track of all microservices and their instances, allowing services to find and communicate with each other dynamically.


10. Service Coordination (e.g., Zookeeper)

Tools like Zookeeper help manage and coordinate distributed services, ensuring they work together smoothly.


 

Image source: Adnan Maqbool Khan's post on LinkedIn

Saturday, 10 August 2024

How to perform an initial audit of DevOps pipelines



Performing an initial audit of DevOps pipelines involves evaluating the efficiency, security, and compliance of the processes and tools used to develop, test, deploy, and maintain software. Here’s a step-by-step guide to conducting this audit:


Understand the Existing Environment

  • Inventory of Tools and Technologies
    • List all tools used in the CI/CD pipeline, including:
      • version control systems (e.g., Git)
      • CI/CD platforms (e.g., Jenkins, GitLab CI, CircleCI, TeamCity, GitHub Actions)
      • deployment tools
      • monitoring systems
  • Pipeline Architecture Overview
    • Document the architecture of the current pipeline, including the stages from code commit to production deployment.
  • Team Roles and Responsibilities
    • Identify the roles involved in the DevOps processes, including developers, DevOps engineers, QA, and security teams.

Security Review

  • Access Controls
    • Ensure proper access controls are in place for all tools and environments, enforcing the principle of least privilege.
  • Secrets Management
    • Check how secrets (API keys, passwords) are managed and stored. They should be encrypted and stored securely (e.g., in a vault).
  • Code Scanning and Analysis
    • Verify that the following is integrated into the pipeline:
      • static code analysis
      • vulnerability scanning
      • dependency checks
  • Pipeline Security
    • Assess the security of the CI/CD tools themselves, ensuring they are regularly patched and updated.

Compliance and Governance

  • Regulatory Requirements
    • Identify any industry-specific regulations (e.g., GDPR, HIPAA) and ensure that the pipeline meets compliance requirements.
  • Audit Trails
    • Ensure that all actions within the pipeline are logged and can be audited. Logs should include code changes, deployments, and access logs.
  • Data Handling
    • Review how sensitive data is handled during the build and deployment process, ensuring it is not exposed.

Pipeline Efficiency

  • Build and Deployment Times
    • Evaluate the time taken for builds and deployments, identifying bottlenecks in the process.
  • Resource Utilization
    • Analyze the resource usage of the pipeline, including compute, storage, and bandwidth, looking for inefficiencies.
  • Parallelization and Automation
    • Check if the pipeline leverages parallel execution where possible and whether manual steps can be automated.

Quality Assurance

  • Testing Integration
    • Review the integration of testing frameworks in the pipeline, including unit tests, integration tests, and end-to-end tests.
  • Code Quality Metrics
    • Ensure that code quality metrics (e.g., test coverage, code complexity) are tracked and enforced.
  • Rollback Mechanisms
    • Assess the rollback mechanisms in place in case of deployment failures.

Monitoring and Logging

  • Continuous Monitoring
    • Verify that application and infrastructure monitoring is integrated, with alerts set up for key performance indicators (KPIs).
  • Log Management
    • Ensure logs from various stages of the pipeline are centralized and can be easily accessed for troubleshooting.
  • Incident Response
    • Review the process for responding to incidents detected through monitoring.

Scalability and Flexibility

  • Pipeline Scalability
    • Check if the pipeline can scale with the growing needs of the organization, both in terms of workload and the number of users.
  • Environment Flexibility
    • Assess the ease of managing different environments (e.g., development, staging, production) and the consistency between them.

Documentation and Reporting

  • Pipeline Documentation
    • Ensure that the entire pipeline is well-documented, including the purpose of each stage, tools used, and configuration settings.
  • Reporting
    • Set up regular reporting on pipeline performance, security, and compliance, making it accessible to relevant stakeholders.

Feedback and Continuous Improvement

  • Stakeholder Feedback
    • Gather feedback from all stakeholders, including developers, QA, and operations teams, on pain points and areas for improvement.
  • Continuous Improvement Process
    • Implement a process for regularly updating and improving the pipeline based on audit findings and feedback.

Final Report and Recommendations

  • Compile Findings
    • Prepare a detailed report summarizing the audit findings, highlighting strengths and areas needing improvement.
  • Actionable Recommendations
    • Provide clear, actionable recommendations to address any identified issues, prioritize them based on impact and effort, and set timelines for implementation.


By following these steps, you can comprehensively assess the DevOps pipelines in a SaaS company, ensuring they are secure, efficient, and aligned with best practices and regulatory requirements.

Friday, 9 August 2024

Software Development Lifecycle, Environments and DevOps Metrics


The Agile Software Development Lifecycle can be visualised in the following infographic:


image source: LinkedIn (Brij kishore Pandey)



Why do we need multiple environments?


Developers and testers might not want to work in the same environment, because they may use and modify the same data, which can hurt the developer's ability to troubleshoot or the reliability of the tester's results. This is why DevOps teams may set up multiple copies of the same infrastructure stack and call them by different names (environments).


QA vs QC vs Testing


Before we list environments, we need to clarify that these terms are not the same:
  • Quality Assurance - ensures that processes and procedures are in place to achieve quality
  • Quality Control - ensures product quality
  • Testing - validates the product against specifications
    • functional
    • non-functional
    • acceptance testing
This is why the QA environment might not be the same as the Testing environment.



DevOps Environments


Continuous Testing is performed in at least two environment families:
  • Lower environments - any architecture which is not a direct copy of production; environments with different purposes, which don't necessarily need to replicate the Prod system.
    • Dev/Local development
    • Sandbox environments
    • CI environments
    • Test environments
    • QA environments
    • Non-functional testing environments
  • Production replica environments:
    • Pre-Production / Staging - test deployments into a Prod replica without Prod data; live environments with non-production data and beta testing
    • NPPD (Non-Production environment with Production Data) is a prod replica with prod data.
    • Customer UAT (User Acceptance Testing) /training environment

Production environment - for end users.





image source: LinkedIn (Brij kishore Pandey)




Thursday, 8 August 2024

Load Balancing Algorithms

Load balancing:
  • Used in distributed systems to distribute incoming network traffic across multiple servers or resources
  • Crucial for optimizing performance and ensuring even distribution of workload
  • Enhances system reliability by ensuring no single server becomes a bottleneck, thus reducing the risk of server overload and potential downtime





 
image source: Sina Riyahi's post on LinkedIn


Some popular load balancing algorithms:

  • Round Robin
    • distributes incoming requests sequentially to each server in a circular manner
    • simple and easy to implement but may not take into account server load or capacity
    • the most commonly used algorithm
  • Weighted Round Robin
    • similar to Round Robin, but with the ability to assign different weights to servers based on their capacity or performance
    • Servers with higher weights receive more requests
  • IP Hash
    • Uses the client's IP address to determine which server to send the request to
    • Requests from the same IP address are consistently routed to the same server
  • Least Connections
    • directs incoming requests to the server with the fewest active connections at the time
    • helps distribute the load evenly among servers based on their current workload
  • Least Response Time
    • Routes requests to the server with the lowest response time or latency
    • Aims to optimize performance by sending requests to the fastest server.
  • Random
    • Randomly selects a server from the pool to handle each request
    • While simple, it may not ensure even distribution of load across servers

Each load balancing algorithm has its own advantages and considerations.
The choice of algorithm depends on the specific requirements of the system and the desired load distribution strategy.
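
As an illustration, nginx's upstream module implements several of these algorithms; a minimal config sketch (the backend hostnames are hypothetical):

upstream backend {
    # round robin is the default; uncomment one of the lines below to switch algorithms
    # least_conn;   # Least Connections
    # ip_hash;      # IP Hash (requests from the same client IP go to the same server)

    # per-server weights turn plain round robin into Weighted Round Robin
    server app1.internal weight=3;
    server app2.internal weight=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}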



Disclaimer:

All credits for the inspiration for this article, the infographic, and part of the content go to Sina Riyahi [https://www.linkedin.com/in/sina-riyahi/].

Monday, 5 August 2024

Introduction to Amazon Simple Queue Service (SQS)



Amazon Simple Queue Service (SQS) is a fully managed message queuing service provided by Amazon Web Services (AWS). It enables decoupling and scaling of microservices, distributed systems, and serverless applications. 


Here's an overview of how Amazon SQS works:

Key Concepts


  • Queue:
    • A queue is a temporary storage location for messages waiting to be processed. There are two types of queues in SQS:
      • Standard Queue: Offers maximum throughput, best-effort ordering, and at-least-once delivery.
      • FIFO Queue: Ensures exactly-once processing and preserves the exact order of messages.
  • Message:
    • A message is the data that is sent between different components. It can be up to 256 KB in size and contains the information needed for processing.
  • Producer:
    • The producer (or sender) sends messages to the queue.
  • Consumer:
    • The consumer (or receiver) retrieves and processes messages from the queue.
  • Visibility Timeout:
    • A period during which a message is invisible to other consumers after a consumer retrieves it from the queue. This prevents other consumers from processing the same message concurrently.
  • Dead-Letter Queue (DLQ):
    • A queue for messages that could not be processed successfully after a specified number of attempts. This helps in isolating and analyzing problematic messages.

Workflow


  • Sending Messages:
    • A producer sends messages to an SQS queue using the SendMessage action. Each message is assigned a unique ID and placed in the queue.
  • Receiving Messages:
    • A consumer retrieves messages from the queue using the ReceiveMessage action. This operation can specify:
      • number of messages to retrieve (up to 10) 
      • duration to wait if no messages are available
  • Processing Messages:
    • After receiving a message, the consumer processes it. The message remains invisible to other consumers for a specified visibility timeout.
  • Deleting Messages:
    • Once processed, the consumer deletes the message from the queue using the DeleteMessage action. If not deleted within the visibility timeout, the message becomes visible again for other consumers to process.
  • Handling Failures:
    • If a message cannot be processed successfully within a specified number of attempts, it is moved to the Dead-Letter Queue for further investigation.
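
A minimal sketch of this workflow using the AWS CLI (the queue URL and message body are placeholders):

# producer: send a message to the queue
aws sqs send-message \
  --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/orders-queue \
  --message-body '{"orderId": 42}'

# consumer: receive up to 10 messages, long-polling for up to 20 seconds
aws sqs receive-message \
  --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/orders-queue \
  --max-number-of-messages 10 \
  --wait-time-seconds 20

# after successful processing, delete the message using the receipt handle returned by receive-message
aws sqs delete-message \
  --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/orders-queue \
  --receipt-handle "<receipt-handle-from-receive-message>"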


Additional Features

  • Long Polling:
    • Reduces the number of empty responses by allowing the ReceiveMessage action to wait for a specified amount of time until a message arrives in the queue.
  • Message Attributes:
    • Metadata about the message that can be used for filtering and routing.
  • Batch Operations:
    • SQS supports batch sending, receiving, and deleting of messages, which can improve efficiency and reduce costs.


Security and Access Control


  • IAM Policies:
    • Use AWS Identity and Access Management (IAM) policies to control access to SQS queues (see the policy sketch after this list).
  • Encryption:
    • Messages can be encrypted in transit using SSL/TLS and at rest using AWS Key Management Service (KMS).
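
A minimal IAM policy sketch (the account ID and queue name are hypothetical) that allows only sending messages to a single queue:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage"],
      "Resource": "arn:aws:sqs:eu-west-1:123456789012:orders-queue"
    }
  ]
}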

Use Cases


  • Decoupling Microservices:
    • SQS allows microservices to communicate asynchronously, improving scalability and fault tolerance.
  • Work Queues:
    • Distributing tasks to multiple workers for parallel processing.
  • Event Sourcing:
    • Storing a series of events to track changes in state over time.

Example Scenario


Order Processing System:

  • An e-commerce application has separate microservices for handling orders, inventory, and shipping.
  • The order service sends an order message to an SQS queue.
  • The inventory service retrieves the message, processes it (e.g., reserves stock), and then sends an updated message to another queue.
  • The shipping service retrieves the updated message and processes it (e.g., ships the item).


By using Amazon SQS, these microservices can operate independently and scale as needed, ensuring reliable and efficient order processing.


Reference: Message Queuing Service - Amazon Simple Queue Service - AWS (https://aws.amazon.com/sqs/)

Saturday, 3 August 2024

Running Helm as Docker container

I tend to run tools as Docker containers if their Docker images are provided. Such an image for Helm is alpine/helm, available on Docker Hub.


To run the container (and its default command which is helm -h):

docker run --rm alpine/helm 
The Kubernetes package manager

Common actions for Helm:

- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment variables:

| Name                               | Description                                                                                                |
|------------------------------------|------------------------------------------------------------------------------------------------------------|
| $HELM_CACHE_HOME                   | set an alternative location for storing cached files.                                                      |
| $HELM_CONFIG_HOME                  | set an alternative location for storing Helm configuration.                                                |
| $HELM_DATA_HOME                    | set an alternative location for storing Helm data.                                                         |
| $HELM_DEBUG                        | indicate whether or not Helm is running in Debug mode                                                      |
| $HELM_DRIVER                       | set the backend storage driver. Values are: configmap, secret, memory, sql.                                |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use.                                               |
| $HELM_MAX_HISTORY                  | set the maximum number of helm release history.                                                            |
| $HELM_NAMESPACE                    | set the namespace used for the helm operations.                                                            |
| $HELM_NO_PLUGINS                   | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                                                 |
| $HELM_PLUGINS                      | set the path to the plugins directory                                                                      |
| $HELM_REGISTRY_CONFIG              | set the path to the registry config file.                                                                  |
| $HELM_REPOSITORY_CACHE             | set the path to the repository cache directory                                                             |
| $HELM_REPOSITORY_CONFIG            | set the path to the repositories file.                                                                     |
| $KUBECONFIG                        | set an alternative Kubernetes configuration file (default "~/.kube/config")                                |
| $HELM_KUBEAPISERVER                | set the Kubernetes API Server Endpoint for authentication                                                  |
| $HELM_KUBECAFILE                   | set the Kubernetes certificate authority file.                                                             |
| $HELM_KUBEASGROUPS                 | set the Groups to use for impersonation using a comma-separated list.                                      |
| $HELM_KUBEASUSER                   | set the Username to impersonate for the operation.                                                         |
| $HELM_KUBECONTEXT                  | set the name of the kubeconfig context.                                                                    |
| $HELM_KUBETOKEN                    | set the Bearer KubeToken used for authentication.                                                          |
| $HELM_KUBEINSECURE_SKIP_TLS_VERIFY | indicate if the Kubernetes API server's certificate validation should be skipped (insecure)                |
| $HELM_KUBETLS_SERVER_NAME          | set the server name used to validate the Kubernetes API server certificate                                 |
| $HELM_BURST_LIMIT                  | set the default burst limit in the case the server contains many CRDs (default 100, -1 to disable)         |
| $HELM_QPS                          | set the Queries Per Second in cases where a high number of calls exceed the option for higher burst values |

Helm stores cache, configuration, and data based on the following configuration order:

- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system

By default, the default directories depend on the Operating System. The defaults are listed below:

| Operating System | Cache Path                | Configuration Path             | Data Path               |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux            | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm |
| macOS            | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm      |
| Windows          | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm          |

Usage:
  helm [command]

Available Commands:
  completion  generate autocompletion scripts for the specified shell
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examine a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download a chart from a repository and (optionally) unpack it in local directory
  push        push a chart to remote
  registry    login to or logout from a registry
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      display the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information

Flags:
      --burst-limit int                 client-side default throttling limit (default 100)
      --debug                           enable verbose output
  -h, --help                            help for helm
      --kube-apiserver string           the address and the port for the Kubernetes API server
      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --kube-as-user string             username to impersonate for the operation
      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection
      --kube-context string             name of the kubeconfig context to use
      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
      --kube-token string               bearer token used for authentication
      --kubeconfig string               path to the kubeconfig file
  -n, --namespace string                namespace scope for this request
      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting
      --registry-config string          path to the registry config file (default "/root/.config/helm/registry/config.json")
      --repository-cache string         path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
      --repository-config string        path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")

Use "helm [command] --help" for more information about a command.

We'd get the same output if we run:

docker run --rm alpine/helm -h


Let's check the Helm version:

docker run --rm alpine/helm version
version.BuildInfo{Version:"v3.15.3", GitCommit:"3bb50bbbdd9c946ba9989fbe4fb4104766302a64", GitTreeState:"clean", GoVersion:"go1.22.5"}

Let's say we want to list releases in the local Minikube cluster. Let's assume kubectl's current context is set to minikube. 

If we run:

docker run --rm alpine/helm list
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused

The error tells us that Helm didn't find the right kubeconfig and used the default cluster API server URL. We need to provide Helm with the correct kubeconfig. In our case it's ~/.kube/config, so we need to mount ~/.kube as a volume:

docker run --rm -v ~/.kube:/root/.kube alpine/helm list
Error: Kubernetes cluster unreachable: invalid configuration: [unable to read client-cert /home/bojan/.minikube/profiles/minikube/client.crt for minikube due to open /home/bojan/.minikube/profiles/minikube/client.crt: no such file or directory, unable to read client-key /home/bojan/.minikube/profiles/minikube/client.key for minikube due to open /home/bojan/.minikube/profiles/minikube/client.key: no such file or directory, unable to read certificate-authority /home/bojan/.minikube/ca.crt for minikube due to open /home/bojan/.minikube/ca.crt: no such file or directory]

This error shows Helm could not access the certificates that are referenced in the kubeconfig:

cat ~/.kube/config 

or 

kubectl config view

...give the following output:

apiVersion: v1
clusters:
...
- cluster:
    certificate-authority: /home/bojan/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 03 Aug 2024 00:16:58 BST
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.59.100:8443
  name: minikube
contexts:
...
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Sat, 03 Aug 2024 00:16:58 BST
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
...
- name: minikube
  user:
    client-certificate: /home/bojan/.minikube/profiles/minikube/client.crt
    client-key: /home/bojan/.minikube/profiles/minikube/client.key


Helm in container needs to access both kubeconfig file and also all files referenced in it, in our case:
  • /home/bojan/.minikube/ca.crt
  • /home/bojan/.minikube/profiles/minikube/client.crt
  • /home/bojan/.minikube/profiles/minikube/client.key
We can mount /home/bojan/.minikube/ as a volume (see the docker run reference in the Docker docs), at the same path (which will be created in the container).

docker run --rm -v ~/.kube:/root/.kube -v /home/bojan/.minikube:/home/bojan/.minikube alpine/helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

This shows we don't have any releases in our Minikube cluster.



Converting the original (non-templated) manifests to a Helm chart


Let's say we have the following original (non-templated) manifest files:



If we run Helm as a Docker container and want to use it to scaffold a new chart for these manifests, we can run the following (the chart is created in the container's working directory, /apps, which we mount from our project directory):

docker run --rm \
-v ~/.kube:/root/.kube \
-v $(pwd)/minikube/php-fmp-nginx-demo:/apps \
-v /home/bojan/.minikube:/home/bojan/.minikube \
alpine/helm create helm-chart


Creating helm-chart

This creates a helm-chart directory:
 


Note that Docker's default user is root, so this new directory and all its content will be owned by root, and any future local operations on these files will require elevated privileges:

$ ls -la ./minikube/php-fmp-nginx-demo/helm-chart/
total 28
drwxr-xr-x 4 root  root  4096 Aug  3 13:36 .
drwxrwxr-x 4 bojan bojan 4096 Aug  3 13:36 ..
drwxr-xr-x 2 root  root  4096 Aug  3 13:36 charts
-rw-r--r-- 1 root  root  1146 Aug  3 13:36 Chart.yaml
-rw-r--r-- 1 root  root   349 Aug  3 13:36 .helmignore
drwxr-xr-x 3 root  root  4096 Aug  3 13:36 templates
-rw-r--r-- 1 root  root  2363 Aug  3 13:36 values.yaml

To prevent this, we can tell Docker to run as a non-root user (our own UID and GID):

$ id
uid=1000(bojan) gid=1000(bojan) groups=1000(bojan),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),126(sambashare),142(libvirt),999(docker)

$ sudo rm -rf ./minikube/php-fmp-nginx-demo/helm-chart/

$ docker run --rm -v ~/.kube:/root/.kube -v $(pwd)/minikube/php-fmp-nginx-demo:/apps -v /home/bojan/.minikube:/home/bojan/.minikube --user 1000:1000 alpine/helm create helm-chart
Creating helm-chart

$ ls -la ./minikube/php-fmp-nginx-demo/helm-chart/
total 28
drwxr-xr-x 4 bojan bojan 4096 Aug  3 14:24 .
drwxrwxr-x 4 bojan bojan 4096 Aug  3 14:24 ..
drwxr-xr-x 2 bojan bojan 4096 Aug  3 14:24 charts
-rw-r--r-- 1 bojan bojan 1146 Aug  3 14:24 Chart.yaml
-rw-r--r-- 1 bojan bojan  349 Aug  3 14:24 .helmignore
drwxr-xr-x 3 bojan bojan 4096 Aug  3 14:24 templates
-rw-r--r-- 1 bojan bojan 2363 Aug  3 14:24 values.yaml


minikube/php-fmp-nginx-demo/helm-chart/values.yaml:

deployment-nginx:
  name: nginx-deployment


minikube/php-fmp-nginx-demo/helm-chart/templates/nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployment-nginx.name }}


$ docker run --rm -v ~/.kube:/root/.kube -v $(pwd)/minikube/php-fmp-nginx-demo:/php-fmp-nginx-demo -v /home/bojan/.minikube:/home/bojan/.minikube  alpine/helm install --dry-run --debug php-fmp-nginx-demo-release-v1.0 /php-fmp-nginx-demo/helm-chart
install.go:222: [debug] Original chart version: ""
install.go:239: [debug] CHART PATH: /php-fmp-nginx-demo/helm-chart

Error: INSTALLATION FAILED: parse error at (helm-chart/templates/nginx-deployment.yaml:4): bad character U+002D '-'
helm.go:84: [debug] parse error at (helm-chart/templates/nginx-deployment.yaml:4): bad character U+002D '-'
INSTALLATION FAILED
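
The parse error occurs because Go templates do not allow dashes in field access expressions, so .Values.deployment-nginx is not valid. One way around it, keeping the dashed key in values.yaml, is the index template function (the alternative is to rename the key, e.g. to deploymentNginx). A sketch of the first option:

minikube/php-fmp-nginx-demo/helm-chart/templates/nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  # 'index' can look up map keys that are not valid Go identifiers (e.g. keys containing dashes)
  name: {{ index .Values "deployment-nginx" "name" }}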




References:

https://stackoverflow.com/questions/75375090/merge-annotations-in-helm
https://github.com/helm/helm/issues/2192
https://v2.helm.sh/docs/chart_best_practices/
https://github.com/helm/helm-www/issues/1272
https://stackoverflow.com/questions/63853679/helm-templating-doesnt-let-me-use-dash-in-names
https://stackoverflow.com/questions/47844377/how-can-i-create-a-volume-for-the-current-user-home-directory-in-docker-compose
https://helm.sh/docs/helm/helm_install/
https://github.com/roboll/helmfile/issues/176

https://controlplane.com/community-blog/post/kubeconfig-file-for-the-aws-eks-cluster
https://discuss.kubernetes.io/t/the-connection-to-the-server-localhost-8080-was-refused-did-you-specify-the-right-host-or-port/1464/4
https://stackoverflow.com/questions/63066604/error-kubernetes-cluster-unreachable-get-http-localhost8080-versiontimeou
https://k21academy.com/docker-kubernetes/the-connection-to-the-server-localhost8080-was-refused/
https://docs.docker.com/reference/cli/docker/container/run/#volume


