Continuous Delivery on Kubernetes with Argo CD and Datree

In this article, you will learn how to use Datree with Argo CD to validate Kubernetes manifests in your continuous delivery process. I have already introduced the Datree tool in one of my previous articles about CI/CD with Tekton here. So, if you need to learn the basics, please refer to that article or to the Datree quickstart.

With Datree you can automatically detect misconfigurations in Kubernetes manifests. Argo CD allows you to manage the continuous delivery process on Kubernetes declaratively using the GitOps approach. It can apply the manifests stored in a Git repository to a Kubernetes cluster. But what about validation? Argo CD doesn't have any built-in mechanism for validating the configuration taken from Git. This is where Datree comes in. Let's see how to integrate both of these tools into your CD process.

Source Code

If you would like to try it out by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. This repository contains the sample Helm charts, which Argo CD will use as Deployment templates. After that, you should just follow my instructions.

Introduction

We will deploy our sample image using two different Helm charts. Therefore, we will define two applications in Argo CD, one per Deployment. Our goal is to configure Argo CD to automatically use Datree as a Kubernetes manifest validation tool for all applications. To achieve that, we need to create an Argo CD plugin for Datree. It is quite a similar approach to the case described in this article. However, there we used a ready-made plugin for integration between Argo CD and HashiCorp Vault.

There are two different ways to run a custom plugin in Argo CD. We can add the plugin config to the main Argo CD ConfigMap, or run it as a sidecar to the argocd-repo-server Pod. In this article, we will use the first approach. It requires us to ensure that the relevant binaries are available inside the argocd-repo-server Pod. They can be added via volume mounts or using a custom image. We are going to create a custom Argo CD image that contains the Datree binaries.
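To give a feel for the first approach, here is the general shape of a plugin entry in the Argo CD Helm values (a minimal sketch with placeholder names, not our final config; we will define the real datree plugin later in this article):

server:
  config:
    configManagementPlugins: |
      - name: my-plugin        # name that Applications will reference
        generate:              # command that must print the final manifests to stdout
          command: ["bash", "-c"]
          args: ["<command that renders and prints Kubernetes manifests>"]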

Build a Custom Argo CD Image That Contains Datree

We will use the official Argo CD Helm chart for installation on Kubernetes. This chart is based on the following Argo CD image: quay.io/argoproj/argocd. In fact, that image is used by several components, including argocd-server and argocd-repo-server. We are going to replace the default image only for the argocd-repo-server.

In the first step, we will create a Dockerfile that extends the base image. We will install the Datree CLI there. Since Datree requires curl and unzip, we also need to add them to the final image. Here's our Dockerfile, based on the latest version of the Argo CD image:

FROM quay.io/argoproj/argocd:v2.4.11

USER root

RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y unzip
RUN apt-get clean
RUN curl https://get.datree.io | /bin/bash

LABEL release=1-with-curl

USER 999

I have already built and published the image on my Docker Hub account. You can pull it from the piomin/argocd repository under the v2.4.11-datree tag. If you would like to build it by yourself, just run the following command:

          $ docker build -t piomin/argocd:v2.4.11-datree .        

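If you then want to publish the image under your own account, the push is standard (assuming you are logged in to Docker Hub with docker login; replace piomin with your own username):

$ docker push piomin/argocd:v2.4.11-datree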

Install Argo CD with the Datree Plugin on Kubernetes

We will use Helm to install Argo CD on Kubernetes. Instead of the default image, we need to use the custom Argo CD image built in the previous step. We also need to enable and configure our plugin in the Argo CD ConfigMap. Finally, we have to set the DATREE_TOKEN environment variable containing your Datree token. Fortunately, we can set all those things in a single place. Here's our Helm values.yaml that overrides the default configuration settings for Argo CD:

server:
  config:
    configManagementPlugins: |
      - name: datree
        generate:
          command: ["bash", "-c"]
          args: ['if [[ $(helm template $ARGOCD_APP_NAME -n $ARGOCD_APP_NAMESPACE -f <(echo "$ARGOCD_ENV_HELM_VALUES") . > result.yaml | datree test -o json result.yaml) && $? -eq 0 ]]; then cat result.yaml; else exit 1; fi']

repoServer:
  image:
    repository: piomin/argocd
    tag: v2.4.11-datree
  env:
    - name: DATREE_TOKEN
      value: <YOUR_DATREE_TOKEN>

You can obtain the token from the Datree dashboard after logging in. Just go to your account settings.
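As a side note: if you also want to run the Datree CLI locally (as we will do in a moment to test the charts), you can store the same token in your local Datree configuration with the CLI:

$ datree config set token <YOUR_DATREE_TOKEN>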

Then, you need to add the Helm repository with Argo CD charts:

          $ helm repo add argo https://argoproj.github.io/argo-helm        

After that, just install it with the custom settings provided in the values.yaml file:

$ helm install argocd argo/argo-cd \
    --version 5.1.0 \
    --values values.yaml \
    -n argocd
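Once the installation finishes, it's worth verifying that all Argo CD Pods are running and that the repo server really uses our custom image (an optional sanity check; with this release name the Deployment is called argocd-repo-server):

$ kubectl get pods -n argocd
$ kubectl get deploy argocd-repo-server -n argocd \
    -o jsonpath='{.spec.template.spec.containers[0].image}'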

Once you have installed Argo CD in the argocd namespace, you can display the argocd-cm ConfigMap:

          $ kubectl get cm argocd-cm -n argocd -o yaml        

The argocd-cm ConfigMap contains the definition of our plugin. Inside the args parameter, we pass the command for creating Deployment manifests. Each time we call the helm template command, we pass the generated YAML file to the Datree CLI, which runs a policy check against it. If all of the rules pass, the datree test command returns exit code 0. Otherwise, it returns a non-zero value. If the Datree policy check finishes successfully, we print the YAML manifest to the output. Thanks to that, Argo CD will apply it to the Kubernetes cluster.

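Before moving on, you may also want to double-check that the Datree binary from our custom image is actually available inside the repo server Pod (another optional check; datree version just prints the installed CLI version):

$ kubectl exec -n argocd deploy/argocd-repo-server -- datree version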

Create Helm charts

The instance of Argo CD with the Datree plugin is ready, and we can now proceed to the app deployment phase. First, let's create the Helm templates. Here's a very basic Helm template for our sample app. It just exposes ports outside the container and sets environment variables. It is available in the simple-with-envs directory.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.app.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
          {{- range .Values.app.ports }}
            - containerPort: {{ .value }}
              name: {{ .name }}
          {{- end }}
          {{- if .Values.app.envs }}
          env:
          {{- range .Values.app.envs }}
            - name: {{ .name }}
              value: {{ .value }}
          {{- end }}
          {{- end }}

We can test it locally with Helm and the Datree CLI. Here's a set of test values:

image:
  registry: quay.io
  repository: pminkows/sample-kotlin-spring
  tag: "1.0"

app:
  name: sample-spring-boot-kotlin
  replicas: 1
  ports:
    - name: http
      value: 8080
  envs:
    - name: PASS
      value: example

The following command generates the YAML manifest using test-values.yaml and performs a Datree policy check.

$ helm template --values test-values.yaml . > result.yaml && \
  datree test result.yaml

Here's the result of our test analysis. As you can see, there are a lot of violations reported by Datree.


For example, we should add liveness and readiness probes, and disable root access to the container. Here is another Helm template that fixes all the problems reported by Datree:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
    env: {{ .Values.app.environment }}
    owner: {{ .Values.app.owner }}
spec:
  replicas: {{ .Values.app.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
        env: {{ .Values.app.environment }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
          resources:
            requests:
              memory: {{ .Values.app.resources.memoryRequest }}
              cpu: {{ .Values.app.resources.cpuRequest }}
            limits:
              memory: {{ .Values.app.resources.memoryLimit }}
              cpu: {{ .Values.app.resources.cpuLimit }}
          livenessProbe:
            initialDelaySeconds: {{ .Values.app.liveness.initialDelaySeconds }}
            httpGet:
              port: {{ .Values.app.liveness.port }}
              path: {{ .Values.app.liveness.path }}
            failureThreshold: {{ .Values.app.liveness.failureThreshold }}
            successThreshold: {{ .Values.app.liveness.successThreshold }}
            timeoutSeconds: {{ .Values.app.liveness.timeoutSeconds }}
            periodSeconds: {{ .Values.app.liveness.periodSeconds }}
          readinessProbe:
            initialDelaySeconds: {{ .Values.app.readiness.initialDelaySeconds }}
            httpGet:
              port: {{ .Values.app.readiness.port }}
              path: {{ .Values.app.readiness.path }}
            failureThreshold: {{ .Values.app.readiness.failureThreshold }}
            successThreshold: {{ .Values.app.readiness.successThreshold }}
            timeoutSeconds: {{ .Values.app.readiness.timeoutSeconds }}
            periodSeconds: {{ .Values.app.readiness.periodSeconds }}
          ports:
          {{- range .Values.app.ports }}
            - containerPort: {{ .value }}
              name: {{ .name }}
          {{- end }}
          {{- if .Values.app.envs }}
          env:
          {{- range .Values.app.envs }}
            - name: {{ .name }}
              value: {{ .value }}
          {{- end }}
          {{- end }}
          securityContext:
            runAsNonRoot: true

You can perform the same test as before for the new Helm chart. Just go to the full-compliant directory.
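For convenience, here's what that test looks like (a sketch assuming the full-compliant chart also ships a test-values.yaml with the extended set of values; you can take them from the Application manifest shown later in this article):

$ cd ../full-compliant
$ helm template --values test-values.yaml . > result.yaml && \
  datree test result.yaml

This time the check should pass without any violations.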

Create Argo CD Applications with Helm charts

Finally, we can create Argo CD applications that use our Helm charts. Let's create an app for the simple-with-envs chart (1). Instead of the typical helm type, we set the plugin type (2). The name of our plugin is datree (3). With this type of Argo CD app, we have to set the Helm parameters inside the HELM_VALUES environment variable (4).

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-failed
  namespace: argocd
spec:
  destination:
    name: ''
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: simple-with-envs # (1)
    repoURL: 'https://github.com/piomin/sample-generic-helm-charts.git'
    targetRevision: HEAD
    plugin: # (2)
      name: datree # (3)
      env:
        - name: HELM_VALUES # (4)
          value: |
            image:
              registry: quay.io
              repository: pminkows/sample-kotlin-spring
              tag: "1.4.30"

            app:
              name: sample-spring-boot-kotlin
              replicas: 1
              ports:
                - name: http
                  value: 8080
              envs:
                - name: PASS
                  value: example
  project: default
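You can create the Application by applying the manifest above with kubectl (assuming you saved it in a hypothetical file named simple-failed.yaml):

$ kubectl apply -f simple-failed.yaml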


Let's take a look at the Argo CD UI and verify what happened.

Argo CD performs a policy check using Datree. As you probably remember, this Helm chart does not meet the rules defined in Datree. Therefore, the process exits with error code 1.
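If you want to inspect the detailed Datree report for a failed check, one place to look (besides the Argo CD UI) is the repo server logs, since the plugin command runs inside that Pod:

$ kubectl logs -n argocd deploy/argocd-repo-server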

Now, let's create an Argo CD Application for the second Helm chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: full-compliant-ok
  namespace: argocd
spec:
  destination:
    name: ''
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: full-compliant
    repoURL: 'https://github.com/piomin/sample-generic-helm-charts.git'
    targetRevision: HEAD
    plugin:
      name: datree
      env:
        - name: HELM_VALUES
          value: |
            image:
              registry: quay.io
              repository: pminkows/sample-kotlin-spring
              tag: "1.4.30"

            app:
              name: sample-spring-boot-kotlin
              replicas: 2
              environment: test
              owner: piomin
              resources:
                memoryRequest: 128Mi
                memoryLimit: 512Mi
                cpuRequest: 500m
                cpuLimit: 1
              liveness:
                initialDelaySeconds: 10
                port: 8080
                path: /actuator/health/liveness
                failureThreshold: 3
                successThreshold: 1
                timeoutSeconds: 3
                periodSeconds: 5
              readiness:
                initialDelaySeconds: 10
                port: 8080
                path: /actuator/health/readiness
                failureThreshold: 3
                successThreshold: 1
                timeoutSeconds: 3
                periodSeconds: 5
              ports:
                - name: http
                  value: 8080
              envs:
                - name: PASS
                  value: example
  project: default

This time, the Argo CD Application is created successfully. That's because the Datree analysis finished without reporting any violations.

Finally, we can just synchronize the app to deploy our image on Kubernetes.
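You can do that from the Argo CD UI, or with the argocd CLI (assuming you are already logged in with argocd login):

$ argocd app sync full-compliant-ok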


Final Thoughts

Argo CD allows us to install custom plugins. Thanks to that, we can integrate the Argo CD deployment process with the Datree policy check, which is performed each time a new version of a Kubernetes manifest is processed by Argo CD. It applies to all the apps that refer to the plugin.


Source: https://piotrminkowski.com/2022/08/29/continuous-delivery-on-kubernetes-with-argo-cd-and-datree/
