How to create a basic pipeline on GitLab to deploy on Kubernetes
NOTE: If you are only interested in an example pipeline, or if you want a more heavily commented one, go directly to the GitHub repository.
Continuous Delivery is one of the keystones of SRE work, so automating the deployment of any repository should be on your learning list if you want to build DevOps knowledge.
Quick insight: did you know that 75 percent of companies that deploy containerized applications use Kubernetes in a production environment? When I say Kubernetes, all the variants (AKS, EKS, GKE, you get it) are taken into account.
So, knowing how to set up a pipeline that deploys workloads to Kubernetes is a skill you must have.
Recently, I had to set up a pipeline in GitLab to build and deploy an application to a Kubernetes cluster.
Since I think CI/CD is a very important concept, I put all my notes together, corrected the typos and other spelling mistakes, and turned them into this blog post, so I can show you how to set up a basic but effective and modular CI/CD pipeline on GitLab.
GitLab CI is a very good solution: effective, easy to set up, and it can be interfaced with any existing tool, with more or less effort. Modularity is a very important point, because GitLab will be able to follow the life-cycle of your app and evolve with it.
GitLab also provides a lot of tools to complement your environment, such as a package registry and a container registry, both of which we will use in this pipeline. I won't go into the creation of an application, as that is not the goal of this blog post.
If you want to set up CI/CD on GitLab and deploy to a Kubernetes cluster, please make sure that you have a GitLab account and a Kubernetes cluster with sufficient permissions.
Having a private runner is not mandatory (but in a production environment it is really recommended).
I will explain the stages I put in my pipeline and then describe how the app is deployed in Kubernetes.
I did not use the GitLab Kubernetes integration: as I am working on a bare-metal cluster, I encountered issues linked to it.
NOTE: This blog is written for a Node.js app, but it can easily be adapted to any language; you can refer to the GitLab examples.
What to put in a Basic Pipeline
A Continuous Delivery pipeline, in a very minimal setup, should prepare the code (build or compile it, depending on your language), publish the code and/or container image to the registry of your choice, and deploy the prepared code to the desired destination.
Of course, as your application grows, your pipeline will too, and you might, for example, implement some security checks on your application and its dependencies.
I won't go deep into the topic, as I want to provide a practical introduction, but I do recommend this Red Hat blog post and the r/devops Reddit community, where you will find questions and answers, testimonies from SREs, and assistance when you are in trouble.
Creation of the .gitlab-ci.yml
To create a pipeline in GitLab, you have to create a .gitlab-ci.yml file in your project.
The pipeline described in this file will be executed when a push is made to the repository, or when triggered by any other means (as said before, GitLab is very modular and can be integrated with a lot of other solutions).
In this file, you will define all the stages of your pipeline; more info in the official doc.
Our pipeline will look like this:
- before_script: The code provided in before_script is executed before every stage.
- build: The code is built and passed to further stages as an artifact.
- publish_registry: In this stage, the container image is built and pushed to the GitLab registry.
- publish_registry_production: Only executed on the production branch; the image is built and pushed with the "latest" tag.
- deploy: Depending on the branch we are on, the newly created container is deployed to a cluster.
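These stages are declared with a top-level stages: key in the pipeline file, which fixes their execution order. A minimal skeleton for the stage names used in this post (a sketch; if you later give each deploy_{env} job its own stage name, add those names here too):

```yaml
# Order matters: GitLab runs stages top to bottom,
# and jobs within the same stage run in parallel.
stages:
  - build
  - publish_registry
  - publish_registry_production
  - deploy
```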
I won't put the full YAML into the article, only the stages we are working on.
before_script
As explained above, the before_script is executed before every stage, and you can expand it the way you want. In the example below, since we are using Node.js code, we populate the .npmrc file before every stage so that we can push to our private registry (in our case, the GitLab one).
WARNING: if you want to use the GitLab package registry, your package needs to be scoped. Please note that the variable NAME_OF_YOUR_ORG needs to be set by you; all other variables are populated by GitLab (here is a list of GitLab predefined variables).
before_script:
  - docker_tag=${CI_COMMIT_SHORT_SHA}_${CI_COMMIT_REF_NAME}
  - echo "@{NAME_OF_YOUR_ORG}:registry=https://gitlab.com/api/v4/packages/npm/" >> .npmrc
  - echo "//gitlab.com/api/v4/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
  - echo "//gitlab.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
The docker_tag is composed of the short SHA of the commit and the name of the branch.
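To see what the resulting tag looks like, here is a quick local simulation; the two values are made-up stand-ins for GitLab's predefined CI_COMMIT_SHORT_SHA and CI_COMMIT_REF_NAME variables:

```shell
# Stand-in values; in the pipeline these are populated by GitLab
CI_COMMIT_SHORT_SHA="a1b2c3d4"
CI_COMMIT_REF_NAME="feature-login"

# Same composition as in the before_script above
docker_tag="${CI_COMMIT_SHORT_SHA}_${CI_COMMIT_REF_NAME}"
echo "$docker_tag"   # → a1b2c3d4_feature-login
```

Because the branch name is part of the tag, every branch gets its own distinct images in the registry.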
build
The build stage depends heavily on the language you chose, but for Node.js code the stage might look like this:
build:
  stage: build
  script:
    - npm install
    - npm run build
    - npm test
    - npm run package
  artifacts:
    name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
    paths:
      - package/
      - deploy/
  except:
    - master
The artifacts define the folders or files that you can download when needed after the job has finished, and that can be passed on to the next stage. More info on artifacts.
publish_registry
In this stage, we build the image using Kaniko.
Kaniko is a tool that (to summarize quickly) builds a Docker image in user space, meaning it can build an image in an unprivileged environment (like a Kubernetes cluster or a GitLab runner).
publish_registry:
  stage: publish_registry
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$REGISTRY_USER\",\"password\":\"$REGISTRY_USER_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$docker_tag
This stage is executed on all branches, so you will have an image for every push. If you want to save on storage, you can activate this only on important branches (preprod/prod).
publish_registry_production
This stage is identical to the previous one, but we tag the image as latest, and it is only executed on production:
publish_registry_production:
  stage: publish_registry_production
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - docker_tag=${CI_COMMIT_SHORT_SHA}_${CI_COMMIT_REF_NAME}
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$REGISTRY_USER\",\"password\":\"$REGISTRY_USER_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:latest
  only:
    - production
deploy
The deploy stage is a little bit different: we define a "template" and extend it depending on the branch we are on.
This enables you to have one deployment process, defined in the template, while the deployment variables (deployment endpoint, user, password, key, etc.) can change depending on the environment you wish to deploy to.
First the template:
# .deploy is our "template", where we define the deployment process
.deploy:
  stage: deploy
  image: bitnami/kubectl
  dependencies:
    - publish_registry
  script:
    - echo $KUBECONFIG64 | base64 -d > $KUBECONFIG
    - kubectl set image deployment/$NAMESPACE-$CI_PROJECT_TITLE $NAMESPACE-$CI_PROJECT_TITLE-container=$CI_REGISTRY_IMAGE:$docker_tag
As you can see, we pass the base64-encoded kubeconfig to the GitLab runner, decode it, and write it to the path stored in the $KUBECONFIG environment variable. When executing commands, kubectl first honors the $KUBECONFIG environment variable and only then falls back to ~/.kube/config.
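To produce $KUBECONFIG64 in the first place, encode your kubeconfig locally and store the output as a GitLab CI/CD variable. A round-trip sketch using a dummy file (the file names here are examples):

```shell
# Create a dummy kubeconfig just for the demonstration
printf 'apiVersion: v1\nkind: Config\n' > kubeconfig-demo.yml

# Encode it (-w0 disables line wrapping; GNU coreutils — on macOS, plain `base64` already does this)
KUBECONFIG64=$(base64 -w0 kubeconfig-demo.yml)

# The runner does the reverse: decode it back into a usable file
echo "$KUBECONFIG64" | base64 -d > kubeconfig-decoded.yml
cmp -s kubeconfig-demo.yml kubeconfig-decoded.yml && echo "round-trip OK"
```

Paste the encoded string into a masked CI/CD variable so it never appears in the repository.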
This is the template; after that, you can define as many stages extending it as you want.
# Each stage named deploy_{something} is constructed like so:
deploy_dev:
  stage: deploy_dev
  extends: .deploy  # We refer to the template stage and extend it with variables
  variables:
    NAMESPACE: dev  # Name of the namespace, usually the branch name; master is the only exception, as it is the dev branch
    KUBECONFIG64: $DEV_KUBECONFIG64
  only:
    - master
I also define deploy_preprod and deploy_prod in my pipeline, but I didn't include them here to avoid cluttering the post; please check the GitHub repo.
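For a rough idea, a production variant would simply extend the same template with different variables. A sketch (the PROD_KUBECONFIG64 variable, the prod namespace, and the production branch name are assumptions to adapt to your setup):

```yaml
deploy_prod:
  stage: deploy_prod
  extends: .deploy                    # same deployment process as the .deploy template
  variables:
    NAMESPACE: prod                   # hypothetical namespace for production
    KUBECONFIG64: $PROD_KUBECONFIG64  # base64 kubeconfig stored as a CI/CD variable
  only:
    - production                      # only run on the production branch
```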
The pipeline is created!
You can extend it the way you want and need: add security tests on your Docker image, deploy via SSH to an EC2 instance (example provided on GitHub), you name it. That's why GitLab is a good solution: as long as you can script it, and run it in a container or via SSH, you can do it!
Creation of the Deployment on the Cluster
With the pipeline finished, we need to create the deployment and a service to expose it.
As said at the beginning, I heavily use the GitLab environment variables, so be careful to respect the names you set in the pipeline.
In the deploy stage, kubectl will update the image of the deployment named $NAMESPACE-$CI_PROJECT_TITLE.
So, in my case, the GitLab runner will look for the deployment named dev-nodejstest. The $NAMESPACE is set in the stage, and $CI_PROJECT_TITLE is the name of the project.
The full YAML is available on GitHub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-nodejstest
[…]
spec:
  containers:
    - image: popopame/nodetest:latest
      imagePullPolicy: Always
      name: dev-nodejstest-container
      ports:
        - containerPort: 8080
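To expose the deployment inside the cluster, a Service selects the deployment's pods. This is only a sketch: the service name and the app: dev-nodejstest selector label are assumptions and must match the labels your deployment actually sets (the real manifests are on the GitHub repo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dev-nodejstest-service   # hypothetical name
spec:
  selector:
    app: dev-nodejstest          # must match the pod labels of the deployment
  ports:
    - port: 80                   # port exposed by the service inside the cluster
      targetPort: 8080           # containerPort of the app
```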
Kubernetes will detect container failures (for example if the container won't start, or if it is in a CrashLoopBackOff state) and, during a rolling update, will keep the previous version running rather than completing a broken rollout.
However, it will not detect whether a component of your app is working properly. For that, you can set probes to check that your app is ready, alive, and started properly (readinessProbes, livenessProbes, and startupProbes respectively); the documentation about probes is here:
For example, to attest that our app is ready to serve traffic, we can add a readinessProbe like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-nodejstest
[…]
spec:
  containers:
    - image: popopame/nodetest:latest
      imagePullPolicy: Always
      name: dev-nodejstest-container
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:                 # Kubernetes will make an HTTP request
          path: /                # path on which the HTTP request is made
          port: 8080             # port on which the HTTP request is made
        initialDelaySeconds: 10  # Kubernetes waits 10 seconds before the first probe
        periodSeconds: 30        # Kubernetes probes every 30 seconds
Once this is done, you can push your code, and the pipeline, once finished, will roll your code out to the deployment.
Please note that adding a probe is purely optional, but I do recommend setting one, as well as resource limits.
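As for the limits: these are resource requests and limits declared on the container. A sketch with placeholder values that you should tune for your own app:

```yaml
# Goes under the container definition in the deployment manifest
resources:
  requests:
    cpu: 100m        # CPU share the scheduler reserves for the pod
    memory: 128Mi    # memory the scheduler reserves for the pod
  limits:
    cpu: 500m        # hard CPU cap (the container is throttled above this)
    memory: 256Mi    # hard memory cap (the container is OOM-killed above this)
```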
Where to go from here ?
Congratulations, you should now have a working pipeline! Whether it is your first pipeline, or you needed some examples to complete an already existing one, you can be proud of your work!
But, as said before, the pipeline we made is really simple, and adding more stages can be (and will be) needed as your app evolves.
For example, you can add security tests on your code: GitLab provides Static Application Security Testing, and if you have a Gold or Ultimate tier account, you can add container scanning, dependency scanning, and more.
I can't give an exhaustive list of things to add; talk with your dev team and read up on security to discover which stages and tests might be the most relevant!
Hope this post has helped someone!
Godspeed!