Comparison between Helm and Kustomize for Kubernetes yaml management

Masato Naka
Jul 4, 2021


Helm and Kustomize are often compared with each other in the context of managing Kubernetes manifest files. Although the two tools have similar features, they are fundamentally different. In this post, I’ll compare them from several points of view with a sample application.

Basic Comparison

Although I compare them here, the two tools are quite different, so I can understand why some people would say Helm and Kustomize are not really comparable.

Helm (Package manager)

Pros:

  1. Easy to distribute
  2. Provides not only templating but also hooks, rollbacks, and packaging
  3. Extensible
  4. Supports if conditions and loops
  5. Supports custom-defined templates in _helpers.tpl
  6. Many official charts already exist
  7. Restricts the changeable parts to the variables exposed in Values.yaml
  8. Lint and test features

Cons:

  1. Low readability of templates
  2. Overhead for simple deployments
  3. Extra learning cost due to the additional abstraction layer
  4. Need to update the chart itself if you want to configure something the existing chart doesn’t allow you to change

Deployment for multiple environments:

  • Prepare a values file for each environment

Kustomize

Pros:

  1. Included in kubectl (see the Kubernetes 1.14 release announcement)
  2. Easy to validate as it’s just plain yaml files
  3. Simply patches multiple yaml files
  4. Overlays can update any field

Cons:

  1. The kustomize version bundled in kubectl is very old (v2.0.3); v4.1.2 support arrives with kubectl 1.21 (CHANGELOG-1.21.md#kustomize-updates-in-kubectl)
  2. Hard to know what the final change on the target resource will be just from looking at a change in base or overlays
  3. Does not follow the DRY principle
  4. No versioning feature

Deployment for multiple environments:

  • Overwrite base with `overlays/<env>`, which is created for each environment

Comparison of creating a Helm chart and a Kustomize configuration

Basic Helm chart usage (create a helm chart)

Initialize a helm chart (e.g. helm create helm-example).

Edit the auto-created Values.yaml and the files under templates to deploy your application.

You can also use _helpers.tpl to define custom logic to be used in your manifest files under templates.

Use the variables defined in Values.yaml in your templates.
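As a rough sketch, a variable defined in Values.yaml can be referenced from a template like this (the image fields below are illustrative, not taken from the sample chart; the fullname helper is what helm create generates):

# Values.yaml
replicaCount: 1
image:
  repository: nginx
  tag: "1.21"

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-example.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"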

After editing templates and Values.yaml, you can use the dry-run feature to confirm that the new helm chart is valid.

helm install helm-example --debug --dry-run ./helm-example

You can also run the linter with:

helm lint helm-example

And finally, you can actually deploy your application with the created chart (deploying an application with a helm chart is called installing the chart) using the following command. You can also deploy it into a specific namespace with --namespace <namespace>.

helm install helm-example --debug ./helm-example

Now you can check the installed release with the following helm command.

helm ls

You can also write a test for your helm application by putting what to test under the templates/tests directory. When you run the test, a new pod is created and the test is executed against the installed release.

helm test helm-example
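For reference, a test is just a pod manifest with a helm test hook annotation. A minimal sketch, assuming the chart exposes a Service on .Values.service.port, could look like this:

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "helm-example.fullname" . }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "helm-example.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never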

In the last step, you package the tested helm application with the following helm command:

helm package helm-example

This would generate a package called helm-example-0.1.0.tgz.

You can publish your helm chart via a repository. In this post, I use a GitHub repository for it (e.g. https://github.com/nakamasato/helm-charts-repo).

helm repo index ./ --url https://nakamasato.github.io/helm-charts-repo

With this command, index.yaml will be created, which contains the information about all the helm charts in this repository. Next, you push index.yaml together with the helm-example-0.1.0.tgz generated in the previous step to the chart repository.
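The generated index.yaml looks roughly like this (the digest and timestamps below are placeholders):

apiVersion: v1
entries:
  helm-example:
    - apiVersion: v2
      created: "2021-07-04T00:00:00+09:00"
      digest: <sha256 of helm-example-0.1.0.tgz>
      name: helm-example
      urls:
        - https://nakamasato.github.io/helm-charts-repo/helm-example-0.1.0.tgz
      version: 0.1.0
generated: "2021-07-04T00:00:00+09:00"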

You can install your published helm chart with the following commands:

helm repo add nakamasato https://nakamasato.github.io/helm-charts-repo
helm repo update # update the repository info

The first command is to add the information about your repository to your local helm and the second command is to update the repository information.

Now, you can search for your own chart with the helm search command:

helm search repo naka

And install it with:

helm install example-from-my-repo nakamasato/helm-example

If you want to override some of the variables in Values.yaml, you can prepare your own values file and specify it when installing or upgrading:

helm upgrade -f values-prod.yaml helm-example nakamasato/helm-example -n helm-prod
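Such a values file only needs to contain the environment-specific overrides, for example (the keys below are illustrative):

# values-prod.yaml
replicaCount: 3
image:
  tag: "1.21"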

Basic Kustomize usage

The first step is to create a directory. In this case, I use kustomize-example.

mkdir -p kustomize-example/{base,overlays/dev,overlays/prod} && cd kustomize-example

If you check the created directories with the tree command:

tree
.
├── base
└── overlays
    ├── dev
    └── prod

4 directories, 0 files

We’ll put the common files in the base directory, and the environment-specific files used to override them in each environment’s directory under the overlays directory.

First, we create the common resource files in the base directory and list them in kustomization.yaml.

Let’s look at a few frequently used fields in kustomization.yaml (a sketch follows the list):

  1. commonLabels sets common labels that will be added to all the resources that kustomize renders.
  2. resources is the list of yaml files to be managed by Kustomize.
  3. configMapGenerator and secretGenerator are also useful for managing ConfigMaps and Secrets in Kustomize (for more details, see Secrets and ConfigMaps). One common usage of configMapGenerator is to generate a ConfigMap from a file with a hash suffix in the ConfigMap name, which triggers a rollout of the referencing workload resource (e.g. Deployment) when the content of the ConfigMap changes.
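Putting these together, a minimal base/kustomization.yaml could look like the following sketch (the resource file names, label, and config file are illustrative, not taken from the sample repository):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: kustomize-example
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: app-config
    files:
      - config.properties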

The next step is to create overlays for each environment (dev and prod in this example).

Example 1: Specify an image for each environment.

Example 2: Change the number of replicas.

  1. Add deployment.yaml in overlays/<envdir>
  2. Include it in kustomization.yaml with patches (see the sketch below).
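A dev overlay covering both examples could look roughly like this (image name, tag, replica count, and file names are illustrative; older kustomize versions use bases and patchesStrategicMerge instead of resources and patches):

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: app
    newTag: "1.21-dev"
patches:
  - path: deployment.yaml

# overlays/dev/deployment.yaml (patch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1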

Finally, we deploy the application with kustomize for the two environments, dev and prod.

Dev:

kubectl apply -k overlays/dev

Prod:

kubectl apply -k overlays/prod

Continuous Delivery (CD)

ArgoCD supports both Helm and Kustomize. In this section, I’ll introduce a very basic ArgoCD configuration for Helm and Kustomize.

ArgoCD Application configuration for a Helm chart

Sample Code: https://github.com/nakamasato/kubernetes-training/tree/master/helm-vs-kustomize/argocd/helm

  1. Specify the code repository, directory, and revision of the Helm chart in spec.source.repoURL, spec.source.path, and spec.source.targetRevision respectively (see the sketch after this list). One point to be aware of is that ArgoCD doesn’t install a packaged chart from the chart repository; it renders and applies the original yaml files.
  2. You can specify an environment-specific values file with spec.source.helm.valueFiles. (I thought it would be nice if we could specify a chart repository and a values file just as the helm command supports.)
  3. destination and syncPolicy are common to Helm and Kustomize.
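A sketch of such an Application for the dev environment might look like this (the path, names, namespaces, and sync policy are illustrative; the repository URL is the one from the sample code):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: helm-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/nakamasato/kubernetes-training
    targetRevision: master
    path: helm-vs-kustomize/helm/helm-example
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: helm-dev
  syncPolicy:
    automated: {}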

ArgoCD Application configuration with Kustomize

Sample code: https://github.com/nakamasato/kubernetes-training/tree/master/helm-vs-kustomize/argocd/kustomize

  1. Specify the repository which contains the target kustomization, its path, and the revision in spec.source.repoURL, spec.source.path, and spec.source.targetRevision respectively (see the sketch after this list).
  2. destination and syncPolicy are common to Helm and Kustomize.
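A corresponding sketch for the Kustomize case (again, names, paths, and namespaces are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kustomize-dev
  namespace: argocd
spec:
  project: kustomize
  source:
    repoURL: https://github.com/nakamasato/kubernetes-training
    targetRevision: master
    path: helm-vs-kustomize/kustomize-example/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: kustomize-dev
  syncPolicy:
    automated: {}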

Final Example: Deploy a sample application to multiple environments (dev and prod) with Helm and Kustomize using ArgoCD

The application to deploy here consists of a Deployment, a ConfigMap, and a Secret. As the main objective of this post is to compare Helm and Kustomize, the credentials in the Secret are not encrypted. The overview is as follows:

1. Install ArgoCD

kubectl create namespace argocd 
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.0.3/manifests/install.yaml

2. Prepare application dependency (Deployment and Service for MySQL)

kubectl create ns database
kubectl -n database apply -f https://raw.githubusercontent.com/nakamasato/kubernetes-training/master/helm-vs-kustomize/dependencies/mysql/mysql.yaml

3. Apply ArgoCD application with Helm

Sample Code: https://github.com/nakamasato/kubernetes-training/tree/master/helm-vs-kustomize/argocd/helm

kubectl apply -f argocd/helm

4. Apply ArgoCD application with Kustomize

Sample code: https://github.com/nakamasato/kubernetes-training/tree/master/helm-vs-kustomize/argocd/kustomize

argocd/kustomize contains an ArgoCD AppProject called kustomize and two ArgoCD Applications, kustomize-dev and kustomize-prod.

kubectl apply -f argocd/kustomize

5. Confirm the applications on ArgoCD

Get the ArgoCD default admin password:

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 --decode

Connect to ArgoCD using port forwarding:

kubectl port-forward svc/argocd-server 8080:80 -n argocd

Personal Preference after comparison

Once you have created a Helm chart, developing and deploying the application seems easier.

Good points:

  1. Easy to distribute. → Much easier for a new member to catch up on the application under development.
  2. There are several CD tools, like ArgoCD, that support Helm. If you’re already using one of those supported CD tools, you don’t need to change the existing release flow much.
  3. As Helm is a package manager, you can manage application versions including yaml file changes.
  4. You only need to review chart updates when a significant change requires upgrading the chart version.
  5. Follows the DRY principle. We can avoid the very similar files in overlays that are necessary with Kustomize.

Need to consider:

  1. Secret management. (In this post, I didn’t cover proper credential management for Helm or Kustomize. I haven’t studied the best practices for Helm. A possible solution for Kustomize is sealed-secrets, which cannot be used directly with Helm; helm-secrets is a possible solution for Helm.)
  2. configMapGenerator and secretGenerator, very useful features in Kustomize, are not available in Helm.

Possible solutions for secret management:

  1. kubernetes-external-secrets
  2. helm-secrets
  3. aws-secret-operator
  4. HashiCorp Vault
