Refact Helm Chart
Let’s get charting! I honestly wasn’t expecting to be writing this post quite so soon, but it turns out that the library chart we adapted from the k8s-at-home project is extremely versatile, and that their old GitHub Actions translate very well to my new Teapot instance. So, let’s have a look at what it takes to create an application chart and publish it with our awesome new setup.
Starting point
The starting point is the hand-written set of Kubernetes templates that we wrapped into a Helm chart to deploy in our kubes and gpus tutorial.
```yaml
# Source: refact/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: refact
  labels:
    helm.sh/chart: refact-0.1.0
    app.kubernetes.io/name: refact
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 8008
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: refact
    app.kubernetes.io/instance: release-name
---
# Source: refact/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: refact
  labels:
    helm.sh/chart: refact-0.1.0
    app.kubernetes.io/name: refact
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: Helm
spec:
  revisionHistoryLimit: 3
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: refact
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: refact
        app.kubernetes.io/instance: release-name
    spec:
      hostNetwork: true
      runtimeClassName: nvidia # IMPORTANT!
      dnsPolicy: ClusterFirstWithHostNet
      enableServiceLinks: true
      containers:
        - name: refact
          image: "smallcloud/refact_self_hosting:latest"
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
          env:
            - name: "TZ"
              value: "UTC+02:00"
          ports:
            - name: http
              containerPort: 8008
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /perm_storage
      affinity: # This is how we tell it to only spawn on the node with our GPU
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu
                    operator: Exists
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: refact
---
# Source: refact/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: refact
  labels:
    helm.sh/chart: refact-0.1.0
    app.kubernetes.io/name: refact
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: # insert your own ingress class to use
    # insert any other annotations you need for your ingress controller
spec:
  tls:
    - hosts:
        - refact.example.com
      secretName: "wildcard-cert"
  rules:
    - host: refact.example.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: refact
                port:
                  number: 8008
```
The new base
So, to host my Helm charts based on the common library chart, I made a new repository: https://teapot.octopusx.de/octocloudlab/chart-catalog. Nothing special here, basically just a charts folder where all of our, yes you guessed it, charts will live. Inside that folder we create a new folder called refact.
```
.
├── ATTRIBUTION.md
├── charts
│   └── refact
│       ├── Chart.lock
│       ├── charts
│       │   └── common-4.5.2.tgz
│       ├── Chart.yaml
│       ├── README.md
│       ├── templates
│       │   ├── common.yaml
│       │   └── NOTES.txt
│       └── values.yaml
├── LICENSE
└── README.md
```
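For orientation, here is a sketch of what the Chart.yaml for this layout could look like. The description wording is made up, and the version numbers are inferred from the helm.sh/chart: refact-0.1.0 and app.kubernetes.io/version: v1.1.0 labels in the rendered output further down, so treat this as illustrative rather than the exact file:

```yaml
# charts/refact/Chart.yaml (illustrative sketch)
apiVersion: v2
name: refact
description: Self-hosted Refact AI coding assistant  # wording assumed
type: application
version: 0.1.0
appVersion: v1.1.0
dependencies:
  - name: common
    version: 4.5.2
    repository: https://git.octopusx.de/api/packages/octopusx/helm
```

Running helm dependency update inside charts/refact then pulls the library chart into charts/common-4.5.2.tgz and writes the Chart.lock you can see in the tree above.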
What I normally did in the past when including a downstream chart was add it to my Chart.yaml like this:
```yaml
dependencies:
  - name: common
    version: 4.5.2
    repository: https://git.octopusx.de/api/packages/octopusx/helm
```
Then I would add a common block (in this case) to my values.yaml file, and all of the configuration related to this dependency would live inside it:
```yaml
common:
  image:
    repository: smallcloud/refact_self_hosting
```
Inspecting the original charts in the k8s-at-home repo, however, shows that they aren’t doing that: all of the parent chart values are defined at the top scope. How do they do that? The answer is in that extra templates/common.yaml file, so let’s take a look.
```yaml
{{ include "common.all" . }}
```
This one-line template renders the common.all template defined in the common child chart, which itself consumes the root .Values scope, so we can skip the extra nesting that a dependency import normally requires. Neat.
```yaml
# common: <- no longer needed
image:
  repository: smallcloud/refact_self_hosting
```
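To sketch how the rest of the rendered output maps onto this flattened scope, a fuller values.yaml might look roughly like the following. The exact keys depend on the common chart version, so treat this as an illustration of the k8s-at-home value layout (service.main, ingress.main, and so on), not the chart’s actual file:

```yaml
# Illustrative values.yaml sketch for the common library chart's top-level scope
image:
  repository: smallcloud/refact_self_hosting
  tag: latest

service:
  main:
    ports:
      http:
        port: 8008

ingress:
  main:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: traefik-210
    hosts:
      - host: refact.example.com
        paths:
          - path: /
    tls:
      - secretName: wildcard-cert
        hosts:
          - refact.example.com

# Pin the pod to the GPU node, as in the hand-written deployment
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu
              operator: Exists
```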
Publishing the chart
When I started work on this part I expected complications, mostly because I am not very familiar with the Actions CI/CD model and I wasn’t sure what I needed to do to get it to deploy every chart in the charts directory whenever there is a change. Aaaaaand… it wasn’t a problem at all. The same action I used in the Teapot blog post can be passed a list of directories as a wildcard, and it will iterate over them and run the release flow on each one.
```yaml
name: Publishing Charts
run-name: Publishing Helm Charts to the Teapot Registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "charts/**" # So we updated the trigger slightly
jobs:
  Explore-Gitea-Actions:
    runs-on: ubuntu-22.04:docker://node:18-bullseye
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Push Helm Chart to Gitea Registry
        uses: bsord/helm-push@v4.1.0
        with:
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PUBLIC_PACKAGE_WRITE }}
          registry-url: 'https://teapot.octopusx.de/api/packages/octocloudlab/helm'
          force: true
          chart-folder: charts/* # And another change in this line, and that's it
```
Achievement unlocked!
Now we can use the standardized values.yaml layout as long as we are using the common library chart. The resulting set of Kubernetes templates is functionally identical, with a few extra bits that I hadn’t bothered to include before, such as a full complement of labels across all the objects as well as status probes in the deployment.
/chart-catalog/c/refact main ❯ helm template .

```yaml
---
# Source: refact/templates/common.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-refact
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: refact
    app.kubernetes.io/version: v1.1.0
    helm.sh/chart: refact-0.1.0
  annotations:
spec:
  type: ClusterIP
  ports:
    - port: 8008
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: refact
    app.kubernetes.io/instance: release-name
---
# Source: refact/templates/common.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-refact
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: refact
    app.kubernetes.io/version: v1.1.0
    helm.sh/chart: refact-0.1.0
spec:
  revisionHistoryLimit: 3
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: refact
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: refact
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
      runtimeClassName: nvidia
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      containers:
        - name: release-name-refact
          image: "smallcloud/refact_self_hosting:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8008
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 8008
            initialDelaySeconds: 0
            failureThreshold: 3
            timeoutSeconds: 1
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 8008
            initialDelaySeconds: 0
            failureThreshold: 3
            timeoutSeconds: 1
            periodSeconds: 10
          startupProbe:
            tcpSocket:
              port: 8008
            initialDelaySeconds: 0
            failureThreshold: 30
            timeoutSeconds: 1
            periodSeconds: 5
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu
                    operator: Exists
---
# Source: refact/templates/common.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-refact
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: refact
    app.kubernetes.io/version: v1.1.0
    helm.sh/chart: refact-0.1.0
  annotations:
    kubernetes.io/ingress.class: traefik-210
    traefik.ingress.kubernetes.io/router.entrypoints: websecure,web
    traefik.ingress.kubernetes.io/router.middlewares: default-redirect-https@kubernetescrd
spec:
  tls:
    - hosts:
        - "refact.octopusx.de"
      secretName: "wildcard-cert"
  rules:
    - host: "refact.octopusx.de"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: release-name-refact
                port:
                  number: 8008
```
If you want to use my new Refact chart in your own Kubernetes cluster, feel free to add my new Teapot repository like so:
```shell
helm repo add teapot https://teapot.octopusx.de/api/packages/octocloudlab/helm
helm repo update
```
Then install it with:
```shell
helm install refact teapot/refact --values <your own values file>.yaml
```
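For instance, a minimal override file (a hypothetical my-values.yaml, with a made-up host) could set just the ingress host and leave the rest of the chart defaults alone, assuming the common chart’s ingress.main layout:

```yaml
# my-values.yaml (hypothetical example overrides)
ingress:
  main:
    enabled: true
    hosts:
      - host: refact.example.com
        paths:
          - path: /
```

and then install with helm install refact teapot/refact --values my-values.yaml.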
Enjoy!
2023-11-06 18:19