Remember Ingress
Hello back! Long winter evenings are really conducive to just taking a seat and tinkering with things while having a bright computer screen shine right in your face to keep you awake way past your bedtime. It is also the time for me to realise I let a chunk of my homelab duties fall by the wayside, and yet again I find myself in a situation where it will be easier (and probably safer) to just bootstrap a new K8S cluster from scratch, rather than try patching the existing one to bring it up to date. Like, you know, the pros do it in the cloud. Pets vs cattle, and all that… My current cluster had a really good run, my oldest node is well over 2 years old at this point!
~ ❯ k get nodes
NAME    STATUS   ROLES                  AGE     VERSION
k8s01   Ready    control-plane,master   2y51d   v1.27.2+k3s1
k8s03   Ready    worker                 2y50d   v1.27.2+k3s1
k8s05   Ready    storage                2y50d   v1.27.2+k3s1
k8s11   Ready    control-plane,master   594d    v1.27.2+k3s1
k8s13   Ready    storage                242d    v1.27.2+k3s1
k8s14   Ready    worker                 14d     v1.27.2+k3s1
k8s21   Ready    control-plane,master   46d     v1.27.2+k3s1
k8s22   Ready    worker                 45d     v1.27.2+k3s1
k8s23   Ready    storage                45d     v1.27.2+k3s1
Still, I am running k3s version 1.27.2, which is a few minor versions behind; the latest one is currently v1.31.4.
Memory
Now, because the last time I set up a new cluster was over two years ago, I don’t remember exactly what steps I needed to take to configure certain aspects of the cluster. Things like ingress and load balancers you only really configure once at the beginning, then you don’t need to touch them pretty much ever again. In my current cluster I am using MetalLB as the L2 load balancer, then Traefik as the ingress controller. Both of these components have also moved on since the last time I set them up, and despite my best efforts I have fallen behind a few versions on those as well. Well, and here we are. At least this time I am taking notes, so the next time I do it I can refer to this page. This is a gift from me to my future self.
MetalLB
I won’t copy-paste the entire values.yaml for metallb. I am using the official chart in its vanilla state, not changing any configuration whatsoever; I am keeping all default values as found here.
As I am writing this, I am using version 0.14.9 of the metallb chart.
apiVersion: v2
name: metallb
description: Metallb chart, configuring bare metal load balancer
type: application
version: 0.1.0
dependencies:
  - name: metallb
    version: 0.14.9
    repository: https://metallb.github.io/metallb
It seems to me that, unlike in older versions of metallb, the actual configuration for the load balancer gets installed via a couple of custom resources, which are not generated via changes to the values.yaml. Therefore I defined them as follows to match my use case:
I created addresspool.yaml, which defines a default pool of IP addresses that I want to use for my LoadBalancer-type K8S service entries. Since I am using a non-standard IP block, I added each address separately as a list item, masked at /32.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.10/32
    - 192.168.0.11/32
    - 192.168.0.12/32
    - 192.168.0.13/32
    - 192.168.0.14/32
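To sanity-check the pool, a throwaway LoadBalancer service can be pointed at it. This is just a sketch; the app name is made up, and the annotation pinning the service to a named pool is optional since MetalLB assigns from available pools by default.

```yaml
# Hypothetical test service; MetalLB should assign it one of the /32
# addresses from the "default" pool defined above.
apiVersion: v1
kind: Service
metadata:
  name: whoami-test
  annotations:
    # Optional: explicitly pin this service to the "default" pool.
    metallb.universe.tf/address-pool: default
spec:
  type: LoadBalancer
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```

Once applied, the service should show an address from the pool under EXTERNAL-IP.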
At first I thought that was it, but it turns out that MetalLB will not configure the speaker pods to advertise my service IPs unless I explicitly tell it to. For that I created another custom resource, of kind L2Advertisement. It enables the virtual IP address advertisement feature for each linked address pool. In this case we are linking the default address pool, as defined in the previous step.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
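For future reference, the v1beta1 L2Advertisement spec also allows narrowing where announcements happen; something like the following (node label value and NIC name are made up for illustration) would restrict advertisement to specific nodes and interfaces:

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: restricted
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
  # Only announce from nodes matching this selector (hypothetical node name).
  nodeSelectors:
    - matchLabels:
        kubernetes.io/hostname: k8s22
  # Only announce on this interface (hypothetical NIC name).
  interfaces:
    - eth0
```

I am leaving both fields out of my actual config, since announcing from any speaker on any interface is fine for my network.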
Traefik
In the course of messing around and figuring out what to do next, at this point I started configuring Traefik and its dashboard, to make sure that the LB services were working correctly and the MetalLB IPs were getting advertised. But knowing what I know now, I wanted to make sure the dashboard is exposed over https and has a proper certificate. So, if you’re following step-by-step, this is the time to create a TLS cert secret to use. I am using a wildcard cert, which I call wildcard-cert, like so:
kubectl create secret tls wildcard-cert --cert ./fullchain.pem --key ./privkey.pem -n default
I won’t tell you how to obtain your certs, that’s its own topic for another time…
Just as with MetalLB, I am using the official Traefik helm chart, this time version 33.2.1.
apiVersion: v2
name: traefik-private
description: Traefik chart, configuring private ingress controller
type: application
version: 0.1.0
dependencies:
  - name: traefik
    version: 33.2.1
    repository: https://traefik.github.io/charts
Again, I am basing my chart on the default configs. I only made a couple of minor changes to the values.yaml. (The full file can be seen here.)
I configured this Traefik instance to be the default ingress, calling it private as this one will be resolvable only on the local network. I usually create a second ingress dedicated to external traffic, for services exposed to the internet.
traefik:
  {...}
  ingressClass:
    enabled: true
    isDefaultClass: true
    name: "private"
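With isDefaultClass set, plain Ingress objects get picked up automatically, but spelling out the class keeps things unambiguous. A minimal sketch (the service name and hostname are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  # Optional here, since "private" is the default class, but explicit is nice.
  ingressClassName: private
  rules:
    - host: whoami.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```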
I modified the dashboard ingress route block. I added the “websecure” entrypoint both to test it and to have a TLS-protected endpoint where I can check the dashboard. The “web” entrypoint is left there too, though I have yet to set up the https redirect middleware. I am linking the TLS cert that I created in the previous step.
traefik:
  {...}
  ingressRoute:
    dashboard:
      enabled: true
      annotations: {}
      labels: {}
      matchRule: Host(`treafik-private-dashboard.domain.com`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      services:
        - name: api@internal
          kind: TraefikService
      entryPoints: ["websecure","web"]
      middlewares: []
      tls:
        secretName: wildcard-cert
        domains:
          - main: treafik-private-dashboard.domain.com
Lastly, I enable the dashboard using the additionalArguments block.
traefik:
  {...}
  additionalArguments:
    - "--api.dashboard=true"
    - "--log.level=INFO"
    - "--api.insecure=false"
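A note for future me, since the https redirect middleware is still on the to-do list: a rough, untested sketch of it using Traefik’s redirectScheme middleware could look like this.

```yaml
# Hypothetical redirect middleware; once created, it could be referenced
# from the dashboard ingressRoute's (currently empty) middlewares list.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
```

Alternatively, the redirect can be configured chart-wide on the “web” entrypoint itself, which would save attaching the middleware to every route.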
2024-12-26 17:30