Funky Traefik Dashboards
I have used Traefik for a number of years as my go-to ingress controller for K8S deployments big and small. At the same time, I always feel like I barely know the tool, and the more intricate and arcane parts of its mechanisms elude my understanding. I must admit that part of it comes from my general unwillingness to rely on CRDs where, in my opinion, standard K8S objects should suffice. Otherwise, why bother with the common interface?
Nevertheless, its stability and performance can't be denied, and so here I am, taking to this text-based medium once again to jot down some notes, for the benefit of my future self and anyone else who may stumble upon this page.
Splitting access
In my homelab I host a single multi-node K8S cluster that hosts services available exclusively on my LAN as well as services reachable from the Internet. To achieve this I create separate ingress controllers, one for each use case. This allows me to forward traffic from the Internet to one of them, on its dedicated IP address, but not to the other, which sits behind a different load balancer IP on my local net.
By default the Traefik K8S helm chart will create a service, which needs to be set to type LoadBalancer. As mentioned in the previous blog entry, this will trigger MetalLB to assign a dedicated virtual IP address to our new service, giving our new ingress instance its own address. Let's say we did that and called this instance of the Traefik helm chart traefik-private. All examples below will use the official Traefik helm chart.
~/Code/k3s/traefik-private main ❯ tree .
.
├── Chart.lock
├── charts
│   └── traefik-34.1.0.tgz
├── Chart.yaml
├── templates
│   └── dashboard.yaml
└── values.yaml

3 directories, 5 files
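For context, this is a small wrapper chart that pulls the official Traefik chart in as a dependency. A minimal Chart.yaml along these lines would produce the tree above (the repository URL is the official Traefik charts repo; the name and version fields are my own choices):

# Chart.yaml (sketch): wrap the official Traefik chart as a dependency,
# so our values.yaml configures it under the top-level "traefik:" key
apiVersion: v2
name: traefik-private
version: 0.1.0
dependencies:
  - name: traefik
    version: 34.1.0
    repository: https://traefik.github.io/charts

This is also why every values.yaml snippet below nests everything under a traefik: key — subchart values live under the dependency's name.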
Since we will be creating a second ingress controller alongside it, we need to take care of a few configuration steps in the values.yaml. We need to make this ingress unique, so that we can assign each application in our cluster to one or the other ingress controller. We can do this by naming each ingress class and setting one of them as the default. In my case, the private ingress will be the default.
traefik:
  {...}
  # Make service type LoadBalancer to give it an IP address on our local network
  service:
    enabled: true
    single: true
    type: LoadBalancer
  # Configure the ingress class kubernetes object itself
  ingressClass:
    enabled: true
    isDefaultClass: true
    name: "traefik-private"
  # Also configure the kubernetesIngress provider, otherwise Traefik won't consume the relevant ingress entries
  providers:
    kubernetesIngress:
      enabled: true
      ingressClass: "traefik-private"
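Assuming the chart has been installed, a quick sanity check is to confirm MetalLB actually handed the service an external IP. A sketch of what that looks like, with the address matching the diagram later in this post (the service name, ports, IPs and namespace all depend on your setup):

$ kubectl get svc traefik-private
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
traefik-private   LoadBalancer   10.43.18.201   10.0.0.5      80:31042/TCP,443:31043/TCP   2m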
Now we need to make a copy of this chart, rename it to something like traefik-public, modify the values.yaml slightly, then install it on our cluster as well, and boom: we have two ingresses, each with its own IP address.
traefik:
  {...}
  # Configure the ingress class kubernetes object itself
  ingressClass:
    enabled: true
    isDefaultClass: false
    name: "traefik-public"
  # Also configure the kubernetesIngress provider, otherwise Traefik won't consume the relevant ingress entries
  providers:
    kubernetesIngress:
      enabled: true
      ingressClass: "traefik-public"
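For completeness, installing the two copies might look something like this; the release and namespace names here are my own convention, nothing in the chart requires them:

# Fetch the vendored chart tarball, then install each wrapper chart
helm dependency update ./traefik-private
helm install traefik-private ./traefik-private -n traefik-private --create-namespace

helm dependency update ./traefik-public
helm install traefik-public ./traefik-public -n traefik-public --create-namespace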
Voila, this is what we’ve got so far in graphical form. Two targets that we have the freedom to configure routing to and from as we please.
Getting in
So now to the whole reason why we bother splitting these ingresses at the network level. When I am out and about, I still want access to the services my homelab provides. This is usually done with some sort of VPN. The basic (or actually not so basic) outline of how to achieve Wireguard-based VPN access to your infra can, yet again, be found in one of my previous posts. This works very well for the vast majority of services. What it does not work for are services that need a public presence. For example, a Matrix Synapse server, a Mastodon server, or even a Nextcloud server, in case you want to share links to files with people the way you can with popular commercial cloud-based alternatives. The solution is definitely not to just expose everything, so let's be smart about it.
This rather complicated graph (don't bite, it was late at night when I drew it) will hopefully let you digest the next few paragraphs a little more easily. In order to allow public access to services running inside your homelab, you need to find a way to give them public IP addresses. Since we are not opening ports or doing any nonsense that would require your ISP to give you a publicly routable IP, we are going to take advantage of the infra we've already got, plus a little extra.
In this setup we have three important entities: the subnet router within your home network, the Wireguard server in the cloud somewhere, and a reverse proxy server, also in the cloud; the last two have public IPs. Logically, you could combine the last two, but I like to keep them separate, both to make the setup cleaner and to separate the two concerns for security reasons.
The Wireguard network in question is a hub-and-spoke design, meaning we have a central server that every other node connects to, and via which every node can therefore reach every other node. The subnet router and reverse proxy are, although special types of nodes, still just nodes, addressable directly on the 10.10.0.0/24 subnet as per the diagram. As a VPN user I also get assigned an address on this subnet, and can therefore access all of the spokes once connected. How does my roadwarrior machine know how to find the homelab subnet where all of my services reside, i.e. 10.0.0.0/24? Two steps. The Wireguard server advertises that the 10.0.0.0/24 subnet is available via the subnet router node, in this case 10.10.0.5 (in Wireguard terms, via that peer's AllowedIPs), and its iptables rules allow it to forward the traffic along. The subnet router in turn is configured to forward all traffic coming from the Wireguard host (10.10.0.1) targeted at the subnet 10.0.0.0/24 out via its interface on said subnet, so 10.0.0.7. As a result of all this, my roadwarrior node can access both ingresses (10.0.0.5 and 10.0.0.6) and any other hosts on this subnet, including the local DNS server I host, which translates the URLs to their local addresses.
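To make those two steps concrete, here is a sketch of what the relevant part of the Wireguard server's config might look like. Keys are elided, the interface name wg0 and the exact iptables rules are assumptions about the setup, and the addresses match the diagram:

# /etc/wireguard/wg0.conf on the Wireguard server (10.10.0.1) -- a sketch
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# Let the server forward packets between peers (and on towards the homelab);
# %i expands to the interface name (wg0) in wg-quick hooks
PostUp = sysctl -w net.ipv4.ip_forward=1; iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT

# The subnet router peer: listing 10.0.0.0/24 here is what routes
# homelab-bound traffic via this peer
[Peer]
PublicKey = <subnet-router-public-key>
AllowedIPs = 10.10.0.5/32, 10.0.0.0/24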
The reverse proxy node is more specialised. It is designed for a single function: to forward only select requests coming from the Internet on its port 443, and only to the public ingress IP address. We can do this by deploying HAProxy on it. The following is an example of a simple HAProxy config file, matching what the diagram above presented:
# Configure the frontend, i.e. how we receive traffic on the reverse proxy
frontend traefik_public
    # This needs to be the actual IP address you're trying to bind; the Xs are only for this example...
    bind X.X.X.6:443
    default_backend traefik_public_server
    # Add the X-Forwarded-For header to forwarded requests
    option forwardfor

# Now configure the backend, i.e. where we send the requests
backend traefik_public_server
    balance roundrobin
    # We select a single server as the backend, corresponding to our public Traefik instance in the homelab
    server backend01 10.0.0.6:443 check
The config file is usually located at /etc/haproxy/haproxy.cfg. To find out more, you can look up the HAProxy website and the official configuration documentation.
This configuration is simple because we are telling HAProxy to forward all of the HTTPS traffic it receives down to the homelab Traefik instance designated as the "public" one. It can do that because, just like my roadwarrior laptop, it is itself a node on the Wireguard network, and it too knows how to reach the 10.0.0.0/24 subnet.
Finally, to the point. We can advertise our public services using a public DNS provider, pointing them at the HAProxy, which will happily forward those requests inwards via the Wireguard net. As per this example, we can set test1.example.com to point to X.X.X.6. Likewise, since we don't want people to connect to test2.example.com, we just won't set that in public DNS, right? Correct, except that anyone can host their own DNS server or simply override the local DNS config on their machine. This means that, because HAProxy with this configuration forwards all HTTPS traffic indiscriminately to the downstream server, if we only had one ingress in the homelab for all services, we would be exposing all of them one way or another. So, by splitting them, we get network segregation and an easily configurable infrastructure, where changing a service from public to private and vice versa is as simple as tagging its Kubernetes ingress object with the correct ingress class (as sketched below), then publishing or deleting the public DNS entry. Now, that's neat.
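As a sketch of that tagging, here is what the ingress for a hypothetical test1 service behind the public hostname from the example could look like; the service name is illustrative, only the class field matters here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1
spec:
  # Swap this to "traefik-private" (or drop it entirely, since private is
  # the default class) to take the service off the public ingress
  ingressClassName: traefik-public
  rules:
    - host: test1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test1   # hypothetical backing service
                port:
                  number: 80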
I wanna see it!
Something I always struggled with in the past was configuring the Traefik dashboard correctly. By correctly I mean so that it actually works. This is doubly annoying, because we have two ingress controllers, so on average I fail twice per cluster at this task. This is where I would normally engage you with a semi-witty story about how I embarked on a journey of discovery and what hardships I overcame to get to where we are now. I ain't doing that now; I just fixed my dashboards, and I ain't breaking them again just to take screenshots for you. Deal with it.
So, what do I want? I want access to the dashboards of both ingress controllers AND I don't want the whole freaking Internet to have access to either of them. That's not so much to ask, is it? The defaults in the Traefik chart enable the dashboards using their fancy ingressRoute CRD. All I will say is: NO. Ok, not really. We will still need to enable the dashboard ingressRoute item, but we will not configure anything beyond the most basic stuff for it, and we will use a normal, honest-to-God ingress object to actually expose the dashboard.
traefik:
  {...}
  ingressRoute:
    dashboard:
      enabled: true
      annotations: {}
      labels: {}
      matchRule: Host(`dashboard-public.example.com`)
      services:
        - name: api@internal
          kind: TraefikService
      entryPoints: ["traefik"]
      middlewares: []
      tls: {}
We will have to do this in the charts for both ingress controllers, obviously changing the match rule to whatever hostname you've chosen for each. You will notice that we are selecting the "traefik" entrypoint; this is on purpose. We now need to enable this entrypoint, like this:
traefik:
  {...}
  ports:
    traefik:
      port: 8080
      expose:
        default: true
      exposedPort: 8080
      protocol: TCP
This will make sure that the ingress controllers do not respond to dashboard-public.example.com (or whatever) on the normal ports they serve real user traffic on, like 443 or 80. This is especially important for the public ingress, as people on the Internet could otherwise mess with your dashboard if they figured out the hostname you gave it, and we don't want that. What do we want? We want the private ingress controller to serve both dashboards, so that I can view them when I am authenticated on my VPN, damn it!
I started digging and found this: https://github.com/traefik/traefik-helm-chart/issues/143. The Traefik devs basically said: ain't nobody got time for this BS, just make generic service and ingress entries and begone, demon! So off I went.
Since we already set the dashboards to be available on the traefik port 8080, and both Traefik instances will each have created a service for it, I have forgone that part and instead went for just an extra ingress object per controller.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-public-dashboard
  annotations:
    # This here annotation is the key btw:
    kubernetes.io/ingress.class: traefik-private
    traefik.ingress.kubernetes.io/router.entrypoints: websecure,web
spec:
  tls:
    - hosts:
        - dashboard-public.example.com
      secretName: wildcard-cert
  rules:
    - host: dashboard-public.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: traefik-public
                port:
                  number: 8080
Both ingress objects will look the same; you just need to change the name, the host rule, and the target service name. Importantly, the annotations need to stay the same. This means that only the ingress controller I called traefik-private (in this case available via 10.0.0.5) will configure entries for both dashboards, and put them on its websecure and web entrypoints, which are the standard port 443 and 80 bois. Here you go Hilmar, I know you struggled with this; now you know how to do it, and I can always refer back to this page too. Phew…
2025-01-22 22:08