Symptoms
While reconfiguring my Traefik setup in K3s, I noticed that the installer failed very often.
Checking the logs, this is the error message:
+ helm install --set-string global.systemDefaultRegistry= traefik https://10.43.0.1:443/static/charts/traefik-25.0.2+up25.0.0.tgz --values /config/values-01_HelmChart.yaml --values /config/values-10_HelmChartConfig.yaml
Error: INSTALLATION FAILED: failed to fetch https://10.43.0.1:443/static/charts/traefik-25.0.2+up25.0.0.tgz : 404 Not Found
As you can see, the Kubernetes API returns a 404.
Running
kubectl get --raw /static/charts/traefik-25.0.2+up25.0.0.tgz
does, however, return the intended file.
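To rule out my local kubeconfig, the same path can be fetched from a throwaway pod through the ClusterIP the installer uses. This is only a rough repro, not exactly what the helm-install job runs, and the pod name curl-test is just a placeholder:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- sh -c '
  for i in 1 2 3 4 5; do
    # no token is passed here, mirroring the helm fetch above; adjust if your cluster requires auth for /static
    curl -sk -o /dev/null -w "%{http_code}\n" \
      https://10.43.0.1:443/static/charts/traefik-25.0.2+up25.0.0.tgz
  done
'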
Setup
My setup consists of 4 nodes:
➜ k3s-cluster git:(jo/longhorn) ✗ k get nodes
NAME STATUS ROLES AGE VERSION
n1 Ready control-plane,etcd,master 358d v1.28.6+k3s2
n2 Ready control-plane,etcd,master 106d v1.30.5+k3s1
lb1 Ready <none> 85d v1.30.6+k3s1
lb2 Ready control-plane,etcd,master 85d v1.30.6+k3s1
1 Answer
After some back and forth, I realized the obvious cause: the kube API is served by a Service at the IP 10.43.0.1. That Service forwards to whichever API server it can reach, which means that more often than not the request was landing on a different node.
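You can see which API servers actually back that Service by listing the endpoints of the built-in kubernetes Service; each control-plane node should show up there:
# list the API server endpoints behind the 10.43.0.1 ClusterIP
kubectl get endpoints kubernetes -n default -o wide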
As you can see from the setup, the nodes are running different versions of k3s. After checking the location of the files, /var/lib/rancher/k3s/server/static/charts, you can see that each server node carries a different version of the chart.
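A quick way to confirm this is to list that directory on every server node. This is only a sketch: it assumes SSH access and uses the node names from the table above (lb1 is an agent, so it has no server/static directory):
# compare the bundled traefik chart versions across the server nodes
for node in n1 n2 lb2; do
  echo "== $node =="
  ssh "$node" 'ls /var/lib/rancher/k3s/server/static/charts/ | grep -i traefik'
done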
In my case the solution was to copy the chart from the first node to all the others, to ensure that the pod finds the file no matter which node it ends up hitting.
I have not tried aligning the k3s versions, but that should solve the problem as well.
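For reference, the copy looks roughly like this. It is only a sketch and assumes root SSH access from n1 and the same path as above:
# push n1's copy of the chart to the other server nodes
CHART=/var/lib/rancher/k3s/server/static/charts/traefik-25.0.2+up25.0.0.tgz
for node in n2 lb2; do
  scp "$CHART" "root@$node:$CHART"
done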