Deploying Kubernetes Dashboard to a K3S Kubernetes cluster with Helm

Posted on Thu 19 September 2024 in helm

Deploying apps to Kubernetes clusters is easy with Helm. Or so they say...

What I want to achieve:

  1. Deploy Helm on my K3S cluster
  2. Deploy Kubernetes Dashboard through Helm
  3. Make the dashboard reachable without manual steps after every restart
  4. Create a read-only (viewer) user and log on to the dashboard

Deploying Helm

Deploying Helm is quite simple; the Install Helm from apt section of the Helm documentation has a very good explanation:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
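
A quick sanity check confirms the client is installed:

# Print the Helm client version; no cluster access is needed for this
helm version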

But, unfortunately, this is not all... During my tests I quickly found out I was missing a couple of pieces of the puzzle, which I found in the K3S Cluster Access documentation: all upstream Kubernetes tools expect KUBECONFIG to point to $HOME/.kube/config, which is not the case for K3S. So it is important to point KUBECONFIG to /etc/rancher/k3s/k3s.yaml instead. Add the following line to ~/.bashrc or /etc/profile.d/k3s.sh:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Now, Helm will work as expected.
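
To verify that everything is wired up, both kubectl and Helm should now be able to talk to the K3S API server:

# Reload the profile so KUBECONFIG is set in the current shell
source ~/.bashrc

# Both commands should now reach the cluster without errors
kubectl get nodes
helm list --all-namespaces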

Deploy Kubernetes Dashboard

Using Helm to install Kubernetes Dashboard really is simple! According to the documentation, I just have to perform the following steps:

  1. Add the kubernetes-dashboard repository:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
  2. Deploy kubernetes-dashboard through helm:
helm upgrade --install kubernetes-dashboard \
  kubernetes-dashboard/kubernetes-dashboard \
  --create-namespace --namespace kubernetes-dashboard

Returns:

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
    Check the services in Kubernetes Dashboard namespace using:
        kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
https://localhost:8443

Yay!!!

Kubernetes Dashboard is now installed in a dedicated namespace called kubernetes-dashboard. This means that if you want to inspect the resources created with kubectl you need to specify --namespace=kubernetes-dashboard or -n kubernetes-dashboard with all of your kubectl commands.
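
For example, to see everything the chart created:

# List the pods and services in the dashboard's namespace
kubectl --namespace=kubernetes-dashboard get pods,svc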

What this does not do is create the necessary resources to access it through your favorite browser. As you can see from the output, you need to perform one additional step to forward the management port to your K3S node.

Unfortunately, this is a manual action, and it won't persist when the K3S servers restart...

After digging a bit, I discovered that the kubernetes-dashboard-kong deployment provides the web UI, so the only thing I need to do is expose it as a LoadBalancer:

kubectl -n kubernetes-dashboard expose deployment kubernetes-dashboard-kong \
  --port=8443 \
  --name=kubernetes-dashboard-kong-lb \
  --type=LoadBalancer

This command creates a LoadBalancer service on port 8443, which K3S's built-in ServiceLB exposes on all cluster nodes, so Kubernetes Dashboard can be accessed from anywhere.
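
To check that the service is up and has been assigned an external IP:

# The EXTERNAL-IP column should show the node IP(s) once ServiceLB has done its job
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-lb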

Alternatively, I could have created an Ingress resource including a certificate, but I manage my certificates on my HAProxy server.

Adding this to my HAProxy config will make my life easier. This is the backend I defined:

backend kubernetes-dashboard-backend
    mode http
    option httpclose
    option forwardfor
    balance roundrobin
    default-server inter 10s downinter 5s
    redirect scheme https unless { ssl_fc }
    server node00 <ip of n00.yavin.home>:8443 ssl verify none check
    server node01 <ip of n01.yavin.home>:8443 ssl verify none check
    server node02 <ip of n02.yavin.home>:8443 ssl verify none check
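
A backend alone does nothing, of course; a frontend has to route traffic to it. A minimal sketch of what that could look like, assuming TLS is terminated on HAProxy and a hypothetical hostname dashboard.yavin.home (adjust to your own setup):

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    # Route dashboard traffic to the backend defined above
    use_backend kubernetes-dashboard-backend if { hdr(host) -i dashboard.yavin.home }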

Create the viewer user

Kubernetes Dashboard does not have a built-in user management system; authorization is handled by the Kubernetes API server. This means I need to create a Kubernetes ServiceAccount, and a Role or ClusterRole.

Let's get started with the ClusterRole.

While the kubectl CLI can manage resources easily, I find it easier to create YAML files that describe the resources. This is also practical for building a reference library of resources for future use.
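
As a side note, kubectl can generate such YAML skeletons for you; a small sketch (the role name and resource here are just placeholders):

# Print a ClusterRole manifest without creating anything on the cluster
kubectl create clusterrole example-viewer \
  --verb=get,list,watch --resource=pods \
  --dry-run=client --output=yaml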

For my ClusterRole, I will use this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-viewer
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - get
      - list
      - watch
  - nonResourceURLs:
      - '*'
    verbs:
      - get
      - list
      - watch

This role will essentially bestow the necessary viewer rights on all resources. Granted, I have no clue at this moment whether this is a good idea or not. To be discovered!

To import this, one merely has to execute the following command:

kubectl create --filename=<ClusterRole filename>

Returns:

clusterrole.rbac.authorization.k8s.io/cluster-viewer created
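
You can double-check what was created with:

# Show the rules attached to the new ClusterRole
kubectl describe clusterrole cluster-viewer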

I will create the viewer ServiceAccount in the same manner. Create a YAML file with all the necessary information:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: viewer-user
  namespace: kubernetes-dashboard

While ClusterRoles are cluster-wide, ServiceAccounts are namespaced. I will create this user inside the Kubernetes Dashboard namespace, as it is related to that deployment.

kubectl create --filename=<ServiceAccount filename>

Returns:

serviceaccount/viewer-user created
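
Again, a quick check:

# The ServiceAccount should show up in the kubernetes-dashboard namespace
kubectl -n kubernetes-dashboard get serviceaccount viewer-user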

Finally, we need to bind the ClusterRole to the ServiceAccount. This is done with a ClusterRoleBinding. Again, we use the same modus operandi and create a YAML file:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-viewer
subjects:
- kind: ServiceAccount
  name: viewer-user
  namespace: kubernetes-dashboard

Let's import this:

kubectl create -f ClusterRoleBinding-cluster-viewer.yaml

Returns:

clusterrolebinding.rbac.authorization.k8s.io/cluster-viewer created
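
To verify the binding grants the expected (and only the expected) permissions, kubectl can impersonate the ServiceAccount:

# Should answer "yes": the viewer may list pods
kubectl auth can-i list pods \
  --as=system:serviceaccount:kubernetes-dashboard:viewer-user

# Should answer "no": the viewer may not delete anything
kubectl auth can-i delete pods \
  --as=system:serviceaccount:kubernetes-dashboard:viewer-user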

And that's it...

Logging on to Kubernetes Dashboard

The dashboard does not support regular username/password logins. You can either log in using the Authorization header or a Bearer Token. In this case, I will use the Bearer Token. The process is simple:

kubectl -n kubernetes-dashboard create token viewer-user

This returns the token you can copy/paste into the login form of Kubernetes Dashboard:

eyJhbGciOiJSUzI1NiIsImt...
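
As a sanity check, the token can also be used directly against the Kubernetes API with the Authorization header (assuming the default K3S API port 6443 on one of the nodes):

# Request a fresh token and store it in a shell variable
TOKEN=$(kubectl -n kubernetes-dashboard create token viewer-user)

# A read-only request with the Bearer Token should succeed
curl --insecure --header "Authorization: Bearer $TOKEN" \
  https://n00.yavin.home:6443/api/v1/namespaces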

By default, the server determines how long your token is valid. If you want to specify the validity yourself, you can add --duration=<time> to the command.

For example:

kubectl -n kubernetes-dashboard create token viewer-user --duration=24h

But I would strongly advise against long-lived tokens. It's not a big deal for a viewer-only user, but it is for a user with more privileges. It's good practice to generate tokens only when you need them...

And that concludes logging on to Kubernetes Dashboard.

What did I learn?

Helm-wise, not so much, as I only copy/pasted commands I found floating around on the internet.

Kubernetes-wise, a lot. I learned that creating users is not as straightforward as it looks. In order to actually be able to use a user, it needs to be connected to the resources by means of Roles (hence RBAC).

What's next?

This was quite a simple deployment, as it used the base Helm chart to deploy Kubernetes Dashboard.

The next step will most likely be trying to get a simple app (like NGINX) up and running, serving files from storage, preferably networked storage. Possibly migrating this blog to this app...