Provisioning K3s and systemk

January 17, 2021

k8s

Due to a previous job I’m calling nodes “machines”. Also, because this is about systemk, chances are you are actually using a real machine, so I’ll keep using “machines” in this post.

First up: I needed an easy way to build packages of the software I’m using. For this I’ve set up a small CI using GitHub workflows that builds Debian packages for me: https://github.com/miekg/debian. (A Debian package repository would be even better, so I can do upgrades more easily.)

Next I need to manually set up each machine and create a Kubernetes cluster. Some (minimal) form of config management might help here, but for now I don’t care yet. Maybe cloud-init can help: getting the machine into an initial, good state is all that’s needed; there is no use for config management beyond that.

Installation Steps

  1. Get machine.
  2. Add user, copy SSH key ID, clean out other keys.
  3. Fix the sshd config, restart ssh.
  4. addgroup $USER sudo
  5. Install tailscale and get an address.
  6. Install (my) Debian packages k3s, systemk and kubectl from https://github.com/miekg/debian/releases.
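Most of the steps above could also be captured in a cloud-init user-data file; a sketch (the user name, SSH key and package URL are placeholders, and tailscale still needs a `tailscale up` afterwards):

```yaml
#cloud-config
users:
  - name: miek                         # placeholder user
    groups: [sudo]
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... miek@desk  # placeholder key
runcmd:
  # tailscale's official install script
  - curl -fsSL https://tailscale.com/install.sh | sh
  # fetch and install the Debian packages (placeholder URL)
  - curl -fsSLO https://github.com/miekg/debian/releases/...
  - apt-get install -y ./k3s.deb ./systemk.deb ./kubectl.deb
```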

The Debian configuration for k3s writes an admin.kubeconfig file to /var/lib/k3s; for systemk to work it’s copied to /etc/systemk/admin.kubeconfig. On machines without k3s, it’s referenced from the systemk directory or copied to those systems. Automatic TLS certificate rollovers are left as an exercise for the reader.
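On the k3s machine itself that copy amounts to something like this (sketch; paths are the ones from the post):

```shell
# Copy the k3s admin kubeconfig to where systemk expects it.
SRC=/var/lib/k3s/admin.kubeconfig
DST=/etc/systemk/admin.kubeconfig
if [ -f "$SRC" ]; then
    # -D creates the /etc/systemk directory if it doesn't exist yet
    sudo install -D -m 0600 "$SRC" "$DST"
    echo "copied $SRC to $DST"
else
    echo "$SRC not found; is k3s installed on this machine?"
fi
```

For the machines without k3s, an `scp` of /etc/systemk/admin.kubeconfig over the tailscale address does the job.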

After these boring steps we have a master machine:

miek@moon:~$ sudo kubectl --kubeconfig /etc/systemk/admin.kubeconfig  get nodes -o wide
NAME  STATUS  ROLES  AGE VERSION INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION   CONTAINER-RUNTIME
moon  Ready   <none> 11m v1.18.4 xxx.aa.4.13   yyy.zz.81.191 Ubuntu 20.04.1 LTS 5.4.0-52-generic systemd 245 (245.4-4ubuntu3.2)

On my desktop machine I can now “simply” install tailscale and kubectl, copy the admin.kubeconfig and manage the (one machine) Kubernetes cluster. I’m also planning on creating a second machine, “io”, to be able to run an actual deployment consisting of more than one pod.
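The desktop side is then just pointing kubectl at the copied file (sketch; the path under ~/.kube is my assumption, put it wherever you like):

```shell
# Point kubectl at the copied admin.kubeconfig.
export KUBECONFIG=$HOME/.kube/admin.kubeconfig
if command -v kubectl >/dev/null 2>&1; then
    kubectl get nodes -o wide
else
    echo "kubectl not installed yet"
fi
```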

For pod configs I just set up a private GitHub repo with some YAMLs; I need to figure out whether things like kustomize are useful or overkill in this situation.

Cluster DNS is real DNS

Because we don’t have a private network for the pods, we can just delegate a subdomain and use that as the cluster domain - no need for external DNS. Configuring CoreDNS to do this is on the TODO list, but it shouldn’t be too hard.
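A minimal Corefile for that could look something like this (sketch only; k8s.example.org stands in for whatever subdomain gets delegated, and the kubeconfig path is the one from above):

```
k8s.example.org {
    # serve the delegated subdomain from the Kubernetes API,
    # authenticating with the copied admin.kubeconfig
    kubernetes k8s.example.org {
        kubeconfig /etc/systemk/admin.kubeconfig
    }
}
```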

k8s  debian  systemk