How I Met your Cluster (II): Kubernetes on Arm (Raspberry Pi 3)

In the last post I described the construction of the Death Star, a 3x Raspberry Pi 3 cluster, and how I set up the basic communication between the nodes. In this post, I’ll describe the software side by deploying a working Kubernetes cluster. I wanted to accomplish some goals along the way:

  • Use the latest Raspbian image available (2018-04-18)
  • Use the latest Kubernetes version available for arm (1.10.2)
  • Use the latest Docker-CE client available for arm (18.03.1-ce)
  • A single source of truth stored in Git (every step performed is captured as code)

I had to recreate the cluster many times until I reached a stable situation with no CrashLoopBackOff pods, and that means a lot of SD image burning… So after burning each image and creating the ssh file in /boot to allow remote SSH connections, I didn’t configure a static IP every time. Instead, I registered all the Raspberries’ MAC addresses in the DHCP server of the router (TP-Link MR3020) so they always obtain the same IPs. That saved me a lot of time!

[Screenshot: static DHCP leases bound to the Raspberries’ MAC addresses]

Now we will start, with Ansible.

Ansible is software that automates provisioning, configuration management, and application deployment. Following the DevOps/GitOps path, it lets us convert every step needed to configure the Raspberries into code, giving us the power to repeat every step, parallelize across computers if needed, and remove manual mistakes. So once we know what it takes to create our cluster, we will convert all the steps into Ansible playbooks and launch the cluster provisioning from a central computer.

After some searching for similar projects (Kubernetes on Arm with Ansible) I liked the Ro14nd approach, so I based my project on his work, but a lot had to change to run the latest versions, plus all the modifications I was planning. So I created https://github.com/stealthizer/pi-kubernetes-lab to store all the tasks I want to run for this project.

The objective: The Kubernetes Cluster

The best way to describe this project is by looking at this slide:

Kubernetes High Level component architecture | Slide from this presentation from Lucas Käldström (@kubernetesonarm)

So we are going to create a Kubernetes Master: a set of containers running etcd, the kube-apiserver, the kube-controller-manager, kube-dns, the kube-scheduler, a kube-proxy, and a CNI of our choice (in this case, Flannel). But first we will install some software common to all nodes in the cluster.

Basic Common Setup

The first thing we will do is customize the hosts (inventory) file that we will use in all steps. It describes how Ansible will locate all your servers and act according to each role described. My hosts file is:

[raspberries]
192.168.2.100 name=master
192.168.2.101 name=slave1
192.168.2.102 name=slave2

[master]
192.168.2.100

[slaves]
192.168.2.101
192.168.2.102

Once the hosts file is in place, to set up the basic (common) configuration for all Raspberries we execute:

$ ansible-playbook -k -i hosts base.yml

That will ask for the default Raspbian password for the pi user (raspberry) and then perform all the tasks described in the manifests for the base role. The -k flag makes Ansible prompt for an SSH password; since the first task copies your SSH key to the authorized keys on every Raspberry, the password won’t be needed again. The base tasks will also:

  • disable the wifi and bluetooth modules (not needed in this setup)
  • disable the HDMI output (we are running headless)
  • set the timezone and the hosts file
  • disable swap
  • minimize the amount of shared memory reserved for the GPU so it is available to the system
  • update the base system
  • add the repositories needed to install docker-ce and Kubernetes
  • reboot

Once the nodes are available again, we are ready to continue.
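For illustration, a couple of those base tasks could look like this in Ansible. This is a sketch only: the task names, module choices, and the timezone value are my assumptions, not copied from the repository.

```yaml
# Hypothetical base-role tasks (names and values are assumptions).
- name: Disable swap for this boot
  command: dphys-swapfile swapoff

- name: Reserve the minimum memory for the GPU
  lineinfile:
    path: /boot/config.txt
    regexp: '^gpu_mem='
    line: 'gpu_mem=16'

- name: Set the timezone
  timezone:
    name: Europe/Madrid
```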

The Master Node

To continue with the Master node creation we execute:

$ ansible-playbook -i hosts kubeadm-master.yml

This task configures the Master node. It installs the Kubernetes version specified in the roles/kubernetes/defaults/main.yml file (for me it’s 1.10.2), configures iptables so the cni0 interface can forward traffic, configures the environment for kubectl, and executes the command that creates the master node:

kubeadm init --config /etc/kubernetes/kubeadm.yml

It is wise to use the kubeadm.yml config file, although you can also pass the same settings as parameters to the executable.

The podSubnet in the config file (or the --pod-network-cidr parameter) needs to be specified or the CNI will not run; for Flannel it is 10.244.0.0/16. The serviceSubnet (or --service-cidr), if not specified, defaults to 10.96.0.0/12.
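For reference, a minimal kubeadm.yml along those lines might look like the following. The v1alpha1 schema is the one kubeadm 1.10 used; treat it as a sketch to check against your exact version.

```yaml
# Minimal kubeadm 1.10 master configuration (sketch).
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.244.0.0/16      # required for Flannel to run
  serviceSubnet: 10.96.0.0/12   # the default, shown explicitly
```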

After the init process (which can take a long time, more than 15 minutes if you don’t have the Docker images already downloaded), the playbook installs the CNI (Flannel in this example) and downloads the config file for the cluster to /deployment/run/admin.conf.

If everything went OK, we can copy admin.conf to ~/.kube/config and use kubectl to review the status of the cluster. Some example output for debugging:

$ kubectl get nodes
NAME   STATUS ROLES  AGE VERSION
master Ready  master 31s v1.10.2
$ kubectl get po --all-namespaces 
NAMESPACE   NAME                          READY STATUS   RESTARTS AGE
kube-system etcd-master                    1/1  Running  7        1m
kube-system kube-apiserver-master          1/1  Running  7        2m
kube-system kube-controller-manager-master 1/1  Running  0        1m
kube-system kube-dns-686d6fb9c-68h5q       3/3  Running  0        2m
kube-system kube-flannel-ds-d5m8d          1/1  Running  1        2m
kube-system kube-proxy-626jw               1/1  Running  0        2m
kube-system kube-scheduler-master          1/1  Running  0        1m
$ kubectl get all 
NAME       TYPE      CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1  <none>      443/TCP 57s
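The admin.conf copy mentioned above can be scripted. The default source path below is the one the playbook uses; the helper name is my own, not something from the repository:

```shell
# Install the admin.conf fetched by the playbook as the current user's
# kubeconfig. The default source path is the one used in this post.
install_kubeconfig() {
  src="${1:-/deployment/run/admin.conf}"
  mkdir -p "$HOME/.kube"
  cp "$src" "$HOME/.kube/config"
  chmod 600 "$HOME/.kube/config"   # the kubeconfig holds cluster credentials
}
```

After running it on your workstation, kubectl get nodes should answer with the master.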

If the Master node is in Ready status we can proceed to create the worker nodes.

But if there is any problem, we can use the kubernetes-full-reset Ansible role to wipe all Kubernetes configuration and try again:

$ ansible-playbook -i hosts kubernetes-full-reset.yml

The Worker Nodes

The Worker nodes are the ones that will run the pods in the cluster. To add them to the existing cluster we execute:

$ ansible-playbook -i hosts kubeadm-slaves.yml

This executes kubeadm with the join action on all slaves declared in the hosts file, using a token created in the Master creation step.

$ kubeadm join --token=<token> --discovery-token-unsafe-skip-ca-verification master:6443
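As a sketch of the Ansible task that might drive that command on each slave (the variable names here are my assumptions, not taken from the repository):

```yaml
# Hypothetical join task; kubeadm_token would come from the master-creation step.
- name: Join the node to the cluster
  command: >
    kubeadm join --token={{ kubeadm_token }}
    --discovery-token-unsafe-skip-ca-verification
    master:6443
  args:
    creates: /etc/kubernetes/kubelet.conf   # makes the task idempotent
```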

Once this process is finished (a few minutes) you can check the status of your cluster with:

$ kubectl get nodes
NAME   STATUS ROLES  AGE VERSION
master Ready  master 7m  v1.10.2
slave1 Ready  <none> 1m  v1.10.2
slave2 Ready  <none> 2m  v1.10.2

And we can see all the needed pods to run the Kubernetes Cluster:

$ kubectl get pods --all-namespaces
NAMESPACE   NAME                           READY STATUS  RESTARTS AGE
kube-system etcd-master                    1/1   Running 7        39m
kube-system kube-apiserver-master          1/1   Running 7        40m
kube-system kube-controller-manager-master 1/1   Running 0        39m
kube-system kube-dns-686d6fb9c-68h5q       3/3   Running 0        40m
kube-system kube-flannel-ds-4qffk          1/1   Running 0        33m
kube-system kube-flannel-ds-d5m8d          1/1   Running 1        40m
kube-system kube-flannel-ds-qbgrd          1/1   Running 2        34m
kube-system kube-proxy-5h7qj               1/1   Running 0        34m
kube-system kube-proxy-626jw               1/1   Running 0        40m
kube-system kube-proxy-m8wlt               1/1   Running 0        33m
kube-system kube-scheduler-master          1/1   Running 0        39m

And all services running:

$ kubectl describe services --all-namespaces
Name: kubernetes
Namespace: default
Labels: component=apiserver
 provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 192.168.2.100:6443
Session Affinity: ClientIP
Events: <none>

Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
 kubernetes.io/cluster-service=true
 kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.2:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.2:53
Session Affinity: None
Events: <none>

At this point we have a Kubernetes Cluster running the latest available version.

How I Met your Cluster (I)

[Photo: the custom Raspberry Pi Kubernetes cluster]

Kids, I’m gonna tell you an incredible story: the story of how I met your Cluster. It was 2017 and Kubernetes was definitely worth learning. Major cloud providers had begun to present products ready for production loads, hiding how Kubernetes works under a big curtain of magic, and costing too much for personal learning, leaving us with solutions like Minikube or the Docker client’s built-in Kubernetes. Those may fit a developer whose only concern is running some code inside Kubernetes, no matter what Kubernetes is. But as Sysadmins/DevOps we want to know how Kubernetes works, no matter what code runs on it. This post is about building the infrastructure for the cluster, not the cluster itself; that will be described in a separate post. Let’s begin!

The Idea

Searching Google for a Raspberry Pi Kubernetes cluster gives a lot of results, and they were all valuable in their own way, but I wanted to imagine the best setup for me. What I wanted to accomplish was:

  • Run a Kubernetes cluster on top of Raspberry Pis (rpi3)
  • A portable solution
  • On a separate network, with wireless support
  • Using only 1 wall plug

Something like this:


The Layered Setup

I wanted it to be modular, so I thought about a layered setup. There were lots of solutions out there, but in the end none of the commercially available ones appealed to me: either there were not enough units available to purchase, or they were too small to fit all the needs, or they were ugly as hell. And then I found the solution on Thingiverse.

Just searching for kubernetes on Thingiverse gave me the solution: a printable platform with all the holes needed to fit a Raspberry Pi, designed to be part of a stackable setup. Of course, you need a 3D printer for this (which I don’t have myself). A friend of mine offered to make all the prints, plus a customization of the top layer (which already had the Kubernetes logo), adding the company logo (Schibsted). The layers were a perfect fit.

[Photo: the 3D-printed layers]

So I had the layers ready. I just needed to order everything else…

The Shop List

In order to accomplish all the needs, I ordered:

  • 3 x Raspberry Pi 3. When I built the cluster, the Pi 3B+ didn’t exist yet.
  • 3 x 16 GB SD cards
  • 1 x USB-powered network switch. Yes, they exist, though they’re not very common…
  • 1 x Portable wireless router. I’m using a TP-Link MR3020, and I’m very happy with it in this setup!
  • 1 x Multiport USB power adapter with at least 5 USB ports.

To stick all the layers I used some nylon pieces, as follows:

  • 24 x 30mm M3 hexagonal male-female spacers
  • 12 x 6mm M2.5 hexagonal male-female spacers (which fit the Raspberry Pi)
  • 12 x 4mm M2.5 nylon 6/6 Phillips screws
  • 12 x M2.5 hexagonal nylon nuts

[Photo: the nylon spacers, screws, and nuts]

The 30mm spacers may be insufficient depending on the size of the power adapter, router, or network switch you choose.

There is some invisible material you need too: all the cabling for the setup:

  • 5 x USB cables: I salvaged some old short microUSB cables. The router uses a miniUSB cable, and the shortest one I have is too long for this setup…
  • 4 x Network cables: As the switch and the Raspberries I use are 10/100, I don’t need special cables, but since I need them really short I made them myself.

To stick the power adapter and the switch to their layer I used double-sided tape, as I’m not planning to remove them, at least not for a long time.

To stick the router to its layer I used velcro strips, so it is easy to remove if needed.

The first working prototype

A computer cluster, by definition, is a set of loosely or tightly connected computers that work together (Wikipedia). Accordingly, my first working setup was not a cluster itself but a proof of concept of all the chosen elements working together, with a single Raspberry Pi as a minimal viable setup to test network connectivity. What I basically did was connect the Raspberry Pi and the router together through the switch. The Raspberry Pi already had an SD card burned with Raspbian and a static IP configured.

[Photo: the single-Pi prototype]

What I basically had to do in this step was configure the network (router). Configured as a WISP, the router acts as an Access Point you can connect to, while at the same time it connects to the Access Point of your choice. Think of your mobile phone sharing its internet connection! Testing connectivity from my MacBook, I could reach both the Raspberry and the internet, and the Raspberry had an internet connection too. It was working!

[Photo: the working prototype]

The Final Setup

After testing that the network infrastructure worked as intended, I added the 2 missing Raspberry Pis and the top layers. It’s hard to see, but the top layer has both the Kubernetes logo and, in the bottom-right corner, the Schibsted logo.

[Photo: the cluster with the top layer in place]

Once all layers were in place I could route all the custom cabling. All network cables were custom-sized and the microUSB cables were the shortest I had. The miniUSB cable for the router (the white vertical cable on the left) and the switch’s cable (the black one at the bottom) were too long…

[Photo: the custom cabling]

In the final setup the router has the static address 192.168.2.1, and the 3 Raspberry Pi nodes have 192.168.2.100, 192.168.2.101, and 192.168.2.102 respectively.

[Photo: the final cluster]

Some manual painting of the Kubernetes and Schibsted logos was done at the end.

[Photo: the painted logos]

We are ready to install the Kubernetes cluster, but that, Kids, will be another day’s story…

Burning SD images for Raspberry Pi

For a recent project I had to burn several Raspbian images onto SD cards. A friend of mine recommended an image burner I didn’t know, and since I had some alternatives for performing the burn, I decided to write a little post about it. It is not a post about burning performance, nor about the best SD card to buy; it’s just about showing some of the alternatives you have to burn SD cards for your Raspberry Pi (and maybe other platforms that use the same booting mechanism).

For this, I’ll be burning the base image Raspbian Stretch Lite from 2017-11-29 (the latest when this post was written), which you can download from here.

Once the image is downloaded (about 348 MB), we face the question: which method will we use to burn the image to an SD card?

I’ll present 3 alternatives ordered by difficulty, beginning with the easiest.

Easy mode: Etcher

Etcher is the easiest way I’ve found to burn an image to an SD card, only beaten by ordering the SD card already burned online… A clean interface, intuitive steps, and it performs the job well. Once the image is burned, it verifies that the copy was done correctly, and then you have the option to auto-eject the SD card when finished.

[Screenshot: Etcher]

Etcher can be downloaded from here.

Normal to Hard: ApplePi-Baker

With a less intuitive interface (compared to Etcher), this tool can do more than burning. It lets you burn your already downloaded images, of course, but also prepare the SD card to be compatible with NOOBS, an operating system installer that shows a GUI to choose which operating system you want to install. You can also make a backup of an already burned SD card.

[Screenshot: ApplePi-Baker]

ApplePi Baker can be downloaded from here.

I’m the death incarnate

You are probably asking yourself what those burners really do. Using GUI software is OK, but you should know what you are doing first. Well, you can burn your SD cards without any extra software, using the tools your operating system (Linux/macOS) already has.

First, we have to find out which device is your SD card reader. On macOS:

$ diskutil list

/dev/disk2 (internal, physical):
 #:                   TYPE NAME      SIZE      IDENTIFIER
 0: FDisk_partition_scheme          *15.9 GB   disk2
 1:             DOS_FAT_32 SDCARD    15.9 GB   disk2s1

So we know that our device is /dev/disk2. We have to unmount it first:

$ diskutil unmountDisk /dev/disk2
Unmount of all volumes on disk2 was successful

On Linux, the lsblk command will show you which device to use:

NAME     MAJ:MIN RM  SIZE   RO TYPE MOUNTPOINT
sdb      8:16    1   15.9G  0  disk  
└─sdb1   8:17    1   15.9G  0  part /media/SDCARD

To unmount a partition on Linux, use the umount command:

$ umount /media/SDCARD

Once the disk is unmounted we can burn the image using dd (note that the block-size flag is bs=1m on macOS but bs=1M on Linux):

$ sudo dd if=2017-11-29-raspbian-stretch-lite.img of=/dev/disk2 bs=1m

1772+0 records in
1772+0 records out
1858076672 bytes transferred in 461.479067 secs (4026351 bytes/sec)

Note that the previous alternatives verified the integrity of the burned image on the SD card; with this method, you only burn the image onto the SD card.
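If you want a similar integrity check with plain tools, you can checksum the image and the first image-sized chunk of the device and compare them. This is a sketch of mine, not part of any burner: run it with sudo so the raw device is readable, and adjust the paths to yours. It uses the POSIX cksum tool so it works on both macOS and Linux.

```shell
# Compare the source image against what was written to the device.
# Only the first image-sized chunk of the device matters, since the
# card is larger than the image.
verify_image() {
  img="$1"; dev="$2"
  bytes=$(wc -c < "$img" | tr -d ' \t')
  img_sum=$(cksum < "$img")                    # "CRC SIZE" of the image
  dev_sum=$(head -c "$bytes" "$dev" | cksum)   # same, for the device chunk
  if [ "$img_sum" = "$dev_sum" ]; then
    echo "OK: image verified"
  else
    echo "MISMATCH: re-burn the card"
  fi
}
```

For example: verify_image 2017-11-29-raspbian-stretch-lite.img /dev/disk2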

Bonus

If you haven’t ejected the SD card from the reader, or you insert it again, you will see a new mount point (a FAT32 partition) called boot. Even if you have never booted from this SD card, you can do some interesting things at this point, prior to booting:

  • If you create a file called ssh in the root of this partition, the SSH service will be enabled at boot.
  • If you append the following option to the existing cmdline.txt (which must remain a single line):
ip=192.168.1.200::192.168.1.1:255.255.255.0:rpi:eth0:off

your Raspberry will boot with the assigned IP (192.168.1.200) and gateway 192.168.1.1.
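Both tweaks can be scripted against the mounted boot partition (the mount point varies: /Volumes/boot on macOS, somewhere under /media on Linux). The helper below is a sketch of mine; the IP values are the ones from this post, and it keeps cmdline.txt on a single line, as the kernel requires.

```shell
# Prepare a freshly burned boot partition: enable SSH and set a static IP.
prepare_boot() {
  boot="$1"                      # path to the mounted boot partition
  touch "$boot/ssh"              # the file's presence enables sshd
  ipopt='ip=192.168.1.200::192.168.1.1:255.255.255.0:rpi:eth0:off'
  # cmdline.txt already exists with the kernel options; append to its line
  line=$(head -n 1 "$boot/cmdline.txt")
  printf '%s %s\n' "$line" "$ipopt" > "$boot/cmdline.txt"
}
```

For example: prepare_boot /Volumes/boot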

You can find more documentation about cmdline.txt and the RPi config options at https://elinux.org/RPiconfig and https://elinux.org/RPi_cmdline.txt.