In the last post I described the construction of the Death Star, a 3x Raspberry Pi 3 cluster, and the basic communication between its nodes. In this post, I’ll describe the software side: deploying a working Kubernetes cluster. I set myself some goals along the way:
- Use the latest Raspbian Image available (2018-04-18)
- Use the latest Kubernetes version available for arm (1.10.2)
- Use the latest Docker-CE client available for arm (18.03.1-ce)
- All truth stored in Git (all steps done are stored in Git)
I had to recreate the cluster many times until I reached a stable state with no CrashLoopBackOff pods. And that is a lot of SD image burning… So after burning the image and creating the ssh file in /boot to allow remote ssh connections, I didn’t configure a static IP every time. Instead, I registered all the Raspberries’ MAC addresses in the DHCP server of the router (a TP-Link MR3020) so they always obtain the same IPs. That saved me a lot of time!

Now we will start, with Ansible.
Ansible is software that automates software provisioning, configuration management, and application deployment. Following the DevOps/GitOps path, it allows us to convert every step needed to configure the Raspberries into code, giving us the power to repeat every step, parallelizing on different computers if needed and removing manual mistakes. So once we know what we have to do to create our cluster, we will convert all the steps into Ansible playbooks, so we can launch the cluster provisioning from a central computer.
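To give an idea of the shape this takes, a top-level playbook ties a group of hosts from the inventory to one or more roles. The following is only a sketch of what a base.yml could look like; the group and role names here are assumptions, not copied verbatim from the repository:

```yaml
# base.yml -- hypothetical sketch: apply a common "base" role
# to every Raspberry Pi in the "raspberries" inventory group.
- hosts: raspberries
  become: true
  roles:
    - base
```

Each role then bundles its tasks under roles/&lt;name&gt;/tasks/main.yml, so the whole cluster setup stays versioned in Git.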
After some searching for similar projects (Kubernetes on ARM with Ansible) I liked the Ro14nd approach, so I based my project on his work, but lots of things had to change to get it running with the latest versions, plus all the modifications I was planning to do. So I created https://github.com/stealthizer/pi-kubernetes-lab to store all the tasks for this project.
The objective: The Kubernetes Cluster
The best way to describe this project is by looking at this slide:

So we are going to create a Kubernetes master: a set of containers running etcd, the kube-apiserver, the kube-controller-manager, kube-dns, the kube-scheduler, a kube-proxy and a CNI of our choice (in this case I will use Flannel). But first we will install some software common to all nodes in the cluster.
Basic Common Setup
The first thing to do is customize the hosts file that we will use in all steps. It describes how Ansible locates all your servers and which role applies to each one. My hosts file is:
[raspberries]
192.168.2.100 name=master
192.168.2.101 name=slave1
192.168.2.102 name=slave2

[master]
192.168.2.100

[slaves]
192.168.2.101
192.168.2.102
Once the hosts file is in place, to apply the basic (common) configuration to all Raspberries we execute:
$ ansible-playbook -k -i hosts base.yml
That will ask us for the default Raspbian password for user pi (raspberry) and begin to perform all the tasks described in the manifests for the base role. The -k flag makes Ansible ask for an ssh password; since the first task copies your ssh key to the authorized keys on all Raspberries, it won’t be needed again. The base tasks will also:
- disable the wifi and bluetooth modules (not needed in this setup)
- disable the HDMI output (we are running headless)
- change the timezone and the hosts file
- disable swap
- minimize the amount of memory shared with the GPU so more is available to the system
- update the base system
- add the repositories needed to install docker-ce and Kubernetes
- reboot
Once the nodes are available again we are ready to continue.
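As an illustration of how two of those steps translate into Ansible, here is a hypothetical excerpt of what the base role’s tasks could look like; the real role in the repository differs, this only sketches the idea:

```yaml
# Hypothetical excerpt of roles/base/tasks/main.yml.
- name: Disable swap so the kubelet will start
  command: dphys-swapfile swapoff

- name: Give the GPU the minimum memory split
  lineinfile:
    path: /boot/config.txt
    line: "gpu_mem=16"
```

The swap task matters because kubeadm refuses to run with swap enabled, and on Raspbian the gpu_mem setting in /boot/config.txt controls how much RAM is reserved for the GPU.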
The Master Node
To continue with the master node creation we execute:
$ ansible-playbook -i hosts kubeadm-master.yml
This playbook configures the master node. It installs the Kubernetes version specified in roles/kubernetes/defaults/main.yml (1.10.2 in my case), configures iptables to allow forwarding on the cni0 interface, sets up the environment for the kubectl executions and runs the command that creates the master node:
kubeadm init --config /etc/kubernetes/kubeadm.yml
It is wise to keep the options in the kubeadm.yml file, although you can also pass them as parameters on the command line.
The podSubnet in the config file (or the --pod-network-cidr parameter) needs to be specified or the CNI will not run; Flannel expects 10.244.0.0/16. The serviceSubnet (or --service-cidr), if not specified, defaults to 10.96.0.0/12.
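Putting those two settings together, a minimal /etc/kubernetes/kubeadm.yml for kubeadm 1.10 could look like this (a sketch, using the v1alpha1 config API that was current in that release; the real file in the repository may carry more options):

```yaml
# Minimal sketch of /etc/kubernetes/kubeadm.yml for kubeadm 1.10.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
```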
After the init process (which can take a long time, more than 15 minutes if the Docker images are not already downloaded) the playbook will install the CNI (Flannel in this example) and download the kubeconfig for this cluster to /deployment/run/admin.conf.
If everything went well, we can copy admin.conf to ~/.kube/config and use kubectl to review the status of the cluster. Some example output for debugging:
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    31s       v1.10.2
$ kubectl get po --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   7          1m
kube-system   kube-apiserver-master            1/1       Running   7          2m
kube-system   kube-controller-manager-master   1/1       Running   0          1m
kube-system   kube-dns-686d6fb9c-68h5q         3/3       Running   0          2m
kube-system   kube-flannel-ds-d5m8d            1/1       Running   1          2m
kube-system   kube-proxy-626jw                 1/1       Running   0          2m
kube-system   kube-scheduler-master            1/1       Running   0          1m
$ kubectl get all
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   57s
If the master node is in status Ready we can proceed to create the worker nodes.
But if there is any problem we can use the kubernetes-full-reset Ansible role to wipe out all the Kubernetes configuration and try again:
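The core of such a reset role is kubeadm’s own reset command plus some cleanup of leftover state. A hypothetical sketch of its tasks (the real role may clean up more):

```yaml
# Hypothetical sketch of the kubernetes-full-reset tasks.
- name: Undo everything kubeadm init/join did on this node
  command: kubeadm reset

- name: Remove leftover CNI configuration
  file:
    path: /etc/cni/net.d
    state: absent
```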
$ ansible-playbook -i hosts kubernetes-full-reset.yml
The Worker Nodes
The worker nodes are the ones that will run the pods in the cluster. To add them to the existing cluster we will execute:
$ ansible-playbook -i hosts kubeadm-slaves.yml
This will execute kubeadm with the join action on all slaves declared in the hosts file, using a token created in the master creation step:
$ kubeadm join --token=<token> --discovery-token-unsafe-skip-ca-verification master:6443
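Wrapped in Ansible, the join step could look like the following sketch; the variable name kubeadm_token is an assumption here, standing in for however the playbook captures the token from the master (for instance from the output of kubeadm token list):

```yaml
# Hypothetical join task run against the [slaves] group.
- name: Join the node to the cluster
  command: >
    kubeadm join --token={{ kubeadm_token }}
    --discovery-token-unsafe-skip-ca-verification master:6443
```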
Once this process is finished (a few minutes) you can check the status of your cluster with:
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    7m        v1.10.2
slave1    Ready     <none>    1m        v1.10.2
slave2    Ready     <none>    2m        v1.10.2
And we can see all the needed pods to run the Kubernetes Cluster:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   7          39m
kube-system   kube-apiserver-master            1/1       Running   7          40m
kube-system   kube-controller-manager-master   1/1       Running   0          39m
kube-system   kube-dns-686d6fb9c-68h5q         3/3       Running   0          40m
kube-system   kube-flannel-ds-4qffk            1/1       Running   0          33m
kube-system   kube-flannel-ds-d5m8d            1/1       Running   1          40m
kube-system   kube-flannel-ds-qbgrd            1/1       Running   2          34m
kube-system   kube-proxy-5h7qj                 1/1       Running   0          34m
kube-system   kube-proxy-626jw                 1/1       Running   0          40m
kube-system   kube-proxy-m8wlt                 1/1       Running   0          33m
kube-system   kube-scheduler-master            1/1       Running   0          39m
And all services running:
$ kubectl describe services --all-namespaces
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.2.100:6443
Session Affinity:  ClientIP
Events:            <none>

Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       <none>
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.0.2:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.0.2:53
Session Affinity:  None
Events:            <none>
At this point we have a Kubernetes Cluster running the latest available version.