How I Met Your Cluster (II): Kubernetes on Arm (Raspberry Pi 3)

In the last post I described the construction of the Death Star, a 3x Raspberry Pi 3 cluster, and how I set up the basic communication between them. In this post, I’ll describe the software side by deploying a working Kubernetes cluster. Along the way, I wanted to accomplish some goals:

  • Use the latest Raspbian Image available (2018-04-18)
  • Use the latest Kubernetes version available for arm (1.10.2)
  • Use the latest Docker-CE version available for arm (18.03.1-ce)
  • All truth stored in Git (every step performed is stored as code)

I had to recreate the cluster lots of times until I achieved a stable situation with no CrashLoopBackOff, and that is a lot of SD image burning… So after burning the image and creating the ssh file in /boot to allow remote ssh connections, I didn’t configure a static IP every time. Instead, I registered the MAC addresses of all the Raspberries in the DHCP server present in the router (TP-Link MR3020) so they always obtain the same IPs. That saved me a lot of time!


Now we will start, with Ansible.

Ansible is software that automates software provisioning, configuration management, and application deployment. Following the DevOps/GitOps path, it allows us to convert every step needed to configure the Raspberries into code, giving us the power to repeat every step, parallelize across different computers if needed, and remove manual mistakes. So once we know what we have to do to create our cluster, we will convert all the steps into Ansible playbooks so we can launch the cluster provisioning from a central computer.

After some searching for similar projects (Kubernetes on Arm with Ansible) I liked the Ro14nd approach, so I have based my project on his work, but lots of things had to change for it to run with the latest versions, plus all the modifications I was planning to make. So I created https://github.com/stealthizer/pi-kubernetes-lab to store all the tasks that I want to run for this project.

The objective: The Kubernetes Cluster

The best way to describe this project is by looking at this slide:

Kubernetes High Level component architecture | Slide from this presentation from Lucas Käldström (@kubernetesonarm)

So we are going to create a Kubernetes Master, that is, a set of containers running etcd, the kube-apiserver, the kube-controller-manager, kube-dns, the kube-scheduler, a kube-proxy and a CNI of our choice (in this case I will use Flannel). But first we will install some software common to all nodes in the cluster.

Basic Common Setup

The first thing we will do is customize the hosts (inventory) file that we will use in all steps. It describes how Ansible will locate all your servers and act accordingly for each role described. My hosts file is:

[raspberries]
192.168.2.100 name=master
192.168.2.101 name=slave1
192.168.2.102 name=slave2

[master]
192.168.2.100

[slaves]
192.168.2.101
192.168.2.102

Once everything is described in the hosts file, to set up the basic (common) configuration for all Raspberries we will execute:

$ ansible-playbook -k -i hosts base.yml

That will ask us for the default Raspbian password for the user pi (raspberry) and will begin to perform all the tasks described in the manifests for the base role. The -k flag makes Ansible ask for an ssh password; since the first task copies your ssh key to the authorized keys on all Raspberries, it will not be needed anymore. The base tasks will also disable the wifi and bluetooth modules (they won’t be needed in this setup), disable the hdmi output (we are running in headless mode), change the timezone, update the hosts file, disable swap, minimize the amount of shared memory reserved for the gpu so it is available to the system, update the base system, add the repositories needed to install docker-ce and kubernetes, and reboot. Once the nodes are available again we are ready to continue.
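A couple of these base tasks might look roughly like this (a sketch, not the actual role — the task names and config lines are illustrative for a Pi 3 on that era of Raspbian):

```yaml
# roles/base/tasks/main.yml (illustrative excerpt)
- name: Disable swap
  command: dphys-swapfile swapoff

- name: Disable the wifi and bluetooth modules
  lineinfile:
    path: /boot/config.txt
    line: "{{ item }}"
  with_items:
    - "dtoverlay=pi3-disable-wifi"
    - "dtoverlay=pi3-disable-bt"

- name: Reserve minimal memory for the GPU
  lineinfile:
    path: /boot/config.txt
    line: "gpu_mem=16"
```

Changes to /boot/config.txt only take effect after the reboot at the end of the role.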

The Master Node

To continue with the Master node creation we will execute:

$ ansible-playbook -i hosts kubeadm-master.yml

This task will configure the Master node. It includes installing the desired Kubernetes version (specified in the roles/kubernetes/defaults/main.yml file, in my case 1.10.2), configuring iptables so the cni0 interface is able to forward, configuring the environment for kubectl executions, and executing the command that creates the master node:

kubeadm init --config /etc/kubernetes/kubeadm.yml

It is wise to use the kubeadm.yml config file, although of course you can pass the parameters directly to the executable.

The podSubnet in the config file (or the --pod-network-cidr parameter) needs to be specified or the CNI will not run; for Flannel it is 10.244.0.0/16. Also, if the serviceSubnet (or --service-cidr) is not specified, it will default to 10.96.0.0/12.
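For reference, a minimal kubeadm.yml for this Kubernetes version could look something like the following (a sketch; in 1.10 the kubeadm config API group was still v1alpha1):

```yaml
# /etc/kubernetes/kubeadm.yml (illustrative)
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.2
networking:
  podSubnet: 10.244.0.0/16     # required by Flannel
  serviceSubnet: 10.96.0.0/12  # the default, shown explicitly
```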

After the init process (which can take a long time, more than 15 minutes if you don’t have the Docker images already downloaded), the playbook will install the CNI (Flannel in this example) and download the config file needed to use this cluster to /deployment/run/admin.conf.
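Those last two steps could be expressed as Ansible tasks roughly like this (a sketch; the Flannel manifest URL and the paths are illustrative):

```yaml
- name: Install the Flannel CNI (illustrative manifest URL)
  command: >
    kubectl --kubeconfig /etc/kubernetes/admin.conf
    apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Download the cluster config to the control machine
  fetch:
    src: /etc/kubernetes/admin.conf
    dest: /deployment/run/admin.conf
    flat: yes
```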

If everything went ok, we can copy the admin.conf to ~/.kube/config and use kubectl to review the status of the cluster. Some example output for debugging:

$ kubectl get nodes
NAME   STATUS ROLES  AGE VERSION
master Ready  master 31s v1.10.2
$ kubectl get po --all-namespaces 
NAMESPACE   NAME                          READY STATUS   RESTARTS AGE
kube-system etcd-master                    1/1  Running  7        1m
kube-system kube-apiserver-master          1/1  Running  7        2m
kube-system kube-controller-manager-master 1/1  Running  0        1m
kube-system kube-dns-686d6fb9c-68h5q       3/3  Running  0        2m
kube-system kube-flannel-ds-d5m8d          1/1  Running  1        2m
kube-system kube-proxy-626jw               1/1  Running  0        2m
kube-system kube-scheduler-master          1/1  Running  0        1m
$ kubectl get all 
NAME       TYPE      CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1  <none>      443/TCP 57s

If the Master node is in status Ready, we can proceed to create the worker nodes.

But if there is any problem, we can use the kubernetes-full-reset Ansible role to wipe out all the Kubernetes configuration and try again:

$ ansible-playbook -i hosts kubernetes-full-reset.yml
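The reset role boils down to something like this (a sketch of plausible tasks, not the actual role):

```yaml
- name: Reset kubeadm state
  command: kubeadm reset

- name: Remove leftover configuration and CNI state
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - /etc/kubernetes
    - /var/lib/etcd
    - /etc/cni/net.d
```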

The Worker Nodes

The Worker nodes are the ones that will run the pods in the cluster. To add them to the existing cluster we will execute:

$ ansible-playbook -i hosts kubeadm-slaves.yml

This will execute kubeadm with the join action on all the slaves declared in the hosts file, using a token created in the Master creation step:

$ kubeadm join --token=<token> --discovery-token-unsafe-skip-ca-verification master:6443
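Passing the token from the master to the slaves can be sketched in Ansible like this (illustrative task and variable names; the `kubeadm token list` parsing assumes the token sits in the first column of the second output line):

```yaml
- name: Read the join token on the master
  command: kubeadm token list
  register: kubeadm_tokens
  delegate_to: "{{ groups['master'][0] }}"

- name: Join the cluster
  command: >
    kubeadm join --token={{ kubeadm_tokens.stdout_lines[1].split()[0] }}
    --discovery-token-unsafe-skip-ca-verification master:6443
```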

Once this process is finished (a few minutes) you can check the status of your cluster with:

$ kubectl get nodes
NAME   STATUS ROLES  AGE VERSION
master Ready  master 7m  v1.10.2
slave1 Ready  <none> 1m  v1.10.2
slave2 Ready  <none> 2m  v1.10.2

And we can see all the pods needed to run the Kubernetes Cluster:

$ kubectl get pods --all-namespaces
NAMESPACE   NAME                           READY STATUS  RESTARTS AGE
kube-system etcd-master                    1/1   Running 7        39m
kube-system kube-apiserver-master          1/1   Running 7        40m
kube-system kube-controller-manager-master 1/1   Running 0        39m
kube-system kube-dns-686d6fb9c-68h5q       3/3   Running 0        40m
kube-system kube-flannel-ds-4qffk          1/1   Running 0        33m
kube-system kube-flannel-ds-d5m8d          1/1   Running 1        40m
kube-system kube-flannel-ds-qbgrd          1/1   Running 2        34m
kube-system kube-proxy-5h7qj               1/1   Running 0        34m
kube-system kube-proxy-626jw               1/1   Running 0        40m
kube-system kube-proxy-m8wlt               1/1   Running 0        33m
kube-system kube-scheduler-master          1/1   Running 0        39m

And all services running:

$ kubectl describe services --all-namespaces
Name: kubernetes
Namespace: default
Labels: component=apiserver
 provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 192.168.2.100:6443
Session Affinity: ClientIP
Events: <none>

Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
 kubernetes.io/cluster-service=true
 kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.2:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.2:53
Session Affinity: None
Events: <none>

At this point we have a Kubernetes Cluster running the latest available version.

How I Met Your Cluster (I)


Kids, I’m gonna tell you an incredible story: the story of how I met your Cluster. It was 2017 and Kubernetes was definitely something worth learning. Major cloud providers began to present their products as ready for production loads, hiding how Kubernetes works under the hood behind a big curtain of magic, and costing too much for personal learning, leaving us with solutions like Minikube or the Docker client Kubernetes support. Those solutions may fit a developer whose only concern is to run certain code inside Kubernetes, no matter what Kubernetes is. But as Sysadmins/DevOps we want to know how Kubernetes works, no matter what code runs on it. This post is about building the infrastructure to host the cluster, not the cluster itself; that will be described in a separate post. Let’s begin!

The Idea

Searching Google for a Raspberry Pi Kubernetes cluster gave a lot of results. They were all valuable in their own way, but I wanted to imagine what the best setup would be for me. What I wanted to accomplish was:

  • Run a Kubernetes cluster on top of Raspberry Pis (rpi3)
  • A portable solution
  • In a separate network, with wireless support
  • Using only 1 wall plug

Something like this:


The Layered Setup

I wanted it to be modular, so I thought about a layered setup. There were lots of solutions out there, but in the end I didn’t like any of the commercially available ones. Either there were not enough units available to purchase, or they were too small to fit all the needs, or they were ugly as hell. And then I found the solution on Thingiverse.

Just searching for kubernetes on Thingiverse gave me the solution: a printable platform that has all the holes needed to fit a Raspberry Pi, ready to be part of a stackable setup. Of course, you need a 3D printer for this (which I don’t have myself). A friend of mine offered to make all the prints, plus a customization of the top layer (which already had the Kubernetes logo) adding the company logo (Schibsted). The layers were a perfect fit.


So I had the layers ready. I just needed to order all other things…

The Shop List

In order to accomplish all the needs, I ordered:

  • 3 x Raspberry Pi 3. At the moment of creating the cluster the Pi 3B+ didn’t exist.
  • 3 x 16 GB SD cards
  • 1 x USB-powered network switch. Yes, they exist! Not very common, though…
  • 1 x Portable wireless router. I’m using a TP-Link MR3020, and I’m very happy with it in this setup!
  • 1 x Multiport USB power adapter with at least 5 USB ports.

To hold all the layers together I used some nylon pieces, as follows:

  • 24 x 30mm M3 hexagonal male-female spacers
  • 12 x 6mm M2.5 hexagonal male-female spacers (fit for the Raspberry Pi)
  • 12 x M2.5 4mm nylon 6/6 philips screws
  • 12 x M2.5 hexagonal nylon nuts


The 30mm spacers may be insufficient depending on the size of the power adapter, the router or the network switch you choose.

There is some invisible material that you need too. I’m talking about all the cabling that this setup requires:

  • 5 x USB cables: I salvaged some old short microUSB cables. The router uses a miniUSB cable, and the shortest one I have is too long for this setup…
  • 4 x Network cables: As the switch and the Raspberries I use are 10/100, I don’t need special cables, but since I need them to be really short I made them myself.

To attach the power adapter and the switch to the layer I used double-sided tape, as I’m not planning to remove them, at least for a long time.

To attach the router to the layer I used velcro strips, so it is easy to remove if needed.

The first working prototype

A computer cluster, by definition, is a set of loosely or tightly connected computers that work together (Wikipedia). Accordingly, my first working setup was not a cluster itself but a proof of concept of all the chosen elements working together, with a single Raspberry Pi as a minimal viable setup to test the network connectivity. What I basically did was connect the Raspberry Pi and the router together through the switch. The Raspberry Pi already had an SD card burned with Raspbian and a static IP configured.


What I basically had to do in this step was configure the network (router). Configured in WISP mode, it acts as an Access Point you can connect to while, at the same time, connecting itself to the Access Point of your choice. Think of your mobile phone sharing its internet connection! Testing connectivity from my MacBook, I could reach both the Raspberry and the internet, and the Raspberry had an internet connection too. This was working!


The Final Setup

After testing that the networking infrastructure was working as intended, I added the 2 missing Raspberry Pis and the top layers. Hard to see, but the top layer has both the Kubernetes logo and, in the bottom-right, the Schibsted logo.


Once all layers were in place I could fit all the custom cabling. All the network cables were custom-sized and the microUSB ones were the shortest I had. The miniUSB cable for the router (the white vertical cable on the left) and the switch cable (the black one at the bottom) were too long…


The final setup has the router on the 192.168.2.1 static address, and the 3 Raspberry Pi nodes have 192.168.2.100, 192.168.2.101 and 192.168.2.102 respectively.


Some manual painting of the Kubernetes and Schibsted logos was done at the end.


We are ready to install the Kubernetes Cluster, but this, Kids, will be another day’s story…

Personal Kubernetes (II): Docker Client’s new native orchestration

The official Docker CE client (Edge track) can now use Kubernetes for orchestration. I’m sure that by now you have Docker (Stable) installed, and the about window looks like this:


Switching to the Edge track requires you to download a new Docker client from the Edge channel. Once installed, the about window will look like this:


Now that we have the Docker CE Edge client, we have to enable Kubernetes in Preferences (it has a dedicated tab). We will enable Kubernetes, and I will also activate “Show system containers” to expose extra information.


It takes a while to change the status to running, as it has to download the images for the Kubernetes roles:

$ docker images
REPOSITORY                                             TAG        IMAGE ID     CREATED       SIZE
docker/kube-compose-controller                         v0.3.0-rc1 d099699fac52 4 weeks ago   25.8MB
docker/kube-compose-api-server                         v0.3.0-rc1 6c13a6358efa 4 weeks ago   39MB
gcr.io/google_containers/kube-apiserver-amd64          v1.9.2     7109112be2c7 5 weeks ago   210MB
gcr.io/google_containers/kube-proxy-amd64              v1.9.2     e6754bb0a529 5 weeks ago   109MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.9.2     769d889083b6 5 weeks ago   138MB
gcr.io/google_containers/kube-scheduler-amd64          v1.9.2     2bf081517538 5 weeks ago   62.7MB
gcr.io/google_containers/etcd-amd64                    3.1.11     59d36f27cceb 2 months ago  194MB
gcr.io/google_containers/k8s-dns-sidecar-amd64         1.14.7     db76ee297b85 4 months ago  42MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64        1.14.7     5d049a8c4eec 4 months ago  50.3MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64   1.14.7     5feec37454f4 4 months ago  41MB
gcr.io/google_containers/pause-amd64                   3.0        99e59f495ffa 22 months ago 747kB

Once the state of the Kubernetes service is running, let’s interact with it! As usual, we will use kubectl. Remember how important it is to have the same version in the client and the server? Well, there is more this time. As I have both Minikube and Docker CE Kubernetes on my machine, something is not right when I launch kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.101:8443: i/o timeout

Every Kubernetes configuration has a context; to see what contexts are available we have the get-contexts option:

$ kubectl config get-contexts
CURRENT NAME               CLUSTER                    AUTHINFO              NAMESPACE
        docker-for-desktop docker-for-desktop-cluster docker-for-desktop
*       minikube           minikube                   minikube
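Under the hood, these contexts live in your kubeconfig file (~/.kube/config), which looks roughly like this (a trimmed, illustrative sketch; the cluster endpoints and credentials are omitted):

```yaml
apiVersion: v1
kind: Config
current-context: minikube
contexts:
- name: docker-for-desktop
  context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
- name: minikube
  context:
    cluster: minikube
    user: minikube
clusters: []   # server URLs and CA data omitted in this sketch
users: []      # client certificates omitted in this sketch
```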

As you can see there are 2 different contexts, and the current (default) one that kubectl is using is minikube (as expected). To switch contexts we can use the use-context option with the name of the context that we want to use.

$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Now that we are using the docker-for-desktop context, let’s try kubectl again:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

It works! Note that the Client and Server versions match: both are v1.9.2.

$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
docker-for-desktop    Ready   master  20m  v1.9.2

Some more information about the cluster (where the services are running):

$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And we have our local Kubernetes from Docker CE running and Ready. Easy to obtain and install, but is it as versatile as Minikube?

Changing the Kubernetes version

As we saw in the about dialog after installing the Edge version of the Docker client, and in the kubectl version output, we are running Kubernetes v1.9.2. But what if we want to change it, as we did with Minikube?

There is nothing in the interface that allows us to switch the Kubernetes version. Some searching in Google shows that the Kubernetes version is (for now) tied to the Docker (Edge) client version that you have installed. Well, that leaves us with only one way to change the Kubernetes version: download an older client.

By looking in the release notes page we can find information about the latest Docker clients. Searching for Kubernetes, we can find that the experimental support for Kubernetes was added in Docker Community Edition 17.12.0-ce-mac45 (Edge), shipping Kubernetes v1.8.2. But as we can see, the release notes page has no download link for any of the Edge releases. How do we obtain such a version if we want it? As this information is not widely available, I found in this thread a script (credits to Jonathan Sokolowski) that brute-forces the download link by guessing the build number. So, putting edge in the channel and walking the build number down from 22000…

channel=edge
build=22000
# walk the build numbers downwards and print any build that exists
for build in $(seq $build -1 1800); do
  if curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg" >/dev/null 2>/dev/null; then
    echo "Found build: $build"
    curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg"
  fi
done

This outputs lots of versions in the 18.01 branch, but after a while a 17.12 version shows up…

Found build: 21620
HTTP/1.1 200 OK
Content-Type: binary/octet-stream
Content-Length: 348671743
Connection: keep-alive
Date: Sun, 25 Feb 2018 13:07:11 GMT
x-amz-meta-humanversion: 17.12.0-ce-mac45 (21620)
x-amz-meta-sha1: 64ea38dcf45d5107582d0db71540320b1e81d687
x-amz-meta-channel: edge
x-amz-meta-version: 17.12.0-ce-mac45
x-amz-meta-checksum: 05f8c934ebb975133374accbd12197ccc4c6e8c921e18135a084f8b474ef7aeb
x-amz-meta-arch: mac
x-amz-meta-build: 21620
Last-Modified: Thu, 04 Jan 2018 09:34:55 GMT
x-amz-version-id: hJLLaKrBYXIekjaIRMMhzTGleBC2J_zt
ETag: "eac95b06d547b2d6b02364fe8b448dd9-67"
Server: AmazonS3
X-Cache: Hit from cloudfront
Via: 1.1 d7f531af10bfff5400817f213f0b7761.cloudfront.net (CloudFront)
X-Amz-Cf-Id: zQPHhfJGvJlhSXlCmOau4GaF1w_kKUTdO6ZDNa6C61GzJPXOzq5hzA==

So using the 21620 build number we can finally construct the download link:

https://download.docker.com/mac/edge/21620/Docker.dmg

Installing it results in obtaining the v1.8.2 Kubernetes version.


There is a more advanced version scraper at https://github.com/jsok/docker-for-mac-versions, written in Python, that outputs the information in JSON and can limit the maximum number of builds to scan.

Maybe in the future it will be easier to perform this operation. Meanwhile, I think the effort is not worth it… For the moment, this seems like a siren call…

Personal Kubernetes (I): Minikube

When developing code and pushing it to production go hand in hand, things are much easier. And when you have Kubernetes as your orchestration engine, you have the opportunity to simulate production locally. As there are several alternatives for this, I’ll first talk about Minikube.

Minikube is a tool ready to launch a local Kubernetes cluster using your existing Docker installation, and expects you to have Docker working for this. This, which may initially be seen as a disadvantage, is probably what gives this alternative its strength, allowing it to run in some scenarios where the native (most recent) Docker client installation can’t work. I’m thinking initially about 2 scenarios:

  • An old computer that can’t have the Intel VT or AMD-V technology activated (an old PC or Mac), leaving only the alternative of running Docker via Kitematic/Docker Toolbox (VirtualBox, i.e. software virtualization)
  • A computer where, despite having Intel VT activated, your operating system does not allow you to use it (Windows Home). Shame on them!

After checking that your local Docker installation is working, there are also different ways to install Minikube. The easiest way on a Mac is to use brew:

$ brew cask install minikube
 ==> Satisfying dependencies
 ==> Installing Formula dependencies: kubernetes-cli
 ==> Installing kubernetes-cli
 ==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.9.3.el_capit
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Pouring kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Caveats
 Bash completion has been installed to:
 /usr/local/etc/bash_completion.d

zsh completions have been installed to:
 /usr/local/share/zsh/site-functions
 ==> Summary
 🍺 /usr/local/Cellar/kubernetes-cli/1.9.3: 172 files, 65.4MB
 ==> Downloading https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/Cask/minikube--0.25.0
 ==> Verifying checksum for Cask minikube
 ==> Installing Cask minikube
 ==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
 🍺 minikube was successfully installed!

As you can see, the Minikube installation also installs a kubectl version that matches the Minikube version. You must know that it’s important that the cluster version and the client version match, or weird things may occur. So watch out for any existing kubectl versions you may have.

To start minikube:

$ minikube start
 Starting local Kubernetes v1.9.0 cluster...
 Starting VM...
 Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
 Getting VM IP address...
 Moving files into cluster...
 Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------] 0.00%
 65 B / 65 B [======================================================] 100.00% 0sSetting up certs...
 Connecting to cluster...
 Setting up kubeconfig...
 Starting cluster components...
 Kubectl is now configured to use the cluster.
 Loading cached images from config file.

If something goes wrong, destroy the ~/.minikube directory and try again.

To view the Kubernetes nodes available:

$ kubectl get nodes
 NAME STATUS ROLES AGE VERSION
 minikube Ready  53s v1.9.0

As we can see (both in the brew step and in the get nodes output), we are running Kubernetes v1.9.0. If we want to run a different version, we first have to check whether that version is supported, with the command:

$ minikube get-k8s-versions

We will install v1.7.0. To do this, we will stop the Minikube instance currently running with:

$ minikube stop

To start a different version of Minikube we have to start it with --kubernetes-version <existing-supported-version>. But executing the start command will give us an error…

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Getting VM IP address...
Kubernetes version downgrade is not supported. Using version: v1.9.0
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

So we end up with v1.9.0 again… Time to destroy the ~/.minikube directory! Also, remove the minikube entry in VirtualBox. We try again…

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 137.57 MB / 137.57 MB [============================================] 100.00% 0s
 65 B / 65 B [======================================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Done! But a get nodes instruction fails…

$ kubectl get nodes
No resources found.

Do you remember how important it is to have a kubectl matching the running cluster? We have to replace the existing v1.9.0 kubectl with a v1.7.0 one. To do this we have to download it with:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/darwin/amd64/kubectl

Notice that v1.7.0 can be replaced to match whatever client version you desire (remember to make the binary executable with chmod +x and put it in your PATH). Now, if we try the same get nodes operation again with the v1.7.0 kubectl binary:

$ kubectl get nodes
NAME STATUS AGE VERSION
minikube Ready 14m v1.7.0

Docker Edge has the possibility to activate a local Kubernetes cluster. Is it worth it? Personal Kubernetes (II) is coming…