Personal Kubernetes (II): Docker Client's new native orchestration

The official Docker CE client (Edge track) can now use Kubernetes for orchestration. By now you have probably installed Docker (Stable), and its About window looks like this:

[Screenshot 2018-02-23 at 14.07.44]

Switching to the Edge track requires downloading a new Docker client from the Edge channel. Once installed, the About window will look like this:

[Screenshot 2018-02-23 at 12.33.32]

Now that we have the Docker CE Edge client, we have to enable Kubernetes in Preferences (it has a dedicated tab). Check Enable Kubernetes; I will also activate Show system containers to display extra information.

[Screenshot 2018-02-23 at 12.33.04]

It takes a while for the status to change to Running, as it has to download the images for the Kubernetes roles.

$ docker images
REPOSITORY                                             TAG        IMAGE ID     CREATED       SIZE
docker/kube-compose-controller                         v0.3.0-rc1 d099699fac52 4 weeks ago   25.8MB
docker/kube-compose-api-server                         v0.3.0-rc1 6c13a6358efa 4 weeks ago   39MB
gcr.io/google_containers/kube-apiserver-amd64          v1.9.2     7109112be2c7 5 weeks ago   210MB
gcr.io/google_containers/kube-proxy-amd64              v1.9.2     e6754bb0a529 5 weeks ago   109MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.9.2     769d889083b6 5 weeks ago   138MB
gcr.io/google_containers/kube-scheduler-amd64          v1.9.2     2bf081517538 5 weeks ago   62.7MB
gcr.io/google_containers/etcd-amd64                    3.1.11     59d36f27cceb 2 months ago  194MB
gcr.io/google_containers/k8s-dns-sidecar-amd64         1.14.7     db76ee297b85 4 months ago  42MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64        1.14.7     5d049a8c4eec 4 months ago  50.3MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64   1.14.7     5feec37454f4 4 months ago  41MB
gcr.io/google_containers/pause-amd64                   3.0        99e59f495ffa 22 months ago 747kB

Once the Kubernetes service is in the Running state, let's interact with it! As usual, we will use kubectl. Remember how important it is to have the same version on the client and the server? Well, there is more to it this time. As I have both Minikube and Docker CE Kubernetes running on my machine, something is not right when I launch kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.101:8443: i/o timeout

Every Kubernetes configuration has a context; to see which contexts are available, we have the get-contexts option:

$ kubectl config get-contexts
CURRENT NAME               CLUSTER                    AUTHINFO              NAMESPACE
        docker-for-desktop docker-for-desktop-cluster docker-for-desktop
*       minikube           minikube                   minikube

As you can see, there are two different contexts, and the current (default) one that kubectl is using is minikube (as expected). To switch contexts we can use the use-context option with the name of the context we want to use.

$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

As we are now using the docker-for-desktop context, let's try kubectl again:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

It works! Note that the client and server versions match: both are v1.9.2.

$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
docker-for-desktop    Ready   master  20m  v1.9.2

Some more information about the cluster (where the services are running):

$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And we have our local Kubernetes from Docker CE running and Ready. It is easy to obtain and install, but is it as versatile as Minikube?

Changing the Kubernetes version

As we saw in the About dialog after installing the Edge version of the Docker client, and in the kubectl version output, we are running Kubernetes v1.9.2. But what if we want to change it, as we did with Minikube?

There is nothing in the interface that allows us to switch the Kubernetes version. Some searching on Google shows that the Kubernetes version is (for now) tied to the Docker (Edge) client version you have installed. That leaves only one way to change the Kubernetes version: download an older client.

Looking at the release notes page we can find information about the latest Docker clients. Searching for "Kubernetes" there, we can find that experimental support for Kubernetes was added in the Docker Community Edition 17.12.0-ce-mac45 (Edge) version, shipping Kubernetes v1.8.2. But, as we can see, the release notes page has no download link for any of the Edge releases. So how do we obtain such a version if we want it? As this information is not widely available, I found in this thread a script (credits to Jonathan Sokolowski) that brute-forces the download link by guessing the build number. So, setting the channel to "edge" and counting down from build 22000…

channel=edge
build=22000
for build in $(seq $build -1 1800); do
  if curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg" >/dev/null 2>&1; then
    echo "Found build: $build"
    curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg"
  fi
done

This outputs lots of versions in the 18.01 branch, but after a while a 17.12 version shows up…

Found build: 21620
HTTP/1.1 200 OK
Content-Type: binary/octet-stream
Content-Length: 348671743
Connection: keep-alive
Date: Sun, 25 Feb 2018 13:07:11 GMT
x-amz-meta-humanversion: 17.12.0-ce-mac45 (21620)
x-amz-meta-sha1: 64ea38dcf45d5107582d0db71540320b1e81d687
x-amz-meta-channel: edge
x-amz-meta-version: 17.12.0-ce-mac45
x-amz-meta-checksum: 05f8c934ebb975133374accbd12197ccc4c6e8c921e18135a084f8b474ef7aeb
x-amz-meta-arch: mac
x-amz-meta-build: 21620
Last-Modified: Thu, 04 Jan 2018 09:34:55 GMT
x-amz-version-id: hJLLaKrBYXIekjaIRMMhzTGleBC2J_zt
ETag: "eac95b06d547b2d6b02364fe8b448dd9-67"
Server: AmazonS3
X-Cache: Hit from cloudfront
Via: 1.1 d7f531af10bfff5400817f213f0b7761.cloudfront.net (CloudFront)
X-Amz-Cf-Id: zQPHhfJGvJlhSXlCmOau4GaF1w_kKUTdO6ZDNa6C61GzJPXOzq5hzA==

So, using build number 21620, we can finally construct the download link:

https://download.docker.com/mac/edge/21620/Docker.dmg
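Since the HEAD response above advertises the file's SHA-1 in the x-amz-meta-sha1 header, we can verify the download before installing it. A minimal sketch; the verify_sha1 helper name is mine, and the sample file below is only an illustration (its SHA-1 is the well-known hash of the string "hello"):

```shell
# Check a file against an expected SHA-1, as advertised by x-amz-meta-sha1
verify_sha1() {
  echo "$2  $1" | shasum -a 1 -c - >/dev/null 2>&1
}

# Illustration with a throwaway file instead of the real Docker.dmg:
printf 'hello' > /tmp/sample.bin
verify_sha1 /tmp/sample.bin aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d && echo "checksum OK"
```

For the real download you would pass Docker.dmg and the value of x-amz-meta-sha1 from the headers above.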

Installing it gives us Kubernetes v1.8.2.

[Screenshot 2018-02-25 at 14.22.07]

There is a more advanced version scraper at https://github.com/jsok/docker-for-mac-versions, written in Python, which outputs the information as JSON and can limit the maximum number of builds to scan.

Maybe in the future this operation will be easier. Meanwhile, I don't think the effort is worth it… For the moment, this seems like a siren call…

Personal Kubernetes (I): Minikube

When developing the code and pushing it to production go hand in hand, things are much easier. And when you have Kubernetes as your orchestration engine, you have the opportunity to simulate production locally. As there are several alternatives for doing this, I'll first talk about Minikube.

Minikube is a tool that launches a local Kubernetes cluster using your existing Docker installation, and it expects Docker to be working beforehand. This, which may initially look like a disadvantage, is probably what gives this alternative its strength, allowing it to run in scenarios where the native (more recent) Docker client installation can't. I'm thinking of two scenarios in particular:

  • An old computer (PC or Mac) that can't have the Intel VT or AMD-V technology activated, leaving it only the alternative of running Docker through Kitematic / Docker Toolbox (VirtualBox, i.e. software virtualization)
  • A computer that, despite having Intel VT activated, has an operating system that won't let you use it (Windows Home). Shame on them!

After checking that your local Docker installation works, there are also different ways to install Minikube. The easiest way on a Mac is to use brew:

$ brew cask install minikube
 ==> Satisfying dependencies
 ==> Installing Formula dependencies: kubernetes-cli
 ==> Installing kubernetes-cli
 ==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.9.3.el_capit
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Pouring kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Caveats
 Bash completion has been installed to:
 /usr/local/etc/bash_completion.d

zsh completions have been installed to:
 /usr/local/share/zsh/site-functions
 ==> Summary
 🍺 /usr/local/Cellar/kubernetes-cli/1.9.3: 172 files, 65.4MB
 ==> Downloading https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/Cask/minikube--0.25.0
 ==> Verifying checksum for Cask minikube
 ==> Installing Cask minikube
 ==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
 🍺 minikube was successfully installed!

As you can see, the Minikube installation also installs a kubectl version that matches it. Keep in mind that it's important for the cluster version and the client version to match, or weird things may occur. So watch out for any existing kubectl versions you may have.
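A quick way to spot a mismatch is to compare only the major.minor part of the two version strings, since patch-level differences are generally harmless. A minimal sketch (same_minor is my own helper name, not a kubectl feature):

```shell
# Compare two Kubernetes versions on major.minor only (v1.9.2 vs v1.9.0 -> same)
same_minor() {
  [ "$(printf '%s' "$1" | cut -d. -f1-2)" = "$(printf '%s' "$2" | cut -d. -f1-2)" ]
}

same_minor v1.9.2 v1.9.0 && echo "client/server compatible"
same_minor v1.9.2 v1.7.0 || echo "version skew: expect weird behaviour"
```

On a live cluster you would feed it the GitVersion values from the kubectl version output.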

To start minikube:

$ minikube start
 Starting local Kubernetes v1.9.0 cluster...
 Starting VM...
 Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
 Getting VM IP address...
 Moving files into cluster...
 Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 65 B / 65 B [======================================================] 100.00% 0s
 Setting up certs...
 Connecting to cluster...
 Setting up kubeconfig...
 Starting cluster components...
 Kubectl is now configured to use the cluster.
 Loading cached images from config file.

If something goes wrong, destroy the ~/.minikube directory and try again.
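That cleanup can be scripted. A hedged sketch (this destroys all local Minikube state; note that minikube delete also removes the underlying VirtualBox VM, so no manual VirtualBox step is needed):

```shell
# Wipe local Minikube state: delete the VM first, then the cached files
minikube delete 2>/dev/null || true   # no-op if minikube is missing or already gone
rm -rf "$HOME/.minikube"
```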

To view the Kubernetes nodes available:

$ kubectl get nodes
 NAME       STATUS    ROLES     AGE       VERSION
 minikube   Ready     <none>    53s       v1.9.0

As we can see (both in the brew step and in the get nodes output), we are running Kubernetes v1.9.0. If we want to run a different version, we first have to check whether that version is supported, with the command:

$ minikube get-k8s-versions

We will install v1.7.0. To do this, we first stop the currently running Minikube:

$ minikube stop

To start a different version of Minikube, we have to start it with --kubernetes-version <supported-version>. But executing the start command gives us an error…

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Getting VM IP address...
Kubernetes version downgrade is not supported. Using version: v1.9.0
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

So we end up with v1.9.0 anyway… Time to destroy the ~/.minikube directory! Also delete the minikube VM in VirtualBox. Then we try again…

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 137.57 MB / 137.57 MB [============================================] 100.00% 0s
 65 B / 65 B [======================================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Done! But a get nodes command fails…

$ kubectl get nodes
No resources found.

Do you remember how important it is to have a kubectl matching the running cluster? We have to replace the existing v1.9.0 kubectl with one downgraded to v1.7.0. To do this, download it with:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/darwin/amd64/kubectl
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

Notice that v1.7.0 can be replaced to match any other client version you may need. Now, if we try the same get nodes operation again with the v1.7.0 kubectl binary:

$ kubectl get nodes
NAME       STATUS    AGE       VERSION
minikube   Ready     14m       v1.7.0

Docker Edge now has the option of activating a local Kubernetes cluster. Is it worth it? Personal Kubernetes (II) is coming…

Automate all the things with Jenkins! (II): Create a pipeline to create Docker images

Maybe you have your own Jenkins server by now. Great! Now, why not start automating all the things?

As sysadmins, we want all our processes running flawlessly when needed, and in an automated way where possible/applicable. Creating, modifying and maintaining a Docker image can be quite a repetitive task, so we want to automate it at all costs. Think about security patches, for example!

About this project

Docker has a nice container for testing that Docker works: the whalesay container. Basically, it is an implementation of cowsay (a popular Perl script) that replaces the cow template with a whale. The problem is that there is no official Raspberry Pi equivalent image, and the x86 image uses Ubuntu as its base image, so the resulting Docker image is huge (more than 200 MB just to say Hello World…). So we will create an ARM image for the Raspberry Pi, based on Alpine, that does the whalesay thing. You can check this project on GitHub (Dockerfile here). This article is based on this entry from getintodevops.com.

For the Jenkins pipeline we have created a file (written in Groovy) that describes all the steps (stages) and operations the pipeline will perform in order to carry out the job. In this file we define four stages:

  • Clone Repository: we will get the source code from Git
  • Build Image: we will run the docker build operation on the source code
  • Test Image: we will test the created image
  • Push Image: we will push the created image to Docker Hub

For this last step, Jenkins needs our Docker Hub credentials to be able to perform the docker push operation.

Configuring Docker Hub With Jenkins

To store the Docker image resulting from our build, we’ll be using Docker Hub. You can sign up for a free account at https://hub.docker.com.

Once we have a Docker Hub account, we will add it under Credentials -> System -> Global credentials -> Add Credentials.

Add your Docker Hub credentials as the type Username with password, with the ID docker-hub-credentials. The ID field is how we will refer to this entry in our script.

The Jenkinsfile

We’ll need to give Jenkins access to push the image to Docker Hub. For this, we’ll create Credentials in Jenkins, and refer to them in the Jenkinsfile as ‘docker-hub-credentials’.

node {
    def app

    stage('Clone repository') {
        /* Ensure we can clone the repository */
        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */
        app = docker.build("stealthizer/rpi-whalesay")
    }

    stage('Test image') {
        /* No real tests, so this step always passes */
        sh 'echo "Tests passed"'
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, a short hash that identifies the commit in git.
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        sh "git rev-parse --short HEAD > .git/commit-id"
        def commit_id = readFile('.git/commit-id').trim()
        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
            app.push("${commit_id}")
            app.push("latest")
        }
    }
}

As you might have noticed in the above Jenkinsfile, we’re using docker.withRegistry to wrap the app.push commands – this instructs Jenkins to log in to a specified registry with the specified credential id (docker-hub-credentials).
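The commit-id trick from the Push image stage can be tried outside Jenkins. This sketch builds a throwaway Git repository just to show where the tag comes from (the repo contents and the echoed push line are purely illustrative):

```shell
# Reproduce the short-hash tag that the pipeline writes to .git/commit-id
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git rev-parse --short HEAD > .git/commit-id
commit_id=$(cat .git/commit-id)
echo "would push: stealthizer/rpi-whalesay:${commit_id}"
```

Because the tag is the commit hash, every push is traceable back to the exact source revision that produced it.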

The Jenkins Pipeline

Now we will create the pipeline, that is, an automated expression of the task you want to perform, from the code you have written to the very final delivery step (creating the image, or putting something into production, for example). This will be your contribution to the continuous delivery process.

To create the pipeline we will add a New Item -> Pipeline. We will customize the next field properties:

  • Description: Free text to describe your pipeline project
  • GitHub Project: the URL of the project you are going to build automatically. I'll be using https://github.com/stealthizer/rpi-whalesay/.
  • Definition: in the Pipeline part of this config page, choose "Pipeline script from SCM", specify Git as the SCM source, put the Git project URL again as the Repository URL (no credentials needed if the repository is public), specify the branch you want to build (*/master can be a good start) and, as the Script Path, just put Jenkinsfile.

Once we press Save, we have finished creating our pipeline.

To test it, just press the Build option inside the job and check that all the stages finish in the Success state.

[Screenshot: Jenkins stage view]

Docker on Raspberry pi

How do you put Docker on a Raspberry Pi? The easiest way is to install a distribution that already includes it, like Hypriot. But the official way to install Docker on Raspbian is (no sudo needed):

$ curl -sSL https://get.docker.com | sh

At the time this article was written, the version installed by this command was 17.12.0-ce.

If you want to run the docker command as the pi user, you can issue:

$ sudo usermod -aG docker pi

The change will take effect the next time you open a session as the pi user.
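To check from a shell whether the new group membership is already active in the current session, something like this works (in_group is my own helper name; on the Pi you would test the docker group, as below):

```shell
# Succeed when the current session already lists the given group for this user
in_group() { id -nG | tr ' ' '\n' | grep -qx "$1"; }

if in_group docker; then
  echo "docker group active in this session"
else
  echo "log out and back in first (or run: newgrp docker)"
fi
```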

To test the installation, we will run the stealthizer/rpi-whalesay image, prepared for this architecture:

[Screenshot 2018-01-09 at 14.39.41]

And that's all for now. This was a very basic entry, I know. But some weeks ago there was a problem with the latest Docker version for Raspberry Pi, and some additional steps were required to get Docker working…