Burning SD images for Raspberry Pi

For a recent project, I had to burn several Raspbian images onto SD cards. A friend recommended an image burner I didn’t know about, and since I had several alternatives for the burn operation, I decided to write a short post about it. This is not a post about burning performance, nor about the best SD card to buy; it just shows some of the alternatives you have to burn SD cards for your Raspberry Pi (and perhaps other platforms that use the same booting mechanism).

For this, I’ll be burning the base image Raspbian Stretch Lite from 2017-11-29 (the latest at the time of writing), which you can download from here.

Once the image is downloaded (about 348 MB), we face the question: which method will we use to burn it to an SD card?

I’ll present three alternatives ordered by difficulty, starting with the easiest.

Easy mode: Etcher

Etcher is the easiest way I’ve found to burn an image to an SD card, only beaten by ordering the SD card pre-burned online… It has a clean interface and intuitive steps, and it performs the job well. Once the image is burned, it verifies that the copy was done correctly, and it can auto-eject the SD card when finished.

[Screenshot: Etcher]

Etcher can be downloaded from here.

Normal to Hard: ApplePi-Baker

With a less intuitive interface (compared to Etcher), this tool does more than burning. It lets you burn your already downloaded images, of course, but it can also prepare the SD card to be compatible with NOOBS, an operating system installer that shows you a GUI to choose which operating system you want to install. You can also make a backup of an already burned SD card.

[Screenshot: ApplePi-Baker]

ApplePi Baker can be downloaded from here.

I’m the death incarnate

You are probably asking yourself what those burners really do. Using GUI software is fine, but you should know what you are doing first. Well, you can burn your SD cards without any extra software if you want, just using the tools your operating system (Linux/macOS) already has.

First, we have to find out which device is your SD card reader. On macOS, run:

$ diskutil list

/dev/disk2 (internal, physical):
 #:                   TYPE NAME      SIZE      IDENTIFIER
 0: FDisk_partition_scheme          *15.9 GB   disk2
 1:             DOS_FAT_32 SDCARD    15.9 GB   disk2s1

So we know that our device is /dev/disk2. We have to unmount it first:

$ diskutil unmountDisk /dev/disk2
Unmount of all volumes on disk2 was successful

On Linux, the lsblk command shows you which device to use:

NAME     MAJ:MIN RM  SIZE   RO TYPE MOUNTPOINT
sdb      8:16    1   15.9G  0  disk  
└─sdb1   8:17    1   15.9G  0  part /media/SDCARD

To unmount a partition on Linux, use the umount command:

$ umount /media/SDCARD

Once the disk is unmounted we can burn the image with dd (note that bs=1m is the macOS spelling; on Linux use bs=1M):

$ sudo dd if=2017-11-29-raspbian-stretch-lite.img of=/dev/disk2 bs=1m

1772+0 records in
1772+0 records out
1858076672 bytes transferred in 461.479067 secs (4026351 bytes/sec)

Note that the previous alternatives verified the integrity of the burned image on the SD card; with this method you only burn the image, with no verification step.
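
A manual verification is still possible: read back from the card exactly as many bytes as the image occupies and compare checksums. This is only a sketch, assuming macOS tools (shasum, stat -f%z) and the device and image names used above; adjust both to your setup.

```shell
# Verification pass (macOS flavour): read back as many bytes as the
# image occupies and compare checksums.
IMG=2017-11-29-raspbian-stretch-lite.img
BYTES=$(stat -f%z "$IMG")          # on Linux: stat -c%s "$IMG"

# The image size is an exact multiple of 1 MiB (1772 records above),
# so we can read it back with the same block size.
sudo dd if=/dev/disk2 bs=1m count=$((BYTES / 1048576)) 2>/dev/null | shasum -a 256
shasum -a 256 < "$IMG"

# The two hashes should be identical.
```

If the hashes differ, the safest option is to burn the card again.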

Bonus

If you haven’t ejected the SD card from the reader, or you insert it again, you will see a new mount point (a FAT32 partition) called boot. Maybe you have never booted from this SD card, but you can do some interesting things at this point, prior to booting:

  • If you create an empty file called ssh in the root of this partition, the SSH service will be enabled at boot.
  • If you append the following to the existing cmdline.txt (which must remain a single line):
ip=192.168.1.200::192.168.1.1:255.255.255.0:rpi:eth0:off

Your Raspberry Pi will boot with the assigned IP (192.168.1.200) and gateway 192.168.1.1.
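
Both tweaks can be done from the shell. This sketch assumes the boot partition is mounted at /Volumes/boot (the usual macOS location; on Linux it is often /media/$USER/boot):

```shell
BOOT=/Volumes/boot   # adjust to where the boot partition is mounted

# Enable the SSH service on first boot
touch "$BOOT/ssh"

# Append the static IP configuration; cmdline.txt must stay a single
# line, so the parameter is added with a leading space and no newline.
printf ' ip=192.168.1.200::192.168.1.1:255.255.255.0:rpi:eth0:off' >> "$BOOT/cmdline.txt"
```

Remember to eject the card cleanly afterwards so the changes are flushed.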

You can find more documentation about cmdline.txt and the RPi config options at https://elinux.org/RPiconfig and https://elinux.org/RPi_cmdline.txt.

Personal Kubernetes (II): Docker Client new native orchestration

The official Docker CE client (Edge track) can now use Kubernetes for orchestration. I’m sure that by now you have Docker (Stable) installed, and the about window looks like this:

[Screenshot: Docker about window, Stable]

Switching to the Edge track requires downloading a new Docker client from the Edge channel. Once installed, the about window will look like this:

[Screenshot: Docker about window, Edge]

Now that we have the Docker CE Edge client, we have to enable Kubernetes in Preferences (it has a dedicated tab). We will enable Kubernetes, and I will also activate “Show system containers” to get extra information.

[Screenshot: Kubernetes tab in Docker preferences]

It takes a while to change the status to running as it has to download some images for the Kubernetes roles.

$ docker images
REPOSITORY                                             TAG        IMAGE ID     CREATED       SIZE
docker/kube-compose-controller                         v0.3.0-rc1 d099699fac52 4 weeks ago   25.8MB
docker/kube-compose-api-server                         v0.3.0-rc1 6c13a6358efa 4 weeks ago   39MB
gcr.io/google_containers/kube-apiserver-amd64          v1.9.2     7109112be2c7 5 weeks ago   210MB
gcr.io/google_containers/kube-proxy-amd64              v1.9.2     e6754bb0a529 5 weeks ago   109MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.9.2     769d889083b6 5 weeks ago   138MB
gcr.io/google_containers/kube-scheduler-amd64          v1.9.2     2bf081517538 5 weeks ago   62.7MB
gcr.io/google_containers/etcd-amd64                    3.1.11     59d36f27cceb 2 months ago  194MB
gcr.io/google_containers/k8s-dns-sidecar-amd64         1.14.7     db76ee297b85 4 months ago  42MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64        1.14.7     5d049a8c4eec 4 months ago  50.3MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64   1.14.7     5feec37454f4 4 months ago  41MB
gcr.io/google_containers/pause-amd64                   3.0        99e59f495ffa 22 months ago 747kB

Once the Kubernetes service reaches the running state, let’s interact with it! As usual, we will use kubectl. Remember how important it is to have the same version in the client and the server? Well, there is more to it this time. As I have both Minikube and the Docker CE Kubernetes running on my machine, something is not right when I launch kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.101:8443: i/o timeout

Every Kubernetes configuration has a context; to see which contexts are available we have the get-contexts option:

$ kubectl config get-contexts
CURRENT NAME               CLUSTER                    AUTHINFO              NAMESPACE
        docker-for-desktop docker-for-desktop-cluster docker-for-desktop
*       minikube           minikube                   minikube

As you can see, there are two different contexts, and the current (default) one that kubectl is using is minikube (as expected). To switch context we use the use-context option with the name of the context we want.

$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Now that we are using the docker-for-desktop context, let’s try kubectl again:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

It works! Note that the client and server versions match: both are v1.9.2.

$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
docker-for-desktop    Ready   master  20m  v1.9.2

Some more information about the cluster (where the services are running):

$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And there we have our local Kubernetes from Docker CE, running and Ready. Easy to obtain and install, but is it as versatile as Minikube?

Changing the Kubernetes version

As we saw in the about dialog after installing the Edge version of the Docker client, and in the kubectl version output, we are running Kubernetes v1.9.2. But what if we want to change it, as we did with Minikube?

There is nothing in the interface that allows us to switch the Kubernetes version. Some searching shows that the Kubernetes version is (for now) tied to the Docker (Edge) client version you have installed. That leaves only one way to change the Kubernetes version: download an older client.

Looking at the release notes page we can find information about the latest Docker clients. Searching for “Kubernetes”, we find that experimental Kubernetes support was added in Docker Community Edition 17.12.0-ce-mac45 (Edge), shipping Kubernetes v1.8.2. But as we can see, the release notes page has no download link for any of the Edge releases. How do we obtain such a version if we want it? As this information is not widely available, I found in this thread a script (credits to Jonathan Sokolowski) that brute-forces the download link by guessing the build number. So, putting “edge” as the channel and scanning build numbers down from 22000…

channel=edge
start=22000
for build in $(seq $start -1 1800); do
  if curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg" >/dev/null 2>&1; then
    echo "Found build: $build"
    curl -fsSL -I "https://download.docker.com/mac/$channel/$build/Docker.dmg"
  fi
done

This outputs lots of versions in the 18.01 branch, but after a while a 17.12 version shows up…

Found build: 21620
HTTP/1.1 200 OK
Content-Type: binary/octet-stream
Content-Length: 348671743
Connection: keep-alive
Date: Sun, 25 Feb 2018 13:07:11 GMT
x-amz-meta-humanversion: 17.12.0-ce-mac45 (21620)
x-amz-meta-sha1: 64ea38dcf45d5107582d0db71540320b1e81d687
x-amz-meta-channel: edge
x-amz-meta-version: 17.12.0-ce-mac45
x-amz-meta-checksum: 05f8c934ebb975133374accbd12197ccc4c6e8c921e18135a084f8b474ef7aeb
x-amz-meta-arch: mac
x-amz-meta-build: 21620
Last-Modified: Thu, 04 Jan 2018 09:34:55 GMT
x-amz-version-id: hJLLaKrBYXIekjaIRMMhzTGleBC2J_zt
ETag: "eac95b06d547b2d6b02364fe8b448dd9-67"
Server: AmazonS3
X-Cache: Hit from cloudfront
Via: 1.1 d7f531af10bfff5400817f213f0b7761.cloudfront.net (CloudFront)
X-Amz-Cf-Id: zQPHhfJGvJlhSXlCmOau4GaF1w_kKUTdO6ZDNa6C61GzJPXOzq5hzA==

So, using the 21620 build number, we can finally construct the download link:

https://download.docker.com/mac/edge/21620/Docker.dmg

Installing it gives us Kubernetes v1.8.2.

[Screenshot: about window showing Kubernetes v1.8.2]

There is a more advanced version scraper at https://github.com/jsok/docker-for-mac-versions, written in Python, that outputs the information as JSON and can limit the maximum number of builds to scan.

Maybe in the future this operation will be easier. Meanwhile, I don’t think the effort is worth it… For the moment, this feels like a siren call…

Personal Kubernetes (I): Minikube

When developing code and pushing it to production go hand in hand, things are much easier. And when you have Kubernetes as your orchestration engine, you have the opportunity to simulate production locally. As there are several alternatives for this, I’ll first talk about Minikube.

Minikube is a tool that launches a local Kubernetes cluster on top of your existing Docker installation, and it expects Docker to be working for this. This, which may initially look like a disadvantage, is probably what gives this alternative its strength: it can run in scenarios where the native (more recent) Docker client installation can’t. I’m thinking of two scenarios in particular:

  • An old computer without Intel VT or AMD-V (an old PC or Mac), which leaves you only the option of running Docker via Kitematic/Docker Toolbox (VirtualBox, i.e. software virtualization).
  • A computer that, despite having Intel VT enabled, runs an operating system that won’t let you use it (Windows Home). Shame on them!

After checking that your local Docker installation works, you have several ways to install Minikube. The easiest on a Mac is to use brew:

$ brew cask install minikube
 ==> Satisfying dependencies
 ==> Installing Formula dependencies: kubernetes-cli
 ==> Installing kubernetes-cli
 ==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.9.3.el_capit
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Pouring kubernetes-cli-1.9.3.el_capitan.bottle.tar.gz
 ==> Caveats
 Bash completion has been installed to:
 /usr/local/etc/bash_completion.d

zsh completions have been installed to:
 /usr/local/share/zsh/site-functions
 ==> Summary
 🍺 /usr/local/Cellar/kubernetes-cli/1.9.3: 172 files, 65.4MB
 ==> Downloading https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-
 Already downloaded: /Users/Stealth/Library/Caches/Homebrew/Cask/minikube--0.25.0
 ==> Verifying checksum for Cask minikube
 ==> Installing Cask minikube
 ==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
 🍺 minikube was successfully installed!

As you can see, the Minikube installation also installs a matching kubectl version. You must know that it’s important that the cluster version and the client version match, or weird things may happen. So watch out for any existing kubectl versions you may have.

To start minikube:

$ minikube start
 Starting local Kubernetes v1.9.0 cluster...
 Starting VM...
 Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
 Getting VM IP address...
 Moving files into cluster...
 Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------] 0.00%
 65 B / 65 B [======================================================] 100.00% 0s
 Setting up certs...
 Connecting to cluster...
 Setting up kubeconfig...
 Starting cluster components...
 Kubectl is now configured to use the cluster.
 Loading cached images from config file.

If something goes wrong, destroy the ~/.minikube directory and try again.

To view the Kubernetes nodes available:

$ kubectl get nodes
 NAME STATUS ROLES AGE VERSION
 minikube Ready  53s v1.9.0

As we can see (both in the brew step and in the get nodes output), we are running Kubernetes v1.9.0. If we want to run a different version, we first have to check whether that version is supported, with the command:

$ minikube get-k8s-versions

We will install v1.7.0. To do this, we stop the currently running Minikube:

$ minikube stop

To start a different version of Minikube we have to start it with --kubernetes-version <supported-version>. But executing the start command gives us an error…

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Getting VM IP address...
Kubernetes version downgrade is not supported. Using version: v1.9.0
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

So we end up with v1.9.0 anyway… Time to destroy the ~/.minikube directory! Also delete the minikube VM in VirtualBox. We try again…
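
Both clean-up steps can be done from the command line (minikube delete removes the VM that Minikube registered in VirtualBox):

```shell
# Remove the VM registered in VirtualBox
minikube delete

# Wipe the cached state so the next start is from scratch
rm -rf ~/.minikube
```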

$ minikube start --kubernetes-version v1.7.0
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 137.57 MB / 137.57 MB [============================================] 100.00% 0s
 65 B / 65 B [======================================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Done! But a get nodes command fails…

$ kubectl get nodes
No resources found.

Do you remember how important it is to have a kubectl matching the running cluster? We have to replace the existing v1.9.0 kubectl with a v1.7.0 one. To do this, we download it with:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/darwin/amd64/kubectl
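
The downloaded file is not executable yet; it still needs the executable bit and a place on your PATH. A typical follow-up, assuming /usr/local/bin is on your PATH:

```shell
# Make the downloaded client executable and put it on the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```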

Notice that v1.7.0 can be replaced to match any other client version you may need. Now, if we try the same get nodes operation again with the v1.7.0 kubectl binary:

$ kubectl get nodes
NAME STATUS AGE VERSION
minikube Ready 14m v1.7.0

Docker Edge can activate a local Kubernetes cluster. Is it worth it? Personal Kubernetes (II) is coming…

Automate all the things with Jenkins! (II): Create a pipeline to create Docker images

Maybe you have your own Jenkins server by now. Great! Now, why not start automating all the things?

As sysadmins we want all our processes to run flawlessly when needed, and in an automated way when possible/applicable. Creating, modifying and maintaining a Docker image can be quite a repetitive task, so we want to automate it at all costs. Think about security patches, for example!

About this project

Docker has a nice container to test that Docker works: the whalesay container. Basically, it is an implementation of cowsay (a popular Perl script) that replaces the cow template with a whale. The problem is that there is no official Raspberry Pi equivalent image, and the x86 image uses Ubuntu as its base image, so the resulting Docker image is huge (more than 200 MB just to say Hello World…). So we will create an ARM image for Raspberry Pi, based on Alpine, that does the whalesay thing. You can check this project on GitHub (Dockerfile here). This article is based on this entry from getintodevops.com.

For the Jenkins pipeline we have created a file (written in Groovy) that describes all the steps (stages) and operations that the pipeline will perform to carry out the job. In this file we define four stages:

  • Clone Repository: We will get the source code from git
  • Build Image: We will do the “Docker Build” operation over the source code
  • Test Image: We will test the created image
  • Push Image: We will push the created image to Docker Hub

For this last step, we need our Docker Hub credentials in Jenkins to be able to perform the docker push operation.

Configuring Docker Hub With Jenkins

To store the Docker image resulting from our build, we’ll be using Docker Hub. You can sign up for a free account at https://hub.docker.com.

Once we have a Docker Hub account, we will add it under Credentials -> System -> Global credentials -> Add Credentials.

Add your Docker Hub credentials as the type Username with password, with the ID docker-hub-credentials. The ID field is how we will refer to this entry in our script.

The Jenkinsfile

We’ll need to give Jenkins access to push the image to Docker Hub. For this, we’ll create Credentials in Jenkins, and refer to them in the Jenkinsfile as ‘docker-hub-credentials’.

node {
    def app

    stage('Clone repository') {
        /* Ensure we can clone the repository */
        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */
        app = docker.build("stealthizer/rpi-whalesay")
    }

    stage('Test image') {
        /* No real tests, so this step always passes */
        sh 'echo "Tests passed"'
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, a short hash that identifies the commit in git.
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        sh "git rev-parse --short HEAD > .git/commit-id"
        def commit_id = readFile('.git/commit-id').trim()
        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
            app.push("${commit_id}")
            app.push("latest")
        }
    }
}

As you might have noticed in the above Jenkinsfile, we’re using docker.withRegistry to wrap the app.push commands – this instructs Jenkins to log in to a specified registry with the specified credential id (docker-hub-credentials).

The Jenkins Pipeline

Now we will create the pipeline, that is, an automated expression of the task you want to perform, from the code you have written to the very final delivery step (creating the image, or putting something into production, for example). This will be your contribution to the continuous delivery process.

To create the pipeline we will add a New Item -> Pipeline. We will fill in the following fields:

  • Description: Free text to describe your pipeline project
  • GitHub Project: the URL of the project you are going to build automatically. I’ll be using https://github.com/stealthizer/rpi-whalesay/.
  • Definition: in the pipeline part of this config page we will choose “Pipeline script from SCM”, specify Git as our SCM source, put the git project URL again as the Repository URL (no credentials needed if the repository is public), specify the branch you want to build (*/master can be a good start) and, as the Script Path, just put Jenkinsfile.

Once we press Save, we are finished creating our pipeline.

To test it, we just press the Build option inside the job and check that all the stages finish in a Success state.

[Screenshot: Jenkins stage view]

Docker on Raspberry Pi

How do you put Docker on a Raspberry Pi? The easiest way is to install a distribution that already includes it, like Hypriot. But the official way to install Docker on Raspbian is (no sudo needed):

$ curl -sSL https://get.docker.com | sh

At the time this article was written, the version installed by this command was 17.12.0-ce.

If you want to run the docker command as the pi user, you can issue:

$ sudo usermod -aG docker pi

The change will take effect the next time you open a session as the pi user.
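
You can check the membership from any shell; after a new login for pi, the docker group should appear in the list:

```shell
# List the groups the pi user belongs to; "docker" should be among them
id -nG pi
```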

To test the installation we will run the stealthizer/rpi-whalesay image, which is prepared for this architecture:

[Screenshot: rpi-whalesay output]

And that’s all for now. This was a very basic entry, I know. But some weeks ago there was a problem with the latest Docker version for Raspberry Pi, and some additional steps were required to make Docker work…

Automate all the things with Jenkins! (I)

Automate all the Things!

It is normal that when we develop something we need to repeat some steps over and over again, like testing the code, compiling the binaries or creating a new version of a Docker image. To help us with this we will be using Jenkins, a continuous integration server. On our Raspberry Pi, of course!

I’ve found that on a fresh, recent Raspbian installation some extra steps are needed to make it work. The oracle-java8-jdk that apt-get installs is too old (1.8.0_65), leading to problems starting with the installation itself. The solution found on Stack Exchange needed an additional step, as the apt-key command was not working because dirmngr was not installed (solution found here).

We will begin by preparing the system to install Jenkins. These instructions were performed on a fresh Raspbian installation with no other Java installed. First we get a key from the Ubuntu keyserver:

sudo apt-get install dirmngr
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886

We will edit the package sources file /etc/apt/sources.list to add the following lines:

deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
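
The same lines can also be appended from the shell instead of editing the file by hand (this writes to /etc/apt/sources.list, as above):

```shell
# Append the PPA entries to the system package sources (needs sudo)
echo 'deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' | sudo tee -a /etc/apt/sources.list
echo 'deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' | sudo tee -a /etc/apt/sources.list
```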

So we can proceed to install Java 8

sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo apt-get install oracle-java8-set-default

To check the version we have installed we can execute:

$ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) Client VM (build 25.151-b12, mixed mode)

The Java version must be at least 1.8.0_101, or things like this may happen.

To proceed with the Jenkins installation itself:

wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

We will wait until the service has started on port 8080. You will be prompted to unlock your installation by providing a secret initial password located at /var/lib/jenkins/secrets/initialAdminPassword. Then you will be prompted to select some plugins (the suggested ones are fine to begin with) and to create the admin user for this new Jenkins.
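
To print that initial password from a shell on the Pi (path from above; the file is only readable by root):

```shell
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```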

And that’s it! Soon we will be automating the creation of our docker images. Stay tuned!


Meltdown, Spectre… is my Raspberry Pi safe?

So you have heard about Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753 and CVE-2017-5715), right? “Almost all computers are affected by this processor bug.” It seems like 2018 will be fun. More details about these processor bugs here. Anyway:

A quick search in Google shows you that:

“None of the above cores are listed as vulnerable to any version of the attack (they are not listed at all, in fact, because there is no known vulnerability to these attacks).”

“Note that Variants 1 and 2 (CVE-2017-5753 and CVE-2017-5715) are known as Spectre, and Variants 3 (CVE-2017-5754) and 3a (a related attack investigated by ARM) are called Meltdown. Therefore, at the present time, no Raspberry Pi devices are believed to be vulnerable to either Spectre or Meltdown.”

So, never let your guard down on security, but it seems that this time our Pis are not affected. At this time, at least…

Bonus:

xkcd’s view on this…