Getting an ARM kubernetes cluster up and running

In the beginning, there was bare metal


Background

I recently decided to set up a Kubernetes cluster in my basement, partly because I’d never set a cluster up from scratch by myself, and partly because my existing NAS was beginning to run out of headroom.

For a variety of reasons, I decided to use ODROID HC2 boards. They’ve got gigabit ethernet, eight CPU cores, 2 GB RAM and a SATA-3 port for directly connecting a hard drive, which I wanted so I could use them as file server bricks. In a future post I will detail how I set up a distributed filesystem across the cluster.

EDIT - Added a link to the parts list, and added instructions for finding the new machine on your network.

Setting up an ODROID HC2 cluster

These notes should also work on an Odroid HC1 or XU4.

Install Debian Stretch

I used meveric’s debian-stretch ISO from https://oph.mdrjr.net/meveric/images/Stretch/.

I used Etcher to burn the debian-stretch ISO to a microSD card.

Flash your microSD card, plug it into the HC2 and attach a SATA drive to your HC2 if you’re going to use one, then connect it to your switch and power up. It will get an IP address with DHCP. Since they don’t have a video connector, you’ll have to scan your network to figure out what IP it got.

Find the Odroid on your network

nmap

You can use nmap to find the Odroid machines. Assuming your network is 10.0.0.1-254, you can scan the network with nmap -sP -n 10.0.0.0/24 | grep -v Host. Look for systems that show Wibrain in their MAC Address line.
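Since nmap prints the scan-report line and the MAC Address line separately, a small awk filter can pull out just the ODROID IPs. Here's a sketch of one way to do it (the `extract_odroid_ips` function name is mine, and it assumes the standard `nmap -sP` output format):

```shell
# extract_odroid_ips: read `nmap -sP` output on stdin and print the IP of
# every host whose MAC Address line mentions Wibrain (the vendor string
# ODROID boards report).
extract_odroid_ips() {
  awk '/Nmap scan report for/ { ip = $NF }
       tolower($0) ~ /wibrain/ { print ip }'
}

# usage: nmap -sP -n 10.0.0.0/24 | extract_odroid_ips
```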

Angry IP Scanner

If you’re not comfortable with nmap, I recommend Angry IP Scanner to find the new machine on your network. You can configure it to show the MAC vendor of your ethernet card (select Fetchers in the Angry IP Scanner menu on macOS, then add MAC Vendor to the selected fetchers) - the ODROIDs will show up as WIBRAIN.

Log in as root with the password odroid.

Change your root password!

Don’t skip this just because you’re running this on an internal-only network. The default root password for the image is well known, so run passwd root to change it so you’re not vulnerable if you accidentally open up your Wi-Fi.

Install your updates

Install ISOs are inevitably out of date, but that’s ok, we’ll begin by updating all the installed packages.

apt-get update && apt-get upgrade && apt-get dist-upgrade

Install useful tooling

Let’s also add some useful tools to the machine.

apt-get install -y dnsutils git htop lshw man net-tools rsync sudo

Now we’re ready to install Docker and Kubernetes.

Install docker-ce

Install the docker-ce prerequisites

    apt-get install \
         apt-transport-https \
         ca-certificates \
         curl \
         gnupg2 \
         software-properties-common

Partition & Format the drive


Check what’s on your drive with lshw -C disk

For the sake of these examples, we’ll assume the SATA drive is /dev/sda

Format the drive

If you didn’t add a SATA drive to your Odroid, you can skip this section.

First partition it

  1. fdisk /dev/sda
  2. list all the existing partitions with the p command
  3. Remove any existing partitions with the d command
  4. Create a new partition with the n command
  5. Write the new partition table to disk with the w command

Now format it

  1. mkfs.ext4 /dev/sda1
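If you have several nodes, the interactive fdisk session gets old fast. The same partition-then-format sequence can be scripted with sfdisk; this is a sketch (the wrapper function and its DRY_RUN switch are my own convention, not part of sfdisk), and the real thing destroys everything on the disk, so double-check the device name:

```shell
# partition_and_format: wipe $1, create one Linux partition spanning the
# whole disk, then format it ext4. Set DRY_RUN=1 to print the commands
# instead of running them.
partition_and_format() {
  disk="$1"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
  echo 'type=83' | run sfdisk "$disk"   # 83 = Linux partition type
  run mkfs.ext4 "${disk}1"
}

# usage: partition_and_format /dev/sda
```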

Configure the system to automatically mount the drive

  1. Get the UUID with blkid | grep /dev/sda. You’ll see something like /dev/sda1: UUID="abcdabcd-abcd-11bb-9343-9089b93bbb72" TYPE="ext4" PARTUUID="13371337-abcd-1234-aa00-abcd1234abcd1234"
  2. Create a mount point to mount your filesystem. I picked /mnt/sata and created it with mkdir -p /mnt/sata
  3. Add an entry to /etc/fstab. Use your editor of choice to add a line UUID="abcdabcd-abcd-11bb-9343-9089b93bbb72" /mnt/sata ext4 defaults 0 2. Use the UUID from step 1, not the example one here.

You should now be able to mount the drive with mount /mnt/sata. If it successfully mounts, it should show up after a reboot.
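Copying the UUID out of blkid’s output by hand invites typos; blkid can emit just the UUID with -s UUID -o value, so the fstab entry can be generated instead. A sketch (the fstab_line helper name is mine):

```shell
# fstab_line: print an fstab entry for the given filesystem UUID and mount
# point, using the same ext4/defaults options as the manual step above.
fstab_line() {
  printf 'UUID="%s" %s ext4 defaults 0 2\n' "$1" "$2"
}

# usage (on the node, as root):
#   fstab_line "$(blkid -s UUID -o value /dev/sda1)" /mnt/sata >> /etc/fstab
```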

Force a static IP for the node

First, back up the current network config with cp /etc/network/interfaces /etc/network/interfaces-original

Now edit /etc/network/interfaces and put in:

# Ethernet adapter 0
auto eth0
allow-hotplug eth0
#no-auto-down eth0
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
# Or use your own by uncommenting below
# dns-nameservers 192.168.1.1
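Every node needs its own address, so when you repeat this across the cluster it helps to template the stanza. A sketch (the gen_interfaces name is mine; the gateway, netmask and nameservers are the example values from above - substitute your own):

```shell
# gen_interfaces: print an /etc/network/interfaces stanza for the given
# static address, matching the example config above.
gen_interfaces() {
  cat <<EOF
# Ethernet adapter 0
auto eth0
allow-hotplug eth0
iface eth0 inet static
address $1
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
EOF
}

# usage: gen_interfaces 192.168.1.101 > /etc/network/interfaces
```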

Install Docker and Kubernetes

Docker

Install the docker apt signing GPG key

On our Stretch node, add Docker’s GPG key:

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

echo "deb [arch=armhf] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list

Install pip & docker-compose

apt-get update && apt-get install -y python3-pip && pip3 install setuptools docker-compose

Install docker

apt-get install -y docker-ce

Confirm Docker is working

docker run hello-world

You should see something similar to this

root@rodan:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1eda109e4da: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm32v7)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Install Kubernetes

Install the k8s repository key

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Add the k8s apt repo

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Install kubernetes

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Up-to-date instructions are at https://kubernetes.io/docs/setup/independent/

This is pretty tedious, is there an easier way?

All this is tedious and prone to mistyping, especially if you’ve got multiple nodes to make into a cluster, so I’ve put everything except the network setup and disk formatting/mounting into a handy helper script, borg-odroid.

You can copy borg-odroid to a new machine after the first boot and it will bring the machine’s debian install up to date, then install docker & kubernetes and some other handy support tools.

Configure your Kubernetes Cluster

First - did you configure your Kubernetes nodes to use static addresses? You will have issues if you didn’t.

Initialize the cluster on your master node

We have to use 10.244.0.0/16 as our pod CIDR when initializing the cluster because the flannel configuration we’re going to install later expects that range. We could change all the references to it, but that would just give us more chances to break something.

kubeadm init --pod-network-cidr=10.244.0.0/16

Note: It is normal to see your master node as NotReady if you run kubectl get nodes before setting up networking.

Setup your config

rm -rf ~/.kube/ && mkdir ~/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config

Set up Networking

  1. Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains.
  2. Flannel supports the ARM architecture, so kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
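Note that the sysctl setting from step 1 doesn’t survive a reboot. One way to persist it is a drop-in under /etc/sysctl.d; this is a sketch (the 99-kubernetes.conf file name is my choice, and the directory argument exists only so the helper can be exercised somewhere other than /etc):

```shell
# persist_bridge_sysctl: write the bridge-nf-call-iptables setting to a
# sysctl.d drop-in so it survives reboots. $1 optionally overrides the
# target directory (defaults to /etc/sysctl.d).
persist_bridge_sysctl() {
  dir="${1:-/etc/sysctl.d}"
  echo 'net.bridge.bridge-nf-call-iptables = 1' > "$dir/99-kubernetes.conf"
}

# usage (as root): persist_bridge_sysctl && sysctl --system
```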

Allow pods to run on master node

If you only have one node in the cluster, k8s won’t run pods on the master node. Disable tainting with kubectl taint nodes --all node-role.kubernetes.io/master-

Install helm

helm init --tiller-image=jessestuart/tiller:v2.9.1 --upgrade

Set up the dashboard

  1. DASHSRC=https://raw.githubusercontent.com/kubernetes/dashboard/master
  2. curl -sSL $DASHSRC/src/deploy/recommended/kubernetes-dashboard-arm-head.yaml | kubectl apply -f -