Want to make sure all the variables declared in a terraform module are actually used in the code?

This code lists all variables used in each of the sub-directories containing terraform code.

It started off as a one-liner but, as usual, the code to make it look pretty is bigger than the main functional code!

#!/usr/bin/env bash

set -euo pipefail

default_ul_char=-

main() {
  process
}

print_underlined () {
  local text="$1" ; shift
  local ul_char
  if [[ -n ${1:-} ]] ; then
    ul_char="$1" ; shift
  else
    ul_char=$default_ul_char
  fi
  printf '%s\n%s\n' "$text" "${text//?/$ul_char}"
}

process() {
  # loop over all directories
  while read -r dir ; do
    pushd "$dir" >/dev/null
    echo
    print_underlined "$dir" 
    # get a unique list of variables used in all .tf files in this directory
    sort -u < <(
      perl -ne 'print "$1\n" while /var\.([\w-]+)/g' ./*.tf
    )
    popd > /dev/null
  done < <(
    # get a unique list of directories containing terraform files
    # starting in the present working directory
    sort -u < <(
      find . -name '*.tf' -exec dirname {} \;
    )
  )
}

main "$@"
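
Save it as something like list-tf-vars.sh (the name is just a suggestion) at the top of a terraform repo and run it from there. The output looks something like this, with the module names here being purely illustrative:

./modules/rds
-------------
db_name
environment
instance_class

./modules/vpc
-------------
cidr_block
environment

Comparing each list with the variable declarations in that directory makes it easy to spot anything declared but never used.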

If you work with rpm-based systems you will probably have seen content like this in the repo config files:

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

The items referenced with a leading dollar sign ($releasever, $basearch, and $infra) are yum variables.

Today, I needed to install i386 packages on a system running an x86_64 kernel (don't ask!).

Here's how I did it:

echo i386 > /etc/yum/vars/basearch

Documentation here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_yum_variables
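
To double-check the values yum is actually using, you can dump its variables via the python API (this is the approach shown in the documentation above, and assumes the stock python yum bindings that ship with CentOS/RHEL):

python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)'

After the echo above, basearch should be reported as i386.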

From DevOps, Docker, and Empathy by Jérôme Petazzoni (title shamelessly stolen from a talk by Bridget Kromhout):

The DevOps movement is more about a culture shift than embracing a new set of tools. One of the tenets of DevOps is to get people to talk together.

Implementing containers won’t give us DevOps.

You can’t buy DevOps by the pound, and it doesn’t come in a box, or even in intermodal containers.

It’s not just about merging “Dev” and “Ops,” but also getting these two to sit at the same table and talk to each other.

Docker doesn’t enforce these things (I pity the fool who preaches or believes it) but it gives us a table to sit at, and a common language to facilitate the conversation. It’s a tool, just a tool indeed, but it helps people share context and thus understanding.

Updated 2017-12-06: complete re-write to support nested braces in variable description
Updated 2017-12-05: support minimal variable declaration, eg. variable "foo" {}

When developing terraform code, it is easy to end up with a bunch of variable definitions that are listed in no particular order.

Here's a bit of python code that will sort terraform variable definitions. Use it as a filter from inside vim, or as a standalone tool if you have all your variable definitions in one file.

eg:

tf_sort < variables.tf > variables.tf.sorted
mv variables.tf.sorted variables.tf
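
Or, from inside vim, filter the whole buffer through the script:

:%!tf_sort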

Here's the code:

#!/usr/bin/env python
# sort terraform variables

import sys

import regex

# this regex matches terraform variable definitions
# we capture the variable name so we can sort on it
pattern = regex.compile(r'''
(                   # capturing group #1
    variable        # literal text
    \s*             # white space (optional)
    "               # literal quote character
)
(                   # capturing group #2 (variable name)
    [^"]+           # anything except another quote
)
(                   # capturing group #3
    "               # literal quote character
    \s*             # white space (optional)
)
(?<rec>             # capturing group named "rec"
    {               # literal left brace
    (?:             # non-capturing group
        [^{}]++     # anything but braces one or more times with no backtracking
        |           # or
        (?&rec)     # recursive substitution of group "rec"
    )*              # this group can appear 0 or more times
    }               # literal right brace
)
''', regex.VERBOSE)


def process(content):
    # sort the content (a list of tuples) on the second item of the tuple
    # (which is the variable name)
    matches = sorted(regex.findall(pattern, content), key=lambda x: x[1])

    # iterate over the sorted list and output them
    for match in matches:
        print(''.join(map(str, match)))

        # print a blank line between definitions, but not after the last one
        if match != matches[-1]:
            print('')


# check if we're reading from stdin
if not sys.stdin.isatty():
    stdin = sys.stdin.read()
    if stdin:
        process(stdin)

# process any filenames on the command line
for filename in sys.argv[1:]:
    with open(filename) as f:
        process(f.read())
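
As a quick illustration (the variable names are made up), given a file containing:

variable "vpc_id" {
  description = "ID of the VPC"
}

variable "ami" {}

variable "instance_type" {
  default = "t2.micro"
}

the script emits the same definitions sorted by variable name:

variable "ami" {}

variable "instance_type" {
  default = "t2.micro"
}

variable "vpc_id" {
  description = "ID of the VPC"
}

Note that only the variable blocks themselves are matched; any comments or other code between them will not appear in the output.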

If you manually delete a resource that is being managed by terraform, it is not removed from the state file and becomes "orphaned".

You may see errors like this when running terraform:

1 error(s) occurred:
* aws_iam_role.s3_readonly (destroy): 1 error(s) occurred:
* aws_iam_role.s3_readonly (deposed #0): 1 error(s) occurred:
* aws_iam_role.s3_readonly (deposed #0): Error listing Profiles for IAM Role (s3_readonly) when trying to delete: NoSuchEntity: The role with name s3_readonly cannot be found.

This prevents terraform from running, even if you don't care about the missing resource, such as when you're trying to delete everything, ie. when running terraform destroy.

Fortunately, terraform has a command for exactly this situation, to remove a resource from the state file: terraform state rm <name of resource>

In the example above, the command would be terraform state rm aws_iam_role.s3_readonly
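
If you're not sure of the exact resource address, terraform state list shows everything terraform is tracking, so you can find the orphan before removing it:

terraform state list | grep s3_readonly
terraform state rm aws_iam_role.s3_readonly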

Update 2017-08-25: Additional characters excluded due to problems with password handling in PHP applications using the Laravel framework on AWS ElasticBeanstalk

I needed to generate random master passwords for several Amazon RDS MySQL instances.

The specification is as follows:

The password for the master database user can be any printable ASCII character except "/", """, or "@". Master password constraints differ for each database engine.

MySQL, Amazon Aurora, and MariaDB

  • Must contain 8 to 41 characters.

I came up with this:

head -n 1 < <(fold -w 41 < <(LC_ALL=C tr -d '@"$`!/'\'\\ < <(LC_ALL=C tr -dc '[:graph:]' < /dev/urandom)))

If you prefer to use pipes (rather than process substitution) the command would look like this:

cat /dev/urandom | LC_ALL=C tr -dc '[:graph:]' | tr -d '@"$`!/'\'\\ | fold -w 41 | head -n 1

Notes:

  • take a stream of random bytes
  • remove all chars not in the set specified by [:graph:], ie. get rid of everything that is not a printable ASCII character
  • remove the chars that are explicitly not permitted by the RDS password specification and others that can cause problems if not handled correctly
  • split the stream into lines 41 characters long, ie. the maximum password length
  • stop after the first line
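
The same pipeline wrapped up as a shell function you could drop into your profile (a sketch; the function name and the optional length argument are my own additions):

# generate a random password acceptable to RDS
# usage: rds_password [length]   (default 41, the MySQL/Aurora/MariaDB maximum)
rds_password() {
  local length="${1:-41}"
  LC_ALL=C tr -dc '[:graph:]' < /dev/urandom \
    | LC_ALL=C tr -d '@"$`!/'\'\\ \
    | fold -w "$length" \
    | head -n 1
}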

I recently wanted to monitor my home NAS with prometheus, which essentially means running node_exporter on it.

The NAS is running FreeNAS, which is, at the time of writing, based on FreeBSD 10.3-STABLE, so I need to somehow get hold of node_exporter for FreeBSD.

Rather than installing go and building node_exporter directly on the FreeNAS machine, I will build it on a Vagrant machine (running FreeBSD 10.3-STABLE) then copy it to the NAS. I will do this by creating a FreeBSD package which can be installed on the FreeNAS machine.

One final complication: I want to use an unreleased version of node_exporter that contains the fix for a bug I reported recently. So, I will need to build against an arbitrary github commit hash.

First, I created a simple Vagrantfile as follows:

Vagrant.configure("2") do |config|
  config.vm.guest = :freebsd
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
  config.vm.box = "freebsd/FreeBSD-10.3-STABLE"
  config.ssh.shell = "sh"
  config.vm.base_mac = "080027D14C66"

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "1024"]
    vb.customize ["modifyvm", :id, "--cpus", "2"]
    vb.customize ["modifyvm", :id, "--hwvirtex", "on"]
    vb.customize ["modifyvm", :id, "--audio", "none"]
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
  end
end

I ran vagrant up and vagrant ssh to start the machine and jump onto it.

I then ran the following commands to initialize the FreeBSD ports:

sudo -i
portsnap fetch
portsnap extract

The port I want is in /usr/ports/sysutils/node_exporter, so normally I would simply change into that directory and build the package:

cd /usr/ports/sysutils/node_exporter
make package

This will create a .txz file which is a FreeBSD pkg and can be installed on FreeNAS.

However, to build node_exporter from a specific github commit hash I need to make a couple of changes to the Makefile.

First, I identify the short hash of the commit I want to build – 269ee7a, in this instance.

Then I modify the Makefile, adding GH_TAGNAME and modifying PORTVERSION, something like this:

PORTVERSION=    0.13.0.269ee7a
...
GH_TAGNAME=     269ee7a

Finally, I generate distinfo to match the modified version, and build the package:

make makesum
make package

I now copy node_exporter-0.13.0.269ee7a.txz to my FreeNAS box, where I can install and run it.
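
The copy itself is just scp from the build machine, run from wherever the ports build left the package (the hostname here is a placeholder for your NAS):

scp node_exporter-0.13.0.269ee7a.txz root@freenas.local:/tmp/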

On the FreeNAS box, I use the following commands to install, enable, and start the node_exporter service:

pkg add ./node_exporter-0.13.0.269ee7a.txz
cat <<EOT >> /etc/rc.conf

# enable prometheus node_exporter
node_exporter_enable="YES"
EOT
service node_exporter start
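
A quick check that the exporter is up and listening on its default port (9100):

curl -s http://localhost:9100/metrics | head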

I can now configure prometheus to scrape metrics from this machine.
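
On the prometheus side this is just a static scrape target; something like the following, with the job name and hostname adjusted to suit:

scrape_configs:
  - job_name: 'freenas'
    static_configs:
      - targets: ['freenas.local:9100']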

This post documents my experiences with trying to run kubernetes on a five-node Raspberry Pi 2 cluster.

I started by setting up each of the five Raspberry Pis with Hypriot v1.1.1.

I have internal DNS & DHCP services on my Lab network so I used hardware addresses to ensure each node has a DNS entry and always gets the same IP, as follows:

node01.rpi.yo61.net 192.168.1.141
node02.rpi.yo61.net 192.168.1.142
node03.rpi.yo61.net 192.168.1.143
node04.rpi.yo61.net 192.168.1.144
node05.rpi.yo61.net 192.168.1.145

On each node, I ensured all packages were up-to-date:

apt-get update && apt-get -y upgrade

I used the kubeadm guide for the following steps.

Add a new repo and install various kubernetes commands on all nodes:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

I designated node01 as the master and ran the following commands on the master only:

Initialise the master:

kubeadm init --pod-network-cidr=10.244.0.0/16

Install flannel networking:

export ARCH=arm
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

Check everything is running OK:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          24m
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          23m
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          23m
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          23m
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          24m
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          22m
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          15m
kube-system   kube-proxy-28edm                              1/1       Running   0          22m
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          23m

Join the other nodes:

kubeadm join --token=c2a0a6.7dd1d5b1c26795ef 192.168.1.141

Check they've all joined correctly:

$ kubectl get nodes

NAME                  STATUS    AGE
node01.rpi.yo61.net   Ready     27m
node02.rpi.yo61.net   Ready     45s
node03.rpi.yo61.net   Ready     54s
node04.rpi.yo61.net   Ready     16s
node05.rpi.yo61.net   Ready     25s

Install the dashboard:

export ARCH=arm
curl -sSL "https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

You should now have something like this:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          1h
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          1h
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          1h
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          1h
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          1h
kube-system   kube-flannel-ds-2g1tr                         2/2       Running   0          1h
kube-system   kube-flannel-ds-a2v7q                         2/2       Running   0          1h
kube-system   kube-flannel-ds-iqrs2                         2/2       Running   0          1h
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          1h
kube-system   kube-flannel-ds-qhfvc                         2/2       Running   0          1h
kube-system   kube-proxy-0agjm                              1/1       Running   0          1h
kube-system   kube-proxy-28edm                              1/1       Running   0          1h
kube-system   kube-proxy-3w6e8                              1/1       Running   0          1h
kube-system   kube-proxy-fgxxp                              1/1       Running   0          1h
kube-system   kube-proxy-ypzyd                              1/1       Running   0          1h
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kubernetes-dashboard-3507263287-so0mw         1/1       Running   0          1h
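
To reach the dashboard, run kubectl proxy on a machine whose kubectl is configured for the cluster:

kubectl proxy

The UI should then be available at http://localhost:8001/ui (the exact path may vary with the dashboard version).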

To be continued…