I needed to generate random master passwords for several Amazon RDS MySQL instances.

The specification is as follows:

The password for the master database user can be any printable ASCII character except "/", """, or "@". Master password constraints differ for each database engine.

MySQL, Amazon Aurora, and MariaDB

  • Must contain 8 to 41 characters.

I came up with this:

head -n 1 < <(fold -w 41 < <(tr -d '/"@' < <(LC_ALL=C tr -dc '[:graph:]' < /dev/urandom)))

If you prefer to use pipes (rather than process substitution) the command would look like this:

cat /dev/urandom | LC_ALL=C tr -dc '[:graph:]' | tr -d '/"@' | fold -w 41 | head -n 1


  • take a stream of random bytes
  • remove all chars not in the set specified by [:graph:], i.e. discard everything that is not a printable, non-whitespace ASCII character
  • remove the chars that are explicitly not permitted by the RDS password specification (/, ", and @)
  • split the stream into lines of 41 characters, i.e. the maximum password length
  • stop after the first line
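The steps above can be wrapped in a small shell function for reuse; the function name and the length parameter are my own additions, not part of the original one-liner:

```shell
# Generate an RDS-safe password of a given length
# (default 41, the maximum allowed for MySQL/Aurora/MariaDB).
gen_rds_password() {
  len="${1:-41}"
  LC_ALL=C tr -dc '[:graph:]' < /dev/urandom | tr -d '/"@' | fold -w "$len" | head -n 1
}

pw="$(gen_rds_password)"
echo "$pw"
```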

I recently wanted to monitor my home NAS with prometheus, which essentially means running node_exporter on it.

The NAS is running FreeNAS, which is, at time of writing, based on FreeBSD 10.3-STABLE, so I need to somehow get hold of node_exporter for FreeBSD.

Rather than installing go and building node_exporter directly on the FreeNAS machine, I will build it on a Vagrant machine (running FreeBSD 10.3-STABLE) then copy it to the NAS. I will do this by creating a FreeBSD package which can be installed on the FreeNAS machine.

One final complication: I want to use an unreleased version of node_exporter that contains the fix for a bug I reported recently, so I will need to build against an arbitrary GitHub commit hash.

First, I created a simple Vagrantfile as follows:

Vagrant.configure("2") do |config|
  config.vm.guest = :freebsd
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
  config.vm.box = "freebsd/FreeBSD-10.3-STABLE"
  config.ssh.shell = "sh"
  config.vm.base_mac = "080027D14C66"

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "1024"]
    vb.customize ["modifyvm", :id, "--cpus", "2"]
    vb.customize ["modifyvm", :id, "--hwvirtex", "on"]
    vb.customize ["modifyvm", :id, "--audio", "none"]
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
  end
end

I ran vagrant up and vagrant ssh to start the machine and jump onto it.

I then ran the following commands to initialize the FreeBSD ports tree:

sudo -i
portsnap fetch
portsnap extract

The port I want is in /usr/ports/sysutils/node_exporter, so normally I would simply change into that directory and build the package:

cd /usr/ports/sysutils/node_exporter
make package

This will create a .txz file which is a FreeBSD pkg and can be installed on FreeNAS.

However, to build node_exporter from a specific GitHub commit hash I need to make a couple of changes to the port's Makefile.

First, I identify the short hash of the commit I want to build – 269ee7a, in this instance.

Then I modify the Makefile, adding GH_TAGNAME and modifying PORTVERSION, something like this:

GH_TAGNAME=     269ee7a

Finally, I generate distinfo to match the modified version, and build the package:

make makesum
make package

I now copy the resulting node_exporter .txz package to my FreeNAS box, where I can install and run it.

On the FreeNAS box, I use the following commands to install, enable, and start the node_exporter service:

pkg add ./node_exporter-*.txz
cat <<EOT >> /etc/rc.conf
# enable prometheus node_exporter
node_exporter_enable="YES"
EOT
service node_exporter start

I can now configure prometheus to scrape metrics from this machine.

This post documents my experiences with trying to run kubernetes on a five-node Raspberry Pi 2 cluster.

I started by setting up each of the five Raspberry Pis with Hypriot v1.1.1.

I have internal DNS & DHCP services on my lab network, so I used hardware addresses to ensure each node has a DNS entry and always gets the same IP address.

On each node, I ensured all packages were up-to-date:

apt-get update && apt-get -y upgrade

I used the kubeadm guide for the following steps.

Add a new repo and install various kubernetes commands on all nodes:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

I designated node01 as the master and ran the following commands on the master only:

Initialise the master:

kubeadm init --pod-network-cidr=

Install flannel networking:

export ARCH=arm
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -
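The sed substitution simply rewrites every amd64 reference in the manifest (container image names and the like) to the target architecture. A minimal illustration, using a hypothetical image reference of the kind the manifest contains:

```shell
# Rewrite an amd64 image reference to arm, as the pipeline above does
# for the whole manifest. The image name here is illustrative.
echo "quay.io/coreos/flannel:v0.6.2-amd64" | sed "s/amd64/arm/g"
# → quay.io/coreos/flannel:v0.6.2-arm
```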

Check everything is running OK:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          24m
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          23m
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          23m
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          23m
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          24m
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          22m
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          15m
kube-system   kube-proxy-28edm                              1/1       Running   0          22m
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          23m

Join the other nodes:

kubeadm join --token=c2a0a6.7dd1d5b1c26795ef

Check they've all joined correctly:

$ kubectl get nodes

NAME                  STATUS    AGE
node01.rpi.yo61.net   Ready     27m
node02.rpi.yo61.net   Ready     45s
node03.rpi.yo61.net   Ready     54s
node04.rpi.yo61.net   Ready     16s
node05.rpi.yo61.net   Ready     25s

Install the dashboard:

export ARCH=arm
curl -sSL "https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

You should now have something like this:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          1h
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          1h
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          1h
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          1h
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          1h
kube-system   kube-flannel-ds-2g1tr                         2/2       Running   0          1h
kube-system   kube-flannel-ds-a2v7q                         2/2       Running   0          1h
kube-system   kube-flannel-ds-iqrs2                         2/2       Running   0          1h
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          1h
kube-system   kube-flannel-ds-qhfvc                         2/2       Running   0          1h
kube-system   kube-proxy-0agjm                              1/1       Running   0          1h
kube-system   kube-proxy-28edm                              1/1       Running   0          1h
kube-system   kube-proxy-3w6e8                              1/1       Running   0          1h
kube-system   kube-proxy-fgxxp                              1/1       Running   0          1h
kube-system   kube-proxy-ypzyd                              1/1       Running   0          1h
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kubernetes-dashboard-3507263287-so0mw         1/1       Running   0          1h

To be continued…

When using rsync to copy rsnapshot archives you should use the --hard-links option:

       -H, --hard-links
              This tells rsync to look for hard-linked files in the source and
              link together the corresponding files on the destination.
              Without this option, hard-linked files in the source are treated
              as though they were separate files.
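To see the difference -H makes, here is a minimal sketch using temporary directories (the paths and file names are illustrative):

```shell
# Create a source tree where a and b are hard links to the same inode,
# then copy it with --hard-links so the link is preserved.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"        # a and b now share one inode
rsync -aH "$src/" "$dst/"   # with -H, dst/a and dst/b still share an inode
```

Without -H, the destination would contain two independent copies of the file, doubling the space used — which defeats the point of rsnapshot's hard-link-based archives.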

Thanks to this post: https://www.cyberciti.biz/faq/linux-unix-apple-osx-bsd-rsync-copy-hard-links/

As noted in my previous post, migrating a VM from one SmartOS node to another is pretty straightforward, and is documented on the SmartOS wiki.

One thing that is not mentioned is that if your VM is based on an image, then that image must already be installed/imported on the target host. If it is not, you will see an error like this:

[root@source ~]# vmadm send 0983fe0f-677e-4f54-b5cf-40184d041b6d | ssh smart01 vmadm receive
Invalid value(s) for: image_uuid

For example, on the source machine:

[root@source ~]# vmadm get 0983fe0f-677e-4f54-b5cf-40184d041b6d | grep image_uuid
      "image_uuid": "5e164fac-286d-11e4-9cf7-b3f73eefcd01",

[root@source ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

On the target machine:

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01

[root@target ~]# imgadm import 5e164fac-286d-11e4-9cf7-b3f73eefcd01
Importing 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820) from "https://images.joyent.com"
Gather image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 ancestry
Must download and install 1 image (336.9 MiB)
Download 1 image                            [==============================================================================================>] 100% 337.00MB   4.72MB/s  1m11s
Downloaded image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (336.9 MiB)
zones/5e164fac-286d-11e4-9cf7-b3f73eefcd01  [==============================================================================================>] 100% 337.00MB  25.48MB/s    13s
Imported image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820)

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

You should now be able to successfully perform the migration.

Set up public key ssh auth for the root user (and disable password auth)

  • put the public key(s) in /usbkey/config.inc/authorized_keys
  • add the following lines to /usbkey/config


  • edit /usbkey/ssh/sshd_config, making sure that the following items are set:

    # Disallow password authentication
    PasswordAuthentication no
    # Permit root login (which should be already set)
    PermitRootLogin yes

  • reboot the server

Migrating a VM from one SmartOS server to another

This is pretty easy, and documented on the SmartOS wiki (although it's marked as "experimental" and is not documented in the man pages).

One gotcha – if you've already disabled password login as per the previous section, you'll need to create a new key pair on the source SmartOS node and copy the public key into /root/.ssh/authorized_keys on the target SmartOS node.

Assuming you can ssh from the source node to the target node, VM Migration is as easy as running the following command on the source node:

vmadm send $VM_GUID | ssh $target vmadm receive

This command stops the VM on the source node, and sends it to the target node. You will then need to start the machine on the target node and destroy it on the source node (once you're happy it's working in its new home).

Picture the scene…

There's a new release of Puppet Enterprise. You download it, run the upgrade in your test environment, run your regression tests, and all looks good. You then upgrade your production master – all looks good. All that remains to be done is to upgrade the puppet agent on all client nodes – all 750 of them.

Now, you could ssh to each node individually and run the PE installer via curl|bash. You could even automate that with pssh, or similar. But there's got to be a better way, right?

This was the position I found myself in earlier this week.

I did some digging and found the puppet_agent module which, on the face of it, is written for just this situation. However, the module deliberately won't upgrade the agent automatically if the existing client is running v4.x.x, though it *will* upgrade if a package version is passed to the module. Also, by default it creates a new yum repo file pointing at the upstream Puppet repos, which is unnecessary on PE installs: the agent packages are already present on the master, available at https://<PUPPET_MASTER>:8140/packages/<PE_VERSION>/<OS+ARCH>/. In fact, the PE install process creates a yum config pointing at this repo, but that config is not updated when the master is upgraded.

So, to summarise, I need to solve two issues:

  1. Create a yum config pointing at the new agent software on the master
  2. Pass the specific package version to the puppet_agent class.

I noticed on the puppet master under packages that, in addition to the versioned directories, there was a current link which points to the, er, "current" version of the agent. I also noticed that there was a top-level fact called platform_tag that defined the <OS+ARCH> combination. That gave me enough information to create a repo config that will always point to "current" agent software on the master.

Digging in the puppet_agent class, I found that it used a PE function pe_compiling_server_aio_build() to get the agent version available on the master. I now have all the information I need.

I wrote the following code in my profile::puppet_agent class, which is applied to all nodes:

  yumrepo { 'pe_repo':
    ensure    => present,
    baseurl   => "https://${::puppet_master_server}:8140/packages/current/${::platform_tag}",
    descr     => 'Puppet Labs PE Packages $releasever - $basearch',
    enabled   => 1,
    gpgcheck  => 1,
    gpgkey    => "https://${::puppet_master_server}:8140/packages/GPG-KEY-puppetlabs",
    proxy     => '_none_',
    sslverify => false,
  }

  class { '::puppet_agent':
    manage_repo     => false,
    package_version => pe_compiling_server_aio_build(),
  }

As if by magic, all my client nodes were upgraded to the latest agent software.

I configured this blog to use a free, automatically-issued Let's Encrypt SSL certificate around 6 months ago.

The command to issue the cert is as follows:

letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com

To check if an existing certificate will expire within the next 28 days, use this command:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem
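The 2419200 figure passed to -checkend is just 28 days expressed in seconds:

```shell
# 28 days * 24 hours * 60 minutes * 60 seconds
echo $((28 * 24 * 60 * 60))   # prints 2419200
```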

Put these together and run them from a daily cron job (remembering to restart your web server after changing the certificate), and your cert will automatically renew 28 days before it expires.

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem || \
letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com && \
systemctl restart httpd

I recently migrated this blog to a new server running CentOS 7 and decided to use php-fpm and mod_proxy_fcgi instead of mod_php. I also like to install WordPress in its own directory and had problems getting the wp-admin sections of the site to work. I figured it out eventually with help from this page: https://wiki.apache.org/httpd/PHPFPMWordpress

This is the complete apache config fragment that defines the vhost, including SSL:

<VirtualHost *:443>
  ServerName blog.yo61.com

  ## Vhost docroot
  DocumentRoot "/var/www/sites/blog.yo61.com/html"

  ## Directories, there should at least be a declaration for /var/www/sites/blog.yo61.com/html

  <Directory "/var/www/sites/blog.yo61.com/html">
    AllowOverride FileInfo
    Require all granted
    DirectoryIndex index.php
    FallbackResource /index.php
  </Directory>

  <Directory "/var/www/sites/blog.yo61.com/html/wordpress/wp-admin">
    AllowOverride None
    Require all granted
    FallbackResource disabled
  </Directory>

  ## Logging
  ErrorLog "/var/log/httpd/blog.yo61.com_https_error_ssl.log"
  ServerSignature Off
  CustomLog "/var/log/httpd/blog.yo61.com_https_access_ssl.log" combined

  ## SSL directives
  SSLEngine on
  SSLCertificateFile      "/etc/letsencrypt/live/blog.yo61.com/cert.pem"
  SSLCertificateKeyFile   "/etc/letsencrypt/live/blog.yo61.com/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/blog.yo61.com/chain.pem"
  SSLCACertificatePath    "/etc/pki/tls/certs"

  ## Custom fragment
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://$1
  Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
  Header always set X-Frame-Options DENY
  Header always set X-Content-Type-Options nosniff
</VirtualHost>