If you manually delete a resource that is being managed by terraform, it is not removed from the state file and becomes "orphaned".

You may see errors like this when running terraform:

1 error(s) occurred:
* aws_iam_role.s3_readonly (destroy): 1 error(s) occurred:
* aws_iam_role.s3_readonly (deposed #0): 1 error(s) occurred:
* aws_iam_role.s3_readonly (deposed #0): Error listing Profiles for IAM Role (s3_readonly) when trying to delete: NoSuchEntity: The role with name s3_readonly cannot be found.

This prevents terraform from running, even if you don't care about the missing resource, such as when you're trying to delete everything with terraform destroy.

Fortunately, terraform has a command for exactly this situation, to remove a resource from the state file: terraform state rm <name of resource>

In the example above, the command would be terraform state rm aws_iam_role.s3_readonly
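
If you're not sure of the exact resource address, terraform state list shows everything terraform is tracking, so you can confirm the name before removing it:

terraform state list | grep s3_readonly
terraform state rm aws_iam_role.s3_readonly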

Update 2017-08-25: Additional characters excluded due to problems with password handling in PHP applications using the Laravel framework on AWS Elastic Beanstalk

I needed to generate random master passwords for several Amazon RDS MySQL instances.

The specification is as follows:

The password for the master database user can be any printable ASCII character except "/", '"', or "@". Master password constraints differ for each database engine.

MySQL, Amazon Aurora, and MariaDB

  • Must contain 8 to 41 characters.

I came up with this:

head -n 1 < <(fold -w 41 < <(LC_ALL=C tr -d '@"$`!/'\'\\ < <(LC_ALL=C tr -dc '[:graph:]' < /dev/urandom)))

If you prefer to use pipes (rather than process substitution) the command would look like this:

cat /dev/urandom | LC_ALL=C tr -dc '[:graph:]' | tr -d '@"$`!/'\'\\ | fold -w 41 | head -n 1

Notes:

  • take a stream of random bytes
  • remove all chars not in the set specified by [:graph:], ie. get rid of everything that is not a printable ASCII character
  • remove the chars that are explicitly not permitted by the RDS password specification and others that can cause problems if not handled correctly
  • split the stream into lines 41 characters long, ie. the maximum password length
  • stop after the first line
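
As I needed passwords for several instances, I wrapped the pipeline in a small loop; this is just a sketch, and the instance names are placeholders:

for db in db01 db02 db03; do
  pw=$(LC_ALL=C tr -dc '[:graph:]' < /dev/urandom | LC_ALL=C tr -d '@"$`!/'\'\\ | fold -w 41 | head -n 1)
  printf '%s %s\n' "$db" "$pw"  # one password per instance
done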

I recently wanted to monitor my home NAS with prometheus, which essentially means running node_exporter on it.

The NAS is running FreeNAS, which is, at the time of writing, based on FreeBSD 10.3-STABLE, so I need to somehow get hold of node_exporter for FreeBSD.

Rather than installing go and building node_exporter directly on the FreeNAS machine, I will build it on a Vagrant machine (running FreeBSD 10.3-STABLE) then copy it to the NAS. I will do this by creating a FreeBSD package which can be installed on the FreeNAS machine.

One final complication: I want to use an unreleased version of node_exporter that contains the fix for a bug I reported recently. So, I will need to build against an arbitrary GitHub commit hash.

First, I created a simple Vagrantfile as follows:

Vagrant.configure("2") do |config|
  config.vm.guest = :freebsd
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
  config.vm.box = "freebsd/FreeBSD-10.3-STABLE"
  config.ssh.shell = "sh"
  config.vm.base_mac = "080027D14C66"

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "1024"]
    vb.customize ["modifyvm", :id, "--cpus", "2"]
    vb.customize ["modifyvm", :id, "--hwvirtex", "on"]
    vb.customize ["modifyvm", :id, "--audio", "none"]
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
  end
end

I ran vagrant up and vagrant ssh to start the machine and jump onto it.

I then ran the following commands to initialize the FreeBSD ports tree:

sudo -i
portsnap fetch
portsnap extract

The port I want is in /usr/ports/sysutils/node_exporter, so normally I would simply change into that directory and build the package:

cd /usr/ports/sysutils/node_exporter
make package

This will create a .txz file, which is a FreeBSD package that can be installed on FreeNAS.

However, to build node_exporter from a specific GitHub commit hash I need to make a couple of changes to the Makefile.

First, I identify the short hash of the commit I want to build – 269ee7a, in this instance.

Then I modify the Makefile, adding GH_TAGNAME and modifying PORTVERSION, something like this:

PORTVERSION=    0.13.0.269ee7a
...
GH_TAGNAME=     269ee7a
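
For context, the relevant part of the modified Makefile then looks something like this. This is a sketch: USE_GITHUB and GH_ACCOUNT come from the ports GitHub framework, and the exact values in the real port may differ:

PORTNAME=       node_exporter
PORTVERSION=    0.13.0.269ee7a
CATEGORIES=     sysutils

USE_GITHUB=     yes
GH_ACCOUNT=     prometheus
GH_TAGNAME=     269ee7a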

Finally, I generate distinfo to match the modified version, and build the package:

make makesum
make package

I now copy node_exporter-0.13.0.269ee7a.txz to my FreeNAS box, where I can install and run it.

On the FreeNAS box, I use the following commands to install, enable, and start the node_exporter service:

pkg add ./node_exporter-0.13.0.269ee7a.txz
cat <<EOT >> /etc/rc.conf

# enable prometheus node_exporter
node_exporter_enable="YES"
EOT
service node_exporter start

I can now configure prometheus to scrape metrics from this machine.
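
For reference, the prometheus side is just a static scrape target; a minimal sketch, assuming the NAS resolves as nas.yo61.net and node_exporter is listening on its default port, 9100:

scrape_configs:
  - job_name: 'nas'
    static_configs:
      - targets: ['nas.yo61.net:9100']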

This post documents my experiences with trying to run kubernetes on a five-node Raspberry Pi 2 cluster.

I started by setting up each of the five Raspberry Pis with Hypriot v1.1.1.

I have internal DNS & DHCP services on my lab network, so I used hardware addresses to ensure each node has a DNS entry and always gets the same IP, as follows:

node01.rpi.yo61.net 192.168.1.141
node02.rpi.yo61.net 192.168.1.142
node03.rpi.yo61.net 192.168.1.143
node04.rpi.yo61.net 192.168.1.144
node05.rpi.yo61.net 192.168.1.145
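
If you happen to use dnsmasq for DNS/DHCP, static reservations like the following will do the job (the MAC addresses here are placeholders):

dhcp-host=b8:27:eb:00:00:01,node01.rpi.yo61.net,192.168.1.141
dhcp-host=b8:27:eb:00:00:02,node02.rpi.yo61.net,192.168.1.142
dhcp-host=b8:27:eb:00:00:03,node03.rpi.yo61.net,192.168.1.143
dhcp-host=b8:27:eb:00:00:04,node04.rpi.yo61.net,192.168.1.144
dhcp-host=b8:27:eb:00:00:05,node05.rpi.yo61.net,192.168.1.145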

On each node, I ensured all packages were up-to-date:

apt-get update && apt-get -y upgrade

I used the kubeadm guide for the following steps.

Add a new repo and install various kubernetes commands on all nodes:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

I designated node01 as the master and ran the following commands on the master only:

Initialise the master:

kubeadm init --pod-network-cidr=10.244.0.0/16

Install flannel networking:

export ARCH=arm
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

Check everything is running OK:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          24m
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          23m
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          23m
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          23m
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          24m
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          22m
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          15m
kube-system   kube-proxy-28edm                              1/1       Running   0          22m
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          23m

Join the other nodes, using the token printed by kubeadm init:

kubeadm join --token=c2a0a6.7dd1d5b1c26795ef 192.168.1.141

Check they've all joined correctly:

$ kubectl get nodes

NAME                  STATUS    AGE
node01.rpi.yo61.net   Ready     27m
node02.rpi.yo61.net   Ready     45s
node03.rpi.yo61.net   Ready     54s
node04.rpi.yo61.net   Ready     16s
node05.rpi.yo61.net   Ready     25s

Install the dashboard:

export ARCH=arm
curl -sSL "https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml" | sed "s/amd64/${ARCH}/g" | kubectl create -f -

You should now have something like this:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   dummy-2501624643-8gpcm                        1/1       Running   0          1h
kube-system   etcd-node01.rpi.yo61.net                      1/1       Running   0          1h
kube-system   kube-apiserver-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kube-controller-manager-node01.rpi.yo61.net   1/1       Running   0          1h
kube-system   kube-discovery-2202902116-j9zjn               1/1       Running   0          1h
kube-system   kube-dns-2334855451-lu3qh                     3/3       Running   0          1h
kube-system   kube-flannel-ds-2g1tr                         2/2       Running   0          1h
kube-system   kube-flannel-ds-a2v7q                         2/2       Running   0          1h
kube-system   kube-flannel-ds-iqrs2                         2/2       Running   0          1h
kube-system   kube-flannel-ds-p32x1                         2/2       Running   0          1h
kube-system   kube-flannel-ds-qhfvc                         2/2       Running   0          1h
kube-system   kube-proxy-0agjm                              1/1       Running   0          1h
kube-system   kube-proxy-28edm                              1/1       Running   0          1h
kube-system   kube-proxy-3w6e8                              1/1       Running   0          1h
kube-system   kube-proxy-fgxxp                              1/1       Running   0          1h
kube-system   kube-proxy-ypzyd                              1/1       Running   0          1h
kube-system   kube-scheduler-node01.rpi.yo61.net            1/1       Running   0          1h
kube-system   kubernetes-dashboard-3507263287-so0mw         1/1       Running   0          1h
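
The dashboard pod is the last entry in that list. To reach the UI, the simplest route is kubectl proxy from a machine with a working kubeconfig, then browse to the /ui endpoint it exposes:

kubectl proxy
# then open http://localhost:8001/ui in a browser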

To be continued…

When using rsync to copy rsnapshot archives, you should use the --hard-links option:

       -H, --hard-links
              This tells rsync to look for hard-linked files in the source
              and link together the corresponding files on the destination.
              Without this option, hard-linked files in the source are
              treated as though they were separate files.
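
For example, copying an rsnapshot tree to a remote backup host might look like this (the paths and hostname are placeholders; --numeric-ids is also worth considering for root-owned backups):

rsync -aH --numeric-ids /srv/rsnapshot/ backup-host:/srv/rsnapshot/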

Thanks to this post: https://www.cyberciti.biz/faq/linux-unix-apple-osx-bsd-rsync-copy-hard-links/

As noted in my previous post, migrating a VM from one SmartOS node to another is pretty straightforward, and is documented on the SmartOS wiki.

One thing that is not mentioned is that if your VM is based on an image, then that image must already be installed/imported on the target host. If it is not, you will see an error like this:

[root@source ~]# vmadm send 0983fe0f-677e-4f54-b5cf-40184d041b6d | ssh smart01 vmadm receive
Invalid value(s) for: image_uuid

For example, on the source machine:

[root@source ~]# vmadm get 0983fe0f-677e-4f54-b5cf-40184d041b6d | grep image_uuid
      "image_uuid": "5e164fac-286d-11e4-9cf7-b3f73eefcd01",

[root@source ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

On the target machine:

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01

[root@target ~]# imgadm import 5e164fac-286d-11e4-9cf7-b3f73eefcd01
Importing 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820) from "https://images.joyent.com"
Gather image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 ancestry
Must download and install 1 image (336.9 MiB)
Download 1 image                            [==============================================================================================>] 100% 337.00MB   4.72MB/s  1m11s
Downloaded image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (336.9 MiB)
zones/5e164fac-286d-11e4-9cf7-b3f73eefcd01  [==============================================================================================>] 100% 337.00MB  25.48MB/s    13s
Imported image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820)

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

You should now be able to successfully perform the migration.

Set up public key ssh auth for the root user (and disable password auth)

  • put the public key(s) in /usbkey/config.inc/authorized_keys
  • add the following lines to /usbkey/config

    config_inc_dir=config.inc
    default_keymap=uk
    root_authorized_keys_file=authorized_keys

  • edit /usbkey/ssh/sshd_config, making sure that the following items are set:

    # Disallow password authentication
    PasswordAuthentication no
    # Permit root login (which should be already set)
    PermitRootLogin yes

  • reboot the server
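
After the reboot, it's worth checking from another machine that key auth works and password auth really is refused; something like this, with the hostname as a placeholder:

ssh -o PasswordAuthentication=no root@smartos-host uname -a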

Migrating a VM from one SmartOS server to another

This is pretty easy, and documented on the SmartOS wiki (although it's marked as "experimental" and is not documented in the man pages).

One gotcha – if you've already disabled password login as per the previous section, you'll need to create a new key pair on the source SmartOS node and copy the public key into /root/.ssh/authorized_keys on the target SmartOS node.

Assuming you can ssh from the source node to the target node, VM migration is as easy as running the following command on the source node:

vmadm send $VM_GUID | ssh $target vmadm receive

This command stops the VM on the source node, and sends it to the target node. You will then need to start the machine on the target node and destroy it on the source node (once you're happy it's working in its new home).
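
Using the same GUID, that's roughly the following (vmadm delete permanently destroys the VM, so double-check the UUID and the node you're on):

vmadm start $VM_GUID   # on the target node
vmadm delete $VM_GUID  # on the source node, once you're happy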

I configured this blog to use a free, automatically-issued Let's Encrypt SSL certificate around 6 months ago.

The command to issue the cert is as follows:

letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com

To check whether an existing certificate will expire within the next 28 days (2419200 seconds), use this command:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem

Put these together and run them from a daily cron job (remembering to restart your web server after the certificate changes), and your cert will automatically renew 28 days before it expires:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem || \
letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com && \
systemctl restart httpd
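
I dropped the combined command into a script and call it from cron; a sketch, with the script name and schedule assumed:

# /etc/cron.d/letsencrypt-renew
30 4 * * * root /usr/local/sbin/renew-blog-cert.sh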

I recently migrated this blog to a new server running CentOS 7 and decided to use php-fpm and mod_proxy_fcgi instead of mod_php. I also like to install WordPress in its own directory and had problems getting the wp-admin sections of the site to work. I figured it out eventually with help from this page: https://wiki.apache.org/httpd/PHPFPMWordpress

This is the complete apache config fragment that defines the vhost, including SSL:

<VirtualHost *:443>
  ServerName blog.yo61.com

  ## Vhost docroot
  DocumentRoot "/var/www/sites/blog.yo61.com/html"

  ## Directories, there should at least be a declaration for /var/www/sites/blog.yo61.com/html

  <Directory "/var/www/sites/blog.yo61.com/html">
    AllowOverride FileInfo
    Require all granted
    DirectoryIndex index.php
    FallbackResource /index.php
  </Directory>

  <Directory "/var/www/sites/blog.yo61.com/html/wordpress/wp-admin">
    AllowOverride None
    Require all granted
    FallbackResource disabled
  </Directory>

  ## Logging
  ErrorLog "/var/log/httpd/blog.yo61.com_https_error_ssl.log"
  ServerSignature Off
  CustomLog "/var/log/httpd/blog.yo61.com_https_access_ssl.log" combined

  ## SSL directives
  SSLEngine on
  SSLCertificateFile      "/etc/letsencrypt/live/blog.yo61.com/cert.pem"
  SSLCertificateKeyFile   "/etc/letsencrypt/live/blog.yo61.com/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/blog.yo61.com/chain.pem"
  SSLCACertificatePath    "/etc/pki/tls/certs"

  ## Custom fragment
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/sites/blog.yo61.com/html/$1
  Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
  Header always set X-Frame-Options DENY
  Header always set X-Content-Type-Options nosniff
</VirtualHost>