When using rsync to copy rsnapshot archives you should use the --hard-links option:

       -H, --hard-links
              This tells rsync to look for hard-linked files in the source and
              link together the corresponding files on the destination.
              Without this option, hard-linked files in the source are treated
              as though they were separate files.

Thanks to this post: https://www.cyberciti.biz/faq/linux-unix-apple-osx-bsd-rsync-copy-hard-links/

As noted in my previous post, migrating a VM from one SmartOS node to another is pretty straightforward, and is documented on the SmartOS wiki.

One thing that is not mentioned is that if your VM is based on an image, then that image must already be installed/imported on the target host. If it is not, you will see an error like this:

[root@source ~]# vmadm send 0983fe0f-677e-4f54-b5cf-40184d041b6d | ssh smart01 vmadm receive
Invalid value(s) for: image_uuid

For example, on the source machine:

[root@source ~]# vmadm get 0983fe0f-677e-4f54-b5cf-40184d041b6d | grep image_uuid
      "image_uuid": "5e164fac-286d-11e4-9cf7-b3f73eefcd01",

[root@source ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

On the target machine:

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01

[root@target ~]# imgadm import 5e164fac-286d-11e4-9cf7-b3f73eefcd01
Importing 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820) from "https://images.joyent.com"
Gather image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 ancestry
Must download and install 1 image (336.9 MiB)
Download 1 image                            [==============================================================================================>] 100% 337.00MB   4.72MB/s  1m11s
Downloaded image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (336.9 MiB)
zones/5e164fac-286d-11e4-9cf7-b3f73eefcd01  [==============================================================================================>] 100% 337.00MB  25.48MB/s    13s
Imported image 5e164fac-286d-11e4-9cf7-b3f73eefcd01 (centos-7@20140820)

[root@target ~]# imgadm list uuid=5e164fac-286d-11e4-9cf7-b3f73eefcd01
UUID                                  NAME      VERSION   OS     TYPE  PUB
5e164fac-286d-11e4-9cf7-b3f73eefcd01  centos-7  20140820  linux  zvol  2014-08-20

You should now be able to successfully perform the migration.

Set up public-key SSH auth for the root user (and disable password auth)

  • put the public key(s) in /usbkey/config.inc/authorized_keys
  • add the following lines to /usbkey/config

    config_inc_dir=config.inc
    default_keymap=uk
    root_authorized_keys_file=authorized_keys

  • edit /usbkey/ssh/sshd_config, making sure that the following items are set:

    # Disallow password authentication
    PasswordAuthentication no
    # Permit root login (which should be already set)
    PermitRootLogin yes

  • reboot the server

Migrating a VM from one SmartOS server to another

This is pretty easy, and documented on the SmartOS wiki (although it's marked as "experimental" and is not documented in the man pages).

One gotcha – if you've already disabled password login as per the previous section, you'll need to create a new key pair on the source SmartOS node and copy the public key into /root/.ssh/authorized_keys on the target SmartOS node.

Assuming you can ssh from the source node to the target node, VM migration is as easy as running the following command on the source node:

vmadm send $VM_GUID | ssh $target vmadm receive

This command stops the VM on the source node, and sends it to the target node. You will then need to start the machine on the target node and destroy it on the source node (once you're happy it's working in its new home).

Picture the scene…

There's a new release of Puppet Enterprise. You download it, run the upgrade in your test environment, run your regression tests, and all looks good. You then upgrade your production master – all looks good. All that remains to be done is to upgrade the puppet agent on all client nodes – all 750 of them.

Now, you could ssh to each node individually and run the PE installer via curl|bash. You could even automate that with pssh, or similar. But there's got to be a better way, right?


This was the position I found myself in earlier this week.

I did some digging and found the puppet_agent module which, on the face of it, is written for just this situation. However, the module deliberately won't upgrade an agent that is already running v4.x.x unless a specific package version is passed in. Also, by default, it creates a new yum repo file pointing at the upstream Puppet repos, which is unnecessary on PE installs: the agent packages are already present on the master, available at https://<PUPPET_MASTER>:8140/packages/<PE_VERSION>/<OS+ARCH>/. In fact, the PE install process creates a yum config pointing at this repo, but that config is not updated when the master is upgraded.

So, to summarise, I need to solve two issues:

  1. Create a yum config pointing at the new agent software on the master
  2. Pass the specific package version to the puppet_agent class.

I noticed that on the puppet master, under packages, in addition to the versioned directories there was a current symlink pointing to the, er, "current" version of the agent. I also noticed a top-level fact called platform_tag that gives the <OS+ARCH> combination. That gave me enough information to create a repo config that will always point at the "current" agent software on the master.

Digging into the puppet_agent class, I found that it uses the PE function pe_compiling_server_aio_build() to get the agent version available on the master. That gave me all the information I needed.

I wrote the following code in my profile::puppet_agent class, which is applied to all nodes:

  yumrepo { 'pe_repo':
    ensure    => present,
    baseurl   => "https://${::puppet_master_server}:8140/packages/current/${::platform_tag}",
    descr     => 'Puppet Labs PE Packages $releasever - $basearch',
    enabled   => 1,
    gpgcheck  => 1,
    gpgkey    => "https://${::puppet_master_server}:8140/packages/GPG-KEY-puppetlabs",
    proxy     => '_none_',
    sslverify => false,
  }->
  class{ '::puppet_agent':
    manage_repo     => false,
    package_version => pe_compiling_server_aio_build(),
  }
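On a client node, the yumrepo resource above materialises as a repo file along these lines (the master hostname and platform tag here are illustrative):

```ini
# /etc/yum.repos.d/pe_repo.repo (illustrative values)
[pe_repo]
name=Puppet Labs PE Packages $releasever - $basearch
baseurl=https://puppet.example.com:8140/packages/current/el-7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://puppet.example.com:8140/packages/GPG-KEY-puppetlabs
proxy=_none_
sslverify=false
```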

As if by magic, all my client nodes were upgraded to the latest agent software.

I configured this blog to use a free, automatically-issued Let's Encrypt SSL certificate around 6 months ago.

The command to issue the cert is as follows:

letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com

To check whether an existing certificate will expire within the next 28 days (2419200 seconds), use this command:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem

Put these together and run them from a daily cron job (remembering to restart your web server after changing the certificate), and your cert will automatically renew 28 days before it expires.

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem || {
  letsencrypt-auto certonly \
    -a webroot \
    --webroot-path /var/www/sites/blog.yo61.com/html/ \
    -d blog.yo61.com \
    --agree-tos \
    --email robin.bowes@example.com && \
  systemctl restart httpd
}

I recently migrated this blog to a new server running CentOS 7 and decided to use php-fpm and mod_proxy_fcgi instead of mod_php. I also like to install WordPress in its own directory and had problems getting the wp-admin sections of the site to work. I figured it out eventually with help from this page: https://wiki.apache.org/httpd/PHPFPMWordpress

This is the complete apache config fragment that defines the vhost, including SSL:

<VirtualHost *:443>
  ServerName blog.yo61.com

  ## Vhost docroot
  DocumentRoot "/var/www/sites/blog.yo61.com/html"

  ## Directories, there should at least be a declaration for /var/www/sites/blog.yo61.com/html

  <Directory "/var/www/sites/blog.yo61.com/html">
    AllowOverride FileInfo
    Require all granted
    DirectoryIndex index.php
    FallbackResource /index.php
  </Directory>

  <Directory "/var/www/sites/blog.yo61.com/html/wordpress/wp-admin">
    AllowOverride None
    Require all granted
    FallbackResource disabled
  </Directory>

  ## Logging
  ErrorLog "/var/log/httpd/blog.yo61.com_https_error_ssl.log"
  ServerSignature Off
  CustomLog "/var/log/httpd/blog.yo61.com_https_access_ssl.log" combined

  ## SSL directives
  SSLEngine on
  SSLCertificateFile      "/etc/letsencrypt/live/blog.yo61.com/cert.pem"
  SSLCertificateKeyFile   "/etc/letsencrypt/live/blog.yo61.com/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/blog.yo61.com/chain.pem"
  SSLCACertificatePath    "/etc/pki/tls/certs"

  ## Custom fragment
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/sites/blog.yo61.com/html/$1
  Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
  Header always set X-Frame-Options DENY
  Header always set X-Content-Type-Options nosniff
</VirtualHost>

We've all run into problems like this:

$ echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

The command fails because the target file is only writeable by root. The fix seems obvious and easy:

$ sudo echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

Huh? It still fails. What gives? The redirection is performed by your own, unprivileged shell before sudo ever runs echo, so the target file is still opened without root privileges. The solution is to perform the redirection under sudo too. There are several ways to do this:

echo 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs' | sudo sh
sudo sh -c 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs'
echo 12000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs

This is fine for simple commands, but what if you have a complex command that already includes quotes and shell meta-characters?

Here's what I use for that:

sudo su <<\EOF
echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
EOF

Note that the backslash before EOF is important: it stops the outer shell from expanding variables and other meta-characters inside the heredoc, so they reach the root shell verbatim.
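A quick way to see the difference between quoted and unquoted heredoc delimiters (cat stands in for sudo su here):

```shell
# Quoted delimiter: the body is passed through verbatim
cat <<\EOF
echo $HOME
EOF
# prints: echo $HOME

# Unquoted delimiter: the outer shell expands $HOME before cat sees it
cat <<EOF
echo $HOME
EOF
```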

Finally, here's an example of a command for which I needed to use this technique:

sudo sh  << \EOF
perl -n -e '
use strict;
use warnings;
if (/^([^=]*=)([^\$]*)(.*)/) {
  my $pre = $1;
  my $path = $2;
  my $post = $3;
  (my $newpath = $path) =~ s/usr/usr\/local/;
  $newpath =~ s/://g;
  print "$pre$newpath:$path$post\n"
}
else {
  print
}
' < /opt/rh/ruby193/enable > /opt/rh/ruby193/enable.new
EOF

Volcane recently asked in ##infra-talk on Freenode if anyone knew of "some little tool that can be used in a cronjob for example to noop the the real task if say load avg is high or similar?"

I came up with the idea to use nagios plugins. So, for example, to check load average before running a task:

/usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && echo "Run the task here"

Substitute the values used for the -w and -c args as appropriate, or use a different plugin for different conditions.
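For instance, a crontab entry that gates a nightly job this way might look like the following (the backup script path is a made-up placeholder):

```
# Run the backup at 02:30, but only if load is below the thresholds
30 2 * * * /usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && /usr/local/bin/backup.sh
```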