I configured this blog to use a free, automatically-issued Let's Encrypt SSL certificate around 6 months ago.

The command to issue the cert is as follows:

letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com

To check whether an existing certificate will expire within the next 28 days (2419200 seconds), use this command:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem
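The useful part here is the exit status: -checkend N exits 0 if the certificate will still be valid N seconds from now, and non-zero otherwise. You can see this with a throwaway self-signed certificate (a sketch; the /tmp paths are just for illustration):

```shell
# Create a self-signed cert valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -days 30 -keyout /tmp/test.key -out /tmp/test.crt 2>/dev/null

# Still valid a day from now: exit status 0
openssl x509 -checkend 86400 -noout -in /tmp/test.crt \
  && echo "not expiring within a day"

# Will have expired 60 days from now: non-zero exit status
openssl x509 -checkend 5184000 -noout -in /tmp/test.crt \
  || echo "expires within 60 days"
```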

Put these together, run them from a daily cron job (remembering to restart your web server after the certificate changes), and your cert will automatically renew 28 days before it expires:

openssl x509 \
  -checkend 2419200 \
  -noout \
  -inform pem \
  -in /etc/letsencrypt/live/blog.yo61.com/cert.pem || \
letsencrypt-auto certonly \
  -a webroot \
  --webroot-path /var/www/sites/blog.yo61.com/html/ \
  -d blog.yo61.com \
  --agree-tos \
  --email robin.bowes@example.com && \
systemctl restart httpd
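To schedule it, drop the pipeline into a script and call it from cron. A sketch (the script path, log path, and timing are arbitrary):

```
# /etc/cron.d/letsencrypt-renew (hypothetical file)
# Run the check-and-renew script once a day at 03:17
17 3 * * * root /usr/local/bin/renew-cert.sh >> /var/log/letsencrypt-renew.log 2>&1
```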

I recently migrated this blog to a new server running CentOS 7 and decided to use php-fpm and mod_proxy_fcgi instead of mod_php. I also like to install WordPress in its own directory and had problems getting the wp-admin sections of the site to work. I figured it out eventually with help from this page: https://wiki.apache.org/httpd/PHPFPMWordpress

This is the complete apache config fragment that defines the vhost, including SSL:

<VirtualHost *:443>
  ServerName blog.yo61.com

  ## Vhost docroot
  DocumentRoot "/var/www/sites/blog.yo61.com/html"

  ## Directories, there should at least be a declaration for /var/www/sites/blog.yo61.com/html

  <Directory "/var/www/sites/blog.yo61.com/html">
    AllowOverride FileInfo
    Require all granted
    DirectoryIndex index.php
    FallbackResource /index.php
  </Directory>

  <Directory "/var/www/sites/blog.yo61.com/html/wordpress/wp-admin">
    AllowOverride None
    Require all granted
    FallbackResource disabled
  </Directory>

  ## Logging
  ErrorLog "/var/log/httpd/blog.yo61.com_https_error_ssl.log"
  ServerSignature Off
  CustomLog "/var/log/httpd/blog.yo61.com_https_access_ssl.log" combined

  ## SSL directives
  SSLEngine on
  SSLCertificateFile      "/etc/letsencrypt/live/blog.yo61.com/cert.pem"
  SSLCertificateKeyFile   "/etc/letsencrypt/live/blog.yo61.com/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/blog.yo61.com/chain.pem"
  SSLCACertificatePath    "/etc/pki/tls/certs"

  ## Custom fragment
  ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/sites/blog.yo61.com/html/$1
  Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
  Header always set X-Frame-Options DENY
  Header always set X-Content-Type-Options nosniff
</VirtualHost>

We've all run into problems like this:

$ echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

The command fails because the target file is only writeable by root. The fix seems obvious and easy:

$ sudo echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

Huh? It still fails. What gives? It fails because the calling shell sets up the redirection, with your privileges, before sudo ever runs the command. The solution is to run the whole pipeline under sudo. There are several ways to do this:

echo 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs' | sudo sh
sudo sh -c 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs'
echo 12000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs

This is fine for simple commands, but what if you have a complex command that already includes quotes and shell meta-characters?

Here's what I use for that:

sudo su <<\EOF
echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
EOF

Note that the backslash before EOF is important: it prevents the calling shell from expanding variables and other meta-characters inside the here-document.
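The effect of the backslash is easy to demonstrate without sudo (a minimal sketch using plain sh):

```shell
name=caller

# Unquoted delimiter: the calling shell expands $name before sh ever runs
sh <<EOF
name=inner
echo "unquoted: $name"
EOF
# prints "unquoted: caller"

# Backslash-quoted delimiter: the here-document is passed through verbatim
sh <<\EOF
name=inner
echo "quoted: $name"
EOF
# prints "quoted: inner"
```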

Finally, here's an example of a command for which I needed to use this technique:

sudo sh << \EOF
perl -n -e '
use strict;
use warnings;
if (/^([^=]*=)([^\$]*)(.*)/) {
  my $pre = $1;
  my $path = $2;
  my $post = $3;
  (my $newpath = $path) =~ s/usr/usr\/local/;
  $newpath =~ s/://g;
  print "$pre$newpath:$path$post\n"
}
else {
  print
}
' < /opt/rh/ruby193/enable > /opt/rh/ruby193/enable.new
EOF

Volcane recently asked in ##infra-talk on Freenode if anyone knew of "some little tool that can be used in a cronjob for example to noop the real task if say load avg is high or similar?"

I came up with the idea to use nagios plugins. So, for example, to check load average before running a task:

/usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && echo "Run the task here"

Substitute the values used for the -w and -c args as appropriate, or use a different plugin for different conditions.
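If the Nagios plugins aren't installed, the same guard can be approximated in plain shell by reading Linux's /proc/loadavg (a sketch; the threshold is arbitrary):

```shell
#!/bin/sh
# Run the task only if the 1-minute load average is below max_load
max_load=4.0
load=$(cut -d' ' -f1 /proc/loadavg)

# awk does the floating-point comparison; it exits 0 when l < m
if awk -v l="$load" -v m="$max_load" 'BEGIN { exit !(l < m) }'; then
  echo "load $load is below $max_load - running the task"
else
  echo "load $load is too high - skipping"
fi
```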

I recently needed to install uwsgi on EL7 (CentOS 7, actually, but RHEL 7 will be the same).

I ended up rebuilding the uwsgi SRPM from Fedora 21, which was relatively straightforward but required a few tweaks to the .spec file. I also had to build a chain of dependencies: mongodb, perl-Coro, libecb, perl-EV, libev, zeromq, perl-BDB, perl-AnyEvent-BDB, perl-AnyEvent-AIO.

All packages (including SRPMs) are in my repo: http://repo.yo61.net/el/7/

In his talk at Puppetconf 2013, James Fryman mentioned a blog post by James White which contains a list of guidelines for management which has come to be known as the jameswhite manifesto.

Here’s the same list but unconstrained by a fixed-width text box so you can actually read it. 🙂

Rules

On Infrastructure

  • There is one system, not a collection of systems.
  • The desired state of the system should be a known quantity.
  • The “known quantity” must be machine parseable.
  • The actual state of the system must self-correct to the desired state.
  • The only authoritative source for the actual state of the system is the system.
  • The entire system must be deployable using source media and text files.

On Buying Software

  • Keep the components in the infrastructure simple so it will be better understood.
  • All products must authenticate and authorize from external, configurable sources.
  • Use small tools that interoperate well, not one “do everything poorly” product.
  • Do not implement any product that no one in your organization has administered.
  • “Administered” does not mean saw it in a rigged demo, online or otherwise.
  • If you must deploy the product, hire someone who has implemented it before to do so.

On Automation

  • Do not author any code you would not buy.
  • Do not implement any product that does not provide an API.
  • The provided API must have all functionality that the application provides.
  • The provided API must be tailored to more than one language and platform.
  • Source code counts as an API, and may be restricted to one language or platform.
  • The API must include functional examples and must not require someone to be an expert on the product to use it.
  • Do not use any product with configurations that are not machine parseable and machine writeable.
  • All data stored in the product must be machine readable and writeable by applications other than the product itself.
  • Writing hacks around the deficiencies in a product should be less work than writing the product’s functionality.

In general

  • Keep the disparity in your architecture to an absolute minimum.
  • Use Set Theory to accomplish this.
  • Do not improve manual processes if you can automate them instead.
  • Do not buy software that requires bare-metal.
  • Manual data transfers and datastores maintained manually are to be avoided.

I recently offered to help out with the hosting of a WordPress site. It’s currently hosted somewhere with no shell access – just ftp – and there are a lot of images to transfer.

I quickly figured out I could use wget to mirror the site, using something like:

wget -m ftp://username:password@example.com

However, this broke here because the username for the site contained an @ character (the username was user@example.com).

It turns out the solution was to percent-encode the special characters (URL encoding: @ becomes %40). This is the command that did the trick:

wget -m ftp://user%40example.com:password@example.com
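If you don't want to encode by hand, you can generate the encoded form in the shell (a sketch shelling out to python3, which is an assumption about what's installed):

```shell
# Percent-encode every reserved character in the username
user='user@example.com'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$user")
echo "$encoded"    # prints: user%40example.com
# then: wget -m "ftp://$encoded:password@example.com"
```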