I'm a big fan of provisioning tools, particularly Puppet.

Sometimes I just want to quickly throw a clean install on a new machine that I can then use to provision other machines (and even to re-configure the puppetmaster).

So, I wrote a script to do just that. The only requirement is a minimal install of your favourite CentOS/Red Hat/Fedora OS and the script will do the rest.

It's available from github: https://github.com/robinbowes/puppet-server-bootstrap
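Usage is roughly along these lines; note that the entry-point script name below is my assumption, so check the repo's README for the authoritative instructions:

```shell
# Sketch only: run on a minimal CentOS/RHEL/Fedora install.
# The bootstrap.sh name is assumed -- see the repo's README.
git clone https://github.com/robinbowes/puppet-server-bootstrap.git
cd puppet-server-bootstrap
sudo ./bootstrap.sh
```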

I recently offered to help out with the hosting of a WordPress site. It’s currently hosted somewhere with no shell access – just FTP – and there are a lot of images to transfer.

I quickly figured out I could use wget to mirror the site, using something like:

wget -m ftp://username:password@example.com

However, this broke in this case because the username for the site contained an @ character (the username was user@example.com).

Turns out the solution was to percent-encode the special characters (URL encoding – %40 for @ – rather than HTML entities). This is the command that did the trick:

wget -m ftp://user%40example.com:password@example.com
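If the username or password contains other reserved characters, the same rule applies: replace each one with % followed by its hex value. A quick way to generate the encoded form (a sketch using python3, which wasn't part of the original workflow):

```shell
# Percent-encode a username for safe embedding in an FTP URL.
# safe='' forces characters like @ and : to be encoded as well.
python3 -c "import urllib.parse; print(urllib.parse.quote('user@example.com', safe=''))"
# user%40example.com
```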

This is one of those “dead easy so why so hard” issues.

I use Chrome on Fedora 18 on my home desktop. I have put up with a non-working Java plugin for some time (to be honest, I’ve not been too bothered, given Java’s history of security issues).

Here’s how to enable the Java plugin under Chrome on Fedora 18 using IcedTea (OpenJDK).

sudo yum install icedtea-web
sudo mkdir -p /usr/lib64/firefox/plugins
sudo ln -s /usr/lib64/IcedTeaPlugin.so /usr/lib64/firefox/plugins/libjavaplugin.so

Now restart Chrome and visit a Java test page to verify that the plugin works.

I wanted to create a full-disk partition, with optimal alignment, on a 4TB disk under CentOS 6.4 and use it as an LVM PV.

fdisk doesn’t work on disks larger than 2TB (MBR partition tables can’t address beyond that), so I used parted with a GPT label:

parted -a optimal /dev/sda
(parted) mklabel gpt
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) mkpart primary ext2 0% 100%
(parted) set 1 lvm on
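With the partition created and the LVM flag set, turning it into a PV is the usual LVM dance (the volume group name here is illustrative):

```shell
pvcreate /dev/sda1           # initialise the partition as an LVM physical volume
vgcreate vg_data /dev/sda1   # create a volume group on it (vg_data is illustrative)
pvs -o +pe_start             # verify size and that the data area is aligned
```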

Some time ago, I wrote up how I created RPMs for ruby gems to simplify installation on EL-flavoured distributions. In the comments for that article, Jordan Sissel pointed me at his fpm tool which I said I’d check out if I ever needed to build any more rubygem RPMs.

Well, that time has come. I wanted to deploy a later version of capistrano across a client’s infrastructure and my previous approach didn’t work so I grabbed fpm and did this:

mkdir ~/tmp/gems
cd ~/tmp/gems
gem install --no-ri --no-rdoc --install-dir . capistrano
find ./cache -name '*.gem' | xargs -rn1 fpm -s gem -t rpm
ls *.rpm
rubygem-capistrano-2.15.4-1.noarch.rpm	rubygem-net-scp-1.1.0-1.noarch.rpm   rubygem-net-ssh-2.6.7-1.noarch.rpm
rubygem-highline-1.6.19-1.noarch.rpm	rubygem-net-sftp-2.1.2-1.noarch.rpm  rubygem-net-ssh-gateway-1.2.0-1.noarch.rpm

Nice and easy. Kudos whack!
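Deployment is then just a matter of installing the RPMs on the target hosts, e.g. (assuming they've been copied over or pushed to a yum repo):

```shell
# Install all the generated rubygem-* RPMs in one go,
# then confirm capistrano landed.
sudo yum -y localinstall *.rpm
rpm -q rubygem-capistrano
```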

My Note 2 recently started locking up/freezing, apparently requiring a power-off to fix it.

Thanks to this post, I discovered that this seems to be a "known" problem with the eMMC chip, which is susceptible to "Sudden Death Syndrome".

There is an app to determine if your phone has the chip that is affected, and another app to write data to every area of the chip to "fix" the issue.

My phone now appears to be back to normal.

We have app servers with smallish local file systems and application data mounted over NFS.

Sometimes I want to find all files matching a particular set of criteria but don't want to traverse the NFS mounts.

Here's how to do it:

find / -group sophosav -print -o -fstype nfs -prune

Ordering is important, as is the explicit inclusion of -print. If you omit it, find will print the names of the NFS mount points as well.

Change start location (/) and criteria (-group sophosav) to suit your own purposes.
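The same pattern generalises: put the prune expression in one -o branch and give the real test its own -print. A sketch pruning several network filesystem types at once, demonstrated on a throwaway directory so it's safe to run anywhere (substitute / and your own criteria in real use):

```shell
# Throwaway directory standing in for / so this is safe to run.
tmp=$(mktemp -d)
touch "$tmp/match.txt"
# The prune branch handles nfs and autofs mounts; the -o branch
# carries the real criteria and its own explicit -print.
find "$tmp" \( -fstype nfs -o -fstype autofs \) -prune -o -name 'match.txt' -print
rm -rf "$tmp"
```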

I was experiencing problems with dnscache not resolving certain domains. On inspection, it turned out to be akamai-hosted domains that were failing. A quick google turned up this thread from 2004 (!), and a little further digging turned up this patch.

I tweaked the patch a little to set QUERY_MAXLOOP to 1000 (original value: 100, value in patch: 160), and rebuilt.

All works just fine now:


[robin@dist ~]$ env DNSCACHEIP= dnsqr A www.cisco.com
1 www.cisco.com:
212 bytes, 1+5+0+0 records, response, noerror
query: 1 www.cisco.com
answer: www.cisco.com 0 CNAME www.cisco.com.akadns.net
answer: www.cisco.com.akadns.net 0 CNAME wwwds.cisco.com.edgekey.net
answer: wwwds.cisco.com.edgekey.net 0 CNAME wwwds.cisco.com.edgekey.net.globalredir.akadns.net
answer: wwwds.cisco.com.edgekey.net.globalredir.akadns.net 0 CNAME e144.dscb.akamaiedge.net
answer: e144.dscb.akamaiedge.net 12 A

This is one of those things that goes to show: it's easy if you know how.

I've got a ZFS-based file server (currently running SmartOS) which exports NFSv4 shares. OS X can connect to NFS shares using "Connect to Server" in the Finder, using a syntax like this:


I'd previously tried to use this on my MacBook Pro but never managed to get it to work in a stable fashion.

Then, this evening, I stumbled across the solution:


That's all there is to it – I now have stable NFSv4 connections from my Mac!

The installation of hbase on CentOS is fairly painless thanks to those generous folks at Cloudera. Add their CDH4 repository and you're there: yum install hbase.

However, adding lzo compression for hbase is a little more tricky. There are a few guides describing how to checkout from github, build the extension, and copy the resulting libraries into the right place, but I want a nice, simple RPM package to deploy.

Enter the hadoop-lzo-packager project on github. Let's try and use this to build an RPM I can use to install lzo support for hbase.

Get the source code:

git clone git://github.com/toddlipcon/hadoop-lzo-packager.git

Install the deps:

yum install lzo-devel ant ant-nodeps gcc-c++ rpm-build java-devel

Build the RPMs:

cd hadoop-lzo-packager
export JAVA_HOME=/usr/lib/jvm/java
./run.sh --no-debs

Et voila – cloudera-hadoop-lzo RPMS ready for installation. But wait… The libs get installed to /usr/lib/hadoop-0.20… That's no good, I want them in /usr/lib/hbase.

So I went ahead & hacked run.sh and template.spec to allow the install dir on the target machine to be specified on the command-line. I can now use a command line something like this:

./run.sh --name hbase-lzo --install-dir /usr/lib/hbase --no-deb

That produces a set of RPMs (binary, source, and debuginfo) with the base name hbase-lzo and the libraries installed to /usr/lib/hbase.

My changes (plus another small change adding the necessary BuildRequires to the RPM spec template) are in my fork of the project on GitHub.
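Installing the result on a target host is then straightforward (the exact package file name depends on the version the build produces, so the glob below is an assumption):

```shell
# Install the freshly built lzo support package for hbase.
sudo yum -y localinstall hbase-lzo-*.rpm
# Confirm the native library landed where hbase expects it.
ls /usr/lib/hbase
```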