We have app servers with smallish local file systems and application data mounted over NFS.

Sometimes I want to find all files matching a particular set of criteria but don't want to traverse the NFS mounts.

Here's how to do it:

find / -group sophosav -print -o -fstype nfs -prune

Ordering is important, as is the explicit -print. If you omit it, the names of the NFS mount points are printed as well, because the implicit -print then applies to the whole expression, including the pruned directories (for which -prune returns true).

Change start location (/) and criteria (-group sophosav) to suit your own purposes.
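The print/prune ordering is easy to see with a local directory standing in for the NFS mount. Here's a self-contained sketch using -path instead of -fstype nfs (same principle; the paths are made up for the demo):

```shell
# Demo tree: one directory we care about, one standing in for an NFS mount
mkdir -p /tmp/prunedemo/local /tmp/prunedemo/remote
touch /tmp/prunedemo/local/keep.txt /tmp/prunedemo/remote/skip.txt

# Without an explicit -print, the pruned directory is listed too,
# because the implicit -print covers the whole expression:
find /tmp/prunedemo -name '*.txt' -o -path '*/remote' -prune

# With -print attached to the criteria, only the real matches appear:
find /tmp/prunedemo -name '*.txt' -print -o -path '*/remote' -prune

rm -rf /tmp/prunedemo
```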

I was experiencing problems with dnscache not resolving certain domains. On inspection, it turned out to be akamai-hosted domains that were failing. A quick google turned up this thread from 2004 (!), and a little further digging turned up this patch.

I tweaked the patch a little to set QUERY_MAXLOOP to 1000 (original value: 100, value in patch: 160), and rebuilt.
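For reference, the tweak boils down to a one-line change to the djbdns source before rebuilding. A sketch of the idea, run here against a stand-in copy of the header (in djbdns-1.05 the constant is defined in query.h; the .demo filename is just for the demo):

```shell
# Stand-in for djbdns-1.05/query.h, which holds the real definition
printf '#define QUERY_MAXLOOP 100\n' > query.h.demo

# Raise the loop limit from 100 to 1000
sed -i 's/^#define QUERY_MAXLOOP .*/#define QUERY_MAXLOOP 1000/' query.h.demo

cat query.h.demo
```

After editing the real header, rebuild and reinstall with make followed by make setup check, then restart dnscache.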

All works just fine now:

[robin@dist ~]$ env DNSCACHEIP=192.168.1.90 dnsqr A www.cisco.com
1 www.cisco.com:
212 bytes, 1+5+0+0 records, response, noerror
query: 1 www.cisco.com
answer: www.cisco.com 0 CNAME www.cisco.com.akadns.net
answer: www.cisco.com.akadns.net 0 CNAME wwwds.cisco.com.edgekey.net
answer: wwwds.cisco.com.edgekey.net 0 CNAME wwwds.cisco.com.edgekey.net.globalredir.akadns.net
answer: wwwds.cisco.com.edgekey.net.globalredir.akadns.net 0 CNAME e144.dscb.akamaiedge.net
answer: e144.dscb.akamaiedge.net 12 A 2.19.144.170

This is one of those things that goes to show: it's easy if you know how.

I've got a ZFS-based file server (currently running SmartOS) which exports NFSv4 shares. OS X can connect to NFS shares using "Connect to Server" in the Finder, with a syntax like this:

nfs://nas.example.com/share_name

I'd previously tried this on my MacBook Pro but never managed to get it to work in a stable fashion.

Then, this evening, I stumbled across the solution:

nfs://vers=4,nas.example.com/share_name

That's all there is to it – I now have stable NFSv4 connections from my Mac!
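If you'd rather not visit "Connect to Server" every time, the same vers=4 option can go into an automounter map. A sketch, assuming the stock macOS autofs setup (host, share, and mount point are placeholders; adjust to taste):

```
# /etc/auto_master — add a direct map:
/-   auto_nfs

# /etc/auto_nfs — one line per share:
/mnt/share_name   -fstype=nfs,vers=4   nas.example.com:/share_name
```

Then reload the maps with sudo automount -vc.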

The installation of hbase on CentOS is fairly painless thanks to those generous folks at Cloudera. Add their CDH4 repository and you're there: yum install hbase.

However, adding lzo compression for hbase is a little more tricky. There are a few guides describing how to checkout from github, build the extension, and copy the resulting libraries into the right place, but I want a nice, simple RPM package to deploy.

Enter the hadoop-lzo-packager project on github. Let's try and use this to build an RPM I can use to install lzo support for hbase.

Get the source code:

git clone git://github.com/toddlipcon/hadoop-lzo-packager.git

Install the deps:

yum install lzo-devel ant ant-nodeps gcc-c++ rpm-build java-devel

Build the RPMs:

cd hadoop-lzo-packager
export JAVA_HOME=/usr/lib/jvm/java
./run.sh --no-debs

Et voila – cloudera-hadoop-lzo RPMs ready for installation. But wait… the libs get installed to /usr/lib/hadoop-0.20… That's no good: I want them in /usr/lib/hbase.

So I went ahead & hacked run.sh and template.spec to allow the install dir on the target machine to be specified on the command-line. I can now use a command line something like this:

./run.sh --name hbase-lzo --install-dir /usr/lib/hbase --no-deb

That produces a set of RPMs (binary, source, and debuginfo) with the base name hbase-lzo and libraries installed to /usr/lib/hbase.

My changes (plus another small change adding the necessary BuildRequires to the RPM spec template) are in my fork of the project on GitHub.
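For the curious, the hack amounts to teaching run.sh a couple of extra flags and substituting their values into template.spec. This isn't the actual code from the fork, just a sketch of the option-parsing idea:

```shell
# Hypothetical sketch of the flag handling added to run.sh
parse_args() {
  NAME=cloudera-hadoop-lzo          # default package base name
  INSTALL_DIR=/usr/lib/hadoop-0.20  # default library location
  while [ $# -gt 0 ]; do
    case "$1" in
      --name) NAME="$2"; shift 2 ;;
      --install-dir) INSTALL_DIR="$2"; shift 2 ;;
      *) shift ;;                   # other flags handled elsewhere
    esac
  done
  echo "building $NAME with libraries in $INSTALL_DIR"
}

parse_args --name hbase-lzo --install-dir /usr/lib/hbase
```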

I wanted to set a field on a MySQL table to one of 4 values for testing purposes. Let's say I want to set the "pet" field to one of {cat,dog,rabbit,hamster}.

First, add a new field to the table:

alter table test add column `id` int(10) unsigned auto_increment unique key;

Now insert each of the four values:

update test set pet = 'cat' where MOD(id, 4) = 1;
update test set pet = 'dog' where MOD(id+3, 4) = 1;
update test set pet = 'rabbit' where MOD(id+2, 4) = 1;
update test set pet = 'hamster' where MOD(id+1, 4) = 1;

Finally, drop the additional field:

alter table test drop column `id`;
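To sanity-check the arithmetic: the four MOD conditions are disjoint and partition the ids into a repeating cat/dog/rabbit/hamster cycle. The same logic, worked through in awk for ids 1–8:

```shell
seq 1 8 | awk '{
  id = $1
  if (id % 4 == 1)            pet = "cat"      # ids 1, 5, ...
  else if ((id + 3) % 4 == 1) pet = "dog"      # ids 2, 6, ...
  else if ((id + 2) % 4 == 1) pet = "rabbit"   # ids 3, 7, ...
  else                        pet = "hamster"  # ids 4, 8, ...
  print id, pet
}'
```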

I'm always interested in hearing better or alternative ways to do this sort of thing.

When you need to talk to a device over a serial console, a couple of possibilities are screen and minicom.

screen /dev/ttyS0 19200

Or in my case (with a Keyspan USB serial adapter):

screen /dev/tty.KeySerial1

minicom is available from Homebrew: brew install minicom.

I've not used it for a while, and it didn't work at my first attempt – it probably needs some configuration.

When I began using puppet, I quickly realised that configuration data was best kept separate from puppet manifests. Initially, I used extlookup and kept configuration data in CSV files. Then complex data structures came to puppet, and I now use hiera/hiera-puppet with configuration data stored in hierarchical YAML files (other hiera backends are available). This article describes how to define, in YAML, the resources that should be applied to a node.
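To give a flavour of what that looks like, hiera data for a node might be something like this (a hypothetical example; the path, class names, and parameters are made up):

```yaml
# hieradata/nodes/web01.example.com.yaml (hypothetical path)
---
classes:
  - ntp
  - ssh::server
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
```

With a call such as hiera_include('classes') in site.pp, each node then picks up whatever classes its YAML file lists.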
