The team at bitly has written an HTTP reverse proxy that provides authentication using Google's OAuth2 API. They write about it in a blog post.

The proxy is written in Go but builds to a single, statically linked executable, i.e. there are no complex run-time dependencies, which is great.

I've built an RPM for EL7 which also includes a sample systemd unit file and a sample configuration file. Both source and binary RPMs are available in my yum repo.
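
With the repo configured, installing and enabling it should be the usual routine. A minimal sketch (the package and service names below are placeholders; check the repo for the real ones):

yum install google-auth-proxy
systemctl enable google-auth-proxy
systemctl start google-auth-proxy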

Additionally, I've created a puppet module that installs the RPM, creates a systemd service, and sets up an nginx front end to the proxy service. The module is available from the Puppet Forge, and also on GitHub.
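
Installing from the Forge is a one-liner (the module name below is a placeholder; use the name from the Forge listing):

puppet module install example-google_auth_proxy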

I'd be interested in any feedback/comments/bug reports/pull requests.

I'm setting up a new puppet master running under Passenger on CentOS 7, using packages from the puppetlabs and foreman repos. I used a fork of Stephen Johnson's puppet module to set everything up (with puppet apply). All went swimmingly, except that I would see this error in the logs the first time the puppet master app loaded (i.e. the first time it received a request):

[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] *** Phusion Passenger: no passenger_native_support.so found for the current Ruby interpreter. Compiling one (set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)...
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # mkdir -p /usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/locations.ini/buildout/ruby/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Not a valid directory. Trying a different one...
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] -------------------------------
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # mkdir -p /var/lib/puppet/.passenger/native_support/4.0.18/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # cd /var/lib/puppet/.passenger/native_support/4.0.18/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # /usr/bin/ruby '/usr/share/gems/gems/passenger-4.0.18/ruby_extension_source/extconf.rb'
[ 2014-11-07 23:22:13.3048 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] /usr/bin/ruby: No such file or directory -- /usr/share/gems/gems/passenger-4.0.18/ruby_extension_source/extconf.rb (LoadError)
[ 2014-11-07 23:22:13.3156 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Compilation failed.
[ 2014-11-07 23:22:13.3156 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] -------------------------------
[ 2014-11-07 23:22:13.3157 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Ruby native_support extension not loaded. Continuing without native_support.

I double-checked, and I do have the native libs installed – they're in the rubygem-passenger-native-libs RPM – and the main library is at /usr/lib64/gems/ruby/passenger-4.0.18/native/passenger_native_support.so.

Digging into the Passenger code, I found that it tries to load the native libs by doing:

require 'native/passenger_native_support'

If I hacked this to:

require '/usr/lib64/gems/ruby/passenger-4.0.18/native/passenger_native_support'

then it loaded correctly.

It seems that /usr/lib64/gems/ruby/passenger-4.0.18 is not in the Ruby load path.

Additional directories can be added to the Ruby load path by setting the RUBYLIB environment variable.
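
The effect is easy to see from a shell, since RUBYLIB entries are prepended to Ruby's $LOAD_PATH. A quick sanity check, using the path from above:

# Without RUBYLIB, the Passenger native directory is absent from the load path:
ruby -e 'puts $LOAD_PATH' | grep passenger

# With RUBYLIB set, the directory appears at the front of the load path:
RUBYLIB=/usr/lib64/gems/ruby/passenger-4.0.18 ruby -e 'puts $LOAD_PATH' | grep passenger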

To set RUBYLIB for the Apache process, I added the following line to /etc/sysconfig/httpd and restarted Apache:

RUBYLIB=/usr/lib64/gems/ruby/passenger-4.0.18

The Passenger native libraries now load correctly.
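
For reference, the reason this works without an export is that on EL7 the httpd unit file pulls in /etc/sysconfig/httpd as a systemd EnvironmentFile, so plain KEY=value lines end up in the httpd process environment:

grep EnvironmentFile /usr/lib/systemd/system/httpd.service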

The installation of HBase on CentOS is fairly painless thanks to those generous folks at Cloudera. Add their CDH4 repository and you're there: yum install hbase.

However, adding LZO compression for HBase is a little more tricky. There are a few guides describing how to check out from GitHub, build the extension, and copy the resulting libraries into the right place, but I want a nice, simple RPM package to deploy.

Enter the hadoop-lzo-packager project on GitHub. Let's try to use this to build an RPM I can use to install LZO support for HBase.

Get the source code:

git clone git://github.com/toddlipcon/hadoop-lzo-packager.git

Install the deps:

yum install lzo-devel ant ant-nodeps gcc-c++ rpm-build java-devel

Build the RPMs:

cd hadoop-lzo-packager
export JAVA_HOME=/usr/lib/jvm/java
./run.sh --no-debs

Et voilà – cloudera-hadoop-lzo RPMs ready for installation. But wait… the libs get installed to /usr/lib/hadoop-0.20… That's no good: I want them in /usr/lib/hbase.

So I went ahead and hacked run.sh and template.spec to allow the install dir on the target machine to be specified on the command line. I can now use a command something like this:

./run.sh --name hbase-lzo --install-dir /usr/lib/hbase --no-deb

That produces a set of RPMs (binary, source, and debuginfo) with the base name hbase-lzo and the libraries installed to /usr/lib/hbase.
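
To check that HBase can actually use the result, HBase's bundled CompressionTest should do the trick (a sketch; the RPM glob and test path are my own):

# Install the freshly built package, then round-trip a file through the LZO codec:
rpm -ivh hbase-lzo-*.x86_64.rpm
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile lzo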

My changes (plus another small change adding the necessary BuildRequires to the RPM spec template) are in my fork of the project on GitHub.

I had a utility server running RHEL 6.2 (installed as part of a RHEV evaluation). However, I have no RHEL entitlements, so I'm not able to get updates.

So, I converted it to CentOS 6.2, with a little help from this post:

yum clean all
mkdir ~/centos
cd ~/centos
wget http://mirror.centos.org/centos/6.2/os/x86_64/RPM-GPG-KEY-CentOS-6
wget http://mirror.centos.org/centos/6.2/os/x86_64/Packages/centos-release-6-2.el6.centos.7.x86_64.rpm
wget http://mirror.centos.org/centos/6.2/os/x86_64/Packages/yum-3.2.29-22.el6.centos.noarch.rpm
wget http://mirror.centos.org/centos/6.2/os/x86_64/Packages/yum-utils-1.1.30-10.el6.noarch.rpm
wget http://mirror.centos.org/centos/6.2/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.30-10.el6.noarch.rpm
rpm --import RPM-GPG-KEY-CentOS-6
rpm -e --nodeps redhat-release-server
rpm -e yum-rhn-plugin rhn-check rhnsd rhn-setup
rpm -Uhv --force *.rpm
yum upgrade
reboot
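
After the reboot, a quick check confirms the switch:

# Should now report CentOS rather than Red Hat Enterprise Linux:
cat /etc/redhat-release

# And package updates should now come from the CentOS mirrors:
yum repolist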

Nice!

Most of the servers I manage are 64-bit, but I have one Linode box that is 32-bit. I chose 32-bit because it uses memory more efficiently than 64-bit, which is possibly important on a 512MB instance. This was probably a mistake, as the management overhead involved in maintaining a 32-bit infrastructure for just one 32-bit machine is silly. No matter – we are where we are!

I use the fnv_64 user-defined function from maatkit with MySQL, so I need to build a 32-bit version for use on the 32-bit server.

Here's how to use mock to create a 32-bit build environment (in this case, for CentOS 5) on a 64-bit machine (the host is actually a Fedora 15 server).
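
In outline, it looks something like this (a sketch; the mock config name and the SRPM glob are examples rather than the exact invocation):

# Install mock and initialise a 32-bit CentOS 5 chroot:
yum install mock
mock -r epel-5-i386 --init

# Rebuild the source RPM inside the chroot to get i386 binaries:
mock -r epel-5-i386 --rebuild maatkit-*.src.rpm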

Continue reading

Dell distributes its OMSA software in RPM packages and even has a yum repo available, so you'd think that updating to the next version would be as simple as yum update, right? Wrong!

You have to remove the old version first, and then install the new version. Oh, and you also need to stop the Dell services, restart ipmi, then restart the Dell services.

Something like this:

yum -y remove srvadmin-* \
  && rm -Rf /opt/dell \
  && yum -y install srvadmin-all dell_ft_install \
  && srvadmin-services.sh stop \
  && service ipmi restart \
  && srvadmin-services.sh start

It seems that several people have been having problems getting Dell OMSA 6.2 to work correctly on CentOS 5.4 x86_64. Specifically, the software does not detect any storage controllers, and therefore also doesn't find any disks. E.g.

[root@b034 ~]# omreport storage pdisk controller=0
Invalid controller value. Read, controller=0
No controllers found.

After a little investigation, I found the source of the problem.

Continue reading

As I mentioned in a previous post, the MySQL RPMs provided for RHEL/CentOS by Percona are not actually drop-in compatible with the stock RHEL/CentOS packages: they use the same package layout as the MySQL-provided RPMs, not the RHEL/CentOS one.

Here's how I create my own RPMs with the same package layout as the RHEL/CentOS packages, but with the Percona highperf patchset applied.
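
The rough shape of the process is something like this (a sketch: the version string is a placeholder, and the spec edits are summarised in the comments):

# Install the stock CentOS source RPM (version string is a placeholder):
rpm -ivh mysql-5.0.77-*.src.rpm

# Copy the Percona highperf patches into /usr/src/redhat/SOURCES, add
# matching PatchN: / %patchN lines to mysql.spec, bump the release tag,
# then rebuild:
rpmbuild -ba /usr/src/redhat/SPECS/mysql.spec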

Continue reading