Having grown up with CVS and then moved on to svn, I find git amazingly powerful. However, I don't use it enough to have learned some of its more powerful features. This is just a post to document a couple of useful things I learned recently.

I was working on a project on github, adding a missing command-line option. But, while editing, I got carried away and made a load of additional trivial changes, cleaning up the man page text, making capitalisation consistent, etc. All good stuff, but I didn't want to commit both sets of changes in the same commit.

What I needed to do was to select which changes to commit. The command to do that is:

git add --patch

To show the changes that have been added to the staging area, ie. those that will be committed:

git diff --cached

If you mistakenly add a change and want to remove it from the staging area:

git reset --patch

Finally, commit the change:

git commit

Optionally, push the change back to github:

git push origin master

Nifty stuff.

We sometimes see problems updating our Dell machines to the latest firmware, ie. update_firmware -y fails:

Running updates...
-	Installing dell_dup_componentid_00159 - 1.4.7
Installation failed for package: dell_dup_componentid_00159 - 1.4.7
aborting update...

The error message from the low-level command was:

Could not parse output, bad xml for package: dell_dup_componentid_00159

Dell have been unable to tell me why this happens, or to provide a fix or workaround.

Here's what I did to get the firmware installed:

Continue reading

I have a NexentaStor-based NAS device on my home network. It has 10 x 500GB SATA Drives in a raidz2 configuration giving me approx 4TB of usable storage.

I don't have many users at home (basically, just me!) so I don't bother with any central authentication mechanism – I simply make sure that I use the same login (robin) across all servers, and on unix/linux servers I make sure the UID is the same (10000).

When a user is added to NexentaStor, the account is allocated the next available UID, beginning at 1001 (1000 is allocated to the admin user). For the robin login, I need this to be 10000.

Not a problem – simply drop to a bash shell on the NAS and use vipw to change the UID of the login.

The bit I *always* forget is that the CIFS server uses a different passwd file (/var/smb/smbpasswd) and it is necessary to change the UID in that file too.
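For example, something like the following on a scratch copy of the file. The `name:uid:...` field layout is an assumption about the smbpasswd format, so inspect your real /var/smb/smbpasswd before editing it:

```shell
# Sketch: bump robin's UID from 1001 to 10000 in a scratch copy of the
# CIFS passwd file. The robin:1001:... entry below is invented for the
# demo -- the real file's format may differ.
f=$(mktemp)
printf 'robin:1001:4294967295:0123456789ABCDEF\n' > "$f"

# Rewrite only the UID field of robin's entry.
sed -i 's/^robin:1001:/robin:10000:/' "$f"

grep '^robin:10000:' "$f"   # confirm the edit took
rm -f "$f"
```

On the real NAS you'd work on /var/smb/smbpasswd itself (after taking a backup copy), then verify that CIFS access maps to the right UID.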

We currently deploy our app code to around 50 nodes using capistrano. We use the "copy" deployment method, ie. the code is checked out of svn onto the local deployment node, rolled into a tarball, then copied out to each target node where it is unrolled into a release dir before the final symlink is put in place.

As you might imagine, copying to 50 nodes generates quite a bit of traffic, and it takes ~5 mins to do a full deploy.

I was reading this interesting link today; one bullet in particular jumped out at me:

  • "… the few hundred MB binary gets rapidly pushed bia [sic] bit torrent."

Now that's an interesting idea – I wonder if I can knock up something in capistrano that deploys using bittorrent?

Dell distributes its OMSA software in RPM packages and even has a yum repo available, so you'd think that updating to the next version would be as simple as yum update, right? Wrong!

You have to remove the old version first, and then install the new version. Oh, and you also need to stop the Dell services, restart ipmi, then restart the Dell services.

Something like this:

yum -y remove srvadmin-* \
  && rm -Rf /opt/dell \
  && yum -y install srvadmin-all dell_ft_install \
  && srvadmin-services.sh stop \
  && service ipmi restart \
  && srvadmin-services.sh start

We’ve had a bunch of new servers in place for around 3 months now. They seem to be working well and are performing just fine.

Then, out of the blue, our monitoring started throwing alerts on seemingly random servers. Our queues were building up – basically, database performance had dropped dramatically and our processing scripts couldn’t stuff data into the DBs fast enough.

What could be causing it?

Continue reading

I use cobbler to provision our new Dell servers, which is great, but it needs the MAC addresses of the servers to identify each machine.

Previously, I have been doing this manually:

  1. log in to the DRAC web interface
  2. launch the java console
  3. reboot the server
  4. go into the BIOS
  5. navigate to Embedded Devices
  6. manually record the MAC addresses

This takes quite a while, and is prone to error.

I recently had another 42 servers to deploy, so I looked for a way to automate this process. I found one! Continue reading
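The post is cut short here, but for illustration, one way to skip the reboot-into-the-BIOS dance (not necessarily the approach the post goes on to describe) is to query the DRAC remotely with something like `racadm -r <drac-ip> -u root -p <password> getsysinfo` and filter the NIC section. The sample output below is invented to show the idea; real getsysinfo output formatting varies by DRAC generation:

```shell
# Filter MAC addresses out of (invented, illustrative) getsysinfo output.
# In real life the here-doc would be replaced by:
#   racadm -r <drac-ip> -u root -p <password> getsysinfo
awk -F' = ' '/Ethernet/ {print $2}' <<'EOF'
[Embedded NIC MAC Addresses]
NIC1 Ethernet = 00:1E:C9:AA:BB:01
NIC2 Ethernet = 00:1E:C9:AA:BB:02
EOF
```

Looping that over a list of DRAC IPs would give you every MAC without a single reboot.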

I use the toggl time-tracking service to keep track of the hours I work for my various clients.

toggl make desktop clients available for Windows, Mac, & Linux, but the Linux packages are in .deb format for Ubuntu and, until recently, they did not provide x86_64 packages.

toggl recently released the desktop client as open source so I grabbed it and have built an RPM.

SRPM: TogglDesktop-2.5.1-1.fc12.src.rpm

RPM (Fedora 12, x86_64): TogglDesktop-2.5.1-1.fc12.x86_64.rpm