Having grown up with CVS and then moved on to svn, I find git amazingly powerful. However, I don't use it often enough to have learned some of its more powerful features. This is just a post to document a couple of useful things I learned recently.
I was working on a project on github, adding a missing command-line option. But, while editing, I got carried away and made a load of additional trivial changes, cleaning up the man page text, making capitalisation consistent, etc. All good stuff, but I didn't want to commit both sets of changes in the same commit.
What I needed to do was to select which changes to commit. The command to do that is:
git add --patch
To show the changes that have been added to the staging area, ie. those that will be committed:
git diff --cached
If you mistakenly add a change and want to remove it from the staging area:
git reset --patch
Finally, commit the change:
git commit
Optionally, push the change back to github:
git push origin master
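Putting the pieces together, here's a sketch of the whole selective-staging workflow in a throwaway repo. It's non-interactive here (a scripted example can't answer prompts); in real use you would run git add --patch and answer y/n to each hunk it offers:

```shell
# Throwaway repo to demonstrate staging, inspecting and committing
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com   # placeholder identity for the demo
git config user.name "You"
printf 'line one\n' > file.txt
git add file.txt && git commit -q -m 'initial'
printf 'line one\nline two\n' > file.txt
# Interactively you would run: git add --patch
# (here the whole file is staged non-interactively)
git add file.txt
git diff --cached --stat                # show what is staged for commit
git commit -q -m 'add second line'
```

git reset --patch works the same way in reverse: it offers each staged hunk and asks whether to remove it from the index.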
I'm sure we've all seen this message from time to time when using puppet with exported resources:
Error 400 on SERVER: Exported resource Sshkey[foo] cannot override local resource on node bar.example.com
It's actually pretty easy to fix. Simply delete the exported resource for node foo.
Assuming you are using MySQL for your DB, something like this will do the trick:
mysql -e "delete from resources where restype like 'sshkey' and exported=1 and host_id = (select id from hosts where name = 'foo')" puppet
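Before deleting anything it's worth previewing what will match. A hedged sketch using the same stored-configs schema as the delete above (column names from the old Rails storeconfigs tables; check yours first):

```shell
# Preview the rows the delete would remove (same schema assumption as above)
mysql -e "select id, restype, title, exported from resources \
  where restype like 'sshkey' and exported = 1 \
  and host_id = (select id from hosts where name = 'foo')" puppet
```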
We sometimes see problems updating our Dell machines to the latest firmware, ie. update_firmware -y fails:
- Installing dell_dup_componentid_00159 - 1.4.7
Installation failed for package: dell_dup_componentid_00159 - 1.4.7
The error message from the low-level command was:
Could not parse output, bad xml for package: dell_dup_componentid_00159
Dell have been unable to tell me why this is, or provide a fix or workaround.
Here's what I did to get the firmware installed:
I have a NexentaStor-based NAS device on my home network. It has 10 x 500GB SATA Drives in a raidz2 configuration giving me approx 4TB of usable storage.
I don't have many users at home (basically, just me!) so I don't bother with any central authentication mechanism – I simply make sure that I use the same login (robin) across all servers, and on unix/linux servers I make sure the UID is the same (10000).
When a user is added to NexentaStor, the account is allocated the next available UID beginning at 1001 (1000 is allocated to the admin user). For the robin login, I need this to be 10000.
Not a problem – simply drop to a bash shell on the NAS and use vipw to change the UID of the login.
The bit I *always* forget is that the CIFS server uses a different passwd file (/var/smb/smbpasswd) and it is necessary to change the UID in that file too.
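The edit itself is just changing the UID field in two colon-separated files. A sketch on local copies (the smbpasswd field layout shown is an assumption – check your /var/smb/smbpasswd before editing, and use vipw rather than awk for the real passwd file):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# Local copies standing in for /etc/passwd and /var/smb/smbpasswd
printf 'robin:x:1001:10:Robin:/export/home/robin:/bin/bash\n' > passwd
printf 'robin:1001:LMHASH:NTHASH\n' > smbpasswd   # assumed field layout
# passwd: UID is field 3; smbpasswd: UID is field 2 (assumption)
awk -F: -v OFS=: '$1 == "robin" { $3 = 10000 } 1' passwd    > passwd.new
awk -F: -v OFS=: '$1 == "robin" { $2 = 10000 } 1' smbpasswd > smbpasswd.new
cat passwd.new smbpasswd.new
```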
We currently deploy our app code to around 50 nodes using capistrano. We use the "copy" deployment method, ie. the code is checked out of svn onto the local deployment node, rolled into a tarball, then copied out to each target node where it is unrolled into a release dir before the final symlink is put in place.
As you might imagine, copying to 50 nodes generates quite a bit of traffic, and it takes ~5 mins to do a full deploy.
I was reading this interesting link today; one bullet in particular jumped out at me:
- "… the few hundred MB binary gets rapidly pushed bia [sic] bit torrent."
Now that's an interesting idea – I wonder if I can knock up something in capistrano that deploys using bittorrent?
I wanted to install a Fedora 13 machine as a paravirtual domu guest on our CentOS 5.5, xen 3.4.2 host. I also wanted to provision it using koan/cobbler. I ran into a few problems along the way, but I got there in the end!
Dell distributes its OMSA software in RPM packages and even has a yum repo available, so you'd think that updating to the next version would be as simple as yum update, right? Wrong!
You have to remove the old version first, and then install the new version. Oh, and you also need to stop the Dell services, restart ipmi, then restart the Dell services.
Something like this:
yum -y remove srvadmin-* \
&& rm -Rf /opt/dell \
&& yum -y install srvadmin-all dell_ft_install \
&& srvadmin-services.sh stop \
&& service ipmi restart \
&& srvadmin-services.sh start
Some kind soul on #bash on Freenode recently pointed me at this excellent document:
(I ran into #20, in case you're wondering)
We’ve had a bunch of new servers in place for around 3 months now. They seem to be working well and are performing just fine.
Then, out of the blue, our monitoring started throwing alerts on seemingly random servers. Our queues were building up – basically, database performance had dropped dramatically and our processing scripts couldn’t stuff data into the DBs fast enough.
What could be causing it?
I use cobbler to provision our new Dell servers, which is great but it needs the MAC addresses of the servers to identify each machine.
Previously, I have been doing this manually:
- log in to the DRAC web interface
- launch the java console
- reboot the server
- go into the BIOS
- navigate to Embedded Devices
- manually record the MAC addresses
This takes quite a while, and is prone to error.
I recently had another 42 servers to deploy, so I looked for a way to automate this process. I found one!
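Whatever tool dumps the NIC info for you, pulling the MACs out of its output is the easy part. A sketch that greps MAC-shaped strings from captured output (the sample NIC lines below are made up for illustration):

```shell
# Hypothetical captured output from whatever dumps the NIC details
cat > sysinfo.txt <<'EOF'
NIC1 Ethernet = 00:1E:C9:AA:BB:01
NIC2 Ethernet = 00:1E:C9:AA:BB:02
EOF
# Pull out anything MAC-shaped: six colon-separated hex pairs
grep -Eio '([0-9a-f]{2}:){5}[0-9a-f]{2}' sysinfo.txt
```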