I have a NexentaStor-based NAS device on my home network. It has 10 x 500GB SATA drives in a raidz2 configuration, giving me approximately 4TB of usable storage (raidz2 sacrifices two drives' worth of capacity to parity).
I don't have many users at home (basically, just me!) so I don't bother with any central authentication mechanism – I simply make sure that I use the same login (robin) across all servers, and on unix/linux servers I make sure the UID is the same (10000).
When a user is added to NexentaStor, the account is allocated the next available UID, beginning at 1001 (1000 is allocated to the admin user). For the robin login, I need the UID to be 10000.
Not a problem – simply drop to a bash shell on the NAS and use vipw to change the UID of the login.
The bit I *always* forget is that the CIFS server uses a different passwd file (/var/smb/smbpasswd) and it is necessary to change the UID in that file too.
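The two-file edit above can be sketched as a small shell helper. This is a sketch only: the field positions are assumptions (UID in field 3 of /etc/passwd, and apparently field 2 of /var/smb/smbpasswd) – verify against your own files before running anything for real.

```shell
# Sketch only: rewrite the UID for a given login in a colon-delimited
# passwd-style file. Field positions below are assumptions -- check
# your /etc/passwd and /var/smb/smbpasswd layouts first.
change_uid() {  # usage: change_uid <file> <login> <uid_field> <new_uid>
  awk -F: -v OFS=: -v login="$2" -v f="$3" -v uid="$4" \
    '$1 == login { $f = uid } { print }' "$1" > "$1.new" && mv "$1.new" "$1"
}
# Then, on the NAS (as root):
#   change_uid /etc/passwd        robin 3 10000
#   change_uid /var/smb/smbpasswd robin 2 10000
```

Remember that it is the second file that is easy to forget – changing only /etc/passwd leaves CIFS mapping the old UID.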
We currently deploy our app code to around 50 nodes using capistrano. We use the "copy" deployment method, i.e. the code is checked out of svn onto the local deployment node, rolled into a tarball, then copied out to each target node, where it is unpacked into a release dir before the final symlink is put in place.
As you might imagine, copying to 50 nodes generates quite a bit of traffic, and it takes ~5 mins to do a full deploy.
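Stripped of capistrano, the "copy" strategy boils down to a few shell steps. The sketch below is illustrative only – the svn URL, hostnames, and paths are all made up, and capistrano's real implementation differs in detail:

```shell
# What the "copy" deploy amounts to, as plain shell (all names invented):
build_release() {  # usage: build_release <checkout_dir> <tarball>
  tar -czf "$2" -C "$1" .
}
# svn export http://svn.example.com/app/trunk /tmp/checkout
# build_release /tmp/checkout /tmp/release.tar.gz
# for host in node{01..50}; do
#   scp /tmp/release.tar.gz "$host:/tmp/"
#   ssh "$host" 'rel=/app/releases/$(date +%Y%m%d%H%M%S) &&
#     mkdir -p "$rel" && tar -xzf /tmp/release.tar.gz -C "$rel" &&
#     ln -sfn "$rel" /app/current'
# done
```

It is the sequential scp loop that eats the time – each node pulls the full tarball from the one deployment box.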
I was reading this interesting link today; one bullet in particular jumped out at me:
- "… the few hundred MB binary gets rapidly pushed bia [sic] bit torrent."
Now that's an interesting idea – I wonder if I can knock up something in capistrano that deploys using bittorrent?
I wanted to install a Fedora 13 machine as a paravirtual domu guest on our CentOS 5.5, xen 3.4.2 host. I also wanted to provision it using koan/cobbler. I ran into a few problems along the way, but I got there in the end!
Dell distributes its OMSA software in RPM packages and even has a yum repo available, so you'd think that updating to the next version would be as simple as yum update, right? Wrong!
You have to remove the old version first, and then install the new version. Oh, and you also need to stop the Dell services, restart ipmi, then restart the Dell services.
Something like this:
yum -y remove srvadmin-* \
&& rm -Rf /opt/dell \
&& yum -y install srvadmin-all dell_ft_install \
&& srvadmin-services.sh stop \
&& service ipmi restart \
&& srvadmin-services.sh start
Some kind soul on #bash on Freenode recently pointed me at this excellent document:
(I ran into #20, in case you're wondering)
We’ve had a bunch of new servers in place for around 3 months now. They seem to be working well and are performing just fine.
Then, out of the blue, our monitoring started throwing alerts on seemingly random servers. Our queues were building up – basically, database performance had dropped dramatically and our processing scripts couldn’t stuff data into the DBs fast enough.
What could be causing it?
I use cobbler to provision our new Dell servers, which is great but it needs the MAC addresses of the servers to identify each machine.
Previously, I have been doing this manually:
- log in to the DRAC web interface
- launch the java console
- reboot the server
- go into the BIOS
- navigate to Embedded Devices
- manually record the MAC addresses
This takes quite a while, and is prone to error.
I recently had another 42 servers to deploy, so I looked for a way to automate this process. I found one!
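One way to avoid the console round-trip entirely is to query each DRAC remotely with racadm and scrape the MACs out of its system inventory. This is a sketch under assumptions, not necessarily the approach used here: it assumes remote racadm is installed, the DRAC IPs and credentials below are placeholders, and getsysinfo output varies between DRAC generations.

```shell
# Pull anything MAC-shaped out of racadm inventory output.
# (IPs/credentials are placeholders; check your DRAC generation's
# getsysinfo format before relying on this.)
extract_macs() {
  grep -Eio '([0-9a-f]{2}:){5}[0-9a-f]{2}'
}
# for drac in 10.0.0.{1..42}; do
#   racadm -r "$drac" -u root -p calvin getsysinfo | extract_macs
# done
```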
I use the toggl time-tracking service to keep track of the hours I work for my various clients.
toggl makes desktop clients available for Windows, Mac, & Linux, but the Linux packages are in .deb format for Ubuntu and, until recently, no x86_64 packages were provided.
toggl recently released the desktop client as open source so I grabbed it and have built an RPM.
RPM (Fedora 12, x86_64): TogglDesktop-2.5.1-1.fc12.x86_64.rpm
It seems that several people have been having problems getting Dell OMSA 6.2 to work correctly on CentOS 5.4 x86_64. Specifically, the software does not detect any storage controllers, and therefore also doesn't find any disks, e.g.
[root@b034 ~]# omreport storage pdisk controller=0
Invalid controller value. Read, controller=0
No controllers found.
After a little investigation, I found the source of the problem.
Update: see my recent post describing a better way to do this.
I often need to deploy Ruby gems across many CentOS servers. I prefer to use the native OS package management tools (rpm + yum) rather than installing gems directly with the gem command.
Here’s how to build RPMs from Ruby gems using gem2rpm.
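The basic workflow looks roughly like the following. The gem name and version are just examples, and the SOURCES path depends on your rpmbuild setup (on stock CentOS 5 the default tree is /usr/src/redhat rather than ~/rpmbuild):

```shell
# Rough gem2rpm workflow (gem name/version and paths are examples only):
#   gem install gem2rpm
#   gem fetch rake                          # fetches e.g. rake-0.8.7.gem
#   gem2rpm rake-0.8.7.gem > rubygem-rake.spec
#   cp rake-0.8.7.gem ~/rpmbuild/SOURCES/
#   rpmbuild -ba rubygem-rake.spec
# A small hypothetical helper for naming specs when looping over many gems:
spec_name() {  # usage: spec_name <name>-<version>.gem  -> rubygem-<name>.spec
  basename "$1" | sed -E 's/-[0-9][^-]*\.gem$/.spec/; s/^/rubygem-/'
}
# spec_name rake-0.8.7.gem   # -> rubygem-rake.spec
```

The resulting RPMs can then be dropped into an internal yum repo (createrepo) and installed everywhere with plain yum.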