It's a new year! So I thought I had better write a blog post just to show I'm still alive (and to test that some changes I've made are working). Besides that, not much to say.
So, I've been a bit busy lately, so much so that I had to defer a semester of Uni. The reason? I've had to manage the installation of a bunch of GPS trackers for a large state government organisation. It's been, in a word, shit.
I inherited the management from someone who basically didn't do much except attempt to make pretty graphs of unrealistic timelines in Microsoft Project. They left, so I was thrown into the project. Now, even though it's not a software project, you still run into the same sorts of problems that software projects do (funny, that).
So, I will write a few blog posts, when I can, to cover just some minor things that I found.
I use ZFS, and I love it; I think it is the best filesystem out there. Its primary focus is integrity, which is the most important thing. What's also important: backups. Even with the data integrity that ZFS offers (which far surpasses any hardware RAID), you still have to back up.
Again, with ZFS this is much easier than with other solutions (Bacula, for example). Since we run Sun servers, we also run Solaris, because when you run Solaris on Sun hardware the licence is relatively cheap. As a result, I use the Time Slider service to automatically create snapshots (which, when you share a ZFS filesystem out via CIFS, show up in the Windows GUI as "previous versions").
Because of this, I also use the "zfs-send" plugin, which backs snapshots up to a separate Solaris server. However, there are some gotchas that may catch you out if you had a working config, then changed things around and found the zfs-send service failing.
First, zfs-send puts a hold on snapshots so they don't get deleted before they're sent to the remote server. However, if you're in a situation where you need to clear all the snapshots (for example, you've moved or changed which ZFS filesystems you want to back up), you'll find you can't delete them: you first have to "zfs release" the snapshots.
Here is a little snippet that will do this (and delete ALL zfs-auto-snap snapshots on the system):
for snap in $(zfs list -H -o name -t snapshot | grep @zfs-auto-snap); do zfs release org.opensolaris:time-slider-plugin:zfs-send "$snap"; zfs destroy "$snap"; done
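If you'd rather see what's actually held before destroying anything, "zfs holds" lists the hold tags on a snapshot. A minimal sketch (assuming the same tag name the zfs-send plugin uses on my system; verify with the output of "zfs holds" on yours):

```shell
# List the holds on every auto snapshot before releasing anything.
# The grep pattern matches Time Slider's snapshot naming; adjust if
# your snapshots are named differently.
for snap in $(zfs list -H -o name -t snapshot | grep @zfs-auto-snap); do
    zfs holds "$snap"
done
```

If a snapshot shows no holds, a plain "zfs destroy" will work on it without a release step.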
Secondly, zfs-send stores the name of the previously sent snapshot as a property on the filesystem, so it knows it can do an incremental zfs send. However, if you've broken this sequence, or deleted the snapshots, this will cause it to break.
You can look for it with:
zfs get -r org.opensolaris:time-slider-plugin:zfs-send storage
Where "storage" can be replaced with your particular zpool name. To clear a property, you use "zfs inherit", like so:
zfs inherit org.opensolaris:time-slider-plugin:zfs-send storage/shares
Changing "storage/shares" to the particular ZFS file system you want to clear the property from. You can clear this property recursively by just adding the "-r" option:
zfs inherit -r org.opensolaris:time-slider-plugin:zfs-send storage/shares
Once you've done this, just enable the service (or clear it if it was forced into maintenance) and you should be golden.
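For reference, the SMF commands for that last step look something like this. The service FMRI below is an assumption based on my setup; find yours with "svcs -a | grep time-slider":

```shell
# Clear the maintenance state (if the service was forced into it),
# then enable it. The FMRI here is an assumed example; list the real
# one on your system with: svcs -a | grep time-slider
svcadm clear svc:/application/time-slider/plugin:zfs-send
svcadm enable svc:/application/time-slider/plugin:zfs-send
```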
Jonathan Blow gave a talk on software quality; you should watch it if you're interested in writing software. I used to have an Amiga, and to be honest, it was far more responsive than my current beast of a PC.
It seems to me that one of the most important aspects of software development is one that doesn't get a great amount of focus: debugging. Sure, it's mentioned here and there, but, for example, first-year students aren't even taught about the command-line Java debugger (jdb).
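For the curious, a minimal jdb session looks something like this. The class name is a made-up example, and I'm driving jdb from a here-document rather than interactively, just to keep the sketch self-contained:

```shell
# Compile with debug info, then set a breakpoint, run to it, inspect
# locals, and continue. "HelloWorld" is a hypothetical example class.
javac -g HelloWorld.java
jdb HelloWorld <<'EOF'
stop in HelloWorld.main
run
locals
step
cont
quit
EOF
```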
So, I believe this video of Stuart Halloway, "Debugging with the Scientific Method", is required viewing. Of course, it applies not just to debugging, but to any sort of performance work on a website or application. Take Stack Overflow, for example: it's a popular site, hosted on their own servers. I have been reading lately about their setup and the monitoring they do, not only for uptime, but for performance.
For example, they use HAProxy to load balance across their web tier servers, which is obviously not unusual; that's what HAProxy is for. But they also have these proxies capture and filter performance data from their application via headers in the HTTP response. It's probably something that everyone does, but to be honest, I've never come across any mention of this trick. (There's also their MiniProfiler tool, a variant of which I'm using.)
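The HAProxy side of that trick can be sketched with its "capture response header" directive, which records a named response header into the proxy's logs. The header name below is a hypothetical example; the application would set it with its own timing data:

```
# haproxy.cfg sketch (frontend section). "X-Request-Duration" is an
# assumed header name the application sets with its render time; the
# captured value then shows up in HAProxy's request logs.
frontend web
    bind *:80
    capture response header X-Request-Duration len 10
    default_backend app_servers
```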
Given how little debugging is taught at university (well, my university), I can't judge how common and detailed this sort of performance measurement is. I suspect it might not be very common, so it could be an interesting area for me to focus on.