
Considering my backup systems

With the recent news that Crashplan was doing away with their “Home” offering, I had reason to reconsider my choice of online backup provider. Since I haven’t written anything here lately and the results of my exploration (plus a description of everything else I do to ensure data longevity) might be of interest to others looking to set up backup systems for their own data, a version of my notes from that process follows.

The status quo

I run a Linux-based home server for all of my long-term storage, currently 15 terabytes of raw storage with btrfs RAID on top. The choice of btrfs and RAID allows me some degree of robustness against local disk failures and accidental damage to data.

If a disk fails I can replace it without losing data, and btrfs’ RAID support allows heterogeneous disks: when I need more capacity I can remove one disk (putting the volume into a degraded state), add a new, larger one, and rebalance onto the new disk.

btrfs’ ability to take copy-on-write snapshots of subvolumes at any time makes it reasonable to take regular snapshots of everything, providing a first line of defense against accidental damage to data. I use Snapper to automatically create rolling snapshots of each of the major subvolumes:

  • Synchronized files (mounted to other machines over the network) have 8 hourly, 7 daily, 4 weekly and 3 monthly snapshots available at any time.
  • Staging items (for sorting into other locations) have a snapshot for each of the last two hours only, because those items change frequently and are of low value until considered further.
  • Everything else keeps one snapshot from the last hour and each of the last 3 days.

This configuration strikes a balance between my need to recover from accidents and the storage and performance costs of keeping snapshots. The frequently-changed items (synchronized with other machines and containing active projects) have a lot of snapshots because most individual files are small but may change often, so even a large number of snapshots has modest storage needs. In addition, the chances of accidental data destruction are highest there. The other subvolumes are either more static or lower-value, so I feel little need to keep many snapshots of them.
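As a rough illustration (not copied from my actual setup), a Snapper timeline configuration matching the retention for the synchronized subvolume would look something like this; the config name and mount point are hypothetical, while the TIMELINE_* keys are Snapper’s standard timeline-cleanup settings:

# snapper -c sync create-config /srv/sync
# cat /etc/snapper/configs/sync
...
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="8"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="3"
TIMELINE_LIMIT_YEARLY="0"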

I use Crashplan to back up the entire system to their “cloud” service for $5 per month. The rate at which I add data to the system is usually lower than the rate at which it can be uploaded to Crashplan as a backup, so in most cases new data is backed up remotely within hours of being created.

Finally, I have a large USB-connected external hard drive as a local offline backup. Also formatted with btrfs like the server (but with the entire disk encrypted), I can use btrfs send to send incremental backups to this external disk, even without the ability to send information from the external disk back. In practice, this means I can store the external disk somewhere else completely (possibly without an Internet connection) and occasionally shuttle diffs to it to update to a more recent version. I always unplug this disk from power and its host computer when not being updated, so it should only be vulnerable to physical damage and not accidental modification of its contents.
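A sketch of what one of those incremental transfers looks like, with hypothetical subvolume and snapshot paths; the -p option names a parent snapshot that already exists on the external disk, so only the difference is sent:

# btrfs subvolume snapshot -r /srv/sync /srv/snapshots/sync-new
# btrfs send -p /srv/snapshots/sync-prev /srv/snapshots/sync-new | btrfs receive /mnt/external/snapshots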

Synchronization and remotes

For synchronizing current projects between my home server (which I treat as the canonical repository for everything) and my other machines, the tools vary according to the constraints of the remote system. I mount volumes over NFS or SMB from systems that rarely or never leave my network. For portable devices (laptop computers), Syncthing (running on the server and the portable device) makes bidirectional synchronization easy without requiring that both machines always be on the same network.

I keep very little data on portable devices that is not synchronized back to the server, but because it is (or, was) easy to set up, I used Crashplan’s peer-to-peer backup feature to back up my portable computers to the server. Because the Crashplan application is rather heavyweight (it’s implemented in Java!) and it refuses to include peer-to-peer backups in uploads to their storage service (reasonably so; I can’t really complain about that policy), my remote servers back up to my home server with Borg.
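For reference, a Borg setup along these lines amounts to only a few commands; the repository path, user name, and retention numbers below are hypothetical, but the flags and the {hostname}/{now} placeholders are standard Borg usage:

$ borg init --encryption=repokey ssh://backup@home-server/srv/borg/myserver
$ borg create --stats ssh://backup@home-server/srv/borg/myserver::'{hostname}-{now}' /etc /home /var
$ borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://backup@home-server/srv/borg/myserver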

I also have several Android devices that aren’t always on my home network; these aren’t covered very well by backups, unfortunately. I use FolderSync to automatically upload things like photos to my server, which covers most of the data I create on those devices, but it seems difficult to make a backup of an Android device that includes things like preferences and per-app data without rooting the device (which I don’t wish to do for various reasons).

Summarizing the status quo

  • btrfs snapshots offer quick access to recent versions of files.
  • btrfs RAID provides resilience against single-disk failures and easy growth of total storage in my server.
  • Remote systems synchronize or back up most of their state to the server.
  • Everything on the server is continuously backed up to Crashplan’s remote servers.
  • A local offline backup can be easily moved and is rarely even connected to a computer so it should be robust against even catastrophic failures.

Evaluating alternatives

Now that we know how things were, we can consider alternative approaches to solve the problem of Crashplan’s $5-per-month service no longer being available. The primary factors for me are cost and storage capacity. Because most of my data changes rarely but none of it is strictly immutable, I want a system that makes it possible to do incremental backups. This will of course also depend on software support, but it means that I will tend to prefer services with straightforward pricing because it is difficult to estimate how many operations (read or write) are necessary to complete an incremental backup.

Commonly-known services like Dropbox or Google Drive might be appropriate for some users, but I won’t consider them. As consumer-oriented services positioned for the use case of “make these files available whenever I have Internet access,” they’re optimized for applications very different from the needs of my backups and tend to be significantly more expensive at the volumes I need.

So, the contenders:

  • Crashplan for Small Business: just like Crashplan Home (which was going away), but costs $10/mo for unlimited storage and doesn’t support peer-to-peer backup. Can migrate existing Crashplan Home backup archives to Small Business as long as they are smaller than 5 terabytes.
  • Backblaze: $50 per year for unlimited storage, but their client only runs on Mac and Windows.
  • Google Cloud Storage: four flavors available, where the interesting ones for backups are Nearline and Coldline. Low cost per gigabyte stored, but costs are incurred for each operation and transfer of data out.
  • Backblaze B2: very low cost per gigabyte, but incurs costs for download.
  • Online.net C14: very low cost per gigabyte, no cost for operations or data transfer in the “intensive” flavor.
  • AWS Glacier: lowest cost for storage, but very high latency and cost for data retrieval.

The pricing is difficult to compare in this form, so I’ll make some estimates using an 8 terabyte backup archive. This somewhat exceeds my current needs, so it should be a useful if not strictly accurate guide. The following table summarizes the expected monthly costs for storage and the addition of new data, plus the hypothetical cost of recovering everything from a backup stored with that service.

Service      | Storage cost | Recovery cost | Notes
Crashplan    | $10          | $0            | “Unlimited” storage, flat fee.
Backblaze    | $4.17        | $0            | “Unlimited” storage, flat fee.
GCS Nearline | $80          | ~$80          | Negligible but nonzero cost per operation. Download $0.08 to $0.23 per gigabyte depending on total monthly volume and destination.
GCS Coldline | $56          | ~$80          | Higher but still negligible cost per operation. All items must be stored for at least 90 days (kind of).
B2           | $40          | $80           | Flat fee for storage and transfer per-gigabyte.
C14          | €40          | $0            | “Intensive” flavor. Other flavors incur per-operation costs.
Glacier      | $32          | $740          | Per-gigabyte retrieval fees plus Internet egress. Reads may take up to 12 hours for data to become available. Negligible cost per operation. Minimum storage 90 days (like Coldline).

Note that for Google Cloud and AWS I’ve used the pricing quoted for the cheapest regions: Iowa on GCP and US East on AWS.
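The dollar figures above are simple per-gigabyte arithmetic on an 8 TB archive; as a sketch, using the rates implied by the B2 row (roughly $0.005 per gigabyte-month stored and $0.01 per gigabyte downloaded):

$ awk 'BEGIN { gb = 8 * 1000; printf "storage: $%.0f/mo  recovery: $%.0f\n", gb * 0.005, gb * 0.01 }'
storage: $40/mo  recovery: $80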

Analysis

Backblaze is easily the most attractive option on price, but their client (which is required to use the service) only runs on Windows and Mac, which makes it difficult to use. It may be possible to run a Windows virtual machine on my Linux server to make it work, but that sounds like a lot of work for something that may not be reliable. Backblaze is out.

AWS Glacier is inexpensive for storage, but extremely expensive and slow when retrieving data. The pricing structure is complex enough that I’m not comfortable depending on this rough estimate of the costs, since actual costs for incremental backups would depend strongly on the details of how they were implemented (the service incurs charges for reads and writes). The extremely high latency on bulk retrievals (up to 12 hours) and the higher cost for lower-latency reads make it questionable whether incremental backups are even practical on Glacier. Not Glacier.

C14 is attractively priced, but because they are not widely known I expect backup packages will not (yet?) support it as a destination for data. Unfortunately, that means C14 won’t do.

Google Cloud is fairly reasonably priced, but Coldline’s storage pricing is confusing in the same ways that Glacier’s is. Either flavor beats Glacier on pricing simply because the recovery cost is so much lower, but there are still better choices than GCS.

B2’s pricing for storage is competitive and download rates are reasonable (unlike Glacier!). It’s worth considering, but Crashplan still wins on cost. Plus, I’m already familiar with the software for doing incremental backups to their service (their client!) and wouldn’t need to re-upload everything to a new service.

Fallout

I conclude that the removal of Crashplan’s “Home” service effectively means a doubling of the monthly cost to me, but little else. There are a few extra things to consider, however.

First, my backup archive at Crashplan was larger than 5 terabytes so could not be migrated to their “Business” version. I worked around that by removing some data from my backup set and waiting a while for those changes to translate to “data is actually gone from the server including old versions,” then migrating to the new service and adding the removed data back to the backup set. This means I probably lost a few old versions of the items I removed and re-added, but I don’t expect to ever need any of them.

Second, and more concerning in general, is the newfound inability to do peer-to-peer backups from my portable (and other) computers to my own server. For Linux machines that are always Internet-connected, Borg continues to do the job, but I needed a new package that works on Windows. I eventually chose Duplicati, which can connect to my server the same way Borg does (over SSH/SFTP) and will in general work over arbitrarily-restricted Internet connections in the same way that Crashplan did.

Concluding

I’m still using Crashplan, but converting to their more expensive service was not quite trivial. It’s still much cheaper to back up to their service than to the alternatives, which means they still have significant freedom to raise the price before I consider some other way to back up my data remotely.

As something of a corollary, it’s pretty clear that my high storage use on Crashplan is subsidized by other customers who store much less on the service; this is just something they must recognize when deciding how to price the service!

Chainloading Truecrypt

I recently purchased a new laptop computer (a Lenovo Thinkpad T520) and wanted to configure it to dual-boot between Windows and Linux. Since this machine is to be used “on the go”, I also wanted full encryption of any operating systems on the device. My tools of choice for this are Truecrypt on the Windows side and dm-crypt with LUKS on Linux. Mainly due to rather troublesome design on the Windows side of this setup, it was not as easy as I might have hoped. I did eventually get it working, however.

Admonishment

Truecrypt was “Discontinued” in 2014, but still works okay. VeraCrypt is substantially a drop-in replacement if you’re looking for a piece of software that is still actively maintained. As of this update (early 2017) the only non-commercial option for an encrypted Windows system booted from UEFI is Windows’ native BitLocker (with which dual-booting is possible but it won’t be possible to read the encrypted Windows partition from Linux), but if you’re booting via legacy BIOS these instructions should still work for TrueCrypt or VeraCrypt.

Windows

Installing Windows on the machine was easy enough, following the usual installation procedure. I created a new partition filling half of the disk, installed Windows to it, and let it do its thing. Downloading and installing Truecrypt is similarly easy. From there, I simply chose the relevant menu entry to turn on system encryption.

The first snag appeared when the system encryption wizard refused to continue until I had burned an optical disc containing the recovery information (in case the volume headers were to get corrupted). I preferred to copy the ISO file to another location, with the ability to boot it via grub4dos if necessary in the future (or simply burn a disc then). The way around the mandatory check was to re-invoke the volume creation wizard with the noisocheck option:

C:\Program Files\TrueCrypt>"TrueCrypt Format.exe" /noisocheck

One reboot followed, and I was able to let TrueCrypt go through and encrypt the system. It was then time to set up Linux.

Linux

Basic setup of my Linux system was straightforward. Arch (my distribution of choice) offers good support for LUKS encryption of the full system, so most of the installation went smoothly.

On reaching the bootloader installation phase, I let it install and configure syslinux (my loader of choice simply because it is easier to configure than GRUB), but did not install it to the MBR. With the installation complete, I had to do some work to manually back up the MBR installed by Truecrypt, then install a non-default MBR for Syslinux.

First up was backing up the Truecrypt MBR to a file:

# dd if=/dev/sda of=/mnt/boot/tc.bs count=1

That copies the first sector of the disk (512 bytes, containing the MBR and partition table) to a file (tc.bs) on my new /boot partition.
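The original procedure doesn’t cover restoring that backup, but should the Truecrypt boot code ever need to be put back, the inverse dd (limited to the 440 boot-code bytes so the partition table is left untouched) would look something like this:

# dd if=/mnt/boot/tc.bs of=/dev/sda bs=1 count=440 conv=notrunc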

Before installing a Syslinux MBR, I wanted to ensure that chainloading the MBR from a file would work. To that end, I used the installer to chainload to my new installation, and used that to attempt loading Windows. The following incantation (entered manually from the syslinux prompt) eventually worked:

.com32 chain.c32 hd0 1 file=/tc.bs

Pulling that line apart, I use the chainloader to boot the file tc.bs in the base of my /boot partition, and load the first partition on my first hard drive (that is, where Windows is installed). This worked, so I booted once more into the installer to install the Syslinux MBR:

# dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda bs=1 count=440 conv=notrunc

This copies 440 bytes from the given file to my hard drive, where 440 bytes is the size of the boot code portion of the MBR (stopping short of the disk signature and partition table). The input file is already that size so the count parameter should not be necessary, but one cannot be too careful when modifying the MBR.

Sadly, rebooting showed that this did not work. It turns out that the Syslinux MBR merely scans the current hard drive for partitions that are marked bootable and boots the first one. The Truecrypt MBR does the same thing, which is troublesome: for Truecrypt to work the Windows partition must be marked bootable, but Syslinux is unable to find its configuration when that is the case.

Enter altmbr.bin. Syslinux ships several different MBRs, and this alternate one does not scan for bootable partitions. Instead, the last byte of the MBR is set to a value indicating which partition to boot from. Following the example from the Syslinux wiki (see the resources below), then, I booted once more from my installer and copied the altmbr into position:

# printf '\x5' | cat /usr/lib/syslinux/altmbr.bin - | dd bs=1 count=440 conv=notrunc of=/dev/sda

This shell pipeline echoes a single byte of value 5, appends it to the contents of altmbr.bin, and writes the resulting 440 bytes to the MBR on sda. The 5 comes from the partition Syslinux was installed on, in this case the first logical partition on the disk (/dev/sda5).
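Not part of the original procedure, but a quick sanity check that the partition-index byte landed where expected (offset 439, i.e. the 440th byte) can be done with xxd; it should show a single 05 at offset 0x1b7:

# xxd -s 439 -l 1 /dev/sda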

With that, I was able to boot Syslinux properly and it was a simple matter to modify the configuration to boot either Windows or Linux on demand. Selected parts of my syslinux.cfg file follow:

UI menu.c32

LABEL arch
    MENU LABEL Arch Linux
    LINUX /vmlinuz-linux
    APPEND root=/dev/mapper/Homura-root cryptdevice=/dev/sda6:HomuHomu ro
    INITRD /initramfs-linux.img

LABEL windows
    MENU LABEL Windows 7
    COM32 chain.c32
    APPEND hd0 1 file=/tc.bs

Further resources

For all things Syslinux, the project wiki offers documentation sufficient for most purposes, although it can be somewhat difficult to navigate. A message from the Syslinux mailing list gave me the key to making Syslinux work from the MBR. The Truecrypt documentation offered some interesting information, but was surprisingly useless in the quest for a successful chainload (indeed, the volume creation wizard very clearly states that using a non-Truecrypt MBR is not supported).

High-availability /home revisited

About a month ago, I wrote about my experiments in ways to keep my home directory consistently available. I ended up concluding that DRBD is a neat solution for true high-availability systems, but it’s not really worth the trouble for what I want to do, which is keeping my home directory available and in-sync across several systems.

Considering the problem more, I determined that I really value a simple setup. Specifically, I want something that uses very common software and is resistant to network failures. My local network going down is an extremely rare occurrence, but it’s possible that my primary workstation will become a portable machine at some point in the future; if that happens, anything that depends on a constant network connection becomes hard to work with.

With an always-online option out of the question, I can also consider solutions which can handle concurrent modification (which DRBD can do, but only by using OCFS, making that solution a no-go).

Rsync

rsync is many users’ first choice for moving files between computers, and for good reason: it’s efficient and easy to use.  The downside in this case is that rsync tends to be destructive: because the source of a copy operation is taken to be the canonical version, any modifications made at the destination will be wiped out.  I already have regular cron jobs running incremental backups of my entire /home, so the risk of rsync permanently destroying valuable data is low.  However, being forced to recover from backup after an accidental deletion is a hassle, and increases the danger of actual data loss.

In that light, a dumb rsync from the NAS at boot-time and back to it at shutdown could make sense, but carries undesirable risk.  It would be possible to instruct rsync to never delete files, but the convenience factor is reduced, since any file deletions would have to be done manually after boot-up.  What else is there?
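For concreteness, that scheme would amount to something like the following, using the paths from the Unison profile shown later (-u skips files that are newer on the receiving side, and --delete is deliberately omitted). At boot, pull from the NAS:

rsync -au /media/Caring/sync/tari/ /home/tari/

and at shutdown, push local changes back:

rsync -au /home/tari/ /media/Caring/sync/tari/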

Unison

I eventually decided to just use Unison, another well-known file synchronization utility.  Unison is able to handle non-conflicting changes between destinations as well as intelligently detect which end of a transfer has been modified.  Put simply, it solves the problems of rsync, although there are still situations where it requires manual intervention.  Those are handled with reasonable grace, however, with prompting for which copy to take, or the ability to preserve both and manually resolve the conflict.

Knowing Unison can do what I want and with acceptable amounts of automation (mostly only requiring intervention on conflicting changes), it became a simple matter of configuration.  Observing that all the important files in my home directory which are not already covered by some other synchronization scheme (such as configuration files managed with Mercurial) are only in a few subdirectories, I quickly arrived at the following profile:

root = /home/tari
root = /media/Caring/sync/tari

path = incoming
path = pictures
path = projects
path = wallpapers

The function here is fairly obvious: the two sync roots are /home/tari (my home directory) and /media/Caring/sync/tari (the NAS is mounted via NFS at /media/Caring), and only the four listed directories will be synchronized. An easy and robust solution.

I have yet to configure the system for automatic synchronization, but I’ll probably end up simply installing a few scripts to run unison at boot and when shutting down, observing that other copies of the data are unlikely to change while my workstation is active.  Some additional hooks may be desired, but I don’t expect configuration to be difficult.  If it ends up being more complex, I’ll just have to post another update on how I did it.

Update Jan. 30: I ended up adding a line to my rc.local and rc.shutdown scripts that invokes unison:

su tari -c "unison -auto home"

Note that the Unison profile above is stored as ~/.unison/home.prf, so this handles syncing everything I listed above.

rtorrent scripting considered harmful

As best I can tell, whoever designed the scripting system for rtorrent did so in a manner contrived to make it as hard to use as possible.  It seems that = is the function application operator, and precedence is expressed through a few levels of distinct escaping. For example:

# Define a method 'tnadm_complete', which executes 'baz' if both 'foo' and 'bar' return true.
system.method.insert=tnadm_complete,simple,branch={and="foo=,bar=",baz=}

With somewhat more sane design, it might look more like this:

system.method.insert(tnadm_complete, simple, branch(and(foo(),bar()),baz()))

That still doesn’t help the data-type ambiguity problems (‘tnadm_complete’ is a string here, but not obviously so), but it’s a bit better in readability. I haven’t tested whether the escaping with {} can be nested, but I’m not confident that it can.

In any case, that’s just a short rant since I just spent about two hours wrapping my brain around it. Hopefully that work turns into some progress on a new project concept, otherwise it was mostly a waste. As far as the divergence meter goes, I’m currently debugging a lack of communication between my in-circuit programmer and the microcontroller.

Incidentally, the rtorrent community wiki is a rather incomplete but still useful reference for this sort of thing, while gi-torrent provides a reasonably-organized overview of the XMLRPC methods available (which appear to be what the scripting exposes), and the Arch wiki has a few interesting examples.

Experiments with a high-availability /home

I was recently experimenting with ways to configure my computing setup for high availability of my personal data, which is stored in a Btrfs-formatted partition on my SSD. When my workstation is booted into Windows, however, I want to be able to access my data with minimal effort. Since there’s no way to access a Btrfs volume natively from within Windows, I had to find another approach. It seemed like automatically syncing files out to my NAS was the best solution, since that’s always available and independent of most other things I would be doing at any time.

Candidates

The obvious first option for syncing files to the NAS is the ever-common rsync. It’s great at periodic file transfers, but real-time syncing of modifications is rather beyond the ken of rsync.  lsync provides a workable way to keep things roughly in sync, but it’s far from an elegant solution.  Were I so motivated, it would be reasonable to devise a similar rsync wrapper using inotify (or similar mechanisms) to handle only modified files, and possibly even postpone syncing changes until some change threshold is exceeded.  With existing software, however, rsync is a rather suboptimal solution.

From a cursory scan, cluster filesystems such as ceph or lustre seem like good options for tackling this problem.  The main disadvantage of the cluster filesystem approach, however, is its rather high complexity. Most cluster filesystem implementations require a few layers of software, generally both a metadata server and a storage server. In large deployments that software stack makes sense, but it’s needless complexity for me.  In addition, ensuring that data is correctly duplicated across both systems at any given time may be a challenge.  I didn’t end up trying this route, so ensuring data duplication may be easier than it seems, but a cluster filesystem ultimately felt like overkill for what I wanted to do.

While researching cluster filesystems, I discovered xtreemfs, which has a number of unique features, such as good support for wide-area storage networks and the ability to operate securely even over the internet.  The downsides of xtreemfs are mostly related to the technology it’s built on: the filesystem runs on Linux’s FUSE (Filesystem in USErspace) layer and is written in Java.  Both of those properties make it rather clunky to work with and configure, so I ended up looking for another solution after a little time spent attempting to build and configure xtreemfs.

The solution I ultimately settled upon was DRBD, which is a block-level replication tool.  Unlike the other approaches, DRBD sits at the block level (rather than the filesystem level), so any desired filesystem can be run on top of it.  This was a major advantage to me, because Btrfs provides a few features that I find important (checksums for data, and copy-on-write snapshotting). Handling block-level syncing is necessarily somewhat more network-intensive than running at the file level, but since I was targeting use over a gigabit LAN, network usage was a peripheral concern.

Implementation

From the perspective of normal operation, a DRBD volume looks like RAID 1 running over a network.  One host is marked as the primary, and any changes to the volume on that host are propagated to the secondary host.  If the primary goes offline for whatever reason, the secondary system can be promoted to the new primary, and the resource stays available. In the situation of my designs for use of DRBD, my workstation machine would be the primary in order to achieve normal I/O performance while still replicating changes to the NAS. Upon taking the workstation down for whatever reason (usually booting it into another OS), all changes should be on the NAS, which remains active as a lone secondary.

DRBD doesn’t allow secondary volumes to be used at all (mainly since that would introduce additional concerns to ensure data integrity), so in order to mount the secondary and make it accessible (such as via a Samba share) the first step is to mark the volume as primary. I was initially cautious about how bringing the original primary back online would affect synchronization, but it turned out to handle such a situation gracefully. When the initial primary (workstation) comes back online following promotion of the secondary (NAS), the former primary is demoted back to secondary status, which also ensures that any changes while the workstation was offline are correctly mirrored back. While the two stores are resyncing, it is possible to mark the workstation as primary once more and continue normal operation while the NAS’ modifications sync back.
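As a sketch (using the resource name and device from the configuration below, with a hypothetical mount point), taking over on the NAS and later handing the primary role back is just a promote/demote pair. On the NAS, after the workstation goes offline:

# drbdadm primary home
# mount /dev/drbd0 /mnt/home

and before the workstation resumes the primary role:

# umount /mnt/home
# drbdadm secondary home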

Given that both my NAS and workstation run Arch Linux, setup of DRBD for this scheme was fairly simple. The first order of business was to create a volume to base DRBD on. The DRBD driver itself has been part of mainline Linux since version 2.6.33, so having the requisite kernel module loaded was easy. The userspace utilities are available in the AUR, so it was easy to get those configured and installed. Finally, I created a resource configuration file as follows:

resource home {
  device /dev/drbd0;
  meta-disk internal;

  protocol A;
  startup {
    become-primary-on Nakamura;
  }

  on Nakamura {
    disk /dev/Nakamura/home;
    address ipv4 192.168.1.10:7789;
  }
  on Nero {
    disk /dev/loop0;
    address ipv4 192.168.1.8:7789;
  }

}

The device option specifies what name the DRBD block device should be created with, and meta-disk internal specifies that the DRBD metadata (which contains such things as the dirty bitmap for syncing modified blocks) should be stored within the backing device, rather than in some external file. The protocol line specifies asynchronous operation (don’t wait for a response from the secondary before returning saying a write is complete), which helps performance but makes the system less robust in the case of a sudden failure. Since my use case is less concerned with robustness and more with simple availability and maintaining performance as much as possible, I opted for the asynchronous protocol. The startup block specifies that Nakamura (the workstation) should be promoted to primary when it comes online.

The two on blocks specify the two hosts of the cluster. Nakamura’s volume is backed by an LVM logical volume (in the volume group ‘Nakamura’), while Nero’s is hosted on a loop device, simply because that machine has a large amount of storage (6TB in RAID5) but no unallocated space. In using a loop device I ignored a warning in the DRBD manual about running it over loop block devices causing deadlocks; this turned out to be a poor choice, as described later.
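Setting up such a loop-backed store is only a couple of commands; the backing file path and size here are hypothetical:

# truncate -s 200G /srv/drbd-home.img
# losetup /dev/loop0 /srv/drbd-home.img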

Once I had written the configuration, bringing the volumes online was a fairly simple matter: load the relevant kernel module, use the userland utilities to write the metadata to the backing device, and bring the volume up. The same steps are then repeated on the other host.

# modprobe drbd
# drbdadm create-md home
# drbdadm up home

With the module loaded and a volume online, status information is visible in /proc/drbd, looking something like the following (shamelessly taken from the DRBD manual):

$ cat /proc/drbd
version: 8.3.0 (api:88/proto:86-89)
GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by buildsystem@linbit, 2008-12-18 16:02:26
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r---
    ns:0 nr:8 dw:8 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

The first few lines provide version information, and the two lines beginning with ‘0:’ describe the state of a DRBD volume. Of the rest of the information, we can see that both hosts are online and communicating (Connected), both are currently marked as secondaries (Secondary/Secondary), and both have the latest version of all data (UpToDate/UpToDate). The last step in creating the volume is to mark one host as primary; since this is a newly-created volume, doing so requires invalidating the other host, prompting resynchronization of the entire device. I execute drbdadm primary --force home on Nakamura to mark that host as having the canonical version of the data, and the devices begin to synchronize.

Once everything is set, it becomes possible to use the DRBD block device (/dev/drbd0 in my configuration) like any other block device: create filesystems on it, mount it, or write random data to it. With a little work to invoke the DRBD initscripts at boot time, I was able to get everything working as expected. There were a few small issues with the setup, though:

  • Nero (the NAS) required manual intervention to be promoted to the primary role. This could be improved by adding some sort of hooks on access to promote it to primary and mount the volume. This could probably be implemented with autofs for a truly transparent function, or even a simple web page hosted by the NAS which prompts promotion when it is visited.
  • Deadlocks! I mentioned earlier that I chose to ignore the warning in the manual about deadlocks when running DRBD on top of loop devices, and I did start seeing some on Nero. All I/O on the volume hosting the loop device on Nero would stall, and the only way out was by rebooting the machine.

Conclusion

DRBD works for keeping data in sync between two machines in a transparent fashion, at the cost of a few more software requirements and a slight performance hit. The kernel-side driver is in mainline Linux, so it should be available in any reasonably recent kernel, but availability of the userspace utilities is questionable; fortunately, building them oneself is fairly easy. The userspace utilities are not strictly needed to get at the data, either: the backing block device can be mounted without DRBD, though the secondary device will then need to be manually invalidated upon reconnect. That makes it difficult for the data to be rendered truly inaccessible.

I ultimately didn’t continue running this scheme for long, mainly due to the deadlock issues I had on the NAS, which could have been resolved with some time spent reorganizing the storage on that host. I decided that wasn’t worth the effort, however. To achieve a similar effect, I ended up configuring a virtual machine on my Windows installation that has direct access to the disks holding the Linux-hosted data, so I can boot the physical Linux installation in a virtual machine. By modifying the initscripts a little, I configured it to start Samba at boot time when running virtualized in order to give access to the data. The virtualized solution is a bit more of a hack than DRBD and is somewhat less robust (an unexpected shutdown takes two operating systems down hard), but I think the relative simplicity and absence of a network tether are a reasonable compromise.
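The hypervisor isn’t named here, but as one example of the general approach: with VirtualBox on a Windows host, raw access to a physical disk is granted by generating a pass-through VMDK (the file name and disk number below are hypothetical) and attaching it to the VM as an ordinary disk:

C:\> VBoxManage internalcommands createrawvmdk -filename C:\vm\linux-disks.vmdk -rawdisk \\.\PhysicalDrive1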

Were I to go back to a DRBD-backed solution at some time, I might want to look into using DRBD in dual-primary mode. In most applications only a single primary can be used since most filesystems are designed without the locking required to allow multiple drivers to operate on them at the same time (this is why NFS and similar network filesystems require lock managers). Using a shared-disk filesystem such as OCFS (or OCFS2), DRBD is capable of having both hosts in primary mode, so the filesystem can be mounted and modified on both hosts at once. Using dual primaries would simplify the promotion scheme (each host must simply be promoted to primary when it comes online), but would also require care to avoid split-brain situations (in which communications are lost but both hosts are still online and processing I/O requests, so they desync and require manual intervention to resolve conflicts). I didn’t try OCFS2 at all during this experiment mainly because I didn’t want to stop using btrfs as my primary filesystem.

To conclude, DRBD works for what I wanted to do, but deadlocks while running it on a loop device kept me from using it for long. The virtual machine-based version of this scheme performs well enough for my needs, despite being rather clunky to work with. I will keep DRBD in mind for similar uses in the future, though, and may revisit the issue at a later date when my network layout changes.

Update 26.1.2012: I’ve revisited this concept in a simpler (and less automatic) fashion.