Ika Okonomiyaki

I cooked a bit of an experiment tonight: okonomiyaki with squid.  Preparing the squid took a long time, but I was very pleased with how it turned out in the end.

To prepare the squid, simply pull to separate the body from the head.  Most of the innards will stay attached to the head.  Chop the tentacles off just below the eyes, and remove any tough bits (possibly the beak) from the inner ring of the tentacles.  Remove the pen (which keeps the squid from flopping about while alive) from the body and slice the body into rings.  Clean any remaining guts out of the rings.  I cooked the squid over medium-high heat for 20 to 25 minutes, adding a fair amount of soy sauce throughout.

The pancake batter was approximately 2 cups of flour, 1.25 cups of water, 4 eggs, and a fair amount of finely chopped cabbage.  Doing it again, I would err towards chopping the cabbage too finely, since the larger pieces were a bit odd.  Preparing the pancakes was as simple as cooking them on the griddle until no longer gooey.

Final assembly consisted of topping a pancake with mayonnaise and some stir-fry sauce (I didn’t have any real okonomiyaki sauce, but my stir-fry sauce worked well as a substitute), then some chopped green onion, and finally the squid on top.  Some pineapple on the side helped offset the assault of savory flavors from everything else with a bit of sweetness.

High-availability /home revisited

About a month ago, I wrote about my experiments in ways to keep my home directory consistently available. I ended up concluding that DRBD is a neat solution for true high-availability systems, but it’s not really worth the trouble for what I want to do: keeping my home directory available and in sync across several systems.

Considering the problem more, I determined that I really value a simple setup. Specifically, I want something that uses very common software and is resistant to network failures. My local network going down is an extremely rare occurrence, but it’s possible that my primary workstation will become a portable machine at some point in the future; if that happens, anything that depends on a constant network connection becomes hard to work with.

With an always-online option out of the question, I also need to consider solutions that can handle concurrent modification (which DRBD can do, but only by using OCFS, making that solution a no-go).

Rsync

rsync is many users’ first choice for moving files between computers, and for good reason: it’s efficient and easy to use.  The downside in this case is that rsync tends to be destructive: because the source of a copy operation is taken to be the canonical version, any modifications made at the destination will be wiped out.  I already have regular cron jobs running incremental backups of my entire /home, so the risk of rsync permanently destroying valuable data is low.  However, being forced to recover from backup in case of accidental deletions is a hassle, and increases the danger of actual data loss.

In that light, a dumb rsync from the NAS at boot-time and back to it at shutdown could make sense, but carries undesirable risk.  It would be possible to instruct rsync to never delete files, but the convenience factor is reduced, since any file deletions would have to be done manually after boot-up.  What else is there?
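To make the tradeoff concrete, here's a minimal sketch of what those boot and shutdown jobs could look like, using the NAS path that shows up again in the Unison profile below.  This is purely illustrative rather than anything I actually deployed:

# At boot: pull from the NAS, treating it as the canonical copy.
# --delete makes this a true mirror, so anything created locally but
# missing from the NAS gets wiped out.
rsync -a --delete /media/Caring/sync/tari/ /home/tari/

# At shutdown: push local changes back the same way.
rsync -a --delete /home/tari/ /media/Caring/sync/tari/

Dropping --delete means rsync never removes files, but it still overwrites a changed destination file with the source's version, and deletions then have to be propagated by hand.  Either way, whichever end happens to be the source wins.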

Unison

I eventually decided to just use Unison, another well-known file synchronization utility.  Unison is able to handle non-conflicting changes between destinations as well as intelligently detect which end of a transfer has been modified.  Put simply, it solves the problems of rsync, although there are still situations where it requires manual intervention.  Those are handled with reasonable grace, however: it prompts for which copy to take, or can preserve both copies so the conflict can be resolved manually.

Knowing Unison can do what I want with an acceptable amount of automation (mostly only requiring intervention on conflicting changes), it became a simple matter of configuration.  Observing that all the important files in my home directory which are not already covered by some other synchronization scheme (such as configuration files managed with Mercurial) are only in a few subdirectories, I quickly arrived at the following profile:

root = /home/tari
root = /media/Caring/sync/tari

path = incoming
path = pictures
path = projects
path = wallpapers

The function here is fairly obvious: the two sync roots are /home/tari (my home directory) and /media/Caring/sync/tari (the NAS is mounted via NFS at /media/Caring), and only the four listed directories will be synchronized. An easy and robust solution.

I have yet to configure the system for automatic synchronization, but I’ll probably end up simply installing a few scripts to run unison at boot and when shutting down, observing that other copies of the data are unlikely to change while my workstation is active.  Some additional hooks may be desired, but I don’t expect the configuration to be difficult.  If it ends up being more complex, I’ll just have to post another update on how I did it.

Update Jan. 30: I ended up adding a line to my rc.local and rc.shutdown scripts that invokes unison:

su tari -c "unison -auto home"

Note that the Unison profile above is stored as ~/.unison/home.prf, so this handles syncing everything I listed above.
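For completeness, a slightly more defensive version of that hook could guard against the NFS share being absent.  The mountpoint check below is my own embellishment and assumes the NAS is mounted at /media/Caring before rc.local runs; it isn't part of the setup described above:

# In /etc/rc.local (boot) and /etc/rc.shutdown (shutdown):
# only attempt a sync if the NFS share is actually mounted.
if mountpoint -q /media/Caring; then
    su tari -c "unison -auto home"
fi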

Locating packages with cmake

When building programs with cmake on non-UNIX systems, it can be a pain to specify the location of external libraries. I’ve been upgrading mkg3a to support using libpng to load icons in addition to the old bmp loader, but that means I need to link against libpng, and also zlib (since libpng depends on zlib to handle the image compression). Compiling it all on Windows, however, is not an easy task, since there’s no standard search path for libraries like there is on UNIX systems (e.g. /usr/include for headers, /usr/lib for libraries). I didn’t find any good resources on how to make it work in my own searches, so here’s a quick write-up of the process in the hopes that it’ll be useful to somebody else.

I grabbed the zlib and libpng static libraries from gnuwin32 and extracted them alongside my mkg3a source tree, in the same parent directory. Setting up to build, then, my directory tree looks something like the following (some files omitted for brevity):

+ build
- libs
 - include
  + libpng12
  | png.h
  | pngconf.h
  | zconf.h
  | zlib.h
 - lib
  | libpng.lib
  | zlib.lib
 + manifest
- mkg3a
 | CMakeLists.txt
 | config.h.in
 | README

So I have a libs directory containing the headers and library files to link against, build is my build tree, and mkg3a is the source tree.

In order to tell cmake where to find zlib and libpng now, we can use the CMAKE_PREFIX_PATH variable, which is a path relative to the source directory. In this case, the following command will pick up the libraries in libs and generate project files for Visual Studio 2010 (note we’re executing from within the build tree):

H:\Desktop\build> cmake -G "Visual Studio 10" -D CMAKE_PREFIX_PATH=../libs ../mkg3a

If the build tree were instead under the source tree (mkg3a/build/ instead of just build/), the value for CMAKE_PREFIX_PATH would not need to change, since it is specified relative to the source directory.

In short: set CMAKE_PREFIX_PATH to help cmake find packages when they’re not in the usual system locations. It’s much easier to combine all your external libraries into one directory (libs in my example), but you could also specify a list of paths and keep them separate.
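For instance, if zlib and libpng had been unpacked into separate directories (say ../zlib and ../libpng, hypothetical names just for this sketch), CMAKE_PREFIX_PATH accepts a semicolon-separated list of prefixes, with quotes to keep the shell from mangling the semicolon:

H:\Desktop\build> cmake -G "Visual Studio 10" -D "CMAKE_PREFIX_PATH=../zlib;../libpng" ../mkg3a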

rtorrent scripting considered harmful

As best I can tell, whoever designed the scripting system for rtorrent did so in a manner contrived to make it as hard to use as possible.  It seems that = is the function application operator, and grouping is expressed through a few distinct levels of escaping. For example:

# Define a method 'tnadm_complete', which executes 'baz' if both 'foo' and 'bar' return true.
system.method.insert=tnadm_complete,simple,branch={and="foo=,bar=",baz=}

With somewhat more sane design, it might look more like this:

system.method.insert(tnadm_complete, simple, branch(and(foo(),bar()),baz()))

That still doesn’t help the data-type ambiguity problems (‘tnadm_complete’ is a string here, but not obviously so), but it’s a bit better in readability. I haven’t tested whether the {} escaping can be nested, and I’m not confident that it can.

In any case, that’s just a short rant after spending about two hours wrapping my brain around it.  Hopefully that work turns into some progress on a new project concept; otherwise it was mostly a waste.  As far as the divergence meter goes, I’m currently debugging a lack of communication between my in-circuit programmer and the microcontroller.

Incidentally, the rtorrent community wiki is a rather incomplete but still useful reference for this sort of thing, while gi-torrent provides a reasonably-organized overview of the XMLRPC methods available (which appear to be what the scripting exposes), and the Arch wiki has a few interesting examples.

Divergence meter progress

One project which I’ve been working on since about October and just got around to creating a project page for is the divergence meter.

There’s not a lot to see there yet, but I’ve recorded my notes on what the design needs and the outline for the control and power supply module.  I ordered the PCB in early December in the hopes that they would be available for me to work on while in Wauwatosa during the semester break.  That didn’t pan out, so unfortunately the whole project won’t move until next week, when I return to Houghton and can get my boards from the mailbox.

My batch of nixie tubes arrived earlier than expected, however, and I got the components to populate the board in mid-November.  All I need is the boards and some time to solder, while hoping I don’t completely botch the job of soldering a 38-pin TSSOP package, especially since that chip (the MSP430F2272) cost me $5.  Photos follow.

Once I find the time to assemble the control board, the software should come together pretty quickly.  Just a matter of time now…