Tag Archives: software

Free Software Claus is Coming to Town

I help organize a conference for Free Software enthusiasts called SeaGL. This year I’m proud to report that Shauna Gordon McKeon and Richard Stallman (aka “RMS”) are keynote speakers.

I first invited RMS to Seattle 13 years ago, and finally in 2015 it all came together. In his words:

My talks are not technical. The topics of free software, copyright vs community, and digital inclusion deal with ethical/political issues that concern all users of computers.

So please do come on down to Seattle Central College on October 23rd and 24th, 2015 for SeaGL!

Sandstorm – personal cloud, self-organizing cluster

I’ve heard a lot of Meteor news lately, but somehow I missed Sandstorm. Your own personal cloud. Installing services is easier than installing apps on your phone. Add machines and they self-organize into a cluster. This sounds just way too awesome. Looks like they use Meteor heavily. Jade Wang (formerly of the Meteor Development Group) is a co-founder.

Apps must be packaged for Sandstorm (made into “grains”). The list of ported apps is pretty inspiring. Included are: draw.io, LibreBoard, HackerSlides, Let’s Chat, Paperwork… All were new to me, several are written in Meteor, and I was able to check out all of these in seconds. I’m hooked.

Oplog: a tail of wonder and woe

First, your TL;DR:

  1. Stress test your Meteor app.
  2. Oplog tailing may be less efficient for particular workloads.


My work involves using crowdsourcing to assess and improve technical skill. We’re focusing on improving the basic technical skills of surgeons first because—no surprise here—it matters. A more skilled surgeon means patients with fewer complications. Being healthy, not dying. Good stuff.

One way we gather data is a survey app where crowdworkers watch a short video of real human surgery and answer simple questions about what they saw. For example:

  • Do both hands work well together?
  • Is the surgeon efficient?
  • Are they rough or gentle?

Turns out the crowd nails this! Think of it this way: most anyone can recognize standout performers on the basketball court or at the piano, even if they’re not an expert at either. Minimal training and this “gut feel” are all we need to objectively measure basic technical skill.


So, a survey app. Watch a video, answer a few questions. Pretty straightforward. We built one in-house. Meteor was a great choice here. Rapid development, easy deployment, JavaScript everywhere, decent Node.js stack out of the box, all that.

And of course we used oplog tailing right from the start, because much of what we read about it made it sound like the only way to go. Sure, you’ll want oplog tailing for realtime (<10sec delayed) data when you have multiple apps connecting to the same MongoDB database. But if you don’t need that, you may not need it at all, and you may not want it.
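If you decide you don’t want it, disabling oplog tailing is just a matter of how you start the app: Meteor only tails the oplog when `MONGO_OPLOG_URL` is set. A minimal sketch (database name and URLs are made up):

```shell
# Point Meteor at the app database as usual...
export MONGO_URL="mongodb://localhost:27017/surveyapp"
# ...but leave MONGO_OPLOG_URL unset and Meteor falls back to
# poll-and-diff instead of tailing the oplog:
# export MONGO_OPLOG_URL="mongodb://localhost:27017/local"
meteor run
```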

Traffic pattern

Our traffic is very bursty. We publish a HIT on Amazon Mechanical Turk. Within minutes, the crowd is upon our survey app. Our app generally does fine, but folks complained of very slow survey completion times once we started hitting somewhere around 80 DDP sessions as reported by Kadira. Each DDP session in our survey app should equate to one simultaneous active user (hereafter “user”).

Here’s what we want to know:

  1. Why does our app slow down when it does?
  2. Can it scale [linearly]?
  3. Are there any small code or configuration changes we could make to get a lot more performance out of the existing hardware?


And the answers:

  1. Meteor pegs the CPU when oplog tailing is enabled.
  2. Yes, if we disable oplog tailing.
  3. Yes: disabling oplog tailing and clustering our app.
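For the record, the clustering piece is just the meteorhacks:cluster package. A sketch of the setup (double-check the environment variable names against the package README for your version):

```shell
# Add the package to the app:
meteor add meteorhacks:cluster
# Tell cluster how many workers to fork ("auto" means one per core):
export CLUSTER_WORKERS_COUNT=auto
```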

Stress test

We created a stress test to get a better feel for the performance characteristics of our app.

The test uses nightwatch to emulate a turker completing a survey. Load the survey app, click radio buttons, enter comments, and throw in a few random waits. Many threads of the nightwatch test are spawned and charge on in parallel. The machine running nightwatch needs to be pretty beefy. I preferred a browser-based stress test because I noticed client-server interactions amplified the amount and frequency of DDP traffic (hello Mr. Reactivity). It was also easier to write and run nightwatch than to hand-pick the exact DDP traffic to send.

Notes on our app:

  • We use mup to deploy to Ubuntu EC2 servers on AWS.
  • Tested configuration uses one mup-deployed Meteor app.
  • The app connects to a local MongoDB server running a standalone one-member replica set (just to get the oplog).
    • I also tested with Modulus, scaled to one 512 MB servo in us-east-1a. Non-enterprise Modulus runs with oplog tailing disabled, and the app connects to MongoDB on a host other than localhost.
  • Our app uses iron:router.
  • Our app doesn’t need to be reactive. Surveyees work in isolation. But this is how we wrote the app, so that’s what I tested.
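Standing up that standalone one-member replica set is quick; something like this (paths and names are illustrative):

```shell
# Start mongod as a replica set so it keeps an oplog:
mongod --replSet rs0 --dbpath /var/lib/mongodb --fork --logpath /var/log/mongod.log
# Initiate the set with this mongod as the only member:
mongo --eval 'rs.initiate()'
# Point the Meteor app at the database and the oplog:
export MONGO_URL="mongodb://localhost:27017/surveyapp"
export MONGO_OPLOG_URL="mongodb://localhost:27017/local"
```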


I ran a series of stress tests. Ramp up traffic, capture metrics, change code and/or server configuration, repeat. Here are the results.


  • Each row in the spreadsheet represents one test.
  • Every test ran for 5 minutes.
  • When one “user” completes a survey, another one begins (so the number of users is kept more or less constant during every 5-minute test).
  • There are lots of notes and Kadira screenshots in the results spreadsheet. For the Kadira screenshots, the relevant data is on the rightmost side of the graphs.
  • I think Kadira session counts are high. Maybe it isn’t counting disconnects, maybe DDP timeouts take a while, or maybe the nightwatch test disconnects slowly.
  • Row 3. At 40 users, the CPU is pegged. Add any more users and it takes too long for them to complete a survey.
  • Row 5. Notice how doubling the cores does not double the number of test passes (less than linear scaling along this dimension).
  • Row 6. Ouch, we’re really not scaling! Might need to investigate the efficiency of meteorhacks:cluster.
  • Row 7. Oplog tailing is disabled for this and all future tests. MongoDB CPU load is roughly doubled from the 40-user, 1-core, oplog-tailing-enabled test.
  • Row 9. Too much for one core: 6.5% of the tests failed.
  • Row 11. This is what we want to see! 2x cores, 2x users, 2x passes. In other words, with oplog tailing disabled and double the number of cores, we supported double the number of users and doubled test passes.
  • I should have also tested 160 users, 4 cores, oplog disabled. I didn’t. Live with it.
  • Disabling oplog tailing seemed to shift more of the processing load to MongoDB, and MongoDB appeared to handle that load more gracefully.
  • I didn’t get very far with Modulus. I’m very interested in their offering, but I just couldn’t get users (test runs) through our app fast enough to make further testing worthwhile.
  • A DNS issue prevented capturing Kadira status while running on Modulus.
  • cluster lives up to its promise—adding cores and spreading load.
  • I don’t think we’re moving much data, but any reactivity comes at a price at scale (even our so far little bitty scale).
  • Our survey app could and should be modified to use much less reactivity since, as I mentioned earlier, it is unnecessary.

Server-side profiles

This is somewhat of an addendum, but I figured it might be useful.

Here’s what the Meteor Node.js process does with 10 users hitting our survey app running on one core.

Oplog tailing enabled:

[Pie chart: server-side profile with oplog tailing enabled]

Oplog tailing disabled:

[Pie chart: server-side profile with oplog tailing disabled]


  • Note that these pie charts only show %CPU usage. CPU and network are the primary resources our app uses, so this is fine.
  • The profile data for each slice (when you drill down) are very low-level. It’s hard to make any quick conclusions (or I just need more practice reading these).
  • When oplog tailing is enabled, the Cursor.fetch slice is about twice as big, and none of the methods causing that CPU load are ours. Perhaps this is the oplog “tailing” in action?
  • When oplog tailing is disabled, drilling into Cursor.fetch shows us exactly which of our own methods are causing CPU load. Even if oplog tailing is more efficient, this level of introspection is priceless. We need it until we learn to better debug the patterns in our code that lead to more CPU usage when oplog tailing is enabled.
  • The giant ~30% slice of “Other” is a bit of a bummer. What’s going on in there? Low-level/native Node.js operations like the MongoDB driver doing its thing? Sending/receiving network traffic?
  • Kadira monitoring isn’t free CPU-wise, but it is worth it.
  • What should these pie charts look like in a well-optimized application under load? Perhaps the biggest slice should belong to “Other”?

Further reading:

Feedback/questions/comments/corrections welcome! I’d especially love to hear about your experiences scaling Meteor.

My Hadoop/MapReduce article in Linux Journal

I’m proud that LJ accepted my Hadoop/MapReduce article for the April 2013 issue! If you’re new to MapReduce and want to learn more, this article is for you.


I’ll also be presenting a talk based on the article at LinuxFest Northwest 2013.

Google: Stuck on You

I require proprietary software to get through my day, but I try not to be too dependent on any of it. When it comes to Google, I’ve failed.

I probably use the Internet mainly for search and email, and I need Google for both. Maps? All the time.

And there’s a doc I’d like to read now. The most important information to me is in the comments, but I can’t see the comments because this doc is “too popular”.

[Google Drive notice: file too popular]


See also: You Can’t Quit, I Dare You

Web Framework Flavor of the Month

I’ve been playing with Meteor a bit lately. It’s a “kitchen sink” system for writing web apps, complete with a database (MongoDB), server-side (Node.js), and client-side stuff. It’s all JavaScript.

It’s pretty fun for little experiments. I can imagine certain kinds of websites it would be good for (web-based chat, HTML5 games, collaborative editors, and single-page apps — same stuff I think vanilla Node.js excels at) and some it would not (mobile, CRUD with an RDBMS). I’m wondering if it would/should work well with larger web apps.

I’m afraid of JavaScript, but I think it’s finally time for me to overcome that fear. What better way to do so than to use JavaScript everywhere (database, server, client, APIs)?!

Meteor isn’t the only game around, it’s just the one I’ve looked at.

You are NOT a Software Engineer!

I enjoyed You are NOT a Software Engineer! by Chris Aitchison. It’s a fun analogy. Writing software certainly does feel more like something roughly planned and growing organically or evolving rather than something perfectly specified and executed. And I think this is OK.

Another thing we coders often forget: we are also authors. We write code for humans (others and our future selves) to read. I want you to be stoked when you read what I write! And coding is writing.

Avoid trivial merges with github pull requests

I like a clean, boring git history. I prefer this:

* 6ca186e Someone set us up the commit
* f55bcf8 Initial commit

to this:

*   494c94e Merge pull request #1 from kormoc/pr_test
| * 6ca186e Someone set us up the commit
* f55bcf8 Initial commit

The latter includes 494c94e, a technically unnecessary commit. I call it a trivial merge, other folks call it a merge bubble.

By default, github will preserve trivial merges when you use the “Merge pull request” button.

If you don’t want these trivial commits in your history, you have to pull (fetch/merge) locally. When someone creates a pull request for you, github sends you a handy email with a command you can cut and paste to perform the merge locally.

You can merge this Pull Request by running

git pull https://github.com/kormoc/pulltest pr_test

Or view, comment on, or merge it at:


Recall that git pull does an implicit merge. If you merge locally and there are no conflicts, the trivial merge will be omitted.
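To see the fast-forward in action, here’s a throwaway-repo sketch (with a real pull request you’d `git pull` the fork URL from the email instead); `--ff-only` makes git refuse to create a merge bubble:

```shell
# Build a tiny repo with a branch that's a strict descendant of the default branch:
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name "You"
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
echo base > file && git add file && git commit -qm "Initial commit"
git checkout -qb pr_test
echo change >> file && git commit -qam "Someone set us up the commit"
# Merge it back without a merge commit:
git checkout -q "$main"
git merge --ff-only pr_test   # fast-forwards; refuses to create a merge bubble
git log --oneline --graph     # linear history
```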

You may miss having the trivial commits around, because they include a reference to the pull request on github. I won’t. I might ask the patch submitter to refer to the pull request by name/link in their commit log message.

If you want to prevent anyone from pushing trivial merges, more work is required.

Update 2013-06-25

I now prefer what GitHub’s merge button does, namely: preserving the merge history for pull requests.

3 Reasons Why You Should Never Use Enlocked

Enlocked advertises easy, secure email. Sounds good to me! My current solution (Thunderbird+Enigmail) works, barely, but it is a big pain in the tukhus. I’d go for something better. Heck, I’d pay for it. And Enlocked is free!

I gave their Chrome plugin a try. Installation was a breeze and it worked exactly as advertised. It integrates almost seamlessly into GMail (when replying, quoted text is still encrypted, but they’ll probably fix that soon). It really was friendly enough for anyone! But I’m not dusting off the old blog just to tell you that. No ma’am.

[Image: Unicorn and Cow (and sentry)]

1. They encrypt and decrypt using their own key.

If you’ve ever spent the not-insignificant time to learn and use PGP yourself, you’ll know that one point of going through all the trouble is complete, end-to-end encryption. You don’t have to trust your email handlers. Any of them. And there can be many! So, uh, you just never give your private key to anyone, ok? Everyone gets their own keys (there are plenty for everyone, and they’re free!). That’s the way PGP works.
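For contrast, here’s roughly what real end-to-end PGP looks like from the command line with GnuPG (a sketch; assumes gpg >= 2.1, and the address is made up). The private key never leaves the recipient’s machine:

```shell
# Throwaway keyring so this doesn't touch your real keys:
export GNUPGHOME=$(mktemp -d) && chmod 700 "$GNUPGHOME"
# Each person generates their OWN key pair:
gpg -q --batch --passphrase '' --quick-gen-key friend@example.com default default never
# Encrypt TO the recipient using only their public key:
echo 'meet me at the usual place' > msg.txt
gpg -q --batch --trust-model always --armor -r friend@example.com --encrypt msg.txt
# Only the holder of the matching private key can decrypt:
gpg -q --batch --decrypt msg.txt.asc
```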

I should say that I’m not positive Enlocked uses their own key. It could be a key they generate using some secret they securely obtain from you via OpenID or something fancy like that (even so, they’re free to brute force your secret day and night since they have the key). But without knowing for sure, you might as well assume it’s their key and that they can decrypt your messages anytime they darn well please. Or if someone forces them to decrypt messages (like a government, or someone with lots of power or money), same result.

2. They encrypt and decrypt on their servers.

From their How it Works page:

The systems at Enlocked only have access to your messages for the short time we are encrypting or decrypting them, and then our software instantly removes any copies.

This is really a restatement of the first reason (no end-to-end encryption), but it’s one more place where their inevitable security breach could occur.

3. Their software is closed-source.

If you know me you know I’m a Free² Software zealot, so you expect this kind of propaganda from me. But transparency is really important where the actual encryption and decryption takes place. They must at least make their client-side code available for review.

Sorry Enlocked, nobody serious about security will adopt your software until you address these issues.

Disclaimer: I’m no security expert. But Bruce Schneier is. If you really want to get schooled on security, read anything he’s written. For instance: Secrets and Lies: Digital Security in a Networked World.

Link Checker Wishlist

Link checkers spider through your website and make sure that links work. I want an awesome link checker. Ideally, it would embody as many of these attributes as possible:

  • easy to learn
  • easy to configure/customize
    • example config: don’t hit URLs on other servers
  • sensible default behaviors
    • example: respects robots.txt and ‘nofollow’ link attributes
  • scriptable / embeddable
    • useful from command line
    • useful from within CI servers like Jenkins
  • recurses (parses HTML, follows links)
    • and smartly avoids checking the same pages twice
  • fast
  • thrifty with memory
  • pluggable
    • example plugin: run jslint on all JavaScript
    • example plugin: validate HTML 5
    • example plugin: validate CSS
    • example plugin: compute accessibility score
    • example plugin: JUnit XML output
    • example plugin: OpenDocument spreadsheet output
    • example plugin: Excel output
    • example plugin: CSV output
    • example plugin: JavaScript engine
    • example plugin: follow hashbang URLs
  • beautiful source code
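To make the “recurses and smartly avoids checking the same pages twice” bullets concrete, here’s a toy sketch of that core in plain Node.js (the `pages` map stands in for HTTP fetches; a real checker would add robots.txt, nofollow, broken-link reporting, concurrency, and all the rest):

```javascript
// Toy core of a link checker: extract hrefs, resolve them against the
// page URL, stay on one host, and never visit the same URL twice.
function extractLinks(html, baseUrl) {
  const links = [];
  const re = /<a\s[^>]*href="([^"#]+)"/gi; // naive; a real checker uses an HTML parser
  let m;
  while ((m = re.exec(html)) !== null) {
    try { links.push(new URL(m[1], baseUrl).href); } catch (e) { /* skip bad URLs */ }
  }
  return links;
}

function crawlOrder(pages, startUrl) {
  // `pages` maps URL -> HTML, standing in for real HTTP fetches.
  const origin = new URL(startUrl).origin; // "don't hit URLs on other servers"
  const seen = new Set([startUrl]);
  const queue = [startUrl];
  const visited = [];
  while (queue.length > 0) {
    const url = queue.shift();
    visited.push(url);
    for (const link of extractLinks(pages[url] || '', url)) {
      if (!seen.has(link) && new URL(link).origin === origin) {
        seen.add(link);  // avoid checking the same page twice
        queue.push(link);
      }
    }
  }
  return visited;
}
```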