Tag Archives: Work

how to upgrade MongoDB 2.6 to 3.x on Ubuntu

sudo mv /etc/apt/sources.list.d/mongodb* /tmp/
echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update && sudo apt-get install -y mongodb-org

And I also had to fix my replica set in the MongoDB shell (necessary for Meteor oplog tailing):

var a = {"_id" : "rs0", "version" : 1,"members" : [{"_id" : 1, "host" : "localhost:27017"}]};
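// force the lone member to accept the new config even if a majority of the old set is unreachable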
rs.reconfig(a, {force:true});

Oplog: a tail of wonder and woe

First, your TL;DR:

  1. Stress test your Meteor app.
  2. Oplog tailing may be less efficient for particular workloads.

Background

My work involves using crowdsourcing to assess and improve technical skill. We’re focusing on improving the basic technical skills of surgeons first because—no surprise here—it matters. A more skilled surgeon means patients with fewer complications. Being healthy, not dying. Good stuff.

One way we gather data is a survey app where crowdworkers watch a short video of real human surgery and answer simple questions about what they saw. For example:

  • Do both hands work well together?
  • Is the surgeon efficient?
  • Are they rough or gentle?

Turns out the crowd nails this! Think of it this way: most anyone can recognize standout performers on the basketball court or playing a piano, even if they’re not an expert at either. Minimal training and this “gut feel” are all we need to objectively measure basic technical skill.

Meteor

So, a survey app. Watch a video, answer a few questions. Pretty straightforward. We built one in-house. Meteor was a great choice here. Rapid development, easy deployment, JavaScript everywhere, decent Node.js stack out of the box, all that.

And of course we used oplog tailing right from the start, because much of what we read about oplog tailing made it sound like it was the only way to go. Sure, you’ll want oplog tailing for realtime (<10 sec delayed) data when you have multiple apps connecting to the same MongoDB database. But if you don’t need that, you may not need it at all, and you may not want it.
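
For reference, the toggle itself is just an environment variable: Meteor uses oplog tailing only when MONGO_OPLOG_URL is set (it has to point at the replica set’s local database), and falls back to poll-and-diff when it isn’t. Here’s a rough sketch of the env block in a mup.json; the hostnames and database names are placeholders, not our real config:

"env": {
  "ROOT_URL": "http://survey.example.com",
  "MONGO_URL": "mongodb://localhost:27017/survey",
  "MONGO_OPLOG_URL": "mongodb://localhost:27017/local"
}

Delete the MONGO_OPLOG_URL line and Meteor polls instead. If I recall correctly, there’s also a core disable-oplog package (meteor add disable-oplog) that forces polling even when an oplog URL is present, which makes quick A/B comparisons easy.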

Traffic pattern

Our traffic is very bursty. We publish a HIT on Amazon Mechanical Turk. Within minutes, the crowd is upon our survey app. Our app generally does fine, but folks complained of very slow survey completion times when we started hitting somewhere around 80 DDP(?) sessions in Kadira. Each DDP session in our survey app should equate to one simultaneous active user (hereafter “user”).

Here’s what we want to know:

  1. Why does our app slow down when it does?
  2. Can it scale [linearly]?
  3. Are there any small code or configuration changes we could make to get a lot more performance out of the existing hardware?

Spoilers:

  1. Meteor pegs the CPU when oplog tailing is enabled.
  2. Yes, if we disable oplog tailing.
  3. Yes, disabling oplog tailing and clustering our app.
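
On the clustering half of that answer: we used meteorhacks:cluster (meteor add meteorhacks:cluster). As far as I remember, spreading one app instance across the cores of a single box is just another environment variable, something like the line below; the exact variable name is from my memory of the package’s README, so double-check it there:

"env": {
  "CLUSTER_WORKERS_COUNT": "auto"
}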

Stress test

We created a stress test to get a better feel for the performance characteristics of our app.

The test uses nightwatch to emulate a turker completing a survey. Load the survey app, click radio buttons, enter comments, and throw in a few random waits. Many threads of the nightwatch test are spawned and charge on in parallel. The machine running nightwatch needs to be pretty beefy. I preferred a browser-based stress test because I noticed client-server interactions amplified the amount and frequency of DDP traffic (hello Mr. Reactivity). It was also easier to write and run nightwatch than to pick the exact DDP traffic to send.
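
To give a feel for what each of those threads does, here’s a stripped-down sketch of the kind of Nightwatch test we run. The URL and selectors below are invented for illustration; they are not our app’s real ones.

// survey_stress.js: a minimal sketch; URL and selectors are placeholders
module.exports = {
  'complete one survey': function (browser) {
    browser
      .url('http://survey.example.com/hit')
      .waitForElementVisible('form.survey', 10000)
      .click('input[name="hands-work-together"][value="yes"]')
      .setValue('textarea[name="comments"]', 'smooth and efficient')
      .pause(2000 + Math.floor(Math.random() * 5000)) // random wait, like a turker watching the video
      .click('button[type="submit"]')
      .waitForElementVisible('.thanks', 10000)
      .end();
  }
};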

Notes on our app:

  • We use mup to deploy to Ubuntu EC2 servers on AWS.
  • Tested configuration uses one mup-deployed Meteor app.
  • The app connects to a local MongoDB server running a single-member replica set (set up just to get the oplog).
    • I also tested with Modulus, scaled to one 512 MB servo in us-east-1a. Non-enterprise Modulus runs with oplog tailing disabled, and the app connects to MongoDB on a host other than localhost.
  • Our app uses iron:router.
  • Our app doesn’t need to be reactive. Surveyees work in isolation. But this is how we wrote the app, so that’s what I tested.

Results

I ran a series of stress tests. Ramp up traffic, capture metrics, change code and/or server configuration, repeat. Here are the results.

Takeaways:

  • Each row in the spreadsheet represents one test.
  • Every test ran for 5 minutes.
  • When one “user” completes a survey, another one begins (so the number of users is kept more or less constant during every 5-minute test).
  • There are lots of notes and Kadira screenshots in the results spreadsheet. For the Kadira screenshots, the relevant data is on the rightmost side of the graphs.
  • I think Kadira session counts are high. Maybe it isn’t counting disconnects, maybe DDP timeouts take a while, or maybe the nightwatch test disconnects slowly.
  • Row 3. At 40 users, the CPU is pegged. Add any more users and it takes too long for them to complete a survey.
  • Row 5. Notice how doubling the cores does not double the number of test passes (less than linear scaling along this dimension).
  • Row 6. Ouch, we’re really not scaling! Might need to investigate the efficiency of meteorhacks:cluster.
  • Row 7. Oplog tailing is disabled for this and all future tests. MongoDB CPU load is roughly doubled from the 40-user, 1-core, oplog-tailing-enabled test.
  • Row 9. Too much for one core: 6.5% of the tests failed.
  • Row 11. This is what we want to see! 2x cores, 2x users, 2x passes. In other words, with oplog tailing disabled and double the number of cores, we supported double the number of users and doubled test passes.
  • I should have also tested 160 users, 4 cores, oplog disabled. I didn’t. Live with it.
  • Disabling oplog tailing seemed to shift more of the processing load to MongoDB, and MongoDB appeared to handle that load more gracefully.
  • I didn’t get very far with Modulus. I’m very interested in their offering, but I just couldn’t get users (test runs) through our app fast enough to make further testing worthwhile.
  • A DNS issue prevented capturing Kadira status while running on Modulus.
  • cluster lives up to its promise—adding cores and spreading load.
  • I don’t think we’re moving much data, but any reactivity comes at a price at scale (even our so far little bitty scale).
  • Our survey app could and should be modified to use much less reactivity since, as I mentioned earlier, it is unnecessary.
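
To make that last point concrete, here’s the general direction, sketched with made-up collection and method names (this is not our actual code): replace a reactive publication, which keeps a live observer open for every connected client, with a plain method that returns a one-time snapshot.

// Assumes something like: Questions = new Mongo.Collection('questions') (hypothetical name)

// Before: a reactive publication; every connected client holds a live observer,
// which is what oplog tailing or poll-and-diff has to keep fed.
Meteor.publish('surveyQuestions', function (surveyId) {
  return Questions.find({ surveyId: surveyId });
});

// After: a method returning a one-time fetch; no observer, no ongoing work
// once the call completes. Fine for surveyees who work in isolation.
Meteor.methods({
  fetchSurveyQuestions: function (surveyId) {
    return Questions.find({ surveyId: surveyId }).fetch();
  }
});

On the client, the data would then come back via Meteor.call('fetchSurveyQuestions', surveyId, callback) instead of a subscription.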

Server-side profiles

This is somewhat of an addendum, but I figured it might be useful.

Here’s what the Meteor Node.js process does with 10 users hitting our survey app running on one core.

Oplog tailing enabled:

[Pie chart: server-side profile with oplog tailing enabled]

Oplog tailing disabled:

[Pie chart: server-side profile with oplog tailing disabled]

Takeaways:

  • Note that these pie charts only show %CPU usage. CPU and network are the primary resources our app uses, so this is fine.
  • The profile data for each slice (when you drill down) are very low-level. It’s hard to make any quick conclusions (or I just need more practice reading these).
  • When oplog tailing is enabled, the Cursor.fetch slice is about twice as big, and none of the methods causing that CPU load are ours. Perhaps this is the oplog “tailing” in action?
  • When oplog tailing is disabled, drilling into Cursor.fetch shows exactly which of our own methods are causing CPU load. Even if oplog tailing is more efficient, this level of introspection is priceless, at least until we learn to debug the patterns in our code that drive up CPU when oplog tailing is enabled.
  • The giant ~30% slice of “Other” is a bit of a bummer. What’s going on in there? Low-level/native Node.js operations like the MongoDB driver doing its thing? Sending/receiving network traffic?
  • Kadira monitoring isn’t free CPU-wise, but it is worth it.
  • What should these pie charts look like in a well-optimized application under load? Perhaps the biggest slice should belong to “Other”?


Feedback/questions/comments/corrections welcome! I’d especially love to hear about your experiences scaling Meteor.

Looking for Online Video Cropping, Blurring, Hosting

Does anyone know of an online video editing service where I can upload my videos and edit them? This is for a work project. Here are the basic features I need:

  • crop – remove first x minutes, last y minutes
  • blur – faces, logos, etc
  • video storage
  • video streaming
  • encrypted, private

YouTube video editing comes really close, but I need more fine-grained control over blurring. I also need better licensing. Currently the only license and rights ownership choices are “Standard YouTube License” (they own it) or “Creative Commons – Attribution”. I want “Adam’s Private/Personal Video License”.

Adobe Premiere Express might have been perfect, were it not mothballed. It sounds like it was an online version of Adobe Premiere.

Hire This Guy!

I’m looking for work. Here’s my resume.

I’m passionate about doing things that matter, and doing them well. I’m a leader with experience. I lead by serving my business, bosses, and coworkers. I solve problems efficiently and I always add value.

I’m a pro at pretty much any back-end, devops, sysadmin work. I have a strong preference for using Free/Libre/Open Source Software (and contributing back upstream often). Most of my career has been websites: PHP, Python, Java, Perl and the like. I’m experienced with SQL and NoSQL databases. I can do front-end work except graphic design. Most recently I’ve been coding a lot with PHP (specifically, Symfony2), AWS APIs and MongoDB.

I would love to do work right now that involves cloud automation, specifically Amazon Web Services infrastructure provisioning. I could easily automate the setup for a very robust, high-traffic web/mobile service. I’ve had years of experience with AWS and want to work with it more. But that’s just one idea—I’m pretty flexible.

I love learning new tools and tech. I do so quickly, and I’m generally at my best (performance and happiness) while I’m learning. So if I’m not already versed in whatever tech your business needs to succeed, don’t worry, I will be soon.

Tweet of same

Update 2012-12-14: I’m now seeking full-time work in Seattle

How to securely connect an AWS load balancer to EC2 instances

Here’s the magic sauce to securely allow traffic to your webservers only from your load balancer. Run the following:

ec2-authorize --region REGION -C /path/to/cert.pem -K /path/to/key.pem ELB_NAME -u OWNER_ALIAS -o SOURCE_SECURITY_GROUP

The tricky bits for me were:

  • having to generate an X.509 key and cert just for this purpose (there’s gotta be a way to do that from the web console)
  • OWNER_ALIAS above and in the web console equates to SOURCE-OR-DEST-GROUP-USER in the ec2-authorize(1) manpage.
  • SOURCE_SECURITY_GROUP above and in the web console equates to SOURCE-OR-DEST-GROUP in the ec2-authorize(1) manpage.
  • remembering to include --region

The documentation for same is confusing to someone like me who doesn’t know much AWS security group terminology.

As far as I know, there’s no way to perform, view, or manage this special security setting through the web console.

offline HTML 5 validation

I’m liking Henri Sivonen’s Validator.nu service. I’ve got it running locally, and it works well. I can use it as a web service and validate HTML from within Vim, using quickfix to rapidly resolve errors. My Jenkins CI server uses the same validator via phpunit tests.

Warning: it took me a very long time to get it running locally. Technically easy (just run a build script), but it downloads tons of libraries and files before it can do its job.
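
Since Validator.nu is a plain HTTP service, you can hit it from almost anything. For illustration, here’s a rough Node.js sketch of the kind of call I mean. The port (8888) and the out=json parameter are from my memory of the web-service docs, so treat them as assumptions and adjust for your local setup.

// validate.js: POST an HTML file to a locally running Validator.nu instance.
// Port 8888 and out=json are assumptions; check your local setup and the docs.
var fs = require('fs');
var http = require('http');

var html = fs.readFileSync(process.argv[2]);

var req = http.request({
  host: 'localhost',
  port: 8888,
  path: '/?out=json',
  method: 'POST',
  headers: { 'Content-Type': 'text/html; charset=utf-8' }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // The JSON response carries a "messages" array of errors and warnings.
    JSON.parse(body).messages.forEach(function (m) {
      console.log(m.type + ': ' + (m.message || ''));
    });
  });
});

req.write(html);
req.end();

Run it like: node validate.js page.html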

Debugging web tests on remote servers

I run “web tests” on a remote server. I use Selenium to act like a person interacting with a website, viewing and entering data. Selenium is pretty awesome: it can drive a real web browser like Firefox.

Even better is to have these web tests run automatically every time I commit code. I use Jenkins for this. Jenkins even fires up a headless desktop so Selenium can run Firefox.

When a web test breaks (especially in some way I can’t reproduce on my local desktop), sometimes it helps to actually see what Jenkins sees as it runs the test. Here’s a quick guide for doing so on an Ubuntu GNU/Linux server.

  1. Connect to the remote server using SSH. Install VNC server:
    sudo apt-get install vnc4server
  2. On the remote server, become the user tests run as. For example:
    sudo su - ci
  3. Set a password for the VNC server using the vncpasswd command.
  4. Start headless X server by running vncserver. Note the given display. If example.com:1 is included in the output of vncserver, the display is :1.
  5. Figure out which port the VNC server is using. I usually do something like

    sudo netstat -nape | grep '^tcp.*LISTEN.*vnc.*'

    Here’s some example output:

    tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      107        3099855     13233/Xvnc4     
    tcp6       0      0 :::5901                 :::*                    LISTEN      107        3099858     13233/Xvnc4

    By trial and error, I figured out that 5901 was the port I should use.

  6. Port-forward VNC to your local machine.

    1. Disconnect from the server.
    2. Reconnect, including -L10000:localhost:5901 on your SSH command line (e.g. ssh -L 10000:localhost:5901 user@example.com).
    3. Leave this connection open.
  7. On your local machine, connect a VNC client to localhost:10000. An X terminal should be displayed.

  8. In the X terminal, run your web tests.

  9. When finished debugging, kill the X server using the display noted earlier.
    vncserver -kill :1