Getting started with a TensorFlow surgery classifier with TensorBoard data viz

Originally published at Opensource.com. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The most challenging part of deep learning is labeling, as you’ll see in part one of this two-part series, Learn how to classify images with TensorFlow. Proper training is critical to effective future classification, and for training to work, we need lots of accurately labeled data. In part one, I skipped over this challenge by downloading 3,000 prelabeled images. I then showed you how to use this labeled data to train your classifier with TensorFlow. In this part we’ll train with a new data set, and I’ll introduce the TensorBoard suite of data visualization tools to make it easier to understand, debug, and optimize our TensorFlow code.

Given my work as VP of engineering and compliance at healthcare technology company C-SATS, I was eager to build a classifier for something related to surgery. Suturing seemed like a great place to start. It is immediately useful, and I know how to recognize it. It is useful because, for example, if a machine can see when suturing is occurring, it can automatically identify the step (phase) of a surgical procedure where suturing takes place, e.g. anastomosis. And I can recognize it because the needle and thread of a surgical suture are distinct, even to my layperson’s eyes.

My goal was to train a machine to identify suturing in medical videos.

I have access to billions of frames of non-identifiable surgical video, many of which contain suturing. But I’m back to the labeling problem. Luckily, C-SATS has an army of experienced annotators who are experts at doing exactly this. My source data were video files and annotations in JSON.

The annotations look like this:

[
    {
        "annotations": [
            {
                "endSeconds": 2115.215,
                "label": "suturing",
                "startSeconds": 2319.541
            },
            {
                "endSeconds": 2976.301,
                "label": "suturing",
                "startSeconds": 2528.884
            }
        ],
        "durationSeconds": 2975,
        "videoId": 5
    },
    {
        "annotations": [
        // ...etc...

I wrote a Python script that uses the JSON annotations to decide which frames to grab from the .mp4 video files; ffmpeg does the actual grabbing. I decided to grab at most one frame per second, then divided the total number of video seconds by four to end up with roughly 10,000 seconds (10,000 frames). After figuring out which seconds to grab, I ran a quick test to determine whether each second fell inside or outside a segment annotated as suturing (isWithinSuturingSegment() in the code below). Here’s grab.py:

#!/usr/bin/python
 
# Grab frames from videos with ffmpeg. Use multiple cores.
# Minimum resolution is 1 second--this is a shortcut to get fewer frames.
 
# (C)2017 Adam Monsen. License: AGPL v3 or later.
 
import json
import subprocess
from multiprocessing import Pool
import os
 
frameList = []
 
def isWithinSuturingSegment(annotations, timepointSeconds):
    for annotation in annotations:
        startSeconds = annotation['startSeconds']
        endSeconds = annotation['endSeconds']
        if timepointSeconds > startSeconds and timepointSeconds < endSeconds:
            return True
    return False
 
with open('available-suturing-segments.json') as f:
    j = json.load(f)
 
    for video in j:
        videoId = video['videoId']
        videoDuration = video['durationSeconds']
 
        # generate many ffmpeg frame-grabbing commands
        start = 1
        stop = videoDuration
        step = 4 # Reduce to grab more frames
        for timepointSeconds in xrange(start, stop, step):
            inputFilename = '/home/adam/Downloads/suturing-videos/{}.mp4'.format(videoId)
            outputFilename = '{}-{}.jpg'.format(video['videoId'], timepointSeconds)
            if isWithinSuturingSegment(video['annotations'], timepointSeconds):
                outputFilename = 'suturing/{}'.format(outputFilename)
            else:
                outputFilename = 'not-suturing/{}'.format(outputFilename)
            outputFilename = '/home/adam/local/{}'.format(outputFilename)
 
            commandString = 'ffmpeg -loglevel quiet -ss {} -i {} -frames:v 1 {}'.format(
                timepointSeconds, inputFilename, outputFilename)
 
            frameList.append({
                'outputFilename': outputFilename,
                'commandString': commandString,
            })
 
def grabFrame(f):
    if os.path.isfile(f['outputFilename']):
        print 'already completed {}'.format(f['outputFilename'])
    else:
        print 'processing {}'.format(f['outputFilename'])
        subprocess.check_call(f['commandString'].split())
 
p = Pool(4) # for my 4-core laptop
p.map(grabFrame, frameList)

Now we’re ready to retrain the model, exactly as we did in part one, but with the new suturing data.
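The invocation mirrors the retrain.py command from part one; only the image directory changes. As a sketch, assume the suturing/ and not-suturing/ folders written by grab.py have been moved into a frames directory under the container’s local mount (adjust the path to wherever yours landed):

python retrain.py --image_dir frames --output_graph output_graph.pb --output_labels output_labels.txt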

Using this script to snip out 10k frames took about 10 minutes, then an hour or so to retrain Inception to recognize suturing at 90% accuracy. I did spot checks with new data that wasn’t from the training set, and every frame I tried was correctly identified (mean confidence score: 88%, median confidence score: 91%).

Here are my spot checks. (WARNING: Contains links to images of blood and guts.)

Image                 Not-suturing score   Suturing score
Not-Suturing-01.jpg   0.71053              0.28947
Not-Suturing-02.jpg   0.94890              0.05110
Not-Suturing-03.jpg   0.99825              0.00175
Suturing-01.jpg       0.08392              0.91608
Suturing-02.jpg       0.08851              0.91149
Suturing-03.jpg       0.18495              0.81505

How to use TensorBoard

Visualizing what’s happening under the hood and communicating this with others is at least as hard with deep learning as it is in any other kind of software. TensorBoard to the rescue!

The retrain.py script from part one automatically generates the log files TensorBoard uses to build graphs of what happened during retraining.

To set up TensorBoard, run the following inside the container after running retrain.py.

pip install tensorboard
tensorboard --logdir /tmp/retrain_logs

Watch the output and open the printed URL in a browser.

Starting TensorBoard 41 on port 6006
(You can navigate to http://172.17.0.2:6006)

You’ll see something like this:

I hope this will help; if not, you’ll at least have something cool to show. During retraining, I found it helpful to see under the “SCALARS” tab how accuracy increases while cross-entropy decreases as we perform more training steps. This is what we want.

Learn more

If you’d like to learn more, explore the resources I used in writing this series; they may help you, too.

If you’d like to chat about this topic, please drop by the ##tfadam topical channel on Freenode IRC. You can also email me or leave a comment below.

This series would never have happened without expert help from Eva Monsen, Brian C. Lane, Rob Smith, Alex Simes, VM Brasseur, Bri Hatch, Rikki Endsley and the all-star editors at Opensource.com.

Learn how to classify images with TensorFlow

Originally published at Opensource.com. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Recent advancements in deep learning algorithms and hardware performance have enabled researchers and companies to make giant strides in areas such as image recognition, speech recognition, recommendation engines, and machine translation. Six years ago, the first superhuman performance in visual pattern recognition was achieved. Two years ago, the Google Brain team unleashed TensorFlow, deftly slinging applied deep learning to the masses. TensorFlow is outpacing many complex tools used for deep learning.

With TensorFlow, you’ll gain access to complex features with vast power. The keystone of its power is TensorFlow’s ease of use.

In a two-part series, I’ll explain how to quickly create a convolutional neural network for practical image recognition. The computation steps are embarrassingly parallel and can be deployed to perform frame-by-frame video analysis and extended for temporal-aware video analysis.

This series cuts directly to the most compelling material. A basic understanding of the command line and Python is all you need to play along from home. It aims to get you started quickly and inspire you to create your own amazing projects. I won’t dive into the depths of how TensorFlow works, but I’ll provide plenty of additional references if you’re hungry for more. All the libraries and tools in this series are free/libre/open source software.

How it works

Our goal in this tutorial is to take a novel image that falls into a category we’ve trained and run it through a command that will tell us in which category the image fits. We’ll follow these steps:

(Figure: a directed graph from label to train to classify.)

  1. Labeling is the process of curating training data. For flowers, images of daisies are dragged into the “daisies” folder, roses into the “roses” folder, and so on, for as many different flowers as desired. If we never label ferns, the classifier will never return “ferns.” This requires many examples of each type, so it is an important and time-consuming process. (We will use pre-labeled data to start, which will make this much quicker; the sketch after this list shows what a labeled folder tree looks like on disk.)
  2. Training is when we feed the labeled data (images) to the model. A tool will grab a random batch of images, use the model to guess what type of flower is in each, test the accuracy of the guesses, and repeat until most of the training data is used. The last batch of unused images is used to calculate the accuracy of the trained model.
  3. Classification is using the model on novel images. For example, input: IMG207.JPG, output: daisies. This is the fastest and easiest step and is cheap to scale.
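
To make step 1 concrete, here’s a minimal sketch (mine, not part of the original tutorial) that walks a folder-per-label tree, such as the flower_photos data set we’ll download shortly, and reports how many labeled examples each category contains:

import os

# One folder per label, full of example images. retrain.py derives the label
# names from these folder names.
image_dir = 'flower_photos'  # any directory with one subfolder per label

for label in sorted(os.listdir(image_dir)):
    label_path = os.path.join(image_dir, label)
    if not os.path.isdir(label_path):
        continue
    jpgs = [f for f in os.listdir(label_path) if f.lower().endswith('.jpg')]
    print('{}: {} labeled images'.format(label, len(jpgs)))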

Training and classification

In this tutorial, we’ll train an image classifier to recognize different types of flowers. Deep learning requires a lot of training data, so we’ll need lots of sorted flower images. Thankfully, another kind soul has done an awesome job of collecting and sorting images, so we’ll use this sorted data set with a clever script that will take an existing, fully trained image classification model and retrain the last layers of the model to do just what we want. This technique is called transfer learning.

The model we’re retraining is called Inception v3, originally specified in the December 2015 paper “Rethinking the Inception Architecture for Computer Vision.”

Inception doesn’t know how to tell a tulip from a daisy until we do this training, which takes about 20 minutes. This is the “learning” part of deep learning.

Installation

Step one to machine sentience: Install Docker on your platform of choice.

The first and only dependency is Docker. This is the case in many TensorFlow tutorials (which should indicate this is a reasonable way to start). I also prefer this method of installing TensorFlow because it keeps your host (laptop or desktop) clean by not installing a bunch of dependencies.

Bootstrap TensorFlow

With Docker installed, we’re ready to fire up a TensorFlow container for training and classification. Create a working directory somewhere on your hard drive with 2 gigabytes of free space. Create a subdirectory called local and note the full path to that directory.

docker run -v /path/to/local:/notebooks/local --rm -it --name tensorflow tensorflow/tensorflow:nightly /bin/bash

Here’s a breakdown of that command.

  • -v /path/to/local:/notebooks/local mounts the local directory you just created to a convenient place in the container. If using RHEL, Fedora, or another SELinux-enabled system, append :Z to this to allow the container to access the directory.
  • --rm tells Docker to delete the container when we’re done.
  • -it attaches our input and output to make the container interactive.
  • --name tensorflow gives our container the name tensorflow instead of sneaky_chowderhead or whatever random name Docker might pick for us.
  • tensorflow/tensorflow:nightly says run the nightly image of tensorflow/tensorflow from Docker Hub (a public image repository) instead of latest (by default, the most recently built/available image). We are using nightly instead of latest because (at the time of writing) latest contains a bug that breaks TensorBoard, a data visualization tool we’ll find handy later.
  • /bin/bash says don’t run the default command; run a Bash shell instead.

Train the model

Inside the container, run these commands to download and sanity check the training data.

curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
echo 'db6b71d5d3afff90302ee17fd1fefc11d57f243f  flower_photos.tgz' | sha1sum -c

If you don’t see the message flower_photos.tgz: OK, you don’t have the correct file. If the above curl or sha1sum steps fail, manually download and explode the training data tarball (SHA-1 checksum: db6b71d5d3afff90302ee17fd1fefc11d57f243f) in the local directory on your host.

Now put the training data in place, then download and sanity check the retraining script.

mv flower_photos.tgz local/
cd local
curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/10cf65b48e1b2f16eaa826d2793cb67207a085d0/tensorflow/examples/image_retraining/retrain.py
echo 'a74361beb4f763dc2d0101cfe87b672ceae6e2f5  retrain.py' | sha1sum -c

Look for confirmation that retrain.py has the correct contents. You should see retrain.py: OK.

Finally, it’s time to learn! Run the retraining script.

python retrain.py --image_dir flower_photos --output_graph output_graph.pb --output_labels output_labels.txt

If you encounter this error, ignore it:
TypeError: not all arguments converted during string formatting
Logged from file tf_logging.py, line 82.

As retrain.py proceeds, the training images are automatically separated into batches of training, test, and validation data sets.

In the output, we’re hoping for high “Train accuracy” and “Validation accuracy” and low “Cross entropy.” See How to retrain Inception’s final layer for new categories for a detailed explanation of these terms. Expect training to take around 30 minutes on modern hardware.

Pay attention to the last line of output in your console:

INFO:tensorflow:Final test accuracy = 89.1% (N=340)

This says we’ve got a model that will, nine times out of 10, correctly guess which one of five possible flower types is shown in a given image. Your accuracy will likely differ because of randomness injected into the training process.

Classify

With one more small script, we can feed new flower images to the model and it’ll output its guesses. This is image classification.

Save the following as classify.py in the local directory on your host:

import tensorflow as tf, sys
 
image_path = sys.argv[1]
graph_path = 'output_graph.pb'
labels_path = 'output_labels.txt'
 
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
 
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
    in tf.gfile.GFile(labels_path)]
 
# Unpersists graph from file
with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
 
# Feed the image_data as input to the graph and get first prediction
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))

To test your own image, save it as test.jpg in your local directory and run (in the container) python classify.py test.jpg. The output will look something like this:

sunflowers (score = 0.78311)
daisy (score = 0.20722)
dandelion (score = 0.00605)
tulips (score = 0.00289)
roses (score = 0.00073)

The numbers indicate confidence. The model is 78.311% sure the flower in the image is a sunflower. A higher score indicates a more likely match. Note that there can be only one match. Multi-label classification requires a different approach.
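
A quick way to see why there can be only one winner: the scores come from the model’s final softmax layer (the final_result:0 tensor in classify.py), so they always sum to 1. Checking the example output above:

scores = [0.78311, 0.20722, 0.00605, 0.00289, 0.00073]
print(sum(scores))  # 1.0 -- a single probability distribution, so a single best label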

For more detail, view this great line-by-line explanation of classify.py.

The graph-loading code in the original classifier script was broken, so I replaced it with the graph_def = tf.GraphDef() loading code you see above.

With zero rocket science and a handful of code, we’ve created a decent flower image classifier that can process about five images per second on an off-the-shelf laptop computer.
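
That throughput comes from keeping the model loaded between images instead of paying the graph-loading cost for every file. Here’s a sketch (mine, not from the article) of classify.py reworked to reuse a single session for a whole directory of JPEGs; the graph file, label file, and tensor names are the same ones used above, while the script name and directory argument are made up for illustration:

import os, sys
import tensorflow as tf

image_dir = sys.argv[1]  # a directory of .jpg files to classify
graph_path = 'output_graph.pb'
labels_path = 'output_labels.txt'

# Load the labels and the retrained graph once, up front
label_lines = [line.rstrip() for line in tf.gfile.GFile(labels_path)]
with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# Reuse one session for every image instead of reloading per file
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    for filename in sorted(os.listdir(image_dir)):
        if not filename.lower().endswith('.jpg'):
            continue
        image_data = tf.gfile.FastGFile(os.path.join(image_dir, filename), 'rb').read()
        predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
        best = predictions[0].argmax()
        print('%s: %s (score = %.5f)' % (filename, label_lines[best], predictions[0][best]))

Run it inside the container with something like python classify_dir.py local/test-images (both names are just examples).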

In the second part of this series, we’ll use this information to train a different image classifier, then take a look under the hood with TensorBoard. If you want to try out TensorBoard, keep this container running by making sure docker run isn’t terminated.

Encrypted partition path derivation via linear search through incrementally encoded packed data

locate is a lightning-fast command-line search utility. It first hit the press in the early ’80s, when James A. Woods proclaimed that the tradeoff of nightly updates is worth it for sub-second filesystem path matches.

The proposed architecture is simple but effective: incrementally encode all paths in a purpose-built binary database and perform matches with linear search. Since nearly all matches are partial, linear search generally outperforms binary search or other optimizations. Maintainers have followed this original architecture to the present day.
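
To make “incrementally encode” and “linear search” concrete, here’s a toy Python sketch of the idea (mine, not the actual locate database format): each path is stored as the length of the prefix it shares with the previous path plus the differing suffix, and lookups simply decode and scan straight through.

import os

# Toy front-coding (incremental encoding) of a sorted path list plus a linear
# scan for matches. Illustration only; the real locate database differs in detail.
paths = sorted([
    '/home/adam/bin/index-encrypted-homedir',
    '/home/adam/notes/todo.txt',
    '/home/adam/notes/travel.txt',
])

def encode(paths):
    encoded, prev = [], ''
    for p in paths:
        shared = len(os.path.commonprefix([prev, p]))
        encoded.append((shared, p[shared:]))  # shared-prefix length + new suffix
        prev = p
    return encoded

def search(encoded, pattern):
    prev, hits = '', []
    for shared, suffix in encoded:        # linear scan, decoding as we go
        prev = prev[:shared] + suffix
        if pattern in prev:               # partial (substring) match, like locate
            hits.append(prev)
    return hits

db = encode(paths)
print(search(db, 'notes'))  # ['/home/adam/notes/todo.txt', '/home/adam/notes/travel.txt']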

The indexer is called updatedb, and it generally runs nightly, as root. If you have an encrypted home partition (and you should), nothing in your $HOME will be indexed. One workaround is to index it yourself. To maintain security, I recommend storing the index inside your $HOME.

I like to use anacron since it automatically performs a catch-up run if necessary. This is handy for “daily would be nice” jobs that don’t need to run at an exact hour/minute of the day.

Here’s how to do it.

Add this to your crontab (this is one long line). This fires off your own personal anacron:

@hourly /usr/sbin/anacron -s -t $HOME/.anacrontab -S $HOME/.anacron

Add this to $HOME/.anacrontab to run your indexer daily (that’s the “1”) and after a 10 minute delay (that’s the “10”):

1 10 indexhome $HOME/bin/index-encrypted-homedir

Create the executable file $HOME/bin/index-encrypted-homedir with these contents:

#!/bin/bash
 
set -o errexit
set -o nounset
set -o pipefail
 
mkdir -p "$HOME/.var" "$HOME/.anacron"
updatedb -l 0 -n '.meteor .cache' -o "$HOME/.var/locate.db"

Finally, add this to your $HOME/.bashrc:

export LOCATE_PATH="$HOME/.var/locate.db"

Free Software Claus is Coming to Town

I help organize a conference for Free Software enthusiasts called SeaGL. This year I’m proud to report that Shauna Gordon McKeon and Richard Stallman (aka “RMS”) are keynote speakers.

I first invited RMS to Seattle 13 years ago, and finally in 2015 it all came together. In his words:

My talks are not technical. The topics of free software, copyright vs community, and digital inclusion deal with ethical/political issues that concern all users of computers.

So please do come on down to Seattle Central College on October 23rd and 24th, 2015 for SeaGL!

Sandstorm – personal cloud, self-organizing cluster

I’ve heard a lot of Meteor news lately, but somehow I missed Sandstorm. Your own personal cloud. Install services more easily than you can install apps on your phone. Add machines and they self-organize into a cluster. This sounds just way too awesome. Looks like they use Meteor heavily. Jade Wang (formerly of the Meteor Development Group) is a co-founder.

Apps must be packaged for Sandstorm (made into “grains”). The list of ported apps is pretty inspiring. Included are: draw.io, LibreBoard, HackerSlides, Let’s Chat, Paperwork… All were new to me, several are written in Meteor, and I was able to check out all of these in seconds. I’m hooked.

Oplog: a tail of wonder and woe

First, your TL;DR:

  1. Stress test your Meteor app.
  2. Oplog tailing may be less efficient for particular workloads.

Background

My work involves using crowdsourcing to assess and improve technical skill. We’re focusing on improving basic technical skills of surgeons first because—no surprise here—it matters. A more skilled surgeon means patients with fewer complications. Being healthy, not dying. Good stuff.

One way we gather data is a survey app where crowdworkers watch a short video of real human surgery and answer simple questions about what they saw. For example:

  • Do both hands work well together?
  • Is the surgeon efficient?
  • Are they rough or gentle?

Turns out the crowd nails this! Think of it this way: most anyone can recognize standout performers on the basketball court or playing a piano, even if they’re not an expert at either. Minimal training and this “gut feel” are all we need to objectively measure basic technical skill.

Meteor

So, a survey app. Watch a video, answer a few questions. Pretty straightforward. We built one in-house. Meteor was a great choice here. Rapid development, easy deployment, JavaScript everywhere, decent Node.js stack out of the box, all that.

And of course we used oplog tailing right from the start, because much of what I read about it made it sound like the only way to go. Sure, you’ll want oplog tailing for realtime (<10sec delayed) data when you have multiple apps connecting to the same MongoDB database. But if you don’t need that, you may not need oplog tailing at all, and you may not want it.
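
If you want to experiment with turning it off, here are two approaches I know of; treat them as hints rather than a recipe, since your deployment may differ:

# Disable oplog tailing inside the app itself:
meteor add disable-oplog
# ...or deploy without MONGO_OPLOG_URL in the environment; without it, Meteor
# falls back to poll-and-diff instead of tailing the oplog.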

Traffic pattern

Our traffic is very bursty. We publish a HIT on Amazon Mechanical Turk. Within minutes, the crowd is upon our survey app. Our app generally does fine, but folks complained of very slow survey completion times when we started hitting somewhere around 80 DDP(?) sessions in Kadira. Each DDP session in our survey app should equate to one simultaneous active user (hereafter “user”).

Here’s what we want to know:

  1. Why does our app slow down when it does?
  2. Can it scale [linearly]?
  3. Are there any small code or configuration changes we could do to get a lot more performance out of the existing hardware?

Spoilers:

  1. Meteor pegs the CPU when oplog tailing is enabled.
  2. Yes, if we disable oplog tailing.
  3. Yes, disabling oplog tailing and clustering our app.

Stress test

We created a stress test to get a better feel for the performance characteristics of our app.

The test uses nightwatch to emulate a turker completing a survey. Load the survey app, click radio buttons, enter comments, and throw in a few random waits. Many threads of the nightwatch test are spawned and charge on in parallel. The machine running nightwatch needs to be pretty beefy. I preferred a browser-based stress test because I noticed client-server interactions amplified the amount and frequency of DDP traffic (hello Mr. Reactivity). It was also easier to write and run nightwatch than to pick the exact DDP traffic to send.

Notes on our app:

  • We use mup to deploy to Ubuntu EC2 servers on AWS.
  • Tested configuration uses one mup-deployed Meteor app.
  • The app connects to a local MongoDB server running a standalone one-member replica set (just to get the oplog).
    • I also tested with Modulus, scaled to one 512mb servo in us-east-1a. Non-enterprise Modulus runs with oplog tailing disabled, and the app connects to MongoDB on a host other than localhost.
  • Our app uses iron:router.
  • Our app doesn’t need to be reactive. Surveyees work in isolation. But this is how we wrote the app, so that’s what I tested.

Results

I ran a series of stress tests. Ramp up traffic, capture metrics, change code and/or server configuration, repeat. Here are the results.

Takeaways:

  • Each row in the spreadsheet represents one test.
  • Every test ran for 5 minutes.
  • When one “user” completes a survey, another one begins (so the number of users is kept more or less constant during every 5-minute test).
  • There are lots of notes and Kadira screenshots in the results spreadsheet. For the Kadira screenshots, the relevant data is on the rightmost side of the graphs.
  • I think Kadira session counts are high. Maybe it isn’t counting disconnects, maybe DDP timeouts take a while, or maybe the nightwatch test disconnects slowly.
  • Row 3. At 40 users, the CPU is pegged. Add any more users and it takes too long for them to complete a survey.
  • Row 5. Notice how doubling the cores does not double the number of test passes (less than linear scaling along this dimension).
  • Row 6. Ouch, we’re really not scaling! Might need to investigate the efficiency of meteorhacks:cluster.
  • Row 7. Oplog tailing is disabled for this and all future tests. MongoDB CPU load is roughly doubled from the 40-user, 1-core, oplog-tailing-enabled test.
  • Row 9. Too much for one core: 6.5% of the tests failed.
  • Row 11. This is what we want to see! 2x cores, 2x users, 2x passes. In other words, with oplog tailing disabled and double the number of cores, we supported double the number of users and doubled test passes.
  • I should have also tested 160 users, 4 cores, oplog disabled. I didn’t. Live with it.
  • Disabling oplog tailing seemed to shift more of the processing load to MongoDB, and MongoDB appeared to handle that load more gracefully.
  • I didn’t get very far with Modulus. I’m very interested in their offering, but I just couldn’t get users (test runs) through our app fast enough to make further testing worthwhile.
  • A DNS issue prevented capturing Kadira status while running on Modulus.
  • cluster lives up to its promise—adding cores and spreading load.
  • I don’t think we’re moving much data, but any reactivity comes at a price at scale (even our so far little bitty scale).
  • Our survey app could and should be modified to use much less reactivity since, as I mentioned earlier, it is unnecessary.

Server-side profiles

This is somewhat of an addendum, but I figured it might be useful.

Here’s what the Meteor Node.js process does with 10 users hitting our survey app running on one core.

Oplog tailing enabled:

(Pie chart: server profile with oplog tailing.)

Oplog tailing disabled:

(Pie chart: server profile without oplog tailing.)

Takeaways:

  • Note that these pie charts only show %CPU usage. CPU and network are the primary resources our app uses, so this is fine.
  • The profile data for each slice (when you drill down) are very low-level. It’s hard to make any quick conclusions (or I just need more practice reading these).
  • When oplog tailing is enabled, the Cursor.fetch slice is about twice as big, and none of the methods causing that CPU load are ours. Perhaps this is the oplog “tailing” in action?
  • When oplog tailing is disabled, drilling into Cursor.fetch shows exactly which of our own methods are causing CPU load. Even if oplog tailing is more efficient, this level of introspection was priceless. We need it until we learn to better debug the patterns in our code that lead to more CPU usage when oplog tailing is enabled.
  • The giant ~30% slice of “Other” is a bit of a bummer. What’s going on in there? Low-level/native Node.js operations like the MongoDB driver doing its thing? Sending/receiving network traffic?
  • Kadira monitoring isn’t free CPU-wise, but it is worth it.
  • What should these pie charts look like in a well-optimized application under load? Perhaps the biggest slice should belong to “Other”?


Feedback/questions/comments/corrections welcome! I’d especially love to hear about your experiences scaling Meteor.

My Hadoop/MapReduce article in Linux Journal

I’m proud that LJ accepted my Hadoop/MapReduce article for the April 2013 issue! If you’re new to MapReduce and are interested in learning about same, this article is for you.

 

I’ll also be presenting a talk based on the article at LinuxFest Northwest 2013.

Google: Stuck on You

I require proprietary software to get through my day, but I like not being too dependent on it. With respect to that rule for myself and Google, I’ve failed.

I probably use the Internet mainly for search and email, and I need Google for both. Maps? All the time.

And there’s a doc I’d like to read now. The most important information to me is in the comments, but I can’t see the comments because this doc is “too popular”.

(Image: Google Drive notice that the file is too popular.)

Dang.

See also: You Can’t Quit, I Dare You

Web Framework Flavor of the Month

I’ve been playing with Meteor a bit lately. It’s a “kitchen sink” system for writing web apps, complete with a database (MongoDB), server-side (Node.js), and client-side stuff. It’s all JavaScript.

It’s pretty fun for little experiments. I can imagine certain kinds of websites it would be good for (web-based chat, HTML5 games, collaborative editors, and one-webpage apps — same stuff I think vanilla Node.js excels at) and some it would not (mobile, CRUD with an RDBMS). I’m wondering if it would/should work well with larger web apps.

I’m afraid of JavaScript, but I think it’s finally time for me to overcome that fear. What better way to do so than to use JavaScript everywhere (database, server, client, APIs)?!

Meteor isn’t the only game around; it’s just the one I’ve looked at.