Getting started with a TensorFlow surgery classifier with TensorBoard data viz

Originally published at Opensource.com. Creative Commons License This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The most challenging part of deep learning is labeling, as you’ll see in part one of this two-part series, Learn how to classify images with TensorFlow. Proper training is critical to effective future classification, and for training to work, we need lots of accurately labeled data. In part one, I skipped over this challenge by downloading 3,000 prelabeled images. I then showed you how to use this labeled data to train your classifier with TensorFlow. In this part we’ll train with a new data set, and I’ll introduce the TensorBoard suite of data visualization tools to make it easier to understand, debug, and optimize our TensorFlow code.

Given my work as VP of engineering and compliance at healthcare technology company C-SATS, I was eager to build a classifier for something related to surgery. Suturing seemed like a great place to start. It is immediately useful, and I know how to recognize it. It is useful because, for example, if a machine can see when suturing is occurring, it can automatically identify the step (phase) of a surgical procedure where suturing takes place, e.g. anastomosis. And I can recognize it because the needle and thread of a surgical suture are distinct, even to my layperson’s eyes.

My goal was to train a machine to identify suturing in medical videos.

I have access to billions of frames of non-identifiable surgical video, many of which contain suturing. But I’m back to the labeling problem. Luckily, C-SATS has an army of experienced annotators who are experts at doing exactly this. My source data were video files and annotations in JSON.

The annotations look like this:

[
    {
        "annotations": [
            {
                "endSeconds": 2115.215,
                "label": "suturing",
                "startSeconds": 2319.541
            },
            {
                "endSeconds": 2976.301,
                "label": "suturing",
                "startSeconds": 2528.884
            }
        ],
        "durationSeconds": 2975,
        "videoId": 5
    },
    {
        "annotations": [
        // ...etc...

I wrote a Python script to use the JSON annotations to decide which frames to grab from the .mp4 video files; ffmpeg does the actual grabbing. I decided to grab at most one frame per second, then divided the total number of video seconds by four so I’d end up with roughly 10,000 frames. After figuring out which seconds to grab, I ran a quick test to see whether each second fell inside or outside a segment annotated as suturing (isWithinSuturingSegment() in the code below). Here’s grab.py:

#!/usr/bin/python
 
# Grab frames from videos with ffmpeg. Use multiple cores.
# Minimum resolution is 1 second--this is a shortcut to grab fewer frames.
 
# (C)2017 Adam Monsen. License: AGPL v3 or later.
 
import json
import subprocess
from multiprocessing import Pool
import os
 
frameList = []
 
def isWithinSuturingSegment(annotations, timepointSeconds):
    for annotation in annotations:
        startSeconds = annotation['startSeconds']
        endSeconds = annotation['endSeconds']
        if timepointSeconds > startSeconds and timepointSeconds < endSeconds:
            return True
    return False
 
with open('available-suturing-segments.json') as f:
    j = json.load(f)
 
    for video in j:
        videoId = video['videoId']
        videoDuration = video['durationSeconds']
 
        # generate many ffmpeg frame-grabbing commands
        start = 1
        stop = videoDuration
        step = 4 # Reduce to grab more frames
        for timepointSeconds in xrange(start, stop, step):
            inputFilename = '/home/adam/Downloads/suturing-videos/{}.mp4'.format(videoId)
            outputFilename = '{}-{}.jpg'.format(video['videoId'], timepointSeconds)
            if isWithinSuturingSegment(video['annotations'], timepointSeconds):
                outputFilename = 'suturing/{}'.format(outputFilename)
            else:
                outputFilename = 'not-suturing/{}'.format(outputFilename)
            outputFilename = '/home/adam/local/{}'.format(outputFilename)
 
            commandString = 'ffmpeg -loglevel quiet -ss {} -i {} -frames:v 1 {}'.format(
                timepointSeconds, inputFilename, outputFilename)
 
            frameList.append({
                'outputFilename': outputFilename,
                'commandString': commandString,
            })
 
def grabFrame(f):
    if os.path.isfile(f['outputFilename']):
        print 'already completed {}'.format(f['outputFilename'])
    else:
        print 'processing {}'.format(f['outputFilename'])
        subprocess.check_call(f['commandString'].split())
 
p = Pool(4) # for my 4-core laptop
p.map(grabFrame, frameList)
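
Before retraining, it’s worth a quick sanity check that grab.py produced a sensible number of frames under each label. A minimal sketch, assuming the output directories used by the script above:

# Count grabbed frames per label -- a quick sanity check before retraining.
# Assumes the output directories used by grab.py above.
import os

baseDir = '/home/adam/local'
for label in ('suturing', 'not-suturing'):
    labelDir = os.path.join(baseDir, label)
    frames = [f for f in os.listdir(labelDir) if f.endswith('.jpg')]
    print('{}: {} frames'.format(label, len(frames)))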

Now we’re ready to retrain the model, exactly as in part one.

Using this script to snip out 10k frames took about 10 minutes, and retraining Inception to recognize suturing at 90% accuracy took another hour or so. I did spot checks with new data that wasn’t part of the training set, and every frame I tried was correctly identified (mean confidence score: 88%, median confidence score: 91%).

Here are my spot checks. (WARNING: Contains links to images of blood and guts.)

Image                  Not-suturing score   Suturing score
Not-Suturing-01.jpg    0.71053              0.28947
Not-Suturing-02.jpg    0.94890              0.05110
Not-Suturing-03.jpg    0.99825              0.00175
Suturing-01.jpg        0.08392              0.91608
Suturing-02.jpg        0.08851              0.91149
Suturing-03.jpg        0.18495              0.81505
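
As a quick check, the mean and median confidence figures quoted earlier can be reproduced from this table, assuming each spot check’s confidence is the score the model assigned to the correct class:

# Reproduce the mean and median confidence figures from the spot checks above.
scores = sorted([0.71053, 0.94890, 0.99825,   # correct-class scores, not-suturing images
                 0.91608, 0.91149, 0.81505])  # correct-class scores, suturing images
meanScore = sum(scores) / len(scores)
medianScore = (scores[2] + scores[3]) / 2.0   # middle two of the six values
print('mean: {:.0%}  median: {:.0%}'.format(meanScore, medianScore))  # mean: 88%  median: 91%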

How to use TensorBoard

Visualizing what’s happening under the hood and communicating it to others is at least as hard with deep learning as it is in any other kind of software. TensorBoard to the rescue!

The retrain.py script from part one automatically writes the log files TensorBoard uses to draw graphs of what happened during retraining.

To set up TensorBoard, run the following inside the container after running retrain.py.

pip install tensorboard
tensorboard --logdir /tmp/retrain_logs

Watch the output and open the printed URL in a browser.

Starting TensorBoard 41 on port 6006
(You can navigate to http://172.17.0.2:6006)

You’ll see something like this:

I hope this will help; if not, you’ll at least have something cool to show. During retraining, I found it helpful to see under the “SCALARS” tab how accuracy increases while cross-entropy decreases as we perform more training steps. This is what we want.

Learn more

If you’d like to learn more, explore the resources I used in writing this series; they may help you, too.

If you’d like to chat about this topic, please drop by the ##tfadam topical channel on Freenode IRC. You can also email me or leave a comment below.

This series would never have happened without expert help from Eva Monsen, Brian C. Lane, Rob Smith, Alex Simes, VM Brasseur, Bri Hatch, Rikki Endsley and the all-star editors at Opensource.com.

Learn how to classify images with TensorFlow

Originally published at Opensource.com. Creative Commons License This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Recent advancements in deep learning algorithms and hardware performance have enabled researchers and companies to make giant strides in areas such as image recognition, speech recognition, recommendation engines, and machine translation. Six years ago, the first superhuman performance in visual pattern recognition was achieved. Two years ago, the Google Brain team unleashed TensorFlow, deftly slinging applied deep learning to the masses. TensorFlow is outpacing many complex tools used for deep learning.

With TensorFlow, you’ll gain access to complex features with vast power. The keystone of its power is TensorFlow’s ease of use.

In a two-part series, I’ll explain how to quickly create a convolutional neural network for practical image recognition. The computation steps are embarrassingly parallel and can be deployed to perform frame-by-frame video analysis and extended for temporal-aware video analysis.

This series cuts directly to the most compelling material. A basic understanding of the command line and Python is all you need to play along from home. It aims to get you started quickly and inspire you to create your own amazing projects. I won’t dive into the depths of how TensorFlow works, but I’ll provide plenty of additional references if you’re hungry for more. All the libraries and tools in this series are free/libre/open source software.

How it works

Our goal in this tutorial is to take a novel image that falls into a category we’ve trained and run it through a command that will tell us in which category the image fits. We’ll follow these steps:

[Figure: a directed graph from label to train to classify]

  1. Labeling is the process of curating training data. For flowers, images of daisies are dragged into the “daisies” folder, roses into the “roses” folder, and so on, for as many different flowers as desired. If we never label ferns, the classifier will never return “ferns.” This requires many examples of each type, so it is an important and time-consuming process. (We will use pre-labeled data to start, which will make this much quicker; see the sketch after this list.)
  2. Training is when we feed the labeled data (images) to the model. A tool will grab a random batch of images, use the model to guess what type of flower is in each, test the accuracy of the guesses, and repeat until most of the training data is used. The last batch of unused images is used to calculate the accuracy of the trained model.
  3. Classification is using the model on novel images. For example, input: IMG207.JPG, output: daisies. This is the fastest and easiest step and is cheap to scale.
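
To make the labeling step concrete, here is a minimal sketch that reports how many examples each label folder contains. It assumes the folder-per-label layout described in step 1; the flower_photos directory name matches the data set we download later.

# Report how many example images each label folder contains.
# Assumes a folder-per-label layout, e.g. flower_photos/roses/*.jpg.
import os

imageDir = 'flower_photos'
for label in sorted(os.listdir(imageDir)):
    labelDir = os.path.join(imageDir, label)
    if not os.path.isdir(labelDir):
        continue  # skip stray files such as a license
    count = len([f for f in os.listdir(labelDir) if f.lower().endswith('.jpg')])
    print('{}: {} images'.format(label, count))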

Training and classification

In this tutorial, we’ll train an image classifier to recognize different types of flowers. Deep learning requires a lot of training data, so we’ll need lots of sorted flower images. Thankfully, another kind soul has done an awesome job of collecting and sorting images, so we’ll use that sorted data set with a clever script that takes an existing, fully trained image classification model and retrains its last layers to do just what we want. This technique is called transfer learning.

The model we’re retraining is called Inception v3, originally specified in the December 2015 paper “Rethinking the Inception Architecture for Computer Vision.”

Inception doesn’t know how to tell a tulip from a daisy until we do this training, which takes about 20 minutes. This is the “learning” part of deep learning.

Installation

Step one to machine sentience: Install Docker on your platform of choice.

The first and only dependency is Docker. This is the case in many TensorFlow tutorials (which should indicate this is a reasonable way to start). I also prefer this method of installing TensorFlow because it keeps your host (laptop or desktop) clean by not installing a bunch of dependencies.

Bootstrap TensorFlow

With Docker installed, we’re ready to fire up a TensorFlow container for training and classification. Create a working directory somewhere on your hard drive with 2 gigabytes of free space. Create a subdirectory called local and note the full path to that directory.

docker run -v /path/to/local:/notebooks/local --rm -it --name tensorflow tensorflow/tensorflow:nightly /bin/bash

Here’s a breakdown of that command.

  • -v /path/to/local:/notebooks/local mounts the local directory you just created to a convenient place in the container. If using RHEL, Fedora, or another SELinux-enabled system, append :Z to this to allow the container to access the directory.
  • --rm tells Docker to delete the container when we’re done.
  • -it attaches our input and output to make the container interactive.
  • --name tensorflow gives our container the name tensorflow instead of sneaky_chowderhead or whatever random name Docker might pick for us.
  • tensorflow/tensorflow:nightly says run the nightly image of tensorflow/tensorflow from Docker Hub (a public image repository) instead of latest (by default, the most recently built/available image). We are using nightly instead of latest because (at the time of writing) latest contains a bug that breaks TensorBoard, a data visualization tool we’ll find handy later.
  • /bin/bash says don’t run the default command; run a Bash shell instead.

Train the model

Inside the container, run these commands to download and sanity check the training data.

curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
echo 'db6b71d5d3afff90302ee17fd1fefc11d57f243f  flower_photos.tgz' | sha1sum -c

If you don’t see the message flower_photos.tgz: OK, you don’t have the correct file. If the above curl or sha1sum steps fail, manually download and explode the training data tarball (SHA-1 checksum: db6b71d5d3afff90302ee17fd1fefc11d57f243f) in the local directory on your host.

Now put the training data in place, then download and sanity check the retraining script.

mv flower_photos.tgz local/
cd local
curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/10cf65b48e1b2f16eaa826d2793cb67207a085d0/tensorflow/examples/image_retraining/retrain.py
echo 'a74361beb4f763dc2d0101cfe87b672ceae6e2f5  retrain.py' | sha1sum -c

Look for confirmation that retrain.py has the correct contents. You should see retrain.py: OK.

Finally, it’s time to learn! Run the retraining script.

python retrain.py --image_dir flower_photos --output_graph output_graph.pb --output_labels output_labels.txt

If you encounter this error, ignore it:
TypeError: not all arguments converted during string formatting
Logged from file tf_logging.py, line 82

As retrain.py proceeds, the training images are automatically separated into training, test, and validation data sets.
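
retrain.py makes the split stable by hashing each file name, so a given image lands in the same set on every run. Here is a minimal sketch of that idea; the percentages are illustrative, not necessarily what your retrain.py invocation uses:

# Stable train/validation/test assignment by hashing the file name,
# similar in spirit to what retrain.py does. Percentages are illustrative.
import hashlib

def assign_set(file_name, validation_pct=10, testing_pct=10):
    bucket = int(hashlib.sha1(file_name.encode('utf-8')).hexdigest(), 16) % 100
    if bucket < validation_pct:
        return 'validation'
    if bucket < validation_pct + testing_pct:
        return 'testing'
    return 'training'

print(assign_set('daisy-0001.jpg'))  # 'training', 'validation', or 'testing'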

In the output, we’re hoping for high “Train accuracy” and “Validation accuracy” and low “Cross entropy.” See How to retrain Inception’s final layer for new categories for a detailed explanation of these terms. Expect training to take around 30 minutes on modern hardware.

Pay attention to the last line of output in your console:

INFO:tensorflow:Final test accuracy = 89.1% (N=340)

This says we’ve got a model that will, nine times out of 10, correctly guess which one of five possible flower types is shown in a given image. Your accuracy will likely differ because of randomness injected into the training process.

Classify

With one more small script, we can feed new flower images to the model and it’ll output its guesses. This is image classification.

Save the following as classify.py in the local directory on your host:

import tensorflow as tf, sys
 
image_path = sys.argv[1]
graph_path = 'output_graph.pb'
labels_path = 'output_labels.txt'
 
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
 
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
    in tf.gfile.GFile(labels_path)]
 
# Unpersists graph from file
with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
 
# Feed the image_data as input to the graph and get first prediction
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))

To test your own image, save it as test.jpg in your local directory and run (in the container) python classify.py test.jpg. The output will look something like this:

sunflowers (score = 0.78311)
daisy (score = 0.20722)
dandelion (score = 0.00605)
tulips (score = 0.00289)
roses (score = 0.00073)

The numbers indicate confidence. The model is 78.311% sure the flower in the image is a sunflower. A higher score indicates a more likely match. Note that there can be only one match. Multi-label classification requires a different approach.

For more detail, view this great line-by-line explanation of classify.py.

The graph-loading code in that classifier script was broken, so I replaced it with the graph_def = tf.GraphDef(), etc. loading code used above.

With zero rocket science and a handful of code, we’ve created a decent flower image classifier that can process about five images per second on an off-the-shelf laptop computer.

In the second part of this series, we’ll use this information to train a different image classifier, then take a look under the hood with TensorBoard. If you want to try out TensorBoard, keep this container running by making sure docker run isn’t terminated.

Encrypted partition path derivation via linear search through incrementally encoded packed data

locate is a lightning-fast command line search utility. It first hit the press in the early 80s when James A. Woods proclaimed the tradeoff of nightly updates is worth it for sub-second filesystem path matches.

The proposed architecture is simple but effective: incrementally encode all paths in a purpose-built binary database and perform matches with linear search. Since nearly all matches are partial, linear search generally outperforms binary search or other optimizations. Maintainers have followed this original architecture to the present day.
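
To make that concrete, here is a toy Python sketch of the two ideas: front-compress a sorted path list by storing only the suffix that differs from the previous entry, then answer queries with a linear scan that decodes as it matches. It illustrates the approach, not the actual locate database format:

# Toy illustration of locate's two ideas: incremental (front) encoding of a
# sorted path list, plus a linear scan that decodes while matching.
# This is not the real locate/updatedb database format.

paths = sorted([
    '/home/adam/bin/index-encrypted-homedir',
    '/home/adam/notes/todo.txt',
    '/home/adam/notes/travel.txt',
])

def encode(paths):
    encoded, prev = [], ''
    for p in paths:
        shared = 0
        while shared < min(len(prev), len(p)) and prev[shared] == p[shared]:
            shared += 1
        encoded.append((shared, p[shared:]))  # shared-prefix length + differing suffix
        prev = p
    return encoded

def search(encoded, needle):
    prev = ''
    for shared, suffix in encoded:       # linear scan over the packed entries
        prev = prev[:shared] + suffix    # reconstruct the full path
        if needle in prev:
            yield prev

db = encode(paths)
print(list(search(db, 'notes')))  # both paths under /home/adam/notes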

The indexer is called updatedb and it generally runs nightly, as root. If you have an encrypted home partition (and you should), nothing in your $HOME will be indexed. One workaround is to index it yourself. To maintain security, I recommend storing the index inside your $HOME.

I like to use anacron since it automatically performs a catch-up run if necessary. This is handy for “daily would be nice” jobs that don’t need to run at an exact hour/minute of the day.

Here’s how to do it.

Add this to your crontab (this is one long line). This fires off your own personal anacron:

@hourly /usr/sbin/anacron -s -t $HOME/.anacrontab -S $HOME/.anacron

Add this to $HOME/.anacrontab to run your indexer daily (that’s the “1”) and after a 10-minute delay (that’s the “10”):

1 10 indexhome $HOME/bin/index-encrypted-homedir

Create the executable file $HOME/bin/index-encrypted-homedir with these contents:

#!/bin/bash
 
set -o errexit
set -o nounset
set -o pipefail
 
mkdir -p "$HOME/.var" "$HOME/.anacron"
updatedb -l 0 -n '.meteor .cache' -o "$HOME/.var/locate.db"

Finally, add this to your $HOME/.bashrc:

export LOCATE_PATH="$HOME/.var/locate.db"

Free Software Claus is Coming to Town

I help organize a conference for Free Software enthusiasts called SeaGL. This year I’m proud to report that Shauna Gordon McKeon and Richard Stallman (aka “RMS”) are keynote speakers.

I first invited RMS to Seattle 13 years ago, and finally in 2015 it all came together. In his words:

My talks are not technical. The topics of free software, copyright vs community, and digital inclusion deal with ethical/political issues that concern all users of computers.

So please do come on down to Seattle Central College on October 23rd and 24th, 2015 for SeaGL!

Yes, You Should Swap

If you’ve ever set up a machine by hand, you’ve probably had to decide how much of your disk to set aside as swap.

I’ve often wondered “why swap at all”? This quote by Nick Piggin from 2004 finally helped me answer the question.

no matter how much ram you have, swap can increase performance by allowing unused anonymous memory to be paged out, thereby increasing your maximum effective RAM

Found via this post on Hacker News, where the poster raises the point that some filesystem buffers might be extremely “hot” (frequently used), but might only fit in physical RAM (where they should be) if some swap space is available to page out other “cold” information.

Update 2016-12-22: except for Kubernetes nodes, apparently.

Debugging web tests on remote servers

I run “web tests” on a remote server. I use Selenium to act like a person interacting with a website, viewing and entering data. Selenium is pretty awesome; it can drive a real web browser like Firefox.
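
For example, a minimal Selenium sketch in Python (the URL and expected title are placeholders for whatever your tests cover, and Firefox plus its driver must be installed):

# Minimal Selenium example: drive a real Firefox, load a page, check the title.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get('https://example.com/login')       # placeholder URL
    assert 'Login' in driver.title, driver.title  # placeholder expectation
finally:
    driver.quit()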

Even better is to have these web tests run automatically every time I commit code. I use Jenkins for this. Jenkins even fires up a headless desktop so Selenium can run Firefox.

When a web test breaks (especially in some way I can’t reproduce on my local desktop), sometimes it helps to actually see what Jenkins sees as it runs the test. Here’s a quick guide for doing so on an Ubuntu GNU/Linux server.

  1. Connect to the remote server using SSH. Install VNC server:
    sudo apt-get install vnc4server
  2. On the remote server, become the user tests run as. For example:
    sudo su - ci
  3. Set a password for the VNC server using the vncpasswd command.
  4. Start a headless X server by running vncserver. Note the given display. If example.com:1 is included in the output of vncserver, the display is :1.
  5. Figure out which port the VNC server is using. I usually do something like

    sudo netstat -nape | grep '^tcp.*LISTEN.*vnc.*'

    Here’s some example output:

    tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      107        3099855     13233/Xvnc4     
    tcp6       0      0 :::5901                 :::*                    LISTEN      107        3099858     13233/Xvnc4

    By trial and error, I figured out that 5901 was the port I should use.

  6. Port-forward VNC to your local machine.

    1. Disconnect from the server.
    2. Reconnect, including -L10000:localhost:5901 on your SSH command line.
    3. Leave this connection open.
  7. On your local machine, connect a VNC client to localhost:10000. An X terminal should be displayed.

  8. In the X terminal, run your web tests.

  9. When finished debugging, kill the X server using the display noted earlier.
    vncserver -kill :1