Getting started with a TensorFlow surgery classifier with TensorBoard data viz

Originally published at Opensource.com. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The most challenging part of deep learning is labeling, as you’ll see in part one of this two-part series, Learn how to classify images with TensorFlow. Proper training is critical to effective future classification, and for training to work, we need lots of accurately labeled data. In part one, I skipped over this challenge by downloading 3,000 prelabeled images. I then showed you how to use this labeled data to train your classifier with TensorFlow. In this part we’ll train with a new data set, and I’ll introduce the TensorBoard suite of data visualization tools to make it easier to understand, debug, and optimize our TensorFlow code.

Given my work as VP of engineering and compliance at healthcare technology company C-SATS, I was eager to build a classifier for something related to surgery. Suturing seemed like a great place to start. It is immediately useful, and I know how to recognize it. It is useful because, for example, if a machine can see when suturing is occurring, it can automatically identify the step (phase) of a surgical procedure where suturing takes place, e.g. anastomosis. And I can recognize it because the needle and thread of a surgical suture are distinct, even to my layperson’s eyes.

My goal was to train a machine to identify suturing in medical videos.

I have access to billions of frames of non-identifiable surgical video, many of which contain suturing. But I’m back to the labeling problem. Luckily, C-SATS has an army of experienced annotators who are experts at doing exactly this. My source data were video files and annotations in JSON.

The annotations look like this:

[
    {
        "annotations": [
            {
                "endSeconds": 2115.215,
                "label": "suturing",
                "startSeconds": 2319.541
            },
            {
                "endSeconds": 2976.301,
                "label": "suturing",
                "startSeconds": 2528.884
            }
        ],
        "durationSeconds": 2975,
        "videoId": 5
    },
    {
        "annotations": [
        // ...etc...

I wrote a Python script that uses the JSON annotations to decide which frames to grab from the .mp4 video files; ffmpeg does the actual grabbing. I decided to grab at most one frame per second, then sampled every fourth second to bring the total down to about 10,000 seconds (10,000 frames). After I figured out which seconds to grab, I ran a quick test to see whether a particular second was inside or outside a segment annotated as suturing (isWithinSuturingSegment() in the code below). Here’s grab.py:

#!/usr/bin/env python3
 
# Grab frames from videos with ffmpeg. Use multiple cores.
# Minimum resolution is 1 second--this is a shortcut to get fewer frames.
 
# (C)2017 Adam Monsen. License: AGPL v3 or later.
 
import json
import subprocess
from multiprocessing import Pool
import os
 
frameList = []
 
def isWithinSuturingSegment(annotations, timepointSeconds):
    for annotation in annotations:
        startSeconds = annotation['startSeconds']
        endSeconds = annotation['endSeconds']
        if timepointSeconds > startSeconds and timepointSeconds < endSeconds:
            return True
    return False
 
with open('available-suturing-segments.json') as f:
    j = json.load(f)
 
    for video in j:
        videoId = video['videoId']
        videoDuration = video['durationSeconds']
 
        # generate many ffmpeg frame-grabbing commands
        start = 1
        stop = videoDuration
        step = 4 # Reduce to grab more frames
        for timepointSeconds in range(start, stop, step):
            inputFilename = '/home/adam/Downloads/suturing-videos/{}.mp4'.format(videoId)
            outputFilename = '{}-{}.jpg'.format(video['videoId'], timepointSeconds)
            if isWithinSuturingSegment(video['annotations'], timepointSeconds):
                outputFilename = 'suturing/{}'.format(outputFilename)
            else:
                outputFilename = 'not-suturing/{}'.format(outputFilename)
            outputFilename = '/home/adam/local/{}'.format(outputFilename)
 
            commandString = 'ffmpeg -loglevel quiet -ss {} -i {} -frames:v 1 {}'.format(
                timepointSeconds, inputFilename, outputFilename)
 
            frameList.append({
                'outputFilename': outputFilename,
                'commandString': commandString,
            })
 
def grabFrame(f):
    if os.path.isfile(f['outputFilename']):
        print('already completed {}'.format(f['outputFilename']))
    else:
        print('processing {}'.format(f['outputFilename']))
        subprocess.check_call(f['commandString'].split())
 
p = Pool(4) # for my 4-core laptop
p.map(grabFrame, frameList)

Now we’re ready to retrain the model, exactly as we did before.

Using this script to snip out 10,000 frames took about 10 minutes, and retraining Inception to recognize suturing at 90% accuracy took another hour or so. I did spot checks with new data that wasn’t from the training set, and every frame I tried was correctly identified (mean confidence score: 88%, median confidence score: 91%).

Here are my spot checks. (WARNING: Contains links to images of blood and guts.)

Image                  Not suturing score    Suturing score
Not-Suturing-01.jpg    0.71053               0.28947
Not-Suturing-02.jpg    0.94890               0.05110
Not-Suturing-03.jpg    0.99825               0.00175
Suturing-01.jpg        0.08392               0.91608
Suturing-02.jpg        0.08851               0.91149
Suturing-03.jpg        0.18495               0.81505
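
This isn’t the exact script I used for the spot checks, but here’s a minimal sketch of how you could score a batch of frames against the retrained graph and report mean and median confidence. The file paths and the 'suturing' label name are assumptions based on the folder names grab.py creates and the files retrain.py writes.

import sys
import tensorflow as tf

graph_path = 'output_graph.pb'     # written by retrain.py
labels_path = 'output_labels.txt'  # written by retrain.py

# Load the labels and the retrained graph once, then score every image
# passed on the command line.
label_lines = [line.rstrip() for line in tf.gfile.GFile(labels_path)]

with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

scores = []
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    for image_path in sys.argv[1:]:
        image_data = tf.gfile.FastGFile(image_path, 'rb').read()
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        score = predictions[0][label_lines.index('suturing')]
        scores.append(score)
        print('{}: suturing score = {:.5f}'.format(image_path, score))

scores.sort()
print('mean: {:.2f}, median: {:.2f}'.format(
    sum(scores) / len(scores), scores[len(scores) // 2]))

You’d run it the same way classify.py is run in part one, passing the spot-check images as arguments.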

How to use TensorBoard

Visualizing what’s happening under the hood and communicating it to others is at least as hard with deep learning as it is in any other kind of software. TensorBoard to the rescue!

The retrain.py script from part one automatically writes the summary log files TensorBoard uses to graph what happened during retraining.

To set up TensorBoard, run the following inside the container after running retrain.py.

pip install tensorboard
tensorboard --logdir /tmp/retrain_logs

Watch the output and open the printed URL in a browser.

Starting TensorBoard 41 on port 6006
(You can navigate to http://172.17.0.2:6006)

You’ll see the TensorBoard web interface.

I hope this will help; if not, you’ll at least have something cool to show. During retraining, I found it helpful to see under the “SCALARS” tab how accuracy increases while cross-entropy decreases as we perform more training steps. This is what we want.
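
retrain.py already writes those scalar summaries for you. If you want the same SCALARS charts for a model of your own, the general pattern in the TF 1.x API used throughout this series looks roughly like this; the values fed in below are made up purely for illustration.

import tensorflow as tf

# Placeholders standing in for metrics your training loop already computes.
accuracy = tf.placeholder(tf.float32, name='accuracy')
cross_entropy = tf.placeholder(tf.float32, name='cross_entropy')

tf.summary.scalar('accuracy', accuracy)
tf.summary.scalar('cross_entropy', cross_entropy)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/retrain_logs/example', sess.graph)
    for step in range(100):
        # Fake numbers: accuracy climbs while cross-entropy falls.
        summary = sess.run(merged, feed_dict={
            accuracy: step / 100.0,
            cross_entropy: 1.0 - step / 100.0,
        })
        writer.add_summary(summary, step)
    writer.close()

Point tensorboard --logdir /tmp/retrain_logs at the logs as before and both curves appear under the SCALARS tab.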

Learn more

If you’d like to learn more, explore the other resources I used in writing this series; they may help you, too.

If you’d like to chat about this topic, please drop by the ##tfadam topical channel on Freenode IRC. You can also email me or leave a comment below.

This series would never have happened without expert help from Eva Monsen, Brian C. Lane, Rob Smith, Alex Simes, VM Brasseur, Bri Hatch, Rikki Endsley and the all-star editors at Opensource.com.

Learn how to classify images with TensorFlow

Originally published at Opensource.com. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Recent advancements in deep learning algorithms and hardware performance have enabled researchers and companies to make giant strides in areas such as image recognition, speech recognition, recommendation engines, and machine translation. Six years ago, the first superhuman performance in visual pattern recognition was achieved. Two years ago, the Google Brain team unleashed TensorFlow, deftly slinging applied deep learning to the masses. TensorFlow is outpacing many complex tools used for deep learning.

With TensorFlow, you’ll gain access to complex features with vast power. The keystone of that power is its ease of use.

In a two-part series, I’ll explain how to quickly create a convolutional neural network for practical image recognition. The computation steps are embarrassingly parallel and can be deployed to perform frame-by-frame video analysis and extended for temporal-aware video analysis.

This series cuts directly to the most compelling material. A basic understanding of the command line and Python is all you need to play along from home. It aims to get you started quickly and inspire you to create your own amazing projects. I won’t dive into the depths of how TensorFlow works, but I’ll provide plenty of additional references if you’re hungry for more. All the libraries and tools in this series are free/libre/open source software.

How it works

Our goal in this tutorial is to take a novel image that falls into a category we’ve trained and run it through a command that will tell us in which category the image fits. We’ll follow these steps:

[Figure: a directed graph from label to train to classify]

  1. Labeling is the process of curating training data. For flowers, images of daisies are dragged into the “daisies” folder, roses into the “roses” folder, and so on, for as many different flowers as desired. If we never label ferns, the classifier will never return “ferns.” This requires many examples of each type, so it is an important and time-consuming process. (We will use pre-labeled data to start, which will make this much quicker.)
  2. Training is when we feed the labeled data (images) to the model. A tool will grab a random batch of images, use the model to guess what type of flower is in each, test the accuracy of the guesses, and repeat until most of the training data is used. The last batch of unused images is used to calculate the accuracy of the trained model.
  3. Classification is using the model on novel images. For example, input: IMG207.JPG, output: daisies. This is the fastest and easiest step and is cheap to scale.

Training and classification

In this tutorial, we’ll train an image classifier to recognize different types of flowers. Deep learning requires a lot of training data, so we’ll need lots of sorted flower images. Thankfully, another kind soul has done an awesome job of collecting and sorting images, so we’ll use this sorted data set with a clever script that will take an existing, fully trained image classification model and retrain the last layers of the model to do just what we want. This technique is called transfer learning.

The model we’re retraining is called Inception v3, originally specified in the December 2015 paper “Rethinking the Inception Architecture for Computer Vision.”

Inception doesn’t know how to tell a tulip from a daisy until we do this training, which takes about 20 minutes. This is the “learning” part of deep learning.

Installation

Step one to machine sentience: Install Docker on your platform of choice.

The first and only dependency is Docker. This is the case in many TensorFlow tutorials (which should indicate this is a reasonable way to start). I also prefer this method of installing TensorFlow because it keeps your host (laptop or desktop) clean by not installing a bunch of dependencies.

Bootstrap TensorFlow

With Docker installed, we’re ready to fire up a TensorFlow container for training and classification. Create a working directory somewhere on your hard drive with 2 gigabytes of free space. Create a subdirectory called local and note the full path to that directory.

docker run -v /path/to/local:/notebooks/local --rm -it --name tensorflow tensorflow/tensorflow:nightly /bin/bash

Here’s a breakdown of that command.

  • -v /path/to/local:/notebooks/local mounts the local directory you just created to a convenient place in the container. If using RHEL, Fedora, or another SELinux-enabled system, append :Z to this to allow the container to access the directory.
  • --rm tells Docker to delete the container when we’re done.
  • -it attaches our input and output to make the container interactive.
  • --name tensorflow gives our container the name tensorflow instead of sneaky_chowderhead or whatever random name Docker might pick for us.
  • tensorflow/tensorflow:nightly says run the nightly image of tensorflow/tensorflow from Docker Hub (a public image repository) instead of latest (by default, the most recently built/available image). We are using nightly instead of latest because (at the time of writing) latest contains a bug that breaks TensorBoard, a data visualization tool we’ll find handy later.
  • /bin/bash says don’t run the default command; run a Bash shell instead.

Train the model

Inside the container, run these commands to download and sanity check the training data.

curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
echo 'db6b71d5d3afff90302ee17fd1fefc11d57f243f  flower_photos.tgz' | sha1sum -c

If you don’t see the message flower_photos.tgz: OK, you don’t have the correct file. If the above curl or sha1sum steps fail, manually download and explode the training data tarball (SHA-1 checksum: db6b71d5d3afff90302ee17fd1fefc11d57f243f) in the local directory on your host.

Now put the training data in place, then download and sanity check the retraining script.

mv flower_photos.tgz local/
cd local
tar xzf flower_photos.tgz
curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/10cf65b48e1b2f16eaa826d2793cb67207a085d0/tensorflow/examples/image_retraining/retrain.py
echo 'a74361beb4f763dc2d0101cfe87b672ceae6e2f5  retrain.py' | sha1sum -c

Look for confirmation that retrain.py has the correct contents. You should see retrain.py: OK.
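
Before kicking off training, you can optionally confirm the layout retrain.py expects: one subdirectory per label under flower_photos, each full of JPEGs. Here’s a quick sanity-check sketch (my own addition, not part of the original tutorial):

import os

image_dir = 'flower_photos'
for label in sorted(os.listdir(image_dir)):
    label_path = os.path.join(image_dir, label)
    if not os.path.isdir(label_path):
        continue  # skip stray files such as LICENSE.txt
    jpegs = [f for f in os.listdir(label_path) if f.lower().endswith('.jpg')]
    print('{}: {} images'.format(label, len(jpegs)))

Each of the five flower labels should report several hundred images.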

Finally, it’s time to learn! Run the retraining script.

python retrain.py --image_dir flower_photos --output_graph output_graph.pb --output_labels output_labels.txt

If you encounter this error, ignore it:
TypeError: not all arguments converted during string formatting Logged from file tf_logging.py, line 82.

As retrain.py proceeds, the training images are automatically separated into batches of training, test, and validation data sets.

In the output, we’re hoping for high “Train accuracy” and “Validation accuracy” and low “Cross entropy.” See How to retrain Inception’s final layer for new categories for a detailed explanation of these terms. Expect training to take around 30 minutes on modern hardware.
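
If you want a feel for why lower cross-entropy is better, here’s a tiny illustration (mine, not code from retrain.py): for a single example, cross-entropy is just the negative log of the probability the model assigned to the correct label, so confident correct guesses score near zero and confident wrong guesses score high.

import math

def cross_entropy(correct_index, predicted_probs):
    # -log(probability assigned to the correct class)
    return -math.log(predicted_probs[correct_index])

# Suppose index 0 is the correct label.
print(cross_entropy(0, [0.9, 0.05, 0.05]))  # confident and right: ~0.11
print(cross_entropy(0, [0.1, 0.80, 0.10]))  # confident and wrong: ~2.30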

Pay attention to the last line of output in your console:

INFO:tensorflow:Final test accuracy = 89.1% (N=340)

This says we’ve got a model that will, nine times out of 10, correctly guess which one of five possible flower types is shown in a given image. Your accuracy will likely differ because of randomness injected into the training process.

Classify

With one more small script, we can feed new flower images to the model and it’ll output its guesses. This is image classification.

Save the following as classify.py in the local directory on your host:

import tensorflow as tf, sys
 
image_path = sys.argv[1]
graph_path = 'output_graph.pb'
labels_path = 'output_labels.txt'
 
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
 
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
    in tf.gfile.GFile(labels_path)]
 
# Unpersists graph from file
with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
 
# Feed the image_data as input to the graph and get first prediction
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    # Sort to show labels of first prediction in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))

To test your own image, save it as test.jpg in your local directory and run (in the container) python classify.py test.jpg. The output will look something like this:

sunflowers (score = 0.78311)
daisy (score = 0.20722)
dandelion (score = 0.00605)
tulips (score = 0.00289)
roses (score = 0.00073)

The numbers indicate confidence. The model is 78.311% sure the flower in the image is a sunflower. A higher score indicates a more likely match. Note that there can be only one match. Multi-label classification requires a different approach.
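
Those scores come from a softmax layer, which turns the model’s raw outputs into a single probability distribution over the five labels; that’s why they sum to 1 and why exactly one label wins. Here’s a minimal illustration of softmax itself (not code from the model):

import math

def softmax(logits):
    # Exponentiate, then normalize so the results are positive and sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

scores = softmax([2.0, 0.5, -1.0, -1.5, -2.5])
print(scores)       # the largest logit gets the largest share
print(sum(scores))  # 1.0, up to floating-point rounding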

For more detail, view this great line-by-line explanation of classify.py.

The graph-loading code in that original classifier script was broken, so I swapped in the graph_def = tf.GraphDef() loading code shown above.

With zero rocket science and a handful of code, we’ve created a decent flower image classifier that can process about five images per second on an off-the-shelf laptop computer.

In the second part of this series, we’ll use this information to train a different image classifier, then take a look under the hood with TensorBoard. If you want to try out TensorBoard, keep this container running by making sure docker run isn’t terminated.

Encrypted partition path derivation via linear search through incrementally encoded packed data

locate is a lightning-fast command-line search utility. It first hit the press in the early 80s, when James A. Woods proclaimed that the tradeoff of nightly updates was worth it for sub-second filesystem path matches.

The proposed architecture is simple but effective: incrementally encode all paths in a purpose-built binary database and perform matches with linear search. Since nearly all matches are partial, linear search generally outperforms binary search or other optimizations. Maintainers have followed this original architecture to the present day.
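
To make “incrementally encode” concrete, here’s a toy sketch of the front-compression idea: store each sorted path as the length of the prefix it shares with the previous path plus the differing suffix. The real locate database format differs in the details, but this is the gist of why the database stays small and a linear scan stays cheap.

def front_encode(sorted_paths):
    # Each entry: (length of prefix shared with the previous path, remaining suffix)
    encoded, prev = [], ''
    for path in sorted_paths:
        common = 0
        while common < min(len(prev), len(path)) and prev[common] == path[common]:
            common += 1
        encoded.append((common, path[common:]))
        prev = path
    return encoded

def front_decode(encoded):
    paths, prev = [], ''
    for common, suffix in encoded:
        prev = prev[:common] + suffix
        paths.append(prev)
    return paths

paths = ['/home/adam/bin', '/home/adam/bin/index-encrypted-homedir', '/home/adam/src']
encoded = front_encode(paths)
print(encoded)                         # mostly short suffixes instead of full paths
print(front_decode(encoded) == paths)  # True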

The indexer is called updatedb and it generally runs nightly, as root. If you have an encrypted home partition (and you should), nothing in your $HOME will be indexed. One workaround is to index it yourself. To maintain security, I recommend storing the index inside your $HOME.

I like to use anacron since it automatically performs a catch-up run if necessary. This is handy for “daily would be nice” jobs that don’t need to run at an exact hour/minute of the day.

Here’s how to do it.

Add this to your crontab (this is one long line). This fires off your own personal anacron:

@hourly /usr/sbin/anacron -s -t $HOME/.anacrontab -S $HOME/.anacron

Add this to $HOME/.anacrontab to run your indexer daily (that’s the “1”) and after a 10 minute delay (that’s the “10”):

1 10 indexhome $HOME/bin/index-encrypted-homedir

Create the executable file $HOME/bin/index-encrypted-homedir with these contents:

#!/bin/bash
 
set -o errexit
set -o nounset
set -o pipefail
 
mkdir -p "$HOME/.var" "$HOME/.anacron"
updatedb -l 0 -n '.meteor .cache' -o "$HOME/.var/locate.db"

Finally, add this to your $HOME/.bashrc:

export LOCATE_PATH="$HOME/.var/locate.db"

Visualize per-character differences in a unified diff file


When someone sends you a patch, it is most easily viewed with syntax highlighting. The thing you need highlighted is what changed, at a character level.

You get this for free with many tools, including git (with git diff --word-diff), but that doesn’t help you with a stand-alone patch (diff) file.

Luckily, git ships with diff-highlight! Send a unified diff to that script’s stdin and you get beautiful syntax highlighting, including per-character changes. Here’s a wrapper for diff-highlight for npm users. On my system I found the script at /usr/share/doc/git/contrib/diff-highlight/diff-highlight, and I just run it with the Perl interpreter that ships with my Ubuntu 16.04 desktop.

 

UDP 1, 2, 3: netcat vs. socat

TCP is handy for simple, reliable communications like this tiny toy logger. I run the server and clients in separate consoles on the same machine:

# TCP log server
nc -kl 8000 > server-log.txt
 
 
# TCP logging from netcat client
date | nc 127.0.0.1 8000
 
 
# TCP logging from socat client
date | socat STDIN TCP:localhost:8000
 
 
# TCP logging from Bash client
date > /dev/tcp/127.0.0.1/8000

The only bummer about TCP is that, in my example, other clients have to wait in line. We are logging, so I want fast, one-way communication from any number of clients to the server, and reliability of every log message is probably not critical. Let’s try UDP! I could just add -u to the netcat server args to use UDP datagrams, but a netcat UDP server gets a little wonky. The easy workaround is to use socat as the server instead. socat happily accepts any datagram from multiple clients, simultaneously.

# UDP log server
socat UDP-RECV:8000 STDOUT > server-log.txt
 
 
# UDP logging from netcat client
date | nc -q1 -4 -u 127.0.0.1 8000
 
 
# UDP logging from socat client
date | socat STDIN UDP-DATAGRAM:localhost:8000
 
 
# UDP logging from Bash client
date > /dev/udp/127.0.0.1/8000

Use at your own risk. The TCP version is surely the simplest, safest (ahem, still no auth; this is just a toy), and most reliable. I don’t know much about what’s going on under the hood here. Insight welcome! Messages from different clients might get mangled together, too. Tested on Ubuntu 14.04.

Group chat me crazy

Group chat (IRC, Rocket.Chat, Let’s Chat, Mattermost, Zulip, Slack, etc.) rocks! Definitely use it. But, fair warning, here are my thoughts on group chat:

  1. Be available sometimes, especially when your coworkers are. Aim for healthy overlap.
  2. Be unavailable sometimes. Focus on your work and get stuff done.
  3. Managers: support your team doing both #1 and #2 above.
  4. Discuss and curate tribal knowledge in group chat, but distill often into other permanent, public, shared resources for your “knowledge base”, such as mailing lists, wikis, and (gasp) formal documentation.

Updating multiple git repositories at once

1. myrepos

http://myrepos.branchable.com – manage multiple repos (source)

2. One-liner

Assuming repo1, repo2 and repo3 are subdirs of the current dir, try:

parallel --tag -j0 git -C {} pull --ff-only ::: repo1 repo2 repo3

Note this assumes you’re using GNU Parallel. On Ubuntu 14.04, I had to do sudo apt-get install parallel. This uninstalled moreutils, which was a minor bummer.