locate is a lightning-fast command-line search utility. It first hit the press in the early 1980s, when James A. Woods argued that the tradeoff of nightly index updates was worth it for sub-second filesystem path matches.
The proposed architecture is simple but effective: incrementally encode all paths into a purpose-built binary database and perform matches with a linear scan. Since nearly all matches are partial (substring) matches, a linear scan generally outperforms binary search or other indexing tricks. Maintainers have kept this original architecture to the present day.
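The incremental (front-compression) encoding is easy to picture: each sorted path is stored as the length of the prefix it shares with the previous path, plus the differing suffix. Here's a toy sketch of the idea in Bash — illustrative only, not the actual database format:

```shell
#!/bin/bash
# Toy front-compression: emit each path as "<shared-prefix length> <suffix>",
# where the prefix is shared with the previous path in sorted order.
front_compress() {
  local prev="" p n
  for p in "$@"; do
    n=0
    while [ "$n" -lt "${#p}" ] && [ "$n" -lt "${#prev}" ] && \
          [ "${p:n:1}" = "${prev:n:1}" ]; do
      n=$((n + 1))
    done
    printf '%s %s\n' "$n" "${p:n}"
    prev=$p
  done
}

front_compress /usr/bin/locate /usr/bin/login /usr/lib/locale
# → 0 /usr/bin/locate
#   11 gin
#   5 lib/locale
```

Since neighboring paths share long prefixes, most entries shrink to a small number and a short suffix — which is why the database stays tiny even for millions of paths.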
The indexer is called updatedb and it generally runs nightly, as root. If you have an encrypted home partition (and you should), nothing in your $HOME will be indexed. One workaround is to index it yourself. To maintain security, I recommend storing the index inside your encrypted home directory too, so your path names never land on unencrypted storage.
I like to use anacron since it automatically performs a catch-up run if necessary. This is handy for “daily would be nice” jobs that don’t need to run at an exact hour/minute of the day.
Here’s how to do it.
Add this to your crontab (this is one long line). It fires off your own personal anacron: -s serializes job execution, -t points at your personal anacrontab, and -S stores anacron’s timestamp files under your home directory:
@hourly /usr/sbin/anacron -s -t $HOME/.anacrontab -S $HOME/.anacron
Add this to $HOME/.anacrontab to run your indexer daily (that’s the “1”) after a 10-minute delay (that’s the “10”):
1 10 indexhome $HOME/bin/index-encrypted-homedir
Create the executable file $HOME/bin/index-encrypted-homedir with these contents:
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

# Keep the database (and anacron's timestamps) inside the encrypted $HOME.
mkdir -p "$HOME/.var" "$HOME/.anacron"

# Index only $HOME (-U), skip the permission-visibility check, which is
# pointless for a single-user database (-l 0), and prune noisy
# directories by name (-n).
updatedb -l 0 -U "$HOME" -n '.meteor .cache' -o "$HOME/.var/locate.db"
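The three set -o lines are Bash’s usual fail-fast trio: errexit aborts on the first failing command, nounset rejects unset variables, and pipefail makes a pipeline’s exit status reflect any failing stage. That last one is the subtle one; a quick illustration:

```shell
#!/bin/bash
# By default a pipeline's status is the status of its LAST command,
# so an early failure is silently swallowed:
bash -c 'false | true'; echo "without pipefail: $?"
# → without pipefail: 0

# With pipefail, any failing stage fails the whole pipeline, which is
# what lets errexit stop the script at the right spot:
bash -c 'set -o pipefail; false | true'; echo "with pipefail: $?"
# → with pipefail: 1
```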
Finally, add this to your shell startup file (e.g. $HOME/.bashrc) so locate consults your personal database in addition to the system one:
export LOCATE_PATH="$HOME/.var/locate.db"
Here are follow-up materials for my talk at SeaGL 2017.
- Squirrels live ~12 years in the wild, but have lived up to 24 years in captivity.
- Our kids helped with this project.
Yesterday my friend Khan posted this:
(plain text source code)
TCP is handy for simple, reliable communications like this tiny toy logger. I run the server and clients in separate consoles on the same machine:
# TCP log server
nc -kl 8000 > server-log.txt
# TCP logging from netcat client
date | nc 127.0.0.1 8000
# TCP logging from socat client
date | socat STDIN TCP:localhost:8000
# TCP logging from Bash client
date > /dev/tcp/127.0.0.1/8000
The only bummer about TCP is that, in my example, other clients have to wait in line. We are logging, so I want fast, one-way communication from any number of clients to the server, and reliability of every log message is probably not critical. Let’s try UDP! I could just add -u to the netcat server args to use UDP datagrams, but a netcat UDP server gets a little wonky. The easy workaround is to use socat as the server instead. socat happily accepts datagrams from any number of clients, simultaneously.
# UDP log server
socat UDP-RECV:8000 STDOUT > server-log.txt
# UDP logging from netcat client
date | nc -q1 -4 -u 127.0.0.1 8000
# UDP logging from socat client
date | socat STDIN UDP-DATAGRAM:localhost:8000
# UDP logging from Bash client
date > /dev/udp/127.0.0.1/8000
Use at your own risk. The TCP version is surely the simplest, safest (ahem, still no auth: this is just a toy), and most reliable. Messages from different clients might get mangled together, too. I don’t know much about what’s going on under the hood here; insight welcome! Tested on Ubuntu 14.04.
I’ve heard a lot of Meteor news lately, but somehow I missed Sandstorm. Your own personal cloud. Install services easier than installing apps on your phone. Add machines and they self-organize into a cluster. This sounds just way too awesome. Looks like they use Meteor heavily. Jade Wang (formerly of the Meteor Development Group) is a co-founder.
Apps must be packaged for Sandstorm (made into “grains”). The list of ported apps is pretty inspiring. Included are: draw.io, LibreBoard, HackerSlides, Let’s Chat, Paperwork… All were new to me, several are written in Meteor, and I was able to check out all of these in seconds. I’m hooked.
I’ll be speaking about Meteor dev with Vim at the Seattle Meteor Meetup on Worldwide Meteor Day. Come on down!
If you’ve ever set up a machine by hand, you’ve probably had to decide how much of your disk to set aside as swap.
I’ve often wondered, “why swap at all?” This quote by Nick Piggin from 2004 finally helped me answer the question.
no matter how much ram you have, swap can increase performance by allowing unused anonymous memory to be paged out, thereby increasing your maximum effective RAM
Found via this post on Hacker News, where the poster raises the point that some filesystem buffers might be extremely “hot” (frequently used), but might only fit in physical RAM (where they should be) if some swap space is available to page out other “cold” information.
Update 2016-12-22: except for Kubernetes nodes, apparently.
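If you want to watch this trade-off on your own Linux box, the kernel exposes the relevant counters in procfs (standard paths; no extra tools needed):

```shell
#!/bin/bash
# How much swap is configured, and how much of it is in use?
# SwapCached is memory that was paged out and later read back in,
# but still has a copy sitting in swap.
grep -E '^Swap(Total|Free|Cached)' /proc/meminfo

# pswpin/pswpout count pages swapped in/out since boot; steadily
# growing numbers mean cold anonymous memory really is being paged
# out, freeing RAM for the hot page cache Piggin describes.
grep -E '^pswp(in|out)' /proc/vmstat
```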
The Seattle Disaster Relief Trials was a blast! Great idea, Jesse.
The balloon got wedged into my helmet at the 1st checkpoint and just stayed there the whole ride.
There’s also a *very* brief cameo of me buckling in a bucket of water at the 2nd checkpoint (1 min 52 sec in): http://q13fox.com/2013/06/21/bike-heroes-prepare-for-disaster/
I’m proud that LJ accepted my Hadoop/MapReduce article for the April 2013 issue! If you’re new to MapReduce and are interested in learning about same, this article is for you.
I’ll also be presenting a talk based on the article at LinuxFest Northwest 2013.