
Every project is going to start accumulating a list of things that would be nice to have, or that you just need to get around to, but not right at this moment. There are all sorts of tools you can use to track these things, from GitHub Issues to Tomboy all the way down to the lowly todo.txt file. That’s exactly what we’re here to talk about.

I’ll admit that most of my coding this summer has been hacking on side projects and small experiments, and I’ve found that even Trello can be a bit too heavy for these kinds of projects.

So, I’m using the digital equivalent of some sticks and tinder to manage the short-term wants lists for my different projects: shell functions! This doesn’t scale if you have more than one person working on a project, but for small experiments it works great. I’ve written a number of small scripts to automate working with my todo.txt file, which I stick in the root folders of my different projects.

One important thing to note: I’ve made sure to ignore todo.txt globally in my ~/.gitignore file. You’ll also need to configure git to use this global ignore file by running:

git config --global core.excludesfile '~/.gitignore'
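Put together, the whole setup looks something like this. The demo below runs against a throwaway HOME directory so it doesn’t touch your real configuration:

```shell
# Create the global ignore file and point git at it, using a
# throwaway HOME so this demo leaves real config alone.
export HOME="$(mktemp -d)"
echo "todo.txt" >> "$HOME/.gitignore"
git config --global core.excludesfile '~/.gitignore'

# In any repository, todo.txt is now ignored:
cd "$(mktemp -d)" && git init -q .
touch todo.txt
git check-ignore todo.txt
```

git expands the `~` in core.excludesfile itself, so the single quotes are fine here.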

I started by writing the function todo. It’s a really simple function, which does just two things. First, it calls grep with the recursive flag to search for TODOs that might be hiding anywhere in the current directory. Second, it checks for the presence of a todo.txt file and, if it exists, prints it out with some pretty formatting.

function todo () {
  echo "Searching for TODOs..."
  grep -R "TODO" .
  if [ -e "todo.txt" ]; then
    echo "\033[32mFound todo.txt:\033[0m"
    while read item; do
      echo " * $item"
    done < todo.txt
  fi
}

The output of todo looks like this (color codes have been stripped):

Found todo.txt:
 * add robots.txt
 * Update twitter badge to point to pace_bl account
 * Projects page needs love
 * About page needs to be built

As a complementary piece to this todo function, I wrote another function called “needs”. It seemed like the most natural language to use, and it lets me type “needs some more work on index page” (who doesn’t love vague tickets?).

function needs () {
  if [ ! -e "todo.txt" ]; then
    touch todo.txt
  fi
  if [ -n "$1" ] && [ -f "todo.txt" ]; then
    echo "Appending to todo.txt: $@"
    echo "$@" >> todo.txt
  fi
}

It just concatenates its arguments and appends the todo item to todo.txt, if the file exists. If it doesn’t exist, it naturally creates the file first and proceeds.
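A quick self-contained session in a scratch directory shows the flow; the todo items here are just made-up examples:

```shell
# Demo of the needs function in a scratch directory.
needs () {
  if [ ! -e "todo.txt" ]; then
    touch todo.txt
  fi
  if [ -n "$1" ] && [ -f "todo.txt" ]; then
    echo "Appending to todo.txt: $@"
    echo "$@" >> todo.txt
  fi
}

cd "$(mktemp -d)"
needs add robots.txt
needs some more work on index page
cat todo.txt
```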

I definitely know I’ll outgrow this solution at some point, but for now it’s working very well. I could see myself writing another function, or probably a ruby script, to handle switching my todo.txt files over to Github issues or Trello board items. Even then, I’ll probably keep using this solution for personal projects and let them grow organically as the need arises.

A Quick Update

I received some feedback from a kind developer, Jan Andersen, in the Programming community on Google+. Jan pointed out that I didn’t need to parse todo.txt line by line with bash. I could do the heavy lifting of formatting each output line by simply invoking sed instead.

Update #2

I’ve found that this system worked even better for me with some small tweaking and wanted to share the revisions it’s gone through. I’m leaving my original solution up above because I think a lot of this is personal preference.

I found the todo function a bit too far-reaching, so I shrank its scope. I also implemented the sed suggestion from Jan. Now it simply reads todo.txt if it exists, much like before.

function todo () {
  if [ -e "todo.txt" ]; then
    echo "\033[32mFound todo.txt:\033[0m"
    sed -n 's/^/ \* /p' todo.txt
  fi
}

The grep functionality was useful and didn’t move far. Its new home is over in greptodo. Eagle-eyed readers will notice I’m also passing the -I option now, which ignores binary files.

function greptodo () {
  grep -IR "#TODO" .
}

I also realized I wanted a quick way to browse through all these todo.txt files I’ve been creating. I wrote findtodo for just that. It’s just a one-line wrapper around find and sed. find searches your current directory and below, printing each todo file and its contents in an easy-to-read format.

function findtodo () {
  find . -name "todo.txt" -exec sh -c 'echo "\n{}"; sed -n "s/^/ \* /p" {}' \;
}

I find this function the most useful out of them all. If I cd to my directory for open source/side projects and run it, I’ll get a nice menu of all the things I might want to work on. I’ve found it a lot easier to keep multiple side projects going by working this way.
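As a sketch of that workflow, here’s findtodo run against a scratch projects directory. The project names and todo items are invented for the demo (note the in-string `{}` substitution relies on GNU find):

```shell
# findtodo demo against a scratch projects directory with two
# made-up projects, each holding its own todo.txt.
findtodo () {
  find . -name "todo.txt" -exec sh -c 'echo "\n{}"; sed -n "s/^/ \* /p" {}' \;
}

cd "$(mktemp -d)"
mkdir -p blog api
echo "add robots.txt" > blog/todo.txt
echo "document the endpoints" > api/todo.txt
findtodo
```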

Instead of fumbling with “What do I work on right now?” I can just ask my projects folder, glance through the contents and pick something that sounds fun. I’ve actually found it a little too conducive to keeping me motivated, as the urge to finish just one more task can be overwhelming. Obviously this is true no matter what system is used, but what’s important is making sure you do use something. Doing this with even the tiniest of projects has definitely kept me more organized and productive.

I’ve recently been trying out Vagrant for a couple different tasks in my development workflow, and I’ve found it to be truly wonderful in certain roles. For those that haven’t heard of it, Vagrant is basically a wrapper that makes managing development virtual machines much less painful. I’m using it with VirtualBox, but it also supports software such as VMware and server environments including Amazon EC2.

However, this isn’t a “Getting Started with Vagrant 101” blog post. The main documentation is a great resource and it’s very easy to get things up and running. What I found a little bit harder was figuring out how to fit Vagrant into my workflow and feel comfortable using it. I wanted to be able to step between my running Vagrant box and my local development environment and barely be able to notice, and this is something that took a little more tinkering to get just right.

I won’t go into too much depth trying to sell you on all the reasons you might want to use Vagrant, but I think it’s a good idea to mention some major motivations. First, if you use Vagrant, you can keep your development server configured exactly the same as production. Second, you can wipe your entire development setup at a moment’s notice and get a clean copy back in about the time it takes to get coffee. No more “weird, works on my machine”, and no more worrying you might mangle your local development environment and break other project setups.

Your main interaction with Vagrant is through your project’s Vagrantfile. The standard documentation has you create one for your new project, shows how you’ll select a “box” from the Vagrant Cloud (and even share your own, if you’re inclined to) and then tweak that base configuration to get the environment you need. Additional configuration is accomplished by providing provisioning shell scripts, which can be run either as an elevated user or as the developer account in the VM. Note this is far from the only way to provision your VM, and Vagrant supports more robust solutions such as Chef and Puppet.

Other configuration options are exposed directly in the Vagrantfile, which is really just a Ruby file. These options let you control Vagrant itself, adding things like port forwarding, SSH agent forwarding and folder syncing.
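As a rough sketch, a per-project Vagrantfile using those options might look like this; the box name and port numbers are placeholders, not taken from any real project:

```ruby
# Hypothetical per-project Vagrantfile; box name and ports are
# placeholders chosen for illustration.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  # Forward the app's port so the host browser can reach it.
  config.vm.network "forwarded_port", guest: 3000, host: 3000

  # Reuse the host's SSH keys inside the VM.
  config.ssh.forward_agent = true

  # Keep the project directory synced into the VM.
  config.vm.synced_folder ".", "/vagrant"
end
```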

If you want to dig into the actual meat of how the above pieces fit together, I highly recommend following along with the docs before continuing here.

After I finished setting up my first VM, my first two thoughts were “This is so cool!” and “This feels so weird”. Jumping into my VM with vagrant ssh was really convenient, but it didn’t feel quite like home. It didn’t feel like a place where I could be truly productive. Suddenly I had to keep track of whether I was in the VM or on my local machine, and whether I could fiddle with Rake tasks or run git commands instead.

I wanted to improve this experience. It took some digging to find some of the other configuration options that Vagrant offers. You can actually create a vagrant.d folder in your home directory to add extra provisioning instructions that are user global. You can see how I’ve set up my directory here.

It should be very familiar compared to the per-project setup you likely have already performed. Drop a Vagrantfile in here and add some config settings and you’ll be able to modify the behavior of every VM you provision as your user. Pretty cool. Mine just has three lines. I turn on ssh agent forwarding to make sure I can take advantage of it everywhere without explicitly turning it on for each project, and I call two shell scripts. One runs privileged, while the other does not.
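Based on that description, the user-global Vagrantfile would look roughly like this; the exact script filenames and paths are my assumption:

```ruby
# Sketch of a user-global Vagrantfile (~/.vagrant.d/Vagrantfile)
# matching the three lines described above; script paths are assumed.
Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.vm.provision "shell", path: "~/.vagrant.d/base_pkg.sh", privileged: true
  config.vm.provision "shell", path: "~/.vagrant.d/dotfiles.sh", privileged: false
end
```

Vagrant merges this file with each project’s own Vagrantfile, so these settings apply to every VM you bring up as your user.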

The two scripts I added are base_pkg and dotfiles. The first makes sure the apt-get mirrors are up to date and installs tools I wouldn’t feel at home without (git, vim, zsh). Then the second clones my entire dotfiles repo and runs the bootstrap script in the root of it. I found this was the most insulated way to get my environment customizations into my VMs, but some may find it a tad hacky. It also requires a manual git pull if I need to pull down any dotfile changes.
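The two scripts might look roughly like this; the dotfiles repository URL and the bootstrap script name are assumptions based on the description above, not the actual files:

```shell
# base_pkg.sh -- runs privileged: refresh mirrors, install basics.
apt-get update
apt-get install -y git vim zsh

# dotfiles.sh -- runs as the unprivileged VM user: clone the dotfiles
# repo and run its bootstrap script (URL and script name hypothetical).
git clone https://github.com/<your-user>/dotfiles.git "$HOME/.dotfiles"
cd "$HOME/.dotfiles" && ./bootstrap
```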

However, this lets me basically run vagrant ssh and live completely inside my development VM while editing files from my real machine. Combining SSH forwarding with my dotfiles and pre-installed git, it’s a seamless jump between the two. I haven’t run into any real complications with this setup yet, and in fact it’s been working quite well for me.

This is my first step into the world of development with Vagrant, and I’ve only been at it for a few weeks. There might be much nicer ways to interact with my boxes that I haven’t yet discovered, and I’m sure things could be improved. If you have any suggestions or want to share how you use Vagrant, feel free to reach out to me on Twitter. I’ll likely mention you here, especially if you know of a better way of approaching this problem. In the meantime, I hope this helps some people feel more comfortable using Vagrant while they develop.

Rust is a new open source systems programming language which guarantees memory safety and supports concurrency without data races. It’s not actually an official Mozilla project, but rather something to come out of the Mozilla community. It’s a compiled language with a minimal runtime that manages to feel like C++ and Ruby at the same time. The current “stable” release is 0.10, and many things are in flux as developers work hard to push towards the 1.0 release.

I’ve just been scratching the surface of the features of Rust, but I was very intrigued by the built in unit testing framework and how it integrates with Rust code.

Your unit tests can live within the same file as your actual code. I’m not sure whether this is Rust best practice or not; the standard idioms for Rust code are still being worked out, too. If a function is marked with the #[test] attribute, it tells the Rust compiler that the function is a unit test. Here’s an example.

fn three_divides(num: int) -> bool {
  num % 3 == 0
}

fn main() {
  for num in range(1, 10) {
    if three_divides(num) { println("Three divides " + num.to_str()) }
    else { println("Three doesn't divide " + num.to_str()) }
  }
}

#[test]
fn test_three_doesnt_divide() {
  assert!(!three_divides(4));
}

#[test]
fn test_three_divides() {
  assert!(three_divides(6));
}

As a quick aside, notice that the statement in the function three_divides is not terminated with a semi-colon. Rust interprets this in a special way, and automatically returns the value of the statement as the function’s result.

Running your application is pretty straightforward. Invoke the Rust compiler, rustc, with your source file and it’ll hopefully output a binary. Run that binary and watch as your code runs. The test functions were ignored; their symbols were never even added to your executable.

λ ~/ rustc divide.rs
λ ~/ ./divide 
Three doesn't divide 1
Three doesn't divide 2
Three divides 3
Three doesn't divide 4
Three doesn't divide 5
Three divides 6
Three doesn't divide 7
Three doesn't divide 8
Three divides 9

When it’s time to run your tests, compile your application again, but this time pass the --test flag to rustc. It will remove your main function, instead compiling your application with a test runner that automatically knows to run all the functions marked with #[test]. Note that assert! takes a boolean expression; if the expression evaluates to true, the unit test passes, and if it’s false, the test fails.

λ ~/ rustc --test divide.rs
λ ~/ ./divide 

running 2 tests
test test_three_divides ... ok
test test_three_doesnt_divide ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured

Having this kind of unit testing support baked right into the language is really nice, and I couldn’t stop thinking about how convenient this kind of testing is. It lowers the barrier to writing tests and removes a lot of the headache of setting up a testing framework or environment. If you know some other cool features of the Rust testing system, or I’ve committed a best-practices faux pas, drop me a line on Twitter.