When I hear about a command line application that I might want to use, oftentimes the first step in installing it is pip install or gem install. That installs the application and all of its dependencies, but it can also interfere with some other application and its own dependencies.

Just use virtualenv or rbenv

The usual solution to these dependency clashes is “just use a virtualenv.” That works, but only in a certain directory, and only if you remember to source ./bin/activate and so on. Maybe if I knew a bunch more about virtualenv and rbenv (or whatever folks use in the Ruby world, which I feel like changes from time to time), I’d know of a better solution to this problem. If you know of one, feel free to leave a comment.
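For reference, the dance I’m trying to avoid looks roughly like this (the paths are just examples):

virtualenv ~/venvs/sceptre           # create an isolated environment
source ~/venvs/sceptre/bin/activate  # remember this in every new shell
pip install sceptre                  # installs into the venv only
deactivate                           # and remember to leave it again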

But I’ve found another way that works well for me.

Give these apps their own containers

I run these kinds of CLI apps in their own containers. For example, I recently installed sceptre, a tool that deploys AWS CloudFormation stacks. The normal installation instruction for it is pip install sceptre. So to build it in a container, I use a Dockerfile like this:

FROM ubuntu

# pip to install the app, awscli since sceptre drives AWS
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y python-pip awscli
RUN pip install -U pip

# the application itself, plus troposphere for templates
RUN pip install troposphere
RUN pip install sceptre

# a mount point mirroring the OSX home directory layout
RUN mkdir /Users
VOLUME /Users

# a user account matching my account on the host
RUN useradd -M -d /Users/myuser myuser

ENTRYPOINT ["sceptre"]
CMD []
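I build the image with a tag matching the command name, since the wrapper script below expects an image called sceptre:

docker build -t sceptre .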

I could probably use a smaller base image instead of Ubuntu, but disk is cheap. And if I need to troubleshoot something inside the container, I know Ubuntu very well and my time isn’t cheap, so Ubuntu makes sense for me.

Sceptre, being an AWS-focused tool, relies on troposphere and the awscli, so I’m installing them too, but this is obviously specific to this particular application.

Then I create a /Users directory and a user account for myself, matching the account I use on my development MacBook. So inside the container, my user account and home directory are the same as the ones I use outside the container (on OSX).
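One refinement I haven’t needed yet: the username could be passed in as a build argument instead of hard-coded. A rough sketch (untested, and the arg name is my own invention):

ARG username=myuser
RUN useradd -M -d /Users/${username} ${username}

built with docker build --build-arg username=$(id -un) -t sceptre .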

How I invoke the container

Remember, I’m using this approach because I want the command line application to “just work”. I don’t want to have to remember to cd into some directory every time I need this command.

So I place a wrapper script in /usr/local/bin, named after the command it wraps. In this case, that’s /usr/local/bin/sceptre (on OSX, not inside the container).

#!/bin/bash

docker run -it --rm -w "$(pwd)" -v /Users/myuser:/Users/myuser --user myuser sceptre "$@"
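The script needs to be executable, of course: chmod +x /usr/local/bin/sceptre.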

I run this script on OSX, and it invokes my container for me. I’m using --rm because I want to throw away the container after each run; I don’t need to reuse it. I’m using -w "$(pwd)" to set the working directory inside the container to the directory I’m in outside the container. I’m using -v /Users/myuser:/Users/myuser to attach a volume so that my home directory outside the container is also my home directory inside the container. sceptre is the command I want to run, and "$@" passes along any command line flags or options that I’ve given to the command.
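With the wrapper in place, the containerized command behaves like a native one. From any directory under my home directory:

cd ~/some-project
sceptre --help    # actually runs inside a fresh container

The only visible difference is the brief startup delay of docker run.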

It works well for me

Each app has its own container, so there’s no possibility of dependency clashes. If I need to upgrade an app to a newer version, I can use Docker tags to keep the older version around and roll back if needed. Overall I’m happy with this approach.
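For example (the tag names here are just illustrative), before rebuilding with a newer release I can keep the current image around:

docker tag sceptre sceptre:previous   # keep the known-good version
docker build -t sceptre .             # rebuild, picking up the newer package

If the new version misbehaves, I can point the wrapper script at sceptre:previous instead.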

Let me know in the comments if you have any improvements to this approach, or if you have a virtualenv based approach that gives me the benefits outlined above but without building any containers.