@madpilot rants

Using openssl to encrypt development scripts that have secrets

I like having scripts that can pull production data down to development – the best kind of fake data is real data. Usually that involves taking a database dump and downloading any uploaded files. Unfortunately, that also means remembering long command line arguments, and dealing with credentials for both the database and the file server.

Up until now, I’ve kept a secure notepad of command line arguments with secrets in my password manager, but that doesn’t really scale if you want to be able to supply different arguments in different contexts (the number of times I’ve forgotten to change an argument…) – what I really wanted was a script I could call that abstracted the nasty command line call with long arguments into a slightly less nasty command line call with shorter arguments. Ideally it could also be stored in source control so others could use it.

The app I’m currently working on has a little helper bash script that does common tasks, like bringing up docker, running bundler – things that cost me valuable keypresses – and I really wanted to add a sync from production script. And this is how I did it.

The bash script (let’s call it /bin/lazy.sh) is just a stupid set of functions and case statements:

command=$1
shift

function run {
  local command=$1
  shift

  case $command in
    backend)
      docker-compose run backend "$@"
      ;;
    frontend)
      docker-compose run frontend "$@"
      ;;
  esac
}

case $command in
  run)
    run "$@"
    ;;
esac

This allows me to build up a nice little DSL of helper commands, meaning I don’t have to type so much.

$ lazy.sh run backend

Now, the fancy bit. Create another script, called lazy.secret (not stored in source control)

command=$1
shift

function sync {
  local command=$1
  shift

  case $command in
    database)
      PGPASSWORD=supersecretpassword pg_dump -h database-server.aaaaaa.ap-southeast-2.rds.amazonaws.com -d database_name -U username
      ;;
    uploads)
      AWS_ACCESS_KEY_ID=amazonaccesskey AWS_SECRET_ACCESS_KEY=longstringofrandomcharacters s3_get bucket
      ;;
  esac
}

case $command in
  sync)
    sync "$@"
    ;;
esac

And then encrypt it:

openssl enc -aes-256-cbc -salt -in lazy.secret -out /bin/lazy.encrypted

When this happens, openssl will ask you for a passphrase to encrypt the file. The encrypted file that holds the secrets can be stored in source control (it’s not readable without the passphrase), and the passphrase itself can be stored in your password manager.
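If you want to script the whole round trip (to test it, say), openssl can also take the passphrase from a -pass argument instead of prompting. Note that pass: exposes the passphrase to ps and your shell history, so treat this purely as a throwaway sketch – the file names and passphrase here are made up:

```shell
# Create a throwaway secret, encrypt it, then decrypt it again
echo 'PGPASSWORD=example' > lazy.secret
openssl enc -aes-256-cbc -salt -pass pass:testphrase -in lazy.secret -out lazy.encrypted
openssl enc -aes-256-cbc -d -pass pass:testphrase -in lazy.encrypted
```

The last command prints the original contents, which is an easy way to confirm the passphrase works before you delete the plaintext.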

If you want to be slightly more paranoid, delete lazy.secret – you can always decrypt the file again if you need to modify it.

openssl enc -aes-256-cbc -d -in /bin/lazy.encrypted -out lazy.secret

Of course, remembering the command line arguments to decrypt the file is way too hard, so let’s get the original /bin/lazy.sh script to do that for us (it will ask you for the passphrase first):

command=$1
shift

function sync {
  # This tells bash to look for the file in the same directory as the source script, not the current working dir
  dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
  # Decrypt the file, and pipe it into bash. We need to supply /dev/stdin so the
  # command line arguments pass through (re-adding "sync" so the decrypted script
  # sees the same interface).
  openssl enc -aes-256-cbc -d -in "$dir/lazy.encrypted" | /bin/bash /dev/stdin sync "$@"
}

function run {
  local command=$1
  shift

  case $command in
    backend)
      docker-compose run backend "$@"
      ;;
    frontend)
      docker-compose run frontend "$@"
      ;;
  esac
}

case $command in
  run)
    run "$@"
    ;;
  sync)
    sync "$@"
    ;;
esac
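As an aside, the /dev/stdin trick is worth seeing in isolation – bash reads the script from stdin while the positional arguments still come through. This toy one-liner (nothing to do with the real scripts) just echoes them:

```shell
# The script arrives on stdin; "one" and "two" become $1 and $2
echo 'echo "args: $@"' | /bin/bash /dev/stdin one two
```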

Kablamo!


$ /bin/lazy.sh sync database
enter aes-256-cbc decryption password:

Limitations

It’s not perfect, but it’s a nice simple way to share complicated scripts that require credentials with your team. Things to consider:

  1. Running ps while the script is running will expose all the secrets. But this is also a problem when running the scripts by hand.
  2. Everyone needs to know the passphrase – perhaps openssl has a way of having different keys unlock the same file? Let me know in the comments.
  3. If the passphrase is exposed, your source control history allows an attacker to decrypt the secrets as they were at that point in time – in this case, rotate all the credentials in the script, change the passphrase and push. You should be doing this each time a team member arrives or leaves anyway.
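On point 2: I haven’t tried this in anger, but one sketch of an answer is to encrypt the file with a random session key, then wrap that session key once per team member with their public RSA key – everyone unlocks the same file with their own private key, and you only commit the encrypted file and the wrapped keys. The file and key names here are hypothetical:

```shell
# Generate a keypair for one (hypothetical) team member
openssl genrsa -out alice.pem 2048
openssl rsa -in alice.pem -pubout -out alice.pub

# Encrypt the secrets with a random session key
echo 'PGPASSWORD=example' > lazy.secret
openssl rand -hex 32 > session.key
openssl enc -aes-256-cbc -salt -pass file:session.key -in lazy.secret -out lazy.encrypted

# Wrap the session key for alice; commit lazy.encrypted and session.key.alice
openssl pkeyutl -encrypt -pubin -inkey alice.pub -in session.key -out session.key.alice

# Later, alice recovers the session key with her private key and decrypts
openssl pkeyutl -decrypt -inkey alice.pem -in session.key.alice -out session.key.out
openssl enc -aes-256-cbc -d -pass file:session.key.out -in lazy.encrypted
```

Revoking someone then just means deleting their wrapped key and rotating the session key.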

Setting up Buildbox when you run your own Git server

I’ve been trying to get a CI server for 88 Miles sorted out for ages. I tried to cobble something together before, but lost interest trying to keep track of previous builds and Ruby environments and other stuff. Me being me, I run my own Git server, so many of the existing CI servers out there won’t work for me (they assume GitHub), and I don’t really feel comfortable sending out the source of a private project to a third party server. I also have a VM in my office that runs various dev machines, so I have the hardware to do it. Well, it turns out that a buddy of mine has been building a CI server that is a perfect fit for my purposes! It’s called Buildbox, and it consists of an agent that you run on your own hardware that manages builds for you. This is roughly what I did to get it running.

The Build Server

I set up a VM running Gentoo, with 1GB of RAM and two virtual CPUs. It is as close to my prod environment as I can get, running RVM – this is important, as there is a bit of a trick to it. I also set up a specific user (called build) that will do the work. I initiate the Buildbox daemon using monit, so if it ever dies, it should get restarted automatically.

Note: I’ve omitted the Buildbox setup instructions – The website does a better job than I could.

The wrapper script

Because I’m running RVM, and because monit is fairly dumb in setting up bash environments, I wrote a small wrapper script that will start and stop the Buildbox executable:

#!/bin/bash
case $1 in
        start)
                if [[ -s "$HOME/.rvm/scripts/rvm" ]]; then
                        source "$HOME/.rvm/scripts/rvm"
                elif [[ -s "/usr/local/rvm/scripts/rvm" ]]; then
                        source "/usr/local/rvm/scripts/rvm"
                else
                        printf "ERROR: An RVM installation was not found.\n"
                        exit 1
                fi

                rvm use ruby-2.0.0-p247
                buildbox agent:start &
                echo $! > /var/run/buildbox/buildbox.pid
                ;;
        stop)
                kill `cat /var/run/buildbox/buildbox.pid`
                rm /var/run/buildbox/buildbox.pid
                ;;
        *)
                echo "Usage: buildbox-wrapper {start|stop}";;
esac
exit 0

If you call buildbox-wrapper start, it sets up an RVM environment, runs the buildbox agent and then saves the PID to /var/run/buildbox/buildbox.pid (make sure the /var/run/buildbox directory is writable by the build user). Calling buildbox-wrapper stop reads the PID file and kills the agent.

The monit config

Add the following to your monitrc:

check process buildbox with pidfile /var/run/buildbox/buildbox.pid
        start program = "/bin/su - build /bin/bash --login -c '/home/build/scripts/buildbox-wrapper start'"
        stop program = "/bin/su - build /bin/bash --login -c '/home/build/scripts/buildbox-wrapper stop'"

This will change to the build user, then run the wrapper script.

My build script

You enter this into the code section of the web UI (I was a little confused by this – I didn’t realise it was editable!)

#!/bin/bash
set -e
set -x

if [ ! -d ".git" ]; then
  git clone "$BUILDBOX_REPO" . -q
fi

git clean -fd
git fetch -q
git checkout -qf "$BUILDBOX_COMMIT"

bundle install
bundle exec rake db:schema:load
bundle exec rake minitest:all

This is pretty much a cut and paste from the Buildbox website, except I run Minitest. I also add tmp/screenshots/**/* and coverage/**/* to the artifacts section. Artifacts get uploaded after a build is complete; I use them to upload the screenshots from all of my integration tests, as well as my coverage reports.

Git post-receive

This script belongs on your Git server in the hooks directory. Name it post-receive.

#!/bin/bash

while read oldrev newrev ref
do
  branch=${ref#refs/heads/}
  author=$(git log -1 $newrev --format="format:%an")
  email=$(git log -1 $newrev --format="format:%ae")
  message=$(git log -1 $newrev --format="format:%B")

  echo "Pushing to Buildbox..."

  curl "https://api.buildbox.io/v1/projects/[username]/[projectname]/builds?api_key=[apikey]" \
          -s \
          -X POST \
          -F "commit=${newrev}" \
          -F "branch=${branch}" \
          -F "author[name]=${author}" \
          -F "author[email]=${email}" \
          -F "message=${message}"
  echo ""
done

Replace username, projectname and apikey with your own details. Make the file executable, and then push a change, and a build should start!
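Incidentally, the hook receives one "oldrev newrev ref" line per updated ref on stdin, so you can sanity-check the parsing without actually pushing anything. The revs here are made up; ${ref#refs/heads/} strips the prefix and copes with branch names that contain slashes:

```shell
# Simulate what git feeds the hook and check the fields come out right
echo "aaa111 bbb222 refs/heads/feature/login" | while read oldrev newrev ref
do
  branch=${ref#refs/heads/}
  echo "commit=${newrev} branch=${branch}"
done
```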

A quick and dirty way to ensure your cron tasks only run on one AWS server

88 Miles has a cron job that runs every hour and does a number of things, one of which is billing. I currently run two servers (yay failover!), which makes ensuring the cron job only runs once problematic. The obvious solution is to only run cron on one server, but this is also problematic, as there is no guarantee that the server running cron hasn’t died and another spun up in its place. The proper sysops solution is to run heartbeat or some other high-availability software to monitor cron – if the currently running server dies, the secondary takes over.

While that is the correct solution, and one I may implement later, it’s also a pain to set up, and I needed something quick that I could implement now. So I came up with this lo-fi solution.

Each AWS instance has an instance_id which is unique (it’s a string that looks something like this: i-235d734c). By fetching the instance_ids of all the production servers in my cluster and sorting them alphabetically, I can pick the first one and run the cron job on that. This setup uses the AWS Ruby library, and assumes that your cron job fires off a rake task, which is where this code lives.
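Before wiring it up to AWS, the election logic itself is tiny – sort some strings and see if the winner is you. A sketch with made-up instance ids:

```shell
# Three hypothetical instance ids in the cluster; pretend ours is i-235d734c
my_id="i-235d734c"
leader=$(printf 'i-9fa0c123\ni-235d734c\ni-77bb3456\n' | sort | head -n 1)
if [ "$leader" = "$my_id" ]; then
  echo "we are the leader - run the cron job"
fi
```

Every server runs the same check at cron time, but only the one holding the alphabetically-first id does the work.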

So step one is to get the IP address of the instance that the job is running on.

require 'net/http'
require 'uri'

uri = URI.parse('http://169.254.169.254/latest/meta-data/local-ipv4')
ip_address = Net::HTTP.get_response(uri).body

That is a magic URL that, when called from any AWS instance, returns metadata about the server – in this case the local IP address (which is also unique).

Next, pull out all of the servers in the cluster.

servers = []
ec2 = AWS::EC2.new :access_key_id => '[YOUR ACCESS KEY]', :secret_access_key => '[YOUR SECRET KEY]'
ec2.instances.each do |instance|
    servers << instance if instance.tags['environment'] == 'production'
end
servers.compact!

I tag my servers by environment, so I want to filter them based on that tag. You might decide to tag them differently, or just assume all of your servers can run cron jobs – that bit is left as an exercise for the reader. Replace [YOUR ACCESS KEY] and [YOUR SECRET KEY] with your access key and secret key.

Now, sort the servers by instance id

servers.sort! { |x, y| x.instance_id <=> y.instance_id }

Finally, check whether the first server’s local IP address matches the IP address of the server we are currently running on.

if servers.first.private_ip_address == ip_address
    # Success! We are the first server - run that billing code!
end

See? Quick and dirty. There are a few things to keep in mind…

  • If a server with a lower instance_id gets booted just as the cron jobs are due to run, you might get a race condition. You would have to be pretty unlucky, but it’s a possibility.
  • There is no testing to see if the nominated server can actually perform the job – not a big deal for me, as the user will just get billed the next day. But you can probably work around this via business logic if it is critical for your system – crons fail, so you need a way to recover anyway…

As I said – quick and dirty, but effective!

Delivering fonts from Cloudfront to Firefox

I use both non-standard and custom icon fonts on 88 Miles, which need to be delivered to the browser in some way. Since all of the other assets are delivered via Amazon’s Content Delivery Network (CDN) Cloudfront, it would make sense to do the same for the fonts.

Except that didn’t work in Firefox and some versions of Internet Explorer. It turns out they both consider fonts an attack vector, and will refuse to use them if they are delivered from another domain (commonly known as a cross-domain restriction). On a regular server, this is quite easy to fix: you just set the:

Access-Control-Allow-Origin: *

HTTP header. The problem is, S3 doesn’t seem to support it (it supposedly supports CORS, but I couldn’t get it working properly). It turns out, though, that you can selectively point Cloudfront at any server, so the simplest solution is to tell it to pull the fonts from a server I control that can supply the Access-Control-Allow-Origin header.

First, set up your server to supply the correct header. On Apache, it might look something like this (My fonts are in the /assets folder):

<Location /assets>
  Header set Access-Control-Allow-Origin "*"
</Location>

If you don’t use Apache, google it. Restart your server.
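For the record, the nginx equivalent would be something like this (untested on my end, since I run Apache – and assuming your fonts also live under /assets):

```nginx
location /assets {
  add_header Access-Control-Allow-Origin "*";
}
```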

Next, log in to the AWS console and click on Cloudfront, and then click on the i icon next to the distribution that is serving up your assets.

Amazon Cloudfront settings

Next, click on Origins, and then the Create Origin button.

In the Origin Domain Name field enter the base URL of your server. In the Origin ID field, enter a name that you’ll be able to recognise – you’ll need that for the next step.

Hit Create, and you should see the new origin.

New origin added

Now, click on the Behaviours tab, and click Create Behavior. (You’ll need to do this once for each font type that you have.)

Enter the path to your fonts in Path pattern. You can use wildcards for the filename. So for woff files it might look something like:

Setting up the cache

Select the origin from the Origin drop-down. Hit Create and you are done!

To test it out, use curl:

> curl -I http://yourcloudfrontid.cloudfront.net/assets/fonts.woff

HTTP/1.1 200 OK
Content-Type: application/vnd.ms-fontobject
Content-Length: 10228
Connection: keep-alive
Accept-Ranges: bytes
Access-Control-Allow-Origin: *
Date: Mon, 16 Sep 2013 06:05:35 GMT
ETag: "30fb1-27f4-4e66f39d4165f"
Last-Modified: Sun, 15 Sep 2013 17:14:52 GMT
Server: Apache
Vary: Accept-Encoding
Via: 1.0 ed617e3cc5a406d1ebbf983d8433c4f6.cloudfront.net (CloudFront)
X-Cache: Miss from cloudfront
X-Amz-Cf-Id: ZFTMU-m781XXQ2uOw_9ukQGbBXuGKbjlVsilwW44IS2MvHbeBsLnXw==

The header is there. Success! Now, pop open your site in Firefox, and you should see all of your fonts being served up correctly.