Technical musings from an opinionated Platform Engineer/Leader

A bash alias for GitHub Pull Request creation


I am fond of feature branch git workflows. If the team prefers additional structure, gitflow is also a great tool. In my current position, the bulk of my work takes place in repositories with few maintainers, so a less-structured workflow is more comfortable.

Most of my work happens on GitHub, so the feature branch technique is mildly altered to fit some GitHub specifics. The main one is the Pull Request: the concept is not specific to GitHub, but creating a GitHub pull request is not something git itself supports.
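As a sketch of what such an alias can look like (the helper names and the `?expand=1` compare-URL approach are my assumptions here, not necessarily the alias from the full post):

```shell
# Derive "owner/repo" from a GitHub remote URL (handles SSH and HTTPS forms).
gh_slug() {
  echo "$1" | sed -E 's#.*github\.com[:/]##; s#\.git$##'
}

# Build the GitHub compare URL that pre-fills a pull request
# from the current branch against the default branch.
gh_pr_url() {
  local slug branch
  slug=$(gh_slug "$(git config --get remote.origin.url)")
  branch=$(git rev-parse --abbrev-ref HEAD)
  echo "https://github.com/${slug}/compare/${branch}?expand=1"
}

# The alias itself just opens that URL (use xdg-open on Linux):
alias gpr='open "$(gh_pr_url)"'
```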

Read more ⟶

Automatically deploy Hugo to S3 and Cloudfront


This could be better automated, but for now, it was easy enough to set up and it works well for my needs.

S3cmd

S3cmd is mature and full-featured. Since Hugo ships as a single binary (awesome!), s3cmd seems like the easiest companion tool for getting your static site onto S3. Out of the box, s3cmd gives us the following features:

  • sync behavior - only upload changed files
  • --acl-public - set a public ACL (this way you do not need to set a policy on the bucket)
  • --delete-removed - when something is removed locally, remove it from s3

First install s3cmd and run s3cmd --configure with appropriate IAM credentials for your target bucket. Be sure this is configured for the user who will edit your site (or add the explicit config file option to the s3cmd command).
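Put together, a deploy looks something like this (the bucket name is a placeholder; Hugo writes the generated site to ./public by default):

```shell
#!/bin/sh
set -e

hugo  # regenerate the site into ./public

# Sync to S3: only changed files are uploaded, locally removed files
# are deleted from the bucket, and everything gets a public-read ACL.
s3cmd sync \
  --acl-public \
  --delete-removed \
  public/ s3://example.com/
```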

Read more ⟶

ELK (Elasticsearch, Logstash, Kibana) index restore with Hubot


After getting ELK (Elasticsearch, Logstash, Kibana) up and running, an early challenge is managing your indices. I’m assuming Elasticsearch 1.0.0 or greater; at that version the API provides dead-simple mechanisms for everything from closing/deleting indices to snapshot and restore procedures (close/delete have been around since pre-1.0.0; snapshot/restore is 1.0.0 and newer).

It’s easy to put together a quick shell script and cron job to manage the regular tasks (I did: elasticsearch-logstash-index-mgmt). However, consistency is important, and now that Elasticsearch (the company) maintains the entire ELK stack, their own Curator is worth careful investigation.
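For a flavor of those mechanisms (assuming an Elasticsearch node on localhost:9200, the default logstash-YYYY.MM.DD index naming, and a snapshot repository already registered as `my_backup`):

```shell
# Compute the index name for N days ago (GNU date, with a BSD date fallback).
index_for_days_ago() {
  date -u -d "$1 days ago" +logstash-%Y.%m.%d 2>/dev/null \
    || date -u -v-"$1"d +logstash-%Y.%m.%d
}

# Close an old index (still on disk, no longer consuming heap):
#   curl -XPOST "localhost:9200/$(index_for_days_ago 30)/_close"
# Delete it outright:
#   curl -XDELETE "localhost:9200/$(index_for_days_ago 90)"
# Snapshot and restore (1.0.0 and newer):
#   curl -XPUT  "localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true"
#   curl -XPOST "localhost:9200/_snapshot/my_backup/snap_1/_restore"
```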

Read more ⟶

Asgard and Hubot: Simplified AMI deployment to AWS


If you are not familiar with these tools, you should spend some time investigating Asgard (deployment management for AWS; from NetflixOSS) and Hubot (extensible chatbot; from GitHub).

The Goods

hubot-asgard (github) | hubot-asgard (npm)

`npm install hubot-asgard` should get you most of the way there; check the readme on either github or npm for additional details about environment variables and configuration. Once installed, `hubot help asgard` will show all the commands.

The Problem

Concerns about AMI management and deployment to Amazon Web Services quickly lead to the nebulous Auto Scaling group feature of EC2 (why isn’t this in the management console?). Once you find yourself with a handful of scripts built on one of Amazon’s SDKs that amount to launch configuration creation and auto scaling group updates with a dash of load balancer add/remove, you start to wonder if there’s an easier way. There are several. Elastic Beanstalk and OpsWorks aim to address this problem, and they do a good job. I will only add that if you like fine control of your environment, you’ll quickly move from Elastic Beanstalk to OpsWorks (in beta at the time of writing), and if that still leaves you wanting more control, eventually you’ll get to Asgard.
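For context, the hand-rolled version of that workflow amounts to something like the following (sketched with awscli; every name and AMI ID here is a placeholder):

```shell
# Create a new launch configuration pointing at the freshly baked AMI...
aws autoscaling create-launch-configuration \
  --launch-configuration-name myapp-v2 \
  --image-id ami-12345678 \
  --instance-type m1.small

# ...then swap it into the auto scaling group so new instances use it
# (and, via the group's load balancer config, get registered with the ELB).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name myapp \
  --launch-configuration-name myapp-v2
```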

Read more ⟶

Create nodejs debian package with debuild


I ran into a couple of issues when building a debian package for nodejs.

  • The Python build process creates a bunch of *.pyc files that cause problems when re-running debuild after errors
  • The Python configure script doesn’t map cleanly onto classic configure scripts (notably --build vs --dest-cpu)
  • Make encounters issues when passed the standard params
  • The automated tests run into problems with server validation when run via debuild

I took the quick and easy solution in all cases. I didn’t resolve the *.pyc issue; I simply started with a clean directory for each pass. The rest of the solutions came from making a barebones build run via debian/rules:
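The overrides in question look roughly like this (a dh(7)-style sketch; the exact flags are assumptions based on the issues above, not the file from the full post):

```makefile
#!/usr/bin/make -f

%:
	dh $@

# nodejs's Python-based configure doesn't accept --build and friends,
# so override configure with the flags it does understand.
override_dh_auto_configure:
	./configure --prefix=/usr --dest-cpu=x64

# The test suite's server validation fails under debuild; an empty
# override target skips the tests entirely.
override_dh_auto_test:
```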

Read more ⟶