Consumer-Driven Contract Tests

The most useful way I’ve seen such contract tests work is for the team that consumes the messages to create and publish an artifact in their build pipeline for use by the producers of the messages. For this example, let’s have it create a tarball with a shell script entry point. The inputs to the shell script are a URL to the api-server and any other parameters required, such as user IDs, OAuth tokens, etc. The api-server team’s pipeline downloads the current production version of each contract tarball (and maybe the latest build as well); there will probably be a bunch of tarballs to download, depending on how many teams use that api. The pipeline extracts and runs each contract. If the tests succeed, the api team knows that their changes will not break production. If the tests fail, they know exactly which team they need to talk to, and can schedule downtime/releases/etc.
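
To make that concrete, here’s a minimal sketch of what the api-server side of that pipeline step might look like, written in python. Everything in it is assumed for illustration: the artifact URLs, the team names and the run-contracts.sh entry point are hypothetical placeholders for whatever convention the teams agree on.

    # Hypothetical sketch: download each consumer team's contract tarball,
    # unpack it, and run its shell entry point against a test api-server.
    import subprocess
    import sys
    import tarfile
    import tempfile
    import urllib.request

    API_SERVER_URL = "https://api-server.test.example.com"  # assumed test deployment
    CONTRACT_TARBALLS = {                                    # assumed artifact locations
        "web-team": "https://artifacts.example.com/web-contracts/production.tar.gz",
        "mobile-team": "https://artifacts.example.com/mobile-contracts/production.tar.gz",
    }

    failures = []
    for team, url in CONTRACT_TARBALLS.items():
        workdir = tempfile.mkdtemp(prefix=team + "-")
        tarball, _ = urllib.request.urlretrieve(url)
        with tarfile.open(tarball) as archive:
            archive.extractall(workdir)
        # "run-contracts.sh" and its arguments stand in for whatever entry point
        # convention the consumer and producer teams agree on.
        result = subprocess.run(["./run-contracts.sh", API_SERVER_URL], cwd=workdir)
        if result.returncode != 0:
            failures.append(team)

    if failures:
        print("contract tests failed for:", ", ".join(failures))
        sys.exit(1)
    print("all consumer contracts passed")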

Since the contract tests exercise the same code that will run in production (e.g. in the browser) to parse and understand the responses from the api, there is less chance that the contract test and the production code will diverge and a regression will make it into production. Obviously it is up to each client team to decide how to write the tests. If they are happy with a schema validator and a wrapper shell script, that’s up to them. I haven’t had a lot of success with that approach, but I am ready to be surprised. My dislike for schema validators stems from the gap between what they check and how production/runtime code maps responses into domain objects. If you transform fields, change the structure, handle legacy conversions, etc., what you end up with after parsing may not look much like the response you parsed, and all of that logic has to be correct and somehow duplicated by the schema validator.
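
As an illustration of the difference, here is a rough sketch of a contract test that drives the client’s own parsing code rather than a schema. The endpoint, the parse_user function and the field names are made up for the example; the point is only that the assertions run against the domain object the client actually builds, transformations and legacy conversions included.

    # Hypothetical contract test: fetch a response from the api-server and run it
    # through the same parsing code the client ships, then assert on the domain
    # object that comes out rather than on the raw JSON.
    import json
    import sys
    import urllib.request

    # In a real contract this would be imported from the client's production code
    # (e.g. `from myapp.api import parse_user`); it is inlined here only to keep
    # the sketch self-contained.
    def parse_user(payload):
        # The kind of mapping a schema validator would have to duplicate:
        # renamed fields, defaults, legacy fallbacks.
        return {
            "id": payload["user_id"],
            "name": payload.get("display_name") or payload.get("legacy_name", ""),
        }

    def main(api_server_url):
        with urllib.request.urlopen(api_server_url + "/users/42") as response:
            payload = json.load(response)
        user = parse_user(payload)
        assert user["id"], "expected a non-empty user id after parsing"
        assert user["name"], "expected a usable name after the legacy fallbacks"
        print("contract ok")

    if __name__ == "__main__":
        main(sys.argv[1])  # the api-server URL handed in by the shell entry point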

I really want to encourage client & server teams to become more courageous about making changes. If schema validation gets them there, that’s ok. I want us to trust our tests and automation and have little fear about pushing out new versions.

Building Clouds

I've spent this year building networks using Amazon Web Services and teaching people how to do it. So I'd like to share the code that I've used as teaching examples and as seeds for the creation of some pretty cool environments.

  • AWS PY was my first published attempt at interacting with AWS, using python & Puppet to instantiate, provision and control EC2 instances, as well as the seed for an incredibly cool project at the start of this year.
  • AWS RB followed on to duplicate the instantiation and provisioning of EC2 instances using Amazon's Ruby APIs and Chef Solo. It started as an itch I had to scratch, but it has since been used as the seed for some of my paid AWS work.
  • AWS VPC once again uses Amazon's Ruby APIs and Chef Solo, this time to provision a Virtual Private Cloud. Amazon provides excellent documentation on what a VPC is and how to provision one using its web-based admin console, but I wanted to create a cloud from the ground up using repeatable scripts with no admin console interaction. The only prerequisite is that you have an AWS account with API keys (and have provided all the necessary details to Amazon so that they will allow you to create EC2 instances).
  • AWS VPC PY is my latest example and still a work in progress. It is designed to showcase how to create an AWS VPC using python, Boto and Fabric rather than Amazon's Ruby APIs. I'm not wedded to either approach, so it is interesting to see which one works & feels better in different contexts; a minimal boto sketch follows this list.
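
As a taste of the boto approach, here is a minimal sketch of the first few VPC steps. It assumes boto 2.x with credentials in the usual places; the region and CIDR blocks are just examples, not what the linked projects use.

    # Minimal boto 2.x sketch: create a VPC, a subnet and an internet gateway.
    # Credentials are picked up from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    # (or ~/.boto); the region and CIDR blocks are examples only.
    import boto.vpc

    conn = boto.vpc.connect_to_region("us-east-1")

    vpc = conn.create_vpc("10.0.0.0/16")
    subnet = conn.create_subnet(vpc.id, "10.0.0.0/24")

    igw = conn.create_internet_gateway()
    conn.attach_internet_gateway(igw.id, vpc.id)

    print("created", vpc.id, subnet.id, igw.id)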

May you find these as useful as I have, and still do.

How to use rsync on OSX

I don’t really want to copy dot files (e.g. .DS_Store), and I want to avoid the bug that rsync exhibits with Time Capsule, where it loops and creates multiple ..DS_Store.xxxx files.

rsync -vrW --ignore-existing --exclude ".*" --progress ~/Movies/ /Volumes/Backup/Movies/
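
For reference: -v gives verbose output, -r recurses into directories, -W copies whole files instead of using rsync’s delta-transfer algorithm (usually quicker to a network volume like a Time Capsule), --ignore-existing skips files already present on the destination, --exclude ".*" skips anything whose name starts with a dot, and --progress shows per-file progress.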
