As a follow-up to my previous post, I've fleshed out my CI/(almost-)CD workflow a bit more. I write "(almost-)CD" because I've decided that I don't want deployment to production to be completely automatic; I want to be able to manually mark a build as ready for production, at least for now.
When we last left off, I had Buildbot watching our git repository, and when it detected a change, it ran unit tests, and if the tests passed, updated the code on our VPS and triggered a reload. Since then, I've added email notifications (for all failures, and for success on the final step), and we've switched over to a Docker-based deployment. Here's what the process looks like now:
Buildbot still watches our git repository. When it detects a change, it checks out the code and runs a "docker build" on a remote Docker instance to build a new image of the application; the buildbot slave is still running on our VPS, which does not support Docker, so we need to run Docker on a separate host. I could also have created a new buildbot slave on a Docker-capable host, but that seemed like more work.
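In practice, pointing the Docker client at a remote daemon is just a matter of setting the DOCKER_HOST environment variable. Here's a minimal sketch of that build step; the host address and image name are made up for illustration, and the actual Buildbot step would just run the equivalent shell command:

```python
import os
import subprocess

def build_image(image_name, docker_host, context_dir=".", dry_run=True):
    """Compose (and optionally run) a `docker build` against a remote daemon.

    Setting DOCKER_HOST makes the local docker client talk to the
    Docker daemon on the remote host instead of a local one.
    """
    env = dict(os.environ, DOCKER_HOST=docker_host)
    cmd = ["docker", "build", "-t", image_name, context_dir]
    if not dry_run:
        # Runs on the remote daemon because of DOCKER_HOST in env.
        subprocess.check_call(cmd, env=env)
    return cmd

# Hypothetical remote build host and image name:
build_image("myapp", "tcp://docker-build-host:2375")
```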
The new image is tagged with the git commit hash, as well as the "staging" tag. Next, Buildbot runs the unit tests on the image itself. If the tests all pass, it pushes the image to our Docker repository on Tutum with the "staging" tag. Tutum watches for changes in the Docker repository, and when a new "staging" image is pushed, it redeploys our staging server. In the meantime, Buildbot sends me an email telling me that the build has passed.
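The tag/test/push sequence boils down to a handful of docker commands. A sketch, with the repository path, commit hash, and in-image test command all standing in as placeholders for the real ones:

```python
import subprocess

def tag_and_push_staging(repo, commit, dry_run=True):
    """Tag the freshly built image, test it, and push the 'staging' tag.

    Pushing the 'staging' tag is what Tutum watches for to redeploy
    the staging service.
    """
    cmds = [
        # Tag the just-built image with the git commit hash...
        ["docker", "tag", repo, "%s:%s" % (repo, commit)],
        # ...and with the "staging" tag.
        ["docker", "tag", repo, "%s:staging" % repo],
        # Run the unit tests inside the image (test command is a guess).
        ["docker", "run", "--rm", "%s:%s" % (repo, commit),
         "python", "-m", "unittest", "discover"],
        # Only reached if the tests pass: push the staging tag.
        ["docker", "push", "%s:staging" % repo],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.check_call(cmd)  # raises on the first failure
    return cmds
```

In Buildbot these are separate build steps, so a failed test run stops the chain before the push ever happens.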
Up to this point, everything since the git push has been automatic. After I get the email from Buildbot, I do a quick sanity check on our staging server, just in case the unit tests missed anything, and if all goes well, I re-push the Docker image, but this time with the "latest" tag. Again, Tutum notices the new "latest" image, and this time redeploys our production server.
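The manual promotion step is equally small: re-tag the already-vetted "staging" image as "latest" and push it. A sketch, with the repository path again a placeholder:

```python
import subprocess

def promote_to_production(repo, dry_run=True):
    """Re-tag the vetted 'staging' image as 'latest' and push it.

    Pushing the 'latest' tag is what triggers Tutum to redeploy
    the production service.
    """
    cmds = [
        ["docker", "tag", "%s:staging" % repo, "%s:latest" % repo],
        ["docker", "push", "%s:latest" % repo],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.check_call(cmd)
    return cmds
```

Because the "latest" tag points at the exact image that was tested and ran on staging, nothing is rebuilt between staging and production.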
There are a couple of things I really like about this setup. First, the number of manual steps is minimal; the only things I do after pushing the code are checking the staging site and re-pushing the image. Everything else happens automatically, which means there's less chance of me forgetting a step. Second, by using Docker images, I'm sure that the staging and production environments are exactly the same (or at least as close as possible).
One downside is that redeploying an image means a brief period of downtime. This can be avoided by using Blue/Green deployment instead of production/staging, but it's a bit more complicated to set up. That will probably be the next thing for me to look into.