About a year ago, I transitioned from working on TextNow’s Android integration test framework to the build and release team, where our mandate was to work with engineering teams across the company and support their transition to a CI/CD (Continuous Integration / Continuous Delivery) pipeline. Since then, we have built more complete CI/CD pipelines for the vast majority of TextNow’s codebases, leveraging the power of Jenkins and Docker. In this article, let’s walk through some of the technical decisions we made, along with the reasoning behind them.
Here’s the problem we had: how can we effectively manage dependencies across multiple Jenkins slaves? We were (and still are) using Puppet to manage dependencies for our production systems, but Puppet doesn’t handle conflicting dependency versions well. If we used Puppet for the build fleet, we’d have to dedicate different nodes to different dependency versions. What we wanted was for any job to run on any node, so that when one team is in crunch time, every build resource can be put to work.
The solution? Docker. With Docker, each platform specifies its dependencies in a Dockerfile, ensuring that those dependencies are accurate and isolated from every other platform’s. This means one set of unit tests can run against Ruby 2.0.0 and another against Ruby 2.3.1, without resorting to Ruby Version Manager. It also ensures that every single node has a common set of dependencies, and properly tagged Docker images mean you can easily roll back to the last working set of dependencies without needing a separate rollback strategy.
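To sketch what this looks like in practice (the file below is illustrative, not one of our actual Dockerfiles), pinning the base image tag is all it takes to fix the Ruby version for one platform’s test suite:

```dockerfile
# Hypothetical Dockerfile for one team's unit tests. Pinning the
# base image tag fixes the Ruby version for every build, no matter
# which Jenkins slave the job lands on.
FROM ruby:2.3.1

WORKDIR /app

# Install gems before copying the source so Docker's layer cache
# is reused when only application code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .
CMD ["bundle", "exec", "rake", "test"]
```

Another team’s Dockerfile can start with `FROM ruby:2.0.0` and the two images will never interfere, even on the same node.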
Some of our platforms benefited from Dockerization more than others. It was very easy to create a Docker container that ran our unit tests and built TextNow.com. Dockerizing our website worked so well, in fact, that we migrated the production site from a Puppet-managed AWS EC2 instance to Amazon ECS. Gone was the one-hour manual deploy process, which involved removing one EC2 instance from the load balancer, updating that instance, and then putting it back in. With our new Jenkins jobs, we can push new code to a canary server, add that canary to the production load balancer, and then roll the build out to the rest of production, all with one button push. It doesn’t take an hour anymore, either: the whole process takes us about fifteen minutes.
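The shape of that push-button rollout can be sketched as a declarative Jenkins pipeline. Everything below is illustrative: the stage names, registry, cluster, and service names are placeholders, not TextNow’s actual configuration, and this assumes the ECS task definitions reference the `latest` tag so a forced deployment pulls the freshly pushed image.

```groovy
// Sketch of a canary-then-production rollout, under the assumptions above.
pipeline {
    agent any
    stages {
        stage('Build and push') {
            steps {
                sh 'docker build -t myregistry/website:latest .'
                sh 'docker push myregistry/website:latest'
            }
        }
        stage('Deploy canary') {
            steps {
                // Force the canary ECS service to re-pull the image.
                sh 'aws ecs update-service --cluster prod --service website-canary --force-new-deployment'
            }
        }
        stage('Roll out') {
            steps {
                // A human checks the canary, then presses the button.
                input 'Canary looks healthy. Roll out to production?'
                sh 'aws ecs update-service --cluster prod --service website --force-new-deployment'
            }
        }
    }
}
```

The `input` step is what turns a risky manual procedure into a single approval click.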
Some platforms required a little massaging to work well with Docker. Our Android platform has a massive number of dependencies and supports some pretty old Android versions. Using Android Studio to pull down dependencies is easy, but it requires a GUI and doesn’t sync well across multiple Jenkins slaves. There is an Android CLI, but all the documentation steers you away from it, and finding the packages you need can be difficult. On top of that, Android components ship with licenses that someone has to accept, and a few of them cannot be accepted from the terminal at all: those we had to approve once in Android Studio and then copy the resulting license files into the Docker image. While this problem was complex, in some ways containerization made it easier; once the Android application built correctly on your laptop, you were guaranteed that it would work on every single Jenkins slave. Docker allowed me to solve a hard problem once, and only once.
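A fragment of an Android build image might look like the following. This is an assumed setup, not our exact Dockerfile: the SDK path, tool locations, and package versions are placeholders, and the `licenses/` directory is assumed to hold license files exported from a developer machine where they were accepted in Android Studio.

```dockerfile
# Illustrative fragment of an Android CI image (paths and versions assumed).
FROM ubuntu:16.04

ENV ANDROID_HOME=/opt/android-sdk
# ... install a JDK and unpack the Android command-line tools here ...

# Accept every license that CAN be accepted from the terminal.
RUN yes | ${ANDROID_HOME}/tools/bin/sdkmanager --licenses

# Licenses that had to be accepted once in Android Studio: copy the
# resulting license files from a developer machine into the image.
COPY licenses/ ${ANDROID_HOME}/licenses/

# Pull the exact platform and build-tools versions the app needs.
RUN ${ANDROID_HOME}/tools/bin/sdkmanager \
    "platforms;android-25" "build-tools;25.0.3"
```

Once this image builds, every slave that pulls it has an identical, fully licensed SDK.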
There were many lessons learned while working with Docker as well. When we started, some of our senior engineering staff were concerned about diagnosing what was happening inside the containers and on the physical host, so most of our Docker containers are still based on Ubuntu, where a full set of familiar debugging tools is available. Basing our images on something smaller, like Alpine, would let us pull down changes faster and would reduce the amount of disk space our Docker images occupy.
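The trade-off is easy to see side by side. This comparison is illustrative (package names are real, but the test image itself is hypothetical); the Alpine variant typically weighs in at tens of megabytes rather than hundreds, at the cost of a more spartan debugging environment.

```dockerfile
# Ubuntu-based: familiar tooling, larger image.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y ruby ruby-bundler

# Alpine-based equivalent: much smaller, faster to pull.
# FROM alpine:3.8
# RUN apk add --no-cache ruby ruby-bundler
```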
While our Docker implementation strategy has had its ups and downs, I am proud of what the build and release team has built. All engineering departments now rely on the infrastructure that we have designed and built, and Docker is a key part of that strategy.