CI/CD Tools: How to Choose Yours Wisely

Continuous integration (CI) and continuous deployment (CD) tools allow teams to merge, build, test, and deploy code branches automatically. Implementing them along with conventions like “commit frequently” forces developers to test their code as it’s combined with other people’s work. The results include shorter development cycles and better visibility of code evolution across teams.

Once you commit to using CI/CD in your software development cycle, you’re immediately faced with a slew of options: Travis, Jenkins, GitLab, CodeShip, TeamCity, and CircleCI, among others. Their names are catchy, but they hardly describe what the tools do. So here’s a roadmap for choosing the right tool for your needs.


What Platforms and Integrations Should It Support?

Whether you’re part of an enterprise or a startup, you’ll first need to figure out all your platform requirements. Think about your operating systems and their versions, programming languages, access to third-party APIs, libraries, frameworks, and testing suites; collect all this information and check that each tool supports it all. You don’t want painful troubleshooting sessions or loss of commercial provider support because of a version mismatch.

There are some shortcuts that I can provide:

  • BuildBot is a popular choice among Pythonistas. It’s also written in Python, and it’s very flexible. FOSS projects like Mozilla Firefox and MariaDB currently use BuildBot for their multi-platform builds and testing.
  • People using JIRA or Bitbucket will find that Bamboo and Bitbucket Pipelines fit their ecosystem. But beware of price increases as your number of environments and need for parallelization grow.
  • Jenkins’ impressive list of plugins offers compatibility with every major developer tool in the market. Java shops will probably be comfortable deploying Jenkins because it’s written in that language.
  • If you’re building .NET projects to run on a Microsoft stack, you’ll find two natural choices: Visual Studio Team Services (VSTS) and TeamCity. Both include Docker support.
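Collecting those platform requirements is worth automating. Here’s a minimal sketch of a script that checks a build host against a requirements matrix; the specific versions and operating systems in it are assumptions, so substitute your own:

```python
# Hypothetical sketch: verify a build host against your platform matrix
# before adopting a CI/CD tool. The version numbers and OS names below
# are examples, not recommendations.
import platform
import sys

REQUIREMENTS = {
    "python": (3, 9),           # minimum interpreter version (example)
    "os": {"Linux", "Darwin"},  # supported kernels (example)
}

def check_host(requirements: dict) -> list:
    """Return a list of human-readable mismatches (empty means OK)."""
    problems = []
    if sys.version_info[:2] < requirements["python"]:
        major, minor = requirements["python"]
        problems.append(
            "Python %d.%d is older than required %d.%d"
            % (sys.version_info[0], sys.version_info[1], major, minor)
        )
    if platform.system() not in requirements["os"]:
        problems.append("Unsupported OS: %s" % platform.system())
    return problems

if __name__ == "__main__":
    for issue in check_host(REQUIREMENTS):
        print("WARNING:", issue)
```

Running a check like this on every candidate build host turns a vague compatibility question into a concrete yes/no answer per machine.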

Where Will It Run and Who Will Maintain It?

Some enterprises have regulations to meet, so they can’t put their codebases in cloud services. Startups lack the manpower and time to run their own tools. Teams in both groups might be wary of security breaches caused by a wrong decision when using publicly available services. Whatever your position, here’s a set of tips to navigate the cloud vs. on-premises conundrum:

  • Cloud services ease the workload for your teams—but always at the cost of fees once usage passes certain thresholds.
  • Extra charges from CI/CD cloud services come in the form of higher concurrency, more build events, longer build time, and/or additional users.
  • Some cloud services have a zero-cost plan for small teams. You could test the waters using that in a greenfield project.
  • On-premises options like GitLab, Travis-CI, and TeamCity have hosted offerings that let you test before committing to run your own copy.
  • On-premises tools require resources ready to orchestrate your builds and tests. That means more complex tasks to run, troubleshoot, and maintain.
  • Teams should be aware of legal and business requirements when uploading code and data to cloud services.

Who Will Use It and How Often?

It’s only a matter of time before the results of using CI/CD become visible to management. At that point, your organization will be increasingly dependent on CI/CD. User adoption and build frequency will increase. Take a look at these items before deciding your team’s headcount and licensing requirements:

  • Give select managers read access to your CI/CD tool. That visibility improves the chances of them supporting these efforts.
  • Choose a tool that integrates with your organization’s favorite authorization method. Nobody wants one more password.
  • Get a list of desired environments and take note of how often your teams would hypothetically push code to them.
  • Recalculate your usage when you add a new team or user. Don’t let your bills get out of control.
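To make that recalculation concrete, here’s a back-of-the-envelope sketch for estimating monthly build minutes. Every number in it—team sizes, push frequency, build duration, and the plan quota—is a made-up assumption, so plug in your own:

```python
# Hypothetical back-of-the-envelope usage estimate. All the numbers
# below are assumptions -- substitute your own teams, build counts,
# build durations, and plan quotas.
def monthly_build_minutes(teams: int, pushes_per_dev_day: int,
                          devs_per_team: int, minutes_per_build: float,
                          workdays: int = 21) -> float:
    """Estimate total build minutes per month across all teams."""
    builds = teams * devs_per_team * pushes_per_dev_day * workdays
    return builds * minutes_per_build

# Example: 3 teams of 5 devs, 4 pushes per dev per day, 8-minute builds.
estimate = monthly_build_minutes(teams=3, pushes_per_dev_day=4,
                                 devs_per_team=5, minutes_per_build=8)
PLAN_QUOTA = 10_000  # hypothetical plan limit, in minutes
print("%.0f min/month; over quota: %s" % (estimate, estimate > PLAN_QUOTA))
# prints "10080 min/month; over quota: True"
```

Rerunning an estimate like this whenever a team or environment is added keeps billing surprises to a minimum.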

What Are Our Estimated Costs?

Depending on the nature of your company and team, you’ll either prefer an operating expense or a capital expenditure. I’m an advocate of pay-as-you-go services, personally. They allow you to keep costs down when you’re starting, and they remove complexity. But as soon as your SaaS yearly bills reach what you’d pay an engineer, you might want to get some of those toolsets under your control or migrate to other providers with lower costs. Here are some gotchas when estimating costs:

  • TeamCity and Bamboo require extra payment when you need more than one remote agent, so each additional environment costs more.
  • VSTS is free for teams of five people or fewer. After that, you pay per additional user each month.
  • GitLab only supports multiple Active Directory sources in its Enterprise Edition. Jenkins, on the other hand, supports this through a plugin.
  • Some SaaS offerings charge by build time. If this is the case with your chosen option, configure your pipelines to use prebuilt images with resolved dependencies.
  • Jenkins, GoCD, BuildBot, and GitLab-CE have OSI-compliant licenses, while Bamboo, TeamCity, Travis-CI Enterprise, and GitLab-EE are commercial products. Some of them charge based on the number of users, remote agents, or features.
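As an illustration of the prebuilt-image tip above, here’s a hypothetical base image that resolves dependencies once at image-build time instead of on every CI run. The base image, file names, and registry URL are all assumptions:

```dockerfile
# Hypothetical CI base image: dependencies are resolved once here,
# not on every pipeline run, so paid build minutes aren't spent
# reinstalling the same packages.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# CI jobs then reference this image, e.g.:
#   image: registry.example.com/ci-base:latest   (hypothetical registry)
```

Rebuilding this image only when `requirements.txt` changes shifts the install cost out of the per-build billing window entirely.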

Availability of Documentation, Training, and Support

It might not be evident now, but you’ll end up deploying to production with this new tool. So you’ll want to hold it to the same expectations you currently have for all your other production tooling because, at some point, you will probably

  • need training for more than your vanguard CI/CD team.
  • troubleshoot problems that affect your availability to internal and external customers.
  • extend your implementation to more environments, additional third-party integrations, and new business needs.
  • require emergency support beyond what a user community may offer.
  • provide onboarding documentation to new team members.

Metrics and Vendor Lock-In

I decided to leave these two strategic factors as final arguments—not for their lack of importance, but because they’re only important once you’ve successfully adopted a CI/CD tool. While it’s great to implement a delivery pipeline, it’s hard to see improvements over time if you’re not measuring and comparing to a baseline.

Here’s a set of questions that will help you know what’s relevant to measure in your line of business:

  • What’s your business focus? Is it maintaining a reliable platform? Is it increasing your user base? Or is it adding software features ASAP?
  • Does your team deploy software to production and let it fail as part of debugging?
  • How many bad builds are slipping through and failing in production?
  • What is your mean time to recover from a failure in production?
  • How does your delivery pipeline design affect a push to production during an outage incident?
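The mean-time-to-recover question above is easy to answer once you log incident timestamps. Here’s a minimal sketch; the incident data is invented for illustration:

```python
# Minimal MTTR sketch: mean of (recovery time - failure time)
# across production incidents. The timestamps below are made up.
from datetime import datetime, timedelta

incidents = [
    # (failure detected,            service recovered)
    (datetime(2023, 5, 1, 10, 0),  datetime(2023, 5, 1, 10, 45)),
    (datetime(2023, 5, 9, 14, 0),  datetime(2023, 5, 9, 16, 0)),
    (datetime(2023, 5, 20, 9, 30), datetime(2023, 5, 20, 9, 45)),
]

def mttr(pairs):
    """Mean time to recover: average downtime per incident."""
    total = sum((up - down for down, up in pairs), timedelta())
    return total / len(pairs)

print(mttr(incidents))  # prints "1:00:00"
```

Tracking this number before and after adopting a CI/CD tool is exactly the kind of baseline comparison the section above calls for.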

Now, regarding the vendor lock-in factor, the CI/CD ecosystem is booming. Odds are you’ll change tools as your requirements evolve, or your SaaS provider may go out of business. Don’t forget to consider how much effort it would take your team to migrate your delivery pipeline to a different tool or provider in case a change is necessary. Here’s some food for thought:

  • Can you export workflows and configurations?
  • Do exports come in formats that can be automatically transformed into inputs for other tools?
  • How tightly integrated is your CI/CD tool to specific computing resources?
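If the answers to the first two questions are yes, a migration can start with a small translation script. Here’s a hypothetical sketch that maps an exported pipeline definition onto a tool-neutral structure; every field name in it is invented, since real export formats vary by vendor:

```python
# Hypothetical migration helper: map a vendor's exported pipeline
# definition (invented field names) onto a tool-neutral structure
# that a second script could render for the new tool.
import json

exported = json.loads("""
{"pipeline": "build-and-test",
 "stages": [{"name": "build", "script": "make"},
            {"name": "test",  "script": "make test"}]}
""")

def to_neutral(cfg: dict) -> dict:
    """Flatten a vendor export into a minimal, portable shape."""
    return {
        "name": cfg["pipeline"],
        "steps": [(s["name"], s["script"]) for s in cfg["stages"]],
    }

neutral = to_neutral(exported)
print(neutral["steps"])  # prints [('build', 'make'), ('test', 'make test')]
```

The less your pipeline definition depends on vendor-specific features, the shorter a script like this stays.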

Take the Next Step

If your current software development cycle is made of stages where people step on each other’s toes while trying to survive the minefield created by libraries, APIs, and interservice dependencies, perhaps it’s time to choose a CI/CD tool and start optimizing your software delivery pipeline.

Have you experienced this process already? Share your tips with us in the comments section!

This post was written by Carlos “Kami” R. Maldonado, an engineer helping his company transition to DevOps. He specializes in Linux automation, and he’s experienced in all layers of infrastructure, from the application layer down to the cable. He’s transitioning from static, VM-based infrastructure to on-premises Kubernetes deployments.
