Please note that this post, first published over a year ago, may now be out of date.
At the beginning of my IT adventure, my manager asked me to produce a specific visual output from the application I was developing. The application served as a dashboard for plotting different spatial variables on the map, and I was asked to plot one of them. As I knew that the feature was already implemented and tested, I assumed it would be simple. In fact, I couldn’t have been more wrong.
The feature was apparently not working, and, to make things worse, I didn’t know why, as I couldn’t spot any errors during debugging. The map was simply empty. It was very suspicious, since I clearly remembered hours spent on testing this particular solution. I got a little bit stressed, as the time to deliver the screenshots was limited, and I had no idea what caused this situation.
During a thorough verification it turned out that the spatial files to plot were no longer in their old location on the file system we used. My colleague had moved them and hadn’t updated the path in the code. I got even more stressed, and since it was only the beginning of my professional journey - when, as I reluctantly admit, I thought working code was the ultimate goal - I pointed this out to my colleague in a way which turned out to be too direct for him. Obviously, moving all the files the app required without any notification was not very desirable either, but our discussion quickly descended into very unproductive blaming. My manager spotted it and gave us 15 minutes to come up with an efficient solution, instead of wasting time on our unfruitful discussion.
There could be only one solution to this problem, although it was not easy to come up with. We worked in a team of economists; we were both self-taught and both still very much learning. Nevertheless, by that time I had already started pursuing my interest in IT, and I was very happy to propose my idea.
Naturally, we needed a set of rules of conduct: all features properly tested, and all changes applied carefully and communicated properly. Obviously, what we needed was a CI/CD process.
What is CI/CD?
CI/CD stands for Continuous Integration and Continuous Delivery or Continuous Deployment. It is a set of rules and tools following the principle that “what can (reasonably) be automated, should be automated”. These rules and tools ensure that your version control system is the single source of truth and automate the testing, integration, and deployment of an application in one or multiple environments. That means your team doesn’t need to spend hours deploying or testing a new feature only to find out that introducing other changes will break it (as happened in the example above).
The process not only saves your developers time, as they can actually focus on code development, and your company’s money, but - what is even more important - eases communication between developers and minimises the risk of potential conflicts in the team, while enhancing team spirit and the sense of working towards a common goal. When the team has written and agreed to an automated test, you get buy-in. Having a bot tell a colleague that they made a mistake just doesn’t feel as awkward as pointing out the exact same thing directly in a code review.
What is the difference between CI and CD? Continuous Integration eases the process of maintaining the codebase in the version control system by following the principle of committing as frequently as possible - at least daily - which reduces merging overhead and minimises divergence from the main (or trunk) branch (avoiding merge hell, so to speak). Furthermore, CI is about automating the build process and the testing procedures instead of performing them manually, which minimises the probability of failure (and, in turn, reduces firefighting time when failures do occur, as previous versions are easy to roll back). Build and test happen whenever a commit is made to the main branch (hence, the continuous integration).
Continuous Delivery, in turn, is the natural extension of Continuous Integration and automates the deployment process. Developers no longer need to perform deployments manually, which means significant time savings and, in turn, more time to work on the code.
A typical CI/CD process involves multiple stages: code commit or merge, build, test, and deploy. You can build the CI/CD pipeline on your own, but it is worth bearing in mind that a variety of proven, ready-to-use tools already exists on the market.
1. Jenkins
Jenkins is an open source automation server written in Java that supports building, testing, and deploying applications. It runs as a standalone application, without cloud-based infrastructure, which means you need to deploy and maintain it on your own infrastructure. Jenkins’ controller-agent architecture enables distributed builds (you can also run an active-passive controller setup if you use the paid version of Jenkins from CloudBees).
You configure Jenkins either via its web GUI (or by invoking the API commands that back that web GUI). More commonly, you define a Jenkins pipeline by writing code in a dialect of Groovy, known as a Jenkinsfile. That can be a benefit if the team are already familiar with Java or Groovy (Groovy is derived from Java), but is otherwise a drawback.
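As a sketch, a minimal declarative Jenkinsfile might look something like this (the stage names and shell commands are placeholders for your own build steps):

```groovy
// A minimal declarative pipeline sketch; 'make build' and 'make test'
// stand in for whatever build and test commands your project uses.
pipeline {
    agent any                    // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'make build'  // compile / package the application
            }
        }
        stage('Test') {
            steps {
                sh 'make test'   // run the automated test suite
            }
        }
    }
}
```

Committing a file like this to the root of the repository lets Jenkins (with the Pipeline plugin) pick it up and run the stages on each build.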
Most other CI/CD solutions have settled on semi-declarative configuration using YAML. By contrast, Jenkins demands very specific technical knowledge, so Jenkins skills are not easily transferable. Jenkins does support a variety of plugins, which make customisation possible. For example, there are plugins to manage a pool of EC2 spot instances so that you only pay for build agents that are actually in use.
The more plugins you add, the more overhead for Jenkins (meaning slower builds), and the bigger the risk that something breaks. You need to take care of plugin compatibility each time you upgrade Jenkins.
Obviously, the main disadvantage of Jenkins is the lack of cloud infrastructure underneath. This, combined with the lack of YAML support and the very specific language used for Jenkinsfile pipeline declarations, positions Jenkins as a little outdated. Nevertheless, with Jenkins one can configure the whole pipeline manually without writing code, which makes it a good starting point - it is easily accessible software, even for people without a DevOps background.
2. Travis CI and CircleCI
Travis CI and CircleCI are both cloud-based environments which allow for easily customisable test, build and deploy procedures. They have an intuitive GUI, which makes debugging and version management very simple. Pipelines are defined in YAML - you just need to add a config.yml file to your repository. YAML is a popular and human-friendly data serialisation language with the advantage of being very concise and both easily writable and readable. On top of that, most CI/CD tools use YAML as well, so these skills are easily transferable. You can easily choose the environment and software on which your pipeline will run. Both tools support parallel testing, which means you can spread tests across different executors and make your pipeline time-efficient. You can run Travis CI and CircleCI with GitHub, GitLab or Bitbucket repositories, which shouldn’t be a limitation, as these cover the most popular version control hosting services (you can also configure your own integrations with CircleCI via CircleCI orbs). Nevertheless, you need to bear in mind that the free tier for private repositories is limited. CircleCI uses caching and workflow features to reduce execution time and use resources more efficiently, which is especially important when performing expensive operations. Travis CI, on the other hand, supports build matrices, allowing you to run tests against different versions of software and packages, which makes your solution more robust. That may be of significant importance when not using Docker.
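For illustration, a minimal CircleCI configuration might look like this (the Docker image and commands are assumptions for a Python project; adjust them to your own stack):

```yaml
# .circleci/config.yml - a minimal sketch, not a definitive setup
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.12    # pick an image matching your stack
    steps:
      - checkout                   # fetch the repository
      - run: pip install -r requirements.txt
      - run: pytest                # run the test suite
workflows:
  main:
    jobs:
      - build-and-test
```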
3. Tools integrated with version control hosting services
There are a number of CI/CD tools integrated into version control hosting services: GitHub Actions, GitLab CI/CD, Bitbucket Pipelines, Azure Pipelines and AWS CodePipeline. Their advantage is undoubtedly simplicity: you have the same interface for both version control and CI/CD (which is always triggered by actions like a commit or merge, so it is very useful to store everything in one place). CI/CD processes are usually easy to set up and follow in the user interface. On the other hand, hosting your CI/CD pipeline in the same place where your code is stored couples your tooling to your version control system: if you move from GitHub to GitLab, you need to rewrite your CI/CD pipeline. Fortunately, the YAML syntax makes that relatively easy. As these solutions are relatively new, they usually do not support as many configuration options as Travis CI or CircleCI, but they should be completely sufficient at the beginning. All of the above-mentioned solutions are paid, but each includes a generous free tier (whose scope differs between tools).
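As an example, a GitHub Actions workflow lives under .github/workflows/ and follows a very similar YAML shape (the job name and commands below are illustrative placeholders):

```yaml
# .github/workflows/ci.yml - a hedged sketch of a CI workflow
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest         # GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4  # fetch the repository
      - run: make test             # placeholder for your build/test commands
```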
4. Flux CD and Argo CD
Both Flux CD and Argo CD follow the GitOps principle of using Git repositories as the ultimate source of truth and defining the desired application state with code. Using a version control system, one can prepare a declarative definition of the CD process (watch out - these tools do not support CI!), as well as of the needed infrastructure. Flux CD and Argo CD are designed specifically for Kubernetes and support generating Kubernetes manifests using Helm charts or Kustomize. In Flux CD you can define a manifest to scan a specific container image repository and automatically check whether there has been a new release, while Argo CD needs more manual intervention. That may be either an advantage or a disadvantage, depending on what you value more: automation or control. Although Flux CD has a CLI-first approach, you can still install a third-party GUI, e.g. the one offered by Weaveworks. Argo CD supports both a CLI and a GUI.
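To give a flavour of the declarative style, here is a sketch of an Argo CD Application manifest (the repository URL, path and namespaces are hypothetical):

```yaml
# An illustrative Argo CD Application: Git is the source of truth,
# and Argo CD keeps the cluster in sync with it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git
    targetRevision: main
    path: k8s/overlays/production   # Kustomize overlay (or a Helm chart path)
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift in the cluster
```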
What are you waiting for?
CI/CD is a good software practice that you should use to make your software products more reliable and easy to roll back in case of failure. Beyond that, it improves your team’s productivity and eases collaboration between development and operations teams. As in the real-life example described above, it can really help mitigate conflicts in your team. What is important to bear in mind, according to the DevOps philosophy, is that failures in IT happen mostly because of process inefficiencies rather than people’s mistakes, so implementing a robust pipeline may be a key to success.
Even though one can try to build their customised CI/CD solution from scratch (e.g. using AWS Lambda), the market offers a variety of CI/CD tools with different configuration capabilities to choose from and to make the process as effortless as possible.
If you’re still not convinced, let’s imagine that as a SaaS business you want to push new features to your software product in order to grow fast and beat the competition. It’s natural that you need to make sure that new features won’t break existing ones. CI/CD tools can definitely help with testing and ensuring resiliency of your application, as well as easy rollbacks in case of potential failure. You can set up a testing pipeline and run it easily in the cloud (e.g. using Travis CI or CircleCI), without bothering with setting up your own infrastructure. If you want to have more time to test it internally before pushing to production, or maybe forward only a percentage of traffic to the test deployment, CI/CD tools may be helpful with deploying code to another environment.
Keeping on top of all the latest features can feel like an impossible task. Is practical infrastructure-modernisation an area you are interested in hearing more about? Book a free chat with us to discuss this further.
This blog is written exclusively by The Scale Factory team. We do not accept external contributions.