Introduction to DevOps
If you’re reading this, chances are you’re no stranger to the term DevOps. Many articles on this blog feature DevOps in one way or another. But do you really know what DevOps entails? Do you practice it in your own company? Or maybe you always wanted to start implementing it but didn’t know if it was right for you?
Regardless of your background, we hope this article will convince you that embracing DevOps is easier than you think, and that you might actually be ready to do it right now. How is this possible? Simple: DevOps is neither a methodology, a role, nor a set of tools. It’s a mindset shared by the entire team, orthogonal to the methodology or technology used within the team. Of course, using tools that support the DevOps philosophy can be helpful, but it is by no means a requirement. You can write code without buying a fancy IDE, and in the same way, you can practise DevOps without hoarding the tools often associated with it.
But Isn’t It a Cloud Thing?
Probably the biggest doubt people raise when they think about implementing DevOps is “my company doesn’t use cloud computing, what can DevOps offer me?” This is another question that’s pretty easy to answer. By no means does DevOps require you to utilize cloud computing. It’s true that DevOps in the cloud can take a project to new heights, but there is much to be gained simply by embracing the DevOps mindset.
That’s right, the mindset: to be a DevOps practitioner, you need to develop a particular way of solving problems and approaching the available solutions. Project development is mostly time-constrained. You have a particular deadline to deliver a set of features, and after you deliver them you can start something new. All the future bugs or new feature requests mark the opening of a new project (Project 2.0) or a new iteration of the current one. Either way, you can quite clearly establish the start and end dates, between which the work takes place. In this sense, project development is discrete when seen from a bird’s-eye view.
DevOps, on the other hand, requires thinking on a different plane. First of all, the timeline is continuous: once the project is deployed, it has to keep running in order to generate revenue. Second, people practising DevOps are usually productively lazy. They want to solve the current problem right now, but they always keep in mind how the solution can be reused in the future. If a job can be done once and then simply tweaked a bit for each subsequent project, productive laziness is at play. This way of thinking is not tied to clouds, clusters, networks, or single machines; it can be applied in each of these environments in the same way.
On-Premises Automation to the Rescue
From this productive laziness stems the cult of automation in DevOps environments. A task captured as a template that a machine can execute, filled in according to the current needs of a project, doesn’t just save time. Sometimes, the cost of automating a task can be higher than the cost of performing it manually. Does this mean automation is useless in such cases? Not at all. First, tasks written as code serve as unambiguous documentation, which provides added value in and of itself. That knowledge can then be easily passed on to other engineers, increasing the bus factor of the project. Secondly, such code can later be reused in other projects, thus increasing its value over time. Finally, code scales far better than manual operations, so if a need arises to increase the service’s throughput, automation is there to help.
What’s more, automated builds, tests, and deployments are all prerequisites for a CI/CD pipeline. All this automation is there for a reason. We want to automate all the boring and tedious stuff so people don’t have to do it. People tend to make mistakes when they’re bored, so time spent automating everyday chores is usually time saved debugging later on. Of course, we don’t want to overdo it. Automating something you do once a year and that takes five minutes might be taking things a step too far. When in doubt, you can always consult the famous xkcd strip.
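The break-even arithmetic behind that xkcd strip can be sketched in a few lines of Python. The function below is a hypothetical illustration (the name and parameters are our own, not from any library): it compares the time spent doing a task by hand over some horizon against the one-off cost of automating it.

```python
def automation_pays_off(task_minutes, runs_per_year, automation_hours, horizon_years=5):
    """Return True if automating the task beats doing it by hand over the horizon."""
    manual_cost = task_minutes * runs_per_year * horizon_years  # total minutes spent by hand
    automation_cost = automation_hours * 60                     # one-off minutes spent automating
    return automation_cost < manual_cost

# A five-minute chore performed every working day justifies two days of scripting:
print(automation_pays_off(task_minutes=5, runs_per_year=250, automation_hours=16))  # True

# A five-minute task done once a year is not worth a full day of automation work:
print(automation_pays_off(task_minutes=5, runs_per_year=1, automation_hours=8))  # False
```

The numbers are illustrative, of course; in practice you would also weigh the documentation and reuse benefits mentioned above, which this simple model ignores.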
Most of the popular tools associated with DevOps don’t require cloud APIs to work. Let’s have a look at some examples of how they can be useful in your daily work.
Docker

This one probably needs no introduction, as it’s discussed almost everywhere now. In short, Docker is a set of tools that makes working with app containers easy and convenient. An app container bundles a service together with all of its dependencies and runs it in an isolated environment; most of the services you encounter in modern applications can be packaged this way.
The biggest benefit of this approach is that containers should behave exactly the same way in production as they do in development. You can run a container on your machine, test it, upload it to Docker Hub (a registry where images are stored so others can access them), download it to a production machine, and run it there. The configuration and all the dependencies are already bundled, so the behaviour should be exactly the same. Using containers this way makes testing easier, and also encourages the development of more loosely coupled components instead of difficult-to-maintain monoliths.
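As a minimal sketch, a Dockerfile for a hypothetical Python web service might look like this (the service name and file layout are assumptions for illustration):

```dockerfile
# Hypothetical example: containerising a small Python web service.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are baked into the image, so they travel with it everywhere.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The same command runs in development and in production.
CMD ["python", "app.py"]
```

You would build it with `docker build -t myservice .` and run it anywhere Docker is installed with `docker run myservice`; the image, not the host, carries the dependencies.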
Keep in mind that one of our products, Kubernetes-as-a-Service, sets you up with a working Kubernetes cluster where you can orchestrate your Docker containers without you having to worry about the usual administrative tasks.
Vagrant

If you want to create and configure virtual machines in a repeatable manner, make Vagrant your tool of choice. It can manage local instances through VirtualBox or libvirt, allowing you to create development environments preloaded with the necessary tools or virtual production environments for testing. One interesting feature of Vagrant is that if you decide to move part of your business to the cloud at some point, it will also work with the VMs offered by the major cloud providers.
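A Vagrantfile describing such a repeatable VM might look roughly like this (the box name, resources, and provisioning script are illustrative assumptions):

```ruby
# Hypothetical Vagrantfile: a local Ubuntu VM provisioned with a shell script.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus = 2
  end

  # Provisioning is repeatable: you can destroy and recreate the VM at will.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL
end
```

`vagrant up` brings the machine to this exact state every time, which is precisely the repeatability the tool is for.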
Configuration Management

Configuration management seems to be one of the favourite spaces for DevOps tools. Although some are more popular than others, there is no shortage of alternatives, each with an eager following. The biggest names here are Chef, Puppet, Ansible, SaltStack, and StackStorm. They are not like-for-like replacements for each other; each has its own strengths and weaknesses and addresses problems in a slightly different way.
What they do have in common is that they allow you to store all of the provisioning steps required to get from a bare operating system installation to a fully working service. Sure, the same can be achieved with shell scripts, but configuration managers provide you with various modules that remove the burden of writing conditionals and checking exit statuses of each command run.
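To make the contrast with shell scripts concrete, here is a minimal sketch of an Ansible playbook (the host group, package, and file paths are assumptions for illustration). Each module is idempotent, so the conditionals and exit-status checks a shell script would need are handled for you:

```yaml
# Hypothetical playbook: from a bare OS installation to a running nginx service.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy the site configuration
      copy:
        src: files/mysite.conf
        dest: /etc/nginx/conf.d/mysite.conf
      notify: Restart nginx

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

Running the playbook twice is safe: tasks whose desired state already holds are simply skipped.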
Jenkins

The de facto standard for continuous integration at the moment is Jenkins. Although it’s starting to show its age and the SaaS competition is growing stronger, it’s still the go-to solution if you want an on-premises CI server.
It’s crucial to have some kind of CI in your company, as it helps limit the time spent debugging. We’ve all witnessed “But it works on my machine!” moments. A piece of software seems to work reliably when running on the developer’s machine, but fails to start or behaves unpredictably on any other machine in the company or once it’s been deployed. By having each commit built by a CI server, such situations can be recognized immediately and fixed accordingly.
But that’s just for starters. Tools like Jenkins can also help you test each commit, integrate various elements of your service, prepare artifacts for deployment, or even perform the deployments themselves. There are numerous ways such a tool can be used, but the more automated elements you already have, the more benefits you can reap right away.
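A declarative Jenkinsfile wiring those stages together might look like this sketch (it assumes the project exposes `build`, `test`, and `deploy` make targets, which is our invention for the example):

```groovy
// Hypothetical declarative pipeline: build, test, and deploy on every commit.
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'make build'   // assumes the project provides these make targets
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
        stage('Deploy') {
            when { branch 'main' }   // deploy only from the main branch
            steps {
                sh 'make deploy'
            }
        }
    }
}
```

Because the pipeline lives in the repository alongside the code, it is versioned, reviewed, and reused just like any other automation.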
Serverspec

Have you ever deployed a production server, only to find out later on that one of the services that was supposed to be disabled actually wasn’t? This and other mistakes can be quickly caught by running Serverspec against the server. Serverspec is derived from RSpec, a Ruby testing framework, so it can be thought of as unit testing for servers.
Various assertions can be provided in code, and the framework contains helpers for common operations. It’s by no means a substitute for other kinds of tests (application unit tests, integration tests, end-to-end tests), but rather aims to provide an early warning against either failed deployments or, when run periodically, abnormal situations.
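A minimal spec file might read like this sketch (which services and ports you assert on is entirely up to your deployment; nginx and port 80 here are illustrative):

```ruby
# Hypothetical Serverspec example: assert the server looks the way we deployed it.
require 'serverspec'
set :backend, :exec

describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

# Catch the "service that was supposed to be disabled" case:
describe service('telnet') do
  it { should_not be_running }
end

describe port(80) do
  it { should be_listening }
end
```

Run periodically, such specs turn "I think the server is configured correctly" into a checked assertion.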
A Cloud in Your Own Server Room
You now know that practising DevOps on-premises is not a problem. But there is another approach you may be willing to try. Although it requires some changes to be made, we at Stratoscale can take care of the hard work for you.
One of our products, Symphony Compute, lets you create a private cloud with an AWS-compatible API, using your own servers. Implementing DevOps in this way brings even more benefits, as it unleashes the full potential of all your tools’ cloud features.
This article introduced a few DevOps practices that you can start implementing immediately. Even though moving to a DevOps way of doing things requires no specialized tools, having a good set of helpers can, of course, speed up the transition. We’ve presented some of the most popular choices available right now, but this list is by no means complete. What the presented tools have in common, though, is that they can be used with little effort in existing on-premises installations. As DevOps is forward-looking by nature, using these tools also gives you an easy way to move to the cloud in the future, be it a public cloud or a private one hosted in your own data center.