One of the fundamentals of DevOps is ensuring that deployments are fully automated. The technology stack you support should be fully production ready at all times. This means that DevOps does not stop at IaC automation: the full SDLC needs to be constructed in such a way that it enables a GitOps model rather than a series of disconnected pieces.

Let's face it: unless we are heavily invested in FOSS, we work for a company that is trying to bring in revenue to support payroll. I am not saying that FOSS does not bring in revenue, but many of the most successful FOSS projects are funded through the generosity of the community and of the businesses that leverage their technology. This means that the DevOps models most of us are building are centered around some concept of value engineering. Value engineering is a tough topic to cover holistically, but the general idea is that technology must support some kind of product, service, widget, etc. that brings value to the market.

My experience is that the emphasis on value engineering is far too constraining for the internal consumers of your DevOps pipelines. Sure, we work for businesses that need revenue to be successful, but I believe that value engineering flows both outward into the market and inward into the team producing the technology. This article will touch more on the idealistic side of what DevOps can do for teams, with the idea that ancillary market benefits are gained by a team doing DevOps well.

Primary Considerations for DevOps Deployments

Goals and Values

This is probably one of the toughest considerations to grasp and do well. Goals and values are closely aligned and intertwined, yet it is very easy for them to shift direction quickly. A business will always have some kind of goal it is trying to reach, and a technology team will have goals and aspirations of its own. Let's all remember that, as technologists, we got into this for the love of technology, no matter how jaded we may have become by trying to accomplish a never-ending stream of difficult customer requirements.

Since DevOps is as much about the team as it is about the business, make sure that goals and values swing in both directions. There may be a business-related goal to reduce defects by X% or to deliver a feature by Y timeline, but those goals are only part of the story. As technologists, we owe each other an awesome experience developing and delivering technology. Make sure the team takes time to learn and understand each other in a constructive way, and set goals through understanding each other's values.


Approachability

A DevOps pipeline should be approachable to its consumers. Not only should a team be able to use it, but a team should also be able to enhance it. Generally, this gets boiled down to technology choice, but I believe that is the second facet of approachability. A group must first determine what activities their pipelines should be responsible for. Some tools are good at certain parts of pipeline development and some are not, and by determining the activities first, a team can make good technology choices that support the rest of the considerations below.

To illustrate this point, here is an example of what I mean by approachability. The gold standard for a complete DevOps pipeline is one that allows you to commit code and have it safely delivered to production. Starting a journey at its finale is rarely a good idea, so a group must determine their jumping-off point, which will usually lead a team into some kind of configuration management. Ensuring 100% of configurations are stored in a code repository can be an extremely daunting task simply due to the vast array of methods and technologies available to accomplish that goal. Start small by getting the team involved in adding configurations and pushing them through the pipeline. Listen to the concerns of the team and work your way toward addressing those concerns. Ensure that the team around you is bought into your ideas and methods, and do not be afraid to pivot based on feedback so that your methods are well received before moving on to the next phase, which may be orchestration, testing, monitoring, etc.

If the team is struggling to get going with DevOps, I would encourage them to read through Practicing DevOps – A Surprising Alternative, which may give them some interesting insights on how to get engaged.

Repeatability and Dependability

When designing a pipeline, the focus should be on how DevOps can help with the repeatability of known tasks and how dependably automation can take over tasks that used to be in the domain of mere mortals. This is often harder to quantify than you might think. Take something as simple as adding metadata to a build that ties back to a commit hash. In Go, you would pass an -ldflags parameter to the go build command, which in turn requires running a git command to discover the hash of the HEAD commit. If your build environment has both Windows and Linux build agents, can both of them reliably run that command?
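As a concrete illustration, here is a minimal sketch of the ldflags technique described above. The variable name and the build command in the comment are illustrative assumptions, not prescribed by this article:

```go
// main.go — a minimal sketch of embedding build metadata via -ldflags.
package main

import "fmt"

// commit is overwritten by the linker at build time, e.g.:
//
//	go build -ldflags "-X main.commit=$(git rev-parse --short HEAD)"
//
// The $(...) substitution above is POSIX shell syntax; a Windows build
// agent would need its own equivalent, which is exactly the
// cross-platform dependability concern raised in the text.
var commit = "unknown"

func main() {
	fmt.Printf("build commit: %s\n", commit)
}
```

If the linker flag is omitted, the binary falls back to the default value, which is itself a useful, predictable failure mode: consumers can detect a build that skipped metadata injection.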

Another way to think about dependability is to consider the end consumers of whatever you are DevOps-ing. Do they expect your pipeline to fail safely, or to fail in an unsafe way? As long as they can rely on specific failure scenarios, they can at least depend on your pipeline behaving consistently. If your pipeline is constantly failing in new and unforeseen ways, then the only thing the teams around you can depend on is that your pipeline is unreliable.


Speed of Delivery

The first thing a business will talk about is "Speed of Delivery": if we can ship more product faster, the business will make more revenue. Companies like GitHub and Amazon got this right by fully automating their deployments. In a post from 2012, GitHub reported deploying roughly 200 times per day to production. Amazon deploys to production every 11.7 seconds, roughly 7,400 deployments per day. This speed to production ensures that code does not sit and languish in a repository for days or, worse, weeks.

Amazon and GitHub are ensuring that updates reach other engineers and architects quickly, shortening the overall feedback loop and increasing the opportunities for learning. By shipping a higher number of smaller units to production, either as minor improvements or feature-gated enhancements, the teams working on their products shorten their feedback loop, which pushes decision-making as close to the individual contributor as possible. This is an individual contributor first approach. The more we can empower the people around us with information, the better off we will all be, and the faster rumor mills will be stopped dead in their tracks.


Flexibility

More often than not, design choices have been made for you that you cannot really walk back to ensure that all technology fits a specific pipeline or delivery flow model. That is quite alright when you design with flexibility in mind. In my experience, the overall steps of a DevOps pipeline are always the same; I will cover those steps in a subsequent blog post. If you take a step back, look at what you are producing, and turn it into more of an interface-style design, then the only thing that really changes is your concrete implementation of each step. That is the essence of flexibility, because only a portion of your overall work ever changes, and you get economies of scale out of the other artifacts team members owe each other: validations, documentation, expectation setting, and so on.

Feedback Loops

Getting feedback into the hands of stakeholders aids good decision-making and the overall stability of a solution. The most important consideration here is this: make sure your feedback loops are consumable and actionable. Spamming your stakeholders, business or technical, can create a TON of churn that doesn't do anyone any good. Ensure that any feedback provided is directly actionable.

Sometimes this is easier said than done. Starting out, you will end up with a deluge of information that feels insurmountable. The most dangerous thing you can do is either silence the information or assume that you are seeing false positives. False positives do happen, and it is very much worth digging in to understand why. On the other hand, silencing information and subsequently ignoring it can lead to mishaps down the road. Dive in, get to know your tooling, and ensure you are doing everything you can to keep your feedback channels clean.


Completeness

An incomplete DevOps pipeline with some jank will inevitably lead to issues that erode confidence in the technology stack. There is no gold standard of 100% complete here, but in most cases you can treat completeness as a proxy for confidence. If your team and business partners feel that the pipeline is doing everything it should, then you are in great shape. If your pipeline feels like it leaves something to be desired, don't ignore that feeling; continue to build confidence through completeness.


A GitOps Methodology

First, we need to define what GitOps is. GitOps is the idea that a push to a repository can trigger a full deployment pipeline that delivers code into production. On top of that, all controls, settings, adjustments, etc. happen from the Git repository rather than from a smattering of distributed user interfaces. This is a developer/technologist first approach, which means that all changes must be visibly pushed to central source control before electricity is run through them out in the wild.

GitOps being a developer/technologist first approach is an important consideration because not everyone who is normally part of the technology delivery flow is a technology-minded individual contributor. When making a change over to GitOps, you will need to consider what barrier to entry this creates for your end users and design your GitOps flow to be inclusive of all contributors, not just those who are in the know.
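The push-triggered flow described above usually boils down to a reconciliation loop: the repository declares the desired state, and an agent moves production toward it. This is a minimal sketch under assumptions of my own; the `state` fields and in-memory values are hypothetical, and a real controller would watch an actual Git remote and apply the actions rather than print them:

```go
// A minimal sketch of the GitOps reconciliation idea: desired state
// lives in the repository, and an agent computes what must change in
// the running system to match it.
package main

import "fmt"

// state describes a deployable unit; the desired copy comes from Git,
// the actual copy from production.
type state struct {
	Version  string
	Replicas int
}

// reconcile returns the actions needed to move actual toward desired.
func reconcile(desired, actual state) []string {
	var actions []string
	if desired.Version != actual.Version {
		actions = append(actions, "deploy version "+desired.Version)
	}
	if desired.Replicas != actual.Replicas {
		actions = append(actions, fmt.Sprintf("scale to %d replicas", desired.Replicas))
	}
	return actions
}

func main() {
	desired := state{Version: "v1.4.0", Replicas: 3} // as declared in the repo
	actual := state{Version: "v1.3.9", Replicas: 3}  // as observed in production
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a)
	}
}
```

The inclusivity point above falls out of this model: since every change is just an edit to the declared state in Git, a well-designed repository layout (clear files, templates, pull-request reviews) becomes the user interface for non-technical contributors.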


Tool Selection

Selecting the right tools for the job is a critical juncture in any project, and it sparks the most religious debates, showing where allegiances really lie. Don't fall into that trap: a tool simply performs an action through a specific syntax. At its core, my preferred technology stack consists of something that orchestrates actions, something that performs tests, and something that performs configuration management. For me, this generally means Ansible, InSpec, and Chef/Puppet. Could you skip Chef/Puppet and do everything 100% in Ansible? Absolutely. Could you skip Ansible and do everything in Chef/Puppet? Absolutely! The point is that anyone who argues there is one right or wrong way to go about tool selection is flat-out wrong. The tools must work for the team dynamic first and then for the technology. If you start with the technology first, more often than not, that effort will fail.


Conclusion

As you can see, there are a lot of things to consider when putting together a DevOps pipeline, most of which have nothing to do with technology at all. One of the best things I have learned in my time building and deploying DevOps pipelines is this: if the pipeline does not work for the people, then the pipeline does not work at all. When starting out, consider how you are going to push on the soft-skills side of your DevOps practices more than the technological side. As you grow in your understanding of what DevOps really is technologically, you will come to appreciate that 90% of the problem is solved before the first line of code is ever put into a repository.