DevOps Patterns: Scaling with Declarative Pipelines Using Template Modularization
Heads Up! This post assumes a baseline understanding of declarative pipelines (YAML, for example), as well as a basic understanding of DevOps in general.
We're going to be diving into a successful pattern for scaling your DevOps capabilities. This pattern is not exclusive to a specific CI/CD platform, but I will make reference to specific platforms when providing examples.
The Goal: Copy Paste Your Way To Success
Writing declarative templates can become complicated quickly as your organization and applications mature. Oftentimes, DevOps patterns and practices grow organically and reach a tipping point where your team realizes just how difficult its automation has become to maintain.
In my experience, the best goal is to write your templates so that your declarative automation is easy enough to use that getting an application out the door no longer requires your best DevOps engineer. I like to call it "Copy Paste Your Way To Success".
The goal is to:
Have an accessible, self-documenting repository for your automation modules.
Write each module in a way that allows teams to choose the correct template, and fill in the corresponding infrastructure details.
The Pattern: Pipeline Template Modularization
We'll be discussing Pipeline Template Modularization. As the name suggests, this pattern focuses on creating modular pipeline templates. At its core, it uses concepts you're already familiar with: DRY, abstraction, and inheritance.
The components include:
Modules: Small, repeatable logical groups of automation activities. This includes everything from sending a notification via Slack, to pushing a container to a repository.
Templates: A series of related modules representing a complete CI/CD process.
Implementations: Repositories and projects that consume and extend the central base pipeline templates.
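To make the relationship between modules and templates concrete, here's a sketch in Azure Pipelines syntax: a template (base-react-k8s.yml) that composes two hypothetical module files. The file names, module layout, and parameter names are illustrative assumptions, not a prescribed structure:

```yaml
# pipeline-templates/base-react-k8s.yml -- a template composed of modules
# (file and module names below are illustrative)
parameters:
- name: environment
  type: string
- name: build_path
  type: string
- name: cluster
  type: string

stages:
- stage: Build
  jobs:
  - job: build
    steps:
    - template: modules/stage-react-build.yml   # module: React build and package
      parameters:
        build_path: ${{ parameters.build_path }}
- stage: Deploy
  jobs:
  - job: deploy
    steps:
    - template: modules/stage-k8s-deploy.yml    # module: deploy to Kubernetes
      parameters:
        environment: ${{ parameters.environment }}
        cluster: ${{ parameters.cluster }}
```

An implementation repository then consumes this template without ever touching the module internals.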
In the example below, the base-react-k8s template implements the build-react module (its Java counterpart, base-java-k8s, would implement build-java). Within the team-project repository, the unmodified base-react-k8s template is leveraged, passing only the parameters required for deployment.
The content of the pipeline in the team-project repository may be as simple as:
```yaml
- template: pipeline-templates/base-react-k8s
  parameters:
    environment: dev
    build_path: ./
    cluster: your-cluster
```
Getting Started: Keep It Simple!
At this point, you're probably balking at creating two more repositories and a way-too-complicated inheritance strategy. Don't! The reality is, you start small. Create a single template for a very common set of automation within your organization. From there, you can begin building a catalog of predefined templates specific to your organization.
Here's a very simple example:
Scenario: Assume an organization is using React as their primary front-end framework. They use Azure App Services as their primary hosts. The dev team primarily uses Slack to communicate and would like detailed notifications when a deployment is complete.
Step 1: Decompose the problem: Break the automation up into small parts of the whole. In this example, we'd have the following potential modules:
React build and package
Azure App Service deployment
Deployment result Slack notifications
Step 2: Build your first module: Don't tackle everything at once; start by building just the React build and package module. As you go through the exercise, keep your template inputs in mind.
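The React build and package module might look like the following Azure Pipelines steps template. This is a minimal sketch; the file name, parameter, and npm commands are assumptions about a typical React project:

```yaml
# pipeline-templates/modules/stage-react-build.yml (illustrative)
# A small, reusable "steps" module: install dependencies and build the app.
parameters:
- name: build_path
  type: string
  default: ./

steps:
- script: |
    cd ${{ parameters.build_path }}
    npm ci
    npm run build
  displayName: React build and package
```

Note that the module exposes only the inputs a consumer genuinely needs (here, just build_path); everything else stays an internal detail.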
Step 3: Template implementation: Find a project that requires React builds and give your brand-new template a try. Take notes on what went well and what needs adjusting, then iterate on your first version of the template.
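A first trial implementation, assuming the module above was published as pipeline-templates/modules/stage-react-build.yml with a build_path parameter, could be as small as this (repository layout and paths are illustrative):

```yaml
# azure-pipelines.yml in the pilot project
trigger:
- main

steps:
- template: pipeline-templates/modules/stage-react-build.yml
  parameters:
    build_path: ./web
```

The pilot project's feedback (missing parameters, awkward defaults, unclear names) then drives the next iteration of the template.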
Bonus: Pipeline Template Modularization Tips
Establish module, template, and implementation naming conventions first! The last thing you want is to end up with a difficult-to-read group of discordant names. I've typically used the following naming convention:
[type]-[descriptive noun(s)]-[verb] (Example: stage-react-build.yml)
Design the parameters of your templates with readability in mind. If done properly, the pipeline implementations themselves become a very tidy source of infrastructure documentation.
Default parameters when you can. There will be some pipelines that, unavoidably, will have a boat-load of parameters to make them work. Take a step back and identify those that can and should be defaulted.
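As a sketch of what defaulting looks like in practice, here's a hypothetical parameter block (names and values are assumptions): common values get defaults so most implementations can omit them, while parameters with no safe default stay required:

```yaml
# Illustrative template parameter block: default what you can,
# require only what every caller genuinely must decide.
parameters:
- name: environment
  type: string
  default: dev        # most pipelines start in dev
- name: node_version
  type: string
  default: 20.x       # organization-wide standard
- name: cluster       # no safe default; every caller must supply it
  type: string
```

The fewer parameters an implementation has to spell out, the closer you get to "copy paste your way to success".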