Streamlining Developer Workflows Across Your Organization
@kjuulh · 2023-04-11
In this blog post, we'll delve into distributing a continuous integration and deployment system throughout an organization, enabling robust tools and workflows to be shared across various languages and domains.
As an organization evolves, its developer tooling undergoes various stages based on its size, maturity, and technology stack. A typical progression includes:
1. Creating shell scripts for project deployment.
2. Packaging shell scripts or developing basic tools, then sharing them via brew, copying, etc.
3. Developing dedicated tools, often by forming a dedicated operations team.
4. Establishing opinionated workflows and endorsed paths.
5. Assigning responsibilities to operations feature squads and consolidating them in endorsed paths.
Of course, this process isn't one-size-fits-all, and it can vary depending on an organization's size, the products being developed, and other factors.
We'll concentrate on steps 4 and 5, examining their implications for your company and how they can empower and accelerate your operations teams.
First, let's define opinionated workflows.
Opinionated workflows and endorsed paths
For a medium-sized company, a smart strategy is to mature a specific tool stack by focusing on a handful of technologies and creating exceptional tooling around them.
It's often believed that if you're using Kubernetes, Fargate, service mesh, etc., you can choose any language for a given service. However, many companies have discovered that building a service involves more than just selecting a new language; it also requires investments in infrastructure, libraries, monitoring, expertise, and more.
For instance, if we select Java for a project where we typically build C# with .NET, we'll find that the languages share very little in terms of tooling.
To run these services in production on Kubernetes, you'll need:
- Dockerfiles for declarative deployment
- CI tools for testing and static code analysis
- Libraries for:
  - Logging
  - Monitoring
  - Error handling
  - Circuit breaking
  - Retrying
  - API abstraction layer
  - Database ORM/querier
  - Authorization
  - Numerous other conveniences
While it's not essential to build all these solutions yourself, you do need to consider and adapt them for your needs.
Thus, it's wise to choose an endorsed path, providing developers with a consistent set of tools and clear expectations.
An endorsed path is a predetermined service journey that automates developers' decision-making, offering immediate and ongoing support out of the box.
You can tailor your path to your developers' needs. In our case, we designed a journey covering day-0 through day-2 operations. Deprecation, for example, wasn't a priority; for our use case it's acceptable that it still involves some manual work.
This encompasses:
- Service/Library creation
- Dependency management, allowing teams to automatically receive updated dependencies
- CI, with Jenkins jobs automatically created based on service type
- CD, performed automatically according to service type and configuration, eliminating the need for teams to create Dockerfiles or Kubernetes manifests themselves
- Libraries, including logging, monitoring, and more (see the library list above)
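One way to make an endorsed path concrete is a small service descriptor that teams commit alongside their code, and that the platform tooling reads to derive CI jobs, deployment manifests, and library defaults. The following is a minimal sketch under that assumption; the struct, its fields, and the endorsed kinds are all hypothetical, not a real schema.

```go
package main

import "fmt"

// ServiceDescriptor is a hypothetical manifest a team commits alongside
// its code; the platform tooling reads it to derive CI jobs, deployment
// manifests, and library defaults. Field names are illustrative only.
type ServiceDescriptor struct {
	Name string // service name, used for job and image naming
	Kind string // endorsed service type, e.g. "go-http" or "dotnet-worker"
	Team string // owning team, used for routing alerts and dashboards
}

// endorsedKinds is the short list of service types the platform supports,
// mirroring the point above: only a few languages and types are endorsed.
var endorsedKinds = map[string]bool{
	"go-http":       true,
	"dotnet-worker": true,
}

// Validate rejects descriptors that fall outside the endorsed path, so
// teams get immediate feedback instead of a broken pipeline later.
func (s ServiceDescriptor) Validate() error {
	if !endorsedKinds[s.Kind] {
		return fmt.Errorf("kind %q is not an endorsed service type", s.Kind)
	}
	return nil
}

func main() {
	svc := ServiceDescriptor{Name: "orders", Kind: "go-http", Team: "checkout"}
	fmt.Println(svc.Validate() == nil) // an endorsed kind validates cleanly
}
```

Keeping the descriptor this small is the point: every field a team doesn't have to fill in is a decision the endorsed path has already made for them.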
However, it's crucial to note that we only endorse a few languages and types and maintain different maturity levels for them.
Think of this as developing a product centered around the platform, as I recently discussed in a post: Platform Engineering: The Next Era of Ops.
The advantages of these services include reduced cognitive load on developers and shifting incidental complexity to expert squads. This enables feature squads to concentrate on developing features, only tapping into the platform layer when necessary to extend, modify, or enhance the offerings, usually with the assistance of the platform team.
Keep in mind, this isn't a one-size-fits-all solution; it's most effective for organizations with a large, homogeneous sprawl of services. If you have only one product, or a few, invest your time in building highly performant, tailored platforms for them instead.
One issue with this architecture is the potential for unclear ownership and the challenge of applying software development principles to platform squads. We'll address this in the next section:
Distributing Responsibilities for platform squads and friends
When establishing a platform organization as a product-oriented entity interacting with the actual business organization, an API must be developed to govern how the business domain interacts with the platform. Building this can be incredibly challenging (think AWS or any other large product you've used). However, unlike these organizations, you likely have direct feedback from your users and the ability to create more opinionated solutions, allowing you to exert more control over the workloads running on your platforms.
An alternative model involves assigning responsibilities for specific parts of your product to each squad within your platform organization while consolidating these responsibilities into a single API for your developers.
This results in a single tool with a wide range of capabilities, enabling:
```shell
$ platform-tool build
$ platform-tool test
```
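The entry point of such a tool can be a thin dispatcher over the composed pipeline. Here is a minimal sketch using only the Go standard library; the command names match the examples above, but the stubbed bodies are placeholders, not the real implementation.

```go
package main

import (
	"fmt"
	"os"
)

// run dispatches a platform-tool subcommand. In a real tool each branch
// would invoke the pipeline composed for the detected service type; here
// the commands are stubbed so only the dispatch shape is visible.
func run(args []string) error {
	if len(args) < 1 {
		return fmt.Errorf("usage: platform-tool <build|test>")
	}
	switch args[0] {
	case "build":
		fmt.Println("building service with the endorsed pipeline")
		return nil
	case "test":
		fmt.Println("running the endorsed test suite")
		return nil
	default:
		return fmt.Errorf("unknown command %q", args[0])
	}
}

func main() {
	args := os.Args[1:]
	if len(args) == 0 {
		args = []string{"build"} // demo default so the sketch runs standalone
	}
	if err := run(args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The thin dispatcher matters for ownership: the customer-facing team owns this surface, while everything behind each case can belong to a different squad.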
The challenge now lies in determining ownership for the entire build, test, and the components that make up the pipeline. Drawing inspiration from product development and software engineering, it becomes evident that we need to create modules and plugin architectures for sharing these tools, allowing teams to follow this organizational structure:
- A team responsible for `platform-tool` itself, as well as for creating the individual commands such as build, test, etc.
- Teams responsible for building parts of the tools, such as templating for Kubernetes, integration testing, static analysis, code generation, etc.
There might come a time when you want to empower feature squads to develop parts of the platform tailored to their needs, such as specific tools they've created that necessitate a proper developer journey.
You might wonder why this hasn't been done yet or why this approach is more challenging than it appears. The problem stems from a gap in tooling between standard software development tools and the tools used for building the software itself. These tools are often monolithic, require configuration, and are not well-suited for distribution (see Dockerfiles, Helm, bash, makefiles). Most of these products are designed to be defined within a single application, which works for their use case but isn't scalable enough for our needs.
Large companies have gone the other way, developing highly scalable tools for their own requirements, but these tools are generally not practical for small to medium-sized companies to adopt (e.g., Bazel, Buck, Fabricator, etc.). Additionally, they typically focus on a single specific use case and excel at it.
The goal here is to introduce flexibility and autonomy into the pipeline, enabling teams to leverage their expertise using a standard software development paradigm.
To do so, we want to adopt a product strategy internally for the platform teams:
- The platform organization agrees on a protocol for sharing among plugins, templates, etc.
- The customer-facing platform tool composes these tools using the agreed-upon protocol.
- Each platform/feature team can own its features, which are then assembled in an opinionated way by the platform team responsible for the developer journey.
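The three points above can be sketched as a registry: plugin teams own the stages, and the developer-journey team owns which stages each endorsed service type gets. Every type and stage name below is illustrative, not a real catalogue.

```go
package main

import "fmt"

// Pipeline names the ordered stages for one endorsed service type. In a
// real system each name would resolve to a plugin implementing the shared
// protocol; plain strings keep this sketch short.
type Pipeline []string

// registry maps endorsed service types to the pipeline assembled by the
// developer-journey team. Plugin teams own the individual stages; this
// team owns the opinionated composition.
var registry = map[string]Pipeline{
	"go-http":   {"golang-build", "golang-test", "sast-scan", "docker-publish"},
	"node-http": {"node-build", "node-test", "sast-scan", "docker-publish"},
}

// PipelineFor returns the endorsed pipeline for a service type, or an
// error when a team strays outside the endorsed path.
func PipelineFor(kind string) (Pipeline, error) {
	p, ok := registry[kind]
	if !ok {
		return nil, fmt.Errorf("no endorsed pipeline for %q", kind)
	}
	return p, nil
}

func main() {
	p, _ := PipelineFor("go-http")
	fmt.Println(len(p)) // four stages in this sketch
}
```

Note how the two pipelines share their tail: the SAST and publish stages are written once and reused across languages, which is exactly what the agreed-upon protocol buys you.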
Example: Golang feature service
Let's consider the example of building a Go service.
A customer-facing platform team would have already defined the main API functions, such as build, test, code coverage, etc.
The developer journey team then composes plugins and templates into these APIs.
```go
func Build(ctx context.Context) error {
	// Bootstrap a shared CI session that every step below reuses.
	session := ci.BootstrapSession(ctx)
	defer session.Close()

	// Compile the Go binary.
	if err := golangbin.Build(ctx, session); err != nil {
		return err
	}
	// Run the endorsed test suite.
	if err := golang.Test(ctx, session); err != nil {
		return err
	}
	// Static application security testing, owned by a separate squad.
	if err := sast.Scan(ctx, session); err != nil {
		return err
	}
	// Build and publish the container image.
	if err := docker.Publish(ctx, session); err != nil {
		return err
	}

	return nil
}
```
While this is a simplified example and real-world situations would be far more complex with numerous interdependencies, it illustrates that each team can own their packages. For instance, a team might be responsible for the SAST plugin, having their own submodule or repo, defining their own tests, workflow, etc.
When we publish our tool through our organization's preferred distribution mechanism, each team can build a Go feature service using the same tooling, automatically receiving all platform features relevant to their domain.
The same concept applies to a Node.js service:
```go
func Build(ctx context.Context) error {
	// Bootstrap a shared CI session that every step below reuses.
	session := ci.BootstrapSession(ctx)
	defer session.Close()

	// Build the Node.js application.
	if err := nodebin.Build(ctx, session); err != nil {
		return err
	}
	// Run the endorsed test suite.
	if err := node.Test(ctx, session); err != nil {
		return err
	}
	// The SAST and publish steps are reused unchanged from the Go journey.
	if err := sast.Scan(ctx, session); err != nil {
		return err
	}
	if err := docker.Publish(ctx, session); err != nil {
		return err
	}

	return nil
}
```
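The overlap between the two journeys can be factored out explicitly. This is a self-contained sketch with stubbed steps (the real `sast` and `docker` packages are replaced by placeholder functions), showing one way the shared tail could be reused across language teams.

```go
package main

import (
	"context"
	"fmt"
)

// step is a minimal stand-in for one pipeline stage.
type step func(ctx context.Context) error

// sharedTail is the language-agnostic part of every endorsed pipeline;
// in the examples above it corresponds to the SAST scan and the Docker
// publish, stubbed here with print statements.
func sharedTail() []step {
	return []step{
		func(ctx context.Context) error { fmt.Println("sast scan"); return nil },
		func(ctx context.Context) error { fmt.Println("docker publish"); return nil },
	}
}

// buildPipeline prepends language-specific steps to the shared tail, so
// the Go and Node.js teams only supply their own build and test stages.
func buildPipeline(langSteps []step) []step {
	return append(append([]step{}, langSteps...), sharedTail()...)
}

func main() {
	goSteps := []step{
		func(ctx context.Context) error { fmt.Println("go build"); return nil },
		func(ctx context.Context) error { fmt.Println("go test"); return nil },
	}
	for _, s := range buildPipeline(goSteps) {
		if err := s(context.Background()); err != nil {
			panic(err)
		}
	}
}
```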
Each team can use the same tooling and benefit from platform features tailored to their specific needs, streamlining the development process and fostering a consistent developer experience across different languages and projects. This modular approach enables teams to focus on their core responsibilities, delivering high-quality features and improvements while leveraging the shared expertise and resources of the platform organization.
Despite the changes, various tools can still be reused, even if the build and test components have evolved. While this may seem trivial to some, for those transitioning from a shell-driven workflow, this new approach unlocks the full potential of a traditional software development workflow, extending it to platform squads as well.
Conclusion
This blog post expands upon a previous post, Platform Engineering: The Next Era of Ops, by advocating for a distributed, modular approach to crafting shared tools and workflows throughout an organization. Implementing this strategy paves the way for a more scalable, adaptable, and efficient development process, benefiting platform squads and feature teams alike.