Streamlining Tooling Management: The Idea Behind `Cuddle`



In a previous post I explained that I host my own internal cloud platform for my own applications. In this post I'd like to drill down into a foundational tool I use to manage both the code running the cloud and the applications on top of it.

First, a small story to explain some of the motivation: when I started out programming I, like many others, didn't know about git or any other VCS, so I just used files and folders in a brute-force manner. When I had to share some code I packed it up in a zip file and yeeted it wherever it needed to go.

That was fine for a while, but eventually I found git and began using GitHub Desktop, which is probably still the cleanest option for getting someone up and running with git, even if it can become a shackle quite quickly, as you really need to know git to use git.

However, now armed with git, GitHub and so on, I ran into a problem much like the zip situation: every time I started a new project I basically copied an old project, removed the cruft I didn't need and simply began working anew. A process filled with rm -rf, sed and more.

I also didn't like the mono-repository approach. My reasons include that I wasn't as proficient in building CI systems, and that my projects are so diverse that they don't need to be in lock step. I would also like to share some stuff publicly, which complicates the whole monorepo approach.

A mono repository is a great way to remove some of the repetition, but you pay for it in continuous integration complexity: you need a much more sophisticated build and test setup than in a regular multi-repository approach, where each significant component has its own repo.

Anyways, I tried to solve the issue with various templating tools, but that simply led to another problem: templating is great at scaffolding, but horrible at maintenance. The various parts of the system drift over time.

My repositories usually consist of a few layers:

  1. Project setup
  2. CI definitions
  3. Code
  4. Various artifacts and docs

Most of these can be scaffolded, but in later projects I may change my mind and add some stuff to CI, adopt a new way of handling environment variables, or change the general infrastructure code.

And will I go back and update 20 other repositories manually with the same changes? No.

There are some solutions to this problem, specifically code-doctoring, or codemods. However, that is like driving a screw with a sledgehammer; it is simply an incompatible problem, with some overlap. (Codemods are primarily built for handling small breaking changes in large open source projects; Next.js has used this approach in the past, and it is fairly common in the web scene.)


At work we've got an open source tool called lunarway/shuttle. It is basically a Golang CLI application which allows one repository to link to another for the various parts shown above.

It can link (not as in ln, but on a more implicit basis):

  1. Project setup, such as .env files, docker-compose setups, various commands for spinning up a dev environment, etc.
  2. A skeleton of a CI system, such that a project only needs either a small bootstrap CI, basically just telling it to use shuttle, or nothing at all if using Jenkins (which we do at work).
  3. Various artifacts and docs (setup)

This is extremely nice, as we remove a lot of the boilerplate from our projects so that we can focus on what is important: the code. This tool, as we've found out, gives roughly the same benefit as having a monorepository, though with a staggered update cycle.

It is run like so:

```shell
shuttle run build
shuttle run test
shuttle run *
```

Each of these commands will trigger some shell scripts, either in the local repository or in the parent plan.

The same shuttle is used in CI to kick off various steps, again such as build, test, generate-k8s-config, etc.

A shuttle spec at its most basic is just a file pointing at a parent plan, along with some variables to be used for templating or discovery purposes.

```yaml
plan: ""
vars:
  name: my-plan
  squad: some-squad
```

A parent plan looks the same, but the file is called plan.yaml instead of shuttle.yaml.

Scripts can also be defined in either the plan or shuttle files:

```yaml
scripts:
  build:
    actions:
      - shell: go build main.go
      - shell: $scripts/
```

You are free to choose whatever floats your boat. I've also added native Golang actions, which don't require this setup, but that isn't relevant for this post.

This is a very useful tool, and I could just go and use it. But I like to tinker with my own things, so I've built my own tool to expand on its capabilities, some of which would need buy-in from the company, which I am not interested in pursuing for personal projects.

Shuttle itself is also a fairly simple tool; what is important is what it provides, not the tool itself.


As such I've built a tool called cuddle, a CLI written in Rust. My vision for cuddle is that it supports the same features, but on a wider spectrum, while enabling people to go one step further.

It runs in nearly the exact same way as above.

One of the problems with shuttle is that it heavily implies that commands should be written in shell. This is great for hacks and small tools, but not great for delivering a product. I actually solved this for shuttle by allowing it to call natively into Golang without having to write a line of shell script (lunarway/shuttle#159). It works pretty well, if I have to say so myself, and if you don't have Golang installed, it will use Docker in the background to build the plugins needed to make the commands executable.

I want some of the same features myself. I've already gotten Rhai and Lua to work in cuddle, but I want something more: I want to use Rust, and I want it to be a bigger focus in the tooling, allowing for greater expandability and pluggability.

Code sharing

Right now shuttle always has this structure:

```
shuttle service -> shuttle plan
```

This means that a repository can inherit stuff from just a single plan, which can then include the pipeline and whatnot. But the plan itself cannot inherit from further plans, which would in turn allow a deep dependency chain. A shuttle plan can act like a shuttle service by inheriting from another plan, but that way it won't be able to distribute the base plan's files.

I have already solved this for cuddle, such that we can have as deep a dependency chain as we want. However, I would like to flip this on its head a bit; see my post on distributing continuous integration.

Cuddle right now has a dependency graph like so:

```
cuddle service -> cuddle plan -> cuddle plan ->*
```

This basically means that cuddle can have infinitely many plans (or as deep as the nesting in file systems allows), though only one at a time. I'd like to split this out into more well-defined components.
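To make the chain concrete, here is a sketch of how the files could relate. The URLs are placeholders and the keys mirror the shuttle-style spec shown earlier; treat this as an illustration rather than cuddle's settled syntax:

```yaml
# service repository: cuddle.yaml
plan: "https://git.example.com/org/service-plan.git"  # placeholder URL
vars:
  name: my-service
---
# plan repository: the plan's own spec can point at yet another plan,
# giving the service -> plan -> plan ->* chain described above
plan: "https://git.example.com/org/base-plan.git"     # placeholder URL
```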

Cuddle components

Kind of like a more traditional software development flow.

Such as:

```
cuddle service ->* cuddle component
               ->  cuddle plan ->  cuddle plan ->*
                               ->* cuddle component
```

A cuddle component is technically a hybrid between a library and a plugin. It builds like a library, but functions as a plugin. That is because it should be executable across platforms, like a step in a CI platform, but provide more fine-grained features and an API. Like a CLI script, it should execute either as a docker run, a WebAssembly function, or one of the built-in scripting languages. A compiled language is typically a no-go; it is simply too slow for immediate execution. Unless you use Golang, which is typically fast enough for this use case.
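As a sketch of where this could land, a component might declare how it executes in a small manifest. Every key here is hypothetical, since the component model isn't built yet:

```yaml
# hypothetical component manifest; none of these keys exist in cuddle today
component:
  name: sql-migrate
  kind: wasm              # or: docker, rhai, lua; no on-demand compilation
  entry: sql-migrate.wasm # prebuilt artifact shipped with the release
```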

Now you may well ask a good question: why not just use a regular package manager and execution environment, like Rust/Cargo or TS/Deno or another language of choice?

Cuddle constraints

There are a few reasons. To show them I will first have to highlight why this differs from regular software development:

Cuddle is a traditional CLI; as such, it needs to uphold a few guarantees.

  1. Firstly, cuddle as a tool needs to be fast, fast enough that you don't notice it runs a lot of stuff underneath.
  2. It needs to provide a good developer experience. Cuddle provides its tools as a product, so we need a good experience using said products.
  3. Cuddle calls need to be composable, such that you can pipe cuddle output into regular Unix or Windows tools, depending on your needs.
  4. Cuddle services should not require maintenance to stay up to date, unless the developers choose to use some of the various escape hatches.

I also see cuddle as an enabler. This means that workflows should be built around it. You may want to script your usage of cuddle runs yourself, but that should only be for the individual. If a squad needs a curated list of tools, they can simply maintain their own component or plan and inherit from that.

For example, I've built a tmux workflow around it which opens a new tab, splits the window into multiple panes, and gives me an auto-runner for tests, the binary itself (so I can access the webserver), a shell, and access to a test or local database for debugging purposes.

This is highly opinionated towards me, and won't in its present form be useful for others.
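The workflow boils down to a handful of tmux calls. Here is a rough sketch; the `cuddle run …` script names are placeholders for whatever the plan defines, and DRY_RUN defaults to printing the commands so the sketch can be read without a tmux server running:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the tmux workflow described above.
# DRY_RUN=1 (the default here) echoes the tmux commands instead of
# executing them; set DRY_RUN=0 to actually drive tmux.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "tmux $*"; else tmux "$@"; fi
}

run new-session -d -s dev                          # detached session
run send-keys -t dev 'cuddle run watch-tests' C-m  # pane 0: test auto-runner
run split-window -h -t dev                         # pane 1: the running binary
run send-keys -t dev 'cuddle run server' C-m
run split-window -v -t dev                         # pane 2: shell / database access
run send-keys -t dev 'cuddle run db-shell' C-m
if [ "$DRY_RUN" != "1" ]; then run attach -t dev; fi
```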

Releasing plans and components

As such, a traditional package manager won't work. This is mainly because package managers rely on versioning and lock files to maintain a consistent set of libraries. That is pretty good for a tool's needs, but not great if we don't want to offload that burden onto developers. If we chose that approach, we would have a few problems:

  1. Each time cuddle or one of its dependent components was updated, we would need to release a new semantic version, which would require the developers to update. This may move quite fast; for developers with big portfolios, maintaining said dependencies is nearly a full-time job.
  2. Another approach, which we've taken at Lunar, is simply pulling a fresh plan every time. This makes sure we're always up to date, or at least as long as the projects are actually run and released. Here we allow various escape hatches for pinning static commits, branches, tags, what have you.

Without sacrificing too much developer experience on the publishing side, we need to come up with a good approach for decoupling development from releasing, like traditional software.

In this case, the plugins and services will internally use semver for signaling breaking changes. This is useful for showing diffs and whatnot to developers using the tool.

However, when we release stuff, releasing it on a channel instead provides a great deal of benefit. First, we can choose which appetite you want your service to run on: you may choose either pre-release or default (stable).

Pre-release allows me to dog-food the plans during testing without breaking all my tools and services. Stable, which is the default, will as mentioned provide a more thoroughly reviewed change set.
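A sketch of how a service might opt into a channel; the `channel` key is hypothetical and only illustrates the idea:

```yaml
# hypothetical: select which release channel the plan is resolved from
plan: "https://git.example.com/org/base-plan.git"  # placeholder URL
channel: pre-release                               # or: stable (the default)
```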

A semver release is required to release to a channel. This is for a few reasons, but mostly for providing release artifacts: the services shouldn't need to build anything themselves. This is to maintain speed and usability.

Each component will simply function like a regular library, releasing software as normal.

Each plan will curate a set of components to release and will handle them like normal software releases, i.e. versions and lock files and all that jazz. For each release it will receive pull requests with updated dependencies, provided by Renovate.

This allows each plan to curate an experience for developers. A backend engineer will not have the same needs as a frontend engineer, a database engineer, or an SRE.

However, this should provide a dependency chain sophisticated enough that stuff can actually be built with it, and maintainable and stable enough to rely on.

Plans as binaries

This means that each plan on release can be turned into binaries, either regular ELF binaries or wasm. I haven't decided yet; wasm may have too many constraints to be viable.

When cuddle runs for the first time in a service, it will simply look at the binary and its self-reported included files, such as a cuddle spec and other included files. It will then form the dependency graph as it goes, downloading all plans as it navigates the chain.

This is done serially for now; doing it in parallel would require a registry to form these graph relationships, which isn't needed while the projects are small.

A cuddle service can also contain components; however, those will be built ad hoc and function like a normal software project. There is no way around that, other than surfacing the components as binaries as well, which may become a tad complicated to manage.

Options for not breaking git history

Right now cuddle services rely on an external project to function. This makes history non-viable out of the box, because it implies that everything in the service has to be forward compatible. For example, would git bisect be able to run on a 3-year-old cuddle plan, including changes to cuddle itself? Probably not, and it doesn't fit the spirit of bisect, as you wouldn't get the same binaries.

Instead, cuddle should detect if it is running under a bisect or some such (I haven't figured out entirely how to do this yet) and then pick a release with a release date older than the commit itself.
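That date-based fallback can be sketched as a tiny resolver: given the commit's timestamp, pick the newest release that is not newer than it. The hardcoded release list here stands in for whatever registry or tag listing cuddle would actually consult:

```shell
#!/usr/bin/env sh
# pick_release TARGET_EPOCH "epoch:version"...
# Prints the newest version whose release epoch is <= TARGET_EPOCH.
# Assumes the release list is given oldest-first.
pick_release() {
  target=$1; shift
  best=""
  for release in "$@"; do
    ts=${release%%:*}            # release time in epoch seconds
    if [ "$ts" -le "$target" ]; then
      best=${release#*:}         # keep the newest match so far
    fi
  done
  echo "$best"
}

# A commit from mid-2021 resolves to the release cut just before it:
pick_release 1625000000 "1600000000:v1.0.0" "1620000000:v1.1.0" "1650000000:v2.0.0"
# -> v1.1.0
```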

This should get us as close as we can to reproducible builds, though it is definitely a downside, so if this is a deal breaker then cuddle, or shuttle for that matter, isn't for you. Bisecting isn't something I did that often myself, so it isn't a problem for me. Sadly, it is mostly one of those tools you don't need until you really need it.


In this post I've gone over my own home-built code sharing tool, cuddle: why it is useful, and what is wrong with current workflows in multi-repository organisations. It is a fair bit more complicated than it needs to be, but it provides a useful way of exploring new use cases and removing pain points I am currently experiencing.