This is why I love monorepos — but it’s not for everyone
I’ve been taking care of monorepos for a good few years now, and after recently moving on to someplace new, I’ve had a chance to reflect on that work.
There’s really only one thing you need in order to succeed with your monorepo. And it has nothing to do with tooling.
Say you’ve got teams who work together. You built a monolithic application, be it a single backend service or a single frontend app, and while it worked great when starting off, you’re now realizing that it would be better to modularize parts of it to increase your maintainability as well as your ability to ship iteratively.
You’re also thinking it would be nice to standardize some of your coding practices, maybe some template code or a component library would fit nicely into a potential monorepo, if you did not already have one.
A Game-Changing Feedback Loop
Monorepos make it quite easy to ensure that teams can’t break APIs or introduce other breaking behavior: if an underlying package breaks its consumers, the consumers’ automated tests fail, refusing to let you merge anything to production.
Often when you work on features in separate repos, you make breaking changes that are simply unnecessary or aren’t worth the hassle they’ll produce. Assuming you employ testing strategies in your company, a monorepo lets you know about the breakage immediately, and lets you decide whether an API change (one you may not even have intended to make) is worth keeping.
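As a sketch of that feedback loop (the package and function names here are made up for illustration), this is the kind of consumer-side test that turns a breaking change in a shared package into a failed CI run:

```typescript
// Hypothetical shared package: @acme/pricing.
// If the owning team changed this signature or its output format,
// the consumer code below would fail its tests in CI, blocking the merge.
export function formatPrice(amountCents: number, currency: string): string {
  return `${(amountCents / 100).toFixed(2)} ${currency}`;
}

// Hypothetical consumer: a function from the checkout app that the
// checkout team covers with a unit test.
export function checkoutTotalLabel(totalCents: number): string {
  return `Total: ${formatPrice(totalCents, "EUR")}`;
}
```

A consumer test asserting that `checkoutTotalLabel(1999)` equals `"Total: 19.99 EUR"` is exactly the guardrail the monorepo workflow runs on every change to the shared package.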
Using the feedback loop to fix things
Since most packages would follow the same developer commands to perform basic tasks, it also incentivizes you to actually fix the breakage — (“You break it you fix it”) in the consumer package — saving the teams from unnecessary sync time with the classic “Others need to implement this change before [insert made-up deadline here].”
This is the part I love about monorepos, but it’s not for everyone.
Definition of a monorepo
As I’ve seen the term “monorepo” used for various setups, I will clarify what I believe is a common definition for it.
• Multiple packages with similar tech stacks and platforms, i.e., multiple backend services and/or several micro-frontend applications, with their own versions and specified dependencies.
• A workflow that ensures changes to a shared dependency cannot be deployed to production unless its dependents have been built and have had their automatic tests pass.
• Some level of tool centralization that standardizes common needs, like building a package or installing third-party package dependencies.
• Multiple teams working in the monorepo.
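To make the workflow bullet concrete, here is a minimal sketch (the names are mine, not any real tool’s API) of the “affected packages” calculation such a workflow performs: given the dependency graph, find every transitive dependent of a changed package, because those are the packages whose builds and tests must pass before the change can reach production.

```typescript
// deps maps a package to the packages it depends on.
type DepGraph = Record<string, string[]>;

// Returns every package that directly or transitively depends on `changed`.
export function affectedBy(changed: string, deps: DepGraph): Set<string> {
  const affected = new Set<string>();
  let grew = true;
  while (grew) {
    grew = false;
    for (const [pkg, dependencies] of Object.entries(deps)) {
      if (affected.has(pkg)) continue;
      if (dependencies.some((d) => d === changed || affected.has(d))) {
        affected.add(pkg);
        grew = true;
      }
    }
  }
  return affected;
}

// Example graph: two apps consuming a shared UI library.
export const graph: DepGraph = {
  "shared-ui": [],
  "checkout-app": ["shared-ui"],
  "admin-app": ["shared-ui", "checkout-app"],
  "docs-site": [],
};
```

A change to `shared-ui` selects `checkout-app` and `admin-app` for building and testing, while `docs-site` is untouched.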
It Comes Down to Culture
A lot of people will agree that a monorepo sounds like a great idea, but they also tend to forget that a monorepo is just an enabler for a collaborative workplace.
In a company I worked for, a part of the organization believed in the collaborative powers of a monorepo as well as the other benefits, but there was also initial skepticism that we’d be coupling our precious autonomous teams together.
We started using a monorepo for new development, which over time grew to 14 teams working in it, with around 500k lines of code.
The debate over whether to keep working in a monorepo kept resurfacing whenever we faced challenges, but the real problems were not with the monorepo itself; they were our naïve implementation of it and a lack of motivation to improve.
For some people, issues in the monorepo were an extreme burden, as they were responsible for large parts of it, and combined with an unwillingness to collaborate (not merely cooperate), the monorepo is struggling even today.
I want to go through some of the mistakes we made to help you prepare for them. I will also show you that these problems are mostly engineering and cultural challenges, not an argument that monorepos don’t work.
Situation #1: A Team Keeps Breaking the Main Branch
Let me give you some examples. After all, how could a pull request get merged and later cause failure in the main branch?
If you’ve worked with Static Site Generation in the frontend, you’ll know that building a site with it sometimes requires a lot of backend calls at build time, and a brittle backend service (or one that breaks APIs occasionally) is something that could easily affect your overall build stability.
Some of our teams were doing SSG, and for one project it seemed to never stop breaking the main branch on an intermittent basis. At worst, this prevented teams from shipping changes to shared libraries that the app depends on.
In this situation, it is easy to say that a monorepo doesn’t scale. After all, you don’t want a main branch to fail and SSG seems to cause failures all the time.
What we should be talking about here is resilience and dependency elimination. If you have an unstable backend that you can’t control or influence, remove it from the static site generation step, or at the very least add some retry logic for it.
If you own the backend service, consider moving it to the monorepo to tighten the API change feedback loop, or if it is brittle because of external services, add some caching or other protection against that brittleness.
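A minimal retry sketch (the function name is mine, not from any framework) for the kind of build-time call a static site generator makes. In a real SSG setup the operation would be an async fetch; this version is synchronous for clarity:

```typescript
// Retries a flaky operation a fixed number of times before giving up.
// Useful around build-time data fetching in SSG, where one transient
// backend hiccup shouldn't fail the whole main-branch build.
export function withRetry<T>(operation: () => T, attempts = 3): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return operation();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts exhausted: surface the last error
}
```

Combine this with a fallback (for example, serving the last cached response) and the backend’s flakiness stops taking your main branch down with it.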
Situation #2: Updating External Dependencies in the Monorepo Causes Production Failure
In a monorepo: if you don’t write tests, nobody will guarantee the safety of your code. And if you rely on something outside the monorepo, you have no guarantee that it won’t break loads of test suites.
In this case, imagine a design system component library that is, for some reason, in its own repository. The team releases often, and often with unintended breakage (how could they know all the situations where things break, anyway? They’re outside the monorepo feedback loop).
While it might be tempting for the design system team to create some kind of beta release process, you’re just extending your release cycle and risking introducing even more breakage while waiting for a release. Also, there’s more overhead of having two release tracks when urgent fixes come up.
On the other hand, if you decided to create a testing period for every small and potentially breaking change, you’d very soon be stuck in the classic announcements: “All teams have until this (imaginary) deadline to use the latest version,” followed by retrospectives with “nobody is testing when we make changes.”
Communication is an important but finite resource, and teams will automatically defer what is a lot of work for very little return, leaving you frustrated, possibly trying to set even stricter rules for these testing periods that’ll most likely just deteriorate motivation and productivity even more.
Move your dependencies closer to your teams and the monorepo.
Now, imagine we moved the component library, a central dependency, to the monorepo.
It is extremely important to help teams realize that if a central dependency changed in the monorepo and their consumer app broke, it’s not the fault of the central dependency’s changes! If the consumer’s test suite had covered the breaking behavior, the underlying package’s change would have executed those tests, and the team making the change would not have been able to merge it to production.
Instilling this mindset in your teams creates a sense of both importance and real urgency around good test coverage. In all my years of development, I’ve never seen a more powerful way to get teams who (mostly unwillingly) compromise on tests, or don’t write them at all, to start writing tests.
Anyone who thinks writing tests is a cost they can’t cover is implicitly saying they want an unmaintainable app. The later they realize they actually need maintainability, the harder it will be to recover, since there’s a big difference between code written to be testable and code where testability wasn’t considered from the beginning.
If you’re not used to writing tests, my advice in this area is usually to focus on the unit tests and don’t create too many big complex all-in-one tests (they’ll be brittle and slow). If you haven’t written a single test, take it in small steps by adding a few unit tests or a very small browser test (mock the backend away if you have to. It’s easy using Cypress).
Realize where your code needs to change to be more testable, and don’t be overwhelmed by the large number of tests you may be missing.
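One concrete way to “realize where your code needs to change to be more testable” is to pass the backend dependency in, instead of reaching for it directly. A sketch, with all names made up:

```typescript
// Instead of calling fetch() inside the function, accept a client
// interface, so a unit test can pass a fake backend.
interface UserClient {
  getUser(id: string): { name: string; isActive: boolean };
}

export function greetingFor(client: UserClient, userId: string): string {
  const user = client.getUser(userId);
  return user.isActive ? `Welcome back, ${user.name}!` : "Account inactive";
}

// In a unit test, the "backend" is just a plain object:
export const fakeClient: UserClient = {
  getUser: () => ({ name: "Ada", isActive: true }),
};
```

The same idea is what `cy.intercept` gives you in a Cypress browser test: the real backend is swapped for a canned response, so your test exercises your code rather than someone else’s uptime.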
Situation #3: The Monorepo Builds Are Slow, Affecting Your Deploys and Releases
While the myth of slow monorepo build performance has long been debunked and solved many times over thanks to remote build caching, selective build calculation, and various other means, it is still important to remember that changing a shared dependency in the monorepo will trigger the test suites of every consumer of that package. Of course, you want to test all affected packages, so what can you do about it? There are a few things worth mentioning here.
Shared dependencies with high churn
If you have a situation where a shared dependency is changing often, and is used across many consumers, let’s try to understand why.
In a monorepo I was part of, we thought it would be a great idea to have a central frontend REST client to simplify our network requests across the monorepo, and to detect breaking changes when a backend API changed. It generated frontend code from OpenAPI schemas.
What we realized was that since this package changed a lot as teams regenerated code from their backend schemas, it wasn’t scalable and negatively affected our build times (a change in a central package triggers the builds of all its consumers, remember?).
We then asked ourselves, what are we actually looking to centralize?
We wanted to ensure that teams didn’t generate frontend code for the same microservice in multiple places. If someone changed a backend schema, teams that had generated code from an older version of that API would never be made aware of it, because their test suites would never execute against the new backend API.
We also wanted to centralize the generation scripts. So, what we ended up doing was to create a tool in the monorepo that did the generation, and kept track of which output paths the generated code was in for each microservice.
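A sketch of the bookkeeping such a tool needs (the shape here is illustrative, not the actual tool we built): track where each microservice’s generated client lives, and refuse a second output path for the same service.

```typescript
// Maps each backend microservice to the single place its generated
// REST client is allowed to live in the monorepo.
const outputPaths = new Map<string, string>();

export function registerGeneratedClient(service: string, outputPath: string): void {
  const existing = outputPaths.get(service);
  if (existing !== undefined && existing !== outputPath) {
    throw new Error(
      `${service} is already generated into ${existing}; ` +
        `generate it in one place so schema changes surface everywhere`,
    );
  }
  outputPaths.set(service, outputPath);
}
```

Registering the same service and path twice is harmless, but a second output path for the same service fails fast, which is what keeps stale generated clients out of the repo.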
We then let teams create as many REST clients as they wanted, dividing them up between consumer areas in the company (e.g., a B2B REST client and a B2C REST client).
Long story short, while it sounds very tempting to have these mega-central packages, if they also have a lot of churn, you’re better off splitting them up per consumer area. You could apply this thinking to a big component library as well, and think about how you’d end up dividing it.
Stop coupling releases with your deploys
This is a big one and is worth repeating many times over.
Waiting for a deploy to finish when releasing is, these days, not the most modern way to release features. This goes double for rollbacks that could cause a massive loss of revenue for every minute you’re waiting on a “rollback build” to finish.
Instead, incorporate some kind of release management tool, either one managing release snapshots or one using feature flags, also known as feature toggles.
Feature toggles (which I sometimes refer to as “glorified if-conditions” for people who don’t know what they are) are a great way to control your releases.
You can use them to release your code changes to a percentage of your customers, using some arbitrary rule (“only enable this feature for admin users”), and to turn a feature off at the click of a button. The deploy could have happened a week before the actual release, so a longer deploy time becomes irrelevant.
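A “glorified if-condition” really can be this small. A sketch, where the flag shape and names are illustrative rather than any specific vendor’s API:

```typescript
interface Flag {
  enabled: boolean;     // the kill switch: off at the click of a button
  adminOnly?: boolean;  // an arbitrary targeting rule
  percentage?: number;  // 0-100 gradual rollout
}

interface User {
  id: string;
  isAdmin: boolean;
}

// Deterministic bucket (0-99) per user, so a user doesn't flicker
// in and out of a partial rollout between page loads.
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h;
}

export function isEnabled(flag: Flag, user: User): boolean {
  if (!flag.enabled) return false;
  if (flag.adminOnly && !user.isAdmin) return false;
  if (flag.percentage !== undefined && bucket(user.id) >= flag.percentage) return false;
  return true;
}
```

The deployed code always contains both branches; what the release tool changes is only the flag data, which is why releasing and rolling back no longer wait on a build.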
If the problem “the slow build is affecting our releases and rollbacks” surfaces in your company, trying to lower the build time is, while sometimes nice to do in general, solving the wrong problem. Instead, the problem statement should be “We don’t have a release and rollback strategy,” because as harsh as it sounds, you can’t really count on a CI pipeline for this; it’s just not the right tool for release management.
Using feature toggles additionally enables you to use fewer test environments since the separation between code changes happens on a code level, which plays well into the collaborative aspect.
It takes at least some level of collaborative culture to really make monorepos a viable option. It also does not hurt for the organization to know that spending time on tooling and developer experience pays off in the long run.
This is all speaking from experience in a divided company, and I hope this can help you find better footing if you wanted the same thing I did without at first really knowing it — a company consisting of teams that feel empowered by collaboration.
Having a frontend monorepo with more standardized rules and template packages will lead to a more consistent site from a UI perspective and is yet another strong argument for why a monorepo can help your product through a strong collaborative culture.
How to test your collaborative level
Without going all in for a monorepo, try creating something useful but incomplete, like a project generator with a tool like yeoman. The goal would be to try to standardize and help teams be more productive with it, as well as to see if they try to contribute back to it and see it as living documentation, legitimizing your technical direction. If you get decent traction and feedback, you may have a culture that can drive a powerful monorepo.
If you don’t get any traction, maybe you’re better off elsewhere, or perhaps you need to find strong allies within the company who want to make the culture more collaborative. And remember that all communication starts with yourself: make note of the rhetoric you use when you successfully convince others, because you’re bound to repeat yourself the many times the monorepo debate comes up.
Bonus: My Favorite Monorepo Managing Tool
I’m a strong advocate of rushjs. It’s like TypeScript for monorepos.
Rushjs helps structure large organizations in a monorepo. For example, it ensures teams don’t rely on ten different React versions (although you can always opt out of it for each specific dependency). It also has remote build caching and many other well-designed features.
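As an illustration of that single-version policy, Rush keeps it in `common/config/rush/common-versions.json` (a simplified fragment; check Rush’s own documentation for the full schema), where `allowedAlternativeVersions` is the per-dependency opt-out mentioned above:

```json
{
  // With "ensureConsistentVersions": true in rush.json, every project
  // must agree on one version of each dependency...
  "allowedAlternativeVersions": {
    // ...unless you explicitly opt a dependency out:
    "react": ["16.14.0"]
  }
}
```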
Thanks for reading.