Opinion: Why monoliths are not always "bad" even in a cloud-native world
"I’ve heard people balk at the suggestion of this mixed approach, because they don’t deem it feasible to convert monoliths to microservices later in the application’s lifecycle..."
Modernisation typically means transforming monolithic applications into containerized microservices. However, you don’t have to automatically turn everything in your applications into microservices and completely abandon the monolith. In fact, you can have it both ways!
If there’s one thing that has defined online discourse of the 2020s, it’s that opinions tend to be extremely binary, writes Deepak Goel, CTO, D2iQ. Things are either good or bad. There is little room in between, and when it comes to software architecture (whether to go with a microservices or a monolith approach), I can’t help but feel like some nuance is badly needed here.
It is true that microservices have become the default way we think of modernising software, largely because of the scalability, flexibility, and agility that the architecture style affords. But microservices and monoliths each have their distinct merits for different circumstances, and we shouldn’t necessarily be thinking about the architecture in our applications as an either-or scenario.
We’ve seen recently how that nuance can get lost when Amazon’s Prime Video team cut its costs by 90% by moving the deployment of its audio/video service away from serverless AWS Step Functions to Amazon Elastic Container Service (ECS). It sparked debate in the developer community about whether this constituted a monolithic approach or just one big microservice, yet plenty of headlines in the media described the move as Amazon ‘dumping’ microservices and ‘switching to monolith’. Renowned tech advisor Adrian Cockcroft didn’t shy away from critiquing what he felt were “bad takes”.
See also: So many bad takes — What is there to learn from the Prime Video "monolith" story?
What really happened was that the team replaced an assortment of distributed microservices handling video/audio stream analysis with an architecture that instead had all components running inside a single Amazon ECS task. This is still a distributed application; it’s just no longer a purely serverless one. We shouldn’t be focusing on the semantics of monolith vs. microservices, but on how we can optimise the architecture of our software.
How do we know when an architecture should be fully distributed or not? I’m glad you asked.
Believe it or not, microservices add complexity - you don’t always want that
The case for microservices seems simple. They’re small units of deployment that can each be tested and released independently. You don’t need to coordinate check-ins from hundreds of developers before deploying, and if you’re a small team, there’s an agility to being able to deploy frequently. The promise of being able to update specific components in an application independently, like a logging tool, without negatively affecting anything else is a huge selling point for microservices.
But the downside is that you now have to debug, troubleshoot, update, and deploy a distributed application, and troubleshooting and debugging a distributed application can be much more difficult than doing the same for a monolithic app. There is a lot more happening over the network, which can lead to complicated performance issues and weird bottlenecks. Look at it this way: an entire industry has sprung up around understanding the performance of distributed systems, and it’s still really hard. Unless you’re already immersed in the trenches of microservices, you might find this an extremely daunting task.
See also: Kubernetes has standardised on sigstore in a landmark move
That is to say, microservices aren’t a “win button” you can press. They’re a tradeoff between ease of deployment and complexity, and when you split your applications too much, the scales will inevitably tip further towards complexity than many developers are equipped to deal with. Even so, I’ve (frustratingly) seen some incredibly talented people choose complexity for the sake of complexity by overprovisioning their applications.
It’s true that we have some nice technologies for managing microservices, such as container orchestration with Kubernetes. It’s super easy to spin up a Kubernetes cluster, but taking that cluster to production is difficult, and even the tools for managing microservices pose challenges that developers from legacy application backgrounds might struggle with.
The good news is you don’t have to worry about this if you’re deploying monolithic applications. Again, this should not be construed as a seal of approval to just “monolith” everything. Remember, it doesn’t have to be a microservice vs. monolith decision.
Some parts of your applications will need scalability - others won’t
Not everything in your applications needs to be scalable. You can mix it up. There’s no reason to jump head-first into the lava pit of complexity on Day 1 if you’re a startup. You can make the core of your application monolithic and the other components microservices-based. You’ll be thankful for this if your application explodes in popularity, because it means you can dynamically react to that growth by increasing the proportion of your application that is broken down into independent services.
Now, I’ve heard people balk at the suggestion of this mixed approach because they don’t deem it feasible to convert monoliths to microservices later in the application’s lifecycle. The difficulty comes from the fact that components in a monolith communicate through function calls, whereas microservices communicate through API calls over the network, and the two differ greatly. Although it’s not easy, you can get around this if you treat your functions like APIs and clearly define their boundaries.
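To make that idea concrete, here is a minimal sketch in Go of what “treating a function like an API” can look like. The names (Reports, inProcessReports, remoteReports, the /reports endpoint) are hypothetical illustrations, not anything from a specific codebase: the calling code depends only on an interface, which today is satisfied by an in-process implementation and could later be satisfied by an HTTP client once that component is extracted into its own service.

```go
// A sketch of an in-monolith component exposed behind an interface so it can
// later be extracted into a separate service without rewriting its callers.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Reports is the boundary: calling code depends on this interface,
// never on how it is implemented.
type Reports interface {
	Generate(userID string) (string, error)
}

// inProcessReports is the "monolith" implementation: a plain function call.
type inProcessReports struct{}

func (inProcessReports) Generate(userID string) (string, error) {
	return fmt.Sprintf("report for user %s", userID), nil
}

// remoteReports satisfies the same interface over HTTP, for the day the
// component is extracted into its own (hypothetical) reports service.
type remoteReports struct {
	baseURL string
}

func (r remoteReports) Generate(userID string) (string, error) {
	payload, err := json.Marshal(map[string]string{"userId": userID})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(r.baseURL+"/reports", "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Report string `json:"report"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Report, nil
}

func main() {
	// Today: wire in the in-process implementation.
	var reports Reports = inProcessReports{}

	// Later: swap in remoteReports{baseURL: "http://reports.internal"}
	// without touching any calling code.
	report, err := reports.Generate("42")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(report)
}
```

The point isn’t the specific wiring; it’s that if call sites never assume the implementation lives in the same process, moving that component into a microservice later becomes a matter of swapping one implementation for another rather than rewriting every caller.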
The hassle here is substantially less than the headache of an all-microservice approach from the get-go that forces a developer team to deal with a mountain of complexity before it’s even remotely necessary. On the flip side, you don’t want an all-monolith architecture because, aside from the fact that such an architecture generally triggers lazy application writing habits, it could limit your ability to scale the application later on.
Stick with a mixed architecture and you’re more likely to straddle the line between simplicity and scalability. Istio, an open source service mesh, is a good example of this approach done right. It went through a bumpy start, initially adopting a completely microservice-based architecture and even splitting the control plane into different services, which made deployment radically more complex than it needed to be. In later versions, the Istio team consolidated those control plane components into a single binary, istiod. A simpler architecture ended up being a better choice (and it still deploys as a container in a pod).
Platform engineering can make microservices more manageable
How about when you’ve eventually reached a level of maturity and requirements that forces you to make your whole application microservices-based? Abstraction will be key here.
Kubernetes helps manage microservices, but Kubernetes itself is a very challenging technology to work with once you get to Day 2. It’s like a car that requires you to be a wizard mechanic just to drive from point A to point B, and not everyone in today’s software development climate is an expert mechanic. Most have been taught how to drive using just the steering wheel, brakes, accelerator, and so on. They weren’t taught how to repair or modify an engine; that’s what mechanics are for.
Similarly, we shouldn’t expect everyone to be an expert in Kubernetes at the infrastructure level, especially not with today’s skills gap. A better route would be to build out small, bespoke platform engineering teams (the Kubernetes mechanics) who have inside-out knowledge of the underlying infrastructure and who provide internal developer platforms that simplify user interfaces and automate Kubernetes functions for developers. We should be empowering developers to focus on innovation, not maintenance.
Having user-friendly tools that help manage microservices will make them feel less frighteningly complex. Just don’t make the mistake of thinking you have to turn everything into microservices on Day 1. Technology should serve developers and, by extension, their end users, not be used as a means to flex one’s technical problem-solving skills.
Deepak Goel serves as Chief Technology Officer at D2iQ, leading the design, development, and build of products on its Kubernetes platform and enabling Day 2 operations in multi-cluster, multi-tenant Kubernetes environments. Deepak brings over 10 years of experience in the computer industry across networking, distributed systems, and security, has co-authored several research papers, and holds a number of patents in computer networks, virtualization, and multi-core systems.