Evolution is a constant feature of the Information Technology (IT) industry. The environment is in constant flux, the landscape always changing. New opportunities arise for nimble, better-adapted systems to fill gaps in the new ecosystem, while existing inhabitants get squeezed, possibly even out of existence. When confronted with such a changing environment, you evolve or you get replaced. When the landscape changes as quickly as it does in IT, you must be constantly wary of becoming obsolete. That’s just one of the facts of life that makes IT so exciting.

This evolution is a factor even at the micro level. Experienced programmers have a concept known as Code Smell. They recognise, just by looking at a section of code and seeing how it relates to the whole, that something smells fishy. They notice, perhaps, that a fundamental design principle is being violated somewhere, or they find anachronistic code relating to some vestigial, broken functionality. Such code is said to smell, and it will trouble any developer working in that section of code, because they will not be able to trust their instincts. Things that should work will break, and to get things working, a principle might need to be bent a little. It’s a sure sign that refactoring is required. If left too long, or not quarantined appropriately, the smell will spread like a contagion and eventually lead to some catastrophic failure. In the face of such pressures, companies lose their best developers, who move on to more cutting-edge technologies. Then things spiral down quickly, and the smell of decay pervades. Yes, sometimes systems develop a bad smell simply because the world has moved on.
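
To make the idea concrete, here is a small hypothetical illustration (none of this code is from the original text): a function that quietly mixes responsibilities and carries a vestigial feature, followed by a refactored version that separates concerns so the next developer can trust their instincts again.

# Hypothetical example: a "smelly" function that mixes pricing, persistence
# and a vestigial CSV export that no one has used in years.
def process_order(order, db, legacy_csv=None):
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "XMAS99":        # anachronistic promotion code
        total *= 0.5
    db.save(order["id"], total)
    if legacy_csv is not None:                 # dead path, kept "just in case"
        legacy_csv.write(f"{order['id']},{total}\n")
    return total

# After refactoring: one responsibility per function, no vestigial paths.
def order_total(order):
    """Pure pricing logic, easy to test in isolation."""
    return sum(item["price"] * item["qty"] for item in order["items"])

def save_order(db, order, total):
    """Persistence kept separate from calculation."""
    db.save(order["id"], total)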

A well-built system resembles a well-pruned tree: perfect in balance and form. But it immediately comes under pressure from new demands, and can lose structure as it transforms organically to meet them. Perhaps there’s a requirement to move to a new platform, provide mobile access, or add some new technology like voice recognition. Soon all that balance is lost, and it becomes difficult to navigate through the dense growth. It, too, soon begins to smell.

When the smell gets bad enough, it’s an opportune time to re-evaluate fundamental designs and consider refactoring. And when you refactor, it’s important to examine the current landscape and be cognisant of emerging technologies and architectures.

Some pressures are so profound that drastic measures are required to maintain structure. Some changes are universal in nature, so that all systems must shift to accommodate them. One such change was the move to the cloud, and how existing systems dealt with this demand would have profound effects on their future prospects.

The pressure to move to the cloud was in great part driven by procurement difficulties. Reacting to anticipated demand used to take months: forecasts were examined, equipment procured, installed & tested, only perhaps to be left idle when demand failed to materialise, or for services to degrade when demand was unexpectedly high. The cloud offered a seemingly inexhaustible supply of virtual hardware. Procurement took only moments, and when demand was low, anything surplus to requirements could be relinquished. Companies like Amazon and Google were becoming leaders in the latest methodologies for high availability. If your data centre infrastructure did not form part of your core business value, then there was no sense in trying to compete, so many chose to migrate to the public cloud.

In many cases, existing systems were simply fork-lifted onto the virtual platforms with minimal modification. Yes, they were on the cloud, but they were never designed to leverage cloud architecture to its potential, and they would soon become more unstable as they tried to respond to new demands. The landscape had changed drastically. The ubiquitous nature of the cloud meant that new opportunities, like mobile access, became desirable. Legacy systems struggled to respond. Some, particularly those designed around a Service Oriented Architecture (SOA), fared a little better.

SOA, seen at a very high level, is built around a paradigm of identifying the individual services of a system that fulfil a business need, and then building those services with well-defined interfaces so that they can communicate with each other to perform some business function. The paradigm prioritises, among other things, interoperability, flexibility and evolutionary refinement, all of which enabled it to adapt easily to the cloud. The level of granularity is generally just enough to perform some business function, though finer granularity is adopted where services share functionality. Communication was typically via an Enterprise Service Bus (ESB), and together these services worked in unison to provide a complete system. With the system architected in this way, individual services could adapt more nimbly to the new infrastructure, and so SOA quickly became the architecture of choice for Cloud Applications.
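
As a rough sketch of that paradigm (the service names and the toy message bus below are invented for illustration; they are not drawn from any particular ESB product), the essential shape is a set of coarse-grained services with well-defined interfaces, communicating through a shared bus rather than calling each other directly:

# Toy stand-in for an Enterprise Service Bus: routes published messages
# to whichever services have subscribed to a topic.
from collections import defaultdict

class ServiceBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

class BillingService:
    """A coarse-grained service fulfilling a single business need."""
    def __init__(self, bus):
        bus.subscribe("order.placed", self.invoice)

    def invoice(self, message):
        print(f"billing order {message['order_id']} for {message['amount']}")

class ShippingService:
    def __init__(self, bus):
        bus.subscribe("order.placed", self.dispatch)

    def dispatch(self, message):
        print(f"shipping order {message['order_id']}")

bus = ServiceBus()
BillingService(bus)
ShippingService(bus)
bus.publish("order.placed", {"order_id": 42, "amount": 99.50})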

To understand what a profound change moving to the cloud was for companies that had previously managed their own data centres, imagine provisioning a hospital for the next 50 years. Initially the wards would be empty, but towards the end of its lifespan the hospital would be bursting at the seams. The sweet spot, where demand is perfectly matched, is short-lived. The hospital would spend most of its lifetime either under-utilised and expensive, or swamped and under-performing. Now imagine you could provision a smaller hospital instead, just for the short term, one that you could bulk up as required. Couple that with the fact that technology is constantly advancing and providing even more options. Now, when your emergency department is overrun and you’ve bulked up as much as you can, you can spawn a completely new hospital next door in a matter of hours to cope with demand, with load distributed between them. But as you marvel at how awesome that is, you can’t help but wonder if you really needed that second helipad, with its own helicopter and crew. And you begin to notice a strange smell, and wonder if there’s a better way.

Well, for a while there wasn’t a better way, but that would soon change as cloud technologies continued to mature. DevOps disciplines advanced with improved tools and simplified infrastructure. Alternative data persistence technologies came onstream. New lightweight messaging and lightweight runtime technologies became available, along with auto-scaling. The landscape had changed so much that it was time to look again at the wasteful practice of cloning complete systems to respond to localised demands, and to re-examine whether SOA was still the architecture of choice for Cloud Applications.

Typical Monolithic Web Application

Breaking things down into manageable pieces, decoupling them, and making them independent has been the hallmark of good object-oriented design since its inception. These new technologies meant that it was no longer necessary to bundle all of the services together into a single monolithic application in order for them to function well together. Instead, each component could be developed independently and could leverage the most appropriate data persistence scheme for its task. By utilising new lightweight runtimes, these components could deploy in under a second for immediate response to increased demand, with efficient communication between services over lightweight messaging protocols. Their independently bounded contexts meant that language, runtime and datastore could be chosen specifically for the task at hand. Services, now smaller and self-contained, could be built with faster iteration cycles, and better DevOps tools meant that the more complicated deployment could be automated. The age of microservices had begun, with the promise of greater agility, scalability and resilience.
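
As a minimal sketch of one such self-contained service (the Flask framework and the in-memory dictionary standing in for the service’s private datastore are illustrative assumptions; the text prescribes no particular runtime or database):

# A tiny "catalogue" microservice: it owns its own data and exposes a
# well-defined HTTP interface. Requires Flask (pip install flask).
from flask import Flask, jsonify, request

app = Flask(__name__)
catalogue = {}   # this service's private datastore (in-memory for the sketch)

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    item = catalogue.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

@app.route("/items/<item_id>", methods=["PUT"])
def put_item(item_id):
    catalogue[item_id] = request.get_json()
    return jsonify(catalogue[item_id]), 201

if __name__ == "__main__":
    # A lightweight runtime like this starts almost instantly, so extra
    # instances can be spun up quickly behind a load balancer when demand rises.
    app.run(port=5000)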

Typical Microservice Configuration

There are some who consider these new microservices to be just the current manifestation of SOA. After all, SOA is designed to evolve, and the same basic principles are being adhered to; but people said similar things about SOA when it first emerged onto the scene many years ago. Basic principles have always applied. Systems based on SOA principles have matured to the point where they are readily recognisable as services communicating with each other via an ESB, whereas the shape of a system designed around microservice architecture is much more decentralised. SOA primarily focuses on the enterprise scale, whereas microservice architecture operates at the application scale. There is a continuum, as systems evolved to leverage new technologies and adapt to new environments, and it can be hard to pinpoint exactly where microservices began, but that’s the same for all evolving things, whether it’s birds & dinosaurs or humans & apes.

Microservices are not the same as the services that were exposed via APIs within SOA. Rather, they are independent, standalone components that perform a function and can be deployed independently. Well-designed microservice architectures demand that each service manage its own data, which gives a microservices-based system a very different shape. Splitting a system into completely independent components is extremely difficult, and distributing systems in this way comes with its own set of problems. For example, many business transactions will likely affect more than one service (with several disparate databases being modified), so maintaining consistency becomes more difficult. Imagine an order failing at the last hurdle because a credit card was refused, even after the stock had been removed from the virtual shelf. Everything would need to be undone, and the stock placed back on the shelf. As microservice architecture has matured, standard patterns have emerged for dealing with such difficulties (Richardson, 2018); and since distributed systems have existed for a long time, the difficulties are well understood.
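
The saga pattern cited above (Richardson, 2018) addresses exactly this by pairing every step with a compensating action that undoes it. The sketch below is a simplified, hypothetical orchestration of the order example, not code from that source:

# Simplified saga-style orchestration: each step has a compensating action,
# and the compensations run in reverse order if a later step fails.
class PaymentDeclined(Exception):
    pass

def reserve_stock(order):
    print(f"stock reserved for order {order['id']}")

def release_stock(order):
    print(f"stock placed back on the shelf for order {order['id']}")

def charge_card(order):
    if not order.get("card_ok", True):
        raise PaymentDeclined(order["id"])
    print(f"card charged for order {order['id']}")

def refund_card(order):
    print(f"card refunded for order {order['id']}")

def place_order(order):
    steps = [(reserve_stock, release_stock), (charge_card, refund_card)]
    completed = []
    for action, compensate in steps:
        try:
            action(order)
            completed.append(compensate)
        except Exception:
            # Undo everything that already succeeded, most recent step first.
            for undo in reversed(completed):
                undo(order)
            raise

place_order({"id": "A1"})                        # happy path
try:
    place_order({"id": "B2", "card_ok": False})  # payment refused, stock released
except PaymentDeclined:
    pass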

Luckily, microservice architecture’s flexibility means it can coexist quite happily with other architectures, so migration can be a gradual process. As existing legacy systems become stale and unmanageable, they can be retired and, where appropriate, replaced by microservices.
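
One common way to make that coexistence concrete (often described as a strangler-fig migration, a term the text itself does not use) is a thin routing layer: the few paths already carved out as microservices are forwarded to the new services, while everything else continues to reach the monolith. The backend names below are purely illustrative:

# Illustrative routing table for a gradual migration. Paths that have been
# extracted into microservices go to the new services; the rest still go to
# the legacy monolith.
MIGRATED_PREFIXES = {
    "/catalogue": "http://catalogue-service:5000",
    "/payments": "http://payment-service:5001",
}
LEGACY_BACKEND = "http://legacy-monolith:8080"

def route(path):
    """Return the backend that should handle the given request path."""
    for prefix, backend in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return backend
    return LEGACY_BACKEND

assert route("/catalogue/items/42") == "http://catalogue-service:5000"
assert route("/accounts/7/history") == LEGACY_BACKEND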

Of course, there are still legacy monolithic systems out there in the wild, some still in regular use, like a well-engineered Victorian railway bridge. And where they exist, they hamper the development of modern, faster, leaner systems. They are a product of their time. Once marvelled at, now they just annoy commuters who wonder why the high-speed train must slow to a crawl in their vicinity. A digital Colosseum: built for chariots, while all around people drive Ferraris.

Owners of these systems might protest that they are under constant development. Well, my mother’s broom is at least thirty years old and is used every day. Of course she’s changed the head about ten times and the handle at least twice, but it’s the same broom. Amazon Web Services (AWS) was launched in 2002, and the digital world has gone through tumultuous change since. If your system is older than that, and you haven’t fundamentally changed your architecture to cope, then the writing is on the wall. You’ve probably already noticed the smell. If you are observing resistance to adaptation, loss of equilibrium, an accumulation of problems, flickering performance, or just general system weirdness, then you need to begin anticipating a critical transition (Scheffer, et al., 2012), and for standard monolithic systems, with their inherent levels of homogeneity and interconnectedness, that transition will be catastrophic. The prognosis is terminal. Don’t be fooled by the fact that your system is large and powerful. Remember, Nokia phones and Blackberries were once dominant too.

Of course, many paradigms come and go, and some look at microservices and scoff that the approach is just a fad or that it deviates from good traditional design, but they’re wrong (Fowler, 2014). Like all good technologies, microservices will eventually see their time in the sun come to an end. But they are built with agility, scalability and resilience by design, so don’t expect that to be anytime soon. Even now, nascent technologies like serverless computing are maturing and vying for some space (Jonas, et al., 2019), but it will take a lot to dislodge microservices. That’s why companies like Netflix, PayPal, Spotify & Twitter have dumped their monolithic services in favour of a microservice architecture. Maybe it’s time you did too, before it’s too late.

Bibliography
Jonas, E. et al., 2019. Cloud Programming Simplified: A Berkeley View on Serverless Computing. [Online]
Available at: https://www.researchgate.net/publication/331034553_Cloud_Programming_Simplified_A_Berkeley_View_on_Serverless_Computing

Fowler, M., 2014. Microservices and the First Law of Distributed Objects. [Online]
Available at: https://martinfowler.com/articles/distributed-objects-microservices.html

Richardson, C., 2018. Pattern: Saga. [Online]
Available at: https://microservices.io/patterns/data/saga.html

Scheffer, M. et al., 2012. Anticipating Critical Transitions. [Online]
Available at: https://science.sciencemag.org/content/338/6105/344

Writer – Sean McLaughlin