Microservices are small, reusable software components that are developed and deployed independently but interwork and operate in unison to form a complete application. Architecting an application using microservices methodologies is a modern alternative to creating a monolithic piece of code to deliver an application.
Monolithic code vs. microservices architecture
A monolithic legacy
While applications developed as a single, large piece of code are (conceptually) easier to deploy, there are many limitations to the monolithic architectural approach. As a codebase grows, it becomes increasingly difficult to understand and modify. New engineers, especially, not only struggle to comprehend previous work but may also bring fundamentally different programming styles. This compounds over time, further complicating the software implementation while rapidly degrading the overall quality of the code.
Mitigating this requires large teams of engineers to coordinate closely, as even the smallest code change could affect other contributors in some manner. Moreover, testing necessitates a complete new build of the software, requiring engineering groups working on otherwise distinct components to be totally synchronized. For deployed applications, an entire service upgrade is required for each small change, making the introduction of new features an extremely long and laborious process. This mode of operation also makes the adoption of continuous delivery (DevOps) approaches extremely difficult. Furthermore, it is practically impossible to change the underlying development framework should newer technologies become available.
Scaling deployed applications, developed in such a monolithic manner, is also incredibly inefficient. Supporting an increase in transaction volumes in any part of the overall service requires the instantiation of more copies of the complete application, typically running behind a load balancer. If these load balancers can’t handle session persistence, then multiple application instances may repeatedly request the same backend data, causing significant performance inefficiencies. Moreover, deploying a complete copy of the application when only a specific process is required can result in the unnecessary reservation of large numbers of CPU cycles (if the process is memory intensive), or the unnecessary reservation of memory blocks (if the process is processor intensive).
A modern approach
Developing applications using a microservices methodology not only enables the adoption of DevOps approaches but also effectively eliminates the deployment inefficiencies that result when scaling services. Furthermore, unlike its monolithic predecessor, microservices development principles make it far easier to incrementally upgrade when new technologies emerge.
With microservices architectures, individual engineers or small groups of programmers can focus solely on one element of an overall application, contributing code far more rapidly and with their own programming style, without close regard for other components being delivered in a similarly disaggregated manner.
As highly disaggregated systems, applications built using microservices approaches do inherently complicate the programming and implementation processes. To mitigate this, a microservices framework must be employed. This framework provides a development toolchain that dramatically simplifies the software engineering process. A microservices framework abstracts the complexity of all underlying common services and lifecycle management systems, such as storage and orchestration, while providing a lightweight inter-process communication mechanism. While these and other attributes are similar to those outlined in Service-Oriented Architecture (SOA), microservices architectures are recognized for being more lightweight in nature. In the case of inter-process communication, this means forgoing elaborate message-oriented middleware (MOM) in favor of a simpler messaging system, typically built on HTTP.
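To make the contrast with heavier MOM stacks concrete, here is a minimal sketch, assuming a hypothetical "inventory" microservice and nothing beyond the Python standard library, of one service exposing state as a JSON resource and a peer consuming it with an ordinary HTTP request:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Illustrative in-memory state for the hypothetical "inventory" service.
STOCK = {"widget": 42}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the request path (e.g. /widget) as the item name.
        item = self.path.strip("/")
        body = json.dumps({"item": item, "count": STOCK.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 asks the OS for any free port; run the server in a background thread.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A peer microservice needs only a plain HTTP call -- no message broker.
port = server.server_address[1]
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/widget").read())
print(reply)  # {'item': 'widget', 'count': 42}
server.shutdown()
```

In a real deployment each service would run in its own container behind a stable address, but the messaging pattern is the same: a resource-oriented HTTP exchange rather than broker-mediated middleware.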
In the same way microservices are developed in more agile engineering environments, they are also deployed and scaled in a far leaner fashion than classic, monolithic applications. Built to be cloud native, each instance of a microservice can be installed on demand within its own lightweight Linux container. When service load increases, only the distinct elements (microservices) requiring additional processing capability need be added to the overall system. Compared to typical scale-up approaches, this scale-out model significantly reduces compute overheads and stranded (unused or unusable) compute capacity.
For more information about products built using cloud native microservices methodologies, visit www.metaswitch.com/products.