Why Microservices Architecture Makes Better Engineers

Skills silos are still a common issue in the tech industry. Too often, software engineers describe themselves as “Java developers” rather than as engineers who happen to have Java expertise. Or perhaps someone identifies as a back-end engineer and therefore won’t do front-end programming. Some IT operations engineers even refuse to embrace infrastructure as code because it is a “development concern, not an infrastructure one”. These silos, however, are gradually being broken down by the growing influence of microservices.

When applying a microservices architecture, it doesn’t matter if you are a computer scientist responsible for delivering just a single component of the distributed system: unless the operational concerns of the whole infrastructure are taken into account, the project can fail. As a result, software engineers are having to evolve their work practices. They need to look beyond the immediate process they’re in charge of delivering and consider the system as a whole, making sure their delivery doesn’t disrupt the individual lifecycles of the other components.

When utilising this type of architecture, good collaboration between multiple services is essential. That means embedding remote communication, self-healing and failover behaviour into the business logic rather than abstracting them away inside a framework. In addition, data modelling and transaction boundaries need to follow bounded contexts, teams and organisation structures, and every data process needs to take into account how the data involved is represented elsewhere.
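To make that concrete, here is a minimal, hypothetical sketch in plain Java (the `InventoryClient`, `InventoryView` and `OrderService` names are invented for illustration) of what “embedding remote communication and failover into the business logic” can look like: the retry and fallback policy lives in the ordering context’s own code, where it can be read, versioned and tested, rather than hidden inside an invisible framework layer.

```java
import java.time.Duration;
import java.util.Optional;

/** Hypothetical port to the Inventory bounded context (HTTP, gRPC, ... the transport does not matter here). */
interface InventoryClient {
    InventoryView fetchStock(String sku) throws Exception;
}

record InventoryView(String sku, int available) {}

/** Business logic that owns its own retry and failover policy instead of hiding it in a framework. */
class OrderService {

    private final InventoryClient primary;
    private final InventoryClient replica;   // e.g. a read replica or cache in another region

    OrderService(InventoryClient primary, InventoryClient replica) {
        this.primary = primary;
        this.replica = replica;
    }

    Optional<InventoryView> checkAvailability(String sku) {
        // Explicit, testable failover: try the primary a couple of times, then fall back to the replica.
        for (int attempt = 0; attempt < 2; attempt++) {
            try {
                return Optional.of(primary.fetchStock(sku));
            } catch (Exception e) {
                sleepQuietly(Duration.ofMillis(100L * (attempt + 1)));  // simple backoff between attempts
            }
        }
        try {
            return Optional.of(replica.fetchStock(sku));  // degraded but available answer
        } catch (Exception e) {
            return Optional.empty();                      // the caller decides what "unavailable" means
        }
    }

    private static void sleepQuietly(Duration d) {
        try { Thread.sleep(d.toMillis()); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
    }
}
```

The exact policy matters less than the fact that it is owned by the team delivering the component and tested alongside the domain code.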

Consider the back-end-only, Java-only, Spring-only developer (and as a lover of Java and the Spring framework myself, I feel I can say this), happy in their silo, “protected” by declarative aspect-oriented annotations (which are, indeed, really good when not misused). They might write a well-crafted algorithm that, theoretically, begins “atomically” and holds a SQL transaction open while it:

  • reads from the HTTP request stream;
  • reads from the remote relational DB (even if it’s deployed locally, it’s still a separate process, unless you are using H2 or similar);
  • processes some sophisticated algorithm;
  • writes a large stream to a remote object store or file system;
  • generates and persists the index in a remote search engine;
  • writes the update to the relational DB;
  • writes to the HTTP response stream;
  • finally, releases the thread and commits the SQL transaction.

During this whole process, the client-side application is also blocked waiting for the response, because to a “back-end-only developer” the client is merely a cosmetic view, not an actual application.
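A condensed, hypothetical sketch of those steps (Spring-style `@Transactional`, with invented repository and client names) might look like this. Everything below runs on one thread, inside one database transaction, while several remote systems are written to along the way:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.springframework.transaction.annotation.Transactional;

// Hypothetical collaborators; the names are invented for illustration.
interface DocumentRepository { DocumentRecord find(long id); void update(DocumentRecord doc); }
interface ObjectStoreClient  { void put(String key, InputStream content); }
interface SearchIndexClient  { void index(long id, String extractedText); }
record DocumentRecord(long id, String title, int version) {}

class DocumentUploadService {

    private final DocumentRepository repository;
    private final ObjectStoreClient objectStore;
    private final SearchIndexClient searchIndex;

    DocumentUploadService(DocumentRepository r, ObjectStoreClient o, SearchIndexClient s) {
        this.repository = r;
        this.objectStore = o;
        this.searchIndex = s;
    }

    @Transactional  // one SQL transaction (and one thread) held for the whole method
    public void upload(long id, InputStream requestBody, OutputStream responseBody) throws Exception {
        DocumentRecord doc = repository.find(id);                               // read from the remote relational DB
        byte[] payload = requestBody.readAllBytes();                            // read the HTTP request stream (consumed once)
        String text = process(payload);                                         // "sophisticated algorithm" stand-in
        objectStore.put("documents/" + id, new ByteArrayInputStream(payload));  // remote object store write
        searchIndex.index(id, text);                                            // remote search engine write
        repository.update(doc);                                                 // relational DB update
        responseBody.write("ok".getBytes());                                    // write the HTTP response stream
    }                                                                           // only now: thread released, transaction committed

    private String process(byte[] payload) {
        return new String(payload).toUpperCase();                               // placeholder for the real processing
    }
}
```

Note that every call in that method, except the two repository calls, happens outside the guarantees of the surrounding SQL transaction, which is exactly what the next paragraphs explore.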

Some would argue: “the process is using optimistic concurrency control, so the DB locking is not that expensive”. In that case, when a concurrent modification conflict occurs, the transaction will roll back and be retried by the framework. If this happens, the input streams cannot be read again; they are lost.

What if any of the other write operations are not idempotent? What happens if the search engine indexing is re-triggered? Even if they are idempotent, what happens if the current operation violates the relational DB constraints and the transaction rollback is definitive? The content already written to the object store would still be available, perhaps already being distributed to external CDNs, and the rolled-back item would already be indexed in the search engine. Does the business logic implement compensating transactions for these cases? Are we sure this operation is atomic?
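Answering those questions usually means writing explicit compensating actions. A rough, hypothetical sketch (again with invented client names, and assuming the delete operations are idempotent) could look like this:

```java
import java.io.ByteArrayInputStream;

// Hypothetical clients with explicit undo operations; deletes are assumed to be idempotent.
interface ObjectStore  { void put(String key, ByteArrayInputStream content); void delete(String key); }
interface SearchIndex  { void index(long id, String text); void remove(long id); }
interface DocumentRows { void update(long id, String text); /* may throw on a constraint violation */ }

class CompensatingUpload {

    private final ObjectStore objectStore;
    private final SearchIndex searchIndex;
    private final DocumentRows documentRows;

    CompensatingUpload(ObjectStore o, SearchIndex s, DocumentRows d) {
        this.objectStore = o;
        this.searchIndex = s;
        this.documentRows = d;
    }

    void upload(long id, byte[] payload, String text) {
        objectStore.put("documents/" + id, new ByteArrayInputStream(payload)); // side effect outside any SQL transaction
        searchIndex.index(id, text);                                            // ditto
        try {
            documentRows.update(id, text);                                      // the only truly transactional step
        } catch (RuntimeException dbFailure) {
            // Compensating actions: best-effort undo of what already escaped the transaction.
            objectStore.delete("documents/" + id);
            searchIndex.remove(id);
            throw dbFailure;
        }
    }
}
```

This is essentially a hand-rolled fragment of a saga. The point is that someone has to write it, and in a microservices architecture that someone is, visibly, the developer.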

Monoliths and Microservices Alike

The example illustrated above is similar to what I’ve experienced in more than one monolith application. The point I want to make is this: the problems already exist within monolith applications; microservices architectures address them using a programming model that is more complex than merely relying on APIs to handle ACID transactions. The real difference is that in this architectural style the problems are evident, instead of being abstracted away behind a false sense of integrity and handled later by IT operations (ending up as application bugs anyway). Because the complexities of distributed computing cannot be hidden in a microservices approach, pitfalls like the one illustrated should occur less often, because software developers and data scientists have to address them as they arise.

When designing a solution using this paradigm, silos cannot exist. Everyone involved in the delivery is directly involved in the lifecycle process. Business logic addresses both functional and non-functional requirements. The programming model and the data modelling (including a different choice of persistent store per use case) deal with end-to-end concerns harmoniously and incrementally, addressed and tested through code, whether it is domain application code, continuous delivery pipeline code or quality assurance code.

The Cost of Microservices 

Microservices architecture comes with a cost, however. It increases the impact of network latency and complexity, requiring you to design with fault tolerance and load balancing in mind. But it also brings benefits, and the mind-shift described in this text is a very important one. The key to success is to adopt Domain-Driven Design and DevOps principles as the basis of the solution. Why? Because if you are using a microservices architecture without a complex domain, you are probably a masochist :).

By accepting that there is no atomicity across different contexts, what would formerly have been different threads can become distributed processes instead, and bring extra benefits. With eventual consistency driving the design, client applications play an important role in the system: they implement MVC flows with some form of application-layer read-your-writes consistency, and event listeners that update state across multiple components of the system. The same applies to the deployment and release processes: they are embedded as part of the solution, because the applications must take into consideration, for example, downtime, fall-back, and integration with multiple versions of the same API.
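As a hypothetical illustration of that application-layer read-your-writes idea (all names are invented), a component can keep its own view consistent with the writes it has issued, and let events from the rest of the system reconcile the state asynchronously:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical event carrying the state another service has eventually persisted.
record OrderStatusChanged(String orderId, String status) {}

/**
 * Application-layer read-your-writes: the local view is updated optimistically when this
 * application issues a command, and later reconciled by events arriving from other components.
 */
class OrderView {

    private final Map<String, String> statusByOrder = new ConcurrentHashMap<>();

    /** Called when this application submits a command, before anything is confirmed remotely. */
    void recordLocalWrite(String orderId, String status) {
        statusByOrder.put(orderId, status);
    }

    /** Called by an event listener (e.g. a message-broker consumer) when the change is confirmed. */
    void onEvent(OrderStatusChanged event) {
        statusByOrder.put(event.orderId(), event.status());
    }

    /** Readers always see at least their own writes, even before remote confirmation arrives. */
    String statusOf(String orderId) {
        return statusByOrder.getOrDefault(orderId, "UNKNOWN");
    }
}
```

The sketch only shows the state-handling side; in practice the listener would be wired to whatever messaging mechanism connects the components.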

After delivering successfully against this model, these lessons about the development process and the roles played by team members will remain relevant in the future, even when working with simpler domains that do not justify breaking down the monolith.

No one can be an expert in every area of application and systems science, engineering and operations. But it is important to understand the high-level aspects of the full lifecycle and to embrace them when delivering each piece of the solution, without needing to declare yourself a “full-stack developer”.
