After the assessment, the team (or architect) decides that the time is right to refactor the solution and, consequently, to split the components into three microservices. During this process, the team realised there were some issues with the structure.
It’s possible to identify the classes that are prefixed with the name of the components, and it’s reasonable to assume these are the starting points of the potential public APIs. In reality, with this structure, virtually every interface must be public. For example, in order to give another package access to the repository layer, the repository interface must be public. In fact, every layer’s interface must be public, which begs the question: where is the public API of each of the three components?
What about internal components? By analysing class name suffixes, it’s possible to identify usage of the strategy pattern. But which component depends on the strategy? It’s necessary to navigate through usages of the interface to find out. When structuring code in this manner, developers get used to qualifying all interfaces and classes as public. Defensive programming is lost, and there is a very real possibility that the implementations of the strategies are also public and can be referenced directly. In reality, the strategy interface is probably a design decision that falls under the category of implementation detail. Usually, only the service or domain object holding the strategy context should be aware of it.
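As a sketch, with hypothetical names, this is what keeping the strategy as an implementation detail looks like: the strategy interface and its implementation are package-private, so other packages cannot reference them, and only the service holding the strategy context is public.

```java
// Hypothetical "pricing" feature: the strategy is an implementation detail.

interface DiscountStrategy {          // package-private: invisible outside this package
    double apply(double amount);
}

class HalfPriceDiscount implements DiscountStrategy {   // also package-private
    public double apply(double amount) { return amount * 0.5; }
}

public class PricingService {         // the only public entry point
    private final DiscountStrategy strategy = new HalfPriceDiscount();

    public double price(double amount) {
        return strategy.apply(amount); // callers never see the strategy
    }

    public static void main(String[] args) {
        System.out.println(new PricingService().price(100.0)); // prints 50.0
    }
}
```

Swapping the strategy now only touches this package; no other component can have compiled against `HalfPriceDiscount` directly.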
In addition, looking at the package names, it’s fair to claim that the Model-View-Controller (MVC) pattern is used, even though the service is a data-centric REST API whose main client is a frontend Single Page Application (SPA) that already implements MVC itself. Using this pattern on a backend REST API is usually an overhead and a sign of an over-engineered structure. Data-centric REST APIs react to requests for resources; there is no actual view, as a rendition (or adaptation) of the object itself is usually returned. A controller is not necessary either, since there is no user interface being managed by the backend and the actions are inferred from HTTP verbs and paths. In the end, the model ends up becoming an anaemic service implementation that simply delegates calls to the repository. Services should have a purpose greater than acting as a CRUD facade for domain objects already managed by repositories.
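The anaemic-service anti-pattern can be sketched as follows, with hypothetical names: a service whose every method is a one-line delegation to the repository, adding no behaviour of its own.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain illustrating the anaemic service described above.

class Customer {
    final long id;
    final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

class CustomerRepository {
    private final Map<Long, Customer> store = new HashMap<>();
    Customer save(Customer c) { store.put(c.id, c); return c; }
    Optional<Customer> findById(long id) { return Optional.ofNullable(store.get(id)); }
}

public class CustomerService {
    private final CustomerRepository repository = new CustomerRepository();

    // Pure pass-through: no validation, no domain logic, no coordination of
    // several aggregates. The repository could just as well be called directly.
    public Customer save(Customer c) { return repository.save(c); }
    public Optional<Customer> findById(long id) { return repository.findById(id); }

    public static void main(String[] args) {
        CustomerService service = new CustomerService();
        service.save(new Customer(1L, "Ada"));
        System.out.println(service.findById(1L).map(c -> c.name).orElse("missing")); // prints Ada
    }
}
```

If a service layer only ever looks like this, it is a strong hint that it exists to satisfy the package structure rather than the domain.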
Concluding the analysis of this approach: how can we ensure developers respect the service boundaries? With this packaging strategy, it’s easy to overlook during a code review that a service in one component has become dependent on a repository or other internal class of another component. All interfaces have to be public in the first place, so the dependency can be added without changing the target interface. Should services consume multiple repositories, developers would have to be very careful to respect the transactional boundaries across the multiple bounded contexts. The microservices refactoring process will be considerably more expensive if there are atomic data operations spanning repositories that will live in separate services.
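A minimal sketch of the smell, with hypothetical names: an order service reaching into another component’s repository, so that placing an order writes to both bounded contexts atomically. That atomicity is only possible while the code is co-located; once the components become separate services with separate databases, this method can no longer be a single local transaction.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical cross-component dependency: both repositories are visible
// because everything is public, so nothing stops the coupling at compile time.

class OrderRepository {
    private final List<String> orders = new ArrayList<>();
    void save(String order) { orders.add(order); }
    int count() { return orders.size(); }
}

class InvoiceRepository {                 // conceptually owned by another component
    private final List<String> invoices = new ArrayList<>();
    void save(String invoice) { invoices.add(invoice); }
    int count() { return invoices.size(); }
}

public class OrderService {
    private final OrderRepository orders = new OrderRepository();
    private final InvoiceRepository invoices = new InvoiceRepository(); // boundary violation

    // Atomic only while both repositories share one database and one transaction.
    public void placeOrder(String id) {
        orders.save(id);
        invoices.save("invoice-" + id);
    }

    public int orderCount() { return orders.count(); }
    public int invoiceCount() { return invoices.count(); }

    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.placeOrder("42");
        System.out.println(service.orderCount() + " order(s), " + service.invoiceCount() + " invoice(s)");
    }
}
```

In a review, the single suspicious line is the `InvoiceRepository` field; with everything public and packaged by layer, it is easy to miss.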
Is there a better packaging strategy that helps protect internal resources, promotes SOLID principles and enforces transactional boundaries, while at the same time making the process of breaking the monolith smoother? Yes: package by feature, NOT by layer. Layer is still an important concept, but for logical division we can identify layers using suffixes in class names. When physical division is necessary (rare in microservices architectures), we can keep the same package name (effectively the namespace) but assemble each layer into separate archive files, so that each can be distributed separately.
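A minimal sketch of what packaging by feature enables, again with hypothetical names: within a single feature package, only the service carries the `public` modifier, while the repository stays package-private, so the compiler itself rejects direct references from other packages.

```java
// Hypothetical "billing" feature package: one public API, internals hidden.

class BillingRepository {                     // package-private: internal to the feature
    private final java.util.Map<String, Double> balances = new java.util.HashMap<>();
    void credit(String account, double amount) {
        balances.merge(account, amount, Double::sum);
    }
    double balanceOf(String account) { return balances.getOrDefault(account, 0.0); }
}

public class BillingService {                 // the feature's public API
    private final BillingRepository repository = new BillingRepository();

    public void credit(String account, double amount) { repository.credit(account, amount); }
    public double balanceOf(String account) { return repository.balanceOf(account); }

    public static void main(String[] args) {
        BillingService billing = new BillingService();
        billing.credit("acc-1", 25.0);
        System.out.println(billing.balanceOf("acc-1")); // prints 25.0
    }
}
```

A service in another feature that tried to declare a `BillingRepository` field would simply fail to compile, which makes the boundary violation from the previous section impossible rather than merely discouraged.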
Going back to our scenario, let’s imagine the same monolith application structured with the following packaging strategy: