Teal Partners is developing an application in collaboration with SD Worx to support HR departments in small to medium-sized enterprises. The software package will allow SMEs to manage the entirety of the HR-related information for their employees. The new payroll engine performs background calculations of the wages, social security contributions and tax.
These days a payroll engine does a lot more than computations and settlements (salary payments). Employers are keen to offer their employees an attractive package of non-statutory benefits, to give them an edge in the war for talent. A package of this type includes options such as part-time or home working, for a better work-life balance.
With the new payroll engine we want to help businesses make wage cost forecasts up to a year in advance. It should be capable of simulating a variety of remuneration packages and employment scenarios, as well as their impact on costs and net salaries.
To incorporate these options in the payroll-engine software package we went for a modular approach. There are three reasons to opt for modularity: (1) different teams can deliver features simultaneously, (2) the solution can be introduced gradually to a new market, and (3) the modules are separately scalable. Let us take a closer look.
Modularity lets different teams develop new features simultaneously. This way software developers get to concentrate on their core task: analysing and resolving software issues, each in their own module.
The solution is built along the lines of a microservices architecture and consists of four modules.
Each module has its own domain model and a separate database. Data is exchanged asynchronously between the modules. Within each database we distinguish between:

- the owned data, which the module itself manages;
- the reference data, which the module receives from other modules;
- the definition data, which holds the parametrisation.
An example from the perspective of the payroll module:
From the outset we chose to store the relevant definition data, i.e. the parametrisation, in each module's database. In this way, referential integrity can be guaranteed within each module.
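The three data categories can be sketched as separate tables in a module's database. This is a minimal illustration only: the table and column names below are assumptions for the sake of the example, not the actual schema.

```python
import sqlite3

# A sketch of a payroll module's database with the three data categories.
# All table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Owned data: managed by the payroll module itself.
    CREATE TABLE payslip (
        id INTEGER PRIMARY KEY,
        employee_id INTEGER NOT NULL,
        gross_amount NUMERIC NOT NULL
    );

    -- Reference data: a local copy of data received from other modules
    -- (here: an employee module), kept up to date via incoming messages.
    CREATE TABLE employee_reference (
        employee_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );

    -- Definition data: the parametrisation, stored in every database so
    -- referential integrity can be guaranteed locally.
    CREATE TABLE wage_component_definition (
        code TEXT PRIMARY KEY,
        description TEXT NOT NULL
    );
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # prints: ['employee_reference', 'payslip', 'wage_component_definition']
```

Because the reference and definition data live in the module's own database, foreign-key constraints can be enforced locally, without a call to another module.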
The illustration below shows the software solution's modular design.
In a modular architecture it is crucial to set the right boundaries.
The following drivers helped us choose the right setup:
Thus, splitting the software into modules is not the same as splitting the software into services. Whereas with services we focus on technical responsibility, with modules we focus on splitting the domain into functional entities with limited mutual dependence.
Besides splitting the software into modules, we defined a number of technical patterns to fully decouple them without sacrificing data consistency.
Each module stores the relevant data of the other modules in its own database. We call this reference data. This method has three advantages when decoupling the software.
With the software split into modules, the route is clear for development. The teams can develop their modules at their own pace and by their own methods. The success of the modular approach depends on the exchange of data between modules. We opted for an inbox/outbox pattern.
To explain the concept of the inbox/outbox pattern, let us look at the different steps in the exchange of data.
Step 1. The data is changed in module A.
In a module, every request (such as saving a form from the UI) is handled by a unit of work and an accompanying database session. This makes the process atomic: the data is either saved or rejected in its entirety.
Every change in the database results in one message per external module containing the changed data. The message is not sent immediately, however, but written to the outbox first. The outbox is a table containing the module's outgoing messages. Because the outbox is filled in the same session in which the owned data was changed, the outbox record is part of the same atomic action. In other words, there can be no change to the owned data without a corresponding record in the outbox, and vice versa.
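Step 1 can be sketched as a single transaction that writes both the owned data and the outbox record. The tables and the `save_payslip` function below are illustrative assumptions, but the mechanism is the one described: one unit of work, so neither write can succeed without the other.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payslip (
        id INTEGER PRIMARY KEY,
        employee_id INTEGER,
        gross_amount NUMERIC
    );
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        destination TEXT,
        payload TEXT,
        sent INTEGER DEFAULT 0
    );
""")

def save_payslip(employee_id, gross_amount):
    # One unit of work: commits on success, rolls back entirely on error,
    # so the owned data and the outbox record are written atomically.
    with conn:
        conn.execute(
            "INSERT INTO payslip (employee_id, gross_amount) VALUES (?, ?)",
            (employee_id, gross_amount))
        payload = json.dumps(
            {"employee_id": employee_id, "gross_amount": gross_amount})
        conn.execute(
            "INSERT INTO outbox (destination, payload) VALUES (?, ?)",
            ("billing", payload))

save_payslip(42, 3500)
print(conn.execute("SELECT COUNT(*) FROM payslip").fetchone()[0],
      conn.execute("SELECT COUNT(*) FROM outbox WHERE sent = 0").fetchone()[0])
# prints: 1 1
```

If the payslip insert fails, the transaction rolls back and no outbox record exists either; the guarantee holds in both directions.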
Step 2. The messages are sent from module A to module B.
Once the request has been processed, a background process picks up the unhandled messages in the outbox and sends them to the right modules. This background process has its own unit of work and runs entirely asynchronously. The receiving module exposes an inbox API to receive the messages.
The receiving module stores the message in an inbox table. This is no more than a log of all messages received from the other modules.
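Step 2 can be sketched with in-memory stand-ins for the outbox table and the receiving module's inbox API (all names here are illustrative assumptions): the dispatcher picks up unsent messages, delivers them, and marks them as sent; the inbox only logs them.

```python
# In-memory stand-in for module A's outbox table.
outbox = [
    {"id": 1, "destination": "billing", "payload": "...", "sent": False},
    {"id": 2, "destination": "billing", "payload": "...", "sent": True},
]

# The receiving module's inbox table: just a log of incoming messages.
inbox_log = []

def inbox_api(payload):
    # Store the message first; processing happens later, asynchronously.
    inbox_log.append(payload)

def dispatch(outbox):
    # Background process with its own unit of work: pick up unhandled
    # messages, deliver them to the receiving module, mark them as sent.
    for message in outbox:
        if not message["sent"]:
            inbox_api(message["payload"])
            message["sent"] = True

dispatch(outbox)
print(len(inbox_log), all(m["sent"] for m in outbox))  # prints: 1 True
```

Because delivery is retried until a message is marked as sent, the receiving side must tolerate duplicates; the inbox log makes such messages easy to detect.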
Step 3. The inbox message is processed in module B.
Once an inbox message has been received, the receiving module starts a new process, again with its own unit of work, to handle it. The impact of the incoming data is written to the reference data, and any impact on the owned data is processed asynchronously.
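Step 3 can be sketched as follows. The message shape and the handler name are assumptions for illustration; the point is that processing an inbox message writes its effect to the module's local reference data.

```python
# The module's local copy of another module's data (reference data),
# here a simple employee_id -> name mapping. Names are illustrative.
reference_data = {}

def process_inbox_message(message):
    # Runs in its own unit of work, asynchronously from receipt.
    if message["type"] == "employee_changed":
        reference_data[message["employee_id"]] = message["name"]

process_inbox_message(
    {"type": "employee_changed", "employee_id": 7, "name": "A. Jansen"})
print(reference_data)  # prints: {7: 'A. Jansen'}
```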
This pattern has a number of properties.
In this article we have talked about the advantages of a modular approach.
(1) We can have different teams deliver features at the same time;
(2) The solution can be introduced gradually to a new market;
(3) The modules can be scaled separately.
The biggest challenge is to fully decouple the modules while guaranteeing data consistency between them. We do this by choosing the right module boundaries, using reference data and employing the inbox/outbox pattern.
Did you recognise any concepts from Domain-Driven Design or microservices in this article? You would be right if you did. We deliberately avoided hyping these technologies by putting the names front and centre. The concepts described in the literature pointed the way; we applied them in practice, starting from a problem statement. It has been our aim to turn a spotlight on this pragmatic approach.
We hope that sharing this practical experience has been instructive. We would love to hear your reactions to this blog; all feedback is welcome. What are your thoughts on the solution we describe here?