
Software design DOs and DON’Ts

For many of us, including me, Uncle Bob’s Clean Code and Clean Architecture are handbooks that take a proud place on the bookshelf. The principles described in these great books serve as a practical guide on how to build software and keep code clean and maintainable. They inspire and motivate us to make better products and enjoy our profession.

Despite the fact that the practices described in the books are essential from a coding perspective, there is one piece missing from that puzzle: how do you decompose a software system? Ask yourself this question and try to come up with an answer.

Okay. I bet at least 90% of you thought something like: “WTF are you talking about, dude? Of course we will do a functional decomposition of the system and put each decomposed piece into its own component with clearly defined interfaces, then…” Stop! Take a short break at this moment. Keep reading… Functional decomposition is the wrong way to decompose a software system. What? Are you out of your mind?

Welcome to the club of cheated software developers 🙂 For years we have been told that in order to build relatively complex software we need to get input from the business, analyze it, come up with a functional decomposition based on the requirements, and then design the system around that decomposition. Thousands of posts and Hello World examples all over the Internet haven’t made your understanding of software design any better either. The pain and suffering of maintaining such systems is our everyday life. But it’s time to change that. Forget everything you know about system design.

I would like to introduce you to the book Righting Software by Juval Löwy. IMHO it is a revelation and a Holy Grail of software design. The ideas the author shares take their roots in other, time-proven engineering industries such as building cars, houses, or computers. The Method he describes comes with reasoning and practical examples of decomposing real-world software systems. I’ll share the ideas I found interesting, with a short rationale behind each of them, but I highly recommend reading the book.

The first and foremost:

Avoid functional decomposition

Functional decomposition couples services to the requirements because the services are a reflection of the requirements. Any change in the required functionality imposes a change on the functional services. This leads to tight coupling and an explosion of services, greatly complicates individual reuse of services, encourages duplication because a lot of common functionality gets customized to specific cases, and causes many more problems.

The next:

Avoid Domain Decomposition

Domain decomposition is even worse than functional decomposition. The reason domain decomposition does not work is that it is still functional decomposition in disguise. Each domain often devolves into an ugly grab bag of functionality, increasing the internal complexity of the domain. That increased internal complexity makes you avoid the pain of cross-domain connectivity, so communication across domains is typically reduced to simple state changes (CRUD-like) rather than actions that trigger the required behavior execution involving all domains. Composing more complex behaviors across domains becomes very difficult.

What are the alternatives?

Volatility-based decomposition identifies areas of potential change and encapsulates those into services or system building blocks. You then implement the required behavior as the interaction between the encapsulated areas of volatility.

The motivation for volatility-based decomposition is simplicity itself: any change is encapsulated, containing the effect on the system.

Decompose based on volatility

With functional decomposition, your building blocks represent areas of functionality, not volatility. As a result, when a change happens, by the very definition of the decomposition, it affects many (if not most) of the components in your architecture. Functional decomposition therefore tends to maximize the effect of any change. Since most software systems are designed functionally, change is often painful and expensive, and the system is likely to resonate with the change: changes made in one area of functionality trigger other changes, and so on. Accommodating change is the real reason you must avoid functional decomposition.
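
To make this concrete, here is a minimal TypeScript sketch. The scenario and the names (Notifier, OrderFlow) are my own, not from the book: the volatile part, how a notification gets delivered, is encapsulated behind an interface, and the required behavior is implemented as the interaction between the encapsulated blocks. Swapping email for SMS then touches a single class.

```typescript
// The delivery mechanism (email, SMS, push, ...) is an area of
// volatility, so it is hidden behind a contract.
interface Notifier {
  notify(userId: string, message: string): void;
}

// The volatile detail lives in one place; replacing it affects nothing else.
class EmailNotifier implements Notifier {
  notify(userId: string, message: string): void {
    console.log(`email to ${userId}: ${message}`);
  }
}

// The required behavior is the interaction between encapsulated blocks:
// OrderFlow neither knows nor cares how notifications are delivered.
class OrderFlow {
  constructor(private readonly notifier: Notifier) {}

  placeOrder(userId: string): void {
    // ...order handling...
    this.notifier.notify(userId, "Your order has been placed");
  }
}

new OrderFlow(new EmailNotifier()).placeOrder("u42");
```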

Universal Principle

The merits of volatility-based decomposition are not specific to software systems. They are universal principles of good design, from commerce to business interactions to biology to physical systems and great software. Universal principles, by their very nature, apply to software too (else they would not be universal). For example, consider your own body. A functional decomposition of your own body would have components for every task you are required to do, from driving to programming to presenting, yet your body does not have any such components. You accomplish a task such as programming by integrating areas of volatility. For example, your heart provides an important service for your system: pumping blood. Pumping blood has enormous volatility to it: high and low blood pressure, salinity, viscosity, pulse rate, activity level (sitting or running), with and without adrenaline, different blood types, healthy and sick, and so on. Yet all that volatility is encapsulated behind the service called the heart. Would you be able to program if you had to care about the volatility involved in pumping blood?

Incremental construction

For a system of any level of complexity, the right approach to construction follows another principle:

Design iteratively, build incrementally

It is nearly impossible to create a proper design on the first try without revisiting it later. Design is a continuous process: you start with some rough cuts at the blueprints, refine them, check alternatives, and after several iterations the design converges. But you don’t want to build iteratively; you want to build incrementally. Imagine building a car. You don’t create a scooter in the first iterations, then come up with a motorcycle, and only in later iterations finally build a car. That doesn’t make any sense. Instead, you build the chassis, then the frame, engine, wheels and tires, running gear, and only after that do you paint it and build the interior. It is a complex process, but the customer paid for a car, not a scooter. Which leads to the next fundamental design rule:

Features are aspects of integration, not implementation

As stated, the transportation feature emerges only once you have integrated the chassis with the engine, gearbox, seats, dashboard, a driver, a road, and fuel. Only then can the car transport you from location A to location B.
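
Sticking with the car analogy, here is a toy TypeScript sketch (all names are mine, purely illustrative): none of the components implements “transportation” on its own; the feature appears only when they are integrated.

```typescript
// Toy components: no single one of them is the transportation feature.
class Engine { start(): void { console.log("engine running"); } }
class Chassis {}
class GearBox { engage(): void { console.log("gear engaged"); } }

// The feature emerges from integrating the components.
class Car {
  constructor(
    private readonly engine: Engine,
    private readonly chassis: Chassis,
    private readonly gearBox: GearBox,
  ) {}

  driveTo(destination: string): void {
    this.engine.start();
    this.gearBox.engage();
    console.log(`driving to ${destination}`);
  }
}

new Car(new Engine(), new Chassis(), new GearBox()).driveTo("B");
```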

Requirements

Requirements always change. Designing a system against the requirements leads to a fragile design: when the requirements change, the design must change too. Changing a design is always painful, expensive, and often destructive.

Never design against the requirements

In any system you have core use cases and other use cases. Core use cases represent the essence of the business of the system. Finding and extracting them is challenging: usually they are not explicitly present in the requirements. A system with 100 use cases may have only 2–3 core use cases, which often have to be extracted as abstractions of the other use cases and may require a new term or name.

Architect’s mission

Your task as an architect is to identify the smallest set of components that you can put together to satisfy all core use cases. Regular use cases are just variations of the core use cases; they represent different interactions between the components, not a different decomposition. When the requirements change, your design does not.

Conclusion

Functional decomposition is a good technique for splitting up complex business processes for analysis and further sub-tasking, but it is not suitable for creating a software design. Volatility-based decomposition and “The Method” described in the book give the reader a comprehensive explanation of how to decompose a system and, just as important, how to do it on time and within budget (the second part of the book is dedicated to project design).


CQS, CQRS, Event Sourcing. What’s the difference?

These are similar yet different concepts, and they confuse some developers who see the definitions and try to understand how the terms relate to each other. Let’s try to figure it out.

CQS

CQS stands for Command-Query Separation. It was devised by Bertrand Meyer as part of his work on the Eiffel programming language. The core idea behind this concept is that

Every method should either be a command that performs an action, or a query that returns data to the caller, but not both.

Applying this principle makes the software design cleaner and the code easier to read and its intent easier to understand. Usually you apply it at the source code level by creating clear interface definitions, followed by an implementation that adheres to the command-query separation rules. For an in-depth overview of CQS in action and its effect on design, see my article CQS and its impact on design.
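
Here is a minimal TypeScript sketch of the principle (my own toy example, not from Meyer’s work): every method is either a command (mutates state, returns void) or a query (returns data without side effects), never both.

```typescript
class BankAccount {
  private amount = 0;

  // Command: changes state, returns nothing.
  deposit(sum: number): void {
    this.amount += sum;
  }

  // Query: returns data, does not change state.
  balance(): number {
    return this.amount;
  }

  // A CQS violation would be a method like withdrawAndGetBalance(),
  // which both mutates state and returns a value.
}

const account = new BankAccount();
account.deposit(100);           // command
console.log(account.balance()); // query -> 100
```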

CQRS

CQRS stands for Command Query Responsibility Segregation. It is an architectural pattern. It is related to CQS in that its main idea is based on command-query separation; but unlike the former, CQRS is applied not at the code level but at the application level. The pattern was originally described by Greg Young in his CQRS Documents.

In a nutshell, it says that your write model is not the same as your read model because there is different presentation logic behind them: for your web pages you could have views specifically adapted to the presentation logic of the UI elements on them, while the write model stores all data in the format that best fits the data itself. If you are familiar with SQL, a read model is similar to a SQL view (which is just a projection of data in a convenient form).

This gives you flexibility not only in separating the presentation logic but also in choosing the underlying storage technologies: the write model could live in good old SQL while the read model lives in MongoDB or any other NoSQL DBMS. In essence, it is the SRP (single responsibility principle) applied at the application level.
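
A hypothetical TypeScript sketch of the idea (the in-memory Maps stand in for two separate stores; all names are mine): the command side updates the write model and refreshes the projection, while the query side reads only from the read model.

```typescript
// Write model: normalized, shaped to fit the data itself.
interface OrderRecord { id: string; items: string[]; placedAt: Date; }
const writeStore = new Map<string, OrderRecord>();

// Read model: denormalized view shaped for the UI.
interface OrderSummaryView { id: string; itemCount: number; }
const readStore = new Map<string, OrderSummaryView>();

// Command side: handles the write, then updates the projection.
function placeOrder(id: string, items: string[]): void {
  writeStore.set(id, { id, items, placedAt: new Date() });
  readStore.set(id, { id, itemCount: items.length }); // projection
}

// Query side: reads only from the read model.
function getOrderSummary(id: string): OrderSummaryView | undefined {
  return readStore.get(id);
}

placeOrder("o1", ["book", "pen"]);
console.log(getOrderSummary("o1")); // { id: 'o1', itemCount: 2 }
```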

Event Sourcing

Event sourcing is a technique for storing data as a series of events, where the events act as the source of truth for the data. The key idea is that each change to the data is an event stored with metadata such as a timestamp and a relation to an aggregate root (CQRS is often applied together with DDD, where an aggregate root is a domain object that encapsulates access to its child objects).

This way you have a history of all changes to your data over the lifetime of the application. The next key aspect of such systems is that to get the current state of a domain object, all events for its aggregate root have to be replayed from the beginning (there are tricks like snapshots that solve the performance issues of this approach when you have a huge amount of data). In the same manner, the current state of the whole application is the sum of all events for all aggregate roots. This is very different from the usual CRUD model, where you don’t preserve the previous state but overwrite the current one (in a SQL database you just write new values into the columns).

With event sourcing in place you can rebuild your data for any particular point in time, which can be very useful for data analysis. It also gives you a history of every data change by design, with no additional coding.
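
A toy TypeScript sketch of the mechanics (my own example; a real event store, snapshots, and concurrency control are all omitted): events are appended as the source of truth, and state is rebuilt by replaying them, optionally only up to a point in time.

```typescript
interface AccountEvent {
  type: "Deposited" | "Withdrawn";
  amount: number;
  timestamp: Date; // metadata stored with every event
}

// Append-only event stream for one aggregate root.
const events: AccountEvent[] = [];

function append(type: AccountEvent["type"], amount: number): void {
  events.push({ type, amount, timestamp: new Date() });
}

// Current state = replay of all events from the beginning.
// Passing `until` rebuilds the state at a particular point in time.
function balance(until?: Date): number {
  return events
    .filter((e) => !until || e.timestamp.getTime() <= until.getTime())
    .reduce(
      (sum, e) => (e.type === "Deposited" ? sum + e.amount : sum - e.amount),
      0,
    );
}

append("Deposited", 100);
append("Withdrawn", 30);
console.log(balance()); // 70, replayed from the event stream
```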

Relation

CQS is a code-level principle that improves design by applying separation of concerns.

CQRS is an application-level architectural pattern that separates commands (writes) from queries (reads) with regard to storage. It is based on the ideas of CQS.

Event Sourcing is a storage pattern that represents data changes as a stream of events, with point-in-time recovery.

Conclusion

CQRS is based on the CQS principle, elevated to the application level. CQRS is often combined with Event Sourcing and DDD, but you can implement CQRS without Event Sourcing, and Event Sourcing without CQRS. Event Sourcing is the opposite of CRUD.