F#, Programming, Software design

Power of composition with map and bind

In a functional architecture, functions get composed into workflows, and workflows are an essential part of modeling any business behavior. Things get complicated when you need to build bigger systems from small components: sometimes it is hard to find the proper connectors to fit together functions with different inputs and outputs. The FP world has various tools for achieving that composition, which you may have heard of under names like functors, monoids or monads. These tools allow you to glue things together by connecting the outputs of one function to the inputs of another, with the proper transformations in between. In practice it is much easier to understand how this works than to dive into category theory and figure out the math beneath it.

🔌 Composition basics

When dealing with relatively simple types like strings and numbers, connecting inputs and outputs is quite straightforward. Consider this example:

let addOne a = a + 1
let multiplyByTwo a = a * 2

Here we defined two functions, both of which take a number as input and return a number as output, so they share the same signature:

(int -> int)

We can call them in the following ways:

multiplyByTwo (addOne 2)
// OR
2 |> addOne |> multiplyByTwo
val it : int = 6

We can also create a new function which is the composition of addOne and multiplyByTwo:

let addOneMultipliedByTwo = addOne >> multiplyByTwo
addOneMultipliedByTwo 2

This way you can build really complex logic from smaller pieces, just like with Lego bricks.

🅰️ ADTs are everywhere

More often, however, you will find yourself writing things a bit more complicated than adding or multiplying numbers. These could be custom types, or types built from other types, known as algebraic data types (ADTs). It is very common to build things up from abstract types and provide functions which transform other values into those types. One example likely familiar to you is the Maybe (a.k.a. Option) type, which you may have heard of as the Maybe monad or Maybe functor. In a nutshell it is a container for a value or for the absence of a value. This is an extremely effective abstraction for avoiding nulls in your code, and hence for having peace 🧘 and no null reference exceptions everywhere.

In F# it comes with the Option module, a set of functions for working with that type. So you have the type Option<'T>, which either holds a value (Some of 'T) or no value (None). You can find tons of functions in the module. They help you build more complex things from smaller ones and make the proper transformations for connecting functions which require that type.
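Conceptually the type boils down to a simple discriminated union. Here is a rough sketch (the real FSharp.Core definition carries extra attributes, but the shape is the same):

type Option<'T> =
    | Some of 'T  // a value is present
    | None        // no value
// Note: compiling this as-is would shadow the built-in type; it is shown only to illustrate the shape.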

Let’s have a quick look at how to use it:

let someValue = Some 10
let noValue = None

someValue |> Option.get // val it : int = 10
someValue |> Option.isSome // val it : bool = true
noValue |> Option.isNone // val it : bool = true
(10, someValue) ||> Option.contains // val it : bool = true
(99, someValue) ||> Option.defaultValue // val it : int = 10
(99, noValue) ||> Option.defaultValue // val it : int = 99

😲 When things go wrong

Now let’s do a small programming exercise. Suppose a silly scenario where we have players (of any game you can imagine) and we need to check whether the score a player has collected is good or not. So we come up with something like this:

type Player = { 
    Name: string 
    Score: int
}

let isGoodScore score = score >= 70

So all we need is to create players and check their scores:

let frank = { Name = "Frank"; Score = 90; }
let jack = { Name = "Jack"; Score = 37; }

frank.Score |> isGoodScore // val it : bool = true
jack.Score |> isGoodScore // val it : bool = false

“Hey, but player could have no score as well”

So how about supporting that? Well, piece of cake. Let’s make a few minor changes:

type Player = { 
    Name: string 
    Score: int option
}
let frank = { Name = "Frank"; Score = Some 90; }
let john = { Name = "John"; Score = None; }
let jack = { Name = "Jack"; Score = Some 37; }

Nice! We’ve wrapped the score in the Option type, exactly as the requirement asks. What about the isGoodScore function, will it still work?

frank.Score |> isGoodScore
error FS0001: Type mismatch. Expecting a
    'int option -> 'a'
but given a
    'int -> bool'
The type 'int option' does not match the type 'int'

Oops, we can’t use an optional type with a plain type like that.

So we need a way to glue monadic types like Option to functions that work on plain values. And that’s where the two most essential functions enter the picture: map and bind.

🤝 When composition meets ADTs

As I mentioned before, the FP toolbox has various tools to help us with transformations. One such tool is the map function. It goes by other names too, like fmap, lift or Select (think of C# LINQ). Every monadic-like type has this function.

Let’s have a look at the signature of that function for Option:

(('a -> 'b) -> 'a option -> 'b option)

It takes two arguments, a function which transforms an input of type 'a into 'b and an optional 'a, and it returns an optional 'b. So how can we apply map to our use case? Pretty straightforwardly, actually:

frank.Score |> Option.map isGoodScore // val it : bool option = Some true
john.Score |> Option.map isGoodScore // val it : bool option = None
jack.Score |> Option.map isGoodScore // val it : bool option = Some false

You see how the return type has changed? We just applied a standalone function which works on int to an Option value, and the result was lifted back into Option. If the input is Some int, the value is extracted from the container (the Option type), piped into the function, and on the other end lifted up (wrapped) back into the Option type. If there is no value, it just stays None.
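If it helps, here is a minimal hand-written sketch of such a map function (this is roughly how Option.map behaves):

let map f option =
    match option with
    | Some value -> Some (f value) // unwrap, apply the function, wrap the result back
    | None -> None                 // nothing to transform, stay None

frank.Score |> map isGoodScore // Some true
john.Score |> map isGoodScore  // None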

In C#, IEnumerable with its Select method works the same way, only applied to collections, which means collections are ADTs too. Here are some visuals to help you understand what’s going on:

👷 Bind it

Another very useful tool is the bind function, which you may have heard of under other names like flatMap, collect or SelectMany. It allows you to compose monadic functions in a slightly different way. Here is the signature of the bind function for Option:

(('a -> 'b option) -> 'a option -> 'b option)
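Compared to map, the only difference is that the function you pass already returns an option, so bind does not wrap the result again. A minimal hand-written sketch (roughly how Option.bind behaves):

let bind f option =
    match option with
    | Some value -> f value // f already returns an option, no extra wrapping
    | None -> None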

Let’s extend our previous example and say that we now have an external source (a database, a file, etc.) from which we need to fetch players in order to find out their score. So we define a tryFindPlayer function as follows:

let tryFindPlayer name = 
    [ frank; john; jack ] |> List.tryFind (fun c -> c.Name = name)

List.tryFind is a built-in function which returns Some 'T for the first element satisfying the predicate in the lambda, or None otherwise. In our case it will return Some Player or None. Now we are able to get the score of a player:

tryFindPlayer "Frank"
    |> Option.bind (fun c -> c.Score)

Here are the visuals:

As you see, unlike map, bind allows you to compose things within the same category (Option) but with different underlying types. It flattens the result, so instead of ending up with Option<Option<int>>, bind skips the unnecessary wrapping.
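You can see the difference by running both on the same input:

tryFindPlayer "Frank" |> Option.map (fun c -> c.Score)  // val it : int option option = Some (Some 90)
tryFindPlayer "Frank" |> Option.bind (fun c -> c.Score) // val it : int option = Some 90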

💪The power of composition

There are a lot of ADTs in the form of data structures, workflows and other concepts which you need to combine to build working software: List<T>, Option<T>, State<T>, Async<T>, etc.

Once you get a grasp of how to use them, it becomes straightforward to compose things up:

tryFindPlayer "Frank" 
    |> Option.bind (fun c -> c.Score)
    |> Option.map isGoodScore
val it : bool option = Some true
Cloud, DevOps, Software design

The 12-Factor App

Around 2011-2012, when cloud computing was not yet that popular, developers from Heroku presented a methodology known as the Twelve-Factor App. It describes recipes for building cloud-native, scalable, deployable applications. In my opinion it is a baseline for any team building cloud-native apps.

When you design applications for containers, keep in mind that the container image moves from environment to environment, so the image can’t hold things like production database credentials or any other sensitive information. All of that should be supplied to the container by injecting configuration at container start-up.
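For example, instead of baking a connection string into the image, the application reads it from the environment at startup. A minimal F# sketch (the DB_CONNECTION_STRING variable name is just an illustration):

open System

// The value is injected by the container runtime or orchestrator at start-up,
// not stored in the image.
let connectionString =
    match Environment.GetEnvironmentVariable "DB_CONNECTION_STRING" with
    | null | "" -> failwith "DB_CONNECTION_STRING is not set"
    | value -> value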

Another thing is not to bake any networking into your image. Container images should not contain hostnames or port numbers, because those settings need to change dynamically while the container image stays the same. Links between containers are established by the control plane when starting them up.

Containers are meant to start and stop rapidly. Avoid long startup or initialization sequences. Some production servers take many minutes to load reference data or to warm up caches. These are not suited for containers. Aim for a total startup time of one second.

It’s hard to debug containerized applications; even getting access to log files can be a challenge. Such applications need to send their telemetry out to a data collector.

Let’s take a closer look at these 12 factors, which you should keep in mind every time you develop an application meant to run in a container. The factors identify different potential impediments to deployment, with a recommended solution for each:

1. Codebase: Track one codebase in revision control; deploy the same build to every environment
2. Dependencies: Explicitly declare and isolate dependencies
3. Config: Store config in the environment
4. Backing services: Treat backing services as attached resources
5. Build, release, run: Strictly separate build and run stages
6. Processes: Execute the app as one or more stateless processes
7. Port binding: Export services via port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: Keep development, staging, and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin processes: Run admin/management tasks as one-off processes
12-Factors

Programming, Software design

Continuous refactoring

A son asked his father (a programmer) why the sun rises in the east, and sets in the west. His response? It works, don’t touch!

First rule of programming

How often do you see software systems where adding new, relatively simple features takes ages, where it is hard to find the core logic or to tell which part of the system to change? Does reading the code give you a headache? Does every change you need to introduce make you sick and want to escape from the sinking ship? Well, unfortunately, you have become a victim of software rot.

Ivar Jacobson describes software entropy as follows:

The second law of thermodynamics, in principle, states that a closed system‘s disorder cannot be reduced, it can only remain unchanged or increase. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, tends to increase. This is known as software entropy

Ivar Jacobson

The Problem

In our daily work we are too busy adding new features as fast as possible and closing our tickets. Everyone is happy: the business sells features to customers, we get paid, and if we are lucky, we even get a bonus for that incredible performance. With time, however, the software gets more complicated. You find that adding more features takes longer and produces more bugs. This tends to grow very quickly and ends in a system which is extremely hard and expensive to maintain.

Perhaps you have heard of the term Technical Debt. It describes the collective debt you owe to your software for taking shortcuts when implementing features because you didn’t have time to do them properly. It can appear at any level, from the overall software design to the low-level implementation. But it is a very optimistic term: it assumes that this debt will be paid off, which we all know almost never happens. I prefer to think of it as Software Rot, and of the whole practice which leads to it as Shortcut-Driven Development.

I think many companies just don’t realize what the maintenance costs of poorly built software are. But hey, wouldn’t it be nice if we could show and prove this mathematically, so that you have strong arguments in your discussions with product owners and managers about the need for refactoring? Once upon a time, while I was reading Code Simplicity: The Fundamentals of Software, I reached the chapter which describes the equation of software design, and it has a brilliant mathematical representation:

D = (Pv * Vi) / (Ei + Em)

Or in English:

The Desirability of Implementation is directly proportional to the Probability of Value and the Potential Value of Implementation, and inversely proportional to the total effort, consisting of the Effort of Implementation plus the Effort of Maintenance.

However, there is a critical factor missing from the simple form of this equation: time. What we actually want to know is the limit of this equation as time approaches infinity, and that gives us the true Desirability of Implementation. So let’s look at this from a logical standpoint:

The Effort of Implementation is a one-time cost, and never changes, so is mostly unaffected by time.

The Value of Implementation may increase or decrease over time, depending on the feature. It’s not predictable, and so we can assume for the sake of this equation that it is a static value that does not change with time (though if that’s not the case in your situation, keep this factor in mind as well). One could even consider that the Effort of Maintenance is actually “the effort required to maintain this exact level of Value,” so that the Value would indeed remain totally fixed over time.

The Probability of Value, being a probability, approaches 1 (100%) as time goes to infinity.

The Effort of Maintenance, being based on time, approaches infinity as time goes to infinity.

What this equation actually tells us is that the most important factors to balance, in terms of time, are probability of value vs. effort of maintenance. If the probability of value is high and the effort of maintenance is low, the desirability is then dependent only upon the Potential Value of Implementation vs. the Effort of Implementation, a decision that a product manager can easily make. If the probability of value is low and the effort of maintenance is high, the only justification for implementation would be a near-infinite Potential Value of Implementation.
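To get a feeling for it, here is a toy calculation with purely illustrative numbers: the same feature first with a small, then with a grown maintenance effort.

// D = (Pv * Vi) / (Ei + Em), all numbers are purely illustrative
let desirability pv vi ei em = (pv * vi) / (ei + em)

desirability 0.9 100.0 10.0 5.0   // ≈ 6.0  : cheap to maintain, clearly worth implementing
desirability 0.9 100.0 10.0 200.0 // ≈ 0.43 : maintenance dominates and desirability collapses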

The Solution

Let’s take a look at definition of refactoring given by Martin Fowler:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.

Its heart is a series of small behavior preserving transformations. Each transformation (called a “refactoring”) does little, but a sequence of these transformations can produce a significant restructuring. Since each refactoring is small, it’s less likely to go wrong. The system is kept fully working after each refactoring, reducing the chances that a system can get seriously broken during the restructuring.

Keeping software systems in good order requires developer teams to follow strong practices. But first of all it is your personal responsibility and should be part of your culture. Continuous Refactoring is a key factor in that sense. You should refactor and improve your code on a day-to-day basis. Plan this work if you need to, and explain its importance to managers and product owners. You should become refactoring machines. This not only tremendously reduces the denominator of the equation (Em), but also forces you to protect your code from breaking by implementing all kinds of testing, one of which is regression testing. Each bit of refactoring makes your code better and easier to extend with new functionality. Gradually, you can switch from Shortcut-Driven Development to Software Engineering, enjoy what you are doing, and help the business survive in the long run.

The Mantra

Mantra for TDD followers

Software design

Software design DOs and DON’Ts

For many of us, including me, Uncle Bob’s Clean Code and Clean Architecture are handbooks which take a proud place on the bookshelf. The principles described in these great books serve as a practical guide on how to build software and keep code clean and maintainable. They inspire and motivate us to make better products and enjoy our profession.

Despite the fact that the practices described in the books are essential from a coding perspective, there is one piece missing from that puzzle: how do you decompose a software system? Just ask yourself this question and try to come up with an answer.

Okay. I bet at least 90% of you thought something like: “WTF are you talking about, dude?” Of course we will do a functional decomposition of the system and put each decomposed piece into its own component with clearly defined interfaces, then… stop! Take a short break at this moment. Keep reading… functional decomposition is the wrong way to decompose a software system. What? Are you out of your mind?

Welcome to the club of cheated software developers 🙂 For years we have been told that in order to build relatively complex software we need to get input from the business, analyze it and come up with a functional decomposition based on the requirements, then design a system based on this decomposition. Thousands of posts and Hello World examples all over the Internet didn’t make your understanding of software design any better either. The pain and suffering of maintaining such systems is our everyday life. But it’s time to change that. Forget everything you know about system design.

I would like to introduce you to the book Righting Software by Juval Löwy. This is a revelation and a Holy Grail of software design IMHO. The ideas the author shares take their roots in other engineering industries, like building cars, houses or computers, where they have been proven by time. The Method he describes shows the reasoning and practical examples of decomposing a real-world software system. I’ll share the ideas I found interesting with a short rationale behind each, but I highly recommend reading the book.

The first and foremost

Avoid functional decomposition

Functional decomposition couples services to the requirements because the services are a reflection of the requirements. Any change in the required functionality imposes a change on the functional services. This leads to high coupling and an explosion of services, greatly complicates individual reuse of services, encourages duplication because a lot of common functionality gets customized to specific cases, and causes many more problems.

The next:

Avoid Domain Decomposition

Domain decomposition is even worse than functional decomposition. The reason domain decomposition does not work is that it is still functional decomposition in disguise. Each domain often devolves into an ugly grab bag of functionality, increasing the internal complexity of the domain. The increased inner complexity causes you to avoid the pain of cross-domain connectivity, and communication across domains is typically reduced to simple state changes (CRUD-like) rather than actions triggering required behavior execution involving all domains. Composing more complex behaviors across domains is very difficult.

What are the alternatives?

Volatility-based decomposition identifies areas of potential change and encapsulates those into services or system building blocks. You then implement the required behavior as the interaction between the encapsulated areas of volatility.

The motivation for volatility-based decomposition is simplicity itself: any change is encapsulated, containing the effect on the system.

Decompose based on volatility

With functional decomposition, your building blocks represent areas of functionality, not volatility. As a result, when a change happens, by the very definition of the decomposition, it affects multiple (if not most) of the components in your architecture. Functional decomposition therefore tends to maximize the effect of the change. Since most software systems are designed functionally, change is often painful and expensive, and the system is likely to resonate with the change. Changes made in one area of functionality trigger other changes and so on. Accommodating change is the real reason you must avoid functional decomposition.

Universal Principle

The merits of volatility-based decomposition are not specific to software systems. They are universal principles of good design, from commerce to business interactions to biology to physical systems and great software. Universal principles, by their very nature, apply to software too (else they would not be universal). For example, consider your own body. A functional decomposition of your own body would have components for every task you are required to do, from driving to programming to presenting, yet your body does not have any such components. You accomplish a task such as programming by integrating areas of volatility. For example, your heart provides an important service for your system: pumping blood. Pumping blood has enormous volatility to it: high blood pressure and low pressure, salinity, viscosity, pulse rate, activity level (sitting or running), with and without adrenaline, different blood types, healthy and sick, and so on. Yet all that volatility is encapsulated behind the service called the heart. Would you be able to program if you had to care about the volatility involved in pumping blood?

Incremental construction

For a system of any level of complexity, the right approach to construction is another principle:

Design iteratively, build incrementally

It is nearly impossible to create a proper design on the first try without revisiting it later. It is a continuous process: you start with some rough cuts at the blueprints, refine them, check alternatives, and after several iterations the design converges. But you don’t want to build iteratively, you want to build incrementally. Imagine the process of building a car. You don’t create a scooter in the first iterations, then come up with a motorcycle, and only in later iterations finally build a car. It doesn’t make any sense. Instead, you build a chassis, then a frame, engine, wheels and tires, running gear, and only after that do you paint it and build the interior. It is a complex process, but the customer paid for a car, not a scooter. Which leads to the next fundamental design rule:

Features are aspects of integration, not implementation

As stated, the feature emerges once you have integrated the chassis with the engine, the gearbox, seats, dashboard, a driver, a road and fuel. Only then can it transport you from location A to location B.

Requirements

Requirements always change. Designing a system against the requirements leads to fragile designs: requirements change, and the design must change with them. Changing the design is always painful, expensive and often destructive.

Never design against the requirements

In any system you have core use cases and other use cases. Core use cases represent the essence of the business of the system. Finding and extracting core use cases is challenging; usually they are not explicitly present in the requirements. A system with 100 use cases may have only 2-3 core use cases, which often have to be extracted as an abstraction of the other use cases and may require a new term or name.

Architect mission

Your task as an architect is to identify the smallest set of components that you can put together to satisfy all core use cases. Regular use cases are just variations of the core use cases and represent different interactions between components, not a different decomposition. When requirements change, your design does not.

Conclusion

Functional decomposition is a good technique for splitting up complex business processes for analysis and further sub-tasking, but it is not applicable to creating a software design. Volatility-based decomposition and “The Method” described in the book give the reader a comprehensive explanation of how to decompose a system and, importantly, how to do it on time and within budget (the second part of the book is dedicated to project design).

Software design

CQS, CQRS, Event Sourcing. What’s the difference?

These are similar yet different concepts, which confuses some developers when they see the definitions and try to understand how the terms relate. Let’s try to figure it out.

CQS

CQS stands for Command-query separation. It was initially devised by Bertrand Meyer as part of his work on the Eiffel programming language. The core idea behind this concept is that

Every method should either be a command that performs an action, or a query that returns data to the caller, but not both.

Applying this principle makes a software design cleaner and code easier to read and understand in its intent. Usually you apply it at the source code level by creating clear interface definitions, followed by an implementation which adheres to the command-query separation rules. For an in-depth overview of CQS in action and its effect on design, see my article CQS and its impact on design.

CQRS

CQRS stands for Command Query Responsibility Segregation. It is an architectural pattern. It is related to CQS in that its main idea is based on command-query separation, but unlike the former, CQRS is applied not at the code level but at the application level. The pattern was originally described by Greg Young in his CQRS Documents.

In a nutshell it says that your write model is not the same as your read model, because the representation logic behind them differs: for your web pages you can have views specifically adapted to the representation logic of the UI elements on them, while the write model contains all the data in the format that best fits the data itself. If you are familiar with SQL, a read model is similar to an SQL view (which is just a projection of the data in a convenient form).

This gives you flexibility not only in separating representational logic, but also in choosing the underlying storage technologies. The write model storage could be good old SQL, while the read model could live in MongoDB or any other NoSQL DBMS. In essence it is the SRP (single responsibility principle) at the application level.
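To make this a bit more tangible, here is a minimal F# sketch (the types and names are illustrative, not a prescribed structure): the write model holds the full order, while the read model is a projection shaped for one specific page.

open System

// Write model: the full data, optimized for updates and consistency
type Order =
    { Id: Guid
      CustomerId: Guid
      Lines: (string * decimal) list
      CreatedAt: DateTime }

// Read model: a projection shaped for a particular view
type OrderSummaryView =
    { OrderId: Guid
      LinesCount: int
      Total: decimal }

// Projection from the write model to the read model
let toSummary (order: Order) =
    { OrderId = order.Id
      LinesCount = order.Lines.Length
      Total = order.Lines |> List.sumBy snd }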

Event Sourcing

Event sourcing is a technique for storing data as a series of events, where these events act as the source of truth for the data. The key idea is that each change to the data is an event, stored with metadata like a timestamp and a relation to an aggregate root (CQRS is often applied together with DDD, where an aggregate root is a domain object which encapsulates access to its child objects).

This way you have a history of all changes to your data over the lifetime of the application. The next key aspect of such systems is that in order to get the current state of a domain object, all events for its aggregate root have to be replayed from the beginning (there are tricks like snapshots which solve the performance issues of that approach when you have huge amounts of data). In the same manner, the current state of the whole application is the sum of all events for all aggregate roots. This is very different from the usual CRUD model, where you don’t preserve the previous state but override the current one (for a SQL database, you just write new values into columns).

With event sourcing in place you can rebuild your data for a particular point in time, which can be very useful for data analysis. It also gives you a history of every data change by design, with no additional coding.
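Here is a minimal F# sketch of the idea (the event and state types are illustrative): the current balance of an account is never stored directly, it is computed by folding over all of its events.

type AccountEvent =
    | Deposited of decimal
    | Withdrawn of decimal

// Replaying the stream: the current state is a fold over all events from the beginning
let currentBalance events =
    events
    |> List.fold (fun balance event ->
        match event with
        | Deposited amount -> balance + amount
        | Withdrawn amount -> balance - amount) 0m

currentBalance [ Deposited 100m; Withdrawn 30m; Deposited 5m ] // val it : decimal = 75M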

Relation

CQS is a code level principle to improve design by applying separation of concerns.

CQRS is an application-level architectural pattern for separating commands (writes) and queries (reads) with regard to storage. It is based on the ideas of CQS.

Event Sourcing is a storage pattern which represents data changes as a stream of events with point-in-time recovery.

Conclusion

CQRS is based on the CQS principle, boosted to the application level. CQRS is often combined with Event Sourcing and DDD, but you can implement CQRS without Event Sourcing, and you can implement Event Sourcing without CQRS. Event Sourcing is the opposite of CRUD.

Programming, Software design

CQS and its impact on design

Let’s start with the abbreviation. CQS stands for Command-Query Separation. The term was coined by Bertrand Meyer during his work on the Eiffel programming language in the mid-1980s. In a nutshell, it states:

Every method should either be a command that performs an action, or a query that returns data to the caller, but not both.

Sounds good. But what does it mean in practice? Let’s look at a trivial example.

Suppose we have some service which can get a user from storage and send a message to that user based on their email address:

public class User
{
    public int Id { get; set; }
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; }
}

public interface IStorage
{
    User GetUser(string email);
}

public interface IMessageService
{
    void Send(string email, string message);
}

public class InMemoryStorage : IStorage
{
    public User GetUser(string email) =>
        new User
        {
            Id = 1,
            Email = "bruce@eternity.net",
            FirstName = "Bruce",
            LastName = "Lee",
            BirthDate = new DateTime(1940, 11, 27)
        };
}

public class EternityMessageService : IMessageService
{
    public void Send(string email, string message)
    {
        Console.WriteLine($"Sending '{message}' to {email}...");
    }
}

public class Service
{
    private readonly IStorage storage;
    private readonly IMessageService messageService;

    public Service(IStorage storage, IMessageService messageService)
    {
        this.storage = storage;
        this.messageService = messageService;
    }

    public User GetUser(string email) => storage.GetUser(email);
    public void Send(string message, string email) => messageService.Send(email, message);
}

Service exposes two public methods:

User GetUser(string email)
void Send(string message, string email)

From the public API of the service you should be able to distinguish which methods are queries and which are commands.

For the User GetUser(string email) method we expect to pass an e-mail as a parameter and get back a user object as the result. It fits the query method definition perfectly. We do not expect to see a signature like void GetUser(string email); that would be very confusing: why does it say GetUser but return no value? In the same manner, we expect to see the command method void Send(string message, string email) with no return value and two input parameters, message and email. This clearly states that in order to operate, the method requires input and produces a side effect in the form of sending a message.

Good design promotes having method signatures with clear intention.

A consumer can execute the code with the following snippet:

void Main()
{
    var service = new Service(new InMemoryStorage(), new EternityMessageService());
    var user = service.GetUser("bruce@eternity.net");
    // Some logic with that user...
    service.Send("A wise man can learn more from a foolish question than a fool can learn from a wise answer", user.Email)
}

Now, let’s assume that we have a new requirement:

When today is the user’s birthday, we would like to send an e-mail with congratulations.

This is a typical scenario for a lot of web sites, like forums. Also, let’s assume that the developer who is going to implement this change is not familiar with CQS, so he decides to extend the GetUser method with new logic:

public User GetUser(string email)
{
    var serverDate = DateTime.Today; // current server date
    var user = storage.GetUser(email);
    if (user.BirthDate.Day == serverDate.Day && user.BirthDate.Month == serverDate.Month)
    {
        Send($"Congrats with your Birthday, {user.FirstName}! We wish you could stay with us for longer", user.Email);
    }
    return user;
}

Now, running the same Main() method results in the following output:

Sending 'Congrats with your Birthday, Bruce! We wish you could stay with us for longer' to bruce@eternity.net...
Sending 'A wise man can learn more from a foolish question than a fool can learn from a wise answer' to bruce@eternity.net...

And bingo! We have a violation of command-query separation. The consumer of the service does not expect this method to send a message in addition to its main purpose of retrieving a user, and it is not obvious from the signature of the method. By reading a method’s definition you should be able to quickly tell which category it belongs to.

Now let’s look at one of the main properties of query methods:

Any call to the same query method with the same parameters should always give the same result. In other words, a query method should be idempotent.

The correct way to implement the birthday requirement would be to move this logic into Main(), or to wrap it in some other command method which accepts the user and the message as input parameters, keeping GetUser a simple user query:

void Main()
{
    var service = new Service(new InMemoryStorage(), new EternityMessageService());
    var serverDate = DateTime.Today; // current server date
    var user = service.GetUser("bruce@eternity.net");
    if (user.BirthDate.Day == serverDate.Day && user.BirthDate.Month == serverDate.Month)
    {
        service.Send($"Congrats with your Birthday, {user.FirstName}! We wish you could stay with us for longer", user.Email);
    }
    // Other logic here...
    service.Send("A wise man can learn more from a foolish question than a fool can learn from a wise answer", user.Email);
}

It’s worth mentioning the opposite scenario:

Calling query method from command method is quite legit.

This is true because a query does not have side effects and does not change the state of the system.

It’s also worth mentioning that there are cases when you need a value back from a command. For example, the Send method may need to return an id for further tracking. If you change its signature to something like int Send(string message, string email), it will no longer be a pure command method but a mixture of command and query, which violates the CQS principle. In that case a possible solution would be to create a separate query method for getting the tracking id: int GetTrackingId(string email). So we can always adhere to CQS by decomposing query and command methods. Sometimes the price of pure separation is performance or changes in other layers: in the Send example we would most probably need to store the tracking id somewhere in order to fetch it for the int GetTrackingId(string email) call. But it’s always a trade-off, like everything in software development.

NB: CQS is related to another concept, Design by Contract (DbC), which was also invented by Meyer back in 1986 while he was working on the design of Eiffel. CQS is beneficial to DbC because any value-returning method (any query) can be called from any assertion without fear of modifying program state.

Conclusion

The code we write should be readable and maintainable; that is the major quality of good code. Applying CQS in practice leads to clear public APIs and a certain trust in what they do. Separating requests for program state from requests to change that state is beneficial for creating good designs. Perhaps that is why more modern architectural patterns like CQRS, used for building distributed systems, are based on these well-proven ideas from the mid-80s.