Programming

Recommended reads and videos of 2020

2020 hit us hard and brought a lot of unexpected surprises: COVID-19 and the burden on healthcare systems, hard times for private businesses (especially small-to-medium companies), restrictions, lockdowns and much more… On the other hand, a lot of companies and individuals changed the way they work: in a short time remote work became as natural as going to the office was in pre-Corona times. After all, we got more time. We could dedicate it to our families and the things we like to do most. From a professional perspective, we could spend the freed-up time on online courses and reading. As for me, this year I started filling some gaps in areas like DevOps and Kubernetes, I discovered brilliant resources on Software Architecture, and of course it was a year of deep diving into functional programming. I also finally started my own blog, which you, my dear comrade, are reading now. I would like to share with you the list of courses and readings which made up my year. I hope you will find it useful as well.

Righting Software by Juval Löwy

It doesn't matter whether you are an experienced architect or just interested in Software Architecture, you definitely need to read this brilliant work. The author offers the idea of volatility-based decomposition and compelling arguments for avoiding functional and domain decomposition. The book is made up of two parts: one which describes The Method for System Design and one which describes Project Design. System Design is all about the technical implementation of volatility-based decomposition. It teaches you how to plan and split a complex system into parts and create a design which stays flexible and maintainable over years of operation. The second part of the book gives you exhaustive knowledge on Project Design: how to plan a project from the beginning to the final delivery. It covers staffing, cost planning, estimations and much more. What I found nice in the book is that the author works with very concrete examples, metrics, graphs and formulas. Everything in this book is concise, concrete and beautiful; however, I should admit it is not an easy read: you should have some background and experience with building software.

My verdict 9/10

The Pragmatic Programmer: 20th Anniversary Edition, 2nd Edition: Your Journey to Mastery by David Thomas and Andrew Hunt

A 20-year-old classic revisited. All the main concepts described back in 1999 are still valid in our field today. The authors made corrections for the current realities we live in, like cloud computing and the spread of microservices, but more importantly, they give invaluable advice on how to take responsibility for your own actions, how to develop your career and how to behave like a professional. This book reminds me of another classic, The Clean Coder by Robert Martin, but I think it covers a wider range of aspects and from a different angle. In my opinion it is a must-read for every software engineer.

I had the pleasure of listening to the audiobook version. The audiobook is organized as a series of sections, each containing a series of topics. It is read by Anna Katarina; Dave and Andy (and a few other folks) jump in every now and then to give their take on things.

My verdict 10/10

Release It!: Design and Deploy Production-Ready Software 2nd Edition by Michael Nygard

This book was published in 2018. I recommend it to everyone somehow involved in releasing software to production. It will save you years of learning the hard way: the author already has all that experience and kindly shares it with us. You will understand the typical problems of distributed systems. It touches a variety of aspects like robustness, security, versioning and much more. It is a very engaging book: as soon as you pick it up, you will not put it down until you have read it to the end. The author also has a great sense of humor, so the book is fun to read.

My verdict 9/10

F# From the Ground Up


A great course on Udemy by Kit Eason. Full of examples and quite interactive. Very good for beginners who want to get a grasp of functional programming and gain practical experience.

The Art Of Code by Dylan Beattie

A glorious and hilarious talk on code from the perspective of art. Dylan Beattie is the author of the Rockstar programming language. The ending of the video is really epic 🙂

Where We’re Going, We Don’t Need Servers! by Sam Newman

An interesting talk by the author of the Building Microservices book. Thoughts on where we are going and trends in cloud computing.

That’s it. Wish you all Merry Xmas and a Happy New Year 🎄🎅

Programming, Software design

Continuous refactoring

A son asked his father (a programmer) why the sun rises in the east, and sets in the west. His response? It works, don’t touch!

First rule of programming

How often do you see software systems where adding new, relatively simple features takes ages, where it is hard to find the core logic or to figure out in which part of the system to apply changes? Does reading the code give you a headache? Does any change you need to introduce in that software make you sick and want to escape from that sinking ship? Well, unfortunately, you have become a victim of software rot.

Ivar Jacobson describes software entropy as follows:

The second law of thermodynamics, in principle, states that a closed system‘s disorder cannot be reduced, it can only remain unchanged or increase. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, tends to increase. This is known as software entropy

Ivar Jacobson

The Problem

In our daily work we are too busy adding new features as fast as possible and closing our tickets. Everyone is happy: the business sells features to customers, we get paid, and if we are lucky, we even get some bonuses for that incredible performance. With time, however, the software gets more complicated. You will find that adding new features takes longer and produces more bugs. This tends to grow very quickly and ends in a system which is extremely hard and expensive to maintain.

Perhaps you have heard of the term Technical Debt? It describes the collective debt you owe to your software after taking shortcuts when implementing features because you didn't have time to do them properly. It can live at any level, from the overall software design down to the low-level implementation. But it is a very optimistic term: it assumes that this debt will be paid off, which we all know almost never happens. I actually prefer to think of it as Software Rot, and of the whole practice that leads to it as Shortcut-Driven Development.

I think many companies just don't realize what the maintenance costs of poorly built software are. But hey, wouldn't it be nice if we could show and prove this mathematically? Then you would have strong arguments in your discussions with product owners and managers about the need for refactoring. Once upon a time, while I was reading Code Simplicity: The Fundamentals of Software, I reached the chapter which describes the equation of software design, and it has a brilliant mathematical representation:

D = (Pv * Vi) / (Ei + Em)

Or in English:

The Desirability of Implementation is directly proportional to the Probability of Value and the Potential Value of Implementation, and inversely proportional to the total effort, consisting of the Effort of Implementation plus the Effort of Maintenance.

However, there is a critical factor missing from the simple form of this equation: time. What we actually want to know is the limit of this equation as time approaches infinity, and that gives us the true Desirability of Implementation. So let’s look at this from a logical standpoint:

The Effort of Implementation is a one-time cost, and never changes, so is mostly unaffected by time.

The Value of Implementation may increase or decrease over time, depending on the feature. It’s not predictable, and so we can assume for the sake of this equation that it is a static value that does not change with time (though if that’s not the case in your situation, keep this factor in mind as well). One could even consider that the Effort of Maintenance is actually “the effort required to maintain this exact level of Value,” so that the Value would indeed remain totally fixed over time.

The Probability of Value, being a probability, approaches 1 (100%) as time goes to infinity.

The Effort of Maintenance, being based on time, approaches infinity as time goes to infinity.

What this equation actually tells us is that the most important factors to balance, in terms of time, are probability of value vs. effort of maintenance. If the probability of value is high and the effort of maintenance is low, the desirability is then dependent only upon the Potential Value of Implementation vs. the Effort of Implementation–a decision that a product manager can easily make. If the probability of value is low and the effort of maintenance is high, the only justification for implementation would be a near-infinite Potential Value of Implementation.
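To make the time factor concrete, here is a rough numerical sketch in C# (the numbers and the helper are mine, not from the book): it evaluates D year by year while the maintenance effort keeps accumulating.

using System;

public static class DesirabilityDemo
{
    // D = (Pv * Vi) / (Ei + Em)
    private static double Desirability(double pv, double vi, double ei, double em)
        => (pv * vi) / (ei + em);

    public static void Main()
    {
        double pv = 0.9;   // probability of value (assumed)
        double vi = 100;   // potential value of implementation (assumed)
        double ei = 10;    // one-time effort of implementation (assumed)

        for (var year = 1; year <= 5; year++)
        {
            var em = 5.0 * year; // maintenance effort accumulates over time
            Console.WriteLine($"Year {year}: D = {Desirability(pv, vi, ei, em):F2}");
        }

        // D shrinks every year: unless the potential value is near-infinite,
        // high maintenance effort eventually makes the feature undesirable.
    }
}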

The Solution

Let's take a look at the definition of refactoring given by Martin Fowler:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.

Its heart is a series of small behavior preserving transformations. Each transformation (called a “refactoring”) does little, but a sequence of these transformations can produce a significant restructuring. Since each refactoring is small, it’s less likely to go wrong. The system is kept fully working after each refactoring, reducing the chances that a system can get seriously broken during the restructuring.

Keeping software systems in good order requires developer teams to follow strong practices. But first of all it is your personal responsibility and should be part of your culture. Continuous Refactoring is a key factor in that sense. You should refactor and improve your code on a day-to-day basis. Plan this work if you need to, and explain its importance to managers and product owners. You should become a refactoring machine. This not only tremendously reduces the denominator of the equation (Em), but also forces you to protect your code from breaking by implementing all kinds of testing, one of which is regression testing. Each bit of refactoring makes your code better and easier to extend with new functionality. Gradually, you can switch from Shortcut-Driven Development to Software Engineering, enjoy what you are doing and help the business survive in the long run.

The Mantra

Mantra for TDD followers

.NET, F#, Programming

Having fun with F# operators

F# is a very exciting and fun language to learn. It has pipe and composition operators which allow you to write less code and express it more concisely. In addition to the familiar prefix and postfix operators, it also comes with infix operators. The beauty of it is that you can define your own infix operators and succinctly express business logic in your F# code.

Prefix, infix and postfix 👾

As an example of a prefix operator, we can define any regular function:

let times n x = x * n 

and call this function with prefix notation:

times 3 3 // val it : int = 9

In F#, the vast majority of primitives are functions, just as in a pure OOP language everything is an object. So you can also call the multiplication operator as a function:

(*) 3 3 // val it : int = 9

which gives the same result as in the previous code snippet.

Postfix operators are not something you use often, and they mostly come as built-in keywords:

type maybeAString = string option // built-in postfix keyword
type maybeAString2 = Option<string> // effectively same as this
// Usage
let s:maybeAString = Some "Ninja in the bushes!"
let s2:maybeAString2 = None

But the most interesting one is the infix operator. As you could already guess, an infix operator is placed between two operands. Everyone did some math in school and wrote something similar to:

3 * 3 // val it : int = 9

Not surprisingly, it is something you use without even thinking. Now, let's define a few custom operators:

let (^+^) x y = x ** 2. + y ** 2. // sum of the square of two numbers
let (^^) x y = x ** y // returns x to the power of y

And use them with infix notation:

3. ^+^ 3. // val it : float = 18.0
3. ^^ 3. // val it : float = 27.0

Note that we can also use them with prefix notation, just like regular functions:

(^+^) 3. 3. // val it : float = 18.0
(^^) 3. 3. // val it : float = 27.0

Of course, the infix syntax looks much more succinct in this case.

Pipe, compose and mix 🔌

The king among F# operators is the pipe operator (|>). It allows you to chain function applications in a readable way. Function application is left-associative, meaning that evaluating x y z is the same as evaluating (x y) z. If you would like right associativity, you can use explicit parentheses or a pipe operator:

let apply x y z = x (y z)
let apply' x y z = y z |> x // forward pipe operator
let apply'' x y z = x <| y z // backward pipe operator

Okay. As you can see, there are two flavors of pipe operator: forward and backward. Here is the definition of the forward pipe operator:

let (|>) x f = f x

Just as simple as that: feeding the argument from the left side (x) into the function (f). The definition of the backward pipe operator is:

let (<|) f x = f x

You may wonder why it is needed and what the benefit of using it is. You will see an example later in this post.

So how can we apply pipe operators in practice? Here are some examples:

let listOfIntegers = [5;6;4;3;1;2]
listOfIntegers |> List.sortBy (fun el -> abs el) // val it : int list = [1; 2; 3; 4; 5; 6]
// Same as
List.sortBy (fun el -> abs el) listOfIntegers

It shines when you have a long chain of functions you need to compose together:

text.Split([|'.'; ' '; '\r'|], StringSplitOptions.RemoveEmptyEntries)
      |> Array.map (fun w -> w.Trim())
      |> Array.filter (fun w -> w.Length > 2)
      |> Array.iter (fun w -> ...

The backward pipe operator can be useful in some cases to make your code read more like English:

let myList = []
myList |> List.isEmpty |> not
// Same as above, written with the backward pipe operator
not <| List.isEmpty myList

The composition operator also comes in forward (>>) and backward (<<) flavors and is likewise used for composing functions. Unlike the pipe operator, the result of applying a composition operator is a new function.

Definition of composition operators:

let (>>) f g x = g ( f(x) )
let (<<) f g x = f ( g(x) )

For example:

let add1 x = x + 1
let times2 x = x * 2
let add1Times2 = (>>) add1 times2
add1Times2 3 // val it : int = 8

Which we could re-write like this:

let add x y = x + y
let times n x = x * n
let add1Times2 = add 1 >> times 2
add1Times2 3 // val it : int = 8

Both examples rely on the core concept of partial application: one argument is baked into the functions add1 and times2, while the second argument is left free, to be supplied by the caller at invocation time.

As long as the output type of one function matches the input type of the next, values of any kind can flow through the composition.

The same example with the backward composition operator gives a different result, because the functions are composed in the opposite order:

let add x y = x + y
let times n x = x * n
let times2Add1 = add 1 << times 2
times2Add1 3 // val it : int = 7

Have fun 😝

Now a small exercise for you. What will the outcome of each of these expressions be? 🤔 :

3 * 3
(*) 3 3
3 |> (*) 3
3 |> (*) <| 3

What about that one:

let ninjaBy3 = 3 * 3 |> (+)
ninjaBy3 5

Try it yourself. Leave comments and have fun!

.NET, C#, Programming

C# 8.0 pattern matching in action

Let's revisit the definition of pattern matching from Wikipedia:

In computer science, pattern matching is the act of checking a given sequence of tokens for the presence of the constituents of some pattern. In contrast to pattern recognition, the match usually has to be exact: "either it will or will not be a match." The patterns generally have the form of either sequences or tree structures. Uses of pattern matching include outputting the locations (if any) of a pattern within a token sequence, to output some component of the matched pattern, and to substitute the matching pattern with some other token sequence.

Pattern matching

Putting it into human language: instead of comparing expressions against exact values (think of if/else/switch statements), you literally match on patterns, i.e. the shape of the data. A pattern can be a constant value, a tuple, a type, a variable, etc.; for the full definition please refer to the documentation. Pattern matching was initially introduced in C# 7. It shipped with basic capabilities to recognize constant, type and var patterns. The language was extended with the is and when keywords. One of the last pieces to make it work was the introduction of discards for the deconstruction of tuples and objects. Combining all of these, you were able to use pattern matching in is expressions and switch statements:

if (o is null) Console.WriteLine("o is null");
if (o is string s && s.Trim() != string.Empty)
        Console.WriteLine("whoah, o is not null");
switch (o)
{
    case double n when n == 0.0: return 42;
    case string s when s == string.Empty: return 42;
    case int n when n == 0: return 42;
    case bool _: return 42;
    default: return -1;
}

C# 8 extended pattern matching with switch expressions and three new ways of expressing a pattern: positional, property and tuple patterns. Again, I will refer you to the full documentation for the details.
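Before the real-world example, here is a minimal illustration of the three new pattern kinds (my own sketch, not code from this post):

public class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    // Needed for positional patterns
    public void Deconstruct(out int x, out int y) => (x, y) = (X, Y);
}

public static class PatternKinds
{
    // Property pattern: match on the shape of the object's properties
    public static string Describe(Point p) => p switch
    {
        { X: 0, Y: 0 } => "origin",
        { X: 0 } => "on the Y axis",
        { Y: 0 } => "on the X axis",
        _ => "somewhere else"
    };

    // Positional pattern: uses the Deconstruct method
    public static string Quadrant(Point p) => p switch
    {
        (0, 0) => "origin",
        (var x, var y) when x > 0 && y > 0 => "first quadrant",
        _ => "elsewhere"
    };

    // Tuple pattern: match on several values at once
    public static string Winner(string first, string second) => (first, second) switch
    {
        ("rock", "scissors") => "first wins",
        ("paper", "rock") => "first wins",
        ("scissors", "paper") => "first wins",
        (var a, var b) when a == b => "draw",
        _ => "second wins"
    };
}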

In this post I would like to show a real-world example of the power of pattern matching. Let's say you want to build an SQL query based on a list of filters you receive as input from an HTTP request. For example, we would like to get a list of all shurikens filtered by shape and material:

http://www.ninja.org/shuriken?filters[shape]='star,stick'&filters[material]='kugi-gata'

We need a model to which we can map this request with its list of filters:

public class ShurikenQuery
{
    [BindProperty(Name = "filters", SupportsGet = true)]
    public IDictionary<string, string> ShurikenFilters { get; set; }
}

Now, let's write a function which builds an SQL query string based on the provided filters:

private static string BuildFilterExpression(ShurikenQuery query)
{
    if (query is null)
        throw new ArgumentNullException(nameof(query));

    const char Delimiter = ',';

    var expression = query.ShurikenFilters?.Aggregate(new StringBuilder(), (acc, ms) =>
    {
        var key = ms.Key;
        var value = ms.Value;
        var exp = (key, value) switch
        {
            (_, null) => $"[{key}] = ''",
            (_, var val) when val.Contains(Delimiter) =>
                @$"[{key}] IN ({string.Join(',', val
                    .Replace("'", string.Empty)
                    .Replace("\"", string.Empty)
                    .Split(Delimiter).Select(x => $"'{x}'"))})",
            (_, _) => $"[{key}] = '{value}'"
        };
        return exp != null ? acc.AppendLine($" AND {exp}") : acc;
    });
    return expression?.ToString() ?? string.Empty;
}

Let's break this code down. On line 8 we use the LINQ Aggregate function, which does the main work of building the filter string. We iterate over each KeyValuePair in the dictionary and, based on the shape of the data in it, create a string which represents an expression that can be fed to an SQL WHERE clause.

On lines 10 and 11 we extract the key and value of the KeyValuePair into their own variables, just for convenience.

Lines 12-21 are of main interest to us: that's where all the magic happens. On line 12 we wrap key and value in a tuple and use a switch expression to start the pattern match. On line 14 we check whether the value is null (_ in the key position is a discard: we don't care what the actual value is). If that's the case, we produce a string like [Shape] = ''. On lines 15-19, again, we are not interested in the key position, but now we bind the value to a dedicated variable val we can work with. Next, we check whether this value contains a filter with multiple values (as in filters[shape]='star,stick') and split it into separate values, removing " and ' along the way. We want to translate this into the SQL IN operator, so after processing this pattern the string looks like this: [Shape] IN ('star','stick'). The last pattern matches the remaining case of single-value filters (like filters[material]='kugi-gata') and produces the following string: [Material] = 'kugi-gata'. Finally, the return statement of the lambda prepends AND to the expression and accumulates the result in the StringBuilder we provided as the seed of the Aggregate call on line 8.

If we put (_, _) as the first arm of the switch expression, the other patterns will never be evaluated, because (_, _) catches all values.

Be aware that in pattern matching, order is everything.

Finally, the resulting string we return looks like this:

 AND [Shape] IN ('star','stick') AND [Material] = 'kugi-gata'

And that is a valid string for an SQL WHERE clause. As you can see, pattern matching is a real deal and can be used in many expression-parsing scenarios. With C# 9, pattern matching will be extended even further with relational and logical patterns.
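If you want to try it yourself, a hypothetical call could look like this (assuming the usual usings and that BuildFilterExpression is reachable from your test code; the dictionary below stands in for the bound request):

var query = new ShurikenQuery
{
    ShurikenFilters = new Dictionary<string, string>
    {
        ["Shape"] = "star,stick",
        ["Material"] = "kugi-gata"
    }
};

// Prints two lines (AppendLine adds a line break per condition):
//  AND [Shape] IN ('star','stick')
//  AND [Material] = 'kugi-gata'
Console.WriteLine(BuildFilterExpression(query));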

Hope you enjoyed. Stay tuned and happy coding.

.NET, async, LINQ, Programming

Make LINQ Aggregate asynchronous

I often use LINQ in my code. Or, to put it another way: I can't live without LINQ in my daily work. One of my favorite methods is Aggregate. Applied wisely, it saves you from writing explicit loops, chains naturally into other LINQ methods and at the same time keeps your code readable and well-structured. Aggregate is similar to the reduce and fold functions, which are the hammer and anvil of the functional programming toolbox.

When you use Entity Framework, it provides you with async extension methods like ToListAsync(), ToArrayAsync() and SingleAsync(). But what if you want asynchronous behavior with the LINQ Aggregate method? You will not find an async extension in the existing framework (at the moment of writing this article I'm using .NET Core 3.1 and C# 8.0). But let me give you a real-world example of a case where you could find one really useful.

Let's say you need to fetch from the database all distinct values for multiple columns in order to build a multi-selection filter.

Let's also assume you use SQL Server, as it is the most common one. To keep it simple, I will show an example using the Dapper micro-ORM.

The function could look like this:

public List<MultiSelectionModel> GetMultiSelectionFilterValues(string[] dataFields) {
  var results = new List<MultiSelectionModel>();

  var query = dataFields.Aggregate(new StringBuilder(), (acc, field) =>{
    return acc.AppendLine($"SELECT [{field}] FROM Table GROUP BY [{field}];");
  });

  using var connection = new SqlConnection(this.connectionString);
  connection.Open();

  using(var multi = connection.QueryMultiple(query.ToString())) {
    results.AddRange(dataFields.Aggregate(
     new List<MultiSelectionModel>(), (acc, field) =>{
      acc.Add(new MultiSelectionModel {
        DataField = field,
        Values = multi.Read(),
      });

      return acc;
    }));
  }

  return results;
}

The function receives as an input parameter an array of data fields (columns) for which we need to fetch distinct values for the multi-selection filter, and returns a list of multi-selection models, which is just a simple data structure defined as:

public class MultiSelectionModel
{
    public string DataField { get; set; }
    public IEnumerable<dynamic> Values { get; set; }
}

On lines 4-6 you can see how the Aggregate method is applied to build a SELECT query fetching distinct values for the provided columns. I use GROUP BY in this example, but you can use DISTINCT to the same effect, although there is a difference in performance between DISTINCT and GROUP BY for more complex queries, which is excellently explained in this article. Lines 13-21 contain the main logic of the function, where we actually query the database with multi.Read() and assign the distinct values for each data field to the resulting model. In both cases the following Aggregate overload is used:

public static TAccumulate Aggregate<TSource, TAccumulate>(
	this IEnumerable<TSource> source,
	TAccumulate seed,
	Func<TAccumulate, TSource, TAccumulate> func
)

In the first case we provided a StringBuilder as the seed parameter. The second parameter is a function which receives the accumulator and an element from the source and returns the accumulator, which in our case is the StringBuilder. In the second case we used a List<MultiSelectionModel> as the seed, which is the resulting collection, so the final list is accumulated there.
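If the mechanics of the seeded overload still feel abstract, here is a tiny standalone sketch (my own example, not part of the original code) that folds field names into a StringBuilder:

using System;
using System.Linq;
using System.Text;

public static class AggregateDemo
{
    public static void Main()
    {
        var fields = new[] { "Shape", "Material", "Weight" };

        // The seed is a StringBuilder; the lambda folds each field into it and returns the accumulator.
        var sql = fields.Aggregate(
            new StringBuilder(),
            (acc, f) => acc.AppendLine($"SELECT [{f}] FROM Table GROUP BY [{f}];"));

        Console.WriteLine(sql.ToString());
    }
}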

So that works. You can stop reading now and go for a couple of 🍺 with your fellows…

Oh, you are still here 😏. You know, curiosity killed the cat. But we are different animals, so let's move on. As you may have noticed, in the first example we used what Dapper calls multiple result sets: it executes multiple queries within the same command and maps the results. The good news is that it also has an async version. The bad news is that our Aggregate does not. Should we go back to the good old foreach loop for mapping the results of the query execution then? No way!

So how could we implement a fully asynchronous version of GetMultiSelectionFilterValues? Well, let's first rewrite it the way we would like to see it:

public async Task<List<MultiSelectionModel>> GetMultiSelectionFilterValuesAsync(string[] dataFields) {
  var results = new List<MultiSelectionModel>();

  var query = dataFields.Aggregate(new StringBuilder(), (acc, field) =>{
    return acc.AppendLine($"SELECT [{field}] FROM Table GROUP BY [{field}];");
  });

  using var connection = new SqlConnection(this.connectionString);
  connection.Open();

  using(var multi = await connection.QueryMultipleAsync(query.ToString())) {
    results.AddRange(await dataFields.AggregateAsync(
     new List<MultiSelectionModel>(), async (acc, field) =>{
      acc.Add(new MultiSelectionModel {
        DataField = field,
        Values = await multi.ReadAsync(),
      });

      return acc;
    }));
  }

  return results;
}

Much better now, isn't it? The changes are QueryMultipleAsync, AggregateAsync and the awaited multi.ReadAsync(). This is a fully asynchronous Aggregate now. Of course, you want to know where I got this async extension 😀. Here are the extension methods I came up with to make it work:

public static class AsyncExtensions {
	public static Task<TSource> AggregateAsync<TSource>(
	this IEnumerable<TSource> source, Func<TSource, TSource, Task<TSource>> func) {
		if (source == null) {
			throw new ArgumentNullException(nameof(source));
		}

		if (func == null) {
			throw new ArgumentNullException(nameof(func));
		}

		return source.AggregateInternalAsync(func);
	}

	public static Task<TAccumulate> AggregateAsync<TSource,
	TAccumulate>(
	this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, Task<TAccumulate>> func) {
		if (source == null) {
			throw new ArgumentNullException(nameof(source));
		}

		if (func == null) {
			throw new ArgumentNullException(nameof(func));
		}

		return source.AggregateInternalAsync(seed, func);
	}

	private static async Task<TSource> AggregateInternalAsync<TSource>(
	this IEnumerable<TSource> source, Func<TSource, TSource, Task<TSource>> func) {
		using var e = source.GetEnumerator();

		if (!e.MoveNext()) {
			throw new InvalidOperationException("Sequence contains no elements");
		}

		var result = e.Current;
		while (e.MoveNext()) {
			result = await func(result, e.Current).ConfigureAwait(false);
		}

		return result;
	}

	private static async Task<TAccumulate> AggregateInternalAsync<TSource,	TAccumulate>(
	this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, Task<TAccumulate>> func) {
		var result = seed;
		foreach(var element in source) {
			result = await func(result, element);
		}

		return result;
	}
}

I did it for two of the three existing Aggregate overloads. The last one you can implement yourself if you need it; it will be a good exercise to understand how Aggregate works behind the scenes.

Stay tuned and have fun.

.NET, ASP.NET Core, Programming

From Zero to Hero: Build ASP.NET Core 3.1 production-ready solution from the ground up (Part 1)

How often do you start a new project with the latest and greatest version of .NET Core and C# to try some fancy new language features, or perhaps create a new solution to implement your ideas? It happens to me a lot. I find myself creating pet projects over and over again. Sometimes a project grows and gets more contributors. People work from different places, with different IDEs and operating systems. The solution should work the same way on each workstation and each OS. It is also important to have code style conventions and scripts for building and running the solution. I would like to share with you my experience of structuring a .NET solution, containerizing it with Docker, adding HTTPS support for development and many more nice bonuses like adding code analyzers, following conventions and formatting code. As an example we will create a simple ASP.NET Core 3.1 API.

From this post you will learn:

  • How to properly structure your solution
  • How to add Git and other configuration files
  • How to create ASP.NET Core API application
  • How to containerize ASP.NET Core application
  • How to add support for HTTPS development certificate
  • How to add styling and code conventions with analyzers
  • How to make it work cross-platform in different editors and OSes (Visual Studio, VS Code, CLI)

Structure solution and add Git with configuration files

Okay. Let's start from the beginning. I assume you have Git installed:

mkdir ninja-core
cd ninja-core
git init

I suggest structuring your solution in the following way:

  • /
    • src
      • project-1
      • project-2
    • docs
    • tests
    • build
    • deploy

src – the solution source files, including all project sources

docs – documentation for your solution. This could be diagrams containing sequence or activity flows, or just simple use cases

tests – all kinds of tests for your solution, including unit tests, integration tests, acceptance tests, etc.

build – any scripts for building your solution

deploy – scripts related to deploying your solution to different environments or to localhost

The suggested solution structure of our deadly Ninja .NET Core app could look like the layout above for now.

Let's add the following files to the root folder of our project:

.gitattributes – Defines Git behavior for certain attribute-aware operations like line endings, merge settings for different file types and much more.

.gitignore – Defines patterns for files which should be ignored by Git (binaries, tooling output, etc.). This one is adapted for Visual Studio/Code and .NET projects.

.gitlab-ci.yml – Configuration file for the GitLab pipeline (will be covered in Part 2). We would like to be sure that our code is continuously integrated and delivered.

README.md – Every well-made project should contain a readme file with instructions on how to build and run the solution and, optionally, the team members and responsible persons.

You can use these files as is or adapt them to your project's needs. After you have created the folder structure and added all the needed configuration files, you need to push them to your repository (I assume you've created one). Typically it looks something like:

git remote add origin git@gitlab.com:username/your_repo_name.git
git add .
git commit -m "Initial commit"
git push -u origin master

Create ASP.NET Core web application

Creating an ASP.NET Core web app is really simple. Just run the following commands in the CLI:

#inside of ninja-core/src folder
mkdir Iga.Ninja.Api
cd Iga.Ninja.Api
dotnet new webapi

By default, it generates a WeatherForecastController.cs file in the Controllers folder. Because we're building a deadly ninja API, we want to delete this file and instead add a simple NinjaController.cs with the following content:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace Iga.Ninja.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class NinjaController : ControllerBase
    {
        private readonly ILogger<NinjaController> _logger;

        public NinjaController(ILogger<NinjaController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public string Get() => "Go Ninjas!!!";
    }
}

Cool. Now we should be able to build and run it:

dotnet run

Open your browser and see it working: http://localhost:5000/ninja.

Containerize ASP.NET Core application

Since its introduction back in 2013, Docker has changed the way modern software development looks, especially in microservice-oriented architectures. You want your application to work exactly the same on your local machine, in test and in production, with all the package dependencies required for the app to run. This also helps a lot in end-to-end testing, when your application depends on external services and you would like to test the whole flow.

First, we need to define an image in the root of the Iga.Ninja.Api folder. Here is the Dockerfile I use:

# Stage 1 - Build SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /build

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet build -c Release -o ./app

# Stage 2 - Publish
FROM build AS publish
RUN dotnet publish -c Release -o ./app

# Stage 3 - Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=publish /build/app .
ENTRYPOINT ["dotnet", "Iga.Ninja.Api.dll"]

Here we use what is known as multi-stage builds, available in Docker starting from version 17.05, so don't forget to check that you are up to date. There are two base images: one with the .NET Core SDK, which contains all the tools required to build a .NET Core application, and one with the .NET Core runtime, which is needed to run the application. We use the SDK image in the first stage to restore packages and build the application. You may notice that we have dotnet restore and dotnet build as two separate commands in the Dockerfile instead of one. That is a small trick to make building the image a bit faster.

Each command found in a Dockerfile creates a new layer. Each layer contains the filesystem changes of the image between the state before the execution of the command and the state after the execution of the command.

Docker uses a layer cache to optimize the process of building Docker images and make it faster.

Docker layer caching mainly works on the RUN, COPY and ADD commands.

So if the csproj file hasn't changed since the last build, the cached layer will be reused. In Stage 2 we just publish the binaries built in Stage 1 by dotnet build. Stage 3 uses the ASP.NET Core runtime image and the artifacts from Stage 2 with the published binaries. That will be our final image. With the last line we instruct Docker which command to execute when a new container is instantiated from that image. By the way, an ASP.NET Core application is just a console app which runs with the built-in, lightweight Kestrel web server. If you run on Windows, the preferred option is the in-process hosting model with the IIS HTTP Server (IISHttpServer) instead of Kestrel, which gives performance advantages.

That’s it. You can build an image and run it:

docker build -t ninja-api .
docker run --rm -d -p 8000:80 --name deadly-ninja ninja-api

Now you should be able to see a deadly ninja in action by visiting http://localhost:8000/ninja in your browser.

Congratulations! You’ve just containerized our web api.

Add HTTPS development certificate (with support in Docker)

So far so good. Now we would like to enforce HTTPS in our API project for development, and make it work when running in a Docker container too. In order to achieve that, we need to do the following:

  • Trust ASP.NET Core HTTPS development certificate.

When you install the .NET Core SDK, it installs a development certificate into the local user certificate store. However, it is not trusted, so run this command to fix that:

dotnet dev-certs https --trust

That's already enough if we are going to run our API locally. However, if we would like to add this support in Docker, we need a few additional steps:

  • Export the HTTPS certificate into a PFX file using the dev-certs global tool to %USERPROFILE%/.aspnet/https/<>.pfx, using a password of your choice.

The PFX filename should correspond to your application name:

# Inside Iga.Ninja.Api folder
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\Iga.Ninja.Api.pfx -p shinobi

  • Add the password to the user secrets in your project:

dotnet user-secrets init -p Iga.Ninja.Api.csproj
dotnet user-secrets -p Iga.Ninja.Api.csproj set "Kestrel:Certificates:Development:Password" "shinobi"

Now we can run our container with ASP.NET Core HTTPS development support using the following command:

docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_ENVIRONMENT=Development -v %APPDATA%\microsoft\UserSecrets\:/root/.microsoft/usersecrets -v %USERPROFILE%\.aspnet\https:/root/.aspnet/https/ --name deadly-ninja-secure ninja-api

Navigate to https://localhost:8001/ninja. Now our deadly ninja is even more secure and trusted than ever.

P.S. Because Docker mounts the user secrets as a volume, it is very important to check that Docker has access rights to the required folders, so please check your Docker resource settings.

Add styling and code conventions with analyzers

When you work on a project with more than one developer, you want common conventions and an agreement on how to style and format your code. It is time to add that. First, I would like to suggest creating a solution file for our project. Although not strictly necessary, it is very handy to have one, especially if you work outside of an IDE. It serves as a project container, and you can issue dotnet build in the /src root so that the solution file is used for the build process. Let's add a solution file and our API project:

cd ./src
dotnet new sln --name Iga.Ninja
dotnet sln add Iga.Ninja.Api/Iga.Ninja.Api.csproj

Okay. Let's move on. There are a lot of source code analyzer packages you can find. For our example we will use SecurityCodeScan, SonarAnalyzer.CSharp and StyleCop.Analyzers. You can add them by running the following commands in the Iga.Ninja.Api folder:

dotnet add package SonarAnalyzer.CSharp
dotnet add package SecurityCodeScan
dotnet add package StyleCop.Analyzers

But I will suggest a different approach here. Instead of adding these packages manually to a specific project, it would be nice to have a way to automatically add them to any project we add to our solution. This is because we want code analyzers in each of our projects and we want to enforce code validation on the solution build. And there is a way to do it: we need to add a Directory.Build.props file in the root of our /src folder.

Directory.Build.props is a user-defined file that provides customizations to projects under a directory.

When MSBuild runs, Microsoft.Common.props searches your directory structure for the Directory.Build.props file (and Microsoft.Common.targets looks for Directory.Build.targets). If it finds one, it imports the property.

Let's add the Directory.Build.props file. Here is the content of mine:

<Project>
  <PropertyGroup>
    <Company>NinjaCorp</Company>
    <ProductName>MessageLog</ProductName>
  </PropertyGroup>
  <!-- StyleCop Analyzers configuration -->
  <PropertyGroup>
    <SolutionDir Condition="'$(SolutionDir)'==''">$(MSBuildThisFileDirectory)</SolutionDir>
    <CodeAnalysisRuleSet>$(SolutionDir)ca.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
  </PropertyGroup>
  <ItemGroup>
    <AdditionalFiles Include="$(SolutionDir)stylecop.json" Link="stylecop.json" />
    <PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="3.0.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="SecurityCodeScan" Version="3.5.3.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="SonarAnalyzer.CSharp" Version="8.10.0.19839">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="StyleCop.Analyzers" Version="1.1.118">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>
</Project>

An attentive reader will have noticed the following line in the file:

<CodeAnalysisRuleSet>$(SolutionDir)ca.ruleset</CodeAnalysisRuleSet>

This is a reference to a code analysis rule set file which describes the configuration of the various StyleCop rules. You do not necessarily have to agree with these rules 100%, so you can configure them. As a base I use the Roslyn Analyzer rule set with a few tweaks. You can find the rule set for our ninja core project here. And again, you should customize it for your organization's needs. This file will be picked up each time you issue the dotnet build command on your solution, and your binaries will be validated against the rule set. You will see warnings in the output of your build which you can resolve later.

The next line you may have noticed is

<AdditionalFiles Include="$(SolutionDir)stylecop.json" Link="stylecop.json" />

This file is used to fine-tune the behavior of certain StyleCop rules and to specify project-specific text. You can find the full reference here. In our project, stylecop.json looks like this:

{
  "$schema": "https://raw.githubusercontent.com/DotNetAnalyzers/StyleCopAnalyzers/master/StyleCop.Analyzers/StyleCop.Analyzers/Settings/stylecop.schema.json",
  "settings": {
    "documentationRules": {
      "companyName": "Ninja Coreporation",
      "copyrightText": "Copyright (c) {companyName}. All Rights Reserved.\r\n See LICENSE in the project root for license information.",
      "xmlHeader": false,
      "fileNamingConvention": "stylecop"
    },
    "layoutRules": {
      "newlineAtEndOfFile": "allow"
    }
  }
}

By the way, all package references and additional files described in the Directory.Build.props file will be automatically added to all projects on dotnet build/publish, without the need to add the packages to each project manually.

Last steps

Okay. Now we have a pretty decent solution which runs locally and in Docker with HTTPS support, with code analyzers in place. You can build and run it from the CLI on Windows and Linux. You should be able to run it in VS Code or in Visual Studio 2019. Before committing changes to Git, what I like to do is format the code according to the conventions in our .editorconfig file. There is a very nice tool for that: dotnet-format. You can install it globally:

dotnet tool install -g dotnet-format

Then all you need to do is go to your project/solution folder and issue the following command:

dotnet format

This ensures your files are formatted according to your conventions, so when you commit to Git you are good.

In the next part we will look at how to set up a CI/CD pipeline for our ninja-core web API project, with GitLab infrastructure as an example.

You can find the sample for this article on my GitLab: https://gitlab.com/dnovhorodov/ninjacore

Happy coding and stay tuned.

Software design

Software design DOs and DON’Ts

For many of us, including me, Uncle Bob's Clean Code and Clean Architecture are handbooks which take a proud place on the bookshelf. The principles described in these great books serve as a practical guide on how to build software and keep code clean and maintainable. They inspire and motivate us to make better products and enjoy our profession.

Despite the fact that the practices described in those books are essential from a coding perspective, there is one piece missing from the puzzle: how do you decompose a software system? Just ask yourself this question and try to come up with an answer.

Okay. I bet at least 90% of you thought something like: "WTF are you talking about, dude?" Of course we will do a functional decomposition of the system and put each decomposed piece in its own component with clearly defined interfaces, then… stop! At this moment take a short break. Keep reading… functional decomposition is the wrong way to decompose a software system. What? Are you out of your mind?

Welcome to the club of cheated software developers 🙂 For years we have been told that in order to build relatively complex software we need to get input from the business, analyze it, come up with a functional decomposition based on the requirements, and then design a system based on this decomposition. Thousands of posts and Hello World examples all over the Internet didn't make your understanding of software design any better either. The pain and suffering of maintaining such systems is our everyday life. But it's time to change that. Forget everything you know about system design.

I would like to introduce you to the book Righting Software by Juval Löwy. It is a revelation and a Holy Grail of software design, IMHO. The ideas the author shares have their roots in other engineering industries, like building cars, houses or computers, and have been proven over time. The Method he describes comes with reasoning and practical examples of decomposing a real-world software system. I'll share the ideas I found interesting with a short rationale behind each of them, but I highly recommend reading the book.

The first and foremost

Avoid functional decomposition

Functional decomposition couples services to the requirements because the services are a reflection of the requirements. Any change in the required functionality imposes a change on the functional services. This leads to high coupling and an explosion of services, greatly complicates individual reuse of services, encourages duplication because a lot of common functionality gets customized to specific cases, and causes many more problems.

The next:

Avoid Domain Decomposition

Domain decomposition is even worse than functional decomposition. The reason domain decomposition does not work is that it is still functional decomposition in disguise. Each domain often devolves into an ugly grab bag of functionality, increasing the internal complexity of the domain. The increased inner complexity causes you to avoid the pain of cross-domain connectivity, and communication across domains is typically reduced to simple state changes (CRUD-like) rather than actions triggering required behavior execution involving all domains. Composing more complex behaviors across domains is very difficult.

What are the alternatives?

Volatility-based decomposition identifies areas of potential change and encapsulates those into services or system building blocks. You then implement the required behavior as the interaction between the encapsulated areas of volatility.

The motivation for volatility-based decomposition is simplicity itself: any change is encapsulated, containing the effect on the system.

Decompose based on volatility

With functional decomposition, your building blocks represent areas of functionality, not volatility. As a result, when a change happens, by the very definition of the decomposition, it affects multiple (if not most) of the components in your architecture. Functional decomposition therefore tends to maximize the effect of the change. Since most software systems are designed functionally, change is often painful and expensive, and the system is likely to resonate with the change. Changes made in one area of functionality trigger other changes and so on. Accommodating change is the real reason you must avoid functional decomposition.

Universal Principle

The merits of volatility-based decomposition are not specific to software systems. They are universal principles of good design, from commerce to business interactions to biology to physical systems and great software. Universal principles, by their very nature, apply to software too (else they would not be universal). For example, consider your own body. A functional decomposition of your own body would have components for every task you are required to do, from driving to programming to presenting, yet your body does not have any such components. You accomplish a task such as programming by integrating areas of volatility. For example, your heart provides an important service for your system: pumping blood. Pumping blood has enormous volatility to it: high blood pressure and low pressure, salinity, viscosity, pulse rate, activity level (sitting or running), with and without adrenaline, different blood types, healthy and sick, and so on. Yet all that volatility is encapsulated behind the service called the heart. Would you be able to program if you had to care about the volatility involved in pumping blood?

Incremental construction

For a system of any level of complexity, the right approach to construction follows another principle:

Design iteratively, build incrementally

It is nearly impossible to create a proper design on the first try without revisiting it later. It is a continuous process: you start with some rough cuts at the blueprints, refine them, check alternatives, and after several iterations the design converges. But you don't want to build iteratively, you want to build incrementally. Imagine the process of building a car. You don't start by creating a scooter in the first iterations, then come up with a motorcycle, and only in later iterations finally build a car. It doesn't make any sense. Instead, you build a chassis, then a frame, engine, wheels and tires, running gear, and only after that do you paint and build the interior. It is a complex process, but the customer paid for a car, not a scooter. Which leads to the next fundamental design rule:

Features are aspects of integration, not implementation

As stated, the feature emerges once you have integrated the chassis with the engine, gearbox, seats, dashboard, a driver, a road and fuel. Only then can it transport you from location A to location B.

Requirements

Requirements always change. Designing a system against the requirements leads to fragile designs: the requirements change and the design must change with them. Changing the design is always painful, expensive and often destructive.

Never design against the requirements

In any system you have core use cases and other use cases. Core use cases represent the essence of the business of the system. Finding and extracting core use cases is challenging: usually they are not explicitly present in the requirements. A system with 100 use cases may have only 2-3 core use cases, which can often be extracted as an abstraction of other use cases and may require a new term or name.

Architect mission

Your task as an architect is to identify the smallest set of components that you can put together to satisfy all core use cases. Regular use cases are just variations of the core use cases and represent different interactions between components, not a different decomposition. When the requirements change, your design does not.

Conclusion

Functional decomposition is a good technique for splitting up complex business processes for analysis and further sub-tasking, but it is not suitable for creating a software design. Volatility-based decomposition and "The Method" described in the book give the reader a comprehensive explanation of how to decompose a system and, importantly, how to do it in time and within budget (the second part of the book is dedicated to project design).

Software design

CQS, CQRS, Event Sourcing. What’s the difference?

These are similar yet different concepts that confuse some developers when they see the definitions and try to understand the relation between the terms. Let's try to figure it out.

CQS

CQS stands for Command-Query Separation. It was initially devised by Bertrand Meyer as part of his work on the Eiffel programming language. The core idea behind this concept is that

Every method should either be a command that performs an action, or a query that returns data to the caller, but not both.

Applying this principle makes the software design cleaner and the code easier to read and understand in its intent. Usually you apply it at the source code level by creating clear interface definitions, followed by an implementation which adheres to the command-query separation rule. For an in-depth overview of CQS in action and its effect on design, see my article CQS and its impact on design.
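As a minimal sketch of what that separation can look like at the interface level (my example, not taken from the article), consider:

using System;

public interface IAccountService
{
    // Command: changes state, returns nothing
    void Deposit(Guid accountId, decimal amount);

    // Query: returns data, changes nothing
    decimal GetBalance(Guid accountId);
}

// A CQS violation would be a method that does both, for example:
// decimal DepositAndReturnBalance(Guid accountId, decimal amount);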

CQRS

CQRS stands for Command Query Responsibility Segregation. It is an architectural pattern. It is related to CQS in that its main idea is based on command-query separation; but unlike the former, CQRS is applied not at the code level but at the application level. The pattern was originally described by Greg Young in his CQRS Documents.

In a nutshell, it says that your write model is not the same as your read model, because the representation logic behind them differs: for your web pages you could have views specifically adapted to the representation logic of the UI elements on them, while the write model contains all the data in the format which best fits the type of the data. If you are familiar with SQL, the read model is something similar to SQL views (which are just projections of data in a convenient form).

This gives you flexibility not only in separating the representation logic, but also in choosing the underlying storage technologies. The write model could live in good old SQL and the read model in MongoDB or any other NoSQL DBMS. In essence it is the SRP (single responsibility principle) at the application level.
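Here is a minimal sketch of that split (all type names are mine and purely illustrative):

using System;
using System.Threading.Tasks;

// Write side: commands go through the domain model and end up in the write store.
public class PlaceOrderCommand
{
    public Guid OrderId { get; set; }
    public Guid CustomerId { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderCommandHandler
{
    Task Handle(PlaceOrderCommand command); // persists to the write model (e.g. SQL)
}

// Read side: queries return denormalized view models shaped for the UI.
public class OrderSummaryView
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderQueries
{
    Task<OrderSummaryView> GetOrderSummary(Guid orderId); // reads from the read model (e.g. a document store)
}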

Event Sourcing

Event sourcing is a technique for storing data as a series of events, where these events act as the source of truth for the data. The key idea is that each change to the data is an event, stored with metadata such as a timestamp and its relation to an aggregate root (CQRS is often applied together with DDD, where the aggregate root is the domain object that encapsulates access to its child objects).

In that way you have a history of all changes to your data over the lifetime of the application. The next key aspect of such systems is that, in order to get the current state of a domain object, all events for its aggregate root have to be replayed from the beginning (there are tricks like snapshots which solve the performance issues of that approach when you have a huge amount of data). In the same manner, the current state of the whole application is the sum of all events for all aggregate roots. This is very different from the usual CRUD model, where you don't preserve the previous state but overwrite the current one (you just write new values into columns in a SQL database).

With event sourcing in place you can rebuild your data for a particular point in time, which can be very useful for data analysis. It also gives you a history of every data change by design, with no additional coding.
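A minimal sketch of replaying events into current state could look like this (illustrative only, with no real event store behind it):

using System;
using System.Collections.Generic;

public abstract class AccountEvent
{
    public DateTime OccurredAt { get; set; } = DateTime.UtcNow;
}

public class MoneyDeposited : AccountEvent { public decimal Amount { get; set; } }
public class MoneyWithdrawn : AccountEvent { public decimal Amount { get; set; } }

public class Account
{
    public decimal Balance { get; private set; }

    // Current state = replay of all events for the aggregate, from the beginning
    public static Account Replay(IEnumerable<AccountEvent> history)
    {
        var account = new Account();
        foreach (var e in history)
        {
            switch (e)
            {
                case MoneyDeposited d: account.Balance += d.Amount; break;
                case MoneyWithdrawn w: account.Balance -= w.Amount; break;
            }
        }

        return account;
    }
}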

Relation

CQS is a code-level principle to improve design by applying separation of concerns.

CQRS is an application-level architectural pattern for separating commands (writes) and queries (reads) with regard to storage. It is based on the ideas of CQS.

Event Sourcing is a storage pattern that represents data changes as a stream of events with point-in-time recovery.

Conclusion

CQRS is based on the CQS principle and boosted to the application level. CQRS is often combined with Event Sourcing and DDD. You can implement CQRS without Event Sourcing. You can implement Event Sourcing without CQRS. Event Sourcing is the opposite of CRUD.

.NET, F#, Programming

Sequences and problem solving in F#

Sequences in F# are very similar to lists: they represent an ordered collection of values. However, unlike lists, sequences are lazily evaluated, meaning the elements in a sequence are computed as they are needed. This is very handy, for example, for representing infinite data structures. Data types such as lists, arrays, sets and maps are implicitly sequences because they are enumerable collections. A function that takes a sequence as an argument works with any of the common F# data types, in addition to any .NET data type that implements System.Collections.Generic.IEnumerable<'T>. The type seq<'T> is a type abbreviation for IEnumerable<'T>. This means that any type that implements the generic System.Collections.Generic.IEnumerable<'T>, which includes arrays, lists, sets and maps in F# as well as most .NET collection types, is compatible with the seq type and can be used wherever a sequence is expected. The Seq module contains more than 70 operations, which I will not list here; you can follow the references Sequences and F# – Sequences for more details.

In this post I would like to look at a real-world example and compare the C# and F# approaches to solving the same problem. Let’s describe it:

Print all working (business) days within a specified date range.

To make it more interesting, we would like to support an interval: when it is specified, we return every nth working day instead of each one.

First, let’s look at one possible C# implementation:

using System;
using System.Linq;
using System.Collections.Generic;

public class Program {

    public static void Main() {

        var startDate = new DateTime(2020, 06, 01);
        var endDate = new DateTime(2020, 07, 01);
        var interval = 2;
        Func<DateTime, bool> IsWorkingDay = (date) =>
            date.DayOfWeek != DayOfWeek.Saturday && date.DayOfWeek != DayOfWeek.Sunday;

        // Keep only every nth working day by filtering on the element index.
        foreach (var date in GetWorkingDays(startDate, endDate, IsWorkingDay)
                             .Where((d, i) => i % interval == 0)) {
            Console.WriteLine(date);
        }
    }

    // Lazily yields a formatted string for every day between start and stop that passes the filter.
    private static IEnumerable<string> GetWorkingDays(DateTime start, DateTime stop, Func<DateTime, bool> filter) {

        var date = start.AddDays(-1);

        while (date < stop) {
            date = date.AddDays(1);

            if (filter(date)) {
                yield return string.Format("{0:dd-MM-yy dddd}", date);
            }
        }
    }
}

The code is pretty straightforward: we use an IEnumerable<string> iterator to generate a sequence of values filtered down to business days. Note that the enumerable is lazily evaluated. Then we apply the LINQ extension:

Where<TSource>(IEnumerable<TSource>, Func<TSource,Int32,Boolean>)

whose predicate receives the element’s zero-based index as its second argument. It keeps only the values whose index is divisible by the interval without remainder, hence satisfying the requirement of getting every nth business day.

Finally, with an interval of 2 we get output similar to this:

01-06-20 Monday
03-06-20 Wednesday
05-06-20 Friday
09-06-20 Tuesday
11-06-20 Thursday
15-06-20 Monday
17-06-20 Wednesday
19-06-20 Friday
23-06-20 Tuesday
25-06-20 Thursday
29-06-20 Monday
01-07-20 Wednesday

Next, I will show you the F# implementation.

In F#, solving a problem generally means decomposing it down to the granularity of individual functions and then composing those functions in a specific order, with language constructs acting as the glue.

First, let’s define a working-day filter:

let IsWorkingDay (day : DateTime) = day.DayOfWeek <> DayOfWeek.Saturday && day.DayOfWeek <> DayOfWeek.Sunday

Now, let’s define an infinite sequence of days following some start date:

let DaysFollowing (start : DateTime) = Seq.initInfinite (fun d -> start.AddDays(float (d)))

Next, we need a function representing the sequence of working days starting from some start date. It is, in essence, a composition of the DaysFollowing function with the IsWorkingDay filter, glued together with the pipeline operator:

let WorkingDaysFollowing start = 
   start
   |> DaysFollowing
   |> Seq.filter IsWorkingDay

Notice the use of the Seq.filter operation here. We just provide a filtering function; Seq.filter has the following signature:

filter : ('T → bool) → seq<'T> → seq<'T>

This should be familiar to you if you have ever used LINQ 🙂 In F#, the 'T notation just means a generic type.

At this point we would like a function that makes use of the interval value when generating the next working date. Here it is:

let NextWorkingDayAfter interval start = 
   start
   |> WorkingDaysFollowing
   |> Seq.item interval

And again, we stack one block on top of another, which is function composition in action. Seq.item computes the nth element of the collection. First, we get the sequence of working days and then we take the nth element from that sequence:

item : int → seq<'T> → 'T

Finally, we need to define a function which composes all of these blocks and returns the final sequence of dates. We want the resulting sequence to contain string representations of the working dates, according to the original requirement. Here is how we can achieve that:

let WorkingDays startDate endDate interval = 
   Seq.unfold (fun date -> 
      if date > endDate then None
      else 
         let next = date |> NextWorkingDayAfter interval
         let dateString = date.ToString("dd-MM-yy dddd")
         Some(dateString, next)) startDate

We use the unfold function here. It is one of the most complex operations in the Seq module to understand, yet very powerful, and there is no direct analogue of it in C#. Put simply: the function returns a sequence that contains the elements generated by the given computation. The signature of that function is:

unfold : ('State → ('T * 'State) option) → 'State → seq<'T>

Let’s take a closer look at the unfold function. The first parameter is a computation function (or generator) which takes the current state and transforms it to produce each subsequent element of the sequence. On the first iteration the value passed in is the initial state, which is the second parameter of unfold (the start date in the example above). The generator must return an option wrapping a two-element tuple: the first element of the tuple is the item to be yielded, and the second element is the state to pass to the generator on the next iteration. It returns Some while there are more results and None when there are none left. In our case, as long as the passed-in state (the date) does not exceed the end date, we calculate the next working date (taking the interval into account) and convert the current one to a string. We wrap them in an option of a tuple, where the first value is appended to the resulting sequence and the second value is the state passed to the next iteration of unfold.

We invoke it as follows:

WorkingDays (DateTime(2020, 6, 1)) (DateTime(2020, 7, 01)) 2 |> Seq.iter (fun x -> printfn "%s" x)

This produces the same output as the C# version.

Putting it all together:

open System

let IsWorkingDay (day : DateTime) = day.DayOfWeek <> DayOfWeek.Saturday && day.DayOfWeek <> DayOfWeek.Sunday
let DaysFollowing (start : DateTime) = Seq.initInfinite (fun i -> start.AddDays(float (i)))

let WorkingDaysFollowing start = 
   start
   |> DaysFollowing
   |> Seq.filter IsWorkingDay

let NextWorkingDayAfter interval start = 
   start
   |> WorkingDaysFollowing
   |> Seq.item interval

let WorkingDays startDate endDate interval = 
   Seq.unfold (fun date -> 
      if date > endDate then None
      else 
         let next = date |> NextWorkingDayAfter interval
         let dateString = date.ToString("dd-MM-yy dddd")
         Some(dateString, next)) startDate

Conclusion

In F#, function composition plays an important role. You start by splitting a complex problem into the smallest possible pieces and wrapping them into functions; this is what is known as decomposition. To solve the problem you then compose these functions in a certain way, very much like LEGO bricks. A side effect of such granular decomposition is reusability: once defined, a function can be applied in different contexts, and to make it fit, functional languages provide a rich set of tools which is out of scope of this article. C# and OOP in general, on the other hand, give you classes and design patterns to solve the same problems, often in a much more verbose and error-prone way.

Programming, Software design

CQS and its impact on design

Let’s start with the abbreviation. CQS stands for Command-Query Separation. The term was coined by Bertrand Meyer during his work on the Eiffel programming language in the mid-1980s. In a nutshell, it states:

Every method should either be a command that performs an action, or a query that returns data to the caller, but not both.

Sounds good. But what does it mean in practice? Let’s look at a trivial example.

Suppose we have a service which can get a user from some storage and send a message to that user based on their email address:

public class User
{
    public int Id { get; set; }
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime BirthDate { get; set; }
}

public interface IStorage
{
    User GetUser(string email);
}

public interface IMessageService
{
    void Send(string email, string message);
}

public class InMemoryStorage : IStorage
{
    public User GetUser(string email) =>
        new User
        {
            Id = 1,
            Email = "bruce@eternity.net",
            FirstName = "Bruce",
            LastName = "Lee",
            BirthDate = new DateTime(1940, 11, 27)
        };
}

public class EternityMessageService : IMessageService
{
    public void Send(string email, string message)
    {
        Console.WriteLine($"Sending '{message}' to {email}...");
    }
}

public class Service
{
    private readonly IStorage storage;
    private readonly IMessageService messageService;

    public Service(IStorage storage, IMessageService messageService)
    {
        this.storage = storage;
        this.messageService = messageService;
    }

    public User GetUser(string email) => storage.GetUser(email);
    public void Send(string message, string email) => messageService.Send(email, message);
}

The Service class exposes two public methods:

User GetUser(string email)
void Send(string message, string email)

From the public API of the service you should be able to distinguish which methods are queries and which are commands.

For the User GetUser(string email) method we expect to pass an e-mail as a parameter and get a user object back as a result. It fits the definition of a query method perfectly. We do not expect to see a signature like void GetUser(string email); that would be very confusing: why does it say GetUser yet return no value? In the same manner, we expect the command method void Send(string message, string email) to have no return value and two input parameters, message and email. This clearly states that, in order to operate, the method requires input and produces a side effect in the form of sending a message.

Good design promotes method signatures with clear intent.

A consumer can execute the code with the following snippet:

void Main()
{
    var service = new Service(new InMemoryStorage(), new EternityMessageService());
    var user = service.GetUser("bruce@eternity.net");
    // Some logic with that user...
    service.Send("A wise man can learn more from a foolish question than a fool can learn from a wise answer", user.Email)
}

Now, let’s assume that we have a new requirement:

When today is the user’s birthday, we would like to send an e-mail with congratulations.

This is a typical scenario for many web sites, such as forums. Also, let’s assume the developer who is going to implement this change is not familiar with CQS, so he decides to extend the GetUser method with the new logic:

public User GetUser(string email)
{
    var serverDate = DateTime.Today; // current date on the server
    var user = storage.GetUser(email);
    if (user.BirthDate.Day == serverDate.Day && user.BirthDate.Month == serverDate.Month)
    {
        Send($"Congrats with your Birthday, {user.FirstName}! We wish you could stay with us for longer", user.Email);
    }
    return user;
}

Now, running the same Main() method results in the following output:

Sending 'Congrats with your Birthday, Bruce! We wish you could stay with us for longer' to bruce@eternity.net...
Sending 'A wise man can learn more from a foolish question than a fool can learn from a wise answer' to bruce@eternity.net...

And bingo! We have a violation of command-query separation. The consumer of the service does not expect this method to send a message in addition to its main purpose of retrieving a user, and it is not obvious from the method’s signature. By reading a method’s definition you should be able to quickly identify which category it belongs to.

Now let’s look at one of the main properties of query methods:

Any call to the same query method with the same parameters should always give the same result. In other words, a query method should be idempotent.

The correct way to implement the birthday requirement would be to move this logic into Main(), or to wrap it in another command method which accepts the user and the message as input parameters, keeping GetUser a simple user query:

void Main()
{
    var serverDate = DateTime.Today; // current date on the server
    var service = new Service(new InMemoryStorage(), new EternityMessageService());
    var user = service.GetUser("bruce@eternity.net");
    if (user.BirthDate.Day == serverDate.Day && user.BirthDate.Month == serverDate.Month)
    {
        service.Send($"Congrats with your Birthday, {user.FirstName}! We wish you could stay with us for longer", user.Email);
    }
    // Other logic here...
    service.Send("A wise man can learn more from a foolish question than a fool can learn from a wise answer", user.Email);
}

It’s worth mentioning the opposite scenario:

Calling a query method from a command method is quite legit.

This is true because a query has no side effects and does not change the state of the system.

It’s also worth mentioning that there are cases when you need a value back from a command. For example, the Send method might need to return an id for further tracking. If you change the signature to something like int Send(string message, string email), it is no longer a pure command method but a mixture of command and query, which violates the CQS principle. In that case a possible solution would be to create another query method for getting the tracking id: int GetTrackingId(string email). So we can always adhere to CQS by decomposing methods into queries and commands. Sometimes the price of pure separation is performance or changes in other layers: in the Send example we most probably need to persist the tracking id somewhere in order to fetch it later from the int GetTrackingId(string email) call (see the sketch below). But it’s always a trade-off, like everything in software development.
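As a sketch of that decomposition (the in-memory dictionary used to remember tracking ids is purely an assumption for illustration), Send stays a command with no return value and the tracking id is obtained through a separate query:

using System;
using System.Collections.Generic;

public class TrackingMessageService : IMessageService
{
    // Assumption for the sketch: tracking ids are kept in memory, keyed by recipient.
    private readonly Dictionary<string, int> trackingIds = new Dictionary<string, int>();
    private int nextId = 1;

    // Command: sends the message and records a tracking id, returns nothing.
    public void Send(string email, string message)
    {
        Console.WriteLine($"Sending '{message}' to {email}...");
        trackingIds[email] = nextId++;
    }

    // Query: asks for state, changes nothing.
    public int GetTrackingId(string email) => trackingIds[email];
}

The command still changes state and the query only reads it, so CQS is preserved at the cost of persisting the tracking id somewhere between the two calls.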

NB: CQS is related to another concept, Design by Contract (DbC), which was also introduced by Meyer back in 1986 while he was working on the design of Eiffel. CQS is beneficial to DbC because any value-returning method (any query) can be called from any assertion without fear of modifying program state.

Conclusion

The code we write should be readable and maintainable; that is the major quality of good code. Applying CQS in practice leads to clear public APIs and a certain trust in what they do. Separating requests that ask for program state from requests that change it is beneficial for creating good designs. Perhaps that is why more modern architectural patterns, like CQRS for building distributed systems, are based on well-proven ideas from the mid-80s.