F#, Programming, Software design

Power of composition with map and bind

In a functional architecture, functionality is composed into workflows, and workflows are an essential part of modeling any business behavior. Things get complicated when you need to build bigger systems from small components: it can be hard to find the right connectors to fit together functions with different inputs and outputs. The FP world offers various tools for this kind of composition, which you may have heard of under names like functors, monoids, or monads. They let you glue things together by connecting the outputs of one function to the inputs of another, with the proper transformations in between. In practice it is much easier to understand how this works than to dive into category theory and work out the math beneath it.

🔌 Composition basics

When dealing with relatively simple types like strings and numbers connecting inputs and outputs is quite straightforward. Consider this example:

let addOne a = a + 1
let multiplyByTwo a = a * 2

Here we defined two functions, both of which take a number as input and return a number as output, so their signatures are the same:

(int -> int)

We can call them in the following ways:

multipleByTwo (addOne 2)
// OR
2 |> addOne |> multiplyByTwo
val it : int = 6

We can also create a new function which is the composition of addOne and multiplyByTwo:

let addOneMultipliedByTwo = addOne >> multiplyByTwo
addOneMultipliedByTwo 2

This way you can build really complex logic from smaller pieces, just like with Lego bricks.

🅰️ ADTs are everywhere

More often, however, you will find yourself writing somewhat more complicated things than adding or multiplying numbers: custom types, or types built from other types, known as algebraic data types (ADTs). It is very common to build things up from abstract types and provide functions which transform other values to those types. One example you may already know is the Maybe (a.k.a. Option) type, which you may have heard of as the Maybe monad or Maybe functor. In a nutshell it is a container for a value or for the absence of a value. It is an extremely effective abstraction for avoiding nulls in your code, and hence for having peace 🧘 and no null reference exceptions everywhere.

In F# it comes with the Option module, a set of functions for working with that type. The type Option<'T> either holds a value of type 'T (Some) or no value (None). You can find tons of functions in the module; they help you build more complex things from smaller ones and make the proper transformations for connecting functions which require that type.

Let's have a quick look at how to use it:

let someValue = Some 10
let noValue = None

someValue |> Option.get // val it : int = 10
someValue |> Option.isSome // val it : bool = true
noValue |> Option.isNone // val it : bool = true
(10, someValue) ||> Option.contains // val it : bool = true
(99, someValue) ||> Option.defaultValue // val it : int = 10
(99, noValue) ||> Option.defaultValue // val it : int = 99

😲 When things go wrong

Now let's do a small programming exercise. Suppose a silly scenario where we have players (of any game you can imagine) and we need to check whether the score a player collected is good or not. So we come up with something like this:

type Player = { 
    Name: string 
    Score: int
}

let isGoodScore score = score >= 70

So all we need is to create players and check their scores:

let frank = { Name = "Frank"; Score = 90; }
let jack = { Name = "Jack"; Score = 37; }

frank.Score |> isGoodScore // val it : bool = true
jack.Score |> isGoodScore // val it : bool = false

“Hey, but player could have no score as well”

So how about supporting that? Well, piece of cake. Let's make a few minor changes:

type Player = { 
    Name: string 
    Score: int option
}

let frank = { Name = "Frank"; Score = Some 90; }
let john = { Name = "John"; Score = None; }
let jack = { Name = "Jack"; Score = Some 37; }

Nice! We've wrapped the score in the Option type, exactly as the requirement says. How about the isGoodScore function, will it still work?

frank.Score |> isGoodScore
error FS0001: Type mismatch. Expecting a
    'int option -> 'a'
but given a
    'int -> bool'
The type 'int option' does not match the type 'int'

Oops, we can't mix an optional type with a plain type like that.

So we need a way to glue monadic types like Option to functions working on plain values. And that's where the two most essential functions enter the big picture: map and bind.

🤝 When composition meets ADTs

As I mentioned before, the FP toolbox has various tools to help us with transformations. One such tool is the map function. It goes by other names too: fmap, lift, Select (think of C# LINQ). Each monadic-like type has this function.

Let's have a look at the signature of that function for Option:

(('a -> 'b) -> 'a option -> 'b option)

It takes two arguments: a function which transforms input of type 'a to 'b, and an optional 'a; the result is an optional 'b. So how can we apply map to our use case? Pretty straightforward, actually:

frank.Score |> Option.map isGoodScore // val it : bool option = Some true
john.Score |> Option.map isGoodScore // val it : bool option = None
jack.Score |> Option.map isGoodScore // val it : bool option = Some false

You see how the return type changed? We just applied a standalone function which works on int to an Option value. The result of the function execution is lifted back into Option: if the input is Some int, the value is extracted from the container (the Option type), piped through the function, and on the other end wrapped back into Option. If there is no value, the result is just None.

In C#, IEnumerable with its Select method works the same way, but applied to collections, which means that collections are also ADTs.
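The analogy with LINQ can be sketched in a couple of lines (a hedged C# sketch; the scores array and the inline predicate mirroring isGoodScore are made up for illustration):

```csharp
using System;
using System.Linq;

int[] scores = { 90, 37 };
// Select applies the function inside the container,
// just as Option.map applies it inside an Option
bool[] results = scores.Select(s => s >= 70).ToArray(); // true, false
```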

👷 Bind it

Another very useful tool is the bind function, which you may have heard of by other names like flatMap, collect, or SelectMany. It lets you compose monadic functions in a slightly different way. Here is the signature of bind for Option:

(('a -> 'b option) -> 'a option -> 'b option)

Let's extend our previous example and say that we now have an external source (database, file, etc.) from which we need to fetch players in order to find out the score. So we define a tryFindPlayer function as follows:

let tryFindPlayer name = 
    [ frank; john; jack ] |> List.tryFind (fun c -> c.Name = name)

List.tryFind is a built-in function which returns Some 'T for the first element satisfying the predicate in the lambda, or None. In our case it will return Some Player or None. Now we are able to get the score of the player:

tryFindPlayer "Frank"
    |> Option.bind (fun c -> c.Score)


As you can see, unlike map, bind lets you compose things within the same category (Option) but with different underlying types. It flattens the result: instead of ending up with Option<Option<int>>, bind skips the unnecessary wrapping.
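To see the flattening, compare map and bind on the same lookup (using the players defined above; Frank's score is Some 90):

```fsharp
tryFindPlayer "Frank" |> Option.map (fun c -> c.Score)
// val it : int option option = Some (Some 90)

tryFindPlayer "Frank" |> Option.bind (fun c -> c.Score)
// val it : int option = Some 90
```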

💪The power of composition

There are a lot of ADTs in the form of data structures, workflows, and other concepts which you need to combine to build working software: List<T>, Option<T>, State<T>, Async<T>, etc.

Once you get a grasp on how to use them, composing things becomes straightforward:

tryFindPlayer "Frank" 
    |> Option.bind (fun c -> c.Score)
    |> Option.map isGoodScore
val it : bool option = Some true
.NET, F#, Programming

How to read settings from configuration file in F#

During work on one of my projects I had to connect to SQL Server to fetch data. Most of the development time I spent in F# Interactive: I create some sort of scratchpad script file (with the fsx extension) and run VSCode with the Ionide extension. This works like a charm, with all the features you expect from a modern code editor: autocomplete, linting, and syntax highlighting. The built-in REPL lets you use NuGet packages, load files with F# code, reference managed assemblies, and execute selected parts of the code by pressing Alt+Enter directly in the editor.

During development you can keep the connection string in a constant or variable, but at some stage, when you finalize the project, you want to move everything to a config file. There is a problem related to this, however: which file is treated as the default config depends on the context. For an F# project, it is the current project's .config file; for F# Interactive, it is fsi.exe.config. So a solution which works fine for your F# project will fail when you run it from F# Interactive. I will show you how to make it work in both contexts.

So, how do you read a configuration file in your F# project? Well, one great and simple option is to use the AppSettings type provider. It exposes your app.config in a strongly typed way. If you don't know what type providers are, please refer to the documentation; there is no direct analogy to this concept in C#. As the author of the F# language said:

A design-time component that computes a space of types and methods on-demand…

An adapter between data/services and the .NET type system…

On-demand, scalable compile-time provision of type/module definitions…

Don Syme

However, in this post I would like to show you how to create a simple abstraction for reading a connection string (or any other section, like appSettings) and what caveats are on the way. Assume we have the following app.config file in the root folder of our demo app:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="NinjaConnectionString" connectionString="Server=(localdb)\MsSqlLocalDb;Database=NinjaDb;Trusted_Connection=True;"/>
  </connectionStrings>
</configuration>

Solution for F# projects

Let's create a Configuration.fs file and start with a class definition for our configuration abstraction:

type NinjaConfiguration() = class
    static member ConnectionString = ()
end

Ok, now we need a function to read the config file (assuming your configuration file is in the bin folder and named {project-executable}.config). Just add this target to your fsproj to copy app.config from the project root to bin on each build:

<Target Name="CopyCustomContent" AfterTargets="AfterBuild">
    <Copy SourceFiles="app.config" DestinationFiles="$(OutDir)\ninja_app.dll.config" />
</Target>

The function to read connection strings could look like this:

let private tryGetConnectionString (connectionStrings: ConnectionStringSettingsCollection) name =
    seq { for i in 0..connectionStrings.Count - 1 -> connectionStrings.[i] }
    |> Seq.tryFind(fun cfg -> cfg.Name = name)
    |> function
    | Some cs -> Some cs.ConnectionString
    | _ -> None

The signature of the function is

(ConnectionStringSettingsCollection -> string -> string option)

It takes a ConnectionStringSettingsCollection and the name of the connection element in your app.config, and returns Some string with its value, or None.

On line 2 we create an F# sequence expression to wrap the standard .NET collection type. This allows us to use any idiomatic F# language constructs applicable to collections (think of all the functions in the Seq module, the pipe operator, etc.).

On line 3 we immediately benefit from it by piping all elements of the connection strings section to Seq.tryFind, using a lambda to find the one setting we need by the name parameter. This iterates over all entries and compares each against the Name property of the ConnectionStringSettings class. If an entry is found, Some ConnectionStringSettings is returned, otherwise None.

Lines 4-6 just extract the connection string from it with simple pattern matching.

Let’s update NinjaConfiguration class:

type NinjaConfiguration() = class
    static member ConnectionString = 
        tryGetConnectionString ConfigurationManager.ConnectionStrings "NinjaConnectionString"
end

This code already works; however, without error handling it is not complete, so let's add a try-with section to be sure that when the file is missing we do not bubble a runtime exception up in your face:

type NinjaConfiguration() = class
    static member ConnectionString = 
        try
            tryGetConnectionString ConfigurationManager.ConnectionStrings "NinjaConnectionString"
        with
        | _ -> None
end

Much better. If there is a problem finding or opening the configuration file, we return None; the same goes for the case when no connection string named NinjaConnectionString is found. Putting it all together, we end up with this code:

module Ninja.Configuration

open System.Configuration

let private tryGetConnectionString (connectionStrings: ConnectionStringSettingsCollection) name =
    seq { for i in 0..connectionStrings.Count - 1 -> connectionStrings.[i] }
    |> Seq.tryFind(fun cfg -> cfg.Name = name)
    |> function
    | Some cs -> Some cs.ConnectionString
    | _ -> None

type NinjaConfiguration() = class
    static member ConnectionString =
        try
            tryGetConnectionString ConfigurationManager.ConnectionStrings "NinjaConnectionString"
        with
        | _ -> None
end

Extending solution to work in F# interactive

The previous solution works fine when you run it with F5 in VSCode or the Visual Studio IDE, or via the dotnet run command line. But how do we make it work in F# Interactive?

Let’s create simple scratchpad.fsx to use NinjaConfiguration in F# interactive:

#r "nuget: System.Configuration.ConfigurationManager" // install NuGet package needed for Configuration.fs
#load "Configuration.fs" // load our NinjaConfiguration class

open Ninja.Configuration // open module so that type will be available for use
let connStr = NinjaConfiguration.ConnectionString

val connStr : string option = None

app.config is in the same folder as scratchpad.fsx and Configuration.fs, so why is the result None? The answer is that the default lookup path belongs to fsi.exe, and since we used ConfigurationManager.ConnectionStrings the config file search starts from the global scope (machine.config). To solve that we need to set the current directory for F# Interactive and map the configuration file to that folder. To make it work in both contexts we add a conditional compiler directive (let's call it COMPILED). Let's make the final changes to the code in Configuration.fs:

module Ninja.Configuration

open System.Configuration

let [<Literal>] private DbConnectionStringName = "NinjaConnectionString"

let private tryGetConnectionString (connectionStrings: ConnectionStringSettingsCollection) name =
    seq { for i in 0..connectionStrings.Count - 1 -> connectionStrings.[i] }
    |> Seq.tryFind(fun cfg -> cfg.Name = name)
    |> function
    | Some cs -> Some cs.ConnectionString
    | _ -> None

type NinjaConfiguration() = class
    static member ConnectionString =
        try
#if COMPILED
            // Executes in an F# project/solution when the COMPILED compilation directive is provided
            tryGetConnectionString ConfigurationManager.ConnectionStrings DbConnectionStringName
#else
            // Executes in a script environment (fsx file)
            System.IO.Directory.SetCurrentDirectory (__SOURCE_DIRECTORY__)
            let fileMap = ExeConfigurationFileMap()
            fileMap.ExeConfigFilename <- "app.config"
            let config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None)
            tryGetConnectionString config.ConnectionStrings.ConnectionStrings DbConnectionStringName
#endif
        with
        | _ -> None
end

After the change, re-execute the following lines in the script:

#load "Configuration.fs" // load our NinjaConfiguration class

open Ninja.Configuration // open module so that type will be available for use
let connStr = NinjaConfiguration.ConnectionString

Now result is:

val connStr: string option = Some "Server=(localdb)\MsSqlLocalDb;Database=NinjaDb;Trusted_Connection=True;"

After these adjustments, the code in Configuration.fs works in both contexts: as part of an F# project or in F# Interactive. The same principle applies to any IO: if you want your code to work in both contexts, you need to take this into consideration.

Happy coding!

ASP.NET Core, C#, Programming

Tips on using Autofac in .NET Core 3.x

.NET Core supports the DI (dependency injection) design pattern, a technique for achieving Inversion of Control (IoC) between classes and their dependencies. The native, minimalistic implementation is known as a conforming container, which is an anti-pattern; you can read more about the issues related to it here. Microsoft has promised better integration points with 3rd-party DI vendors in new .NET Core releases, and the native container is good enough for most pet projects and small production projects. Having used DI for quite a long time, though, I am used to broader support from dependency injection frameworks. For the sake of keeping this article short and focused I will skip the definitive list of functionality I miss in the native DI implementation for .NET Core and mention only a few items: extended lifetime scope support, automatic assembly scanning for implementations, aggregate services, and multi-tenant support. There are plenty of DI frameworks on the market. Back in 2008/2009, when I switched to .NET, one of my favorite DI frameworks was StructureMap. With its rich functionality it was one of the standard choices for my projects. Another popular framework was Castle Windsor. For some time I was also a user of Ninject, which I found very easy to use.

However, StructureMap has been deprecated for some time already, and while Ninject is still good, I was looking for a different DI to try with one of my new .NET Core projects. Autofac caught my attention immediately. It has been on the market since 2007 and keeps getting better, with 3400+ stars and 700+ forks on GitHub. It has exhaustive documentation and an extensive feature list. At the moment of writing this post the latest version of Autofac is 6, and the way you bootstrap it in .NET Core 3.x and 5 has changed compared to the 5.x branch.

So, enough talk: talk is cheap, show me some code…

Tip 1


public class Program
{
  public static void Main(string[] args)
  {
    // ASP.NET Core 3.0+:
    // The UseServiceProviderFactory call attaches the
    // Autofac provider to the generic hosting mechanism.
    var host = Host.CreateDefaultBuilder(args)
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        .ConfigureWebHostDefaults(webHostBuilder => webHostBuilder.UseStartup<Startup>())
        .Build();

    host.Run();
  }
}

Startup Class

public class Startup
{
  public Startup(IHostingEnvironment env)
  {
    // In ASP.NET Core 3.0 `env` will be an IWebHostEnvironment, not IHostingEnvironment.
    this.Configuration = new ConfigurationBuilder().Build();
  }

  public IConfigurationRoot Configuration { get; private set; }

  public ILifetimeScope AutofacContainer { get; private set; }

  public void ConfigureServices(IServiceCollection services)
  {
  }

  public void ConfigureContainer(ContainerBuilder builder)
  {
    // Register your own things directly with Autofac here. Don't
    // call builder.Populate(), that happens in AutofacServiceProviderFactory
    // for you.
    builder.RegisterModule(new MyApplicationModule());
  }

  public void Configure(
    IApplicationBuilder app,
    ILoggerFactory loggerFactory)
  {
    // If, for some reason, you need a reference to the built container, you
    // can use the convenience extension method GetAutofacRoot.
    this.AutofacContainer = app.ApplicationServices.GetAutofacRoot();
  }
}

Tip 2

Scanning assemblies

Autofac can use conventions to find and register components in assemblies.

public void ConfigureContainer(ContainerBuilder builder)
{
    builder.RegisterAssemblyTypes(typeof(Startup).Assembly)
           .AsClosedTypesOf(typeof(IConfigureOptions<>));
}

This will register types that are assignable to closed implementations of the open generic type. In that case it will register all implementations of IConfigureOptions<>. See options pattern for more information on how to configure configuration settings with dependency injection.

Tip 3

Use Mvc/Api controllers instantiation with Autofac

Controllers aren’t resolved from the container; just controller constructor parameters. That means controller lifecycles, property injection, and other things aren’t managed by Autofac – they’re managed by ASP.NET Core. You can change that using AddControllersAsServices().

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().AddControllersAsServices();
}

public void ConfigureContainer(ContainerBuilder builder)
{
    var controllersTypesInAssembly = typeof(Startup).Assembly
        .GetExportedTypes()
        .Where(type => typeof(ControllerBase).IsAssignableFrom(type))
        .ToArray();

    builder.RegisterTypes(controllersTypesInAssembly)
           .PropertiesAutowired();
}

Here we register all types that are descendants of ControllerBase. We also enable property injection via PropertiesAutowired. This is useful when you want some property in the base controller implementation to be re-used (e.g. an IMediator).

Tip 4

Register EF Core DbContext with Autofac

If you use Entity Framework Core, you want your DbContext to be managed by the DI container. One important note is that the DbContext should behave as a unit of work and be scoped to the request lifetime. In the native DI it is registered as a scoped service, which in Autofac corresponds to InstancePerLifetimeScope.

public static void AddCustomDbContext(this ContainerBuilder builder, IConfiguration configuration) {
	builder.Register(c => {
		var options = new DbContextOptionsBuilder<ApplicationContext>();
		options.UseSqlServer(configuration["ConnectionStrings:ApplicationDb"], sqlOptions => {
			sqlOptions.MigrationsAssembly(typeof(Startup).GetTypeInfo().Assembly.GetName().Name);
			sqlOptions.EnableRetryOnFailure(maxRetryCount: 15, maxRetryDelay: TimeSpan.FromSeconds(30), errorNumbersToAdd: null);
		});
		return options.Options;
	}).InstancePerLifetimeScope();
}

public void ConfigureContainer(ContainerBuilder builder) {
	builder.AddCustomDbContext(Configuration);
	builder.RegisterType<ApplicationContext>().InstancePerLifetimeScope();
}

Tip 5

Use modules for your registrations

public void ConfigureContainer(ContainerBuilder builder) {
	builder.RegisterModule(new MediatorModule());
	builder.RegisterModule(new ApplicationModule());
}

public class ApplicationModule : Autofac.Module {
	public ApplicationModule() { }

	protected override void Load(ContainerBuilder builder) {
		// Put related registrations here, e.g.:
		// builder.RegisterType<OrderRepository>().As<IOrderRepository>();
	}
}

Keeping registrations in modules keeps your wire-up code structured and allows deployment-time settings to be injected.

Tip 6

Follow best practices and recommendations

.NET, F#, Programming

Having fun with F# operators

F# is a very exciting and fun language to learn. It contains pipe and composition operators which allow you to write less, more concise code. In addition to the familiar prefix and postfix operators it also comes with infix operators. The beauty is that you can define your own infix operators and succinctly express business logic in your F# code.

Prefix, infix and postfix 👾

As an example of a prefix operator we can define any regular function:

let times n x = x * n 

and call this function with a prefix notation:

times 3 3 // val it : int = 9

In F# the vast majority of primitives are functions, just like in a pure OOP language everything is an object. So you can also call the multiplication operator as a function:

(*) 3 3 // val it : int = 9

which gives the same result as in the previous code snippet.

Postfix operators are not something you use often; they mostly come as built-in keywords:

type maybeAString = string option // built-in postfix keyword
type maybeAString2 = Option<string> // effectively same as this
// Usage
let s:maybeAString = Some "Ninja in the bushes!"
let s2:maybeAString2 = None

But the most interesting one is the infix operator. As you could already guess, an infix operator is placed between two operands. Everyone did some math in school and wrote something similar to:

3 * 3 // val it : int = 9

Not surprisingly, it is something you use without even thinking. Now, let's define a few custom functions:

let (^+^) x y = x ** 2. + y ** 2. // sum of the square of two numbers
let (^^) x y = x ** y // returns x to the power of y

And use it with an infix operator:

3. ^+^ 3. // val it : float = 18.0
3. ^^ 3. // val it : float = 27.0

Note that we can also use them with prefix notation, just like regular functions:

(^+^) 3. 3. // val it : float = 18.0
(^^) 3. 3. // val it : float = 27.0

Of course infix syntax looks much more succinct in that case.

Pipe, compose and mix 🔌

The king among F# operators is the pipe operator (|>). It allows you to express function application in a readable way. Function application is left-associative, meaning that evaluating x y z is the same as evaluating (x y) z. If you would like right associativity, you can use explicit parentheses or a pipe operator:

let f1 x y z = x (y z)
let f2 x y z = y z |> x // forward pipe operator
let f3 x y z = x <| y z // backward pipe operator

Okay. As you can see, there are two flavors of pipe operators: forward and backward. Here is the definition of the forward pipe operator:

let (|>) x f = f x

Just as simple as that: feeding the argument from the left side (x) to the function (f). The definition of the backward pipe operator is:

let (<|) f x = f x

i.e. the function is on the left and the argument on the right.

You may wonder why it is needed and what the benefit of using it is. You will see an example later in this post.

So how can we apply pipe operators in practice? Here are some examples:

let listOfIntegers = [5;6;4;3;1;2]
listOfIntegers |> List.sortBy (fun el -> abs el) // val it : int list = [1; 2; 3; 4; 5; 6]
// Same as
List.sortBy (fun el -> abs el) listOfIntegers

It shines when you have a long list of functions you need to compose together:

text.Split([|'.'; ' '; '\r'|], StringSplitOptions.RemoveEmptyEntries)
      |> Array.map (fun w -> w.Trim())
      |> Array.filter (fun w -> w.Length > 2)
      |> Array.iter (fun w -> ...

The backward pipe operator can be useful in some cases to make your code read more like English:

let myList = []
myList |> List.isEmpty |> not
// Same as above, using the backward pipe operator
not <| List.isEmpty myList

The composition operator also comes in forward (>>) and backward (<<) flavors and is likewise used for composing functions. Unlike the pipe operator, the result of a composition is a new function.

Definition of composition operators:

let (>>) f g x = g ( f(x) )
let (<<) f g x = f ( g(x) )

For example:

let add1 x = x + 1
let times2 x = x * 2
let add1Times2 = (>>) add1 times2
add1Times2 3 // val it : int = 8

Which we could re-write like this:

let add x y = x + y
let times n x = x * n
let add1Times2 = add 1 >> times 2
add1Times2 3 // val it : int = 8

Both examples rely on the core concept of partial application: one argument is baked into the functions add1 and times2 while the second argument is left free, to be passed when the user invokes the function.
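Partial application on its own looks like this (a tiny sketch reusing the add function from above):

```fsharp
let add x y = x + y
let add1 = add 1   // the first argument is baked in, y is still free
add1 41            // val it : int = 42
```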

As long as input and outputs of functions involved in composition match, any kind of value could be used

The same example with the backward composition operator gives a different result, because the functions are composed in the opposite order:

let add x y = x + y
let times n x = x * n
let times2Add1 = add 1 << times 2
times2Add1 3 // val it : int = 7

Have fun 😝

Now a small exercise for you. What will be the outcome of all these expressions? 🤔 :

3 * 3
(*) 3 3
3 |> (*) 3
3 |> (*) <| 3

What about that one:

let ninjaBy3 = 3 * 3 |> (+)
ninjaBy3 5

Try it yourself. Leave comments and have fun!

.NET, C#, Programming

C# 8.0 pattern matching in action

Let's revisit the definition of pattern matching from the Wiki:

In computer science, pattern matching is the act of checking a given sequence of tokens for the presence of the constituents of some pattern. In contrast to pattern recognition, the match usually has to be exact: "either it will or will not be a match." The patterns generally have the form of either sequences or tree structures. Uses of pattern matching include outputting the locations (if any) of a pattern within a token sequence, to output some component of the matched pattern, and to substitute the matching pattern with some other token sequence

Pattern matching

Putting it into human language: instead of comparing expressions by exact values (think if/else/switch statements), you literally match by patterns, or the shape of the data. A pattern could be a constant value, a tuple, a variable, etc.; for the full definition please refer to the documentation. Pattern matching was initially introduced in C# 7, shipped with basic capabilities to recognize constant, type, and var patterns. The language was extended with the is and when keywords. One of the last pieces was the introduction of discards for deconstruction of tuples and objects. Combining all these together, you were able to use pattern matching in if and switch statements:

if (o is null) Console.WriteLine("o is null");
if (o is string s && s.Trim() != string.Empty)
    Console.WriteLine("whoah, o is not null");

switch (o)
{
    case double n when n == 0.0: return 42;
    case string s when s == string.Empty: return 42;
    case int n when n == 0: return 42;
    case bool _: return 42;
    default: return -1;
}

C# 8 extended pattern matching with switch expressions and three new ways of expressing a pattern: positional, property and tuple. Again, I will refer to full documentation for the details.

In this post I would like to show a real-world example of the power of pattern matching. Let's say you want to build an SQL query based on a list of filters you receive as input from an HTTP request. For example, we would like to get a list of all shurikens filtered by shape and material, with a query string like filters[shape]=star,stick&filters[material]=kugi-gata:

We need a model to which we can map this request with list of filters:

public class ShurikenQuery
{
    [BindProperty(Name = "filters", SupportsGet = true)]
    public IDictionary<string, string> ShurikenFilters { get; set; }
}

Now, let’s write a function which builds SQL-query string based on provided filters:

private static string BuildFilterExpression(ShurikenQuery query)
{
    if (query is null)
        throw new ArgumentNullException(nameof(query));

    const char Delimiter = ',';

    var expression = query.ShurikenFilters?.Aggregate(new StringBuilder(), (acc, ms) =>
    {
        var key = ms.Key;
        var value = ms.Value;
        var exp = (key, value) switch
        {
            (_, null) => $"[{key}] = ''",
            (_, var val) when val.Contains(Delimiter) =>
                @$"[{key}] IN ({string.Join(',', val
                    .Replace("'", string.Empty)
                    .Replace("\"", string.Empty)
                    .Split(Delimiter).Select(x => $"'{x}'"))})",
            (_, _) => $"[{key}] = '{value}'"
        };
        return exp != null ? acc.AppendLine($" AND {exp}") : acc;
    });

    return expression?.ToString() ?? string.Empty;
}

Let's break this code down. We use the LINQ Aggregate function, which does the main work of building the filter string: we iterate over each KeyValuePair in the dictionary and, based on the shape of the data in it, create a string expression that can go into an SQL WHERE clause.

First we extract the key and value of the KeyValuePair into their own variables, just for convenience.

The switch expression is where all the magic happens. We wrap key and value in a tuple and pattern match on it. The first arm checks whether value is null (_ in the key position is a discard: we don't care what the actual value is); in that case we produce a string like [Shape] = ''. In the next arm we again are not interested in the key position, but now we assign the value to a dedicated variable val we can work with. We check whether it contains a filter with multiple values (as in filters[shape]='star,stick'), split it into separate values, removing " and ' along the way, and translate it into the SQL IN operator, so the string after processing this pattern looks like [Shape] IN ('star','stick'). The last arm matches the remaining cases of single-value filters (like filters[material]='kugi-gata'), producing [Material] = 'kugi-gata'. Finally, we prefix each expression with AND and accumulate the result in the StringBuilder provided in the initial call to Aggregate.

If we put (_, _) as the first arm in the switch expression, the other patterns will never be evaluated, because (_, _) catches all values

Be aware that in pattern matching the order is everything

Finally, the resulting string we return looks like this:

 AND [Shape] IN ('star','stick') AND [Material] = 'kugi-gata'

And that’s a valid string for a SQL WHERE clause. As you can see, pattern matching is a real deal and can be used in many scenarios for parsing expressions. With C# 9, pattern matching will be extended even further with relational and logical patterns.
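As a preview – and a hedged sketch, since C# 9 was not yet released at the time of writing – relational and logical patterns could look like this. The Classify helper and its thresholds are made up for illustration, not part of the article’s code:

```csharp
using System;

public static class Preview
{
    // Hypothetical example: relational patterns (<, >=) combined with
    // the logical 'and' pattern, as proposed for C# 9.
    public static string Classify(int celsius) =>
        celsius switch
        {
            < 0 => "freezing",
            >= 0 and < 15 => "cold",
            >= 15 and < 25 => "mild",
            _ => "hot"
        };
}
```

Just like with the tuple patterns above, the arms are tried in order and the first match wins.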

Hope you enjoyed it. Stay tuned and happy coding.

.NET, async, LINQ, Programming

Make LINQ Aggregate asynchronous

I often use LINQ in my code. Well, to put it another way: I can’t live without LINQ in my daily work. One of my favorite methods is Aggregate. Applying it wisely can save you from explicit loops, chains naturally into other LINQ methods and at the same time keeps your code readable and well-structured. Aggregate is similar to the reduce and fold functions, which are the hammer and anvil of functional programming tooling.

When you use Entity Framework, it provides you with async extension methods like ToListAsync(), ToArrayAsync() and SingleAsync(). But what if you want to achieve asynchronous behavior with the LINQ Aggregate method? You will not find an async extension in the existing framework (at the moment of writing this article I’m using .NET Core 3.1 and C# 8.0). Let me give you a real-world example of a case where you could find one really useful.

Let’s say you need to fetch from the database all distinct values for multiple columns in order to build a multi-selection filter like this:

Let’s also assume you use SQL Server, as it is the most common one. To keep it simple, I will show you an example using the Dapper micro-ORM.

The function could look like this:

public List<MultiSelectionModel> GetMultiSelectionFilterValues(string[] dataFields) {
  var results = new List<MultiSelectionModel>();

  var query = dataFields.Aggregate(new StringBuilder(), (acc, field) => {
    return acc.AppendLine($"SELECT [{field}] FROM Table GROUP BY [{field}];");
  });

  using var connection = new SqlConnection(this.connectionString);

  using (var multi = connection.QueryMultiple(query.ToString())) {
    results.AddRange(dataFields.Aggregate(
      new List<MultiSelectionModel>(), (acc, field) => {
        acc.Add(new MultiSelectionModel {
          DataField = field,
          Values = multi.Read()
        });

        return acc;
      }));
  }

  return results;
}

The function receives as an input parameter an array of data fields (columns) for which we need to fetch distinct values for the multi-selection filter, and it returns a list of a multi-selection model, which is just a simple data structure defined as:

public class MultiSelectionModel
{
    public string DataField { get; set; }
    public IEnumerable<dynamic> Values { get; set; }
}

On lines 4-6 you see how the Aggregate method is applied to build a SELECT query for fetching distinct values for the provided columns. I use GROUP BY in this example, but you can use DISTINCT to the same effect, although there is a difference in performance between DISTINCT and GROUP BY for more complex queries, which is excellently explained in this article. Lines 13-21 contain the main logic of the function, where we actually query the database with multi.Read() and assign the distinct values for each data field to the resulting model. In both cases the following Aggregate overload is used:

public static TAccumulate Aggregate<TSource, TAccumulate>(
	this IEnumerable<TSource> source,
	TAccumulate seed,
	Func<TAccumulate, TSource, TAccumulate> func
)

In the first case we provided a StringBuilder as the seed parameter. The second parameter is a function which receives the accumulator and an element from the source and returns the accumulator, which is the StringBuilder in our case. In the second case we used a List<MultiSelectionModel> as the seed, which is the resulting collection, so the final list is accumulated in that collection.
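To make the seed mechanics concrete in isolation, here is a minimal, self-contained sketch (the column names and table name are invented for illustration):

```csharp
using System;
using System.Linq;
using System.Text;

class AggregateDemo
{
    static void Main()
    {
        var columns = new[] { "Shape", "Material" };

        // The seed is a StringBuilder; the lambda receives the accumulator
        // and the current element and returns the (mutated) accumulator.
        var sql = columns
            .Aggregate(new StringBuilder(),
                       (acc, c) => acc.Append($"SELECT [{c}] FROM T;"))
            .ToString();

        Console.WriteLine(sql); // SELECT [Shape] FROM T;SELECT [Material] FROM T;
    }
}
```

Because StringBuilder is mutable, each step returns the same instance, but Aggregate itself does not care – it simply threads whatever you return into the next call.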

So that works. You can stop reading now and go for a couple of 🍺 with fellows…

Oh, you are still here 😏. You know, curiosity killed the cat. But we are different animals, so let’s move on. As you may have noticed, in the first example we used what is known in Dapper as a multi-result query. It executes multiple queries within the same command and maps the results. The good news is that it also has an async version. The bad news is that our Aggregate does not. Should we go back to the good old for-each loop for mapping results from the query execution then? No way!

So how could we implement a fully asynchronous version of GetMultiSelectionFilterValues? Well, let’s re-write it the way we would like to see it:

public async Task<List<MultiSelectionModel>> GetMultiSelectionFilterValuesAsync(string[] dataFields) {
  var results = new List<MultiSelectionModel>();

  var query = dataFields.Aggregate(new StringBuilder(), (acc, field) => {
    return acc.AppendLine($"SELECT [{field}] FROM Table GROUP BY [{field}];");
  });

  using var connection = new SqlConnection(this.connectionString);

  using (var multi = await connection.QueryMultipleAsync(query.ToString())) {
    results.AddRange(await dataFields.AggregateAsync(
      new List<MultiSelectionModel>(), async (acc, field) => {
        acc.Add(new MultiSelectionModel {
          DataField = field,
          Values = await multi.ReadAsync()
        });

        return acc;
      }));
  }

  return results;
}

Much better now, isn’t it? I’ve highlighted the changes. This is a fully asynchronous Aggregate now. Of course you wish to know where I got this async extension 😀? Here are the extension methods I came up with to make it work:

public static class AsyncExtensions {
	public static Task<TSource> AggregateAsync<TSource>(
		this IEnumerable<TSource> source, Func<TSource, TSource, Task<TSource>> func) {
		if (source == null) {
			throw new ArgumentNullException(nameof(source));
		}

		if (func == null) {
			throw new ArgumentNullException(nameof(func));
		}

		return source.AggregateInternalAsync(func);
	}

	public static Task<TAccumulate> AggregateAsync<TSource, TAccumulate>(
		this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, Task<TAccumulate>> func) {
		if (source == null) {
			throw new ArgumentNullException(nameof(source));
		}

		if (func == null) {
			throw new ArgumentNullException(nameof(func));
		}

		return source.AggregateInternalAsync(seed, func);
	}

	private static async Task<TSource> AggregateInternalAsync<TSource>(
		this IEnumerable<TSource> source, Func<TSource, TSource, Task<TSource>> func) {
		using var e = source.GetEnumerator();

		if (!e.MoveNext()) {
			throw new InvalidOperationException("Sequence contains no elements");
		}

		var result = e.Current;
		while (e.MoveNext()) {
			result = await func(result, e.Current).ConfigureAwait(false);
		}

		return result;
	}

	private static async Task<TAccumulate> AggregateInternalAsync<TSource, TAccumulate>(
		this IEnumerable<TSource> source, TAccumulate seed, Func<TAccumulate, TSource, Task<TAccumulate>> func) {
		var result = seed;
		foreach (var element in source) {
			result = await func(result, element).ConfigureAwait(false);
		}

		return result;
	}
}
I did it for two of the three existing Aggregate overloads. The last one you can implement yourself if you need it – it will be a good exercise for understanding how Aggregate works behind the scenes.
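If you want to check your solution later, here is one possible shape of that third overload (seed plus result selector), following the same pattern as the two above. Treat it as an untested sketch, not the article’s code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class AsyncExtensionsExtra
{
    // Sketch: async counterpart of Aggregate(seed, func, resultSelector) -
    // folds asynchronously, then projects the final accumulator.
    public static async Task<TResult> AggregateAsync<TSource, TAccumulate, TResult>(
        this IEnumerable<TSource> source,
        TAccumulate seed,
        Func<TAccumulate, TSource, Task<TAccumulate>> func,
        Func<TAccumulate, TResult> resultSelector)
    {
        if (source == null) throw new ArgumentNullException(nameof(source));
        if (func == null) throw new ArgumentNullException(nameof(func));
        if (resultSelector == null) throw new ArgumentNullException(nameof(resultSelector));

        var result = seed;
        foreach (var element in source)
        {
            result = await func(result, element).ConfigureAwait(false);
        }

        return resultSelector(result);
    }
}
```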

Stay tuned and have fun.

.NET, ASP.NET Core, Programming

From Zero to Hero: Build ASP.NET Core 3.1 production-ready solution from the ground up (Part 1)

How often do you start a new project with the latest and greatest version of .NET Core and C# to try some new fancy language features, or perhaps create a new solution for implementing your ideas? It happens to me a lot. I find myself creating pet projects over and over again. Sometimes a project grows and gets more contributors. People work from different places, with different IDEs and operating systems. The solution should work the same way on each workstation and each OS. It is also important to have code style conventions and scripts for building and running the solution. I would like to share with you my experience of structuring a .NET solution, containerizing it with Docker, adding HTTPS support for development and many more nice bonuses like adding code analyzers, following conventions and code formatting. As an example, we will create a simple ASP.NET Core 3.1 API.

From this post you will learn:

  • How to properly structure your solution
  • How to add Git and other configuration files
  • How to create ASP.NET Core API application
  • How to containerize ASP.NET Core application
  • How to add support for HTTPS development certificate
  • How to add styling and code conventions with analyzers
  • How to make it work cross-platform in different editors and OSes (Visual Studio, Visual Code, CLI)

Structure solution and add Git with configuration files

Okay. Let’s start from the beginning. I assume you have Git installed:

mkdir ninja-core
cd ninja-core
git init

I suggest structuring your solution in the following way:

  • /
    • src
      • project-1
      • project-2
    • docs
    • tests
    • build
    • deploy

src – solution source files, including all project sources

docs – documentation on your solution. This could be any diagrams that contain sequence or activity flows, or just simple use cases

tests – all kind of tests for your solution including unit tests, integration tests, acceptance tests, etc.

build – could be any scripts for building your solution

deploy – scripts related to deploying your solution to different environments or localhost

Suggested solution structure of our deadly Ninja .NET Core app could look like this for now:


Let’s add the following files to the root folder of our project:

.gitattributes – Defines Git behavior on certain attributes-aware operations like line endings, merge settings for different file types and much more.

.gitignore – Defines patterns for files which should be ignored by Git (like binaries, tooling output, etc.). This one is adapted for Visual Studio/Code and .NET projects.

.gitlab-ci.yml – Configuration file for the GitLab pipeline (will be covered in Part 2). We want to be sure that our code is continuously integrated and delivered.

README.md – Every well-made project should contain a readme file with instructions on how to build and run the solution, optionally with team members and responsible persons.

You can use the files as is or adapt them to your project’s needs. After you have created the folder structure and added all the needed configuration files, you need to push them to your repository (I assume you’ve created one). Typically it looks something like this:

git remote add origin git@gitlab.com:username/your_repo_name.git
git add .
git commit -m "Initial commit"
git push -u origin master

Create ASP.NET Core web application

Creating an ASP.NET Core web app is really simple. Just run the following commands in the CLI:

#inside of ninja-core/src folder
mkdir Iga.Ninja.Api
cd Iga.Ninja.Api
dotnet new webapi

By default, it generates a WeatherForecastController.cs file in the Controllers folder. Because we’re building a deadly ninja API, we want to delete this file and instead add a simple NinjaController.cs with the following content:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace Iga.Ninja.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class NinjaController : ControllerBase
    {
        private readonly ILogger<NinjaController> _logger;

        public NinjaController(ILogger<NinjaController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public string Get() => "Go Ninjas!!!";
    }
}

Cool. Now we should be able to build and run it:

dotnet run

Open your browser and see it working: http://localhost:5000/ninja.

Containerize ASP.NET Core application

Since its introduction back in 2013, Docker has changed the way modern software development looks today, especially in micro-service oriented architectures. You want your application to work exactly the same on the local machine, on test and on production, with all the package dependencies required for the app to run. This also helps a lot in end-to-end testing, when your application depends on external services and you would like to test the whole flow.

First, we need to create an image. Here is the Dockerfile I use, placed in the root of the Iga.Ninja.Api folder:

# Stage 1 - Build SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /build

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet build -c Release -o ./app

# Stage 2 - Publish
FROM build AS publish
RUN dotnet publish -c Release -o ./app

# Stage 3 - Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
COPY --from=publish /build/app .
ENTRYPOINT ["dotnet", "Iga.Ninja.Api.dll"]

Here we use what is known as multi-stage builds, which are available in Docker starting from version 17.05. So don’t forget to check that you are up to date. There are two images: one with the .NET Core SDK, which contains all the tools required for building a .NET Core application, and one with the .NET Core runtime, which is needed to run the application. We use the SDK image in the first stage as a base image to restore packages and build the application. You may notice that we have dotnet restore and dotnet build as two separate commands in the Dockerfile instead of one. That is a small trick to make building the image a bit faster.

Each command in a Dockerfile creates a new layer. Each layer contains the filesystem changes of the image between the state before the execution of the command and the state after it.

Docker uses a layer cache to optimize the process of building Docker images and make it faster.

Docker layer caching mainly works on the RUN, COPY and ADD commands

So if the csproj file hasn’t changed since the last build, the cached layer will be used. In Stage 2 we just publish the binaries built by Stage 1 and dotnet build. Stage 3 uses the ASP.NET Core runtime image and the artifacts with the published binaries from Stage 2. That will be our final image. With the last line we instruct Docker what command to execute when a new container is instantiated from that image. By the way, an ASP.NET Core application is just a console app which runs with the built-in and lightweight Kestrel web server. The preferred option if you run on Windows, however, is to use the in-process hosting model with IIS HTTP Server (IISHttpServer) instead of Kestrel, which gives performance advantages.

That’s it. You can build an image and run it:

docker build -t ninja-api .
docker run --rm -d -p 8000:80 --name deadly-ninja ninja-api

Now you should be able to see a deadly ninja in action by visiting http://localhost:8000/ninja in your browser.

Congratulations! You’ve just containerized your web API.

Add HTTPS development certificate (with support in Docker)

So far so good. Now we would like to enforce HTTPS in our API project for development and make it work when running in a Docker container as well. In order to achieve that, we need to do the following steps:

  • Trust ASP.NET Core HTTPS development certificate.

When you install the .NET Core SDK, it installs a development certificate into the local user certificate store. But it is not trusted, so run this command to fix that:

dotnet dev-certs https --trust

That’s already enough if we are going to run our API locally. However, if we would like to add this support in Docker, we need some additional steps:

  • Export the HTTPS certificate into a PFX file using the dev-certs global tool to %USERPROFILE%/.aspnet/https/<>.pfx using a password of your choice

PFX filename should correspond to your application name:

# Inside Iga.Ninja.Api folder
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\Iga.Ninja.Api.pfx -p shinobi
  • Add the password to the user secrets in your project:
dotnet user-secrets init -p Iga.Ninja.Api.csproj
dotnet user-secrets -p Iga.Ninja.Api.csproj set "Kestrel:Certificates:Development:Password" "shinobi"

Now we are able to run our container with ASP.NET Core HTTPS development support using the following command:

docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_ENVIRONMENT=Development -v %APPDATA%\microsoft\UserSecrets\:/root/.microsoft/usersecrets -v %USERPROFILE%\.aspnet\https:/root/.aspnet/https/ --name deadly-ninja-secure ninja-api

Navigate to https://localhost:8001/ninja. Now our deadly ninja is even more secure and trusted than ever.

P.S. Because Docker mounts the user secrets as a volume, it is very important to check that Docker has access rights to the required folders, so please check your Docker resource settings.

Add styling and code conventions with analyzers

When you work on a project with more than one developer, you want to have common conventions and an agreement on how to style and format your code. It is time to add that. First, I would like to suggest creating a solution file for our project. Although not necessary, it is very handy to have, especially if you work outside of an IDE. It will serve as a project container, and you can issue dotnet build in the /src root so that your solution file is used for the build process. Let’s add a solution file and our API project:

cd ./src
dotnet new sln --name Iga.Ninja
dotnet sln add Iga.Ninja.Api/Iga.Ninja.Api.csproj

Okay. Let’s move on. There are a lot of source code analyzer packages out there. For our example we will use SecurityCodeScan, SonarAnalyzer.CSharp and StyleCop.Analyzers. You can add them by running the following commands in the Iga.Ninja.Api folder:

dotnet add package SonarAnalyzer.CSharp
dotnet add package SecurityCodeScan
dotnet add package StyleCop.Analyzers

But I will suggest a different approach here. Instead of adding these packages manually to a specific project, it would be nice to have a way to add them automatically to any project we add to our solution. This is because we want to have code analyzers in each of our projects and enforce code validation on the solution build. And there is a way to do it: we need to add a Directory.Build.props file to the root of our /src folder.

Directory.Build.props is a user-defined file that provides customizations to projects under a directory.

When MSBuild runs, Microsoft.Common.props searches your directory structure for the Directory.Build.props file (and Microsoft.Common.targets looks for Directory.Build.targets). If it finds one, it imports the property.

Let’s add the Directory.Build.props file. The content of my file:

<Project>
  <PropertyGroup>
    <!-- StyleCop Analyzers configuration -->
    <SolutionDir Condition="'$(SolutionDir)'==''">$(MSBuildThisFileDirectory)</SolutionDir>
  </PropertyGroup>
  <ItemGroup>
    <AdditionalFiles Include="$(SolutionDir)stylecop.json" Link="stylecop.json" />
    <PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="3.0.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
    <PackageReference Include="SecurityCodeScan" Version="">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
    <PackageReference Include="SonarAnalyzer.CSharp" Version="">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
    <PackageReference Include="StyleCop.Analyzers" Version="1.1.118">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>
  </ItemGroup>
</Project>

An attentive reader will have noticed the code analysis rule set reference in the file. That file describes the configuration of the different StyleCop rules. You don’t necessarily have to agree with these rules 100%, so you can configure them. As a base I use the Roslyn Analyzer rule set with a few tweaks. You can find the rule set for our ninja core project here. And again, you should customize it for your organization’s needs. This rule set will be picked up each time you issue a dotnet build command on your solution, and your binaries will be validated against it. You will see warnings in the output of your build, which you can resolve later:

The next line which you perhaps noticed is

<AdditionalFiles Include="$(SolutionDir)stylecop.json" Link="stylecop.json" />

This file is used to fine-tune the behavior of certain StyleCop rules and to specify project-specific text. You can find the full reference here. In our project, stylecop.json looks like this:

{
  "$schema": "https://raw.githubusercontent.com/DotNetAnalyzers/StyleCopAnalyzers/master/StyleCop.Analyzers/StyleCop.Analyzers/Settings/stylecop.schema.json",
  "settings": {
    "documentationRules": {
      "companyName": "Ninja Coreporation",
      "copyrightText": "Copyright (c) {companyName}. All Rights Reserved.\r\n See LICENSE in the project root for license information.",
      "xmlHeader": false,
      "fileNamingConvention": "stylecop"
    },
    "layoutRules": {
      "newlineAtEndOfFile": "allow"
    }
  }
}

By the way, all the package references and additional files described in the Directory.Build.props file will be automatically added to all projects on dotnet build/publish, without the need to add packages to each project manually.

Last steps

Okay. Now we have a pretty decent solution which runs locally and in Docker with HTTPS support, with code analyzers in place. You can build and run it from the CLI on Windows and Linux. You should be able to run it in VS Code or in Visual Studio 2019. Before committing changes to Git, what I like to do is format the code according to the conventions in our .editorconfig file. And there is a very nice tool for that – dotnet-format. You can install it globally:

dotnet tool install -g dotnet-format

Then all you need is to go to your project/solution folder and issue the following command:

dotnet format

This ensures your files are formatted according to your conventions, so when you commit to Git you are good.

In the next part we will look at how to set up a CI/CD pipeline for our ninja-core web API project with an example of GitLab infrastructure.

You can find sample for this article on my GitLab: https://gitlab.com/dnovhorodov/ninjacore

Happy coding and stay tuned.

.NET, F#, Programming

Sequences and problem solving in F#

Sequences in F# are very similar to lists: they represent an ordered collection of values. However, unlike lists, sequences are lazily evaluated, meaning the elements of a sequence are computed as they are needed. This is very handy, for example, for representing infinite data structures. Data types such as lists, arrays, sets, and maps are implicitly sequences because they are enumerable collections. A function that takes a sequence as an argument works with any of the common F# data types, in addition to any .NET data type that implements System.Collections.Generic.IEnumerable<'T>. The type seq<'T> is a type abbreviation for IEnumerable<'T>. This means that any type that implements the generic IEnumerable<'T> – which includes arrays, lists, sets, and maps in F#, and also most .NET collection types – is compatible with the seq type and can be used wherever a sequence is expected. The Seq module contains more than 70 operations, which I will not list here. You can follow the references Sequences and F# – Sequences for more details.

In this post I would like to look at a real-world example and compare the C# and F# approaches to solving the same problem. Let’s describe it:

Print all working (business) days within specified date range.

To make it more interesting, we would like to support an interval: when specified, we return every n-th working day instead of each day.

First, let’s look at one of the possible C# implementations:

using System;
using System.Linq;
using System.Collections.Generic;

public class Program {

 public static void Main() {

  var startDate = new DateTime(2020, 06, 01);
  var endDate = new DateTime(2020, 07, 01);
  var interval = 2;
  Func<DateTime, bool> IsWorkingDay = (date) =>
        date.DayOfWeek != DayOfWeek.Saturday && date.DayOfWeek != DayOfWeek.Sunday;

  foreach (var date in GetWorkingDays(startDate, endDate, IsWorkingDay)
                      .Where((d, i) => i % interval == 0)) {
   Console.WriteLine(date);
  }
 }

 private static IEnumerable<string> GetWorkingDays(DateTime start, DateTime stop, Func<DateTime, bool> filter) {

  var date = start.AddDays(-1);

  while (date < stop) {
   date = date.AddDays(1);

   if (filter(date)) {
    yield return string.Format("{0:dd-MM-yy dddd}", date);
   }
  }
 }
}
The code is pretty straightforward: we use IEnumerable<string> to generate a sequence of values filtered down to business days. Note that the enumerable is lazily evaluated. Then we apply the LINQ extension:

Where<TSource>(IEnumerable<TSource>, Func<TSource,Int32,Boolean>)

whose predicate takes an integer index as its second parameter. It selects only values whose index is divisible by the interval without remainder, hence satisfying the requirement of getting every n-th business day.
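A tiny sketch of that indexed overload in isolation (the sample data is invented for illustration):

```csharp
using System;
using System.Linq;

class WhereIndexDemo
{
    static void Main()
    {
        var letters = new[] { "a", "b", "c", "d", "e" };

        // The second lambda parameter is the element's zero-based index;
        // keeping indices divisible by 2 selects every second element.
        var everySecond = letters.Where((x, i) => i % 2 == 0);

        Console.WriteLine(string.Join(",", everySecond)); // a,c,e
    }
}
```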

Finally, with an interval of 2 we will have output similar to this:

01-06-20 Monday
03-06-20 Wednesday
05-06-20 Friday
09-06-20 Tuesday
11-06-20 Thursday
15-06-20 Monday
17-06-20 Wednesday
19-06-20 Friday
23-06-20 Tuesday
25-06-20 Thursday
29-06-20 Monday
01-07-20 Wednesday

Next, I will show you the F# implementation.

In F#, generally, solving any problem implies decomposition down to the granular level of a function, and then composing these functions in a specific order, with language constructs as the glue.

First, let’s define a working day filter:

let IsWorkingDay (day : DateTime) = day.DayOfWeek <> DayOfWeek.Saturday && day.DayOfWeek <> DayOfWeek.Sunday

Now, let’s define an infinite sequence of days following some start date:

let DaysFollowing (start : DateTime) = Seq.initInfinite (fun d -> start.AddDays(float (d)))

Next, we need a function representing a sequence of working days starting from some start date, which is in essence a composition of the DaysFollowing function with the IsWorkingDay filter, with the help of the pipeline operator:

let WorkingDaysFollowing start = 
   start
   |> DaysFollowing
   |> Seq.filter IsWorkingDay

Notice the use of the Seq.filter operation here. We just provide a filtering function with the following signature:

filter : ('T → bool) → seq<'T> → seq<'T>

This should be familiar to you if you have ever used LINQ 🙂 In F#, the 'T notation just means a generic type.

At this point we would like to have a function which makes use of the interval variable when generating the next working date. Here it is:

let NextWorkingDayAfter interval start = 
   start
   |> WorkingDaysFollowing
   |> Seq.item interval

And again, we stack one block on top of another – function composition in action. Seq.item computes the nth element of the collection. First we get the sequence of working days, and then we take the nth element from that sequence:

item : int → seq<'T> → 'T

Finally, we need to define a function which composes all these blocks and returns the final sequence of dates. We want our resulting sequence to be a string representation of the working dates, according to the original requirement. Here is how we can achieve that:

let WorkingDays startDate endDate interval = 
   Seq.unfold (fun date -> 
      if date > endDate then None
      else
         let next = date |> NextWorkingDayAfter interval
         let dateString = date.ToString("dd-MMM-yy dddd")
         Some(dateString, next)) startDate

We use the unfold function here. It is one of the most complex operations in the Seq module to understand, yet very powerful. There is no direct analogy to it in C#. Put simply: the function returns a sequence that contains the elements generated by the given computation. Its signature is:

unfold : ('State → ('T * 'State) option) → 'State → seq<'T>

Let’s take a closer look at the unfold function. The first parameter is a computation function which takes the current state and transforms it to produce each subsequent element of the sequence. On the first iteration, the value passed in is the initial state – the second parameter of unfold, which is the start date in the example above. The computation function (or generator) must return an option type wrapping a two-element tuple. The first element of the tuple is the item to be yielded, and the second element is the state to pass to the generator function on the next iteration. It returns Some while there are results and None when there are no more. In our case, while the passed state (date) is not past the end date, we calculate the next working date (taking the interval into consideration) and convert the current one to a string. We wrap these in an option tuple, where the first value is added to the resulting sequence and the second value is the state passed to the next iteration of unfold.
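Although LINQ has no built-in unfold, the same idea can be sketched in C# with an iterator method. This Unfold helper is my own illustration (here driving a simple countdown rather than the dates example), not part of the F# sample:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class UnfoldDemo
{
    // Generate elements from a state: the generator returns null to stop,
    // or an (item, nextState) pair to yield the item and continue.
    public static IEnumerable<T> Unfold<T, TState>(
        Func<TState, (T Item, TState Next)?> generator, TState state)
    {
        var step = generator(state);
        while (step.HasValue)
        {
            yield return step.Value.Item;
            state = step.Value.Next;
            step = generator(state);
        }
    }

    static void Main()
    {
        // Countdown from 3: yields 3, 2, 1 and stops when the state reaches 0.
        var countdown = Unfold<int, int>(
            n => n > 0 ? (n, n - 1) : ((int, int)?)null, 3);

        Console.WriteLine(string.Join(",", countdown)); // 3,2,1
    }
}
```

The nullable tuple plays the role of F#’s option: null is None, a value is Some(item, nextState).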

We invoke it as follows:

WorkingDays (DateTime(2020, 6, 1)) (DateTime(2020, 7, 01)) 2 |> Seq.iter (fun x -> printfn "%s" x)

This produces the same output as the C# version.

Putting it all together:

open System

let IsWorkingDay (day : DateTime) = day.DayOfWeek <> DayOfWeek.Saturday && day.DayOfWeek <> DayOfWeek.Sunday
let DaysFollowing (start : DateTime) = Seq.initInfinite (fun i -> start.AddDays(float (i)))

let WorkingDaysFollowing start = 
   start
   |> DaysFollowing
   |> Seq.filter IsWorkingDay

let NextWorkingDayAfter interval start = 
   start
   |> WorkingDaysFollowing
   |> Seq.item interval

let WorkingDays startDate endDate interval = 
   Seq.unfold (fun date -> 
      if date > endDate then None
      else
         let next = date |> NextWorkingDayAfter interval
         let dateString = date.ToString("dd-MMM-yy dddd")
         Some(dateString, next)) startDate


In F#, function composition plays an important role. You start by splitting a complex problem into the smallest possible pieces and wrapping them into functions. This is what is known as decomposition. To solve the problem, you then compose these functions in a certain way – very much like LEGO bricks. A side effect of such granular decomposition is re-usability: once defined, a function can be applied in different contexts, and to make it fit, functional languages provide a rich set of tools, which is out of scope for this article. On the other hand, C# and OOP in general give you classes and design patterns to solve the same problems, often in a much more verbose and error-prone way.

.NET, C#, Programming

Make your C# code cleaner with functional approach

Since the introduction of LINQ in .NET 3.5, the way we write code has changed a lot. Not only in the context of database queries with LINQ to SQL or LINQ to Entities, but also in day-to-day work with manipulating collections and all kinds of transformations. Powerful language constructs like implicitly typed variables, anonymous types, lambda expressions and object initializers gave us tools for writing more robust and concise code.

It was a big step towards a functional approach to solving engineering tasks: using a more declarative way of expressing your intent instead of sequential statements in the imperative paradigm.

Functional programming is a huge topic and a mind shift for all .NET developers who have been writing their code in C# for a long time. If you are new to the topic (like me), you probably don’t want to get into all those scary-sounding things like functors, applicatives or monads right now (a discussion for other posts). So let’s see how applying a functional approach can make your code cleaner here and now with our beloved C#.

For the sake of example we will solve the very simple FizzBuzz kata in C#. I will also show you how it looks in F#. If you don’t know what a kata is, it’s just a fancy way of saying puzzle or coding task. The word kata came to us from the world of martial arts, particularly Karate. FizzBuzz is a simple coding task where you need to solve the following problem:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number, and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

So, first we will start with a naïve implementation in C#:

void Main()
{
    for (var i = 1; i <= 100; i++)
        if (i % (3 * 5) == 0) Console.WriteLine("FizzBuzz");
        else if (i % 3 == 0) Console.WriteLine("Fizz");
        else if (i % 5 == 0) Console.WriteLine("Buzz");
        else Console.WriteLine(i);
}

And here’s the output:


So far so good. I told you, it’s a piece of cake. Okay, how can we improve this code? Let’s use the power and beauty of LINQ:

void Main()
{
	var result = Enumerable
		.Range(1, 100)
		.Select(x => {
			switch (x)
			{
				case var n when n % (3 * 5) == 0: return "FizzBuzz";
				case var n when n % 3 == 0: return "Fizz";
				case var n when n % 5 == 0: return "Buzz";
				default: return x.ToString();
			}
		})
		.Aggregate((x, y) => x + Environment.NewLine + y);

	Console.WriteLine(result);
}

We use the static helper Range on Enumerable to generate a sequence from 1 to 100. Then we use the Select method to map each number in that range to a string containing one of the FizzBuzz words. Here we used a very powerful concept – pattern matching. This feature is available from C# 7.0. This variation of pattern matching uses the var pattern with a when clause for specifying a condition. The last method in the chain is Aggregate. It is one of the most interesting methods in LINQ – you can use it as a functional replacement for loops in your code base. In this example we concatenated each element in the sequence with a new line, producing a single string as the result.
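To show what “Aggregate as a loop replacement” means in isolation, here is a small sketch (illustrative data, not from the kata): the lambda’s first parameter is the accumulator, which plays the role of the mutable variable you would otherwise update inside a loop.

```csharp
using System;
using System.Linq;

class AggregateDemo
{
    static void Main()
    {
        var words = new[] { "functional", "style", "in", "csharp" };

        // Loop version: a mutable accumulator updated on each iteration.
        var joinedLoop = "";
        foreach (var w in words)
            joinedLoop = joinedLoop.Length == 0 ? w : joinedLoop + " " + w;

        // Aggregate version: acc is the running result, w is the next element.
        var joined = words.Aggregate((acc, w) => acc + " " + w);

        Console.WriteLine(joined);     // functional style in csharp
        Console.WriteLine(joinedLoop == joined); // True
    }
}
```

Aggregate is LINQ’s version of the classic fold from functional programming; an overload also lets you pass an explicit seed value for the accumulator.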

In C# 8.0 pattern matching was extended and improved. We can rewrite our code like this:

public static string FizzBuzz(int n) =>
    (n % 3, n % 5) switch
    {
        (0, 0) => "FizzBuzz",
        (0, _) => "Fizz",
        (_, 0) => "Buzz",
        (_, _) => $"{n}"
    };

static void Main(string[] args)
{
    foreach (var n in Enumerable.Range(1, 100))
        Console.WriteLine(FizzBuzz(n));
}

This syntax is much closer to how pattern matching looks in functional languages, where _ is called a discard – meaning we are not interested in the value in that position. What we used here is called a tuple pattern.

  • When the remainders of division by 3 and by 5 are both 0, we print “FizzBuzz”.
  • When the remainder of division by 3 is 0 and we are not interested in the remainder of 5, we print “Fizz”.
  • When the remainder of division by 5 is 0 and we are not interested in the remainder of 3, we print “Buzz”.
  • For all other cases we just print the value of n.

Remember, in pattern matching order matters – the first matching condition wins and further evaluation stops.
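A small counter-example (mine, not from the kata) makes this concrete. With plain tuple patterns the compiler rejects unreachable arms, but with when guards it cannot check overlap, so arm order silently decides the result:

```csharp
using System;

class OrderMatters
{
    // Wrong order: the % 3 guard also matches multiples of 15,
    // so the "FizzBuzz" arm below it can never win.
    static string WrongOrder(int n) =>
        n switch
        {
            var x when x % 3 == 0  => "Fizz",
            var x when x % 15 == 0 => "FizzBuzz", // unreachable: 15 already matched above
            var x when x % 5 == 0  => "Buzz",
            _ => $"{n}"
        };

    static void Main()
    {
        Console.WriteLine(WrongOrder(15)); // prints "Fizz" instead of "FizzBuzz"
    }
}
```

This is why the (0, 0) case must come first in our FizzBuzz switch: it is the most specific pattern, and more general patterns below it would shadow it otherwise.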

Finally, let’s look at the F# implementation of the kata:

let fizzBuzz list =
    list |> List.map (fun x ->
        match (x % 3, x % 5) with
        | (0, 0) -> "FizzBuzz"
        | (0, _) -> "Fizz"
        | (_, 0) -> "Buzz"
        | _ -> string x)
fizzBuzz [1..100] |> List.iter (fun x -> printfn "%s" x)

You can see that this sample is very similar to the previous one with C# 8.0 pattern matching. And this should not surprise you, because the C# team is introducing more and more functional constructs in the language with each version, taking all the good parts from F#.