Mar 7 11

Visual Studio Talk Show on CQRS

by Julien

The Visual Studio Talk Show (which, despite its name, is a French-speaking podcast) invited me to talk about CQRS. You can listen to the show here.

Jan 19 11

Source code for the CQRS Workshop

by Julien

For those of you who could not make it to Saturday’s workshop, the exercises and their solutions are available on GitHub:

https://github.com/jletroui/CQRSWorkshop

The Dec 15th meetup slides are also available here.

Jan 3 11

CQRS / Event Sourcing workshop in Montreal January 15th

by Julien

Following my December meetup on the subject, the Montreal developer group will let you actually code a CQRS / Event Sourced application in a one-day workshop on January 15th.

Details and registration on EventBrite.

Dec 2 10

DDD / CQRS / Event Sourcing meetup in Montreal December 15th

by Julien

I will present an introduction and try to answer as many questions as possible on the DDD / CQRS / Event Sourcing trio here in Montreal, in French. It will cover the concepts and the many benefits this architecture can bring to a company.

And to make it concrete, this meetup will be followed in January by a one-day coding session to truly try it, build it, and see it in action.

Details and registration on EventBrite.

Oct 24 10

Transitioning to Event Sourcing, part 5: use events for updating your domain database

by Julien

A French version of this post is available here.

Overview

The previous step introduced a problem: the events generated by our aggregates form a model, a representation of our domain. So does the transactional database maintained by NHibernate. How can we be sure that the two models are in sync? This is critical if we want to later switch to event sourcing: we want to be 100% certain that our events track everything needed to rebuild our aggregates. We also want to make sure that nothing is written to the database that should not be. Can we prove that NHibernate persists everything that is in the events, and only that? Of course not, because there is no relationship between the events and the persistence mechanism:

What we could do, however, is use the events to persist our aggregate changes:

This is the subject of this post.

Benefits

Consistency

The events now trigger state changes within our aggregates as well as persistence in our database.

Performance

We no longer rely on NHibernate’s session to track changes within our aggregates. Flushing an NHibernate session is an expensive operation: NHibernate must compare the state of all loaded entities with their initial values and generate the corresponding SQL statements.

When using event handlers, though, we never flush. We execute custom-built SQL statements based on our events. We can do that because our events already describe the changes within our aggregates. We don’t need to compute what has changed; it is already done.

Implementation

What changed

The persisting event handlers have their own project:

Custom unit of work

The first necessary element: the repository must now record the list of aggregates being created and loaded:

public T ById(Guid key)
{
    var resVal = persistenceManager.CurrentSession.Get<T>(key);
    AddToContext(resVal);
    return resVal;
}

public void Add(T toAdd)
{
    AddToContext(toAdd);
}

private void AddToContext(T toAdd)
{
    HashSet<IAggregateRoot> aggregates = context[NHibernatePersistenceManager.AGGREGATE_KEY] as HashSet<IAggregateRoot>;

    if (aggregates == null)
    {
        aggregates = new HashSet<IAggregateRoot>();
        context[NHibernatePersistenceManager.AGGREGATE_KEY] = aggregates;
    }

    aggregates.Add(toAdd);
}

Which means that when the persistence manager is committed, we can call the event handlers:

public void Commit()
{
    var aggregates = context[AGGREGATE_KEY] as HashSet<IAggregateRoot>;

    if (aggregates != null && aggregates.Count > 0)
    {
        var session = EnsureOpened();

        using (var tx = session.Connection.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            context[TRANSACTION_KEY] = tx;

            foreach (var ar in aggregates)
            {
                foreach (var evt in ar.UncommitedEvents)
                {
                    eventBus.Publish(evt);
                }
            }

            context[TRANSACTION_KEY] = null;
            tx.Commit();

            context[AGGREGATE_KEY] = null;
        }
    }
}
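The eventBus used in Commit() is not shown in this post. Here is a minimal in-memory sketch of what it might look like; everything except the Publish() call seen above is an assumption, including the IHandleEvent<T> name:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical contracts: only eventBus.Publish(evt) appears in the post.
public interface IEvent { }

public interface IHandleEvent<in T> where T : IEvent
{
    void Handle(T evt);
}

public class InMemoryEventBus
{
    // One list of untyped callbacks per concrete event type.
    private readonly Dictionary<Type, List<Action<IEvent>>> subscribers = new();

    public void Subscribe<T>(IHandleEvent<T> handler) where T : IEvent
    {
        if (!subscribers.TryGetValue(typeof(T), out var list))
            subscribers[typeof(T)] = list = new List<Action<IEvent>>();
        list.Add(evt => handler.Handle((T)evt));
    }

    public void Publish(IEvent evt)
    {
        // Synchronous dispatch: the persistence handlers run inside the
        // same transaction opened by Commit().
        if (subscribers.TryGetValue(evt.GetType(), out var list))
            foreach (var callback in list) callback(evt);
    }
}

// Demo types (hypothetical, for illustration only):
public class StudentCreatedEvent : IEvent { }

public class CountingHandler : IHandleEvent<StudentCreatedEvent>
{
    public int Handled;
    public void Handle(StudentCreatedEvent evt) { Handled++; }
}
```

Dispatching synchronously is what makes the scheme transactional: if a persistence handler throws, tx.Commit() is never reached and the database stays untouched.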

Event handlers

The persistence event handlers’ implementation is trivial. For example:

public void Handle(StudentNameCorrectedEvent evt)
{
    persistenceManager.ExecuteNonQuery(
        "UPDATE [Student] SET firstName = @FirstName, lastName = @LastName WHERE Id = @Id",
        new
        {
            Id = evt.StudentId,
            FirstName = evt.FirstName,
            LastName = evt.LastName
        });
}
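ExecuteNonQuery is a small helper on the persistence manager whose implementation is not shown here. A sketch, assuming it reflects over the anonymous object’s properties to build ADO.NET parameters (the class and method names below are my own):

```csharp
using System.Collections.Generic;
using System.Reflection;

// Hypothetical helper: turns an anonymous object like
// new { Id = ..., FirstName = ... } into (@Name, value) pairs that can
// be attached to a DbCommand whose text uses @Id, @FirstName, etc.
public static class ParameterMapper
{
    public static IDictionary<string, object> ToParameters(object args)
    {
        var result = new Dictionary<string, object>();
        if (args == null) return result;

        foreach (PropertyInfo prop in args.GetType().GetProperties())
        {
            // "@Id" in the SQL text matches the "Id" property here.
            result["@" + prop.Name] = prop.GetValue(args);
        }
        return result;
    }
}
```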

Running the sample

The sources for the entire series are available on GitHub at http://github.com/jletroui/TransitioningToEventSourcing.

To run this sample, simply create a new “DDDPart5” database in SQLExpress before launching it.

Transitioning to Event Sourcing posts:

Oct 21 10

Mono and MonoTouch Meetup + AutoTest Demo

by Julien

Novell developer JB Evain will be in Montreal for the Unite 10 Conference (http://unity3d.com/unite/). IL code manipulators are already familiar with the popular Mono.Cecil library that he developed.

This event will be a great opportunity to learn more about Mono and discover, in French, MonoTouch, a .NET development environment for the iPhone, and its little brother MonoDroid for the Android platform.

Alongside this event, Greg Young will also present, in English, AutoTest.NET, a highly automated test/build framework that can integrate with any environment, from Linux/Vim to Windows/Visual Studio.

Attendance is limited so please RSVP on EventBrite.


Aug 3 10

Transitioning to Event Sourcing, part 4: track state changes

by Julien

A French version of this post is available here.

Overview

This step will not bring any technical benefit by itself. The goal is to prepare for the next step. We now want our aggregates to track their changes:

You probably already guessed that an aggregate’s changes will be stored as events that happened to that aggregate. Each aggregate is responsible for tracking the changes that happened to it from the time it was loaded from persistent storage.

Benefits

Communication

As for commands, the events describing what changed in an aggregate should be part of the ubiquitous language. They are a tool that can help you and the domain expert formalize what really happens in the domain when a given command is issued.

Cleaning

As you will see in the implementation section below, explicitly defining events and enforcing them forces you to respect one overlooked property of aggregates: their consistency boundaries.

Implementation

We basically add a list of uncommitted changes to the aggregate root:

public interface IAggregateRoot
{
    Guid Id { get; }
    IEnumerable<IEvent> UncommitedEvents { get; }
}
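A minimal sketch of a base class satisfying this interface might look like this. Everything besides IAggregateRoot and UncommitedEvents is an assumption; in particular, the post’s real ApplyEvent() uses compiled expression trees rather than the plain reflection used here (see the “AggregateRoot.ApplyEvent()” section below), and the demo Student type is mine:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public interface IEvent { }

// Repeated from above for completeness.
public interface IAggregateRoot
{
    Guid Id { get; }
    IEnumerable<IEvent> UncommitedEvents { get; }
}

public abstract class AggregateRoot : IAggregateRoot
{
    private readonly List<IEvent> uncommitedEvents = new();

    public Guid Id { get; protected set; } = Guid.NewGuid();
    public IEnumerable<IEvent> UncommitedEvents => uncommitedEvents;

    protected void ApplyEvent<T>(T evt) where T : IEvent
    {
        // 1. Record the event for the persistence handlers.
        uncommitedEvents.Add(evt);
        // 2. Route it to the matching private Apply(SomeEvent) method.
        //    Plain reflection keeps the sketch short; the sample uses
        //    delegates compiled at startup instead.
        GetType().GetMethod("Apply",
                BindingFlags.Instance | BindingFlags.NonPublic,
                null, new[] { evt.GetType() }, null)
            ?.Invoke(this, new object[] { evt });
    }
}

// Tiny demo aggregate (hypothetical, not from the sample):
public class StudentNameCorrectedEvent : IEvent
{
    public readonly string FirstName;
    public StudentNameCorrectedEvent(string firstName) { FirstName = firstName; }
}

public class Student : AggregateRoot
{
    public string firstName; // public only so the demo can inspect it

    public void CorrectName(string first) =>
        ApplyEvent(new StudentNameCorrectedEvent(first));

    private void Apply(StudentNameCorrectedEvent evt) =>
        firstName = evt.FirstName;
}
```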

The events also live in yet another project. Obviously, this project must not depend on the domain model:

Since we must ensure that the events really carry what happened in the domain, we delegate the state changes to a method taking the event as a parameter. For example:

public virtual void RegisterTo(Class @class)
{
    // Business rules, some validation here

    // State changes
    ApplyEvent(new StudentRegisteredToClassEvent(Id, @class.Id, @class.credits));
}

private void Apply(StudentRegisteredToClassEvent evt)
{
    registrationSequence = registrationSequence.Next();
    registrations.Add(new Registration(this, registrationSequence.ToId(), evt.ClassId, evt.Credits));
}

Notice that I had to fix an issue with the consistency boundary. Let’s look at the previous step implementation of the Registration entity (constructor omitted for clarity):

public class Registration : Entity<Student>
{
    internal Class _class;
}

A Registration is part of the Student aggregate. It should not have referenced a Class, which is another aggregate.
Instead, we should have put in the Registration everything needed to enforce its aggregate’s behavior:

public class Registration : Entity<Student>
{
    private Guid classId;
    private int classCredits;
}

This allows us to have a StudentRegisteredToClassEvent. Since events don’t depend on the domain model, this event could not have carried the Class object that the first version of the entity would have required. With the second version, though, the event becomes simple (constructor omitted for clarity):

public class StudentRegisteredToClassEvent : IEvent
{
    public readonly Guid StudentId;
    public readonly Guid ClassId;
    public readonly int Credits;

}

AggregateRoot.ApplyEvent()

The other important thing you might have noticed in the aggregate is that the public methods don’t call the Apply(SomeEvent evt) methods directly. Instead, they call void ApplyEvent<T>(T evt). This method, located in the root, calls the right Apply(SomeEvent evt) method, but also adds the event to the uncommitted event list.

On a side note, void ApplyEvent<T>(T evt) doesn’t use reflection to call Apply(SomeEvent evt). It instead uses the nice Expression Trees API to build the necessary delegates at startup.
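A sketch of how such a delegate can be built once at startup with the Expression Trees API, assuming the Apply methods are private instance methods taking the concrete event type (the factory and demo type names here are my own, not the sample’s):

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ApplierFactory
{
    // Compiles, once at startup, a delegate equivalent to:
    //   (target, evt) => ((TAggregate)target).Apply((TEvent)evt)
    // so that later dispatch involves no reflection.
    public static Action<object, object> For(Type aggregateType, Type eventType)
    {
        MethodInfo apply = aggregateType.GetMethod(
            "Apply",
            BindingFlags.Instance | BindingFlags.NonPublic,
            null, new[] { eventType }, null);

        var target = Expression.Parameter(typeof(object), "target");
        var evt = Expression.Parameter(typeof(object), "evt");
        var call = Expression.Call(
            Expression.Convert(target, aggregateType),
            apply,
            Expression.Convert(evt, eventType));

        return Expression.Lambda<Action<object, object>>(call, target, evt).Compile();
    }
}

// Demo types (hypothetical):
public class IncrementedEvent { public int By = 1; }

public class Counter
{
    public int Value;
    private void Apply(IncrementedEvent evt) { Value += evt.By; }
}
```

Compiled expression trees may call non-public members, which is what allows the Apply methods to stay private on the aggregate.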

Credits

  • For the original ApplyEvent() implementation: Greg Young
  • For the formatting of domain terms: Richard Dingwall

Running the sample

The sources for the entire series are available on GitHub at http://github.com/jletroui/TransitioningToEventSourcing.

To run it, simply create a new “DDDPart4” database in SQLExpress.


Jul 22 10

Transitioning to Event Sourcing, part 3: commands

by Julien

A French version of this post is available here.

Overview

This refactoring is more subtle than the previous one. In part 1, I already stated it was crucial to have one method in an aggregate root for each use case your domain is supporting. We will now take it to the next level by making those use cases explicit, in the form of commands. The architecture becomes:

Note that the UI is now completely isolated from the domain (or the other way around?)

Benefits

Isolation

The UI no longer has any dependency on the domain model. The two will be allowed to evolve separately.

Communication

Creating classes for commands simply recognizes that commands are an important component of the domain model and the ubiquitous language. They capture use cases (or stories). For example, a user could ask:

  • I want to create students.
  • I want to create classes.
  • I want to register students to classes.

The commands crystallize these users’ expectations.

Opportunities

Capturing commands explicitly opens some interesting opportunities. For one, you could easily log all the commands. Doing so, you would capture everything users wanted to perform on the system.

Decoupling command execution from the UI also brings interesting choices. In the sample, commands are executed right away, in the same thread as the controller. But it would be easy to send the command to one or multiple application servers instead (using NServiceBus, for example). This would allow your application to scale easily when it needs to, at very little cost.

Implementation

What changed

Two new projects have appeared in the solution:

The first one hosts the commands. The other contains the command handlers. The command handlers are responsible for coordinating domain entities to fulfill a command.

Commands are dispatched from the UI to the domain by a bus.
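The bus itself can be very small. Here is a hedged in-memory sketch; IHandleCommand<T> follows the post, while the rest of the names are assumptions:

```csharp
using System;
using System.Collections.Generic;

public interface ICommand { }

public interface IHandleCommand<in T> where T : ICommand
{
    void Handle(T cmd);
}

// Hypothetical synchronous bus: one handler per command type.
// Swapping this for an NServiceBus endpoint would not change callers.
public class InMemoryCommandBus
{
    private readonly Dictionary<Type, Action<ICommand>> handlers = new();

    public void Register<T>(IHandleCommand<T> handler) where T : ICommand
    {
        handlers[typeof(T)] = cmd => handler.Handle((T)cmd);
    }

    public void Send(ICommand cmd)
    {
        if (!handlers.TryGetValue(cmd.GetType(), out var handle))
            throw new InvalidOperationException(
                "No handler registered for " + cmd.GetType().Name);
        handle(cmd);
    }
}

// Demo types (hypothetical):
public class CorrectStudentNameCommand : ICommand
{
    public string FirstName;
}

public class RecordingHandler : IHandleCommand<CorrectStudentNameCommand>
{
    public string LastSeen;
    public void Handle(CorrectStudentNameCommand cmd) => LastSeen = cmd.FirstName;
}
```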

Issuing a command

The result is even simpler controller code. For example:

public RedirectToRouteResult DoCorrectName(StudentDTO model)
{
    var cmd = new CorrectStudentNameCommand(model.Id, model.FirstName, model.LastName);
    commandBus.Send(cmd);

    return RedirectToRoute(new
    {
        controller = "Student",
        action = "Index"
    });
}

The bus will channel the command to the appropriate IHandleCommand<T> handling method:

public void Handle(CorrectStudentNameCommand cmd)
{
    var student = studentRepository.ById(cmd.StudentId);

    if (student != null)
    {
        student.CorrectName(cmd.FirstName, cmd.LastName);
    }
}

Guidelines

The guidelines explained in Part 1 also apply here:

  • One command will be executed on only one aggregate root.
  • Only one method of the aggregate root should be called in the command handler.

Running the sample

The sources for the entire series are available on GitHub at http://github.com/jletroui/TransitioningToEventSourcing.

To run it, simply create a new “DDDPart3” database in SQLExpress.


Jul 15 10

Transitioning to Event Sourcing, part 2: go CQRS with DTOs

by Julien

A French version of this post is available here.

Overview

In this refactoring, we want to go CQRS. CQRS simply states that queries (which interrogate your system to display information to your users) and commands (which modify the state of your system) use different components of your system. This is a good practice at all levels: method level, class level, but also component level and even data level. In this refactoring, we will apply it at the component level, so we no longer use the entities and NHibernate to access our data:

Instead of displaying entities, you display Data Transfer Objects that are custom crafted for the view. Concretely, you try to get all the information needed for the view in one single request to the database, with a custom SQL query.

What? A SQL query? But didn’t ORMs free us from those beasts from the Middle Ages? Well, yes they did, but at a price. A price we no longer want to pay at this point.

So what is the problem with an HQL or LINQ query against our entities? First of all, our entities are optimized for handling the use cases of our system, while most of our screens display data from more than one entity. For example, if you are displaying a student, you want that same screen to display the names of the classes he registered in. It means that NHibernate needs to lazy load the associated classes. If you don’t pay attention, you can end up with a lot of round trips to the database. Along the way, you will also load a lot of things you don’t need (for example, all the other properties of the registered classes for that student). Sure, NHibernate supports fetching strategies: for each request, you can specify what to load right away and what not to. But that adds complexity to your requests and your infrastructure.

Then, you have to do a certain amount of transformation in the view to format the data correctly. But the biggest effect is that your entities need to support the view. So instead of focusing on implementing the business rules, the domain must also support display needs. And it turns out that it is most of the time impossible to have the same model optimized for both transactional behavior and display. The main reason is that views often show a denormalized model, with lots of data duplication, which you want to avoid at all costs in your domain model.

So ideally, you would store somewhere the DTOs for each view, exactly as they should be displayed. But we are not there yet, so we will cheat a little bit and use SQL queries to retrieve our DTOs. For example:

public ViewResult Details(Guid studentId)
{
    return View(studentQueries.ById(studentId));
}

At first glance, nothing changed. Except we are no longer using a repository to perform the query, but a component dedicated to read-only queries. Also, the query does not return the Student entity, but a StudentDTO.

Benefits

Simplicity

No more mapping between your entities and your view model. No more fetching strategies. A plain SQL query for each of your views.

No getters in your domain!

You’ve just freed your domain from the UI burden. Your aggregates no longer need to expose their state through getters, so you can now change your entities’ internals without worrying about the UI.

Implementation

What changed

There is one additional project in the solution for the view model:

This project contains the DTOs and their queries.

The repositories have also disappeared:

Since we no longer query our entities, we don’t need specific implementations for each entity. The base repository is enough.

Speaking of the repository, the IPaginable<T> All() method has disappeared from its interface as well.

The NHibernate implementation of IPaginable<T> has been replaced by a raw SQL implementation (pagination is natively supported starting with SQL Server 2005).
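To give an idea of the kind of SQL a raw implementation might emit for one page, here is a sketch using SQL Server 2005’s ROW_NUMBER() function; the helper name and the exact query shape are assumptions, not the sample’s actual code:

```csharp
// Hypothetical sketch: wraps a select in ROW_NUMBER() so a skip/take
// page can be fetched on SQL Server 2005+. Real code would pass skip
// and take as command parameters rather than concatenating them.
public static class PagingSql
{
    public static string Wrap(string select, string sortColumn, int skip, int take)
    {
        return
            "SELECT * FROM (" +
            " SELECT ROW_NUMBER() OVER (ORDER BY " + sortColumn + ") AS __row, q.*" +
            " FROM (" + select + ") q" +
            ") paged WHERE __row > " + skip + " AND __row <= " + (skip + take);
    }
}
```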

I am not a fan of SQL queries in code, so I created a simple infrastructure to read the queries from XML files (more on that later).

Finally, there is an additional component, the DTO mapper, which maps result sets from an IDataReader to the DTO objects.

Creating a DTO

In this project, a DTO is a simple class, with public properties and an empty constructor. For example:

public class ClassDTO
{
    public Guid Id { get; set; }
    [Required]
    [StringLength(255, MinimumLength = 1)]
    public string Name { get; set; }
    [Required]
    [Range(3, 6)]
    public int Credits { get; set; }
}

Since my DTOs can be used directly as a view model, I put some validation in there as well.

Creating a DTO query

Here is a simple query interface:

public interface IClassDTOQueries
{
    IPaginable<ClassDTO> All();
}

Here is how it is implemented:

public class SQLClassDTOQueries : DTOQueries, IClassDTOQueries
{
    public SQLClassDTOQueries(IPersistenceManager pm)
        : base(pm)
    {
    }

    public IPaginable<ClassDTO> All()
    {
        return ByNamedQuery<ClassDTO>("All", null);
    }
}

ByNamedQuery() takes three arguments:

  • The name of the query (to get from the XML file)
  • The parameters for the query
  • (Optional) The mappings to the collections of the DTO

You can browse the sample application for examples of queries using these different features.

Finally, you define the actual query in an XML file with the same name as the queries implementation:

<?xml version="1.0" encoding="utf-8" ?>
<queries>
  <query name="All" defaultSort="Id">
    <count>SELECT COUNT(*) FROM Class</count>
    <select>
      SELECT class.Id,
             class.name as Name,
             class.credits as Credits
      FROM Class class
    </select>
  </query>
</queries>

For each query that will be paginated, you actually need to specify two queries:

  • The count query, which returns the total number of items.
  • The select query, which returns the actual items.

For queries that will not be paginated, you can omit the count query.

The DTO mapper uses the column aliases to find the DTO property in which to store each value. It supports simple properties, sub-DTOs, and most collections (including dictionaries).
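For simple properties, the heart of such a mapper is small. A sketch, assuming alias-to-property matching by name (sub-DTO and collection support omitted; the DtoMapper name and the trimmed ClassDTO are mine):

```csharp
using System;
using System.Data;

// Trimmed-down ClassDTO for the demo (the real one is shown earlier).
public class ClassDTO
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int Credits { get; set; }
}

// Hypothetical minimal mapper: matches each result-set column alias to
// a writable DTO property of the same name.
public static class DtoMapper
{
    public static T Map<T>(IDataReader reader) where T : new()
    {
        var dto = new T();
        for (int i = 0; i < reader.FieldCount; i++)
        {
            // The column alias (e.g. "Name" from "class.name as Name")
            // is looked up as a property on the DTO.
            var prop = typeof(T).GetProperty(reader.GetName(i));
            if (prop != null && !reader.IsDBNull(i))
                prop.SetValue(dto, reader.GetValue(i));
        }
        return dto;
    }
}
```

A DataTable-backed reader is enough to exercise this without a database, which is also handy for unit tests.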

Running the sample

The sources for the entire series are available on GitHub at http://github.com/jletroui/TransitioningToEventSourcing.

To run it, simply create a new “DDDPart2” database in SQLExpress.


Jul 13 10

Transitioning to Event Sourcing, part 1: the DDD “light” application

by Julien

A French version of this post is available here.

This post presents an example of a classic architecture trying to support DDD. You can download the fully working infrastructure described here, as well as a (very simple) application using it, here. Feel free to use this code as you wish, including for commercial closed-source applications; but if you publish it, or part of it, please mention my name in the credits. Obviously, I would not recommend using this code, since the purpose of all these posts is to show you what I think is a better alternative.

This first post will probably be quite long, as I need to describe a complete application architecture. The next ones will be much shorter, since they will focus on replacing a small part of that architecture at a time.

The sample uses C#, with NHibernate as the ORM, Castle Windsor as the IoC container, and ASP.NET MVC for the presentation layer.

I will assume you are already familiar with the concepts of Object Relational Mapping, Inversion of Control Container, and the Model View Controller pattern.

The (simplified) architecture looks like this:


Let’s get into the details.

1 – The query side

When you want to display information, you can query your entities. This is usually done through a repository. For example:

public interface IStudentRepository : IRepository<Student>
{
    IPaginable<Student> ByNameLike(string name);
}

The base interface looks like:

public interface IRepository<T> where T : IAggregateRoot
{
    T ById(Guid id);
    void Add(T toAdd);
    void Remove(T toRemove);
    IPaginable<T> All();
}

The idea is to put all your queries for a given entity in its repository. You usually implement the repository using an ORM like NHibernate or the ADO.NET Entity Framework.

The reason the queries return an IPaginable<T> is that you want to be able to sort and paginate, but you don’t want to return an IQueryable<T>, which would allow queries to be written all over your code base. In my experience, the code is much more maintainable if all the queries are centralized in the repository. The interface for IPaginable<T> looks like this:

public interface IPaginable<T>
{
    int Count();
    T UniqueValue();
    IEnumerable<T> ToEnumerable();
    IEnumerable<T> ToEnumerable(int skip, int take);
    IEnumerable<T> ToEnumerable(int skip, int take, string sortColumn, SortDirection? sortDirection);
}

From there, it is easy to implement MvcContrib’s IPagination<T> interface so you can use the MvcContrib pager control. Your action methods become quite simple for displaying data. For example:

public ViewResult Index(int Page = 1, string Name = null)
{
    var model = new StudentSearchModel()
    {
        Name = Name,
        Students = studentRepository.ByNameLike(Name).AsPagination(Page)
    };

    return View(model);
}

public ViewResult Details(Guid studentId)
{
    var student = studentRepository.ById(studentId);

    return View(student);
}

And you are done for displaying your entity!

2 – The write side

Usually, applications don’t just display stuff on the user’s screen. They do stuff. Let’s see how that works here.

The design rule of thumb is: for any action the user wants to perform, you should call only one method on one entity. If your interface supports batch actions, the same method can be called on a list of entities.

That makes the write pretty simple. For example, if you want to register a student to a class:

public RedirectToRouteResult DoRegisterToClass(RegisterToClassModel model)
{
    var student = studentRepository.ById(model.StudentId);
    var @class = classRepository.ById(model.ClassId);
    student.RegisterTo(@class);
    return RedirectToRoute(new
    {
        controller = "Student",
        action = "Index"
    });
}

What happens in the Student aggregate root? Here it is:

public virtual void RegisterTo(Class @class)
{
    // Business rules
    @class.Validation().NotNull("class");
    if (registrations.Where(x => x.Class.Id == @class.Id).Count() > 0)
    {
        throw new InvalidOperationException("You can not register a student to a class he is already registered in");
    }
    if (passedClasses.Where(x => x == @class.Id).Count() > 0)
    {
        throw new InvalidOperationException("You can not register a student to a class he already passed");
    }

    // State changes
    registrationSequence = registrationSequence.Next();
    registrations.Add(new Registration(registrationSequence.ToId(), @class));
}

A few comments here. You can see that all the methods in the aggregate roots (including constructors) are divided into two sections:

  • Business rules validate that the action can be performed. This part can throw exceptions.
  • State changes actually perform the action. This part cannot throw exceptions.

The idea is to not modify any entity before being reasonably certain the action will succeed. We will see in later posts that it also facilitates future refactorings.

You may notice there is no code explicitly calling NHibernate to commit the changes to the database. This is because we are using an NHibernate session-per-request pattern, implemented in an IHttpModule:

void context_BeginRequest(object sender, EventArgs e)
{
    CurrentContainer.Container.Build<IPersistenceManager>().Open();
}

void context_EndRequest(object sender, EventArgs e)
{
    var pm = CurrentContainer.Container.Build<IPersistenceManager>();

    try
    {
        if (HttpContext.Current.Error == null) pm.Commit();
    }
    finally {pm.Close();}
}

The IPersistenceManager is a simple abstraction over NHibernate’s session and transaction, so you can use other persistence mechanisms.

And that is it for the write side!

3 – Aggregate roots and entities

There are subtleties in the way the aggregate roots and the other entities are implemented with NHibernate.

No setters!

Setters are an anti-pattern in your domain. DDD is not about modeling data, or nouns; it is about modeling behaviors that solve the domain problem, or verbs. So the public interface of your domain should consist solely of public methods on your aggregate roots. The idea is that each method represents a use case. If you feel the need to set a property value on an entity from your controller or application service, it is probably because you have not clearly identified your use cases for this aggregate. From a design perspective, it is also the only way to ensure your objects’ invariants. That way, your aggregates are always fully consistent: aggregate roots are in a valid state at all times and respect all their invariants.

If DDD is about behavior, then getters should also be an anti-pattern. And they are. But for now, we still need them to display state in the UI. We will see in the next post how to get rid of them.

Aggregate root id

Aggregate roots should have a universal identifier of some kind instead of, say, an integer. Since you don’t want any global lock, one of the best options is to use a Guid. The reason is, you want your application to be ready for integration with other systems in your enterprise, so the identifier should be reasonably unique not only within your application, but also across all the other applications in your company. This will greatly simplify SOA-style integration later on.

Entity id

The entities that are part of your aggregate also have an identity, but one local to the aggregate. The reason, here again, is that you don’t want a global lock when selecting a new entity’s id. You could use a Guid here as well, but you may not want to carry its unnecessary storage and indexing cost. A simple integer is enough.

Generating ids

The other thing you should be careful about is not letting the database create the ids for you. That leads to global locks in your database, which you want to avoid for performance and scalability reasons.

The way you can do that is by using the technique described here.

So the aggregate roots can simply create a new id themselves:

public abstract class AggregateRoot : IAggregateRoot
{
    private DateTime? version = null;

    public AggregateRoot()
    {
        Id = Guid.NewGuid();
    }

    public virtual Guid Id {get; private set;}
}

For the entities, you need to manage the id yourself, so the base entity class looks like:

public abstract class Entity<T> where T : AggregateRoot
{
    protected DateTime? version = null;

    // NHibernate constructor.
    protected Entity() { }

    public Entity(T aggregateRoot, int id)
    {
        this.aggregateRoot = aggregateRoot.Validation().NotNull("aggregateRoot");
        Id = id;
    }

    private T aggregateRoot;
    public virtual int Id { get; private set; }
}

So the primary key in the database for this entity will be a composite of the aggregate root id and the entity’s “local” id. If you are wondering why there is an empty constructor, it is because NHibernate needs it to create proxies.

When an aggregate root needs to create a new entity, it is responsible for creating the corresponding id. For example:

private IdSequence registrationSequence;

public virtual void RegisterTo(Class @class)
{
    // Business rules here

    // State changes
    registrationSequence = registrationSequence.Next();
    registrations.Add(new Registration(registrationSequence.ToId(), @class));
}

The nice thing is that you already have the ids; you don’t need a database call for that. The bad thing is concurrency issues: if two users create a new entity in the same aggregate at the same time, the second one will get a unique constraint violation. The vast majority of applications are not collaborative and can live with a low probability of that happening; the victim user simply has to try again. You can also catch that type of exception, update the identity in the entity, and automatically retry. Obviously, event sourcing will offer us a nicer approach to that problem.
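The catch-and-retry option could be sketched like this. The exception type is an assumption: in practice you would inspect the underlying SqlException (error 2627 signals a unique constraint violation) rather than catch a generic exception, and the wrapper name is mine:

```csharp
using System;

// Hypothetical retry wrapper: re-runs the use case so that a fresh,
// non-colliding entity id gets generated on the next attempt.
public static class Retry
{
    public static void OnIdCollision(Action attempt, int maxTries = 3)
    {
        for (int tried = 1; ; tried++)
        {
            try
            {
                attempt();
                return;
            }
            catch (InvalidOperationException) when (tried < maxTries)
            {
                // Assumption: the persistence layer surfaces the unique
                // constraint violation as this exception type. Loop and
                // re-execute the command against freshly loaded state.
            }
        }
    }
}
```

Wrapping the command handler invocation this way keeps the retry policy out of the domain model itself.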

That is it for the starting application. The next post will talk about getting a more efficient read side.

Running the sample

The sources for the entire series are available on GitHub at http://github.com/jletroui/TransitioningToEventSourcing. To run it, simply create a new “DDDPart1” database in SQLExpress.

Credits

  • Original paginable and repository implementation: Benoit Goudreault-Emond
  • Entity ids: Greg Young
