Mark Needham

Thoughts on Software Development

Archive for the ‘Domain Driven Design’ tag

Micro Services: A simple example

with 4 comments

In our code base we had the concept of a ‘ProductSpeed’ with two different constructors which initialised the object in different ways:

public class ProductSpeed {
  public ProductSpeed(String name) {
    ...
  }
 
  public ProductSpeed(String name, int order) {
    ...
  }
}

In the cases where the first constructor was used the order of the product was irrelevant.

When the second constructor was used we did care about it because we wanted to be able to sort the products before showing them in a drop down list to the user.

The reason for the discrepancy was that this object was being constructed from data which originated from two different systems and in one the concept of order existed and in the other it didn’t.

What we actually needed was two different versions of that object, but we probably wouldn't want to name them 'ProductSpeedForSystem1' and 'ProductSpeedForSystem2'!

In Domain Driven Design terms we actually have the concept of a ‘ProductSpeed’ but in two different bounded contexts which could just mean that they come under different packages if we’re building everything in one (monolithic) application.
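To make that concrete, here is a rough sketch of what the two context-specific versions might look like. The class names are hypothetical (the original code used a single 'ProductSpeed'); in a monolith these would just live in different packages:

```java
// Context 1: ordering is irrelevant, so the concept doesn't appear at all.
class CatalogueProductSpeed {
    private final String name;

    CatalogueProductSpeed(String name) {
        this.name = name;
    }

    String name() { return name; }
}

// Context 2: products must be sortable for the drop down list,
// so ordering is a first-class part of the model.
class SelectableProductSpeed implements Comparable<SelectableProductSpeed> {
    private final String name;
    private final int order;

    SelectableProductSpeed(String name, int order) {
        this.name = name;
        this.order = order;
    }

    String name() { return name; }

    @Override
    public int compareTo(SelectableProductSpeed other) {
        return Integer.compare(order, other.order);
    }
}
```

Neither class needs a constructor that half-applies to it, which was the smell in the original version.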

However, looking at how the version of 'ProductSpeed' created by the second constructor was being used in the application, we could see that it didn't interact with anything else and so could easily be pulled out into its own mini application or micro service.

We’re actually building an API for other systems to interact with and the initial design of the code described above was:

[Diagram: API before]

We get a product from the product list (which is sorted based on the ordering described!) and then post a request which includes the product amongst other things.

After we’d pulled out a micro service it looked like this:

[Diagram: API after]

The choice of product is actually a step that you do before you make your request to the main API whereas we’d initially coupled them into the same deployable.

These are the advantages I see from what we’ve done:

  • We can now easily change the underlying data source of the products micro service if we want to since it now has its own schema which we could switch out if necessary.
  • It takes about 5 minutes to populate all the products and we run the script to repopulate the main DB quite frequently. Now products can be loaded separately.
  • Our code is now much simplified!

And some disadvantages:

  • We now have to deploy two jars instead of one so our deployment has become a bit more complicated.

    My colleague James Lewis points out that we’re effectively pushing the complexity from the application into the infrastructure when we design systems with lots of mini applications doing one thing.

  • Overall I think we have more code since there are some similarities between the objects in both contexts and we’ve now got two versions of each object since they’re deployed separately. My experience is that sharing domain code generally leads to suffering so we’re not doing that.

Written by Mark Needham

March 31st, 2012 at 9:06 am

Posted in Micro Services

Strategic Design (Responsibility Traps) – Eric Evans

with one comment

Reading through some of Simon Harris’ blog entries I came across his thoughts on a presentation Eric Evans did at QCon titled ‘Strategic Design – Responsibility Traps‘ which seems to cover a lot of the ground from the second half of Domain Driven Design and more.

In the presentation Evans makes some really insightful comments and points out a lot of mistakes that I've made on projects. It certainly serves as a reminder to go back and read part 4 of the book again and really understand the material from that section.

These were the most interesting observations for me:

  • In this talk he makes some similar points to those that he made in his ‘What I’ve learned about DDD since the book‘ presentation that I had the chance to see at QCon in London last year. One of these is that there is no such thing as a “right model” – there are only models and some will help us describe our system better than others.
  • Evans suggests that we need to spend most of our time modeling in the core domain since this is the code that gives us a competitive advantage and allows us to differentiate ourselves from competitors.

    I often wonder where the best place to focus our efforts on a code base is, and I've typically been of the opinion that the pain points are the best place to work on since we can see an immediate reward from doing so. As I've mentioned before I quite like Fabio's technical debt quadrant as a mechanism for deciding where we should focus our efforts with this approach.

    It still seems different to what Evans suggests although I’m inclined to believe that the areas of most pain could well be the areas we need to be subtle to change and there could therefore be some correlation between those areas and the core domain.

    I’ve not seen a distinction between the core domain and other parts of the domain model on any projects I’ve worked on so it’d be interesting to hear other opinions on this.

  • A related point to this which I haven’t completely grasped is that there are some areas of the code which we shouldn’t bother trying to improve and instead should just work around and not worry too much about creating intricate models.

    I like this advice in a way, although it seems a little dangerous to me and perhaps conflicts with Uncle Bob's idea of following the boy scout rule and improving the code slightly every time we touch it. As I understand it, Evans' advice would be to do this only when we're working in the core domain.

  • Evans points out that we should try to avoid the universal domain model, something which Dan North also points out in his article on SOA. As I see it, we either explicitly mark out multiple different models in our code, or we'll just end up with one mediocre model being bent to fit the needs of every part of the system.

    I guess it seems intuitive to try and reduce the amount of code required in a system by just doing the modeling once, but different teams in different contexts have such different meanings and uses for the model that it doesn't make sense as an approach.

  • My favourite quote from the presentation is the following:

    Be truly responsible, don’t just satisfy the emotional need to be responsible.

    I really like refactoring code so this is a good reminder to me to take a step back when I find myself doing that and consider whether what I’m doing is actually useful.

    Evans suggests that it’s pointless being the team janitor and cleaning up the code after everyone else has rushed to get features delivered. The suggestion seems to be to get the strongest developers on the team working on the most important domain code rather than creating the infrastructure/platform for the rest of the team to work from.

At the moment I feel like watching this presentation has made me think more about the value of what I’m doing when working on a code base. I don’t feel so inclined to randomly refactor code and I’m more keen to work out which bits of the system I’m working on would benefit from this kind of attention.

As I mentioned earlier I now need to finish off reading section 4 of the big blue book!

Written by Mark Needham

January 18th, 2010 at 10:52 pm

Value objects: Immutability and Equality

with 13 comments

A couple of weeks ago I was working on some code where I wanted to create an object composed of the attributes of several other objects.

The object that I wanted to construct was a read only object so it seemed to make sense to make it a value object. The object would be immutable and once created none of the attributes of the object would change.

This was my first attempt at writing the code for this object:

public class MyValueObject
{
    private readonly string otherValue;
    private readonly SomeMutableEntity someMutableEntity;
 
    public MyValueObject(SomeMutableEntity someMutableEntity, string otherValue)
    {
        this.someMutableEntity = someMutableEntity;
        this.otherValue = otherValue;
    }
 
    public string SomeValue { get { return someMutableEntity.SomeValue; } }
 
    public int SomeOtherValue {get { return someMutableEntity.SomeOtherValue; }}
 
    public string OtherValue { get { return otherValue; } }
 
    public bool Equals(MyValueObject obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        return Equals(obj.OtherValue, OtherValue) && Equals(obj.SomeOtherValue, SomeOtherValue) && Equals(obj.SomeValue, SomeValue);
    }
 
    public override bool Equals(object obj)
    {
		// other equals stuff here
    }
}

It wasn’t immediately obvious to me what the problem was with this solution but it felt really weird to be making use of properties in the equals method.

After discussing this strangeness with Dave he pointed out that 'MyValueObject' is not immutable: although the reference to 'SomeMutableEntity' inside the object cannot be changed, 'SomeMutableEntity' itself has lots of setters and can therefore be changed from outside this class.

There are two ways to get around this problem:

  1. We still inject ‘SomeMutableEntity’ but we extract the values from it in the constructor and don’t keep a reference to it.
  2. The client that creates 'MyValueObject' constructs the object with the appropriate values.

The first solution would work but it feels really weird to pass in the whole object when we only need a small number of its attributes – it’s a case of stamp coupling.

It also seems quite misleading as an API because it suggests that ‘MyValueObject’ is made up of ‘SomeMutableEntity’ which isn’t the case.

My preferred solution is therefore to allow the client to construct ‘MyValueObject’ with all the parameters individually.

public class MyValueObject
{
    private readonly string otherValue;
    private readonly int someOtherValue;
    private readonly string someValue;
 
    public MyValueObject(string someValue, string otherValue, int someOtherValue)
    {
        this.someValue = someValue;
        this.otherValue = otherValue;
        this.someOtherValue = someOtherValue;
    }
 
    public string SomeValue { get { return someValue; } }
 
    public int SomeOtherValue { get { return someOtherValue; } }
 
    public string OtherValue { get { return otherValue; } }
}

The constructor becomes a bit more complicated but it now feels a bit more intuitive and ‘MyValueObject’ lives up to its role as a value object.

The equals method can now just compare equality on the fields of the object.

public class MyValueObject
{
	...
 
	public bool Equals(MyValueObject obj)
	{
	    if (ReferenceEquals(null, obj)) return false;
	    if (ReferenceEquals(this, obj)) return true;
	    return Equals(obj.otherValue, otherValue) && Equals(obj.someValue, someValue) && obj.someOtherValue == someOtherValue;
	}
}

A client might create this object like so:

var myValueObject = new MyValueObject(someMutableEntity.SomeValue, otherValue, someMutableEntity.SomeOtherValue);
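One related point: when a value object's equality is based on its fields, the hash code should be overridden alongside equals so that the object behaves correctly in hash-based collections. The post's code is C#, so treat this as an illustrative Java translation of the final version rather than the project's actual code:

```java
import java.util.Objects;

// An immutable value object: all fields are final and set once in the
// constructor, and equality is based purely on the field values.
final class MyValueObject {
    private final String someValue;
    private final String otherValue;
    private final int someOtherValue;

    MyValueObject(String someValue, String otherValue, int someOtherValue) {
        this.someValue = someValue;
        this.otherValue = otherValue;
        this.someOtherValue = someOtherValue;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyValueObject)) return false;
        MyValueObject that = (MyValueObject) o;
        return someOtherValue == that.someOtherValue
                && Objects.equals(someValue, that.someValue)
                && Objects.equals(otherValue, that.otherValue);
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals: equal objects, equal hash codes.
        return Objects.hash(someValue, otherValue, someOtherValue);
    }
}
```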

Written by Mark Needham

October 23rd, 2009 at 11:39 pm

Book Club: Unshackle your domain (Greg Young)

with 9 comments

In this week’s book club we continued with the idea of discussing videos, this week’s selection being Greg Young’s ‘Unshackle your Domain‘ presentation from QCon San Francisco in November 2008. He also did a version of this talk in the February European Alt.NET meeting.

In this presentation Greg talks about Command Query Separation at the architecture level and explicit state transitions amongst other things.

Jonathan Oliver has created a useful resource page of the material that’s been written about some of these ideas as well.

These are some of my thoughts from our discussion:

  • I think the idea of eventual consistency is quite interesting, and I can see how taking this approach, instead of trying to create the impression that everything the user does happens in real time, could make life much easier for ourselves.

    I’m not sure how this type of idea works when users have the expectation that when they change information on a page that it is updated straight away though.

    For example on a form I might decide to change my address and I would expect that if I reload the page with my address on that it would now display my new address instead of the old one. If that address was eventually consistent after 5 minutes for example the user might become quite confused and send in another update to try and change their address again.

    Liz pointed out that with bank transactions it is often explicitly described to users that money transfers are ‘pending’ so perhaps the expectation that things aren’t done in real time has already been set in some domains.

    Werner Vogels has a nice article about eventual consistency in distributed systems in which he references a paper by Seth Gilbert and Nancy Lynch which talks about the idea that “of three properties of shared-data systems; data consistency, system availability and tolerance to network partition one can only achieve two at any given time.”

  • Dave pointed out that the idea of ‘POST redirect GET‘ often used when processing web form submissions seems to adhere quite nicely to the idea of Command Query Separation as described in the video.

    I find it quite interesting that CQS at the method level in our code is usually quite closely adhered to, but so often we'll just bolt getters onto domain objects so that we can access some data to display on the view.

    The idea of not doing this and having a write only domain seems very interesting and seemed to make sense in the system that Greg described.

    It would be interesting to know whether one would follow such an extreme approach at the architecture level if there weren’t such high performance requirements or the need to have all the operations performed on the system available for an audit.

  • Greg’s idea of state transitions sounds quite similar although perhaps not exactly the same as Eric Evans’ ‘domain events’ which he discussed in last week’s book club.

    It would be interesting to see what the code to process form submissions by the user would look like with this approach.

    As Silvio pointed out, the complexity of this code would probably be much higher than in a more typical approach where we might just build our domain objects straight from the data the user entered.

    Using Greg’s approach we would need to work out which state transitions had actually happened based on the user input which would presumably involve keeping a copy of the previous state of the domain objects in order to work out what had changed.

    I like the idea of making concepts more explicit though and the idea of keeping all state transitions is something that is built into databases by way of their log by default so it’s not a completely new concept.

    Pat Helland has a cool post titled ‘Accountants don’t use erasers‘ where he describes it in more detail.
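As a very rough sketch of what explicit state transitions might look like (hypothetical names, not taken from the talk): instead of exposing setters, the aggregate records an event for each transition and applies it to its own state, so the full history is available for audit or replay.

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical event type: each state transition is captured explicitly.
class AddressChanged {
    final String newAddress;
    AddressChanged(String newAddress) { this.newAddress = newAddress; }
}

// The aggregate records the transition and applies it to its current
// state - the recorded events, not the fields, are the source of truth.
class Customer {
    private String address;
    private final List<Object> changes = new ArrayList<>();

    void changeAddress(String newAddress) {
        AddressChanged event = new AddressChanged(newAddress);
        changes.add(event);   // kept for persistence/audit
        apply(event);         // applied to the in-memory state
    }

    private void apply(AddressChanged event) {
        this.address = event.newAddress;
    }

    String address() { return address; }
    List<Object> uncommittedChanges() { return changes; }
}
```

This also hints at why the form-processing code Silvio mentioned gets more complex: you have to work out which events to raise from the user's input, rather than just overwriting fields.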

Written by Mark Needham

August 29th, 2009 at 9:54 am

Posted in Book Club

Domain Driven Design: Anti Corruption Layer

with 6 comments

I previously wrote about some of the Domain Driven Design patterns we have noticed on my project and I think the pattern which ties all these together is the anti corruption layer.

The reason why you might use an anti corruption layer is to create a little padding between subsystems so that they do not leak into each other too much.

Remember, an ANTICORRUPTION LAYER is a means of linking two BOUNDED CONTEXTS. Ordinarily, we are thinking of a system created by someone else; we have incomplete understanding of the system and little control over it.

Even if the model we are using is being defined by an external subsystem I think it still makes sense to have an anti corruption layer, no matter how thin, so that any future changes we need to make in our code as a result of external system changes are restricted to that layer.

In our case the anti corruption layer is a variation on the repository pattern although we do have one repository per service rather than one repository per aggregate root so it’s not quite the same as the Domain Driven Design definition of this pattern.

[Diagram: the anti corruption layer]

The mapping code is generally just used to go from our representation of the domain to a representation auto-generated from an XSD file.

We also try to ensure that any data which is only important to the service layer doesn’t find its way into the rest of our code.

The code looks a bit similar to this:

public class FooRepository 
{
	private readonly FooService fooService;
 
	public FooRepository(FooService fooService)
	{
		this.fooService = fooService;
	}
 
	public Foo RetrieveFoo(int fooId)
	{
		var xsdGeneratedFooRequest = new FooIdToXsdFooRequestMapper().MapFrom(fooId);
		var xsdGeneratedFooResponse = fooService.RetrieveFoo(xsdGeneratedFooRequest); 
		return new XsdFooResponseToFooMapper().MapFrom(xsdGeneratedFooResponse);
	}
}
public class FooIdToXsdFooRequestMapper 
{
	public XsdGeneratedFooRequest MapFrom(int fooId)
	{
		return new XsdGeneratedFooRequest { fooId = fooId };
	}
}
public class XsdFooResponseToFooMapper 
{
	public Foo MapFrom(XsdGeneratedFooResponse xsdGeneratedFooResponse)
	{
		var bar = MapToBar(xsdGeneratedFooResponse.Bar);
		// and so on
		return new Foo(bar);
	}
}

Right now we are transitioning our code to a place where it conforms more closely to the model defined in the service layer, so inside some of the mappers there is code which is complicated in terms of the number of branches it has but doesn't really add much value.

We are in the process of moving to a stage where the mappers will just be moving data between data structures with minimal logic for working out how to do so.

This will lead to a much simpler anti corruption layer but I think it will still add value since the coupling between the sub systems will be contained mainly to the mapper and repository classes and the rest of our code doesn’t need to care about it.

Written by Mark Needham

July 7th, 2009 at 9:05 am

Domain Driven Design: Conformist

with 2 comments

Something which constantly surprises me about Domain Driven Design is how there is a pattern described in the book for just about every possible situation you find yourself in when coding on projects.

A lot of these patterns appear in the ‘Strategic Design’ section of the book and one which is very relevant for the project I’m currently working on is the ‘Conformist’ pattern which is described like so:

When two development teams have an upstream/downstream relationship in which the upstream has no motivation to provide for the downstream team’s needs, the downstream team is helpless. Altruism may motivate upstream developers to make promises, but they are unlikely to be fulfilled. Belief in those good intentions leads the downstream team to make plans based on features that will never be available. The downstream project will be delayed until the team ultimately learns to live with what it is given. An interface tailored to the needs of the downstream team is not in the cards.

We are working on the front end of an application which interacts with some services to get and save the data from the website.

We realised early on that we had a situation similar to this but didn't know that it was the conformist pattern. Our original approach was to rely completely on the model in the service layer, to the extent that we were mapping directly from SOAP calls to WCF message objects and then passing these around the code – I originally described this as being an externally defined domain model.

This led to quite a lot of pain as whenever there was a change in the service layer model our code base broke all over the place and we then ended up spending most of the day fire fighting – we were too tightly coupled to an external system.

At this stage we were reading Domain Driven Design in our Technical Book Club and I was fairly convinced that what we really needed to do was have our own model and create an anti corruption layer to translate between the service layer model and the new model that we would create.

We changed our code to follow this approach and created repositories and mappers which were the main places in our code base where we cared about this external dependency. Although the isolation of the end point has worked much better, we never really ended up with a rich domain model that truly represented the business domain.

We had something in between the service layer model and the real business model which didn’t really help anyone and meant we ended up spending a lot of time trying to translate between the different definitions that were floating around.

Writing the code for the anti corruption layer also takes a lot of time, is quite frustrating/tedious and it was hard to see the value we were getting from doing so.

We’ve now reached the stage where we know this is the case and that it probably makes much more sense to just accept it and to not spend any more time trying to create our own model but instead just adapt what we have to more closely match the model we get from the services layer.

We will still keep a thin mapping layer as this gives us some protection against changes that may happen in the service layer.

I think a key thing for me here is that it's really easy to be in denial about what is actually happening, since what you really want is to be in control of your own domain model and to design it so that it closely matches the business, such that business people would be able to read and understand your code if they wanted to. Sometimes that isn't the case.

Chatting with Dave about this he suggested that a lesson for us here is that it’s important to know which pattern you are following which Andy Palmer also pointed out on twitter.

Written by Mark Needham

July 4th, 2009 at 10:17 am

DDD: Making implicit concepts explicit

with 5 comments

One of my favourite parts of the Domain Driven Design book is where Eric Evans talks about making implicit concepts in our domain model explicit.

The book describes this process like so:

Many transformations of domain models and the corresponding code happen when developers recognize a concept that has been hinted at in discussion or present implicitly in the design, and they then represent it explicitly in the model with one or more objects or relationships.

Lu and I were working on a small application to parse the WCF message log file on our project into more readable chunks whereby each request in the file would be outputted into another file so that it would be possible to read them individually.

We decided to create a little domain model for this since the code seemed to be getting a bit tricky to handle when it was all being written inline inside a main method.

To start with we just had a collection of requests which was an accurate representation of the way that the data was being stored in the log file.

We collected all these requests and then created individual files for each of them. We also grouped these request files under directories by the session that the request was from.

The input/output of our application looked a bit like this:

[Diagram: input/output of the log parsing application]

The next idea suggested for this little application was that it would be cool if we could put the characters ‘FAIL’ into the file name of any requests which failed and also into the folder name of any sessions which had failing requests inside them.

We tried to do this with our original model but everything we did resulted in adding more and more code to the Request object which didn’t seem to belong to it. The tipping point for me was when we ended up with Request.SessionFolderName as a property.

Eventually we realised that what had been implicit would now need to be made explicit and the Session came to be an object in our domain model.

What I found most interesting about this process was that we were always talking about the Session but it didn’t actually exist in our model!

The model in our code now pretty much represents the format in which we are outputting the data, and with Session as an explicit concept it is much easier to make changes in the future.
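A sketch of where we ended up (field and method names are hypothetical, not the actual project code): once Session is an explicit object, behaviour like "flag the folder name when any request failed" has an obvious home rather than being bolted onto Request.

```java
import java.util.ArrayList;
import java.util.List;

// Each request knows only about its own success or failure.
class Request {
    private final String name;
    private final boolean failed;

    Request(String name, boolean failed) {
        this.name = name;
        this.failed = failed;
    }

    boolean failed() { return failed; }

    String fileName() {
        return failed ? "FAIL_" + name : name;
    }
}

// Session is now explicit, so "did any request in this session fail?"
// lives here instead of as Request.SessionFolderName.
class Session {
    private final String id;
    private final List<Request> requests = new ArrayList<>();

    Session(String id) { this.id = id; }

    void add(Request request) { requests.add(request); }

    String folderName() {
        boolean anyFailed = requests.stream().anyMatch(Request::failed);
        return anyFailed ? "FAIL_" + id : id;
    }
}
```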

Written by Mark Needham

April 23rd, 2009 at 12:36 pm

DDD: Only for complex projects?

with 8 comments

One of the things I find a bit confusing when it comes to Domain Driven Design is that some of the higher profile speakers/user group contributors on the subject have expressed the opinion that DDD is more suitable when we are dealing with complex projects.

I think this means complex in terms of the domain, but I've certainly worked on some projects where we've been following at least some of the ideas of DDD and have got some value out of doing so, in domains which I wouldn't say were particularly complex.

What is Domain Driven Design?

One of the quotes from Jimmy Nilsson’s QCon presentation was that ‘DDD is OO done well‘ and I think there are a lot of similarities between the ideas of OO and DDD – in fact I think DDD has ended up covering the ground that OO was initially intended to cover.

Having our code express the domain using its own language rather than the language of the technical solution seems like an approach that would be valuable in any type of project and my recent reading of Code Complete suggests that this is certainly an approach that was used before the term DDD was coined.

However, if we're truly doing DDD then in theory we should be modeling our domain with the help of a subject matter/business expert, but on the projects I've worked on we have very rarely had access to these people, so the modeling becomes a best attempt based on the information we have rather than a collaborative effort.

I’m not sure whether it’s actually possible to get a truly ubiquitous language that’s used by everyone from the business through to the software team by taking this approach. We certainly have a language of sorts but maybe it’s not truly ubiquitous.

As Luis Abreu points out, I don’t think there is a precise definition of what DDD actually is but for me the essence of DDD is still the same as when I compared it with OO i.e. Domain Driven Design = Object Oriented Programming + Ubiquitous Language.

What that definition doesn’t cover is the organisational patterns we can use to allow our domain model to fit into and interact with other systems, and I think this is a part of DDD which I underestimated when I wrote my previous post.

It also doesn’t take into account the possibility of doing DDD in a non OO language – for example I’m sure it’s possible to follow a DDD approach when using a functional language.

The value in using a DDD approach

As I’ve written before, I think there is value in applying the patterns of DDD even if we aren’t using every single idea that comes from the book. The approach of using just the patterns has even been coined as DDD Lite.

DDD Lite sounds to me like a particular subset of DDD but I would be quite surprised to find a project which used every single idea from the book, so maybe every use of DDD is merely a subset of the whole idea.

I’m not sure which presenter it was, but at QCon London the idea that we can use DDD to drive out the simplicity of our domain was expressed.

I would agree with this and I also think the idea of creating a ubiquitous language is very useful when working in teams, even if the domain is not that complex, so that we can stop doing the costly translations between the different terminologies people may be using to refer to the same things in the domain.

The idea of striving to make concepts in our code explicit rather than implicit is another idea which I think works very well regardless of the complexity of the project. Being able to look at code and understand what is going on without having to know a whole lot of context is invaluable.

Finally the organisational patterns of DDD, as Dan North pointed out at QCon, are valuable even in a non DDD context. We may not always use the DDD terms for what we are doing but I’ve noticed that a lot of the ways we interact with other systems have a corresponding DDD pattern which will explain the benefits and drawbacks of that approach and where it will and won’t be appropriate.

In Summary

I know neither of the authors are writing off DDD for projects with less complex domains but I feel the value that the different ideas can give to most projects is sometimes not recognised.

What the book has done well is bring together some very useful ideas for allowing us to write business software and since this is what a lot of us are doing it’s definitely worth looking at where the ideas of DDD can be applied.

Written by Mark Needham

April 6th, 2009 at 7:21 pm

DDD: Recognising relationships between bounded contexts

with 5 comments

One of the big takeaways for me from the Domain Driven Design track at the recent QCon London conference was that the organisational patterns in the second half of the book are probably more important than the actual patterns themselves.

There are various patterns used to describe the relationships between different bounded contexts:

  • Shared Kernel – This is where two teams share some subset of the domain model. This shouldn’t be changed without the other team being consulted.
  • Customer/Supplier Development Teams – This is where the downstream team acts as a customer to the upstream team. The teams define automated acceptance tests which validate the interface the upstream team provide. The upstream team can then make changes to their code without fear of breaking something downstream. I think this is where Ian Robinson’s Consumer Driven Contracts come into play.
  • Conformist – This is where the downstream team conforms to the model of the upstream team despite that model not meeting their needs. The reason for doing this is so that we will no longer need a complicated anti corruption layer between the two models. This is not the same as customer/supplier because the teams are not using a cooperative approach – the upstream are deriving the interfaces independently of what downstream teams actually need.
  • Partner – This was suggested by Eric Evans during his QCon presentation, and the idea is that two teams have a mutual dependency on each other for delivery. They therefore need to work together on their modeling efforts.
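As a toy illustration of the customer/supplier idea (the field names are hypothetical, not from the book): the downstream team encodes the parts of the upstream response it actually relies on, so that if the upstream team removes or renames one of those fields, a contract test fails before anything downstream breaks.

```java
import java.util.Map;

// A minimal consumer-driven contract: the downstream team lists the
// fields it depends on and checks any upstream response against them.
class ProductContract {
    static final String[] REQUIRED_FIELDS = { "name", "order" };

    static boolean isSatisfiedBy(Map<String, ?> upstreamResponse) {
        for (String field : REQUIRED_FIELDS) {
            if (!upstreamResponse.containsKey(field)) {
                return false;
            }
        }
        return true;
    }
}
```

In practice this check would run against the real upstream interface as an automated acceptance test, which is what gives the upstream team the freedom to change their code without fear.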

I think it’s useful for us to know which situation we are in because then we can make decisions on what we want to do while being aware of the various trade offs we will need to make.

An example of this is when we recognise that we have a strong dependency on the domain model of another team where I think the approach that we take depends on the relationship the two teams have.

If we have a cooperative relationship between the teams then an approach where we pretty much rely on at least some part of the supplier's model is less of an issue than if we don't have this kind of relationship. After all, we have an influence on the way the model is being developed and may even have worked on it with the other team.

On the other hand if we realise that we don’t have a cooperative relationship, which may happen due to a variety of reasons…

When two teams with an upstream/downstream relationship are not effectively being directed from the same source, a cooperative pattern such as CUSTOMER/SUPPLIER TEAMS is not going to work.

This can be the case in a large company in which the two teams are far apart in the management hierarchy or where the shared supervisor is indifferent to the relationship of the two teams. It also arises between teams in different companies when the customer’s business is not individually important to the supplier. Perhaps the supplier has many small customers, or perhaps the supplier is changing market direction and no longer values the old customers. The supplier may just be poorly run. It may have gone out of business. Whatever the reason, the reality is that the downstream is on its own.

(from the book)

…we need to be more careful about which approach we take.

We are now potentially in conformist territory although I don’t think that is necessarily the route that we want to take.

If we choose to conform to the supplier's model then we need to be aware that any changes made to that model will require us to make changes all over our code, and since these changes are likely to be all over the place it's going to be quite expensive to make them. On the other hand we don't have to spend time writing translation code.

The alternative approach is to create an anti corruption layer where we interact with the other team’s service and isolate all that code into one area, possibly behind a repository. The benefit here is that we can isolate all changes in the supplier’s model in one place which from experience saves a lot of time, the disadvantage of course being that we have to write a lot of translation code which can get a bit tricky at times. The supplier’s model still influences our approach but it isn’t our approach.

I’m not sure what pattern this would be defined as – it doesn’t seem to fit directly into any of the above as far as I can see but I think it’s probably quite common in most organisations.

There are always multiple approaches to take to solve a problem but I think it’s useful to know what situation we have before choosing our approach.

Written by Mark Needham

March 30th, 2009 at 10:52 pm

DDD: Repository pattern

with 4 comments

The Repository pattern from Domain Driven Design is one of the cleanest ways I have come across for separating our domain objects from their persistence mechanism.

Until recently every single implementation I had seen of this pattern involved directly using a database as the persistence mechanism with the repository acting as a wrapper around the Object Relational Mapper (Hibernate/NHibernate).

Now I consider there to be two parts to the repository pattern:

  1. The abstraction of the persistence mechanism away from our other code by virtue of the creation of repositories which can be interacted with to save, update and load domain objects.
  2. The need for these repositories to only be available for aggregate roots in our domain and not for every single domain object. Access to other objects would be via the aggregate root which we could retrieve from one of the repositories.
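A minimal sketch of that second point, using a hypothetical Order/LineItem domain (and an in-memory store standing in for whatever the real persistence mechanism is): only the aggregate root gets a repository, and other objects are reached through it.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// LineItem has no repository of its own...
class LineItem {
    final String product;
    LineItem(String product) { this.product = product; }
}

// ...because it is only reachable through its aggregate root, Order.
class Order {
    private final int id;
    private final List<LineItem> lineItems;

    Order(int id, List<LineItem> lineItems) {
        this.id = id;
        this.lineItems = lineItems;
    }

    int id() { return id; }
    List<LineItem> lineItems() { return lineItems; }
}

// The repository abstracts the persistence mechanism away - the rest of
// the code neither knows nor cares whether this is backed by an ORM,
// a downstream service, or (as here) a simple in-memory map.
class OrderRepository {
    private final Map<Integer, Order> store = new HashMap<>();

    void save(Order order) { store.put(order.id(), order); }
    Order retrieve(int id) { return store.get(id); }
}
```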

This pattern can also be useful when we retrieve and store data via services, which we have been doing recently. Of course eventually the data is stored in a database, but much further upstream.

To start with we were doing that directly from our controllers but it became clear that although we weren’t interacting directly with a database the repository pattern would still probably be applicable.

The way we use it is pretty much the same as you would if it was abstracting an ORM:

[Diagram: the repository wrapping the service layer]

I think with an ORM the mapping would be done before you got the data back so that’s an implementation detail that is slightly different but as far as I can see the concept is the same.

Written by Mark Needham

March 10th, 2009 at 10:31 am