Mark Needham

Thoughts on Software Development

Archive for the ‘altnetuk’ tag

Similarities between Domain Driven Design & Object Oriented Programming

with 3 comments

At the Alt.NET UK Conference, which I attended over the weekend, it occurred to me while listening to some of the discussions on Domain Driven Design that a lot of the ideas in DDD are actually very similar to those already being practiced in Object Oriented Programming and its related best practices.

The similarities

Anaemic Domain Model/Law of Demeter

There was quite a bit of discussion in the session about anaemic domain models.

An anaemic domain model is one where a lot of the objects are merely data holders and do not actually have any behaviour inside them. While it has a fancy name, in OO terms this problem materialises due to our failure to adhere to the Law of Demeter.

My colleague Dan Manges has a brilliant post describing this principle, but a tell-tale sign is that if you see code like the following in your code base then you’re probably breaking it.

object.GetSomething().GetSomethingElse().GetSomethingElse()

This is often referred to as train wreck code and comes from breaking the idea of Tell Don’t Ask. In essence we should not be asking an object for its data and then performing operations on that data, we should be telling the object what we want it to do.
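As a rough sketch of the difference (the Customer and Wallet types here are purely illustrative, not taken from any real code base), the ‘tell’ version pushes the behaviour into the object that owns the data:

public class Wallet
{
    private decimal cash;

    public Wallet(decimal cash)
    {
        this.cash = cash;
    }

    public void Deduct(decimal amount)
    {
        cash -= amount;
    }
}

public class Customer
{
    private readonly Wallet wallet;

    public Customer(Wallet wallet)
    {
        this.wallet = wallet;
    }

    // Tell Don't Ask: callers ask the customer to pay rather than reaching
    // through it with customer.GetWallet().GetCash().Remove(amount)
    public void Pay(decimal amount)
    {
        wallet.Deduct(amount);
    }
}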

Side Effect Free Functions/Command Query Separation

DDD talks about side effect free functions which are described as follows:

An operation that computes and returns a result without observable side effects

The developer calling an operation must understand its implementation and the implementation of all its delegations in order to anticipate the result.

My colleague Kris Kemper talks about a very similar OOP best practice called command query separation. From Martin Fowler’s description:

The really valuable idea in this principle is that it’s extremely handy if you can clearly separate methods that change state from those that don’t. This is because you can use queries in many situations with much more confidence, introducing them anywhere, changing their order.

It’s not exactly the same but they have a shared intention – helping to make the code read more intuitively so that we can understand what it does without having to read all of the implementation details.
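A minimal sketch of command query separation, using a made up Account class: the query returns a value without changing state, while the command changes state and returns nothing.

public class Account
{
    private decimal balance;

    // Query: computes and returns a result with no observable side effects,
    // so it can be called anywhere, in any order, with confidence.
    public decimal GetBalance()
    {
        return balance;
    }

    // Command: changes the state of the account and deliberately returns nothing.
    public void Deposit(decimal amount)
    {
        balance += amount;
    }
}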

Intention Revealing Interfaces/Meaningful Naming

Intention Revealing Interfaces describe a similar concept to Side Effect Free Functions although they address it slightly differently:

A design in which the names of classes, methods, and other elements convey both the original developer’s purpose in creating them and their value to a client developer.

If a developer must consider the implementation of a component in order to use it, the value of encapsulation is lost.

In OOP this would be described as using meaningful names as detailed in Uncle Bob’s Clean Code (my review).
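A small illustration of the idea (the class, method name and eligibility rule are all invented for the example): the name tells the client developer what the method is for without them having to read its implementation.

using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public int YearsOfService { get; set; }
}

public class BonusPolicy
{
    // A vague name like Process() would force the caller to read this body;
    // this name reveals the intent and hides the details of the rule.
    public IList<Employee> FindEmployeesEligibleForBonus(IEnumerable<Employee> employees)
    {
        return employees.Where(employee => employee.YearsOfService >= 2).ToList();
    }
}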

Bounded Context/Clean Boundaries

DDD’s bounded context describes “The delimited applicability of a particular model” i.e. the context in which it is held valid.

This is quite closely related to the idea of clean boundaries in Clean Code where Uncle Bob states:

Code at the boundaries needs clear separation and tests that define expectations

In both cases we are creating an explicit separation of ‘our code’ from the outside world so to speak. We want to clearly define where ‘our world’ ends by defining the interfaces with which we interact with the outside world.

Anti Corruption Layer/Wrappers

The anti corruption layer in DDD is “an isolating layer to provide clients with functionality in terms of their own domain model.”

It is used to create a boundary for our bounded context so that the models of other systems we interact with don’t creep into our system.

This is implemented in OO using one of the wrapper patterns. Examples of these are the Facade, Adapter and Gateway patterns, which all solve the problem in slightly different ways.

The intention in all cases is to have one area of our code which calls 3rd party libraries and shields the rest of the code from them.
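A sketch of what that wrapper might look like, assuming a made up third party billing library (the Acme types below are stand-ins for the example, not a real API):

// Stand-ins for the hypothetical third party library's model.
public class AcmeTransaction
{
    public AcmeTransaction(string accountRef, decimal value) { }
}

public class AcmeBillingClient
{
    public void SubmitTransaction(AcmeTransaction transaction) { }
}

// Our domain's view of the capability, expressed in our own terms.
public interface IPaymentGateway
{
    void Charge(string customerId, decimal amount);
}

// The anti corruption layer / gateway: the only class that knows about the
// third party's model, translating between its types and ours so that their
// model never creeps into the rest of our code.
public class AcmePaymentGateway : IPaymentGateway
{
    private readonly AcmeBillingClient client;

    public AcmePaymentGateway(AcmeBillingClient client)
    {
        this.client = client;
    }

    public void Charge(string customerId, decimal amount)
    {
        client.SubmitTransaction(new AcmeTransaction(customerId, amount));
    }
}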

Domain Driven Design = Object Oriented Programming + Ubiquitous Language?

While talking through some of these ideas I started to come to the conclusion that maybe the ideas that DDD describe are in fact very similar to those that OOP originally set out to describe.

The bit that DDD gives us which has perhaps been forgotten in OOP over time is describing the interactions in our systems in terms of the business problem which we are trying to solve i.e. the Ubiquitous Language.

From Wikipedia’s Object Oriented Programming entry:

OOP can be used to translate from real-world phenomena to program elements (and vice versa). OOP was even invented for the purpose of physical modeling in the Simula-67 programming language.

The second idea of physical modeling seems to have got lost somewhere along the way and we often end up with code that describes a problem at a very low level. Instead of describing a business process we describe the technical solution to it. You can be writing OO code and still not have your objects representing the terms that the business uses.

There are some things that DDD has made clearer than OOP ever managed. The first part of the book, which talks about building a business driven Domain Model, is certainly something we don’t pay enough attention to when using OOP.

Personally, before I read about DDD I would derive a model that I thought worked and then rarely go back and re-examine it to see whether it was still accurate. Reading DDD has made me aware that revisiting the model is vital, otherwise you eventually end up translating between what the code says and what the business says.

Maintaining model integrity is another area I don’t think is necessarily covered by OOP, although some of the implementations use OOP ideas, so the two are not that dissimilar.

Why the dismissal of DDD?

The reason I decided to explore the similarities between these two concepts wasn’t to dismiss Domain Driven Design – I think the framework it has given us for describing good software design is very useful.

Clearly I have not mapped every single DDD concept to an equivalent in OOP. I think DDD has given a name or term to some things that we may just take for granted in OOP. Certainly the DDD ideas expressed around the design of our model are all good OOP techniques that may not be explicitly stated anywhere.

I wanted to point out these similarities as I feel it can help to reduce the fear of adopting a new concept if we know it has some things in common with what we already know – if a developer knows how to write OO code and knows design concepts very well then the likelihood is that the leap to DDD will not actually be that great.

It would be really good if we could get to the stage where, when we teach the concepts of OOP, we do so in a way that emphasises that the objects we create should be closely linked to the business domain rather than being arbitrary choices made by the developers on the team.

Maybe the greatest thing about DDD is that it has brought all these ideas together in one place and made them more visible to practitioners.

I am very interested in how different things overlap, what we can learn from these intersections and what they have in common. It’s not about the name of the concept for me, but about learning the best way to deliver software and then to maintain it after it has been delivered.

Written by Mark Needham

September 20th, 2008 at 1:12 pm

Alt.NET UK Conference 2.0

with 10 comments

I spent most of yesterday at the 2nd Alt.NET UK conference at Conway Hall in London.

First of all kudos to Ian Cooper, Alan Dean and Ben Hall for arranging it – there seemed to be a lot more people around than at the one in February, which no doubt took a lot of organising.

It was again run using the open spaces format and we started with an interesting discussion on what Alt.NET actually is. There’s been quite a bit of discussion in the blogosphere and everyone seemed to have a slightly different opinion – for me it’s a group of people who are interested in improving how they work in .NET. Ian Cooper pointed out the Foundations of Programming e-book as something concrete which has been released regarding what Alt.NET actually is.

The day was again split into four streams – I stuck to the more general software development oriented discussions.

Agile

This session focused on agile estimation techniques. Mike Cohn’s book is probably the best known reading in this area.

Ian Cooper raised an interesting idea around the discussion of estimates – he suggested getting the people with the highest and lowest estimates to explain why they had given them, to help bring out the different assumptions people were making about the story.

He also spoke about his use of Alistair Cockburn’s Crystal methodology which advocates a 3 phase approach to development whereby you start with the skeleton of the system early on and gradually fill it out in future iterations. I have read a little bit of the book but never used it myself so it was interesting to hear a practical experience.

The idea of only releasing software when it is actually needed, rather than at fixed two week intervals regardless, was also raised. From what I’ve seen on my projects, though, when there are releases it is because the software is needed by the users. I think this is more a case of using the methodology pragmatically rather than sticking to precise rules.

Acceptance Testing

A lot of the focus in this session was around how we can improve the communication between BAs, QAs and Developers when it comes to writing acceptance criteria.

Gojko Adzic suggested the idea of having an Acceptance Testing 3-Some before a story is played so that these tests can be written with all opinions taken into account. The idea here was that the tests written would be more accurate and hopefully reduce the problem of having to go back and change them later on.

While this idea seems good, I am more of the opinion that we should just go with the acceptance tests that the QA writes, implement those, then check with the business whether everything is covered. The feedback loop in this approach is much shorter, and since for me the key to software development is getting frequent feedback, it is the approach I prefer. This is certainly the way we have done things on the ThoughtWorks projects I’ve been a part of. Requirements by Collaboration was pointed out as being a good book to read with regards to getting everyone involved in the process.

Having a ubiquitous language that everyone in the team uses to describe acceptance tests was another good idea that came out of the discussions – although I think that if a team is developing software using a Domain Driven approach then this is likely to happen anyway.

A large part of the session focused on UI tests and the problems people experienced with their brittleness and long running times. One way to get around this is clearly not to have so many tests at this level – one idea was to only have automated UI level tests for critical scenarios, e.g. logging into the system, and to test the other scenarios manually.

One way we have got around this on the projects I’ve worked on is by having a layer of tests one level down which test component interaction separately from the UI – typically called functional tests. These could be used to test things such as NHibernate logic rather than doing so through the UI. We would also look to keep minimal logic in the presentation layer, as this is the most difficult part of a system to get automated tests around.
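To give a feel for what I mean by a functional test, here is a rough sketch using NUnit – the OrderService, Order and repository are invented for the example, and I’ve used an in-memory repository rather than NHibernate to keep it self-contained:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class OrderServiceFunctionalTest
{
    [Test]
    public void PlacingAnOrderShouldPersistIt()
    {
        // Exercise the service and repository together, without going
        // anywhere near the UI.
        var repository = new InMemoryOrderRepository();
        var service = new OrderService(repository);

        service.PlaceOrder("customer-1");

        Assert.AreEqual(1, repository.FindByCustomer("customer-1").Count);
    }
}

public class OrderService
{
    private readonly InMemoryOrderRepository repository;

    public OrderService(InMemoryOrderRepository repository)
    {
        this.repository = repository;
    }

    public void PlaceOrder(string customerId)
    {
        repository.Save(new Order(customerId));
    }
}

public class Order
{
    public Order(string customerId)
    {
        CustomerId = customerId;
    }

    public string CustomerId { get; private set; }
}

public class InMemoryOrderRepository
{
    private readonly List<Order> orders = new List<Order>();

    public void Save(Order order)
    {
        orders.Add(order);
    }

    public IList<Order> FindByCustomer(string customerId)
    {
        return orders.Where(order => order.CustomerId == customerId).ToList();
    }
}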

TextTest was mentioned as an acceptance testing tool which tests the system by going through its log files – this has the added benefit of forcing you to write more useful logging code. Green Pepper was also mentioned as a useful acceptance testing tool which links together Jira and Confluence.

Domain Driven Design

The discussion (perhaps not surprisingly) focused on the concepts described in Eric Evans’ book.

The value of having rich domain objects with the business logic inside was questioned with Ian Cooper pointing out that business logic need not necessarily be business rules but could also describe the way that we traverse the object graph. In particular the Law of Demeter was discussed as a way of avoiding an anaemic domain model.

The problem of designing from the database upwards resulting in these anaemic objects was raised – one potential solution being driving the design from acceptance tests i.e. top down.

Ian Cooper pointed out that coding in a Domain Driven way with lots of plain C# objects made testing much easier. I think in general keeping the behaviour and data together in an object makes it easy to test. Doing this using a Domain Driven approach just makes it even easier to use the code as a communication mechanism.

There was also discussion around the use of Data Transfer Objects, with the general consensus being that DTOs are useful around the UI as they save you from having to deal with incomplete domain objects in that area.

The idea of the UI being outside the bounded context that our domain model is used in was also suggested, which strikes me as a good one – it would be good to see it done in practice though.
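As a rough sketch of the DTO idea (the types below are purely illustrative), the UI gets a flattened, view-specific object rather than a possibly partially loaded domain object:

public class Address
{
    public string City { get; set; }
}

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Address HomeAddress { get; set; }
}

// View-specific DTO handed to the UI instead of the domain object itself.
public class CustomerSummaryDto
{
    public string DisplayName { get; set; }
    public string City { get; set; }
}

public static class CustomerSummaryMapper
{
    public static CustomerSummaryDto ToDto(Customer customer)
    {
        return new CustomerSummaryDto
        {
            DisplayName = customer.FirstName + " " + customer.LastName,
            City = customer.HomeAddress.City
        };
    }
}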

It was suggested that DDD is only useful in complex domains. I think this is true to an extent, but some of the ideas of DDD are just good software development principles, such as having a common/ubiquitous language in the team. Ideas such as bounded context are clearly only necessary when there is a greater level of complexity.

I would certainly recommend picking up a copy of the book – 90% of what was discussed is in there.

Developer Education

This was the most interactive session I attended and the majority of the people in the room were able to offer their opinions which I thought was much more aligned with the spirit of the open spaces format.

The discussion focused on how the environment that you’re working in influences your enthusiasm for learning new technologies and new ways of doing things. I am lucky in that working for ThoughtWorks I have colleagues who are very enthusiastic about technology and encouraging other people to learn.

The Ruby community was pointed out as one where there appears to be much more enthusiasm than in the .NET world. I’m not sure exactly how we could measure this, but blogging wise the Ruby guys definitely have the edge. I think some of it can be explained by the fact that the people who ran with Ruby early on are massive technology enthusiasts – you’re unlikely to start working with Ruby because you have to; in my experience it normally starts out of love for the language.

A suggestion was made that holding brown bag sessions at lunch time, where people share what they have learnt with colleagues, is a good way of spreading knowledge. This is certainly an idea we use frequently at ThoughtWorks, and there is actually even more value in the conversations which come afterwards.

The Google idea of 20% time to dedicate to your own learning was raised as being ideal, although it was pointed out that this was a hard thing to implement as getting things done always takes precedence.

Overall

It was an interesting day and it’s always good to hear the experiences of other people outside of ThoughtWorks.

I think we need to try and find a way for more people to get involved, because most of the sessions I attended were dominated by a few people with great knowledge of the subject. While it is no doubt very useful to hear their opinions, I think it would be even better if more people could get to speak.

Written by Mark Needham

September 14th, 2008 at 4:28 pm

Posted in .NET
