Mark Needham

Thoughts on Software Development

Archive for the ‘QCon’ Category

QCon London 2009: The Power of Value – Power Use of Value Objects in Domain Driven Design – Dan Bergh Johnsson

with 6 comments

The final Domain Driven Design talk I attended at QCon was by Dan Bergh Johnsson about the importance of value objects in our code.

I thought this session fitted in really well as a couple of the previous speakers had spoken of the under-utilisation of value objects.

The slides for the presentation are here.

What did I learn?

  • Dan started the talk by outlining the goal for the presentation which was to ‘show how power use of value objects can radically change design and code, hopefully for the better’. A lot of the presentation was spent refactoring code written without value objects into shape.
  • We started out with a brief description of what value objects are not, which I found quite amusing. To summarise, they are not DTOs, not Java beans and not objects with public fields. The aim with value objects is to swallow computational complexity from our entities. Creating what Dan termed ‘compound value objects’ provides a way to do this. The benefits of doing this are reduced complexity in entities and code which is more extensible, more testable and has fewer concurrency issues.
  • I found myself quite intrigued as to how you would be able to spot an opportunity in your code to introduce a value object and almost as soon as I wrote down the question Dan covered it! Some opportunities include strings with format limitations, integers with limitations or arguments/results in service methods. The example used was a phone number which was being passed all over the place as a string; refactoring it into a value object made the code explicit (the concept existed before but was never properly spelled out) and pulled all the related functionality into one place. I’ve sketched out what that kind of refactoring might look like just after this list.
  • Dan’s term for this was ‘data as centres of gravity’ – once you have the value object anything related to that data will be drawn towards the object until you have a very useful reusable component. He pointed out that ‘your API has 10-30 seconds to direct a programmer to the right spot before they implement it [the functionality] themselves’. I think this was a fantastic reason for encouraging us to name these objects well as we pretty much only have the amount of time it takes to hit ‘Ctrl-N’ in IntelliJ, for example, and to type in a potential class name.
  • Another interesting point which was being discussed on twitter as the presentation was going on was whether we should be going to our domain expert after discovering these value objects and asking them whether these objects made sense to them. Dan said that this is indeed the best way to go about it. I have to say that what struck me the most across all the presentations was the massive emphasis on getting the domain expert involved all the time.
  • Seemingly at random, Dan pointed out an approach called composite oriented programming which is all about using DDD terminology but inside a framework which drives the development. I’ve only had a brief look at the website so I’m not sure if it’s anything worth shouting about.
  • In the second half of the session compound value objects were introduced – these basically encapsulate other value objects to come up with even more explicitly named objects. The examples in the slides are very useful for understanding the ideas here so I’d recommend having a look from slide 44 onwards. The underlying idea is to encapsulate multi-object behaviour and context and to make implicit context explicit. This idea is certainly new to me so I’m going to be looking at our code base to see if there’s an opportunity to introduce the ideas I learnt in this talk.
  • To close, Dan rounded up the benefits we get from introducing value objects into our code – context-aware client code, smart services and a library with an API.
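
Dan’s slides aren’t reproduced here, so as a rough illustration of the kind of refactoring he described (pulling a phone number out of a raw string into a value object that attracts related behaviour, then composing value objects into a compound one) here is a minimal Java sketch. The class and method names are my own rather than anything from the talk.

// PhoneNumber.java - a simple value object: immutable, validated on
// construction, compared by value, and a 'centre of gravity' that pulls
// phone number behaviour into one place instead of scattering it as strings.
public final class PhoneNumber {
    private final String digits;

    public PhoneNumber(String raw) {
        String cleaned = raw.replaceAll("[\\s()-]", "");
        if (!cleaned.matches("\\+?\\d{7,15}")) {
            throw new IllegalArgumentException("Not a phone number: " + raw);
        }
        this.digits = cleaned;
    }

    public boolean isInternational() {
        return digits.startsWith("+");
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof PhoneNumber && ((PhoneNumber) other).digits.equals(digits);
    }

    @Override
    public int hashCode() {
        return digits.hashCode();
    }

    @Override
    public String toString() {
        return digits;
    }
}

// EmailAddress.java - another small value object (validation kept deliberately trivial).
public final class EmailAddress {
    private final String value;

    public EmailAddress(String value) {
        if (!value.contains("@")) {
            throw new IllegalArgumentException("Not an email address: " + value);
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}

// ContactDetails.java - a hypothetical compound value object: it wraps other
// value objects so the combination gets an explicit name and a natural home
// for behaviour that involves both of them.
public final class ContactDetails {
    private final PhoneNumber phone;
    private final EmailAddress email;

    public ContactDetails(PhoneNumber phone, EmailAddress email) {
        this.phone = phone;
        this.email = email;
    }

    public PhoneNumber phone() {
        return phone;
    }

    public EmailAddress email() {
        return email;
    }
}

The point of the compound object isn’t the accessors; it’s that code which previously juggled a phone number and an email address separately now has a single named concept to hang behaviour off.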

Written by Mark Needham

March 15th, 2009 at 9:45 am

Posted in QCon


QCon London 2009: Rebuilding guardian.co.uk with DDD – Phil Wills

with 5 comments

Talk #3 on the Domain Driven Design track at QCon was by Phil Wills about how the Guardian rebuilt their website using Domain Driven Design.

I’d heard a little bit about this beforehand from colleagues who had the chance to work on that project but it seemed like a good opportunity to hear a practical example and the lessons learned along the way.

There are no slides available for this one on the QCon website at the moment.

What did I learn?

  • Phil started by explaining the reasons that they decided to rebuild their website – tedious manual work was required to keep the site up to date, they were struggling to hire new developers due to their choice of technology stack and it was difficult to implement new features.
  • They were able to get domain experts very involved with the team to help define the domain model and trusted the domain expert to know best. I think the difficulty of making this happen is underestimated – none of the projects that I’ve worked on have had the domain expert access that Phil described his team as having. The benefit of having this was that the business was able to point out when the model was getting over-complicated. The takeaway comment for me was that we should ‘strive to a point where the business can see and comprehend what you’re trying to achieve with your software’.
  • Entities are your stars but we also need to think about value objects. Entities have the maintenance overhead of a life cycle that we need to take care of so it makes sense to avoid this wherever possible. Phil showed an example of an entity class with around 50 fields which could have been greatly simplified by introducing value objects. Every time I heard value objects mentioned it reminded me of micro/tiny types, which are not exactly the same but seem to be fairly similar in their intent of improving expressiveness and making the code easier to work with. The importance of value objects was also mentioned in Eric Evans’ talk.
  • The database is not the model – Phil repeated this multiple times just in case we didn’t get the picture, and it was something Dan mentioned in his talk as well, where he referenced Rails’ use of Active Record as a particular culprit. Phil mentioned that this had been a difficult idea to grasp for some of the team, who didn’t give full status to data that didn’t have a corresponding place in the database.
  • The Guardian’s core domain is articles – they look to use 3rd party solutions for parts of the website which aren’t part of the core domain. For example, football league tables are pulled from a 3rd party content provider who sends the data in a format which can be easily displayed on the website. I think I would have made a mistake here and tried to model the league table so it was cool that they recognised there wasn’t going to be much value in doing this since it’s not their differentiator. This was a really useful example for helping me to understand what the core domain actually is.
  • Although the database is not the model, Phil pointed out that keeping the database near to the model helps save the need to do lots of context switching. Interestingly he also pointed out that it’s not always good to completely normalise data – it can lead to expensive joins later on.
  • One idea which I felt wasn’t completely explained was that of injecting repositories into domain model objects so they can get access to some data directly rather than having to reach it through a traversal or some other means. Phil said this was considered a bit of a hack but was sometimes a good choice. He did also point out that it can hide problems with the model. I’ve included a rough sketch of the idea just after this list.
  • Future plans for the code include adding domain events and trying to create a less monolithic domain – there is currently a single shared context which makes it difficult to prise away the stuff that’s not in the core domain.
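
Phil didn’t show code for the repository-injection idea, so the following is only my own sketch of what I understood it to mean: an entity that is handed a repository so client code can ask it for related data directly, rather than traversing to that data some other way. Article, ArticleId and RelatedContentRepository are hypothetical names, not types from the Guardian’s code base.

import java.util.List;

// A hypothetical identifier value object for an article.
final class ArticleId {
    private final String value;

    ArticleId(String value) {
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}

// A hypothetical repository for content related to a given article.
interface RelatedContentRepository {
    List<Article> findRelatedTo(ArticleId id);
}

// The entity has the repository injected so callers can ask it for related
// articles directly instead of reaching that data through a traversal.
public class Article {
    private final ArticleId id;
    private final RelatedContentRepository relatedContent;

    public Article(ArticleId id, RelatedContentRepository relatedContent) {
        this.id = id;
        this.relatedContent = relatedContent;
    }

    public List<Article> relatedArticles() {
        // Convenient, but as Phil noted it is a bit of a hack: the entity now
        // depends on infrastructure, which can hide problems with the model.
        return relatedContent.findRelatedTo(id);
    }
}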

Written by Mark Needham

March 14th, 2009 at 2:23 pm

Posted in QCon


QCon London 2009: DDD & BDD – Dan North

with 2 comments

The second presentation in the Domain Driven Design track at QCon was titled ‘DDD & BDD’ and was presented by my colleague Dan North – a late stand-in for Greg Young, who apparently injured himself playing ice hockey.

Eric did an interview with Greg at QCon San Francisco 2007 where Greg talks about some of his ideas. Apparently there is also an InfoQ video kicking around of Greg’s ‘Unshackle Your Domain’ talk from QCon San Francisco 2008, which we were told to pester InfoQ to post on their website!

Anyway, this one was Dan’s talk about the relationship between Domain Driven Design and Behaviour Driven Development.

The slides for the presentation are here.

What did I learn?

  • Dan briefly covered some of the Domain Driven Design basics, including quoting Jimmy Nilsson on what it actually is:

    Focus on the domain and letting it affect the software very much

    A fairly succinct way of summing up the book in one sentence I think!

  • Dan described the core domain as being your differentiator (what makes you special), the thing the stakeholders care most about, the place where the most interesting conversations are going to be, the richest seams for knowledge crunching. I don’t think I really understood what the core domain was before watching this presentation – it also helped me make sense of Eric Evans’ comment in the previous talk where he spoke of not spreading the modeling too thin, instead keeping it focused on the areas that provide the most value.
  • He spoke of the benefits of having a shared language used by everyone on your team – you don’t have the translation effort which is very tiring! Driving the language into everything has the added benefit of giving you a place to put behaviour, otherwise it ends up spread all over the place. Dan spoke of the ubiquitous language only being consistent within a bounded context – not really so ubiquitous after all!
  • Next up was the other side of the coin – Behaviour Driven Development. He described BDD as ‘focusing on the behaviour of an application from the point of view of its stakeholders’, where the stakeholders are anyone who cares about the application. BDD is about outside-in development where requirements are expressed as stories which have acceptance criteria to help us know when we’re ‘done’. The acceptance criteria consist of scenarios made up of steps, and these then become acceptance tests when they are completed (I’ve sketched this structure out just after the list). He also riffed a bit about bug driven development – outside-in taken to the extreme.

    Dan made an interesting point that BDD is about a mindset rather than the tools and I think I agree there – I’ve not used the tools much but I try to consider the behaviour I’m trying to drive whenever I’m writing tests/examples.

  • For a while I’ve preferred to describe the way I write code as being driven by example rather than driven by tests but it is still referred to as TDD – Dan helped me to see how this makes sense. We code by example to implement features but once those examples are done they act as tests.
  • Dan spoke of the ‘Glaze Effect’ when speaking with domain experts, i.e. what happens when you talk using language from a domain they don’t understand – probably language that is too technical for them to care about.
  • Dan said the first part of the book is effectively ‘The Hobbit’ but the really interesting stuff only comes in ‘The Lord of the Rings’ which is the second part. That’s pretty much my experience as well – I gained way more from reading the second half than the first half. Dan pointed out that the ideas around scaling DDD are applicable anywhere – they’re not specific just to DDD.
  • BDD is about conversations in the ubiquitous language to produce software while DDD is about exploring the domain models that stakeholders use. Discovering these domain models can be valuable even if you don’t end up writing any code.
  • There is a circular dependency between BDD and DDD – we can’t have behaviour driven conversations without DDD and BDD helps structure the conversations you can have in DDD. The structure that the Given, When, Then scenarios provide for having conversations was also identified as being a key part of how BDD can be useful. When it comes to coding, DDD helps drive the design and BDD helps drive what you develop.
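
Dan didn’t walk through any tool-specific code, so rather than guess at a framework API here is a plain JUnit sketch of how a scenario’s Given, When, Then steps map onto a test. The banking story is my own invented example, not one from the session.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Scenario: a withdrawal reduces the account balance
//   Given an account with a balance of 100
//   When the holder withdraws 30
//   Then the balance is 70
public class WithdrawalTest {

    @Test
    public void withdrawalReducesTheBalance() {
        // Given
        Account account = new Account(100);

        // When
        account.withdraw(30);

        // Then
        assertEquals(70, account.balance());
    }
}

// A minimal domain class so the example stands alone.
class Account {
    private int balance;

    Account(int openingBalance) {
        this.balance = openingBalance;
    }

    void withdraw(int amount) {
        this.balance -= amount;
    }

    int balance() {
        return balance;
    }
}

The structure is the point rather than the tooling: each step of the scenario has an obvious home in the test, and the scenario text doubles as documentation of the behaviour.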

Written by Mark Needham

March 14th, 2009 at 1:28 am

Posted in QCon


QCon London 2009: What I’ve learned about DDD since the book – Eric Evans

with 9 comments

I went to the QCon conference in London on Thursday, spending the majority of the day on Eric Evans’ Domain Driven Design track.

The opening presentation was by Eric Evans himself and was titled ‘What I’ve learned about DDD since the book’.

In Eric’s own words: ‘In the 5 years since the book was published, I’ve practiced DDD on various client projects, and I’ve continued to learn about what works, what doesn’t work, and how to conceptualize and describe it all. Also, I’ve gained perspective and learned a great deal from the increasing number of expert practitioners of DDD who have emerged.’

We’re currently reading Domain Driven Design in our technical book club in the ThoughtWorks Sydney office so I was intrigued to hear about Eric’s experiences with DDD and how those compared with ours.

The slides from the presentation are here.

What did I learn?

  • We started with a look at what Evans considers the most essential parts of DDD – creative collaboration between the software experts and the domain experts was identified as being important if we are to end up with a good model. If we can make the process of defining the model fun then all the better but we need to utilise domain experts properly and not bore them.

    Taking a domain expert through some screens and talking about the validation needed on different fields is a bad way to use them – they want to do valuable work and if this is their experience of what it’s like working with the software experts then we’ll never see them again.

  • When we’re modeling we need to come up with at least three models – don’t stop at the first model, it’s probably not going to be the best one. If we stop after one model then we’re leaving opportunities on the table – white boarding different models is a very cheap activity so we should make sure we take advantage of that and do it more frequently.

    When we talk of three models Evans pointed out that these should be different to each other and that this would involve coming up with some radically different ideas. Creating an environment where we can celebrate ‘bad’ ideas is necessary to encourage people to step into riskier territory. If we’re only coming up with good ideas we’re not being creative. This was a definite takeaway for me – I’m certainly guilty of only considering the first model I discover so this is something to improve on.

  • He touched on a couple of other essentials, including the need to constantly reshape the domain model as we learn more about it and the idea that we get the biggest gain from DDD by keeping the focus on our core domain, before moving on to explicit context boundaries – I’ve always found this to be the most interesting part of the book and Evans said he wished he’d made it one of the earlier chapters.

    I spoke with him afterwards about whether or not the UI was considered to be a separate bounded context. He said to consider a bounded context as an observation [of the system] and that if the model of the UI was significantly different to the underlying model then it would be reasonable to consider it as another bounded context.

  • We moved on to the building blocks of DDD – services, entities, value objects, factories, repositories – which Evans considers to be over-emphasised. They are important but not essential. Evans did also point out that value objects tend to get neglected. This was also mentioned in several of the other presentations.
  • Despite this Evans added a new building block – domain events. He described a domain event as ‘something happened that the domain experts care about’. They provide a way of representing the state of an entity and lead to a clearer, more expressive model. This sounded very similar to an approach Nick has described to me whereby we would have a new object that represented a specific state of an object. ‘Every change to an object is a new object’ was the takeaway quote from this part of the talk for me – I think an explicit approach to modeling is far superior to an implicit one.

    The example given was a baseball game where a domain event might be someone swinging at the ball – when this happens statisticians will need to be informed so that they can update their statistics, i.e. we often want to record the events that happen in our domain. He described the use of an event stream which we could put events onto and which could be subscribed to by whoever cares, e.g. the reporting service. I’ve put a small sketch of this idea after the list.

  • Evans made an interesting point when talking about strategic design – just because you have been working in a domain for a long period of time does not make you a domain expert. There is a subtle difference between someone working as a software expert in a domain and the actual domain expert – when looking at problems the software expert is responsible for looking at how software can help, the domain expert is responsible for removing that problem!
  • Evans presented a step-by-step approach to context mapping which he said could be followed to help us work out where the different bounded contexts in our system are and how they interact:
    1. What models do we know of? (draw blob for each & name it)
    2. Where does each apply?
    3. Where is information exchanged?
    4. What is the relationship?
    5. Rinse and repeat

    I’ve never drawn a context map before but it sounds like a potentially valuable exercise – might try and do one for my current project!

  • He also added a couple more patterns in this area – big ball of mud and partners. For the big ball of mud he said we should identify these in our context maps and then not worry too much about applying design techniques inside that context – just take a pragmatic approach and ‘reach in and change it’.

    Partners was described as being similar to a three-legged race – both teams need to cooperate on their modeling efforts because they have a mutual dependency, neither can deliver without the other.

  • Some final takeaway quotes included ‘not all of a large system will be well designed’ and ‘precision designs are fragile’ – where we have the latter in our code we need to protect those designs with an anti-corruption layer and with the former we should pick a specific area (one that matters) to design well and accept that other bits might not be as good as this bit.
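
There was no code in the talk itself, so here is a small Java sketch of how I understood the baseball example: an immutable domain event gets published onto an event stream, and whoever cares about it, such as the statisticians’ reporting service, subscribes. All of the names here are my own invention.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A domain event: an immutable record of 'something happened that the
// domain experts care about'.
final class BatterSwungEvent {
    final String batter;
    final boolean madeContact;

    BatterSwungEvent(String batter, boolean madeContact) {
        this.batter = batter;
        this.madeContact = madeContact;
    }
}

interface EventSubscriber {
    void handle(BatterSwungEvent event);
}

// A very small event stream: events are published onto it and delivered to
// whoever has subscribed, e.g. the statisticians' reporting service.
final class EventStream {
    private final List<EventSubscriber> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(EventSubscriber subscriber) {
        subscribers.add(subscriber);
    }

    void publish(BatterSwungEvent event) {
        for (EventSubscriber subscriber : subscribers) {
            subscriber.handle(event);
        }
    }
}

// A subscriber that keeps the statistics up to date as events arrive.
class StatisticsService implements EventSubscriber {
    private int swings;

    public void handle(BatterSwungEvent event) {
        swings++;
    }

    int totalSwings() {
        return swings;
    }
}

Wiring it together is just a matter of the statistics service subscribing to the stream and the domain publishing events onto it as they happen.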

Gojko Adzic has a write-up of this talk as well – it was a very informative talk and it’s definitely cool to hear the guy who coined the approach talking about it.

Written by Mark Needham

March 13th, 2009 at 8:56 pm

Posted in QCon
