Mark Needham

Thoughts on Software Development

Archive for the ‘Conferences’ Category

XP Day: Visualizing what’s happening on our project


Another presentation that I gave at XP Day was one covering some visualisations Liz, Uday and I have created from various data we have about our project, gathered from Git, Go and Mingle.

These were some of the things that I learned from doing the presentation:

  • The various graphs I presented in the talk have a resolution of 1680 x 1050, which is much higher than the projector could display.

    As a result it was necessary to scroll up and down/side to side when demonstrating each visualisation so that people could actually see them.

    Either I need to work out how to get the resolution of the projector higher or be able to shrink the images to the right size so they’d fit more naturally. I imagine the latter would be easier to achieve.

  • My machine refused to switch to PowerPoint when I was presenting so I had to wing it a bit from my memory of how the talk was meant to go.

    As a result of not having the slides to show I ended up just showing the code that we’d written to create the graphs.

    I didn’t think this would work very well but the feedback I got suggested that people enjoyed seeing the code behind the visualisations.

I had a discussion with people during the talk and with others at XP Day about how I could change the visualisations so that they were more useful. These were some of the ideas that other people had:

  • Matt Jackson suggested that it would be interesting to graph how often the last ten builds were broken so you could see how it was trending.
  • Actionable metrics – we had a discussion about what somebody is supposed to do as a result of seeing a visualisation of something i.e. what action do we want them to take.

    We achieve this in some cases e.g. with the pair stair it’s clear who you haven’t paired with recently and the onus is therefore on you to address that if you want to (there’s a rough sketch of the pair stair idea after this list).

  • Phil Parker suggested that metrics that you have an emotional response to are the most effective ones in his experience.

    I think this links to the idea of them being actionable in that if you have an emotional response to something then it often makes you want to go and do something about it.

  • I had an interesting discussion with Benjamin Mitchell in which he suggested that an interesting question to ask is ‘what would better look like?’ and another one would be ‘what rule would we have to have in place for people to behave like this?’.

    We realised that in some cases it’s not really clear what better would look like since you can end up with two potentially competing ‘good’ practices e.g. checking in frequently is good but if everyone does it together then it can lead to the build breaking which isn’t as good.
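As a rough illustration of the pair stair idea, the sketch below derives one from a git repository. It assumes a hypothetical convention of prefixing commit messages with the pair’s names (e.g. ‘Mark/Liz: …’) – it isn’t the code from our project, just the shape of the idea:

```scala
import scala.sys.process._

// A minimal pair stair: count how often each pair appears together,
// assuming commits are prefixed with the pair's names, e.g. "Mark/Liz: ..."
object PairStair extends App {
  val subjects = Seq("git", "log", "--pretty=format:%s").!!.split("\n")

  val pairs = subjects.flatMap { subject =>
    subject.takeWhile(_ != ':').split("/").map(_.trim).toList match {
      case a :: b :: Nil => Some(if (a < b) (a, b) else (b, a)) // normalise ordering
      case _             => None                                // not a pairing commit
    }
  }

  pairs.groupBy(identity)
    .map { case (pair, occurrences) => pair -> occurrences.length }
    .toSeq.sortBy { case (_, count) => -count }
    .foreach { case ((a, b), count) => println(f"$a%-10s $b%-10s $count%4d") }
}
```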

We haven’t tried to tidy up any of the code that we used but it’s available on our GitHub accounts if anyone’s interested.

Written by Mark Needham

November 30th, 2011 at 2:25 am

Posted in XP Day


XP Day: Scala: An Experience Report (Liz Douglass and me)


At XP Day my colleague Liz Douglass and I presented an experience report on our last six months working together on our project.

We wanted to focus on answering the following questions with our talk:

  • Should the project have been done in Java?
  • Does it really speed up development as was hoped?
  • What features of the language and patterns of usage have been successes?
  • Is it easier to maintain and extend than an equivalent Java code base?

We covered the testing approach we’ve taken, our transition from using Mustache as our templating language to using Jade and the different features of the language and how we’ve been using/abusing them.

The approach we used while presenting was to cover each topic in chronological order such that we showed how the code had evolved from June until November and the things we’d learned over that time.

It was actually an interesting exercise to go through while we were preparing the talk and I think it works reasonably well as it makes it possible to take people on the same journey that you’ve been on.

These were a few of the points that we focused on in the talk:

  • In our code base at the moment we have 449 unit tests, 280 integration tests and 353 functional tests, which is a very different ratio from what I’ve seen on other code bases that I’ve worked on.

    Normally we’d have way more unit tests and very few functional tests but a lot of the early functionality was transformations from XML to HTML and it was really easy to make a functional test pass so all the tests ended up there.

    Unfortunately the build time has grown in line with the approach as you might expect!

  • We originally started off using Mustache as our templating language but eventually switched to Jade because we were unable to call functions from Mustache templates. This meant that we ended up pushing view logic into our models.

    The disadvantage of switching to Jade is that it becomes possible to put whatever logic you want into the Jade files so we have to remain more disciplined so we don’t create an untestable nightmare.

  • On our team the most controversial language feature that we used was an implicit value which we created to pass the user’s language through the code base so we could display things in English or German (there’s a sketch of the idea after this list).

    Half the team liked it and the other half found it very confusing so we’ve been trying to refactor to a solution where we don’t have it anymore.

    Our general approach to writing code in Scala has been to keep it as Java-like as possible so that we don’t shoot ourselves in the foot before we know what we’re doing, and it’s arguable that this is one time when we tried to be too clever.

  • In the closing part of our talk we went through a code retrospective which we did with our whole team a couple of months ago.

    In that retrospective we wrote down the things we liked about working with Scala and the things that we didn’t and then compared them with a similar list which had been created during the project inception.

    Those are covered on the last few slides of the deck but it was interesting to note that most of the expected gains were being achieved and some of the doubts hadn’t necessarily materialised.
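For anyone curious, here’s a stripped-down sketch of the kind of implicit in question – the names are invented rather than taken from our production code:

```scala
sealed trait Language
case object English extends Language
case object German  extends Language

object Messages {
  private val greetings: Map[Language, String] =
    Map(English -> "Hello", German -> "Hallo")

  // The implicit parameter means callers never mention the language...
  def greeting(name: String)(implicit language: Language): String =
    s"${greetings(language)}, $name"
}

object Example extends App {
  implicit val language: Language = German

  // ...which is either elegant or spooky depending on which half of the team you ask
  println(Messages.greeting("Liz")) // prints "Hallo, Liz"

  // The explicit alternative we've been refactoring towards:
  // def greeting(name: String, language: Language): String
}
```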

Our conclusion was that we probably would use Scala if we were to do the project again, mainly because the data we’re working with is all XML and the support for that is much better in Scala than in Java.

There is much less code than there would be in an equivalent Java code base but I think maintaining it probably requires a bit of time working with Scala; it wouldn’t necessarily be something a Java developer could pick up immediately.

We’re still learning how to use traits and options but they’ve worked out reasonably well for us. We haven’t moved onto any of the complicated stuff such as what’s in scalaz and I’m not sure we really need to for a line of business application.

In terms of writing less code, using Scala has sped up development, but I’m not sure whether the whole team finds Scala code as easy to read as they would the equivalent Java so it’s debatable whether we’ve succeeded on that front.

Written by Mark Needham

November 24th, 2011 at 11:52 pm

Posted in XP Day


XP Day: Cynefin & Agile (Joseph Pelrine/Steve Freeman)


Another session that I attended at XP Day was one facilitated by Steve Freeman and Joseph Pelrine where we discussed the Cynefin model, something that I first came across earlier in the year at XP 2011.

[Image: the Cynefin framework (February 2011)]

We spent the first part of the session drawing out the model and coming up with some software examples which might fit into each domain.

  • Simple – when you’re going to checkin run the build
  • Complicated – certain types of architectural decisions
  • Complex – task estimation
  • Chaos – startup explosion

Steve pointed out that with simple/complicated the important thing to remember is that things on the right hand side are repeatable whereas on the other side we could do the same thing again and get a completely different result.

The most interesting part of the discussion for me was when Chris Matts joined in and suggested that in his experience people generally preferred to be in one of the quadrants more than the others.

He used Dan North as his example, suggesting that Dan prefers to be in chaotic situations.

I think I like being in the complex domain when you don’t really know what’s going to happen. I find it quite boring when things are predictable.

Traditional project managers would probably prefer to be in the simple/complicated domains because things are a bit more certain on that side.

Liz and I were discussing afterwards whether that tendency is what tends to lead to people becoming generalists rather than specialists.

If you were to become a specialist in a subject then it would suggest to me that a lot of your time would be spent in the complicated domain honing your skills.


Another discussion was around the desire when building systems to try and move the building of that system, which originally starts off being complex, into the complicated and finally into the simple domain.

Nat Pryce pointed out that we can often end up pushing a system back into chaos if we try and force it into the simple domain.

Pushing something into simple would suggest that anyone would be able to make changes to it without having any specialist/expert skills.

Someone else in the group pointed out that it’s often been thought that we can make the programming of systems something so simple that anyone can do it but that so far that theory has been proved false.

Overall this was an interesting session for me and it makes it a bit easier to understand some of the things that I see in the projects that I work on.


Written by Mark Needham

November 24th, 2011 at 10:25 pm

Posted in XP Day


XP Day: Refactoring to functional style (Julian Kelsey/Andrew Parker)

without comments

I’m attending XP Day this year and the first talk I attended was one by Julian Kelsey and Andrew Parker titled ‘Refactoring to functional style’.

I’ve worked on a Scala project for the last 6 months and previously given a couple of talks about adopting a functional style of programming in C# so this is a subject area that I find quite interesting.

The talk focused on 5 refactorings that the presenters have identified to help move imperative code to a more functional style:

  • Isolate mutation – keeping mutation in one place rather than leaking it everywhere
  • Isolate predicate – making it possible to filter collections
  • Separate loops – iterating over collections more than once if we’re doing more than one thing with the collection
  • Decide on branches once – putting conditional logic into a map as functions
  • Separate sequence of operations from execution of operations – composing functions and executing them at the end
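The talk’s examples were in Java with Guava but, purely as an illustration (in Scala, with an invented Order type), a few of these refactorings might look something like this:

```scala
case class Order(id: Int, total: BigDecimal, express: Boolean)

object FunctionalStyle {
  // Isolate predicate: give the condition a name instead of inlining it everywhere
  val isExpress: Order => Boolean = _.express

  // Before: one loop doing two jobs, with mutation leaking out of it
  def summariseBefore(orders: Seq[Order]): (Seq[Int], BigDecimal) = {
    var ids   = Vector.empty[Int]
    var total = BigDecimal(0)
    for (order <- orders if order.express) {
      ids    = ids :+ order.id
      total += order.total
    }
    (ids, total)
  }

  // After: separate loops - iterate twice, each pass doing exactly one thing
  def summariseAfter(orders: Seq[Order]): (Seq[Int], BigDecimal) = {
    val express = orders.filter(isExpress)
    (express.map(_.id), express.map(_.total).sum)
  }

  // Decide on branches once: conditional logic held in a map of functions
  val discounts: Map[String, BigDecimal => BigDecimal] = Map(
    "standard" -> identity,
    "gold"     -> (total => total * BigDecimal("0.9"))
  )
}
```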

Since they were coding in Java they made use of the Google Guava collections library to make it easier to work with collections in a functional way.

As you might imagine some of the code ends up being quite verbose due to the inability to pass functions around in Java.

I was reminded of a coding dojo we did a couple of years ago where we compared code written using lambdaj with the equivalent Scala code.

Despite the verbosity it was interesting to see that it’s actually possible to achieve a similar style of programming to what you would expect in languages like Scala, F# and Clojure.

My former colleague Dan Bodart has an alternative library for working with collections in Java called totallylazy which based on some of the latest commits looks quite neat.

One interesting thing the speakers suggested is that they are better able to see the data dependencies in their code when they chain together the functions that they want to apply to that data.

I hadn’t really thought about the data dependencies before but I generally find code written using function composition to be easier to read than any other approach I’ve seen so far.

The main reason I picked up for why the speakers think we would want to adopt a functional approach in the first place is that it limits the number of things we have to reason about.

Interestingly Jon Tirsen recently tweeted the following:

In my experience large purely functional codebases are very painful. Shared immutable, local mutable is the way to go.

We’ve mostly kept our Scala code base immutable but it’s not large by any measure (5,000 lines of production code so far) and probably not as complex as the domains Jon has worked with.

It’s an interesting observation though…immutability is no silver bullet!

Written by Mark Needham

November 22nd, 2011 at 12:13 am

Posted in XP Day


XP 2011: How complex is software?


The last session I attended at XP 2011 was a workshop run by John Mcfadyen where he introduced us to Dave Snowden’s Cynefin model, a model used to describe problems, situations and systems.

I’d come across the model previously and it had been all over my Twitter stream a couple of weeks ago as a result of Dave Snowden giving a keynote at the Lean Systems and Software conference.

These are some of the things I learnt from the workshop:

  • The model is based around understanding the correlation between cause and effect:
    • Simple – there’s an obvious correlation between the two
    • Complicated – there’s a correlation but it’s only clear after some analysis, probably by an expert in that domain
    • Complex – we can only see the correlation in retrospect, not in advance
    • Chaotic – there is no correlation between cause and effect
    • Disorder – there probably is a correlation but we don’t know what it is
  • My original understanding of the model was that you would try and work out where a system fitted into the model and I was under the assumption that most software projects would be considered ‘complex’.

    After a couple of hours in the workshop it became clear that we can actually break down the bigger system and see where its parts (sub systems) fit into the model as well.

    Since different approaches are required in different parts of the framework, we would act differently depending on where a given situation sits.

    For example a software project as a whole might be considered complex but the design of the architecture might only be complicated because some of the team members have worked on something similar previously.

  • Another interesting point that John made is that things don’t have to live in one domain forever – they can move between them!

    For example on a lot of the projects I’ve worked on it often feels that if we had known at the beginning everything we knew by the end, we could have finished the project significantly quicker.

    Effectively the project starts off being complex but if we got to do exactly the same thing again and it played out in exactly the same way then it might only be complicated.

  • We did a couple of exercises where we had to place different items onto the model – first non software related and then software related.

    It was interesting to note that in the first exercise we had quite a few items in the disorder section but in the latter we had none.

    As you would expect having experience in the group around the domain helps us understand the problems in that domain better.

    If we take this one step further it’s also beneficial to have diversity in the group because then we get a variety of perspectives and one person’s strengths can help make up for another’s weaknesses.

There’s an interesting article in the Harvard Business Review titled ‘A Leader’s Framework for Decision Making’ where Dave Snowden explains the framework in more detail using scenarios which might fit into the different areas of the model.

‘The origins of Cynefin’ is another nice write-up where Snowden describes how he came up with the model.

Written by Mark Needham

May 19th, 2011 at 9:44 am

Posted in XP 2011


“In what world does that make sense”


In her keynote at XP 2011 Esther Derby encouraged us to ask the question “in what world does that make sense?” whenever we encounter something which we consider to be stupid or ridiculous.

I didn’t think much of it at the time but my colleague Pat Kua has been asking me the question whenever I’ve been describing something that I find confusing to him.

After about the third time I noticed that it’s quite a nice tool for getting us to reflect on the systems and feedback loops that may be encouraging the behaviour witnessed.

In one of our conversations I expressed confusion at the way something had been communicated in an email.

Answering the question made me think about why the person would have gone for that approach and allowed me to see why what I initially thought was obvious actually wasn’t.

A common source of frustration for consultants at ThoughtWorks is travel and hotel booking, which is handled centrally.

People are often frustrated that they end up with a different hotel/flight than they would prefer.

Asking the question in that case helped me understand that the people doing the booking reported to the finance manager and had been told to ensure costs didn’t exceed a certain level.

In ‘Thinking In Systems’ the author points out that people (mainly me) often have a very narrow view of the world, which from my experience leads to us committing the fundamental attribution error:

…the fundamental attribution error describes the tendency to over-value dispositional or personality-based explanations for the observed behaviours of others while under-valuing situational explanations for those behaviours.

The fundamental attribution error is most visible when people explain the behaviour of others. It does not explain interpretations of one’s own behaviour—where situational factors are often taken into consideration. This discrepancy is called the actor–observer bias.

Asking this question seems to help us avoid falling into this trap.

Now I just need to remember to ask myself the question instead of jumping up the ladder of inference to conclusions!

Written by Mark Needham

May 14th, 2011 at 9:12 pm

Posted in XP 2011

Tagged with ,

XP 2011: Esther Derby – Still no silver bullets


The first keynote at XP 2011 was one given by Esther Derby titled ‘Still no silver bullets’ where she talked about some of the reasons why agile adoption seems to work in the small but often fails in the large.

Esther quoted Donella Meadows, the author of ‘Thinking in Systems’, a few times, which was an interesting coincidence for me as I’m currently reading her book.

One of the first quotes from that book was the following:

The original purpose of a hierarchy is always to help its originating subsystems do their job better

Esther pointed out that a lot of times it seems like it’s the other way around and that the hierarchy is the point of the system in the first place.

Quite a lot of the rest of the talk was around the idea of looking at the systems in which we’re building software and seeing how we can ensure that people throughout the hierarchy have information that will help them make decisions.

People closer to the ground have the day to day information but the more senior management will have contextual or system information which is also useful for decision making.

Esther referred to this as the diamond of shared information.

My observation is that while people on the ground criticise management for not having day to day information, we don’t always make the effort or realise that we’re missing the system information.

In his lightning talk on systems thinking my colleague Pat Kua said that one of his roles as a consultant is to help people see the systems in their organisations.

I think this is an astute observation as Deming pointed out that it’s difficult to see a system if you are inside it

A system cannot understand itself. The transformation requires a view from outside.

…which means it’s quite a useful thing for an outsider to do.

Another interesting observation made was that it’s useful if everyone within an organisation knows how the organisation makes money as this will help influence the way that they make decisions.

Esther used Amazon as an example of this and suggested that they make all their money off the float (I think), which means that their people want to do whatever they can to ensure that books can get to customers as quickly as possible.

I know roughly how ThoughtWorks makes money but there’s certainly more that I could learn about that.

There were some other stories around people gaming systems in order to meet their targets which reminded me of Liz Keogh’s ‘Evil Hat’ talk from QCon and is something which John Seddon points out frequently in ‘Freedom from Command and Control’.

Esther suggested that while it’s useful to measure things to increase our learning it becomes problematic when we start recording those metrics and use them as targets.

Relative improvement contracts, where we measure our year on year performance versus ourselves or our competition, were suggested as an alternative to having targets. I could easily see those being used in a similar way to targets though so I’m not entirely sure of the difference.

I quite liked the section of the talk where Esther talked about metaphors and the paths that they take people down when they hear a metaphor being used.

The following chain of events was suggested:

Metaphor -> Story -> Possible actions

If our metaphor isn’t leading to useful possible actions then perhaps we need to pick a different one.

George Lakoff’s work on metaphors was suggested for additional reading.

Olaf has also written a blog post about the talk.

Written by Mark Needham

May 13th, 2011 at 12:26 pm

Posted in XP 2011


XP 2011: Michael Feathers – Brutal Refactoring


The second session that I attended at XP 2011 was Michael Feathers’ tutorial ‘Brutal Refactoring’ where he talked through some of the things that he’s learned since he finished writing ‘Working Effectively With Legacy Code’.

I’ve found some of Michael’s recent blog posts about analysing the data in our code repositories quite interesting to read and part of this tutorial was based on the research he’s done in that area.

Clean code vs Understandable code

The session started off discussing the difference between clean code and understandable code.

Feathers gave a definition of clean code as code which is simple and has no hidden surprises and suggested that this is quite a high bar to try to obtain.

Understandable code on the other hand is about the amount of thinking that we need to do in order to understand a piece of code and can be a more achievable goal.

He also pointed out that it would be very rare for us to take a code base and refactor the whole thing and that we should get used to having our code bases in a state where not every part of them is perfect.

Behavioural economics

Next we discussed behavioural economics, where Feathers suggested that the ‘incentives’ structure of programming leads to code ending up in a bad state: it’s much easier/less time consuming for people to add code to an existing method or an existing class than to create a new method or class.

The main reason for that is the difficulty of choosing a name for a new method or class rather than any mechanical complexity of creating one in a text editor.

Feathers suggested there are probably also things that we don’t know about ourselves and the way we program.

The shape of code

Next we covered the shape of code in systems and the research that Feathers has been doing and has written a few blog posts about.

Feathers suggested that most code bases will follow a power curve whereby most files are hardly changed at all but there will be a small number of files which get changed a lot.

I tried this out on a couple of code bases that I’ve worked on and noticed the same trend.
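If you want to try it on your own repository, something like this quick sketch (not the code from the tutorial) will list the most frequently changed files:

```scala
import scala.sys.process._

// Count how many commits have touched each file in the current git repository
object Churn extends App {
  val files = Seq("git", "log", "--pretty=format:", "--name-only").!!
    .split("\n").map(_.trim).filter(_.nonEmpty)

  files.groupBy(identity)
    .map { case (file, changes) => file -> changes.length }
    .toSeq.sortBy { case (_, count) => -count }
    .take(20)                          // the power curve means the top few dominate
    .foreach { case (file, count) => println(f"$count%5d  $file") }
}
```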

Feathers showed a graph with code churn on one axis and code complexity on the other and suggested that we really need to focus our refactoring efforts on code which is changed frequently and is complex!

When chatting afterwards Feathers suggested that it would be interesting to look at the way that certain classes increased in complexity over time and see whether we could map those changes to events that happened to the code base.

Finding the design

Feathers talked about the idea of grouping together clumps of code to try and see if they make sense as a domain concept.

For example if three parameters seem to be used together throughout the code base then perhaps those three parameters together mean something.

He described it as an inside out way of deriving domain concepts as we’re working from what we already have in the code and seeing if it makes sense rather than deriving code from domain concepts.
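As an invented example of what that might look like – if a start date, end date and time zone keep travelling together, they may really be one domain concept:

```scala
import java.time.{LocalDate, ZoneId}

object Before {
  // Three parameters that always travel together through the code base...
  def usage(start: LocalDate, end: LocalDate, zone: ZoneId): String =
    s"usage from $start to $end in $zone"
}

object After {
  // ...probably mean something together: name the clump and a domain concept appears
  case class ReportingPeriod(start: LocalDate, end: LocalDate, zone: ZoneId)

  def usage(period: ReportingPeriod): String =
    s"usage from ${period.start} to ${period.end} in ${period.zone}"
}
```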

Rapid scratch refactoring

I’d previously read about scratch refactoring in Working Effectively With Legacy Code, where the idea is that we start refactoring code to how we want it to be without worrying about it actually working, and Feathers gave an example of doing this on a piece of code.

The most interesting thing about this for me was that he did the refactoring in notepad rather than in the IDE.

He said this was because the IDE’s compile warnings were distracting from the goal of the exercise which is to understand how we could improve the code.

Deciding upon architectural rules

Feathers said that when he first starts working with a team he gets people to explain to him how the system works and that as a side effect of doing this exercise they start to see ways that it could be simplified.

My colleague Pat Kua talks about something similar in his on boarding series and one of the benefits of adding new people to teams is that we end up having these discussions.

It helps to have a shared understanding of what the system is so that people will know where to put things.

We could do this by stating some rules about the way code will be designed e.g. receivers will never talk to repositories.

This seems somehow related to a recent observation of mine that it’s much easier to work within some sort of constraints than to have free rein to do whatever we want.

Systemic concerns

Feathers gave an example of a place where he’d worked where an upstream team decided to lock down a service they were providing by using the Java ‘final’ keyword on their methods so that those methods couldn’t be overridden.

He pointed out that although we can use languages to enforce certain things people will find ways around them which in this case meant that the downstream team created another class wrapping the service which did what they wanted.
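To illustrate the point (in Scala rather than the original Java, and with invented names):

```scala
// The upstream team locks their service down...
class StockService {
  final def levelFor(sku: String): Int = 0 // real lookup elided
}

// ...and the downstream team simply routes around the hard boundary
class WrappedStockService(underlying: StockService) {
  def levelFor(sku: String): Int =
    underlying.levelFor(sku) + adjustmentFor(sku) // the behaviour they wanted anyway

  private def adjustmentFor(sku: String): Int = 0 // local tweaks live here
}
```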

He also observed that the code around hard boundaries is likely to be very messy.

We also covered some other topics such as wrapping global variables/singletons in classes and passing those classes around the system and the idea of putting hypotheses/assertions into production code.

Pat’s also written some notes about this session on his blog.

Written by Mark Needham

May 11th, 2011 at 1:35 pm

Posted in XP 2011


XP 2011: J.B. Rainsberger – A Simple Approach to Modular Design

with 4 comments

After finishing my own session at XP 2011 I attended the second half of J.B. Rainsberger’s tutorial on modular design.

For most of the time that I was there he drove out the design for a point of sale system in Java while showing how architectural patterns can emerge in the code just by focusing on improving names and removing duplication.

The second half of the session was much more interesting to watch as this was when J.B. had set all the context with the code and we could start to see the points that he was trying to make.

These were some of the interesting bits that I picked up:

  • J.B. talked a lot about being able to detect smells in code both mechanically and intuitively. The latter comes from our general feel of code based on our experience while the former comes from following a set of rules/heuristics. He wrote about this earlier in the year.

    For example we might feel intuitively that our tests are unclear to read while mechanically we can see that there is duplication between our tests and code, which is what’s making them unclear.

  • By removing duplication from the point of sale example code we ended up with the MVC pattern albeit with the responsibilities in the wrong place e.g. the model/controller both had some logic that would typically belong in the view.

    I’m curious as to whether other types of code would naturally lead towards another architectural pattern without us noticing.

    It would make sense if they did seeing as patterns are typically extracted when people see a common way of solving a particular problem.

  • J.B. encouraged us to use long names to help us see problems in the code. For example naming something ‘priceAsText’ might help us see that we have primitive obsession which we may or may not want to do something about.

    It was interesting how using longer/more descriptive names made it easier to see which bits of code were similar to each other even though it wasn’t initially obvious.

  • I hadn’t heard of temporal duplication which was defined as ‘unnecessarily repeating a step across the run time of a program’.

    In the example code we were creating a map of bar code -> price every time we called the method to scan the bar code which was unnecessary – that map could be injected into the class and therefore only be created once (there’s a sketch of this after the list).

  • J.B. described his 3 values of software which he suggested he uses to explain why we need to keep our code in good shape:
    1. Features – lead to the customer making money in some shape or form
    2. Design – we want to try and keep the marginal cost of features low i.e. we want to build a code base where there is a similar cost no matter what feature we decide to implement next.
    3. Feedback – we want to get the customer to say “not what I meant” as soon as possible since they’re bound to say it at some stage i.e. we want to reduce the cost of a wrong decision
  • He drew an exponential curve showing the cost of software if we only focus on features. It was interesting to note that if you finish the project early enough then it might not be such a big deal.
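Here’s a sketch of the temporal duplication example mentioned above – invented names, not the code from the session:

```scala
case class Price(pence: Int)

// Before: the bar code -> price map is rebuilt on every single scan
class Scanner {
  def scan(barCode: String): Price = {
    val prices = Map("12345" -> Price(799), "67890" -> Price(1250))
    prices(barCode)
  }
}

// After: build the map once and inject it - each scan is now just a lookup
class InjectedScanner(prices: Map[String, Price]) {
  def scan(barCode: String): Price = prices(barCode)
}
```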

I think it’s very easy to dismiss the importance of naming in our code because it seems so trivial.

After this session I can now see that I should be spending much more time than I currently do on naming and have much room for improvement.

Written by Mark Needham

May 11th, 2011 at 12:11 pm

Posted in XP 2011


QCon London 2009: The Power of Value – Power Use of Value Objects in Domain Driven Design – Dan Bergh Johnsson


The final Domain Driven Design talk I attended at QCon was by Dan Bergh Johnsson about the importance of value objects in our code.

I thought this session fitted in really well as a couple of the previous speakers had spoken of the under-utilisation of value objects.

The slides for the presentation are available online.

What did I learn?

  • Dan started the talk by outlining the goal for the presentation which was to ‘show how power use of value objects can radically change design and code, hopefully for the better’. A lot of the presentation was spent refactoring code written without value objects into shape.
  • We started out with a brief description of what value objects are not, which I found quite amusing. To summarise, they are not DTOs, not Java beans and not objects with public fields. The aim with value objects is to swallow computational complexity from our entities. Creating what Dan termed ‘compound value objects’ provides a way to do this. The benefits of doing this are reduced complexity in entities and code which is more extensible, more testable and has fewer concurrency issues.
  • I found myself quite intrigued as to how you would be able to spot an opportunity in your code to introduce a value object and almost as soon as I wrote down the question Dan covered it! Some opportunities include strings with format limitations, integers with limitations or arguments/results in service methods. The example used was a phone number which was being passed all over the place as a string – refactoring this allowed the code to become explicit – before, the concept existed but was never properly spelled out – and it pulled all the functionality into one place (there’s a sketch of this after the list).
  • Dan’s term for this was ‘data as centres of gravity’ – once you have the value object anything related to that data will be drawn towards the object until you have a very useful reusable component. He pointed out that ‘your API has 10-30 seconds to direct a programmer to the right spot before they implement it [the functionality] themselves’. I think this was a fantastic reason for encouraging us to name these objects well as we pretty much only have the amount of time it takes to hit ‘Ctrl-N’ in IntelliJ, for example, and to type in a potential class name.
  • Another interesting point which was being discussed on Twitter as the presentation was going on was whether we should be going to our domain expert after discovering these value objects and asking them whether these objects made sense to them. Dan said that this is indeed the best way to go about it. I have to say that what struck me the most across all the presentations was the massive emphasis on getting the domain expert involved all the time.
  • Seemingly randomly Dan pointed out an approach called composite oriented programming which is all about using DDD terminology but inside a framework to drive development. I’ve only had a brief look at the website so I’m not sure if it’s anything worth shouting about.
  • In the second half of the session compound value objects were introduced – these basically encapsulate other value objects to come up with even more explicitly named objects. The examples in the slides are very useful for understanding the ideas here so I’d recommend having a look from slide 44 onwards. The underlying idea is to encapsulate multi object behaviour and context and make implicit context explicit. This idea is certainly one that is new to me so I’m going to be looking at our code base to see if there’s an opportunity to introduce the ideas I learnt in this talk.
  • To close, Dan rounded up the benefits we get from introducing value objects into our code – context aware client code, smart services and a library with an API.
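To make the phone number example concrete, here’s a rough sketch of my own (not Dan’s code) of a string growing into a value object:

```scala
// A phone number that used to be passed around as a raw string becomes a type:
// validation and behaviour are drawn towards it - the 'centre of gravity' idea
case class PhoneNumber(value: String) {
  require(value.matches("""\+?[0-9 ]{7,15}"""), s"not a phone number: $value")

  def countryCode: Option[String] =
    if (value.startsWith("+")) Some(value.drop(1).takeWhile(_ != ' ')) else None
}

object PhoneNumber {
  // Parsing lives with the type instead of being re-implemented at each call site
  def parse(raw: String): Option[PhoneNumber] =
    scala.util.Try(PhoneNumber(raw.trim)).toOption
}
```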

Written by Mark Needham

March 15th, 2009 at 9:45 am

Posted in QCon
