Mark Needham

Thoughts on Software Development

OOP: Behavioural and Structural constraints


A few months ago I wrote a post describing how we should test the behaviour of code rather than the implementation, whereby we write tests against the public API of an object rather than exposing the object’s internal data and testing against that directly.

While I still think this is a useful way of testing code I didn’t really have a good definition for what makes that a test of an object’s behaviour.

I’ve been reading through James Odell’s ‘Advanced Object-Oriented Analysis and Design Using UML’ and he describes it like so:

Behavioural constraints limit the way object state changes may occur

In Meilir Page-Jones’ language I think this would describe informative and imperative messages:

• Informative – a message telling an object about something that happened in the past.
• Imperative – a message telling an object to take some action on itself.

Both of these types of messages change the state of the object so in C# or Java these would be the public methods on an object that clients interact with.
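In Java those two kinds of message might look something like this (the Order class and its methods are my own invented illustration, not from Page-Jones):

```java
// A sketch of the two message types on a hypothetical Order class.
public class Order {
    private boolean paid = false;
    private int items = 0;

    // Imperative: telling the object to take some action on itself.
    public void addItem() {
        items++;
    }

    // Informative: telling the object about something that happened in the past.
    public void paymentWasReceived() {
        paid = true;
    }

    // Exposed so clients can observe the outcome of the messages above.
    public boolean isPaid() {
        return paid;
    }

    public int itemCount() {
        return items;
    }
}
```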

That seems to describe the way that we would test the object. These would be the methods that we’d call in our test.

Odell goes on to describe structural constraints:

Structural constraints limit the way objects associate with each other, that is, they restrict object state.

This seems close to an interrogative message:

• Interrogative – a message asking an object to reveal something about itself.

This would seem closer to the way that we would verify whether the object’s state changed as expected. We’re querying the object through the structural constraints that have been set up.
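Putting the two together, a behavioural test drives the object through its imperative messages and verifies the result through an interrogative one. A minimal Java sketch (the Account class is a made-up example):

```java
// A hypothetical Account: state changes only through public behaviour,
// and tests observe it only through the interrogative query.
public class Account {
    private int balance = 0;

    // Imperative message: tell the account to act on itself.
    public void deposit(int amount) {
        balance += amount;
    }

    // Interrogative message: ask the account to reveal something about itself.
    public int balance() {
        return balance;
    }
}
```

A test would then call deposit(...) and assert on balance(), never reaching into the object’s fields.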

I can think of two main reasons why this approach is more effective than just testing directly against the internals of an object:

• It ensures we’re testing something useful; otherwise we might be writing tests for a scenario that will never happen.
• We have a better idea of when we’ve finished writing our tests since we know when we’ve tested all the behaviour.

Written by Mark Needham

December 31st, 2009 at 4:08 pm

Posted in OOP


Coding Dojo #18: Groovy Bowling Game

This week’s dojo involved coding a familiar problem – the bowling game – in a different language, Groovy.

The code we wrote is available on bitbucket.

The Format

Cam, Dean and I took turns pairing with each other with the code projected onto a TV. As there were only a few of us, the discussion on where we were taking the code tended to include everyone rather than just the two at the keyboard.

What We Learnt

• I’ve sometimes wondered about the wisdom of running dojos in newer languages but this one worked quite well because Cam has been learning Groovy and he was able to point us in the right direction when we started writing Java-esque Groovy code. The particular syntax that I didn’t know about was that you can define and put items into a list in a much simpler way than in Java:

I was starting to write code like this:

def frames = new ArrayList<Frame>()
frames.add(frame)

Which Cam simplified down to:

def frames = []
frames << frame

I didn’t feel that I missed the static typing you get in Java although IntelliJ wasn’t quite as useful when it came to suggesting which methods you could call on a particular object.

• I’m even more convinced that using various languages’ functional equivalents of the ‘for each’ loop, in this case ‘eachWith’ and ‘eachWithIndex’, is not the way to go. We could see our code becoming very complicated when trying to work out how to score strikes, thanks to our use of it!
• I think we actually got further this time in terms of the implementation although we did slow down when it came to scoring strikes to try and work out exactly how we wanted to do it.

Prior to this we had been following the idea of just getting the tests to pass and driving the design of the code that way, but at this stage it seemed foolish to keep doing that as the complexity of the code would have increased dramatically.

The two approaches we were considering were to use the state pattern to determine the current frame’s outcome and then work out the cumulative score by looking forward to future frames, or to make use of functional collection parameters (not sure exactly which ones!) to calculate the score in a more functional rather than OO way.

For next time

• We’ll probably keep going with some more Groovy as it’s actually more interesting than I thought it would be. I’m also keen to do a coding dojo where we never make use of the if statement.

Written by Mark Needham

June 26th, 2009 at 6:15 pm

Posted in Coding Dojo


OO with a bit of functional mixed in

From my experiences playing around with F# and doing a bit of functional C# I’m beginning to think that the combination of functional and object oriented programming results in code which is more expressive and easier to work with than code written only with an object oriented approach in mind.

I’m also finding it much more fun to write code this way!

In a recent post Dean Wampler questions whether the supremacy of object oriented programming is over before going on to suggest that the future is probably going to be a mix of functional programming and object oriented programming.

I agree with his conclusion but there are some things Dean talks about which I don’t really understand:

The problem is that there is never a stable, clear object model in applications any more. What constitutes a BankAccount or Customer or whatever is fluid. It changes with each iteration. It’s different from one subsystem to another even within the same iteration! I see a lot of misfit object models that try to be all things to all people, so they are bloated and the teams that own them can’t be agile. The other extreme is “balkanization”, where each subsystem has its own model. We tend to think the latter case is bad. However, is lean and mean, but non-standard, worse than bloated, yet standardized?

I don’t think an object model needs to be stable – for me the whole point is to iterate it until we get something that fits the domain that we’re working in.

I’m not sure who thinks it’s bad for each subsystem to have its own model – this is certainly an approach that I think is quite useful. Having the same model across different subsystems makes our life significantly more difficult. There are several solutions for this outlined in Domain Driven Design.

Dean goes on to suggest that in a lot of applications data is just data and that having that data wrapped in objects doesn’t add much value.

I’ve worked on some projects which took that approach and I found the opposite to be true – if we have some data in an application it is very likely that there is going to be some sort of behaviour associated with it, meaning that it more than likely represents some concept in the business domain. I find it much easier to communicate with team mates about domain concepts if they’re represented explicitly as an object rather than just as a hash map of data, for example.

Creating objects also helps manage the complexity by hiding information and from my experience it’s much easier to make changes in our code base when we’ve managed data & behaviour in this manner.

I think there is still a place for the functional programming approach though. Functional collection parameters, for example, are an excellent way to reduce accidental complexity in our code and to remove useless state from our applications when performing operations on collections.
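As a concrete illustration, sketched in Java with the streams API playing the role of functional collection parameters (the Totals class and its data are invented), totalling a collection needs no mutable accumulator:

```java
import java.util.List;

// Totalling a collection without any intermediate mutable state:
// the loop variable and running total of the 'for each' version disappear.
public class Totals {
    public static int total(List<Integer> amounts) {
        return amounts.stream()
                      .mapToInt(Integer::intValue)
                      .sum();
    }
}
```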

I don’t think using this type of approach to coding necessarily means that we need to expose the state of our objects though – we can still make use of these language features within our objects.

The most interesting thing for me about using this approach to some areas areas of coding when using C# is that you do need to change your mindset about how to solve a problem.

I typically solve problems with a procedural mindset where you just consider the next step you need to take sequentially to solve the problem. This can end up leading to quite verbose solutions.

The functional mindset seems to be more about considering the problem as a whole and then working out how we can simplify it from the outside in, which is a bit of a paradigm shift. I don’t think I’ve completely made that shift yet, but it can certainly lead to solutions which are much easier to understand.

The other idea of functional programming that I’ve been experimenting with is that of trying to keep objects as immutable as possible. This pretty much means that every operation that I perform on an object which would previously mutate the object now returns a new instance of it.

This is much easier in F# than in C#, where you end up writing quite a lot of extra code to make it possible, and it can be a bit confusing if you’re not used to that approach.
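A sketch of the idea in Java (Money is a made-up value object): every ‘mutating’ operation returns a fresh instance rather than changing the receiver.

```java
// An immutable value object: add() returns a new Money rather than
// mutating the receiver, so existing references never see their state change.
public final class Money {
    private final int cents;

    public Money(int cents) {
        this.cents = cents;
    }

    public Money add(Money other) {
        return new Money(this.cents + other.cents);
    }

    public int cents() {
        return cents;
    }
}
```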

Sadek Drobi did a presentation at QCon London where he spoke more about taking a functional programming approach on a C# project. He’s gone further than I have with the functional approach, but my current thinking is that we should model our domain and manage complexity with objects, and when it comes to solving problems within those objects which are more algorithmic in nature, the functional approach works better.

Written by Mark Needham

April 25th, 2009 at 11:14 am

Posted in OOP


OO: Reducing the cost of…lots of stuff!

I’ve been working in the world of professional software development for a few years now and pretty much take it as a given that the best way to write code which is easy for other people to understand and work with is to write that code in an object oriented way.

Not everyone agrees with this approach of course and I’ve been told on occasions that I’m ‘over object orienting’ (is that even a word?) solutions. However, I think there are big advantages to be had from following an OO approach and I thought I’d explore these here.

Writing code in an object oriented way provides a way for us to manage complexity in an application. There always seems to be some degree of complexity, even in simple looking problems, so it makes sense to look for ways to mitigate that.

I think it’s fair to say that it is easier to write code in a procedural or quasi procedural/object oriented way whereby we tend to use ‘objects’ as data containers which we can shove data into and then take data out to use wherever we need to, and to start with we really don’t see any ill effects from doing this – the application still does what it’s supposed to, our acceptance tests pass and we chalk up our story points.

Then comes the time when we need to make changes to that code, and those times keep coming – a quote from Toby Young’s presentation at QCon London has him suggesting the following:

92% of cost is maintaining/supporting/upgrading existing systems; only 8% on new stuff.

Given that type of data I think it makes sense for us to try and make it as easy as possible to make those changes.

From my experience we tend to get this ‘nightmare to change’ mix of object oriented and procedural code mainly from violations of the law of demeter, i.e. putting getters on our classes, and from copious use of setters which leave our objects in an inconsistent state.

We can immediately get a lot of benefits by not doing this – by setting up our objects in their constructors and by telling objects what to do rather than asking for data from them and choosing what to do with that data elsewhere.
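To make the ‘telling rather than asking’ distinction concrete, here is a small Java sketch (the Basket example is mine, not from the project described):

```java
// Tell Don't Ask: the discount decision lives with the data it operates on,
// instead of a caller pulling the total out and computing the discount elsewhere.
public class Basket {
    private int totalInCents;

    public Basket(int totalInCents) {
        this.totalInCents = totalInCents;  // set up in the constructor, no setters
    }

    // We tell the basket what to do rather than asking for its data.
    public void applyDiscount(int percent) {
        totalInCents -= totalInCents * percent / 100;
    }

    public int totalInCents() {
        return totalInCents;
    }
}
```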

So what are these benefits?

Reduced cost of change

One example I came across lately was to do with the creation of models to use on our views – using the ASP.NET MVC framework.

The models were being assembled somewhat similarly to this:

var parentModel = new ParentModel
{
    ChildModel = new ChildModel
    {
        Property1 = new OtherObject(SomeCollection(), parentModel.SomeOtherProperty)
    }
};

It starts off fairly harmless, using the object initializer syntax, but what if we decide to change Property1 to be say Property2 as happened to us?

If we had set this through the constructor then we would have only had to make the change in one place and then we could have moved on.

In actual fact it turned out that there were 4 places where we were setting Property1 – all of them from within the controller in a similar way to the above example.

After a bit of investigation I realised that only one of those cases was being tested even though the property was being constructed slightly differently in a couple of the places.

As Michael Feathers teaches us in Working Effectively with Legacy Code, if we’re going to refactor code we need to ensure first of all that there is a test which covers that functionality, so that if we make a mistake we are informed of it.

This took longer than I expected due to the fact that there was quite a bit of setup needed since the only place to test this was through the controller.

If we had followed an object oriented approach here then not only would these tests have been easier to write, but they would have been written in the first place and not got forgotten about in the mess of the controller.

What was in theory a very simple change ended up taking a few hours.

Code that is much easier to test

As I mentioned earlier when we code in an object oriented way it becomes much easier to test because the context that we need to consider in our tests is massively reduced.

In the above example we should be able to test whether or not Property1 was being set correctly directly from the ChildModel rather than having to test it from our controller. Unfortunately, by having it set through a setter we can’t do this in a meaningful way.

The creation of ‘OtherObject’ should be done inside the ChildModel and we can pass in the other data into the constructor of the class and then call Property1 in our test to see whether or not we get the expected result. We might end up with a ChildModel that looks more like this:

public class ChildModel
{
    private readonly IEnumerable<string> someCollection;
    private readonly string selectedValue;

    public ChildModel(IEnumerable<string> someCollection, string selectedValue)
    {
        this.someCollection = someCollection;
        this.selectedValue = selectedValue;
    }

    public SomeObject Property1
    {
        get { return new SomeObject(someCollection, selectedValue); }
    }
}

I know the example is contrived, but hopefully the idea that putting this type of logic inside the object leads to easier to test units of code is clear.

Less need to debug

Another truly annoying consequence of creating train wreck code is that you end up with null pointer/reference exceptions all over the place and it’s quite difficult to immediately tell which part of the train caused the problem.

Since it would take a long time to get that segment of code under test and identify the problem that way out comes the debugger so that we can find out what went wrong.

It might help solve the problem this time but unless we change our coding style to remove this type of problem from happening then it’s just as likely to happen again tomorrow and we’ll be back to square one.

I hate using the debugger – it takes a long time to step through code compared to a TDD cycle and once you’ve solved the problem there isn’t an executable example/test that you can run to ensure that it doesn’t make a reappearance.

If we can write our code in an object oriented fashion then we pretty much remove this problem and our time can be spent much more effectively adding real value to our application.

Written by Mark Needham

March 12th, 2009 at 4:04 am

Posted in OOP


OO: Micro Types

Micro or Tiny types present an approach to coding which seems to divide opinion in my experience, from those who think it’s a brilliant idea to those who believe it’s static typing gone mad.

I fall into the former group.

So what is it?

The idea is fairly simple – all primitives and strings in our code are wrapped by a class, meaning that we never pass primitives around.

In essence Rule #3 of Jeff Bay’s Object Calisthenics.

As I mentioned on a previous post about wrapping dates, I was first introduced to the idea by Darren Hobbs as a way of making APIs easier for others to use.

In the world of Java, method signatures of 3rd party libraries with minimal Javadoc documentation tend to read like this when you look at them in your chosen editor:

doSomething(string, string, string, string)

The parameter name is sometimes not available meaning that it is now almost impossible to work out what each of those strings is supposed to represent – guesswork becomes the way forward!

I noticed an even more subtle example when working on a project last year where there was a method to transfer money between accounts. It looked like a bit like this:

public void transferMoney(Account debitAccount, Account creditAccount) {
    // code
}

See how easy it would be to get those accounts the wrong way around and suddenly the money is going in the wrong direction!

I always had to look twice to make sure we were doing the right thing – it was quite confusing.

Using micro types we could solve this problem by wrapping account with a more specific class. The signature could potentially read like so:

public void transferMoney(DebitAccount debitAccount, CreditAccount creditAccount) {
    // code
}

And the confusion has been removed.

The cost of doing this is obviously that we need to write more code – for the above example maybe something like this:

public class DebitAccount {
    private Account debitAccount;

    public DebitAccount(Account debitAccount) {
        this.debitAccount = debitAccount;
    }
}

We’d then delegate the necessary method calls through to the underlying Account although we probably don’t need to expose as many methods as the normal account object would since we only care about it in this specific context.
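The delegation might be sketched like so (Account’s API here is assumed purely for illustration):

```java
// The wrapped Account; its methods are invented for this sketch.
class Account {
    private int balance;

    Account(int balance) {
        this.balance = balance;
    }

    void withdraw(int amount) {
        balance -= amount;
    }

    int balance() {
        return balance;
    }
}

// DebitAccount forwards only the calls that make sense in this context.
public class DebitAccount {
    private final Account account;

    public DebitAccount(Account account) {
        this.account = account;
    }

    public void debit(int amount) {
        account.withdraw(amount);  // delegate to the underlying Account
    }

    public int balance() {
        return account.balance();
    }
}
```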

I had the opportunity to work on a project led by Nick for a couple of months last year where we were micro typing everything and I quite enjoyed it although opinion was again split.

I felt it helped to keep behaviour and the data together and was constantly forcing you to open your mind to new abstractions.

The other argument against the approach is that you are creating objects which have no behaviour.

I find here that it depends what you classify as behaviour – for me if there is some logic around the way that a string is formatted when it is going to be displayed then that is behaviour and we should look to put that logic as close to the data as possible i.e. within the micro type.

On that project each object rendered itself into a ViewData container which we accessed from our views.

public class Micro {
    private string micro;

    public Micro(string micro) {
        this.micro = micro;
    }

    public void renderTo(ViewData viewData) {
        viewData.add(micro);
    }
}

If an object contained more than one piece of data it could then decide which bits needed to be rendered.

It’s certainly not for everyone but it’s an approach that I felt made coding much more enjoyable and code much easier to navigate.

Written by Mark Needham

March 10th, 2009 at 10:40 pm

Posted in OOP


OOP: What does an object’s responsibility entail?

One of the interesting discussions I’ve been having recently with some colleagues is around where the responsibility lies for describing the representation of an object when it is to be used in another bounded context – e.g. on the user interface or in a call to another system.

I believe that an object should be responsible for deciding how its data is used rather than having another object reach into it, retrieve its data and then decide what to do with it.

Therefore if we had an object Foo whose data was needed for a service call to another system my favoured approach would be for Foo to populate the FooMessage with the required data.

public class Foo
{
    private string bar;

    public Foo(string bar)
    {
        this.bar = bar;
    }

    public void Populate(IFooMessage fooMessage)
    {
        fooMessage.Bar = bar;
    }
}

public interface IFooMessage
{
    string Bar { get; set; }
}

public class FooMessage : IFooMessage
{
    public string Bar { get; set; }
}

The advantage of this approach is that Foo has kept control over its internal data, encapsulation has been preserved. Although we could just expose ‘Bar’ and make it possible to build the FooMessage from somewhere else, this violates Tell Don’t Ask and opens up the possibility that Foo’s internal data could be used elsewhere in the system outside of its control.

The question to be answered is whether or not it should be Foo’s responsibility to generate a representation of itself. In discussion about this it was suggested that Foo has more than one responsibility if we design the class as I have.

Uncle Bob’s definition of the Single Responsibility Principle (SRP) in Agile Software Development: Principles, Patterns, and Practices describes it thus:

A class should have only one reason to change.

In this example Foo would need to change if there was a change to the way that it needed to be represented as an IFooMessage as well as for other changes not related to messaging. Foo is not dependent on a concrete class though, only an interface definition, so the coupling isn’t as tight as it might be.

It might just be my interpretation of SRP but it seems like there is a trade off to be made between ensuring encapsulation and SRP in this instance.

The other way to create a FooMessage is to create a mapper class which takes the data out of the Foo class and then creates the FooMessage for us.

We might end up with something like this:

public class FooMapper
{
    public FooMessage ConvertToMessage(Foo foo)
    {
        return new FooMessage { Bar = foo.Bar };
    }
}

Whereby Foo needs to expose Bar as a property in order to allow this mapping to be done.

This is somewhat similar to the way that NHibernate handles the persistence and loading of objects for us – unless we use the Active Record pattern an object is not responsible for saving/loading itself.

What I don’t like about this approach is that it doesn’t strike me as being particularly object oriented – in fact the mapper class would be an example of an agent noun class.

A cleaner solution which allows us to keep encapsulation intact would be to make use of reflection to build the FooMessage from our mapper, probably by creating private properties on Foo as they are easier to reflect on than fields. The downside to the reflection approach is that code written in this way is probably more difficult to understand.
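A Java sketch of that reflection approach (reading a private field rather than a private property, with all the names invented for illustration):

```java
import java.lang.reflect.Field;

class Foo {
    private final String bar;

    Foo(String bar) {
        this.bar = bar;
    }
}

class FooMessage {
    String bar;
}

// The mapper reaches Foo's private state via reflection, so Foo exposes
// nothing publicly, at the cost of a string-typed field lookup.
public class ReflectiveFooMapper {
    public FooMessage convertToMessage(Foo foo) {
        try {
            Field field = Foo.class.getDeclaredField("bar");
            field.setAccessible(true);  // bypass the private modifier
            FooMessage message = new FooMessage();
            message.bar = (String) field.get(foo);
            return message;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The string "bar" is exactly the kind of implementation detail that makes this version harder to follow and brittle under a rename.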

I know I’ve oversimplified the problem with my example but ignoring that, where should this type of code go or do we choose to make a trade off between the two approaches and pick which one best suits our situation?

Written by Mark Needham

February 9th, 2009 at 4:52 pm

Posted in OOP


Object Calisthenics: First thoughts

We ran an Object Calisthenics variation of Coding Dojo on Wednesday night as part of ThoughtWorks Geek Night in Sydney.

Object Calisthenics is an idea suggested by Jeff Bay in The ThoughtWorks Anthology, and lists 9 rules for writing better object oriented code. For those who haven’t seen the book, the 9 rules are:

1. Use only one level of indentation per method
2. Don’t use the else keyword
3. Wrap all primitives and strings
4. Use only one dot per line
5. Don’t abbreviate
6. Keep all entities small
7. Don’t use any classes with more than two instance variables
8. Use first-class collections
9. Don’t use any getters/setters/properties

We decided to try and solve the Bowling Game Problem while applying these rules. We coded in Java as this was a language everyone in the room was comfortable with. It would have been cool to try out Ruby or another language but I’m not sure if this type of setting is the best place to learn a new language from scratch.

I hadn’t arranged a projector so we couldn’t adopt the Randori approach. Instead we split into three pairs rotating every half an hour, discussing how each pair was approaching the problem at each change.

Learning from the problem

I was surprised how difficult the problem was to solve using the Object Calisthenics rules. There were several occasions when it would have been really easy to expose some state by introducing a getter, but we had to try another way to attack the problem.

We have been following the approach of wrapping all primitives and strings on my current project as ‘micro types‘ so this rule wasn’t new to me but the general feeling early on was that it was quite annoying to have to do. From my experience on my project it does help to encourage a more object oriented approach of keeping the data with the behaviour.

This approach to object orientation is very extreme but the author suggests giving it a try on some small projects as being able to code like this will result in you seeing problems in a different way. I noticed today that I was always on the lookout for ways to ensure that we didn’t expose any state so it’s had a slight influence on my approach already.

We had an interesting discussion about midway through on whether we should implement equals and hashcode methods on objects just so that we can test their equality. My general feeling is that this is fine, although it has been pointed out to me that doing so adds production code just for a test and should be avoided unless we need to put the object into a HashMap or HashSet, where the equals/hashcode methods are actually needed. The only alternatives I can think of are to not test object equality and instead only test equality where we have primitives, or to test for equality using reflection.
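For reference, the kind of code being debated: a Frame from the bowling game with equals/hashcode implemented purely so that tests can compare values (the class is my own sketch, not the dojo’s code):

```java
import java.util.Objects;

public class Frame {
    private final int firstRoll;
    private final int secondRoll;

    public Frame(int firstRoll, int secondRoll) {
        this.firstRoll = firstRoll;
        this.secondRoll = secondRoll;
    }

    // Production code added for the tests' benefit: value-based equality.
    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (!(other instanceof Frame)) {
            return false;
        }
        Frame frame = (Frame) other;
        return firstRoll == frame.firstRoll && secondRoll == frame.secondRoll;
    }

    @Override
    public int hashCode() {
        return Objects.hash(firstRoll, secondRoll);
    }
}
```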

From seeing the approaches others had taken I realised that the approach we took on my machine was too difficult – we would have been more successful by adopting baby steps.

We initially started out trying to design a solution to the problem on a white board before getting to the coding but this didn’t work particularly well so we abandoned this and went straight to the code.

Each machine had three different pairs working on the problem over the duration of the night, with one person always staying on the machine and the others rotating. While we all had slightly different approaches to the problem it would have been interesting to see if we could have progressed further using the Randori approach with everyone having input to the same code base.

None of the pairs managed to complete the problem, and there was concern that the problem was too big to fit into the 90 minutes we spent coding. After speaking with Danilo and reading his Coding Dojo paper it seems that this is not necessarily a bad thing and the focus is supposed to be more on the learning than problem completion.

It was certainly an interesting experience and I had the opportunity to work with some people that I haven’t worked with before. We are hopefully going to make these Coding Dojos a regular feature and try out some different approaches to see which works best for us.

On this occasion I selected the problem but in the future we would look to make it a group based decision depending on what people are keen to learn.

Written by Mark Needham

November 6th, 2008 at 9:30 pm

Similarities between Domain Driven Design & Object Oriented Programming

At the Alt.NET UK Conference which I attended over the weekend it occurred to me while listening to some of the discussions on Domain Driven Design that a lot of the ideas in DDD are actually very similar to those being practiced in Object Oriented Programming and related best practices.

The similarities

Anaemic Domain Model/Law of Demeter

There was quite a bit of discussion in the session about anaemic domain models.

An anaemic domain model is one where a lot of the objects are merely data holders and do not actually have any behaviour inside them. While it has a fancy name, in OO terms this problem materialises due to our failure to adhere to the Law of Demeter.

My colleague Dan Manges has a brilliant post describing this principle but a tell tale sign is that if you see code like the following in your code base then you’re probably breaking it.

object.GetSomething().GetSomethingElse().GetSomethingElse()

This is often referred to as train wreck code and comes from breaking the idea of Tell Don’t Ask. In essence we should not be asking an object for its data and then performing operations on that data, we should be telling the object what we want it to do.
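A before-and-after sketch in Java (Customer and its collaborators are invented for illustration):

```java
// Before: a train wreck - the caller digs through objects to reach data.
//   if (customer.getAddress().getCountry().getCode().equals("AU")) { ... }
//
// After: tell the customer what we want to know and let it decide.
public class Customer {
    private final String countryCode;  // simplified internal state

    public Customer(String countryCode) {
        this.countryCode = countryCode;
    }

    public boolean isBasedIn(String countryCode) {
        return this.countryCode.equals(countryCode);
    }
}
```

The caller now depends only on Customer, not on the whole chain of intermediate objects.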

Side Effect Free Functions/Command Query Separation

DDD talks about side effect free functions which are described as follows:

An operation that computes and returns a result without observable side effects

The developer calling an operation must understand its implementation and the implementation of all its delegations in order to anticipate the result.

My colleague Kris Kemper talks about a very similar OOP best practice called command query separation. From Martin Fowler’s description:

The really valuable idea in this principle is that it’s extremely handy if you can clearly separate methods that change state from those that don’t. This is because you can use queries in many situations with much more confidence, introducing them anywhere, changing their order.

It’s not exactly the same but they have a shared intention – helping to make the code read more intuitively so that we can understand what it does without having to read all of the implementation details.
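In code the separation is simple to state (a hypothetical Counter in Java):

```java
// Command/query separation: the command changes state and returns nothing;
// the query returns a result and has no observable side effects.
public class Counter {
    private int count = 0;

    public void increment() {  // command: mutates, returns void
        count++;
    }

    public int count() {       // query: safe to call anywhere, in any order
        return count;
    }
}
```

Because count() never changes anything, we can call it repeatedly or reorder calls to it with confidence; increment() is the only method we need to reason about when tracking state changes.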

Intention Revealing Interfaces/Meaningful Naming

Intention Revealing Interfaces describe a similar concept to Side Effect Free Functions although they address it slightly differently:

A design in which the names of classes, methods, and other elements convey both the original developer’s purpose in creating them and their value to a client developer.

If a developer must consider the implementation of a component in order to use it, the value of encapsulation is lost.

In OOP this would be described as using meaningful names as detailed in Uncle Bob’s Clean Code (my review).

Bounded Context/Clean Boundaries

DDD’s bounded context describes “The delimited applicability of a particular model”, i.e. the context in which it is held valid.

This is quite closely related to the idea of clean boundaries in Clean Code where Uncle Bob states:

Code at the boundaries needs clear separation and tests that define expectations

In both cases we are creating an explicit separation of ‘our code’ from the outside world so to speak. We want to clearly define where ‘our world’ ends by defining the interfaces with which we interact with the outside world.

Anti Corruption Layer/Wrappers

The anti corruption layer in DDD is “an isolating layer to provide clients with functionality in terms of their own domain model.”

It is used to create a boundary for our bounded context so that the models of other systems we interact with don’t creep into our system.

This is implemented in OO using one of the wrapper patterns. Examples of these are the Facade, Adapter, or Gateway pattern which all solve the problem in slightly different ways.

The intention in all cases is to have one area of our code which calls 3rd party libraries and shields the rest of the code from them.
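A minimal Java sketch of such a wrapper (the third-party API and all names are invented):

```java
// The awkward third-party API we want to shield the rest of the code from.
class ThirdPartyPayments {
    int makePaymentV2(int cents, boolean retryOnFailure) {
        return cents;  // pretend this calls the external system
    }
}

// The interface 'our world' programs against.
interface PaymentGateway {
    int pay(int cents);
}

// The adapter: the only place that knows about ThirdPartyPayments.
public class PaymentAdapter implements PaymentGateway {
    private final ThirdPartyPayments payments = new ThirdPartyPayments();

    @Override
    public int pay(int cents) {
        return payments.makePaymentV2(cents, false);  // translation lives here only
    }
}
```

If the third-party library changes, only PaymentAdapter needs to be touched; the rest of the code continues to see PaymentGateway.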

Domain Driven Design = Object Oriented Programming + Ubiquitous Language?

While talking through some of these ideas I started to come to the conclusion that maybe the ideas that DDD describe are in fact very similar to those that OOP originally set out to describe.

The bit that DDD gives us which has perhaps been forgotten in OOP over time is describing the interactions in our systems in terms of the business problem which we are trying to solve i.e. the Ubiquitous Language.

From Wikipedia’s Object Oriented Programming entry:

OOP can be used to translate from real-world phenomena to program elements (and vice versa). OOP was even invented for the purpose of physical modeling in the Simula-67 programming language.

The second idea of physical modeling seems to have got lost somewhere along the way and we often end up with code that describes a problem at a very low level. Instead of describing a business process we describe the technical solution to it. You can be writing OO code and still not have your objects representing the terms that the business uses.
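To illustrate the difference (the insurance domain and every name below are invented for illustration): the same logic can be written in purely technical terms or in the terms the business actually uses.

```java
// Technical-level version: correct, but tells you nothing about the business:
//   boolean flag = day > rec.get("end");

// Domain-level version: the object and its method use the words the
// business uses, so the code reads like the conversation with the BA.
class Policy {
    private final long expiryDay;  // the day number the cover runs out

    Policy(long expiryDay) {
        this.expiryDay = expiryDay;
    }

    boolean isExpiredOn(long day) {
        return day > expiryDay;
    }
}
```

Both versions compute the same thing; only the second one carries the Ubiquitous Language into the code.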

There are some things that DDD has certainly made clearer than OOP has managed. Certainly the first part of the book which talks about building a business driven Domain Model is something which we don’t pay enough attention to when using OOP.

For me personally, before I read about the concepts of DDD, I would derive a model that I thought worked and then rarely go back and re-examine it to see if it was actually accurate. Reading DDD has made me aware that this is vital, otherwise you eventually end up translating between what the code says and what the business says.

Ideas around maintaining model integrity are also an area I don’t think would necessarily be covered in OOP although some of the implementations use OOP ideas so they are not that dissimilar.

Why the dismissal of DDD?

The reason I decided to explore the similarities between these two concepts wasn’t to dismiss Domain Driven Design – I think the framework it has given us for describing good software design is very useful.

Clearly I have not mapped every single DDD concept to an equivalent in OOP. I think DDD has given a name or term to some things that we may just take for granted in OOP. Certainly the DDD ideas expressed around the design of our model are all good OOP techniques that may not be explicitly stated anywhere.

I wanted to point out these similarities as I feel it can help to reduce the fear of adopting a new concept if we know it has some things in common with what we already know – if a developer knows how to write OO code and knows design concepts very well then the likelihood is that the leap to DDD will not actually be that great.

It would be really good if we could get to the stage where when we teach the concepts of OOP we can do so in a way that emphasises that the objects we create should be closely linked to the business domain and are not just arbitrary choices made by the developers on the team.

Maybe the greatest thing about DDD is that it has brought all these ideas together in one place and made them more visible to practitioners.

I am very interested in how different things overlap, what we can learn from these intersections and what they have in common. It’s not about the name of the concept for me, but about learning the best way to deliver software and then to maintain it after it has been delivered.

Written by Mark Needham

September 20th, 2008 at 1:12 pm

My Software Development journey so far

While reading some of the rough drafts of Apprenticeship Patterns online I started thinking about the stages I have gone through on my Software Development journey so far.

I have worked in the industry for just over 3 years; 1 year at Reed Business and 2 years at ThoughtWorks. Over that time my thoughts, opinions and ways of doing things have changed, and no doubt these will continue to evolve as I learn more and more.

My time at RBI

I started working at RBI in August 2005 a few months after I finished University. My experience up to this point involved several years coding PHP in a very procedural way and a little bit of Java.

I was hired by RBI as a C# Web Developer and my work there involved working on several internal projects and looking after one of their websites.

At this stage I was still very much convinced that the art of software development lay in learning languages, so I used to spend all my time reading about C# and playing around with all the different APIs.

At this stage I was using Visual Studio without Resharper so I didn’t have the ease of Refactoring or moving code around that I now take for granted.

One of my colleagues took me under his wing and started teaching me how to write better code – separation of code across presentation/business/data layers was my first lesson. Suddenly it became so much easier to make changes! I was still writing all my code in a non-TDD way, and after one episode where I created a bug in production I started to think that surely there was a better way to develop software.

Eventually my colleague suggested to me that if I really wanted to learn how to write software then the best place to do so was at ThoughtWorks.

ThoughtWorks Days

I started working at ThoughtWorks in August 2006, hired through the TWU graduate program.

I thought I had a fairly good idea of how to write Object Oriented code but that theory was quickly disproved as I went through Object Boot Camp as part of my TWU training. The Single Responsibility principle was the overwhelming lesson learned as part of this. I also remember believing at this stage that it was all about Design Patterns.

I came back to the UK and did a couple of small projects where I first came across continuous integration and TDD before going onto my first big project.

I remember my first day on that project involved pairing with Darren Hobbs and being amazed at the speed with which he was able to move around the code using IntelliJ. It became clear to me that I had a long way to go.

Working on this project for the best part of a year I learned a lot, including how to write code in a Test Driven way and that everything you do in software is a trade off. Most importantly, though, I learned how to master the IDE – if you can do this then you feel more confident and you can complete tasks much more quickly. This is always the advice I see given to new Graduates at ThoughtWorks – learn how to use your tools!

I moved onto my second project where I was immediately surprised at how much easier I found it to move around the code base than I had at the start of my first project.

We were designing a client side application so a big part of my learning here was around testing presentation logic. Jeremy Miller’s blog proved invaluable at this stage.

It was also the first time I came across the concept of Domain Driven Design – it was amazing how much easier it was to develop software when the developers were using the same language as the BA, QA and in fact the business. InfoQ’s cut down version of Eric Evans’ famous book proved useful in helping me understand the concepts that I was seeing in our code base. I remember thinking at the time that I didn’t need to bother reading DDD as it was all covered in this version – I was wrong!

We had a very lightweight version of Agile being used on this project – we tried to have minimal process and do as many things as possible just when we needed them. It was almost Lean in nature although this was never explicit. It was interesting to me how easy and fun software development could be when it was done like this.

My third project was the first time that I got the opportunity to work with legacy code – i.e. code that hadn’t been unit tested. My early lessons on trade offs came back to me here as I realised that not writing unit tests is a trade off – you can choose to go more quickly initially by not writing them but eventually it comes back to haunt you.

I was working with Alexandre Martins on this project, and his enthusiasm for writing clean Object Oriented code gave me a new outlook on writing code. Working with him got me in the frame of mind of hating exposing the internals of classes and constantly looking for other ways to solve the problem when I was considering doing so.

Halvard Skogsrud’s knowledge around concurrency was another eye opener for me around how non functional requirements should have an impact on the way that software is designed. It also introduced me to the way that other languages such as Erlang handle concurrency – and behind this the idea of having as much of your code immutable as possible to avoid threading issues.
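A minimal sketch of that immutability idea in Java (the Account class and its names are invented for illustration): instead of mutating state, each operation returns a new object, so a value can be shared between threads without locking.

```java
// An immutable value: all fields final, no setters.
final class Account {
    private final long balanceInPence;

    Account(long balanceInPence) {
        this.balanceInPence = balanceInPence;
    }

    long balanceInPence() {
        return balanceInPence;
    }

    // "Mutation" returns a fresh instance; the original is untouched,
    // so another thread holding a reference to it always sees a
    // consistent value and no synchronisation is needed.
    Account deposit(long amountInPence) {
        return new Account(balanceInPence + amountInPence);
    }
}
```

This is the same principle that languages like Erlang push much further by making immutability the default.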

During a debate at a ThoughtWorks Geek Night another colleague brought up Alistair Cockburn’s Hexagonal Architecture, which was the first time that I had come across an Architectural Design Pattern. This is a useful technique when thinking about the design of systems at a higher level.

On my next project I did a lot of work around build and deployment which gave me the insight that developing software is about more than just the code. This was a lesson first taught to me by Chris Read a year before but it finally made sense to me.

A big part of this project was inter process communication between different components of the system which introduced me to the idea of event driven messaging. I immediately saw the benefits of this over the RPC style messaging I had seen previously.

I also had the opportunity to do some work with Ruby on Rails and in particular around the use of Active Resource. This introduced me to the idea of RESTful web services which feels like a much more natural way to communicate over the web than any of the other approaches I have come across.

In Summary

The interesting thing for me is that I didn’t plan to gain any of these learnings, they came about as a natural progression from my interest in software development and from working on different projects with different people.

The biggest things I have learned since I started working in software development are that it is much more an art than a science and that there is no right or wrong, just trade offs that we should be aware of.

I still have a lot to learn but I thought it would be good to have a look at what I’ve learnt so far in the hope it can help others just starting out on their journey.

It would be interesting to hear about others’ journeys and the similarities and differences you have experienced.

Written by Mark Needham

September 1st, 2008 at 1:01 am

Encapsulation in build scripts using nant

When writing build scripts it’s very easy to descend into complete XML hell when you’re using a tool like nant.

I previously wondered whether it was possible to TDD build files, and while this is difficult given the dependency model most build tools follow, that doesn’t mean we can’t apply other good design principles from the coding world.

Encapsulation is one of the key principles of OOP and it can be applied in build files too. Stephen Chu talks about this in his post on Pragmatic Nant Scripting where he recommends having 3 different levels of targets to help create this encapsulation.
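As a sketch of what that layering might look like (the target names here are invented for illustration, not from Stephen’s post): a small number of public targets that developers actually call, delegating to private worker targets, which in turn push the low level detail down into macros.

```xml
<!-- Public target: the only one developers are expected to call. -->
<target name="test" depends="compile, run.unit.tests" />

<!-- Private worker target: orchestrates a single step of the build. -->
<target name="run.unit.tests">
  <!-- Low level detail is encapsulated in a macro such as cover.tests. -->
  <cover.tests in.assemblies="MyProject" />
</target>
```

Each level only knows about the level directly below it, which is the same encapsulation we aim for in our production code.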

I’ve been trying to follow this advice with our build scripts and today Bernardo made the suggestion of using macros to make the build read more like English. He calls it OO Scripting – it’s effectively a DSL inside a DSL if you like.

I was having problems with the ncover nant task – an error message was being thrown every time I called it.

I managed to find the source code for that class and had a look at it but I couldn’t figure out what was going wrong without debugging through it. The strange thing was that it worked fine from the command line which suggested to me that I was getting something simple wrong.

I created a cover.tests macro to encapsulate the details of how I was executing the coverage.

The plan was to get it working using an exec call to the ncover executable and then phase the ncover nant task back in when I’d figured out what I was doing wrong.

This is what I started out with:

<macrodef name="cover.tests">
    <attributes>
        <attribute name="in.assemblies" />
    </attributes>
    <sequential>
        <copy file="\path\to\Coverage.xsl" tofile="${report.dir}\Coverage.xsl" />
        <exec program="..\lib\NCover-1.5.8\NCover.Console.exe">
            <arg value="..\lib\nunit-2.4\nunit-console.exe" />
            <arg value="${build.dir}\UnitTests\UnitTests.dll" />
            <arg value="//a" />
            <arg value="${in.assemblies}" />
            <arg value="//x" />
            <arg value="${report.dir}\Unit.Test.Coverage.xml" />
        </exec>
    </sequential>
</macrodef>

//a is the assemblies to include in the report

//x is the name of the report xml file which will be created

The full list of arguments is in the NCover documentation.

The macro was then called with the in.assemblies attribute set to the assemblies to include in the coverage report.

I substituted the ncover task back in with the same parameters as above and lo and behold it worked!


I’m not sure exactly what the problem parameter was but encapsulating this part of the build gave me the option of working that out in a way that impacted very little of the rest of the build file.

*Update*
Fixed the first example to include the opening as pointed out by Vikram in the comments. Thanks again Vikram for pointing that out!

Written by Mark Needham

August 21st, 2008 at 12:40 am

Posted in Build
