Mark Needham

Thoughts on Software Development

Archive for April, 2009

Coding Dojo #13: TDD as if you meant it

with 9 comments

We decided to follow Keith Braithwaite’s ‘TDD as if you meant it’ exercise which he led at the Software Craftsmanship Conference and which I originally read about on Gojko Adzic’s blog.

We worked on implementing a Flash Message interceptor, to hook into the Spring framework, that one of my colleagues has been working on – the idea is to show a flash message to the user, that message being stored in the session on a Post and then removed on a Get in the ‘Post-Redirect-Get’ cycle. It’s similar to the ‘:flash’ messages that get passed around in Rails.
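Roughly, the idea could be sketched like this – the FlashMessageStore name comes up again below, but this sketch and its method names are illustrative rather than the actual interceptor code:

```java
// Illustrative sketch of the Post-Redirect-Get flash message idea, not the
// real interceptor: the message is stored on the POST so that it survives
// the redirect, then read and cleared on the GET that follows.
class FlashMessageStore {
    private String message;

    // Called on the POST: remember the message across the redirect.
    public void put(String message) {
        this.message = message;
    }

    // Called on the GET: hand the message over once, then clear it.
    public String takeMessage() {
        String result = message;
        message = null;
        return result;
    }
}
```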

The Format

We used the Randori approach with five people participating for the whole session.

What We Learnt

  • We were following these rules for coding:
    1. write exactly ONE failing test
    2. make the test from (1) pass by first writing implementation code IN THE TEST
    3. create a new implementation method/function by:
      1. doing extract method on implementation code created as per (2), or
      2. moving implementation code as per (2) into an existing implementation method
    4. only ever create new methods IN THE TEST CLASS
    5. only ever create implementation classes to provide a destination for extracting a method created as per (4).
    6. populate implementation classes by doing move method from a test class into them
    7. refactor as required
    8. go to (1)

    Despite having read about Gojko’s experiences of this exercise I still found it amazingly frustrating early on taking such small steps and the others pointed out a couple of times that the steps we were taking were too big. The most difficult thing for me was the idea of writing the implementation in the test and working out what counts as implementation code and what counts as test setup. The line seemed to be a little bit blurred at times.

  • We worked out after writing 5 or 6 tests that we had overcomplicated things – we originally started out using a map to represent the session and then called the get and put methods on that to represent putting the flash message into and taking it out from the session. We decided to redo these tests so that the flash message was just represented as a string which we manipulated. This second approach guided us towards the idea of having a FlashMessageStore object with a much simpler API than a Map.
  • We started off only extracting methods when there was duplication in our tests which forced us to do so. As a result of doing this I think we ended up having the implementation details in the test for longer than the exercise intended. We didn’t introduce a class to hold our methods for quite a while either – the idea we were following was that we wouldn’t create a class unless we had three methods to put on it. Once we got the hang of the exercise we started creating those methods and classes much earlier.
  • Dave pointed out an interesting way of writing guard blocks which I hadn’t seen before – the idea is that if you want to exit from a method it’s fine not to have the {} on the if statement as long as you keep the whole statement on one line. Roughly something like this:
    public void clearFlashMessageIfRequestTypeIsGet(Request request) {
    	if(!"get".equalsIgnoreCase(request.getType())) return;
    	// do other stuff
    }
  • It seemed like following these rules guided us towards tiny/micro types in our code and the methods we extracted were really small compared to the ones we would have written if we’d started off writing the test and implementation separately.
  • We had an interesting discussion towards the end of the dojo about the merits of wrapping framework libraries with our own types. For example we might choose to wrap the HttpServletRequest in our own Request object so that we can control the API in our code. Although I think this might initially be a bit confusing to people who expect to see a certain set of methods available when they see they are using a Request object, this approach seems to be following Steve McConnell’s advice in Code Complete of ‘coding into a language’ rather than in a language – we are moulding the framework to our system’s needs.
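As an illustration of rules (1) and (2), a first test might look something like this – a made-up Java example rather than what we actually wrote:

```java
// Rules (1) and (2) in practice: exactly one failing test, with the
// implementation code written directly IN the test until extract method
// pulls it out into its own place.
class FlashMessageTest {
    public void shouldStoreFlashMessageUntilItIsRead() {
        // implementation code lives in the test for now
        String flashMessage = "Your changes have been saved";
        String readMessage = flashMessage;
        flashMessage = null;

        assert "Your changes have been saved".equals(readMessage);
        assert flashMessage == null;
    }
}
```

The interesting part is how little of this counts as “test” – almost everything above the assertions is implementation waiting to be extracted.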

For next time

  • I really enjoy the dojos where we experiment with an approach which is slightly different from what we do on projects. Although the language (Java) was familiar to everyone it was still challenging to code in a very different way from the one we’re used to. We’ll probably try this exercise again next time with another problem.

Written by Mark Needham

April 30th, 2009 at 6:12 am

Posted in Coding Dojo

F#: Overloading functions/pattern matching

with 3 comments

While trying to refactor my twitter application into a state where I could use Erlang style message passing to process some requests asynchronously while still hitting twitter to get more messages I came across the problem of wanting to overload a method.

By default it seems that you can’t do method overloading in F# unless you make use of the OverloadID attribute which I learnt about from reading Scott Seely’s blog post:

Adding the OverloadID attribute to a member permits it to be part of a group overloaded by the same name and arity. The string must be a unique name amongst those in the overload set. Overrides of this method, if permitted, must be given the same OverloadID, and the OverloadID must be specified in both signature and implementation files if signature files are used.

I therefore ended up with this somewhat horrible looking code:

type TwitterMessageProcessor() =
    let agent = MailboxProcessor.Start(fun inbox ->
        (* ... *) )

    member x.Send(message:TwitterStatus) = agent.Post(Phrase(message))
    member x.Send(message: seq<TwitterStatus>) = message |> Seq.iter (fun message -> agent.Post(Phrase(message)))

This allows me to either send a single TwitterStatus or a collection of TwitterStatuses to the TwitterMessageProcessor but it feels like the C# approach to solving the problem.

Talking to Dave about this problem he suggested that maybe pattern matching was the way to go about this problem but I wasn’t sure how to do pattern matching based on the input potentially being a different type.

A bit of googling turned up a Stack Overflow thread about defining functions to work on multiple types of parameters.

I tried this out and ended up with the following code, which boxes the message and pattern matches on its runtime type:

type TwitterMessageProcessor() =
    let agent = MailboxProcessor.Start(fun inbox ->
        (* ... *) )

    member x.Send(message) =
        match box message with
        | :? seq<TwitterStatus> as message -> message |> Seq.iter (fun message -> agent.Post(Phrase(message)))
        | :? TwitterStatus as message -> agent.Post(Phrase(message))
        | _ -> failwith "Unmatched message type"

This seems a bit nicer but obviously we lose our type safety that we had before – you can pretty much send in anything you want to the Send method. Looking at the code in Reflector confirms this:

public void Send<T>(T message)
{
    object obj2 = message;
    IEnumerable<TwitterStatus> $typeTestResult@37 = obj2 as IEnumerable<TwitterStatus>;
    if ($typeTestResult@37 != null)
    {
        IEnumerable<TwitterStatus> message = $typeTestResult@37;
        Seq.iter<TwitterStatus>(new Twitter.clo@37_1(this), message);
    }
    else
    {
        TwitterStatus $typeTestResult@38 = obj2 as TwitterStatus;
        if ($typeTestResult@38 != null)
        {
            TwitterStatus message = $typeTestResult@38;
            this.agent@16.Post(new Twitter.Message._Phrase(message));
        }
        else
        {
            Operators.failwith<Unit>("Unmatched message type");
        }
    }
}

I’m not really happy with either of these solutions but I haven’t come across a better way to achieve this, so I’d be interested to hear if anyone has any ideas.
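The same trade-off can be sketched in Java (illustrative names, not a translation of the real TwitterMessageProcessor): overloads keep compile-time safety, while a single method taking Object with runtime type tests will accept anything and can only fail at runtime:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the trade-off: overloaded methods vs a single
// Object-taking method dispatching on runtime type.
class MessageProcessor {
    final List<String> posted = new ArrayList<>();

    // Overloaded versions: the compiler rejects unsupported argument types.
    void send(String status) { posted.add(status); }
    void send(List<String> statuses) { statuses.forEach(this::send); }

    // Pattern-match style: accepts anything, fails only at runtime.
    void sendUnchecked(Object message) {
        if (message instanceof List) {
            for (Object each : (List<?>) message) sendUnchecked(each);
        } else if (message instanceof String) {
            posted.add((String) message);
        } else {
            throw new IllegalArgumentException("Unmatched message type");
        }
    }
}
```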

Written by Mark Needham

April 28th, 2009 at 11:43 pm

Posted in F#

Coding: Weak/Strong APIs

with 4 comments

An interesting problem that I’ve come across a few times in the last couple of weeks centres around how strongly typed we should make the arguments to public methods on our objects.

There seem to be benefits and drawbacks with each approach so I’m not sure which approach is better – it possibly depends on the context.

When we have a strong API the idea is that we pass an object as the argument to a method on another object.

To give an example, recently a colleague and I were working on a little application to compare two objects and show the differences between them.

One of the decisions to be made was how to accumulate the differences. We created PropertyDifference and PropertyDifferences objects to do this, but the question became who should have the responsibility for creating the PropertyDifference.

The choice was between having PropertyDifferences API look like this:

public class PropertyDifferences
{
	public void Add(PropertyDifference propertyDifference)
	{
		// code to add a difference
	}
}

Or like this:

public class PropertyDifferences
{
	public void Add(string propertyName, object actualValue, object expectedValue)
	{
		var propertyDifference = new PropertyDifference(propertyName, actualValue, expectedValue);
		// code to add a difference
	}
}

In the former the client (ObjectComparer) needs to create the PropertyDifference before passing it to PropertyDifferences, whereas in the latter that responsibility rests with PropertyDifferences.

I’m in favour of the former strong API approach which James Noble describes in his paper ‘Arguments and Results’ as the Arguments Object.

What I like about this approach is that it simplifies the API of PropertyDifferences – we just have to pass in a PropertyDifference and then we don’t need to worry about it. I also find it more expressive than having each of the arguments individually.
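At the call site the two styles look something like this – a Java-flavoured sketch mirroring the example above rather than the real project code:

```java
import java.util.ArrayList;
import java.util.List;

// Java-flavoured sketch of the two API styles (illustrative, not the
// project's actual C# code).
class PropertyDifference {
    final String propertyName;
    final Object actualValue, expectedValue;

    PropertyDifference(String propertyName, Object actualValue, Object expectedValue) {
        this.propertyName = propertyName;
        this.actualValue = actualValue;
        this.expectedValue = expectedValue;
    }
}

class PropertyDifferences {
    final List<PropertyDifference> differences = new ArrayList<>();

    // Strong API: the client hands us a ready-made arguments object.
    void add(PropertyDifference difference) {
        differences.add(difference);
    }

    // Weak API: the client passes raw values and we create the object here.
    void add(String propertyName, Object actual, Object expected) {
        differences.add(new PropertyDifference(propertyName, actual, expected));
    }
}
```

With the strong API the client writes `differences.add(new PropertyDifference("name", actual, expected))`; with the weak one it writes `differences.add("name", actual, expected)` and never sees PropertyDifference at all.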

However, while reading through Object Design this morning I’ve started to see that there can be some benefits in the weak API approach as well.

(from page 7 of the book)

As we conceive our design, we must constantly consider each object’s value to its immediate neighbourhood. Does it provide a useful service? Is it easy to talk to? Is it a pest because it is constantly asking for help?…The fewer demands an object makes, the easier it is to use.

By that logic PropertyDifferences is making itself more difficult to use by demanding that it gets sent a PropertyDifference, since objects which use it now need to create that object. The other way of reading it is that demanding three separate arguments instead of one is itself the more demanding option.

The other advantage I can see with the weak API is that we can reduce the number of places in the code where we new up a PropertyDifference. If we then decide to change the way that we create a PropertyDifference then we have fewer places to change.

The down side is that we end up coupling PropertyDifferences and PropertyDifference which maybe isn’t so bad since they are fairly closely related.

I still favour having a stronger API on objects since I believe it makes objects more expressive and I currently consider that to be the most important thing when coding but weaker APIs certainly have their place too.

Written by Mark Needham

April 27th, 2009 at 8:30 pm

Posted in Coding

F#: Not equal/Not operator

with one comment

While continuing playing with my F# twitter application I was trying to work out how to exclude the tweets that I posted from the list that gets displayed.

I actually originally had the logic the wrong way round so that it was only showing my tweets!

let excludeSelf (statuses:seq<TwitterStatus>) = 
    statuses |> Seq.filter (fun eachStatus ->  eachStatus.User.ScreenName.Equals("markhneedham"))

Coming from the world of Java and C# ‘!’ would be the operator to find the screen names that don’t match my own name. So I tried that.

let excludeSelf (statuses:seq<TwitterStatus>) = 
    statuses |> Seq.filter (fun eachStatus -> !(eachStatus.User.ScreenName.Equals("markhneedham")))

Which leads to the error:

Type constraint mismatch. The type 'bool' is not compatible with the type 'bool ref'.

If we look at the definition of ‘!’ we see the following:

> val it : ('a ref -> 'a)

It’s not a logical negation operator at all. In actual fact it’s an operator used to dereference a mutable reference cell.

What I was looking for was actually the ‘not’ operator.

let excludeSelf (statuses:seq<TwitterStatus>) = 
    statuses |> Seq.filter (fun eachStatus ->  not (eachStatus.User.ScreenName.Equals("markhneedham")))

I could also have used the ‘<>‘ operator although that would have changed the implementation slightly:

let excludeSelf (statuses:seq<TwitterStatus>) = 
    statuses |> Seq.filter (fun eachStatus ->  eachStatus.User.ScreenName <> "markhneedham")

The Microsoft Research F# website seems to be quite a useful reference for understanding the different operators.

Written by Mark Needham

April 25th, 2009 at 10:12 pm

Posted in F#

Writing unit tests can be fun

with 3 comments

I recently came across Pavel Brodzinski’s blog and while browsing through some of his most recent posts I came across one discussing when unit testing doesn’t work.

The majority of what Pavel says I’ve seen happen before on projects I’ve worked on but I disagree with his suggestion that writing unit tests is boring:

6. Writing unit tests is boring. That’s not amusing or challenging algorithmic problem. That’s not cool hacking trick which you can show off with in front of your geeky friends. That’s not a new technology which gets a lot of buzz. It’s boring. People don’t like boring things. People tend to skip them.

Small steps

While working on a little application to parse some log files last week I had to implement an algorithm to find the closing tag of an XML element in a stream of text.

I had a bit of an idea of how to do that but coming up with little examples to drive out the algorithm helped me a lot as I find it very difficult to keep large problems in my head.

The key with following the small steps approach is to only write one test at a time, as that helps keep you focused on just one use of the class, which I find much easier than considering all the cases at the same time.

The feeling of progress all the time, however small, contributes to my enjoyment of using this approach.
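For example, a first small step at the closing tag problem might look something like this – a hypothetical sketch driven out by one tiny example, not the actual log parser code:

```java
// A first tiny step for the closing-tag algorithm: one example, the
// simplest implementation that satisfies it. Hypothetical sketch, not
// the real parser.
class ClosingTagFinder {
    // Returns the index where the closing tag starts, or -1 if absent.
    static int findClosingTag(String text, String elementName) {
        return text.indexOf("</" + elementName + ">");
    }
}
```

The next examples (nested elements, tags split across chunk boundaries) would then drive the algorithm out one small step at a time.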

Test first

I think a lot of the enjoyment comes from writing unit tests before writing code, TDD style.

The process of moving up and down the code as we discover different objects that should be created and different places where functionality should be written means that writing our tests/examples first is a much more enjoyable process than writing them afterwards.

The additional enjoyment in this process comes from the fact that we often discover scenarios of code use and problems that we probably wouldn’t have come across if we hadn’t driven our code that way.

Ping pong pairing

I think this is the most fun variation of pair programming that I’ve experienced, the basic idea being that one person writes a test, the other writes the code and then the next test before the first person writes the code for that test.

I like it to become a bit of a game whereby when it’s your turn to write the code you write just the minimal amount of code possible to make the test pass before driving out a proper implementation with the next test you write.

I think this makes the whole process much more light hearted than it can be otherwise.

In Summary

The underlying premise of what makes writing unit tests fun pretty much seems to be driving our code through those tests, preferably while working with someone else.

Even if we choose not to unit test because we find it boring we’re still going to test the code whether or not we do it in an automated way!

Written by Mark Needham

April 25th, 2009 at 7:51 pm

Posted in Testing

OO with a bit of functional mixed in

with 2 comments

From my experiences playing around with F# and doing a bit of functional C# I’m beginning to think that the combination of functional and object oriented programming results in code which is more expressive and easier to work with than code written with only an object oriented approach in mind.

I’m also finding it much more fun to write code this way!

In a recent post Dean Wampler questions whether the supremacy of object oriented programming is over before going on to suggest that the future is probably going to be a mix of functional programming and object oriented programming.

I agree with his conclusion but there are some things Dean talks about which I don’t really understand:

The problem is that there is never a stable, clear object model in applications any more. What constitutes a BankAccount or Customer or whatever is fluid. It changes with each iteration. It’s different from one subsystem to another even within the same iteration! I see a lot of misfit object models that try to be all things to all people, so they are bloated and the teams that own them can’t be agile. The other extreme is “balkanization”, where each subsystem has its own model. We tend to think the latter case is bad. However, is lean and mean, but non-standard, worse than bloated, yet standardized?

I don’t think an object model needs to be stable – for me the whole point is to iterate it until we get something that fits the domain that we’re working in.

I’m not sure who thinks it’s bad for each subsystem to have its own model – this is certainly an approach that I think is quite useful. Having the same model across different subsystems makes our life significantly more difficult. There are several solutions for this outlined in Domain Driven Design.

Dean goes on to suggest that in a lot of applications data is just data and that having that data wrapped in objects doesn’t add much value.

I’ve worked on some projects which took that approach and I found the opposite to be true – if we have some data in an application it is very likely that there is going to be some sort of behaviour associated to it, meaning that it more than likely represents some concept in the business domain. I find it much easier to communicate with team mates about domain concepts if they’re represented explicitly as an object instead of just as a hash map of data for example.

Creating objects also helps manage the complexity by hiding information and from my experience it’s much easier to make changes in our code base when we’ve managed data & behaviour in this manner.

I think there is still a place for the functional programming approach though. Functional collection parameters, for example, are an excellent way to reduce accidental complexity in our code and remove useless state from our applications when performing operations on collections.

I don’t think using this type of approach to coding necessarily means that we need to expose the state of our objects though – we can still make use of these language features within our objects.
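As an example of what a functional collection parameter buys us (in Java, whose streams arrived well after this post, so treat it as a retrospective sketch), compare the procedural and functional versions of the same transformation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative comparison: the same transformation written procedurally
// (mutable accumulator state) and functionally (the operation as a whole).
class Shouter {
    // Procedural version: step-by-step with a mutable accumulator.
    static List<String> shoutProcedural(List<String> names) {
        List<String> result = new ArrayList<>();
        for (String name : names) {
            result.add(name.toUpperCase());
        }
        return result;
    }

    // Functional version: no intermediate mutable state in our code.
    static List<String> shout(List<String> names) {
        return names.stream().map(String::toUpperCase).collect(Collectors.toList());
    }
}
```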

The most interesting thing for me about using this approach to some areas of coding when using C# is that you do need to change your mindset about how to solve a problem.

I typically solve problems with a procedural mindset where you just consider the next step you need to take sequentially to solve the problem. This can end up leading to quite verbose solutions.

The functional mindset seems to be more about considering the problem as a whole and then working out how we can simplify that problem from the outside in, which is a bit of a paradigm shift. I don’t think I’ve completely made that shift yet, but it can certainly lead to solutions which are much easier to understand.

The other idea of functional programming that I’ve been experimenting with is that of trying to keep objects as immutable as possible. This pretty much means that every operation that I perform on an object which would previously mutate the object now returns a new instance of it.

This is much easier in F# than in C# where you end up writing quite a lot of extra code to make that possible and can be a bit confusing if you’re not used to that approach.
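In Java the same extra ceremony shows up – every ‘mutating’ operation returns a new instance instead of changing the existing one (an illustrative sketch, not code from the twitter application):

```java
// Illustrative immutable value object: operations return new instances
// rather than mutating this one, so existing references never change.
final class Money {
    private final int amount;

    Money(int amount) {
        this.amount = amount;
    }

    // "Mutation" as a new object: the original Money is untouched.
    Money add(int more) {
        return new Money(amount + more);
    }

    int amount() {
        return amount;
    }
}
```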

Sadek Drobi did a presentation at QCon London where he spoke more about taking a functional programming approach on a C# project. While he’s gone further with the functional approach than I have, my current thinking is that we should model our domain and manage complexity with objects, but when it comes to solving problems within those objects which are more algorithmic in nature the functional approach works better.

Written by Mark Needham

April 25th, 2009 at 11:14 am

Posted in OOP

Pimp my architecture – Dan North

with 5 comments

My colleague Dan North presented a version of a talk he first did at QCon London titled ‘Pimp my architecture’ at the ThoughtWorks Sydney community college on Wednesday night. He’ll also be presenting it at JAOO in Sydney and Brisbane in a couple of weeks’ time.

The slides for the talk are here and it’s also available on InfoQ.

What did I learn?

  • I quite liked the way the talk was laid out – Dan laid out a series of problems that he’s seen on some projects he’s worked on and then showed on the next slide where he planned to take the architecture. The rest of the talk then detailed the story of how the team got there.

    To begin with it was a case of SOA gone bad with clients heavily coupled to services via WSDL definitions, a lot of code duplication, a complex/flaky architecture featuring a non standard JBoss and a team where the developers worked in silos.

    The aim was to get to a ‘good SOA’ where the services were actually helpful to clients, the code would be stable in production/deterministically deployable and a happy team would now exist.

  • The role of an architect is to play a key role in design, to be the technology expert on the team, to act as a coach and a social anthropologist/shaman. An interesting theme for me in the talk was that so much of it centred around the human aspects of architecture. I guess maybe that’s obvious but a lot of what I’ve read about architecture comes from the technical side and providing the technical direction so it was refreshing to see a different approach.
  • As mentioned above, Dan spoke of the need to have a project shaman – someone who can share the history of the project and why certain decisions were made on the project and explain those to people when they join the team. It can be the architect but it doesn’t actually matter as long as someone on the team assumes the role. Another colleague of mine pointed out that this role is also about envisioning the future of the system. As with most things when we know the context in which something was done the decision doesn’t seem quite so stupid.
  • One of the interesting ideas which Dan spoke of was that of having a transitional architecture on your way to the architecture that you actually want. You know that it’s not the end goal but it’s a much better place to be than where you were and can provide a nice stepping stone to where you want to be. The example he gave was refactoring the architecture to a stage where the services could be accessed via a service locator. It’s never going to be the end goal but provides a nice middle ground.
  • Dan provided quite an interesting alternative to the ‘yesterday I did this, today I’m doing that…’ style of standups which I haven’t seen used before. The idea is that the team consider what would make today the best possible day in achieving their goal, and then, going around the circle, each team member adds information that will help the team reach that goal – be it stuff they learnt the day before or areas that they have been struggling in. He also spoke of the idea of people helping each other rather than pairing to try and get past the reluctance of having two people working on the same machine.
  • The idea that you get what you measure was also mentioned – if we measure the performance of developers by the number of story points they complete then we increase the chance that they’re just going to finish stories as quickly as possible without caring as much about the quality of the code being written, potentially leading to more buggy code. I’m interested in reading the Influencer to understand more about this type of thing.
  • Dan pointed out that we should never write caches unless that happens to be our differentiator – they’ve been done loads of times before and there are plenty to choose from. This ties in with the Domain Driven Design idea of focusing our efforts on the core domain and not worrying so much about other areas since they aren’t what makes us special.
  • It was also pointed out that we won’t fix everything on a project. I think this is a very astute observation and makes it easier to work with code bases where we want to make a lot of changes. At times it can feel that you want to change just about everything but clearly that’s not going to happen.
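A transitional service locator of the kind mentioned above might be as simple as this – a generic sketch rather than anything from the actual project:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A bare-bones service locator as a transitional step: clients ask for a
// service by interface instead of being wired directly to concrete
// endpoints, making the eventual move to proper injection easier.
class ServiceLocator {
    private final Map<Class<?>, Supplier<?>> services = new HashMap<>();

    <T> void register(Class<T> type, Supplier<T> factory) {
        services.put(type, factory);
    }

    <T> T locate(Class<T> type) {
        Supplier<?> factory = services.get(type);
        if (factory == null) {
            throw new IllegalStateException("No service registered for " + type);
        }
        return type.cast(factory.get());
    }
}
```

It is never going to be the end goal, but it decouples clients from service construction while the rest of the architecture catches up.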

Written by Mark Needham

April 25th, 2009 at 1:26 am

DDD: Making implicit concepts explicit

with 5 comments

One of my favourite parts of the Domain Driven Design book is where Eric Evans talks about making implicit concepts in our domain model explicit.

The book describes this process like so:

Many transformations of domain models and the corresponding code happen when developers recognize a concept that has been hinted at in discussion or present implicitly in the design, and they then represent it explicitly in the model with one or more objects or relationships.

Lu and I were working on a small application to parse the WCF message log file on our project into more readable chunks, whereby each request in the file would be output into a separate file so that it would be possible to read the requests individually.

We decided to create a little domain model for this since the code seemed to be getting a bit tricky to handle when it was all being written inline inside a main method.

To start with we just had a collection of requests which was an accurate representation of the way that the data was being stored in the log file.

We collected all these requests and then created individual files for each of them. We also grouped these request files under directories by the session that the request was from.

The input/output of our application looked a bit like this:


The next idea suggested for this little application was that it would be cool if we could put the characters ‘FAIL’ into the file name of any requests which failed and also into the folder name of any sessions which had failing requests inside them.

We tried to do this with our original model but everything we did resulted in adding more and more code to the Request object which didn’t seem to belong to it. The tipping point for me was when we ended up with Request.SessionFolderName as a property.

Eventually we realised that what had been implicit would now need to be made explicit and the Session came to be an object in our domain model.

What I found most interesting about this process was that we were always talking about the Session but it didn’t actually exist in our model!

The model in our code now pretty much represents the format in which we are outputting the data, and with Session as an explicit concept it makes it much easier to make changes in the future.
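The shape of the refactoring might be sketched like this in Java (illustrative names, not our actual code; the FAIL marking follows the description above):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of making the implicit concept explicit: folder naming moves off
// Request and onto a Session object that owns its requests.
class Request {
    final boolean failed;

    Request(boolean failed) {
        this.failed = failed;
    }
}

class Session {
    final String id;
    final List<Request> requests = new ArrayList<>();

    Session(String id) {
        this.id = id;
    }

    void add(Request request) {
        requests.add(request);
    }

    // The responsibility that never belonged on Request: naming the folder,
    // flagging sessions that contain any failing request.
    String folderName() {
        boolean anyFailed = requests.stream().anyMatch(r -> r.failed);
        return anyFailed ? id + "-FAIL" : id;
    }
}
```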

Written by Mark Needham

April 23rd, 2009 at 12:36 pm

The Five Dysfunctions of a Team: Book Review

with 4 comments

The Book

The Five Dysfunctions of a Team by Patrick Lencioni

The Review

I heard about this book a while ago but I was intrigued to actually get a copy by Darren Cotterill, the Iteration Manager on the project I’m working on at the moment.

I was particularly interested in learning whether the ideas of agile and/or lean help to solve any of these dysfunctions.

What did I learn?

  • The book is split into two sections. In the first section a story is told about an organisation with a dysfunctional team and the dysfunctions are gradually introduced. The second section covers them in more detail and provides ways to overcome them. The dysfunctions are as follows:
    1. Absence of Trust – team members are unwilling to be vulnerable within the group
    2. Fear of Conflict – team cannot engage in unfiltered and passionate debate of ideas
    3. Lack of Commitment – team members rarely have buy in or commit to decisions
    4. Avoidance of Accountability – team members don’t call their peers on actions/behaviours which hurt the team
    5. Inattention to Results – team members put their individual needs before those of the team
  • One of the most interesting arguments the book raises is around getting everyone to be focused on the same goal whereby the collective ego gets precedence over individual egos. This requires a lack of politics which is defined as ‘when people choose their words and actions based on how they want others to react rather than what they really think’. This idea also seems similar to the idea in lean thinking of favouring the big picture over local optimisations – the team as a whole succeeding is more important than any individual success.
  • The other danger of individual goals being favoured over those of the collective is identified as being specialism in teams, whereby everyone is responsible for their own part and no one else knows anything about it. Pair programming with frequent rotation is one approach that we can use in software development teams to help avoid this specialisation, as well as encouraging team members to become generalising specialists rather than experts in just one area.
  • The ideas around healthy conflict are quite interesting. Meetings should have some level of conflict otherwise we probably just have false harmony. This sounds a little similar to the idea in lean that ‘no problem is a problem’ – i.e. we shouldn’t keep things to ourself but instead get them out there and find a way to solve the problem. The author also points out that conflict is never going to feel comfortable but that doesn’t mean that we shouldn’t engage in it.
  • I particularly liked the ideas for creating trust on a team – team members are given the opportunity to share some information about themselves including their greatest strength and weakness in relation to the team. I’ve not seen this explicitly done on any teams I’ve worked on but I think that when pair programming people do share this kind of information, so maybe we do actually get some of the benefits of this approach. The idea is that team members should be ‘confident that their peers’ intentions are good’. Reading this reminded me of the retrospective prime directive.
  • An idea which I don’t completely agree with is that we should look to make decisions because ‘a decision is better than no decision’ – the author claims that not making decisions can lead to a lack of confidence in the team and that dysfunctional teams wait until they have enough data to be certain that their decision is correct. He does then go on to point out that if the decision is wrong then we should not be afraid to change it which I do agree with. In software development teams though I question the value of making decisions too early – there is some value in following an approach such as set based concurrent engineering where we try out several approaches before converging on the actual solution later on.

In Summary

I found this book really interesting and I could definitely relate to some of the things that were talked about.

I think lean/agile ideas do solve some of the problems but certainly not all of them and it would definitely be interesting to try out some of the exercises suggested on future teams I work on.

Written by Mark Needham

April 22nd, 2009 at 6:50 am

Posted in Books

Learning through teaching

with 5 comments

I’ve been watching one of the podcasts recorded from the Alt.NET Houston conference titled ‘Why blog and open source’ and one of the interesting ideas that stood out amongst the opinions expressed is that people write about their experiences in order to understand the topics better themselves.

I’ve found this to be a very valuable way of learning – in fact it’s probably more beneficial to the teacher than the student, somewhat ironically.

I’ve noticed that there are a few opportunities that we can take advantage of in order to increase our knowledge on topics by teaching other people about them.

By ‘teaching’ I mean explaining your understanding of a topic to other people rather than the traditional classroom approach where the ‘teacher’ is the expert and the ‘students’ are mostly passive learners.

Blogging

Blogging was the topic being discussed on the podcast, and I’ve found that writing about a topic is a pretty good way of organising your thoughts on that topic and seeing whether you understand it well enough to construct some opinions that you would have the ability to defend.

I’ve often found that when I start to write about a topic I discover some things about it that I hadn’t previously considered, and of course the ability for people to give you feedback on what you write means that it can become a two way conversation. There have certainly been a couple of occasions when people with more knowledge of a topic than me have pointed out where my understanding of something could be improved.

The same thing applies when presenting about a topic although the feedback will be more immediate.

Book Clubs

One approach we’ve been trying out in our Domain Driven Design book club recently is to split the sub chapters between each of the members of the group and then everyone presents their part in the next meeting.

The advantage of this approach is that everyone gets the opportunity to teach the rest of the group in a topic area in which they have more knowledge and/or have studied more recently.

If you know you need to explain something to other people then I think it encourages you to approach the subject differently than if you are just reading through it for yourself. You also need to understand the topic more clearly yourself and be able to put it into your own words to explain to other people.

Pair Programming

I think one of the situations that can be the most frustrating in pair programming is when you understand something really well and your pair doesn’t understand it as well.

The temptation is to wait until they’re not around to implement your ideas but this misses a great opportunity to explore how well you really understand the topic area.

The questions posed by someone with less knowledge on a topic than you will force you to come up with good reasons for your opinions that you can explain in a more simple way than you may be used to. We may be forced to come up with metaphors that make it easier to explain something and certainly coming up with these metaphors may improve our own understanding.

In summary

I think there are a lot of opportunities in the world of software development to teach what we know to others and although it may seem that we are doing a favour to other people, I think we will find that we learn just as much as them from doing so if not more.

Of course it’s not the only approach to learning but I’ve found it to be a surprisingly effective one.

Written by Mark Needham

April 21st, 2009 at 7:38 am

Posted in Learning
