Mark Needham

Thoughts on Software Development

Archive for October, 2008

If you use an ‘if’ you deserve to suffer

with 3 comments

One of the things I dislike the most when coding is writing if statements, and while I don’t believe that ‘if’ should be completely abolished from our toolkit, I think the anti-if campaign started about a year ago is along the right lines.

While there is certainly value in using an if statement as a guard block, it usually feels as though we have missed an abstraction when we use it elsewhere.

Given my dislike of the construct, when I do have to use it I like to be very explicit about the scope in which it applies. I think this approach stems from being caught out back in my university days while trying to debug something along the lines of:

if(thisHappens())
	doThis();
	doThat();

The code was much messier than that but I spent ages trying to work out why the correct behaviour wasn’t happening. Eventually I realised that ‘doThat()’ was being called every time regardless of the value returned by ‘thisHappens()’.

Ever since then I have written if statements like so:

if(thisHappens()) {
	doThis();
	doThat();
}

It doesn’t look as nice, but with one glance at the code I can see exactly what is happening. I definitely prefer to have this advantage, although I do appreciate that if there is only one statement following the if then YAGNI might be applicable.

I prefer to take the more conservative approach – once bitten, twice shy.

So what is the title of this post all about?

Often when pair programming there is some discussion over which approach is better. In one such session last week a colleague of mine, who happened to favour the more verbose approach, came up with a phrase along those lines when I asked why: if you’re going to use an if statement then you’re just going to have to put up with the eyesore those extra curly braces create in your code!

Being in favour of this approach already, I really like the idea, and while it is maybe not quite as scientific as describing the technical reasons for doing so, it is perhaps more effective.

Written by Mark Needham

October 21st, 2008 at 7:19 am

Posted in Coding

Build: Checkout and Go

with 5 comments

On the previous project I was working on, one of the pain points we had was setting up developer environments so that you could get the code up and running on a machine as quickly as possible.

I would go to a newly formatted machine ready to set it up for development and run into a cascading list of dependencies I hadn’t considered.

SVN wasn’t installed, then Ruby was missing, then we had the wrong version of Java, and all the while we were wasting time on a process that could have been automated.

One of the things we worked on there was how to get our project into such a state that we could just check out the code on a machine and get it ready to run relatively quickly.

I haven’t yet come up with a good solution for automating all dependency checking, but an approach that my colleague has been advocating on my current project is aiming for a ‘Checkout and Go’ build.

The idea is fairly simple – when a developer joins the team they should be able to check out all the code from source control, browse to the root directory and run a script called ‘go’ which will download and install everything needed to run the build on their machine.

We are using buildr, so in our case this mainly involves populating our JRuby and Ruby gem repositories. buildr takes care of downloading our Java dependencies.

Potential issues

The original Checkout and Go build script was only written for use from the Unix shell, which obviously restricted its use somewhat! A couple of people on the team were running on Windows, so we now have two versions of a very similar script.

There are some parts of our setup process which only need to be run the first time a developer runs the build. There is a trade off between incorporating these into the main script, along with the logic to ensure they don’t get run every time, and leaving developers to run them once when they first check out the code.

One interesting thing we have noticed is that although in theory these scripts only need to be run once, new dependencies are actually being introduced now and then which require them to be re-run. If there is frequent change then the benefit of the Checkout and Go approach clearly increases.

Alternative approach

The alternative to the Checkout and Go approach is to provide only a build file and work on the assumption that developers will take care of the dependencies themselves – these dependencies would then typically be written up on a Wiki page.

The thing I like about this approach is that it is much simpler to set up initially – there is no code to write to automate the download and setup of dependencies.

The disadvantage is that it is non-deterministic – as we are relying on a human to execute a series of instructions, the chance of someone doing something slightly differently is higher, leading to the problem of developer machines not being set up in an identical manner.

In Conclusion

We don’t have this process perfect yet, and there is a bit of a trade off between making the go script really complicated in order to allow a completely automated process, and covering maybe 90% of the setup in an automated script with the remaining 10% done manually or in other test scripts which a developer can run when required.

This is the best idea I have seen for getting developer environments set up and running quickly, but it would be good to know if anyone has any better ideas around this.

Written by Mark Needham

October 19th, 2008 at 10:49 pm

Posted in Build

Learnings from Code Kata #1

without comments

I’ve been reading My Job Went To India and one of the chapters midway through the second section talks about the value of practicing coding using code katas.

I’ve not tried doing these before but I thought it would be an interesting activity to try out.

The Kata

Code Kata One – Supermarket Pricing

What I learnt

  • As this kata is not supposed to be a coding exercise I started out just modeling ideas in my head about how I would do it before I realised that this wasn’t working as an effective way for me to learn. I decided to try and test drive some of my ideas to see whether they would actually work or not.

    It also gave me the chance to play around with git – I put the code I wrote on github – and re-commence my battle with buildr.

  • Having decided to fire up IntelliJ and try out some of my ideas in code I realised that test driving straight away wasn’t going to work well – I hadn’t spent enough time designing the objects that I wanted to test. I still wanted to try out my ideas in code though so I spent about ten minutes roughly coding out how I expected it to work before using a test driven approach to drive out the (admittedly simple) algorithms.

    While Test Driven Development is a very effective technique for driving out object interactions and ensuring code correctness, it was good to have a reminder that there still needs to be some up front design work before diving in.

  • One of the mistakes I made early on was over engineering my solution – I saw an opportunity to put one of my favourite patterns, the double dispatch pattern, into the code. I did this straight away, before stepping back and realising that it wasn’t really needed in this instance.

    When I spoke to a couple of my colleagues about this the next day, they reiterated the need to keep it simple.

  • I consider myself to be reasonably competent when it comes to object modeling – it is certainly something I enjoy doing – so I found it somewhat disheartening that I struggled quite a bit to come up with solutions to this problem. I decided to return to the book and re-read the chapter. It had this to say:

    Looking back on it now, I see that the awkward feeling I got from these experiences was a good sign

    I was stretching my mental muscles and building my coding chops… if I sit down to practice coding and nothing but elegant code comes out, I’m probably sitting somewhere near the center of my current capabilities instead of the edges, where a good practice session should place me.

    It reminded me of an article I read earlier this year about ‘the expert mind’, which talks of the value of effortful study.

    Ericsson argues that what matters is not experience per se but “effortful study,” which entails continually tackling challenges that lie just beyond one’s competence.

    This makes sense to me – we don’t improve that much by doing activities which we already know how to do, but equally we don’t want to be overwhelmed by the difficulty of the activity.

    While this task was difficult I think that reading most of Object Design prior to coming across this kata made it a bit easier than it would have otherwise been.

  • One unusual thing for me while trying out this exercise was that I wasn’t pairing with someone else. I think this made it more difficult as I didn’t have anyone to bounce ideas off, and sub optimal solutions weren’t getting rejected as quickly because I had a longer feedback cycle. On the other hand, it provided good practice for improving my ability to spot unworkable ideas more quickly.
  • I really wanted to write the bits of code I wrote in a completely object oriented way. This meant having no getters on any of the classes which forced me to think much more about the responsibility of each object. Although I wrote very little code in this instance, this is a practice that will be useful for me when coding in other situations. I had a bit of difficulty trying to keep the code well encapsulated while also allowing it to describe the domain concepts but hopefully this is something that will become easier the more I practice.

Written by Mark Needham

October 18th, 2008 at 7:47 pm

Posted in Code Katas

Pair Programming: Pair Flow

with 4 comments

In an earlier post about Team Productivity I stumbled upon the idea that there could be such a thing as pair flow while pair programming.

The term ‘flow’ is used to describe a situation where you are totally immersed in the work you’re doing and where time seems to go by without you even noticing.

This can also happen when pair programming and I think there are some factors which can make it more likely.

Code Intuitively/Making the IDE do the work

Coding intuitively was one of the first things I was taught by one of my colleagues when I paired with him nearly two years ago.

The idea is that you should code in such a way that your pair can clearly follow your line of thinking based on watching what you are typing. This includes naming variables, methods and classes in a way that communicates their intent.

In addition, we need to make the IDE do most of the work – this means creating variables by typing in ‘new MyObject()’ and then getting the IDE to handle the assignment for example.

One of the easiest ways to fall out of a state of pair flow is to manually type everything – the navigator will lose focus much more quickly, and as a result it becomes vital that experienced users of an IDE share their knowledge of its shortcuts with other team members.

Test Driven Development

Using a TDD approach makes it much easier to get into a state of pair flow because there are lots of little goals (i.e. passing tests) that we can achieve along the way which helps to provide a structure around the work that is being done.

The times when I have experienced pair flow have been when I’ve been constantly engaged in design discussion followed by implementing the ideas derived from these discussions.

One of the most effective approaches that I have used when combining pairing and test driven development is ping pong programming – one person writes the test, the other makes it pass, then they write a test and the original person makes it pass.

The benefit of using this approach is that it allows both people to remain focused on the story being worked on, and also provides a useful way to drive out the design.

Constant Communication

Talk, talk, talk!

There’s nothing more likely to make pair programming painful than a lack of communication between a pair.

I’ve noticed that pairing seems to be at its most effective when the driver is not only coding intuitively but also explaining their thought process along the way. I have lost count of the number of times that I have been describing how I’m planning to do something and my pair points out a better way of doing it.

As a navigator don’t be afraid to ask questions or make suggestions about how to improve the code being written.

Communication doesn’t only have to be at the work station either. Going to a whiteboard and sketching out a design to confirm a shared understanding is equally important.

Sufficiently challenging problem

If the problem being solved is very easy or very boring then pair flow is not going to happen, because one or both people will not be engaged.

I posted earlier about some of the situations where there might be doubt over whether they are suitable for pairing, and I think it is important to make sure there is value in what we choose to pair on.

The best tasks for keeping both people in a pair engaged are ones where there is some complex business logic to implement or where there are significant design decisions to be made in the implementation of a piece of functionality.

Written by Mark Needham

October 17th, 2008 at 12:18 am

Browsing around the Unix shell more easily

with 14 comments

Following on from my post about getting the pwd to display on the bash prompt all the time I have learnt a couple of other tricks to make the shell experience more productive.

Aliases are the first new concept I came across, and several members of my current team and I now have these set up.

We are primarily using them to provide a shortcut command to get to various locations in the file system. For example I have the following ‘work’ alias in my ~/.bash_profile file:

alias work='cd ~/path/to/my/current/project'

I can then go to the bash prompt and type ‘work’ and it navigates straight there. You can put as many different aliases as you want in there, just don’t forget to execute the following command after adding new ones to get them reflected in the current shell:

. ~/.bash_profile

A very simple idea but one that helps save so many keystrokes for me every day.

Another couple of cool commands I recently discovered are pushd and popd.

They help provide a stack to store directories on, which I have found particularly useful when browsing between distant directories.

For example suppose I am in the directory ‘/Users/mneedham/Desktop/Blog/’ but I want to go to ‘/Users/mneedham/Projects/Ruby/path/to/some/code’ to take a look at some code.

Before changing to that directory I can execute:

pushd .

This will push the current directory (‘/Users/mneedham/Desktop/Blog/’) onto the stack. Then once I’m done I just need to run:

popd

I’m back to ‘/Users/mneedham/Desktop/Blog/’ with a lot less typing.

Running the following command shows a list of the directories currently on the stack:

dirs

I love navigating with the shell so if you’ve got any other useful tips please share them!

Written by Mark Needham

October 15th, 2008 at 10:31 pm

Posted in Shell Scripting

Java vs .NET: An Overview

with 5 comments

A couple of months ago my colleague Mark Thomas posted about working on a C# project after 10 years working in Java. As someone who has worked on projects in both languages fairly consistently (3 Java projects, 2 .NET projects) over the last two years, I thought it would be interesting to do a comparison between the two.

The standard ThoughtWorks joke is that you just need to remember to capitalise the first letter of method names in C# and then you’re good to go, but I think there’s more to it than that.

The Language & Framework

There is really not much difference between the syntax of Java and C# and I’m not that interested in going into it in massive detail here. There are other websites which cover it in more detail.

In terms of language features C# seems to be marginally ahead – the introduction of lambda expressions, implicitly typed local variables and extension methods in C# 3.0 is something not yet matched in Java.

From my experience C#/.NET has much better support for front end rich GUI applications (WinForms, WPF) while Java is probably better for back end work. When it comes to web applications Java probably holds a marginal edge although the soon to be production released ASP.NET MVC framework is a very nice piece of kit.

I have no data to justify saying that, merely thoughts based on experience, but from conversations with friends who work in investment banking I have learned that this is the way the two languages are used there as well.

Other language support

If you are looking for language support on the respective platforms beyond Java/C#, Java probably has a slight edge.

Groovy is a dynamic language with a Java style syntax and should therefore be easier for Java developers to pick up. I’m not aware of a dynamic language with C# style syntax for .NET, although Boo is an alternative which compiles to run on the Common Language Infrastructure.

If you need Ruby support Java has JRuby while .NET has IronRuby. JRuby is the more mature of the two options here. If Python is what you need then both contenders compete here too with Java’s offering of Jython and .NET’s IronPython.

Functional language wise .NET has a CTP release of F#, while Java has support for Scala.

Use of 3rd party APIs/Open Source Software

I’ve found that the Java projects I’ve worked on use significantly more open source software than the .NET ones. I’m yet to be convinced that this is necessarily a good thing, although my Java colleagues are confident that it is.

To give an example, there are multiple different Java libraries for Xml parsing whereas in C# everyone just uses the default one that’s provided.

This provides the opportunity to learn new and better ways of doing things on the one hand, but the potential to spend serious amounts of time evaluating which tool to use instead of just getting on with it on the other.

From a Java perspective it certainly provides extra challenges in trying to get your applications integrated with the range of different application and web servers on the market. In .NET it would simply be a case of getting it to work on IIS – of course easier said than done!

IDEs

I think Java clearly leads in this area with IntelliJ out ahead of anything else I’ve ever worked with. Eclipse is a popular open source alternative but for me it is far less intuitive to use than IntelliJ.

Visual Studio only becomes usable once Resharper is installed, but when that’s done it becomes better than Eclipse, if not quite as usable as IntelliJ. My colleague Pat Kua also listed some ideas to make it run even better. SharpDevelop is a free IDE for .NET development, although I haven’t used it so I’m not sure how good it is.

Build and Deployment

Partly due to its better support of Ruby, Java has a much wider range of tools for working with the build.

In .NET NAnt is the only serious contender, and although msbuild is often used to handle the compilation of the code, its verbosity and non-intuitive approach mean I can’t imagine recommending it for a whole build file.

On the Java side we have Ant, Maven, a Groovy based wrapper around Ant called Gant, the Ruby based buildr and the dependency management tool Ivy.

Communities

From my experience the community around .NET is more accessible to your average developer than what I’ve seen in the Java world.

The Alt.NET group is an initiative started last year by several of the leading lights in the .NET world and aims to make the world of .NET development a better and more productive place.

Java has the Java Community Process driving it forward from a community perspective, and perhaps due to the lower reliance on the drag and drop style of development encouraged by Microsoft’s tools, the standard of your average Java developer may in fact be higher.

When it comes to finding the answers to questions both are mainstream enough that this is fairly easy.

Overall

I’ve tried to cover some of the areas which I considered important when working with these two platforms. I’m sure there are some comparisons I have missed out, so it would be interesting to hear from others who have worked on both.

This is all written from my knowledge (and a bit of research) so if I’ve missed anything please mention it in the comments.

*Updated*
The paragraph about ‘Other Language Support’ was updated to reflect Robin Clowers’ comments.

Written by Mark Needham

October 15th, 2008 at 12:09 am

Posted in .NET,Java

Context Driven Learning

without comments

One pattern I’ve noticed over the last couple of years with regards to my own learning is that I find it very difficult to learn new things unless I can directly apply what I have learnt to a real life situation.

I feel this was part of the reason I found the way material is taught at universities so difficult to understand – nearly every course I studied was taught on its own without any reference to the others, and rarely did I get to use the ideas I learnt in a practical context.

To give an example from the professional world, last year I was working on a project for an investment bank and I became very interested in the domain and started reading a lot of finance books including Den of Thieves, Traders Guns & Money, When Genius Failed, and several others.

I finished on the project and started working on a project in a completely different domain. I tried to continue reading the books but my interest had waned considerably now that I had no context to apply my reading to.

I noticed a similar trend with software books – I read Refactoring to Patterns when I reached the stage of knowing unclean code when I saw it, but being unable to fix it; some of Working Effectively with Legacy Code only when I worked on a project which had a lot of untested code; and Domain Driven Design when I became interested in how to interact with 3rd party systems cleanly on my next project.

A similar idea is described in Apprenticeship Patterns, in the chapter titled ‘The Right Book At The Right Time’. The authors talk about working out which book is right for you at the time and not just reading the most prestigious book.

The barometer I am using at the moment for knowing whether a book is the right one to read right now is how strong my motivation to read it is – for example, I have been struggling to read Interface Oriented Design for the past 3 weeks, but as soon as I saw we had a copy of Test Driven Development By Example in the office I picked it up and finished it within a day.

What do I gain from this approach?

The main advantage of learning in this way is that it helps concentrate your focus on what will be most useful for your current situation.

The benefits of this approach are also pointed out in My Job Went to India in the chapter titled ‘Be Where You’re At’ which talks about the benefits of focusing on the here and now rather than always thinking of the future:

Focusing on the present allows you to enjoy the small victories of daily work life: the feeling of a job well done, the feeling of being pulled in as an expert on a critical business problem, the feeling of being an integral member of a team that gels.

If we can focus our learning around what we’re actually working on then it certainly becomes even more enjoyable from my experience.

For me personally it also plays directly to my learning strengths – I understand material I read much more easily when I can apply it in context soon after, if not immediately.

Drawbacks of this approach

The main problem I can see with this approach is that your learning becomes somewhat reliant on the projects that you are working on.

I started to realise while writing this that what I’m describing might more accurately be called project driven learning and it is certainly the case that projects can provide a great deal of context for learning.

I don’t think that learning needs to be totally guided by projects though – there are areas of software development that I am very interested in and I look for opportunities to learn more about these areas on my projects as well as from conversing with my colleagues.

Admittedly my motivation to learn, coding wise, comes from the projects I do at work – I haven’t yet generated the enthusiasm to work on open source projects, although as an avid open source user I am very grateful to those who do take this approach.

I understand that this approach may not work for other people and certainly those who ran with Ruby & Rails early on in their own time were learning ideas that they didn’t necessarily have a context for and were perhaps creating their own context.

I also appreciate that there are many people who are able to just read books or papers on a vast array of different subjects at the same time and understand the concepts perfectly, but me – I need a context!

Written by Mark Needham

October 13th, 2008 at 8:44 pm

Posted in Learning

Using test guided techniques for spiking

without comments

I think that out of all the Extreme Programming practices Test Driven Development is the one which I like the best. I feel it provides a structure for development work and helps me to remain focused on what I am trying to achieve rather than writing code which may not necessarily be needed.

However, there are times when it’s difficult to use a TDD approach, and Pat Kua suggested earlier this year that if you’re using a TDD approach all the time you’re doing something wrong.

As Pat points out, spiking is one time when it can pay off not to test first, although as was pointed out on the TDD mailing list this doesn’t necessarily mean that you can’t take a test driven approach to learning new APIs or trying out new things.

Kent Beck speaks of Learning Tests – code written using tests to improve our understanding of an API and also guard against changes in future updates of the API – in Test Driven Development by Example, an idea which is referenced in Chapter 8 of Uncle Bob’s Clean Code. This is not a new approach.

Tools like the JUnit TestRunner provide a really easy way to try things out and get immediate feedback as to whether or not the API works as you expect. As Ben Hall writes on twitter it also provides a level of documentation which you can refer back to later.
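
To make this concrete, here is a minimal sketch of what such a learning test might look like. It uses JUnit 4 and pokes at the behaviour of java.lang.String’s split method – the choice of API is purely illustrative and isn’t something from the discussion above:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StringSplitLearningTest {

	// A learning test documents our understanding of someone else's API
	// rather than testing our own code
	@Test
	public void splitDropsTrailingEmptyStringsByDefault() {
		String[] parts = "a,b,,".split(",");
		assertEquals(2, parts.length);
	}

	// A second test capturing the behaviour we would rely on if we
	// actually needed the trailing empty strings
	@Test
	public void splitKeepsTrailingEmptyStringsWithANegativeLimit() {
		String[] parts = "a,b,,".split(",", -1);
		assertEquals(4, parts.length);
	}
}

If a future version of the API changes its behaviour, a test like this fails and tells us exactly which of our assumptions no longer holds – which is the ‘guard against changes’ benefit Kent Beck describes.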

Even if we don’t want to write an actual test the principles of getting quick feedback and working in small steps can still be used in our exploration activities.

To give a couple of examples, Damana and I didn’t write unit tests when we were exploring Ruby LDAP options but we were only writing a couple of lines at a time then running them using TextMate to see if our understanding was correct. We were then able to keep this code in a ‘spikes’ directory for future reference.

A couple of years ago a colleague and I were exploring (what was at the time) Tangosol Coherence’s API. We were using a method on the API to filter some data but for some reason it wasn’t returning the data that we expected.

Convinced that we were using the API correctly we decided to code up two JUnit tests – one with a call to the method which we felt had a bug in it, and another achieving the same ‘filter’ using two other methods on the API.

This helped us prove that there was a bug in the API and we ended up using the workaround we had discovered to solve our problem.

I’m sure there are other approaches that can achieve the same outcome but if you know how to test drive code then it makes sense to use an approach that is familiar to you.

Written by Mark Needham

October 12th, 2008 at 1:49 pm

Posted in Testing

What is a unit test?

with 2 comments

One of the questions which came up during the Sydney Alt.NET User Group meeting at the start of October was around what a unit test actually is.

I suppose the somewhat naive or simplistic definition is that it is just any test written using an xUnit framework such as NUnit or JUnit. However, integration or acceptance tests are often written using these frameworks so this definition doesn’t hold.

While discussing this last week a colleague came up with what I considered to be a very clear and precise definition. To paraphrase: ‘A unit test has no dependencies’.

This means that if the class we are testing does have dependencies then we need to remove these from our test, either by using a mocking framework or by stubbing them out.

Dependencies might include calls to a database, web services, 3rd party APIs – we don’t want our unit tests to rely on these being available in order to execute our test.
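
To illustrate the ‘no dependencies’ idea, here is a minimal sketch using JUnit 4. The PriceCalculator and ExchangeRateService names are invented for the example, and the dependency is stubbed out by hand rather than with a mocking framework:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

	// A collaborator that would normally call out to a web service or database
	interface ExchangeRateService {
		double rateFor(String currency);
	}

	// The (invented) class under test - it only talks to the interface
	static class PriceCalculator {
		private final ExchangeRateService rates;

		PriceCalculator(ExchangeRateService rates) {
			this.rates = rates;
		}

		double priceIn(String currency, double basePrice) {
			return basePrice * rates.rateFor(currency);
		}
	}

	@Test
	public void convertsPriceUsingTheExchangeRate() {
		// Stub out the dependency so the test relies on nothing outside this file
		ExchangeRateService stubRates = new ExchangeRateService() {
			public double rateFor(String currency) {
				return 2.0;
			}
		};

		assertEquals(20.0, new PriceCalculator(stubRates).priceIn("AUD", 10.0), 0.001);
	}
}

The test runs anywhere the code compiles – no database, web service or network connection needs to be available.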

Why should I care?

If we depend on things outside of our control then we are making our tests fragile and unrepeatable – if a test fails because one of those dependencies is unreliable, we cannot fix it easily.

There is definitely room for integration tests in a system but we can gain much more benefit from them when this integration is not mixed in with testing the functionality of a single unit.

The goal with a unit test is that we should be able to start up our IDE and run the test – there should be nothing else to set up to make this happen.

The grey area

The grey area that I have noticed is around file system interactions in unit tests.

I wrote previously about an approach I have seen used for testing file system operations, but I have often written tests which load test data from an XML file before using it in the test.

Doing that creates a dependency on the file system although it makes the test a lot cleaner than having a huge string containing all the data. If the file is included as part of the project then I think it doesn’t necessarily have to be a problem.
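
One way of keeping that dependency tame – a sketch rather than a prescription, and the file name here is invented – is to load the test data from the classpath rather than from an absolute path, so the test works on any machine where the project has been checked out:

import java.io.InputStream;

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class XmlTestDataTest {

	@Test
	public void loadsTestDataFromTheClasspath() {
		// customer-test-data.xml is a made up file that would be checked in with the test sources
		InputStream testData = getClass().getResourceAsStream("/customer-test-data.xml");

		assertNotNull("expected customer-test-data.xml on the test classpath", testData);
	}
}

As long as the file is checked in with the project, the test carries its data with it rather than depending on the layout of a particular machine’s file system.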

What makes a good unit test?

A well written unit test in my book should be simple to understand and run quickly. This is especially helpful when we are practicing TDD as it allows us to keep the cycles between writing code and tests very small.

My colleague Phillip Calcado has a post about an approach to make the former happen but the final word goes to Uncle Bob who suggests the F.I.R.S.T acronym in Clean Code to describe what well written (clean) unit tests should look like:

  • Fast – they should run quickly. This encourages you to run them often.
  • Independent – they should not depend on each other. It should be possible to run the tests in any order.
  • Repeatable – it should be possible to run them in any environment. They should be environment independent.
  • Self-Validating – they should either pass or fail.
  • Timely – they should be written in a timely manner i.e. just before the production code is written.

Written by Mark Needham

October 10th, 2008 at 11:21 pm

Posted in Testing

Pair Programming: Why would I pair on this?

with one comment

In the comments of my previous post on pairing Vivek made the following comment about when we should pair:

The simplest principle I have is to use “conscious” pairing vs. “unconscious” pairing. A pair should always *know* why they are pairing.

On previous projects I have worked on there have been several tasks where it has been suggested that there is little value in pairing. I decided to try and apply Vivek’s principle of knowing why we might pair on these tasks to see if there is actually any value in doing so.

Build

The value of pairing on the build when it’s in its infancy shouldn’t be underrated – the build plays a vital role on a project, providing a decisive point of success or failure for the running of our code.

I have worked on teams where the velocity has plummeted due to a fragile build, eventually ending up in development being frozen while we put it back together again.

I’m not saying these problems would be 100% avoided by having a pair working on the build, but having two people working on it (at least initially) certainly helps reduce the possibility of crazy decisions being made.

If it’s not possible to put a pair on this then at least ensure that the approach being taken with the build is well communicated to the rest of the team so that suggestions can be made and then applied.

Verdict: Pair initially to get it setup. Maybe work alone for small fixes later on.

Spiking

I have noticed that there are two types of ‘spiking’ that we typically end up doing:

  • Spiking to work out how to use an API to solve a problem
  • Spiking various different options to solve a problem

If the library to investigate is clear then it may be more beneficial to take a pairing approach. I have worked with colleagues before who had a very different approach to working out how an API should be used than I did, and our combined approach led to quicker understanding than working separately might have done.

When we need to investigate a lot of options then the investigation will certainly be completed more quickly if the pair splits up and divides the options between them before coming back together to discuss their findings. Once this initial (brief) investigation has been done, I have noticed it works quite effectively to revert to pairing to drill down into a specific library.

Verdict: Decide based on the spiking situation

Bug Fixing

If we’re doing our job properly as agile developers bugs should be relatively easy to pin down to a specific area of the code.

For these types of bugs the value of taking a pairing approach is that we’ll probably write a test to prove the existence of the problem and from my experience we get better tests when pairing.

An approach taken on a previous project was to have one pair focused on bugs, just going through them all until we had cleared out the highest priority ones. Another interesting part of this for me was that I would normally consider bug fixing to be very boring, but when done with a pair I actually quite enjoyed it and it was often fun hunting down problems.

Verdict: Have a bug fixing pair to clear out high priority bugs

Writing Documentation

Despite working in an agile environment there are times when we need to write some documentation – whether this be writing some information onto the Wiki regarding use of APIs or creating architecture diagrams for discussion.

This is one area where I feel there is actually very little value in pairing. We tried to pair on this on one project I worked on but the navigator often found it to be very boring and we saw more value in one person driving the initial document and then someone else reviewing it afterwards.

Verdict: Don’t pair

Release or Deployment Tasks

This has often ended up being a task taken on by the Tech Lead or a Senior Developer in previous teams I have worked on.

A lot of the work is similar to writing documentation, for which I would advocate a non-pairing approach, with someone else looking over the release document before it is sent out.

However, knowing the production environment and how it all fits together is useful for other members of the team, so there may be room for some pairing for part of this process.

This way if/when problems arise in production and the Tech Lead isn’t around the team will still be able to address them.

Verdict: Don’t pair for the documents but maybe for other parts of the process

Those are just some of the grey areas of pairing that I could think of from a quick brainstorm. I’m sure there are others too.

Written by Mark Needham

October 9th, 2008 at 12:38 am