Mark Needham

Thoughts on Software Development

Pair Programming: Slowly but surely

I recently watched a video recorded by Uncle Bob at the Chicago Alt.NET meeting where amongst other things he talked about the importance of going slowly but surely when we’re developing code i.e. spending the time to get it right the first time instead of rushing through and having to go back and fix our mistakes.

While pairing with a colleague recently it became clear to me that pair programming, when done well, drives you towards a state where you are being much more careful about the work being produced.

Two particular parts of our pairing session made this stand out for me.

1. We were trying to work out the best way to get some data from the UI into our application. I had an idea of the best way to do this but my pair pointed out an alternative which I originally thought would make our task more difficult.

After talking through the different approaches and trying out the alternative in code, it turned out that my colleague’s approach led to a much simpler solution and we were able to get that part of our task done much more quickly than I had anticipated.

2. A bit later we were writing some tests for getting this data into our application using an ASP.NET MVC binder. Not knowing exactly how to do this I decided to go for the obvious implementation and then triangulate with the second test.

It was a bit painful putting this hardcoding in to make the test pass and I was starting to wonder whether just going ahead and implementing the binding properly would have been preferable.

As we got to the fifth field we needed to bind we realised that we had no way of getting an additional piece of data that we needed. Luckily we hadn’t gone too far down the route we were heading so it was quite easy to go and make the changes to the UI to ensure we could get the extra bit of data that we needed.

As a result of us having to stop and actually look back at what we’d just coded it became clear that we could simplify our approach further, so we did!

The resulting code was quite different from, and eventually cleaner than, the original solution we had been driving towards.

Taking our time over our code is invaluable – nearly every time I take a shortcut or try to do something without thinking about it properly it ends up taking longer than it would have if done properly – the somewhat ironic outcome that Uncle Bob points out in the video.

When we are pairing, if we want to take one of these shortcuts we need to convince our pair as well as ourselves, and from my experience we tend to realise that what we’re suggesting doesn’t make sense and end up coding a better solution.

That’s not to say that we never take the pragmatic approach and defer a bit of refactoring until later so we can get features completed. After all, that is what we are being paid to do.

Software development for me is a lot more about thinking through our options than coding the first thing that comes to mind and pair programming helps drive this approach to problem solving.

Written by Mark Needham

March 31st, 2009 at 11:15 pm

Posted in Pair Programming

DDD: Recognising relationships between bounded contexts

One of the big takeaways for me from the Domain Driven Design track at the recent QCon London conference was that the organisational patterns in the second half of the book are probably more important than the modelling patterns themselves.

There are various patterns used to describe the relationships between different bounded contexts:

• Shared Kernel – This is where two teams share some subset of the domain model. This shouldn’t be changed without the other team being consulted.
• Customer/Supplier Development Teams – This is where the downstream team acts as a customer to the upstream team. The teams define automated acceptance tests which validate the interface the upstream team provide. The upstream team can then make changes to their code without fear of breaking something downstream. I think this is where Ian Robinson’s Consumer Driven Contracts come into play.
• Conformist – This is where the downstream team conforms to the model of the upstream team despite that model not meeting their needs. The reason for doing this is so that we will no longer need a complicated anti-corruption layer between the two models. This is not the same as customer/supplier because the teams are not using a cooperative approach – the upstream team is deriving the interfaces independently of what downstream teams actually need.
• Partner – This was suggested by Eric Evans during his QCon presentation, and the idea is that two teams have a mutual dependency on each other for delivery. They therefore need to work together on their modeling efforts.

I think it’s useful for us to know which situation we are in because then we can make decisions on what we want to do while being aware of the various trade offs we will need to make.

An example of this is when we recognise that we have a strong dependency on the domain model of another team; the approach we take then depends on the relationship the two teams have.

If we have a cooperative relationship between the teams then an approach where we pretty much rely on at least some part of the supplier’s model is less of an issue than if we don’t have this kind of relationship. After all, we have an influence on the way the model is being developed and may even have worked on it with the other team.

On the other hand if we realise that we don’t have a cooperative relationship, which may happen due to a variety of reasons…

When two teams with an upstream/downstream relationship are not effectively being directed from the same source, a cooperative pattern such as CUSTOMER/SUPPLIER TEAMS is not going to work.

This can be the case in a large company in which the two teams are far apart in the management hierarchy or where the shared supervisor is indifferent to the relationship of the two teams. It also arises between teams in different companies when the customer’s business is not individually important to the supplier. Perhaps the supplier has many small customers, or perhaps the supplier is changing market direction and no longer values the old customers. The supplier may just be poorly run. It may have gone out of business. Whatever the reason, the reality is that the downstream is on its own.

(from the book)

…we need to be more careful about which approach we take.

We are now potentially in conformist territory although I don’t think that is necessarily the route that we want to take.

If we choose to conform to the supplier’s model then we need to be aware that any changes made to that model will ripple all over our code, and since the affected places are likely to be scattered it’s going to be quite expensive to make those changes. On the other hand we don’t have to spend time writing translation code.

The alternative approach is to create an anti corruption layer where we interact with the other team’s service and isolate all that code into one area, possibly behind a repository. The benefit here is that we can isolate all changes in the supplier’s model in one place which from experience saves a lot of time, the disadvantage of course being that we have to write a lot of translation code which can get a bit tricky at times. The supplier’s model still influences our approach but it isn’t our approach.
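To make the idea concrete, here is a minimal Java sketch of an anti-corruption layer hidden behind a repository. All the names here (`SupplierCustomerDto`, `CustomerRepository`, the canned data) are invented for illustration – the point is only that the translation from the supplier's model to our own happens in exactly one place.

```java
// The supplier team's model, as their service returns it to us.
class SupplierCustomerDto {
    String fullName;
    String ref;
    SupplierCustomerDto(String fullName, String ref) {
        this.fullName = fullName;
        this.ref = ref;
    }
}

// Our own domain model - shaped by our needs, not the supplier's.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

class CustomerRepository {
    // In a real system this would call the supplier's service;
    // here it just returns a canned DTO so the sketch is runnable.
    private SupplierCustomerDto fetchFromSupplierService(String id) {
        return new SupplierCustomerDto("Jane Doe", id);
    }

    // The translation lives here and only here, so a change to the
    // supplier's model forces changes in this one class, not all
    // over our code base.
    public Customer findBy(String id) {
        SupplierCustomerDto dto = fetchFromSupplierService(id);
        return new Customer(dto.fullName);
    }
}
```

The rest of the application only ever sees `Customer`, which is what buys us the isolation described above.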

I’m not sure what pattern this would be defined as – it doesn’t seem to fit directly into any of the above as far as I can see but I think it’s probably quite common in most organisations.

There are always multiple approaches to take to solve a problem but I think it’s useful to know what situation we have before choosing our approach.

Written by Mark Needham

March 30th, 2009 at 10:52 pm

Pair Programming: From a Lean angle

I recently watched a presentation about lean thinking and I started seeing parallels in a lot of what they were saying with the benefits that I believe we see in projects when the team pair programs.

Big Picture vs Local Optimisations

One of the biggest arguments used against pair programming is that we get half as much work done because we have two people working on one computer.

Even if we ignore the immediate flaws in that argument I think this is a case of looking at individual productivity when in fact what we really care about is the team’s productivity i.e. looking at the local optimisations instead of the big picture.

I’ve worked on teams which pair programmed the whole time and teams where pair programming was less prevalent, and how well knowledge of the code base was spread throughout the team differed massively between the two.

When you have developers working alone knowledge sharing is much lower – people tend to become quite specialised in one area of the code meaning that the next time there’s work around that area they do it and so it spirals on and on until you’re completely reliant on them. If that person is then ill for a day we have a big problem doing any work in that area.

In terms of waste

There were 8 different types of waste described in the presentation:

• Extra features (Over Production)
• Delays (Wait and Queue)
• Hand-Offs (Internal Transport)
• Re-learning (Over Processing)
• Partially done work (Inventory)
• Task switching (Motion)
• Defects
• Unused Employee Creativity

When people are working alone they may try to pass on their knowledge to others in the team but it’s never as effective as if the other person has worked on the problem with them at the same time – there is always some information lost in a hand over – the waste of internal transport.

In the event that a person with a lot of knowledge in a certain area of the code base is ill then someone else will end up having to pick up where they left off and learn how the code works from scratch – the waste of over processing. This applies beyond just the simple situation where someone is ill – when pair programming is not being practiced people have less idea about the problems their colleagues have already solved since the benefits we would normally achieve by rotating pairs are not achieved.

Research suggests that pair programming can lead to a reduction in defects in the code produced due to the fact that we always have two people looking at the code. I think this is only true if both people in the pair are engaged – if one person isn’t then I can’t see how the defect count would change compared to having people work alone.

One of the other benefits I have found with pair programming is that it really makes you think about the value that you are adding by writing a certain piece of code. I think we are much less likely to gold plate with two people at the computer rather than just one. We therefore don’t end up with unnecessary extra features which don’t really add that much value to the customer.

When it comes to task switching I think this will always happen to an extent within a project team – people are often called away from the story work they are doing to help out with something else. When they are pairing this isn’t as disruptive since their pair can continue working on the problem until they return. If people work alone then the story work will end up on hold until they return and take the time to regain the context to continue.

It’s arguable but I’ve noticed that due to the extra discussions that happen when people are pair programming there tends to be more focus on ways to improve the way that things are being done, be it the way the code is being written, the way the tests are designed or the processes being followed. I feel pair programming encourages employee creativity which can only be a good thing as far as I’m concerned.

I can’t think of any obvious ways that pair programming would reduce the other two types of waste but I find it interesting that the majority of them are covered.

In Summary

This was just a brief look at what I consider to be one of the most effective team learning tools available to us from the angle of a methodology which recognises that learning quickly is important for successful delivery of software projects.

Every time I see pair programming not being done I become more convinced of its value.

Written by Mark Needham

March 29th, 2009 at 4:54 pm

Posted in Pair Programming

F#: Forcing type to unit for Assert.ShouldThrow in XUnit.NET

I’ve started playing around with F# again and decided to try and create some unit tests around the examples I’m following from Real World Functional Programming. After reading Matt Podwysocki’s blog post about XUnit.NET I decided that would probably be the best framework for me to use.

The example I’m writing tests around is:

let convertDataRow(str:string) =
    let cells = List.of_seq(str.Split([|','|]))
    match cells with
    | label::value::_ ->
        let numericValue = Int32.Parse(value)
        (label, numericValue)
    | _ -> failwith "Incorrect data format!"

I started driving that out from scratch but ran into a problem trying to assert the error case when an invalid data format is passed in.

The method to use for the assertion is ‘Assert.ShouldThrow’ which takes in an Assert.ThrowsDelegate which takes in an argument of type unit->unit.

The code that I really want to write is this:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = convertDataRow "blah"
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

which doesn’t compile giving the error ‘This expression has type string*int but is used here with type unit->unit’.

I got around the first unit by wrapping convertDataRow in a function which takes no arguments, but the output was proving tricky. I realised that putting in a call to printfn would solve that problem, leaving me with this truly hacky solution:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = fun () -> (convertDataRow "blah"; printfn "")
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

Truly horrible and luckily there is a way to not do that printfn which I came across on the hubfs forum:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = (fun () -> convertDataRow "blah" |> ignore)
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

The ignore function provides a neat way of ignoring the passed value i.e. it throws away the result of computations.

Written by Mark Needham

March 28th, 2009 at 2:35 am

Posted in F#

Coding: Isolate the data not just the endpoint

One of the fairly standard ways of shielding our applications when integrating with other systems is to create a wrapper around it so that all interaction with it is in one place.

As I mentioned in a previous post we have been using the repository pattern to achieve this in our code.

One service which we needed to integrate with lately provided the data for populating drop downs on our UI, so it supplied two pieces of data – a Value (which needed to be sent to another service when a certain option was selected) and a Label (the value for us to display on the screen).

Our original approach was to pass both bits of the data through the system and we populated the dropdowns such that the value being passed back to the service would be the Value but the value shown to the user would be the Label.

The option part of the drop down list would therefore look like this:

<select>
    ...
    <option value="Value">Label</option>
</select>

With the data flowing through our application like so:

Although this approach worked it made our code really complicated and we were actually passing Value around the code even though our application didn’t care about it at all, only the service did.

A neat re-design idea a couple of my colleagues came up with was to only pass the Label through the application and then just do a mapping in the Repository from the Label -> Value so we could send the correct value to the service.

The code then became much simpler:

And we had isolated the bit of code that led to the complexity in the first place.
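The mapping itself can be sketched roughly like this – a hypothetical Java example where the repository owns the Label -> Value translation and the rest of the application only ever sees Labels (the class name, labels and hard-coded map are all invented for illustration):

```java
import java.util.Map;

class DropDownOptionRepository {
    // The service's Values, keyed by the Labels our UI displays.
    // In the real system this map would be built from the data the
    // service gave us in the first place.
    private static final Map<String, String> valueByLabel =
            Map.of("United Kingdom", "UK01",
                   "Australia", "AU01");

    // Called only at the boundary, just before we talk to the
    // external service - nothing else in the application ever
    // needs to know the Value exists.
    public String valueFor(String label) {
        return valueByLabel.get(label);
    }
}
```

Because the translation happens at the boundary, a change in the service's Values touches only this one class.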

The lesson here for me is that it’s not enough merely to isolate the endpoint, we also need to think about which data our application actually needs and only pass through the data we actually use.

Written by Mark Needham

March 25th, 2009 at 11:28 pm

Posted in Coding

QTB: Lean Times Require Lean Thinking

I went to watch the latest ThoughtWorks Quarterly Technology Briefing on Tuesday, which was presented by my colleague Jason Yip and Paul Heaton, titled ‘Lean Times Require Lean Thinking’.

I’ve been reading quite a bit of lean related material lately but I thought it would be interesting to hear about it directly from the perspective of two people who have been involved with applying the concepts in organisations.

What did I learn?

• It was pointed out that lean thinking is particularly relevant at the moment with the global financial crisis requiring organisations to come up with more effective ways of operating with little cash at their disposal. Toyota of course derived the Toyota Production System when they were in big trouble and needed to find a way out of their own financial crisis in the 1950s.
• Lean is not just about manufacturing, it is being applied in many other industries as well. Paul pointed out that KMT are introducing it to the service side of many organisations. I think it is a different challenge introducing it into software development and while the Poppendiecks have written some excellent material on lean software development, there is still more for us to learn about how to do this successfully.
• Although I’ve read quite a bit of material about lean I’ve never been convinced by the usual definition I hear – that ‘it’s about reducing waste’ – but I didn’t have a better definition until Jason came up with ‘it’s about engaging everyone to solve problems‘. It does still feel a bit generic but I like it better than the other definition.
• The most interesting part of the presentation for me was when Jason spoke about the different types of waste in lean in terms of software development:
• Extra features (Over Production)
• Delays (Wait and Queue) e.g. waiting for business sign off of stories
• Hand-Offs (Internal Transport) e.g. passing work onto someone else
• Re-learning (Over Processing) e.g. the same problems coming back when we have already previously learnt about them. The first time we find a problem that counts as learning.
• Partially done work (Inventory) – e.g. work requiring late integration which hasn’t been done yet. At an extreme I think this could be taken to mean any work which isn’t in production since it is only when we put something into production that the value from it is realised.
• Task switching (Motion) – e.g. doing several projects at the same time. Here we end up with the problem that all of these projects are delivered late. Jason pointed out that just because people are busy doesn’t necessarily mean they are adding value.
• Defects
• Unused Employee Creativity

The book ‘Learning to See‘ was suggested as a particularly useful one for learning how to identify waste.

• There was mention of set based concurrent engineering which Brad Cross has an excellent post about. The idea is that when there is doubt about the best solution to a problem we pursue several different options at the same time before deciding on the best option at the last responsible moment.
• Jason spoke about the difference between authority focus and responsibility focus, the latter being a more lean approach where we focus on ‘What is the right thing to do?’ and ‘How can I help?’ rather than the far more common approach I have noticed of ‘Whose job is this?’ and ‘Not my problem’. If we can get the responsibility focus going then suddenly the working environment becomes much more pleasant. Related to this I quite liked Liz Keogh’s recent post where she talks about rephrasing the language we use when talking about problems to avoid the blame culture.
• Value streaming was also mentioned with relation to how our goal is to find added value for our customer and that most organisations only achieve around 20% of value added activity in their value streams. A comment which really stood out for me was how ‘no problem is a problem‘ in lean thinking. People like to hear good news and you can often be referred to as being negative when you point out problems. In lean we recognise there are going to be problems and get these raised and sorted out as soon as possible.

Written by Mark Needham

March 25th, 2009 at 12:36 am

Posted in OOP,QTB,Software Development

ASP.NET MVC: Pre-compiling views when using SafeEncodingCSharpCodeProvider

We’ve been doing some work to get our views in ASP.NET MVC to be pre-compiled which allows us to see any errors in them at compilation rather than at run time.

It’s relatively simple to do. You just need to add the following code into your .csproj file anywhere below the element:

<Target Name="AfterBuild">
    <AspNetCompiler VirtualPath="/" PhysicalPath="$(ProjectDir)\..\$(ProjectName)"/>
</Target>

where VirtualPath refers to the virtual path defined inside your project file and PhysicalPath is the path to the folder which contains the project with the views in.

As I previously mentioned we’re using Steve Sanderson’s SafeEncodingHelper to protect our website from cross-site scripting attacks.

A problem we ran into when trying to pre-compile these views is that when the AfterBuild target gets run it tries to compile our views using the SafeEncodingCSharpCodeProvider, leading to this error:

[msbuild] /global.asax(1): error ASPPARSE: The CodeDom provider type "SafeEncodingHelper.SafeEncodingCSharpCodeProvider, SafeEncodingHelper" could not be located. (\path\to\web.config line 143)

From what we could tell it looked like the AspNetCompiler was expecting the dll containing SafeEncodingCSharpCodeProvider to be within the directory we specified for the PhysicalPath but we were actually compiling it to another directory instead.

<target name="compile">
    <msbuild project="solutionFile.sln">
        <property name="OutputPath" value="/some/output/path" />
    </msbuild>
</target>

We only noticed this on our build machine because when Visual Studio builds the solution it builds each project into ProjectName/bin which meant that locally we always had the dll available since we rarely do ‘Project Clean’ from the IDE.

The solution/hack to our problem was to build just that project in Nant without specifying an OutputPath – by default msbuild builds into the /bin directory of the project which is exactly what we need! Our compile target now looks like this:

<target name="compile">
    <msbuild project="projectWithViewsIn.csproj">
    </msbuild>
    <msbuild project="solutionFile.sln">
        <property name="OutputPath" value="/some/output/path" />
    </msbuild>
</target>

It’s not the greatest solution ever but it’s an easier one than changing how we use the compilation path throughout the build file.

Written by Mark Needham

March 24th, 2009 at 10:55 pm

Posted in .NET

Coding: Making the debugger redundant

I recently wrote about my dislike of the debugger and, related to this, I spent some time last year watching some videos from JAOO 2007 on MSDN’s Channel 9. One of my favourites is an interview featuring Joe Armstrong and Erik Meijer where Joe Armstrong points out that when coding Erlang he never has to use a debugger because state is immutable.

In Erlang, once you set the value of a variable ‘x’ it cannot be changed. Therefore if the value of ‘x’ is incorrect at some point in your program you only need to look in one place to see why that has happened.

With imperative languages like Java and C# variables can be set as many times as you like assuming they’ve not been declared as readonly for example.
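A tiny Java illustration of the difference: a final field can only have been assigned in one place, so if its value is wrong there is only one place to look, whereas a mutable field could have been reassigned anywhere (the class and field names are invented for illustration).

```java
class Order {
    final int quantity;   // assigned exactly once, in the constructor
    int mutableTotal;     // can be reassigned from anywhere

    Order(int quantity) {
        this.quantity = quantity;        // the only possible assignment
        this.mutableTotal = quantity * 10;
        // quantity = 5;  // would not compile: a final field cannot be reassigned
    }
}
```

If `mutableTotal` ends up wrong we have to trace every reassignment; if `quantity` is wrong we only need to look at the constructor.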

It got me thinking about the ways we can reduce the need to use the debugger when writing code in imperative languages. Debugging is so boring and takes so long that spending large amounts of time doing it both crushes the spirits and slows you down considerably.

Test Driven Development

Before I learnt TDD if I had a problem with my code the only way I could really find out more about that problem was to turn to the debugger.

One of the aims of writing code test first is to remove the need to debug. As Pat Kua points out in his blog, when you use a TDD approach to writing code, a nice side effect is that you tend to stop using the debugger so much.

Doing TDD is not enough though; we want to design our tests for failure so that when they do fail we have a useful error message that helps us work out why, rather than having to get out the debugger. Hamcrest matchers are really useful for this, particularly when it comes to analysing test failures from a continuous integration tool’s console.
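Hamcrest itself is a Java library, but the principle it embodies – failure messages that describe both what was expected and what actually happened, so a CI console log alone is enough to diagnose the failure – can be shown with a toy helper (this is a sketch of the idea, not Hamcrest’s real API):

```java
class DescriptiveAssert {
    // Fails with a message naming the thing being checked, the
    // expected value and the actual value - no debugger required
    // to see what went wrong.
    static void assertThat(String description, Object actual, Object expected) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                description + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }
}
```

A failing check such as `assertThat("customer name", "John", "Jane")` then reports `customer name: expected <Jane> but was <John>` instead of a bare "assertion failed".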

Writing our tests in a consistent style also helps especially when it comes to setting up mocks and stubs from my experience. If we know how and where these have been setup then we don’t need to resort to the debugger to work out why one was or wasn’t called – it should be obvious just from reading the test.

Immutability

This is an idea which I touched on in a post I wrote about how writing clean OO code can help reduce the cost of change in our applications; the suffering that too much mutable state can cause became abundantly clear to me after a coding dojo session where we had exactly that problem.

Even using the debugger was difficult because we were trying to remember what the state was meant to be compared to how it actually was.

Greg Young has an interesting presentation which he gave at a Europe Virtual Alt.NET meeting in February (there is also a similar interview on InfoQ) where he talks about how we can model state transitions explicitly by using command objects rather than implicitly by having domain objects keep track of a lot of internal state.

He also describes the use of getters/setters as a domain anti-pattern which I would certainly agree with as it results in behaviour being defined away from the data, usually resulting in unexpected state changes in our objects which we can’t figure out without getting out the debugger.
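A toy Java sketch of that idea (all names here are invented): rather than exposing a setter that any caller can use, the object only changes state in response to an explicit command object, so every transition has a single, named entry point.

```java
// An explicit, named state transition rather than an anonymous setter call.
class ChangeAddressCommand {
    final String newAddress;
    ChangeAddressCommand(String newAddress) { this.newAddress = newAddress; }
}

class CustomerAccount {
    private String address;

    CustomerAccount(String address) { this.address = address; }

    // The only way the address can change - easy to find, easy to log,
    // and there is exactly one place to look when the state is wrong.
    void handle(ChangeAddressCommand command) {
        this.address = command.newAddress;
    }

    String address() { return address; }
}
```

With no public setter, there is no way for the address to change unexpectedly from some distant corner of the code base.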

Minimise dependencies

Ensuring that our classes don’t have too many dependencies is another useful approach – having too many is an anti-pattern which tends to show up quite frequently in the controller of the MVC pattern.

When too much is happening in classes they become difficult to understand and, in turn, difficult to test, resulting in increased debugger usage because we’ve probably missed some paths through the code inadvertently.

When this happens we want to try and pull some of the similar operations out into another controller to make our life easier.

In Summary

These are some of the ways that I have noticed help reduce our need to rely on the debugger.

Using TDD as an approach to coding helped me cut down my debugger usage a lot and it is no longer my first choice of tool when there is a problem with code.

I’m sure there are other ways to reduce the need to debug, I just haven’t discovered them yet!

Written by Mark Needham

March 22nd, 2009 at 7:52 pm

Posted in Coding

Lean Thinking: Book Review

The Book

Lean Thinking by James P. Womack and Daniel T. Jones

The Review

This is the latest book in my lean learning after The Toyota Way, Taiichi Ohno’s Workplace Management and Lean Software Development and seemed like the most logical one to read next as it came at lean from a slightly different angle.

I found this the most hard going of the books I’ve read on the subject so far.

What did I learn?

• The underlying themes the book points out for successfully getting an organisation to adopt a lean approach is that we must have a change agent, lean knowledge and a lever for change. Interestingly that lever for change can often be a recession when a firm needs to make changes in order to survive – when times are good there is no need to change so it’s easier to just keep the status quo.
• My favourite quote from the book is the following which talks about the mindset needed for lean thinking:

Create a mindset in which temporary failure in pursuit of the right goal is acceptable but no amount of improvement is ever enough.

I like this because too often the human instinct is to take the risk free approach where we are afraid of failure and therefore miss opportunities to get better. The lean approach allows us to get past this in pursuit of a greater goal.

• One idea which really resonated with me as someone working in the industry was how Pratt & Whitney had to get past the managerial attitude of “ship on time and you’ll be fine [even if you’re shipping junk]” in order to make improvements. Often with software projects I have found that there appears to be a real focus on the date of promised delivery even though it would be beneficial to ship a bit later and ensure greater quality. Often there is no actual loss (except loss of face) from doing this either.
• Co-locating teams is a constant thread throughout the book and I’ve found this to be an important factor in successfully delivering software as well. The author also talks about the need to look at the cost across the whole life cycle of the product rather than just the cheaper production cost of offshoring operations. Around the time I read this chapter I was drinking some Ribena which said on the label that the blackcurrants were picked in New Zealand, the drink bottled in China and I was drinking it in Australia. There were quite a lot of transportation costs involved in the life cycle of that drink! In software it is the cost of communication rather than transportation that we need to consider when deciding to spread a team across different locations.
• The idea of takt time stood out for me – the idea here is to only produce at the rate at which the product is being demanded. This means that sales people shouldn’t go trying to create spikes in demand which I think is quite different to the way that typical organisations operate. Software wise I suppose this would be about delivering at the pace at which the customer needs the functionality and trying to release regularly so there aren’t spikes in the requirements.
• In the Wiremold case study the idea of reducing the suppliers so that there are less integration points in the whole process is described. In software having less moving parts certainly makes it easier for us to go faster.
• An interesting thing that is pointed out is that in all the firms case studied there are never any layoffs directly linked to lean improvements. This despite the fact that a lot less people will be needed once the process has been improved. It is pointed out that if people lose their jobs from lean then they’re going to do their best to sabotage it. The importance of everyone being involved in the continuous improvement is also emphasised. There are endless steps of improvement, we are never done.
• The importance of having a standard language when talking about lean is emphasised, helping ensure that everyone is talking about the same things. I think the idea is fairly similar to that of the ubiquitous language from Domain Driven Design.
• One of the wishes of the author is to create lean enterprises where lean thinking is applied all along the value stream from the raw materials all the way to the customer. The difficulty of getting all the firms to work together to allow this to happen is described but I can’t really see how this is going to happen for the time being.

In Summary

I found this book very heavy reading – it’s taken me almost three months to complete it! The stories hold good lessons but I found The Toyota Way to be a much easier read.

Written by Mark Needham

March 21st, 2009 at 10:36 am

Posted in Books


Coding: Reassessing what the debugger is for

When I first started programming in a ‘proper’ IDE one of the things that I thought was really cool was the ability to debug through my code whenever something wasn’t working quite the way I expected it to.

Now the debugger is not a completely pointless tool – indeed there is sometimes no other easy way to work out what’s going wrong – but I think it has become the default problem solver whenever a bit of code is not working as we expect it to.

Admittedly the name ‘debugger’ doesn’t really help us here, describing as it does a tool that “helps in locating and correcting programming errors” – which is all well and good, but I think it should be one of the last tools we turn to rather than one of the first.

Why?

From my experience I have found the debugger to be a very slow way of diagnosing, fixing and then ensuring that bugs don’t come back into my code again.

No doubt some people are experts at setting up breakpoints and getting the debugger to step to exactly the right place, but even once we’ve located the problem that way we have no way of ensuring it doesn’t reoccur unless we go and write an example/test that exposes it.

Another problem I have come across when debugging through code is that the code can sometimes act differently when we slow down its speed of execution, meaning that the bug we see without the debugger is not necessarily repeatable with it.
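To illustrate the point (a hedged Python sketch, not from the original post): a classic timing-sensitive bug is an unsynchronised read-modify-write on shared state. Run at full speed, two threads can interleave between the read and the write and lose updates; single-stepping in a debugger tends to serialise the threads so the race never shows up.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(times):
    # Read-modify-write with no synchronisation: another thread can run
    # between the read and the write, so updates may be lost -- but only
    # when the threads actually interleave, which stepping in a debugger
    # tends to prevent.
    global counter
    for _ in range(times):
        current = counter       # read
        counter = current + 1   # write

def safe_increment(times):
    # The same loop guarded by a lock always produces the expected total.
    global counter
    for _ in range(times):
        with lock:
            counter += 1

def run(worker, times=100_000, threads=2):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(times,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

# The locked version is deterministic; the unsafe one may come up short
# at full speed yet behave perfectly while being single-stepped.
print(run(safe_increment))  # 200000
```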

When using the debugger is reasonable

Despite my dislike of the debugger there are certainly occasions where it is very useful and superior to alternative approaches.

Tracing problems in framework or 3rd party code is one of those occasions. For example we were recently getting an error a few levels inside the ASP.NET MVC code and didn’t have a good idea of why it was happening.

Going through the code for 20 minutes or so with the debugger turned out to be a very useful exercise and we were able to find the problem and then change what we were doing so it didn’t reoccur.

Another time when it is useful is when we have code on another server that isn’t working – hooking in a remote debugger is very useful for discovering problems which may or may not be related to the fact that the environment the code is running under there is slightly different to our local one.

Alternatives

One of the most ironic cases I have seen of what I consider debugger misuse is using it to debug through a test failure as soon as it fails!

A key benefit of writing tests is that they should remove the need to use the debugger, so something has gone a bit wrong if we’re reaching for it in this case.

The typical situation is when there has been a null pointer exception somewhere and we want to work out why that’s happened. The debugger is rarely the best choice for doing that.

It is usually quite easy to work out just from reading the error message we get from our testing framework where the problem is, and if it’s not then we should look at writing our tests in a way that is more conducive to solving these types of problems.
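As an illustrative sketch of what that might look like (in Python, with made-up names), such a test asserts on one thing at a time and attaches messages, so the failure output itself points at the problem rather than leaving us with a bare null reference error:

```python
def create_customer(form_data):
    # Hypothetical code under test: builds a customer from UI form input.
    return {"name": form_data.get("name"), "email": form_data.get("email")}

def test_customer_is_built_from_form_data():
    customer = create_customer({"name": "Mark"})

    # Guard assertion first: if the object is missing entirely we get a
    # clear message instead of an attribute/key error further down.
    assert customer is not None, "create_customer returned nothing"

    # Assert on each field separately so a failure names the culprit.
    assert customer["name"] == "Mark", f"unexpected name: {customer['name']!r}"
    assert customer.get("email") is None, "email should be absent when not supplied"
```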

An approach I recently learnt for narrowing down test failures is the Saff Squeeze. The idea is to progressively inline the code under test into the failing test, re-running it and pruning the parts that pass, until the failure is isolated to an exact location. We can then put a test around it to ensure it doesn’t happen again.

It’s definitely more difficult to do this than to just get out the debugger, but we gain greater insight into the areas of our code that aren’t well tested and we can also tighten up our tests as we go.

Another approach which I have certainly overlooked in the past is looking at the logs to see what’s going on. If we are using logging effectively then it should have recorded the information needed to help us diagnose the problem quickly.
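A minimal sketch of that kind of logging in Python (the binder scenario and names are assumed examples, not from the post):

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("binder")

def bind_field(form, field):
    # Record the inputs at DEBUG level so a later failure can be diagnosed
    # from the log rather than by re-running under a debugger.
    log.debug("binding %r from form %r", field, form)
    value = form.get(field)
    if value is None:
        # Warn when expected data is missing -- the line we'd grep for.
        log.warning("field %r missing from form data", field)
    return value

bind_field({"name": "Mark"}, "email")  # logs a warning, returns None
```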

In Summary

Of course these approaches won’t always work out for us, in which case I have no problem with getting out the debugger.

Taking time to think whether we actually need to do so or if another approach might be better is certainly a valuable thing to do though.

Written by Mark Needham

March 20th, 2009 at 9:39 pm

Posted in Coding
