Mark Needham

Thoughts on Software Development

Archive for the ‘TDD’ tag

TDD: Thoughts on using a clock in tests

with 6 comments

A few months ago Uncle Bob wrote a post about TDD where he suggested that he preferred to use hand created stubs in his tests wherever possible and only resorted to using a Mockito created stub as a last resort.

I wrote previously about my thoughts on where to use each of the two approaches, and one example where a hand written stub seems to make sense is the clock.

I wonder if this ties in with J.B. Rainsberger’s theory about keeping irrelevant details out of the tests which make use of it.

We would typically define an interface and stub version of the clock like so:

public interface IClock
{
	DateTime Now();
}
public class ControlledClock : IClock
{
	private readonly DateTime dateTime;
 
	public ControlledClock(DateTime dateTime)
	{
		this.dateTime = dateTime;
	}
 
	public DateTime Now() 
	{ 
		return dateTime; 
	}
}

I forgot about it to start with and was stubbing it out using Rhino Mocks but I realised that every single test needed something similar to the following code:

var theCurrentTime = new DateTime(2010, 1, 16);
var clock = MockRepository.GenerateStub<IClock>();
clock.Stub(c => c.Now()).Return(theCurrentTime);

We can extract the creation in the first two lines into fields, but the third line remains and typically ends up being pushed into a setup method which runs before each test.

With a clock it’s maybe not such a big deal, but with other dependencies, in my experience, it can become very difficult to follow where exactly the various return values are being set up.

When we use a hand written stub we only have to write the following code and then the date is controlled everywhere that calls ‘Now()’:

private readonly IClock clock = new ControlledClock(new DateTime(2010,1,16));

Following on from that my colleague Mike Wagg suggested the idea of creating extension methods on integers to allow us to fluently define values relative to the clock.

[Test]
public void ShouldShowFoosOlderThanToday()
{
	var clock = new ControlledClock(new DateTime(2010,1,16));
	var fooService = MockRepository.GenerateStub<IFooService>();
 
	var fooFromYesterday = new Foo { Date = 1.DayBefore(clock) };
	var aCollectionOfFoos = new List<Foo> { fooFromYesterday };
	fooService.Stub(f => f.GetFoos()).Return(aCollectionOfFoos);
 
	var oldFoos = new FooFinder(clock, fooService).GetFoosFromEarlierThanToday();
 
	Assert.That(oldFoos.Count, Is.EqualTo(1));
	// and so on
}

The extension method on integer would be like this:

public static class IntegerExtensions
{
	public static DateTime DayBefore(this int value, IClock clock)
	{
		return clock.Now().Subtract(TimeSpan.FromDays(value));
	}
}

It reads pretty well and seems more intention revealing than any other approaches I’ve tried out so I think I’ll be continuing to use this approach.

Written by Mark Needham

January 15th, 2010 at 9:56 pm

Posted in Testing


TDD: Hand written stubs vs Framework generated stubs

with 8 comments

A few months ago Uncle Bob wrote a post about TDD where he suggested that he preferred to use hand created stubs in his tests wherever possible and only resorted to using a Mockito created stub as a last resort.

I’ve tended to use framework created ones, but my colleague Matt Dunn and I noticed that this didn’t work out too well for us when writing tests around a controller: the majority of our tests made exactly the same call to a repository and expected the same return value, but a few select edge cases expected something different.

The edge cases came along later on, by which time we’d already gone past the stage of putting the stub expectation setup in every test and had moved it up into a setup method which ran before each test.

We tried to keep on using the same stub instance of the dependency which meant that we were trying to setup more than one stub expectation on the same method with different return values.

[TestFixture]
public class SomeControllerTests
{
	private ITheDependency theDependency;
	private Controller controller;
 
	[SetUp]
	public void Before()
	{
		theDependency = MockRepository.GenerateStub<ITheDependency>();
		theDependency.Stub(t => t.SomeMethod()).Return(someResult);
 
		controller = new Controller(theDependency);
	}
 
	....
 
	[Test]
	public void SomeAnnoyingEdgeCaseTest()
	{
		theDependency.Stub(t => t.SomeMethod()).Return(someOtherResult);
 
		controller.DoSomething();
 
		// and so on...
	}
 
}

We were struggling to remember where we had set up the various return values and could see a few ways to try and reduce this pain.

The first was to extract those edge case tests out into another test fixture where we would setup the stub expectation on a per test basis instead of using a blanket approach for all of them.

This generally works pretty well although it means that we now have our tests in more than one place, which can be a bit confusing unless you know that’s what’s being done.

Another way, which Matt suggested, is to make use of a hand written stub for the tests which aren’t bothered about the return value and then use a framework created one for the specific cases.

The added benefit of that is that it’s more obvious that the latter tests care about the result returned because it’s explicitly stated in the body of the test.
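To make that concrete, here is a minimal sketch in Java (the setting of Uncle Bob’s original Mockito-based post); the interface and its return value are hypothetical names, not the actual project code:

```java
// Application-level dependency of the controller (hypothetical names).
interface TheDependency {
    String someMethod();
}

// Hand-written stub: every test that doesn't care about the return
// value can share this single canned answer.
class TheDependencyStub implements TheDependency {
    @Override
    public String someMethod() {
        return "someResult";
    }
}

public class HandWrittenStubExample {
    public static void main(String[] args) {
        TheDependency dependency = new TheDependencyStub();
        // Tests that only need *a* value use the stub; the few edge case
        // tests that care about the result set up a framework stub instead.
        System.out.println(dependency.someMethod());
    }
}
```

The bulk of the tests construct `new TheDependencyStub()` and never mention the return value again; only the edge case tests reach for the mocking framework.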

We found that our tests were easier to read once we’d done this and we spent much less time trying to work out what was going on.

I think framework generated stubs do have their place though and, as Colin Goudie points out in the comments on Uncle Bob’s post, it probably makes more sense to use a framework generated stub when each of our tests returns different values for the same method call.

If we don’t do that then we end up writing lots of different hand written stubs which I don’t think adds much value.

I still start out with a framework generated stub but if it’s becoming overly complicated or repetitive then I’m happy to switch to a hand written one.

It’d be interesting to hear others’ approaches on this.

Written by Mark Needham

January 15th, 2010 at 9:44 pm

Posted in Testing


Roy Osherove’s TDD Kata: An F# attempt

with 5 comments

As I’ve mentioned in a few of my recent posts I’ve been having another go at Roy Osherove’s TDD Kata but this time in F#.

One thing I’ve been struggling with when coding in F# is working out how many intermediate variables we actually need. They can be useful for expressing intent better but they’re clutter in a way.

I’ve included my solution at the end. In the active pattern which determines whether or not we have a custom delimiter defined in our input string, I can’t decide whether or not to create a value to represent the expression that determines that.

    let (|CustomDelimeter|NoCustomDelimeter|) (value:string) = 
        ...
        let hasACustomDelimeter = value.Length > 2 && "//".Equals(value.Substring(0, 2))
 
        if (hasACustomDelimeter) then
            if ("[".Equals(value.Substring(2,1))) then CustomDelimeter(delimeters value)
            else CustomDelimeter([| value.Substring(2, value.IndexOf("\n") - 2) |])
        else NoCustomDelimeter(",")

In a way it’s quite obvious that the ‘hasACustomDelimeter’ expression is what we’re using to determine if the input string has a custom delimiter, because we state that on the next line.

    let (|CustomDelimeter|NoCustomDelimeter|) (value:string) = 
        ...
        if (value.Length > 2 && "//".Equals(value.Substring(0, 2))) then
            if ("[".Equals(value.Substring(2,1))) then CustomDelimeter(delimeters value)
            else CustomDelimeter([| value.Substring(2, value.IndexOf("\n") - 2) |])
        else NoCustomDelimeter(",")

I can’t decide which I prefer so any thoughts on that would be welcome.

I ran into a bit of trouble trying to make the following requirement work because my original parse function was hiding the fact that the code was failing on this step:

Delimiters can be of any length with the following format:  “//[delimiter]\n” for example: “//***\n1***2***3” should return 6

The parse function was originally defined to return zero if it failed to parse the string, which meant that the function which decomposed the string into a sequence of numbers could fail without us seeing an exception, just a failing test.

    let parse value = 
        let (itParsed, value) = Decimal.TryParse value
        if (itParsed) then value else 0.0m

Having the function defined like this simplified the code a bit because I didn’t need to deal with ignoring some characters at the beginning of the string when a custom delimiter was being specified.

One of the instructions for the exercise is to focus on writing tests for the valid inputs and not for invalid inputs which I initially struggled with. Usually if I was test driving code I would have written tests against invalid inputs to help me drive out the design.

Once I started focusing on just making the test pass instead of finding a generic solution to the whole problem, this became much easier and I didn’t need to test with the invalid inputs.

I wrote tests for the code in C# using NUnit so that I could run the tests from Resharper. I still haven’t found a good way to run automated tests from inside Visual Studio when they’re written in F# otherwise I’d have probably just done that.

All the tests I wrote were against the ‘add’ function but the way the code is written at the moment it would be possible to write tests against the other functions directly if I wanted to.

If I was working in C# perhaps some of those functions would be classes and I would write tests directly against those but I haven’t done that here and I’m not sure whether it is necessary. ‘digits’ is the only function where that would seem to add value.

This is the code I’ve got at the moment:

module FSharpCalculator
    open System
    open System.Text.RegularExpressions
 
    let split (delimeter:array<string>) (value:string) = value.Split (delimeter, StringSplitOptions.None)
    let toDecimal value = Decimal.Parse value
 
    let (|CustomDelimeter|NoCustomDelimeter|) (value:string) = 
        let delimeters (value:string) = Regex.Matches(value, "\[([^]]*)\]") |> Seq.cast |> 
                                        Seq.map (fun (x:Match) -> x.Groups) |>
                                        Seq.map (fun x -> x |> Seq.cast<Group> |> Seq.nth 1) |>
                                        Seq.map (fun x -> x.Value) |>
                                        Seq.to_array
 
        if (value.Length > 2 && "//".Equals(value.Substring(0, 2))) then
            if ("[".Equals(value.Substring(2,1))) then CustomDelimeter(delimeters value)
            else CustomDelimeter([| value.Substring(2, value.IndexOf("\n") - 2) |])
        else NoCustomDelimeter(",")    
 
    let digits value = match value with 
                       | CustomDelimeter(delimeters)  -> value.Substring(value.IndexOf("\n")) |> split delimeters  |> Array.map toDecimal 
                       | NoCustomDelimeter(delimeter) -> value.Replace("\n", delimeter) |> split [|delimeter |] |> Array.map toDecimal
 
    let buildExceptionMessage negatives = 
        sprintf "No negative numbers allowed. You provided %s" (String.Join(",", negatives |> Array.map (fun x -> x.ToString())))
 
    let (|ContainsNegatives|NoNegatives|) digits =
        if (digits |> Array.exists (fun x -> x < 0.0m)) 
        then ContainsNegatives(digits |> Array.filter (fun x -> x < 0.0m))
        else NoNegatives(digits)
 
    let add value = if ("".Equals(value) or "\n".Equals(value)) then 0.0m
                    else match digits value |> Array.filter (fun x -> x < 1000m) with 
                         | ContainsNegatives(negatives) -> raise (ArgumentException (buildExceptionMessage negatives))
                         | NoNegatives(digits)          -> digits |> Array.sum

Written by Mark Needham

January 10th, 2010 at 1:46 am

Posted in F#


TDD: Hungarian notation for mocks/stubs

with 9 comments

A fairly common discussion that I’ve had with several of my colleagues is around the way that we name the variables used for mocks and stubs in our tests.

There seems to be about a 50/50 split between including ‘Stub’ or ‘Mock’ on the end of those variable names and not doing so.

In a simple example test using Rhino Mocks as the testing framework this would be the contrast between the two approaches:

[Test]
public void ShouldDoSomething()
{
	var someDependency = MockRepository.CreateMock<ISomeDependency>();
 
	someDependency.Expect(x => x.SomeMethod()).Return("someValue");
 
	var myController = new MyController(someDependency);
	myController.DoSomething();
 
	someDependency.VerifyAllExpectations();
}
[Test]
public void ShouldDoSomething()
{
	var someDependencyMock = MockRepository.CreateMock<ISomeDependency>();
 
	someDependencyMock.Expect(x => x.SomeMethod()).Return("someValue");
 
	var myController = new MyController(someDependencyMock);
	myController.DoSomething();
 
	someDependencyMock.VerifyAllExpectations();
}

I favour the former, where we don’t specify this information, because I think it adds unnecessary noise to the name and it’s a detail of the implementation of the object behind that variable which I don’t care about when I’m reading the test.

I do care about it if I want to change something but if that’s the case then I can easily see that it’s a mock or stub by looking at the place where it’s instantiated.

From my experience we often tend to end up with the situation where the variable name suggests that something is a mock or stub and then it’s used in a different way:

[Test]
public void ShouldDoSomething()
{
	var someDependencyMock = MockRepository.CreateMock<ISomeDependency>();
 
	someDependencyMock.Stub(x => x.SomeMethod()).Return("someValue");
 
	var myController = new MyController(someDependencyMock);
	myController.DoSomething();
}

That then becomes a pretty misleading test because the reader is unsure whether the name is correct and the stub call is incorrect or whether it should in fact be a stub and the name is wrong.

The one time that I’ve seen that extra information being useful is when we have really long tests – perhaps when writing tests around legacy code which is tricky to get under test.

In this situation it is very nice to be able to easily see exactly what we’re doing with each of our dependencies.

Hopefully this is only a temporary situation before we can work out how to write simpler tests which don’t require this extra information.

Written by Mark Needham

January 6th, 2010 at 12:08 am

Posted in Testing


Roy Osherove’s TDD Kata: My first attempt

with 5 comments

I recently came across Roy Osherove’s commentary on Corey Haines’ attempt at Roy’s TDD Kata so I thought I’d try it out in C#.

Andrew Woodward has recorded his version of the kata where he avoids using the mouse for the whole exercise so I tried to avoid using the mouse as well and it was surprisingly difficult!

I’ve only done the first part of the exercise so far which is as follows:

  1. Create a simple String calculator with a method int Add(string numbers)
    1. The method can take 0, 1 or 2 numbers, and will return their sum (for an empty string it will return 0) for example “” or “1” or “1,2”
    2. Start with the simplest test case of an empty string and move to 1 and two numbers
    3. Remember to solve things as simply as possible so that you force yourself to write tests you did not think about
    4. Remember to refactor after each passing test
  2. Allow the Add method to handle an unknown amount of numbers
  3. Allow the Add method to handle new lines between numbers (instead of commas).
    1. the following input is ok:  “1\n2,3”  (will equal 6)
    2. the following input is NOT ok:  “1,\n” 
    3. Make sure you only test for correct inputs. there is no need to test for invalid inputs for these katas
  4. Allow the Add method to handle a different delimiter:
    1. to change a delimiter, the beginning of the string will contain a separate line that looks like this: “//[delimiter]\n[numbers…]” for example “//;\n1;2” should return 3 where the default delimiter is ‘;’.
    2. the first line is optional. all existing scenarios should still be supported
  5. Calling Add with a negative number will throw an exception “negatives not allowed” – and the negative that was passed. If there are multiple negatives, show all of them in the exception message.

Mouseless coding

I know a lot of the Resharper shortcuts but I found myself using the mouse mostly to switch to the solution explorer and run the tests.

These are some of the shortcuts that have become more obvious to me from trying not to use the mouse:

  • I’m using a Mac and VMWare so I followed the instructions on Chris Chew’s blog to setup the key binding for ‘Alt-Insert’. I also setup a key binding for ‘Ctrl-~’ to map to ‘Menu’ to allow me to right click on the solution explorer menu to create my unit tests project, to add references and so on. I found that I needed to use VMWare 2.0 to get those key bindings setup – I couldn’t work out how to do it with the earlier versions.
  • I found that I had to use ‘Ctrl-Tab‘ to get to the various menus such as Solution Explorer and the Unit Test Runner. ‘Ctrl-E‘ also became useful for switching between the different code files.

Simplest thing possible

The first run through of the exercise I made use of a guard block for the empty string case and then went straight to ‘String.Split’ to get each of the numbers and then add them together.

It annoyed me that there had to be a special case for the empty string so I changed my solution to make use of a regular expression instead:

Regex.Matches(numbers, "\\d+").Cast<Match>().Select(x => int.Parse(x.Value)).Aggregate(0, (acc, num) => acc + num);

That works for nearly all of the cases provided, but it’s not incremental at all and it doesn’t even care whether there are delimiters between the numbers or not: it just gets the numbers!

It eventually came unstuck when trying to work out if there were negative numbers or not. I considered trying to work out how to do that with a regular expression but it did feel as if I’d totally missed the point of the exercise:

Remember to solve things as simply as possible so that you force yourself to write tests you did not think about

I decided to watch Corey’s video to see how he’d achieved this and I realised he was doing much smaller steps than me.

I started again following his lead and found it interesting that I wasn’t naturally seeing the smallest step but more often than not the more general solution to a problem.

For example the first part of the problem is to add together two numbers separated by a comma.

Given an input of “1,2” we should get a result of 3.

I really wanted to write this code to do that:

if(number == "") return 0;
return number.Split(',').Aggregate(0, (acc, num) => acc + int.Parse(num));

But a simpler version would be this (assuming that we’ve already written the code for handling a single number):

if (number == "") return 0;
if (number.Length == 1) return int.Parse(number); 
return int.Parse(number.Substring(0, 1)) + int.Parse(number.Substring(2, 1));

After writing a few more examples we do eventually end up at something closer to that first solution.

Describing the relationships in code

I’m normally a fan of doing simple incremental steps, but for me the first solution expresses the intent much more than the second one does, and the step from using ‘Substring’ to using ‘Split’ doesn’t seem that incremental to me. It’s a bit of a leap.

This exercise reminds me a bit of a post by Reg Braithwaite where he talks about programming golf. In this post he makes the following statement:

The goal is readable code that expresses the underlying relationships.

In the second version of this we’re describing the relationship very specifically and then we’ll generalise that relationship later when we have an example which forces us to do that. I think that’s a good thing that the incremental approach encourages.

Programming in the large/medium/small

In this exercise I found that the biggest benefit of only coding what you needed was that the code was easier to change when a slightly different requirement was added. If we’ve already generalised our solution then it can be quite difficult to add that new requirement.

I recently read a post by Matt Podwysocki where he talks about three different types of programming:

  • Programming in the large: a high level that affects as well as crosscuts multiple classes and functions
  • Programming in the medium: a single API or group of related APIs in such things as classes, interfaces, modules
  • Programming in the small: individual function/method bodies

From my experience generalising code prematurely hurts us the most when we’re programming in the large/medium and it’s really difficult to recover once we’ve done that.

I’m not so sure where the line is when programming in the small. I feel like generalising code inside small functions is not such a bad thing although based on this experience perhaps that’s me just trying to justify my currently favoured approach!

Written by Mark Needham

December 25th, 2009 at 10:25 pm

Posted in Coding


TDD: Only mock types you own

with 9 comments

Liz recently posted about mock objects and the original ‘mock roles, not objects‘ paper and one thing that stood out for me is the idea that we should only mock types that we own.

I think this is quite an important guideline to follow otherwise we can end up in a world of pain.

One area which seems particularly vulnerable to this type of thing is when it comes to testing code which interacts with Hibernate.

A common pattern that I’ve noticed is to create a mock for the ‘EntityManager‘ and then verify that certain methods on it were called when we persist or load an object for example.

There are a couple of reasons why doing this isn’t a great idea:

  1. We have no idea what the correct method calls are in the first place so we’re just guessing based on looking through the Hibernate code and selecting the methods that we think make it work correctly.
  2. If the library code gets changed then our tests break even though functionally the code might still work.

The suggestion in the paper when confronted with this situation is to put a wrapper around the library and then presumably test that the correct methods were called on the wrapper.

Programmers should not write mocks for fixed types, such as those defined by the runtime or external libraries. Instead they should write thin wrappers to implement the application abstractions in terms of the underlying infrastructure. Those wrappers will have been defined as part of a need-driven test.
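As a rough sketch of that advice in Java (the home of the Hibernate example): we define the abstraction we own and keep the library type behind a thin wrapper. `ThirdPartySession` below is a hand-rolled stand-in for the real library interface, and all of the names are hypothetical:

```java
// Stand-in for the fixed, third-party type we don't own
// (in the real code this would be Hibernate's EntityManager).
interface ThirdPartySession {
    void persist(Object entity);
}

// The application-level abstraction we DO own, driven out by our tests.
interface CustomerStore {
    void save(Customer customer);
}

// Thin wrapper: the only place in the code base that touches the library type.
class HibernateCustomerStore implements CustomerStore {
    private final ThirdPartySession session;

    HibernateCustomerStore(ThirdPartySession session) {
        this.session = session;
    }

    @Override
    public void save(Customer customer) {
        session.persist(customer);
    }
}

class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

public class WrapperExample {
    public static void main(String[] args) {
        // A hand-written fake of the stand-in, just to show the wiring.
        final Object[] saved = new Object[1];
        ThirdPartySession recording = entity -> saved[0] = entity;

        CustomerStore store = new HibernateCustomerStore(recording);
        store.save(new Customer("Alice"));
        System.out.println(((Customer) saved[0]).name);
    }
}
```

In our tests we would now mock or stub `CustomerStore`, a type we own, while the wrapper itself gets covered by functional tests against the real library.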

I’ve never actually used that approach but I’ve found that with Hibernate in particular it makes much more sense to write functional tests which verify the expected behaviour of using the library.

With other libraries which perhaps don’t have side effects like Hibernate does those tests would be closer to unit tests but the goal is still to test the result that we get from using the library rather than being concerned with the way that the library achieves that result.

Written by Mark Needham

December 13th, 2009 at 9:47 pm

Posted in Testing


TDD: Big leaps and small steps

with 6 comments

About a month ago or so Gary Bernhardt wrote a post showing how to get started with TDD and while the post is quite interesting, several comments on the post pointed out that he had jumped from iteratively solving the problem straight to the solution with his final step.

Something which I’ve noticed while solving algorithmic problems in a couple of different functional programming languages is that the test driven approach doesn’t work so well for these types of problems.

Dan North points out something similar in an OreDev presentation where he talks about writing a BDD framework in Clojure.

To paraphrase:

If you can’t explain to me where this approach breaks down then you don’t know it well enough. You’re trying to sell a silver bullet.

The classic failure mode for iterative development is the big algorithm case. That’s about dancing with the code and massaging it until all the test cases pass.

Uncle Bob also points this out while referring to the way we develop code around the UI:

There is a lot of coding that goes into a Velocity template. But to use TDD for those templates would be absurd. The problem is that I’m not at all sure what I want a page to look like. I need the freedom to fiddle around with the formatting and the structure until everything is just the way I want it. Trying to do that fiddling with TDD is futile. Once I have the page the way I like it, then I’ll write some tests that make sure the templates work as written.

I think the common theme is that TDD works pretty well when we have a rough idea of where we intend to go with the code but we just don’t know the exact path yet. We can take small steps and incrementally work out exactly how we’re going to get there.

When we don’t really know how to solve the problem – which more often than not seems to be the case with algorithmic type problems – then at some stage we will take a big leap from being nowhere near a working solution to the working solution.

In those cases I think it still makes sense to have some automated tests both to act as regression to ensure we don’t break the code and to tell us when we’ve written the algorithm correctly.

An example of a problem where TDD doesn’t work that well is the traveling salesman problem, which we attempted at a coding dojo.

In this case the solution to the problem is the implementation of an algorithm and it’s pretty difficult to get there unless you actually know the algorithm.

During that dojo Julio actually spent some time working on the problem a different way – by implementing the algorithm directly – and he managed to get much further than we did.

It seems to me that perhaps this explains why although TDD is a useful design technique it’s not the only one that we should look to use.

When we have worked out where we are driving a design then TDD can be quite a useful tool for working incrementally towards that but it’s no substitute for taking the time to think about what exactly we’re trying to solve.

Written by Mark Needham

December 10th, 2009 at 10:14 pm

Posted in Testing


TDD: Testing delegation

with 3 comments

I recently came across an interesting blog post by Rod Hilton on unit testing and it reminded me of a couple of conversations Phil, Raph and I were having about the best way to test classes which delegate some responsibility to another class.

An example that we ran into recently was where we wrote some code which required one controller to delegate to another.

public class ControllerOne extends Controller {
    public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws Exception {
		....
    }
}
public class ControllerTwo extends Controller {
	private final ControllerOne controllerOne;
 
	public ControllerTwo(ControllerOne controllerOne) {
		this.controllerOne = controllerOne;
	}
 
    public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws Exception {
		....
		return controllerOne.handleRequest(...);
    }
}

My initial thought when working out how to test this code was that we should check that the request is actually getting routed via ControllerOne:

@Test
public void theTest() {
	ControllerOne controllerOne = mock(ControllerOne.class);
 
	ControllerTwo controllerTwo = new ControllerTwo(controllerOne);
 
	controllerTwo.handleRequest(...);
 
	verify(controllerOne).handleRequest(...);
}

When we discussed this, Raph and Phil both pointed out that we didn’t really care about the implementation of how the request was handled. What we care about is that the result we get after the request is handled is as expected.

We therefore changed our test to be more like this:

@Test
public void theTest() {
	ControllerOne controllerOne = mock(ControllerOne.class);
	ModelAndView myModelAndView = new ModelAndView();
	when(controllerOne.handleRequest(...)).thenReturn(myModelAndView);
 
	ControllerTwo controllerTwo = new ControllerTwo(controllerOne);
 
	ModelAndView actualModelAndView = controllerTwo.handleRequest(...);
 
	assertThat(actualModelAndView, equalTo(myModelAndView));
}

I’ve been finding more and more recently that when it comes to writing tests which do some sort of delegation the ‘stub + assert’ approach seems to work out better than just verifying.

You lose the fine-grained test that verifying mocks provides, but we can still pretty much tell indirectly whether the dependency was called, because if it wasn’t then it’s unlikely (though still possible) that we would have received the correct ‘ModelAndView’ in our assertion.

My current approach is that I’d probably only mock and verify an interaction if the dependency is a service which makes a network call or similarly expensive call where the interaction is as important as the result obtained.

For example we probably wouldn’t want to make that call multiple times and with verification we’re able to ensure that doesn’t happen.
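A hand-rolled spy is enough to get that guarantee without a framework. This is just an illustrative Java sketch, with a hypothetical service standing in for the expensive dependency:

```java
// Hypothetical expensive dependency, e.g. one that makes a network call.
interface PricingService {
    int priceFor(String sku);
}

// Hand-written spy: returns a canned result and records how many
// times the expensive method was invoked.
class PricingServiceSpy implements PricingService {
    int calls = 0;

    @Override
    public int priceFor(String sku) {
        calls++;
        return 100; // canned price
    }
}

public class SpyExample {
    public static void main(String[] args) {
        PricingServiceSpy pricing = new PricingServiceSpy();
        // The code under test would take PricingService as a dependency;
        // here we call it directly to show the recording.
        int price = pricing.priceFor("some-sku");
        System.out.println(price + " after " + pricing.calls + " call(s)");
    }
}
```

Asserting on `calls` afterwards gives us the “exactly once” check that we would otherwise get from verification.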

I find that as I’ve used mocking frameworks more I’m drifting from a mockist style of testing towards the classicist approach. I wonder if that’s quite a normal progression.

Written by Mark Needham

November 27th, 2009 at 2:43 pm

Posted in Testing


TDD: Useful when new on a project

without comments

Something which I’ve noticed over the last few projects I’ve worked on is that at the beginning, when I don’t know very much about the code base or the domain, pairing with someone to TDD something seems to make it significantly easier for me to follow what’s going on than other approaches I’ve seen.

I thought that it was probably because I’m more used to that approach than any other but in Michael Feathers’ description of TDD in ‘Working Effectively With Legacy Code‘ he points out the following:

One of the most valuable things about TDD is that it lets us concentrate on one thing at a time. We are either writing code or refactoring, we are never doing both at once.

It also temporarily removes us from the feeling of drowning in all the new information which is typical at the start of projects. In a way it reminds me of the ‘Retreat Into Competence‘ pattern from ‘Apprenticeship Patterns‘ as it gives us a chance to regain composure and take in all the new information.

We can probably make a much more useful contribution if we only need to understand one small piece of the system rather than having to understand everything, which can often be the case with what Feathers calls the ‘edit and pray’ approach to development.

It’s still necessary to spend some time trawling around the code base working out how everything fits together but I’m now more aware that it’s also really useful to take some time to just focus on one much smaller class or piece of functionality.

Perhaps also something to keep in mind the next time I’m pairing with someone who’s new to a project that I’ve been working on for a while.

Written by Mark Needham

November 6th, 2009 at 9:57 pm

Posted in Testing


TDD: Keeping assertions clear

with 7 comments

Something which I noticed about the first example test that I provided in my post about API readability and testability is that the assertion we’re making is not that great.

[Test]
public void ShouldConstructModelForSomeSituation()
{
	Assert.AreEqual(DateTime.Today.ToDisplayFormat(), model.SomeDate()); 
}

It’s not really obvious what the expected result is supposed to be except that it should be the ‘DisplayFormat’. If that fails then we’ll need to navigate to the ‘ToDisplayFormat’ method to work out what that method does.

I think it should be possible to immediately know why a test failed so that we can address the problem straight away without too much investigation.

In this example we changed the way the code was working which coincidentally allowed us to make the assertion more obvious.

[Test]
public void ShouldConstructModelForSomeSituation()
{
	Assert.AreEqual("10 Oct 2009", model.SomeDate()); 
}

Erik pointed out another example of this a few weeks ago while we were working on some HTML helper code.

The tests in question looked roughly like this:

[Test]
public void ShouldConstructReadOnlyValue()
{
	var readOnlyValue = new HtmlHelper().ReadOnlyValue("someId", "someValue", "someTextValue");
 
	...
	var expectedValue = new TestHtmlBuilder().AddLabel("someId", "someTextValue").AddHiddenField("someId", "someValue").Build();
 
	Assert.AreEqual(expectedValue, readOnlyValue);
}

The test reads reasonably nicely and it’s fairly obvious what it is we’re testing for.

The problem here is that we’ve hidden away our expectation and in this case we actually found out that the ‘TestHtmlBuilder’ had extra spaces in some places so all our assertions were incorrect and we didn’t even know!

In addition we end up duplicating the logic that the ‘HtmlHelper’ is doing if we create test assertion helpers like these.

[Test]
public void ShouldConstructReadOnlyValue()
{
	var readOnlyValue = new HtmlHelper().ReadOnlyValue("someId", "someValue", "someTextValue");
 
	...
	var expectedValue = @"<label for=""someId"">someTextValue</label><input type=""hidden"" id=""someId"" value=""someValue"" />";
 
	Assert.AreEqual(expectedValue, readOnlyValue);
}

The new test doesn’t look as clean as the old one but the assertion is much more obvious so if it fails then we can quickly work out why.

Written by Mark Needham

October 10th, 2009 at 11:07 am

Posted in Testing
