Mark Needham

Thoughts on Software Development

Archive for the ‘TDD’ tag

Kent Beck’s Test Driven Development Screencasts

with 4 comments

Following the recommendations of Corey Haines, Michael Guterl, James Martin and Michael Hunger I decided to get Kent Beck’s screencasts on Test Driven Development which have been published by the Pragmatic Programmers.

I read Kent’s ‘Test Driven Development By Example’ book a couple of years ago and remember enjoying it, so I was intrigued to see what it would be like to watch some of those ideas put into practice in real time.

As I expected a lot of Kent’s approach wasn’t that surprising to me but there were a few things which stood out:

  • Kent wrote the code inside the first test and didn’t pull that out into its own class until the first test case was working. I’ve only used this approach in coding dojos when we followed Keith Braithwaite’s ‘TDD as if you meant it’ idea. Kent wasn’t as stringent about writing all the code inside the test though – he only did this when he was getting started with the problem.

    The goal seemed to be to keep the feedback loop as tight as possible and this approach was the easiest way to achieve that when starting out (there’s a sketch of this style just after this list).

  • He reminded me of the ‘calling the shots’ technique when test driving a piece of code. We should predict what’s going to happen when we run the test rather than just blindly running it. Kent pointed out that this is a good way for us to learn something – if the test doesn’t fail/pass the way that we expect it to then we have a gap in our understanding of how the code works. We can then do something about closing that gap.
  • I was quite surprised that Kent copied and pasted part of an existing test almost every time he created a new one – I thought that was just something that we did because we’re immensely lazy!

    I’m still unsure about this practice because although Ian Cartwright points out the dangers of doing this it does seem to make for better pairing sessions. The navigator doesn’t have to wait twiddling their thumbs while their pair types out what is probably a fairly similar test to one of the others in the same file. Having said that it could be argued that if your tests are that similar then perhaps there’s a better way to write them.

    For me the main benefit of not copy/pasting is that it puts us in a mindset where we have to think about the next test that we’re going to write. I got the impression that Kent was doing that anyway so it’s probably not such a big deal.

  • Kent used the ‘present tense’ in his test names rather than prefixing each test with ‘should’. This is an approach I came across when working with Raph at the end of last year.

    To use Esko Luontola’s lingo I think the tests follow the specification style as each of them seems to describe a particular behaviour for part of the API.

    I found it interesting that he includes the method name as part of the test name. For some reason I’ve tried to avoid doing this and often end up with really verbose test names when a more concise name with the method name included would have been way more readable.

    A couple of examples are ‘getRetrievesWhatWasPut’ and ‘getReturnsNullIfKeyNotFound’ which both describe the intent of their test clearly and concisely. The code and tests are available to download from the Prag Prog website.

  • One thing which I don’t think I’ve quite grasped yet is something Kent pointed out in his summary at the end of the 4th screencast. To paraphrase, he suggested that the order in which we write our tests/code can have quite a big impact on the way that the code evolves.

    He described the following algorithm to help find the best order:

    • Write some code.
    • Erase it.
    • Write it in a different order.

    And repeat.

    I’m not sure if Kent intended for that cycle to be followed just when practicing or if it’s something he’d do with real code too. An interesting idea either way, and since I haven’t ever used that technique I’m intrigued as to how it would impact the way the code evolves.

  • There were also a few good reminders across all the episodes:
    • Don’t parameterise code until you actually need to.
    • Follow the Test – Code – Cleanup cycle.
    • Keep a list of tests to write and cross them off as you go.
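
To illustrate the first of those points, writing the code inside the test before extracting a class might look something like this. This is my own contrived sketch rather than Kent’s actual code:

[Test]
public void ReversesASingleWord()
{
	var input = "word";

	// for now the 'implementation' lives inside the test, keeping the feedback loop tight
	// (uses System.Linq for Reverse)
	var reversed = new string(input.Reverse().ToArray());

	Assert.That(reversed, Is.EqualTo("drow"));
}

Once that was passing, the inlined code would be extracted into its own class and the test changed to call that instead.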

Overall it was an interesting series of videos to watch and there were certainly some good reminders and ideas for doing TDD more effectively.

Written by Mark Needham

July 28th, 2010 at 10:44 am

Posted in Testing

TDD: Call your shots

with 4 comments

One of the other neat ideas I was reminded of when watching Kent Beck’s TDD screencasts is the value of ‘calling your shots’, i.e. writing a test and then saying what’s going to happen when you run that test.

It reminds me of an exercise we used to do in tennis training when I was younger.

The coach would feed the ball to you and just before you hit it you had to say exactly where on the court you were going to place it – cross court/down the line and short/deep.

The point of this exercise is that it’s relatively easy to just hit the ball over the net but to have control over exactly where the ball’s going to go is way more powerful albeit more difficult.

This applies to TDD as well – it’s easy to write a failing test but much more useful to write a failing test which fails in exactly the way that we expect it to.

I’ve written previously about the value of commentating when pair programming and calling your shots is a really easy way of keeping the conversation flowing in a pair.

I like to ask not only how a test is going to fail but why it’s going to fail in that way. I think it’s a pretty good way of making sure that you as a pair understand exactly what you’re doing. It also slows the pace just enough that you’re not coding without thinking about what you’re doing.

I quite like Bryan Liles’ take on this which he described in a talk at acts_as_conference 2009. Bryan suggests that we first write a test and then either make it green or change the error message that you’re getting.

We should know before we run the test whether it’s going to go green or whether the error message is going to change. The idea is that we either get the test to pass in one go or at least take a step closer to a passing test.
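
To make that concrete, calling the shot on a test might look something like this. The ‘Cache’ class here is invented for illustration:

[Test]
public void GetReturnsNullIfKeyNotFound()
{
	var cache = new Cache();

	// Called shot: we expect the assertion itself to fail with
	// 'Expected: null, But was: ...' because Get currently returns a
	// default value for missing keys. If it instead blows up with an
	// exception then there's a gap in our understanding of the code.
	Assert.That(cache.Get("missingKey"), Is.Null);
}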

Written by Mark Needham

July 28th, 2010 at 7:39 am

Posted in Testing

TDD: Testing collections

with 5 comments

I’ve been watching Kent Beck’s TDD screencasts and in the 3rd episode he reminded me of a mistake I used to make when I was first learning how to test drive code.

The mistake happens when testing collections: I would write a test which would pass even if the collection had nothing in it.

The code would look something like this:

[Test]
public void SomeTestOfACollection()
{
	var someObject = new SomeObject();
	var aCollection = someObject.Collection;
 
	foreach(var anItem in aCollection)
	{
		Assert.That(anItem.Value, Is.EqualTo(...));
	}
}

If the collection returned by someObject is empty then the test will still pass because the body of the loop, and with it the assertion, is never executed.

In Kent’s example he is using an iterator rather than a collection so he creates a local ‘count’ variable which he increments inside the for loop and then writes an assertion outside the loop that the ‘count’ variable has a certain value.
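
Translated into C# and the naming of this example, Kent’s version would look something like this (my reconstruction of the idea rather than his exact code):

[Test]
public void SomeTestOfACollection()
{
	var someObject = new SomeObject();

	var count = 0;
	foreach(var anItem in someObject.Collection)
	{
		Assert.That(anItem.Value, Is.EqualTo(...));
		count++;
	}

	// this fails if the loop never executed
	Assert.That(count, Is.GreaterThan(0));
}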

In my example we can just check that the length of the collection is non-zero before we iterate through it.

[Test]
public void SomeTestOfACollection()
{
	var someObject = new SomeObject();
	var aCollection = someObject.Collection;
 
	Assert.That(aCollection.Count(), Is.GreaterThan(0));
	foreach(var anItem in aCollection)
	{
		Assert.That(anItem.Value, Is.EqualTo(...));
	}
}

It’s a relatively simple problem to fix but it’s certainly one that’s caught me out before!

Written by Mark Needham

July 28th, 2010 at 6:05 am

Posted in Testing

TDD, small steps and no need for comments

with 7 comments

I recently came across a blog post written by Matt Ward describing some habits to make you a better coder and while he presented a lot of good ideas I found myself disagreeing with his 2nd tip:

2. Write Your Logic through Comments

When it comes to coding, there are many tenets and ideas I stand by. One of these is that code is 95% logic. Another is that logic doesn’t change when translated from human language into a programming language.

What this means is that if you can write it in code, you can write it in a spoken language like English or French.

Instead of just jumping into coding the function, I could step back and write the logic in plain English as comments.

I’ve tried this approach before and although it can be useful I found that quite frequently I ended up with a more complicated solution than if I’d driven it out with a test first approach.

The advantage of driving the code from examples/tests is that it helps to put you in a mindset where you only need to care about one specific way that a particular object may be used.

As a result we often end up with simpler code than if we’d tried to imagine the whole design of that object up front.

I find it more difficult to keep a lot of code in my head but having just one example is relatively easy. The less code I have to think about at any one time, the fewer mistakes I make.

As we add additional examples which describe different ways that the object may be used I’ve often found that the code ends up becoming more generalised and we end up with a simpler solution than we might have imagined when we started.

Matt goes on to say:

This way, I can think through the full logic and try to iron out the wrinkles before I actually get into writing the code. I’ve found this to be an incredibly valuable habit that tends to result in fewer bugs.

Using a TDD approach “allows us to describe in code what we want to achieve before we consider how” so the examples that we write provide an executable specification of what we expect the code to do.

I don’t have a problem with making mistakes when coding. I make mistakes all the time but having the safety net of tests helps me fix them pretty quickly.

Matt ends this section with the following:

As a bonus, since I will rarely actually delete the comments, writing the logic through comments also means that my code will already be documented, making it easier for others to follow my logic if they ever have to work on it, or even just for myself, if I have to come back to it several months or years down the road!

There are other ways of documenting code which don’t involve peppering it with comments. We can write our code in a way that reveals intent such that instead of having this:

// FUNCTION: Lock On Time
// This function will accept two time values, indicating the range through
// which it should return an unlocked status.
 
  // Create a new data object
 
  // Using the data object, get the current time
 
  // IF the current time falls within the range passed to the function
 
    // Return false – meaning that we are currently unlocked
 
  // ELSE
 
    // Return true – meaning that we are currently locked.
 
  // ENDIF
 
// END FUNCTION

We have something closer to this:

public bool ShouldBeLockedBasedOn(DateTime startOfTimeRange, DateTime endOfTimeRange)
{
	var dataObject = CreateDataObject();
	var currentTime = dataObject.CurrentTime;
 
	if(currentTime.IsBetween(startOfTimeRange, endOfTimeRange))
	{
		return false;
	}
	return true;
}

…where ‘IsBetween’ would be an extension method on DateTime. We could have that as a private method but I think it reads better this way.
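
A minimal sketch of that extension method, assuming inclusive bounds, might look like this:

public static class DateTimeExtensions
{
	public static bool IsBetween(this DateTime value, DateTime start, DateTime end)
	{
		return value >= start && value <= end;
	}
}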

In my experience comments don’t tend to be maintained in the same way that code is, so as soon as the code around them changes we’ll find that they quickly become misleading rather than helpful.

There are certainly times when it makes sense to put comments in code but using them as a substitute for writing intention revealing code isn’t one of those!

Written by Mark Needham

July 23rd, 2010 at 2:52 am

Posted in Testing

TDD: I hate deleting unit tests

with 10 comments

Following on from my post about the value we found in acceptance tests on our project when doing a large scale refactoring I had an interesting discussion with Jak Charlton and Ben Hall about deleting unit tests when they’re no longer needed.

The following is part of our discussion:

Ben:

@JakCharlton @markhneedham a lot (not all) of the unit tests created can be deleted once the acceptance tests are passing…

Jak:

@Ben_Hall @markhneedham yep I agree, but that isn’t what TDD really advocates – its a balance, unit tests work well in some places

Me:

@Ben_Hall @JakCharlton gotta be courageous to do that.Its like you’re ripping away the safety net. Even if it might be an illusion of safety

Jak:

@markhneedham one of the XP principles … Courage 🙂

[Diagram: the various safety nets we create to protect us from making breaking changes in production code]

While Jak and Ben are probably right I do find myself feeling way more anxious about deleting test code than I would deleting production code.

I think that this is mostly because when I delete production code we usually have some tests around that code so there is a degree of safety in doing so.

Deleting tests seems a bit more risky because there’s much more judgement involved in working out whether we’re removing the safety net that we created by writing those tests in the first place.

The diagram on the right hand side shows the way I see the various safety nets that we create to protect us from making breaking changes in production code.

In this case it might seem that a unit test is providing safety but it’s now an illusion of safety and in actual fact it’s barely protecting us at all.

I find it much easier to delete a unit test if it’s an obvious duplicate or if we’ve completely changed the way a piece of code works such that the test will never pass again anyway…

..and I find it more difficult to judge when we end up with tests which overlap while testing similar bits of functionality.

Do others feel like this as well or am I just being overly paranoid?

Either way does anyone have any approaches that give you more confidence that you’re not deleting something that will come back to haunt you later?

Written by Mark Needham

July 15th, 2010 at 11:15 pm

Posted in Testing

TDD: Consistent test structure

with 3 comments

While pairing with Damian we came across the fairly common situation where we’d written two different tests – one to handle the positive case and one the negative case.

While tidying up the tests after we’d got them passing we noticed that the test structure wasn’t exactly the same. The two tests looked a bit like this:

[Test]
public void ShouldSetSomethingIfWeHaveAFoo()
{
	var aFoo = FooBuilder.Build.WithBar("bar").WithBaz("baz").AFoo();
 
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(aFoo);
 
	Assert.That(result.HasFoo, Is.True);
}
[Test]
public void ShouldNotSetSomethingIfWeDoNotHaveAFoo()
{
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(null);
 
	Assert.That(result.HasFoo, Is.False);
}

There isn’t a great deal of difference between these two bits of code but the structure of the tests isn’t the same because in the second test I passed ‘null’ in directly instead of introducing a named variable.

Damian pointed out that if we were just glancing at the tests in the future it would be much easier for us if the structure was exactly the same. This would mean that we would immediately be able to identify what the test was supposed to be doing and why.

In this contrived example we would just need to pull out the ‘null’ into a descriptive variable:

[Test]
public void ShouldNotSetSomethingIfWeDoNotHaveAFoo()
{
	Foo noFoo = null;
 
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(noFoo);
 
	Assert.That(result.HasFoo, Is.False);
}

Although this is a simple example I’ve been trying to follow this guideline wherever possible and my tests now tend to have the following structure:

[Test]
public void ShouldShowTheStructureOfMarksTests()
{
	// The test data that's important for the test
 
	// Less important test data
 
	// Expectation/Stub setup
 
	// Call to object under test
 
	// Assertions
}

As a neat side effect I’ve also noticed that it seems to be easier to spot duplication which we can then extract when the tests follow this structure.

Written by Mark Needham

March 24th, 2010 at 6:53 am

Posted in Testing

TDD: Expressive test names

with 4 comments

Towards the end of a post I wrote just over a year ago I suggested that I wasn’t really bothered about test names anymore because I could learn what I wanted from reading the test body.

Recently, however, I’ve come across several tests that I wrote previously which were testing the wrong thing and had such generic names that it wasn’t obvious this was happening.

The tests in question were around code which partially clones an object but doesn’t copy some fields for various reasons.

Instead of documenting these reasons I had written tests with names like this:

[Test]
public void ShouldCloneFooCorrectly() { }
[Test]
public void ShouldSetupFooCorrectly() { }

When we realised that the code wasn’t working correctly, which didn’t happen until QA testing, these test names were really useless because they didn’t express the intent of what we were trying to test at all.

Damian and I spent some time writing more fine grained tests which described why the code was written the way it was.

We also changed the name of the test fixture to be more descriptive as well:

[TestFixture]
public class WhenCloningFooTests
{
	[Test]
	public void ShouldNotCloneIdBecauseWeWantANewOneAssignedInTheDatabase() { }
 
	[Test]
	public void ShouldNotCopyCompletedFlagBecauseWeWantTheFooCompletionJobToPickItUp() { }
}

It seems to me that these new test names are more useful as specifications of the system behaviour although we didn’t go as far as you can with some frameworks where you can create base classes and separate methods to describe the different parts of a test.

Despite that I think naming tests in this way can be quite useful so I’m going to try and write more of my tests like this.

Of course it’s still possible to test the wrong thing even if you are using more expressive names but I think it will make it less likely.

Written by Mark Needham

March 19th, 2010 at 6:06 pm

Posted in Testing

TDD: Rewriting/refactoring tests

with 2 comments

I’ve read several times about the dangers of the big rewrite when it comes to production code but I’ve recently been wondering whether we should apply the same rules to test code.

I worked with Raphael Speyer for a few weeks last year and on the code base we were working on he often spent some time rewriting tests originally written using rMock to use Mockito, which was the framework we were driving towards.

One of the benefits that he was able to get from doing this was that he had to understand the test in order to change it which enabled him to increase his understanding of how the code was supposed to work and identify anything that didn’t seem quite right.

I quite liked this idea at the time and I was reminded of it recently while working with some tests which required quite a lot of setup and tested several different things in the same test.

Unfortunately a few of them were failing and it was quite difficult to work out why that was.

My initial approach was to try and work my way through the tests inlining all the test code to start with and then extracting out irrelevant details to make the tests easier to understand.

Despite those attempts I was unable to work out why the test was failing so I worked out the main things the test was trying to verify and then wrote tests from scratch for each of those cases.

I was able to cover everything the original test did in several smaller tests, in less time than I had spent trying to debug the original one, and with a fair degree of confidence that I was testing exactly the same thing.

As I see it the big danger of rewriting is that we’re always playing catch up with the current system which is still being worked on in production and we never quite catch up.

I’m not so sure this logic applies in this case because we’re only rewriting small bits of code which means that we can replace the original test very quickly.

My main driver when working with tests is to ensure that they’re easy to understand and make it easy to reason about the code so if I have to rewrite some tests to make that happen then I think it’s a fair trade off.

My initial approach would nearly always be to refactor the tests that are already there. Rewriting is something I’d look to do if I was really struggling to refactor effectively.

Written by Mark Needham

January 25th, 2010 at 10:06 pm

Posted in Testing

TDD: Simplifying a test with a hand rolled stub

with 7 comments

I wrote a couple of weeks ago about my thoughts on hand written stubs vs framework generated stubs and I recently ran into a situation where a hand rolled stub helped me out while trying to simplify some test code.

The code in question was making use of several framework generated stubs/mocks and one in particular was trying to return different values depending on the value passed as a parameter.

The test was failing and I spent about half an hour unsuccessfully trying to work out why it wasn’t working as expected before I decided to replace it with a hand rolled stub that did exactly what I wanted.

This is a simplified version of the test:

[Test]
public void SomeTest()
{
	var service = MockRepository.GenerateStub<IService>();
 
	service.Stub(x => x.SomeMethod("call1")).Return("aValue");
	service.Stub(x => x.SomeMethod("call2")).Return("anotherValue");
 
	// and so on
 
	new SomeObject(service).AnotherMethod();
 
       // some assertions
}

For the sake of the test I only wanted ‘service’ to return a value of ‘aValue’ the first time it was called and then ‘anotherValue’ for any other calls after that.

I therefore wrote the following hand rolled stub to try and simplify the test for myself and plugged it into the original test:

public class AValueOnFirstCallThenAnotherValueService : IService
{
	private int numberOfCalls = 0;
 
	public string SomeMethod(string parameter)
	{
		if(numberOfCalls == 0)
		{
			numberOfCalls++;
			return "aValue";
		}
		else
		{
			numberOfCalls++;
			return "anotherValue";
		}
	}
}
[Test]
public void SomeTest()
{
	var service = new AValueOnFirstCallThenAnotherValueService();
 
	new SomeObject(service).AnotherMethod();
 
       // some assertions
}

I’ve never tried this particular approach before but it made it way easier for me to identify what was going wrong and I was then able to get the test to work as expected and move onto the next one.

In retrospect it should have been possible for me to work out why the original framework generated stub wasn’t working but it seemed like the right time to cut my losses; writing the hand rolled one and getting it working took an order of magnitude less time.

Written by Mark Needham

January 25th, 2010 at 9:23 pm

Posted in Testing

TDD: Removing the clutter

with 3 comments

I got the chance to work with Phil for a couple of weeks last year and one of the most interesting things that he started teaching me was the importance of reducing the clutter in our tests and ensuring that we take some time to refactor them as well as the code as part of the ‘red-green-refactor’ cycle.

I’m still trying to work out the best way to do this but I came across a really interesting post by J.B. Rainsberger where he describes how he removes irrelevant details from his tests.

Since I worked with Phil I’ve started noticing some of the ways that we can simplify tests so that they are more useful as documentation of how our system works.

Wrapping methods around irrelevant test builders

One thing I’ve noticed in tests recently is that generally most of the setup code for a test is irrelevant and is just there to get the test to actually run. There’s very little that’s actually interesting and more often than not it ends up getting hidden amongst the other irrelevant stuff.

The test builder pattern is a really useful one for allowing us to easily set up test data but I feel that if we’re not careful it contributes to the clutter.

To describe a contrived example:

[Test]
public void SomeTest() 
{
	var bar = BarBuilder.Build.WithBaz("baz").BuildBar();
	var foo = FooBuilder.Build.Bar(bar).BuildFoo();
 
	var someObject = new SomeObject();
	var result = someObject.SomeMethod(foo);
 
	Assert.That(result.Baz, Is.EqualTo("baz"));
}

In this example ‘Foo’ is actually not important at all. What we’re really interested in is Baz which happens to be accessed via Foo.

I’ve started refactoring tests like that into the following style:

[Test]
public void SomeTest() 
{
	var bar = BarBuilder.Build.WithBaz("baz").BuildBar();
 
	var someObject = new SomeObject();
	var result = someObject.SomeMethod(FooWith(bar));
 
	Assert.That(result.Baz, Is.EqualTo("baz"));
}
 
private Foo FooWith(Bar bar)
{
	return FooBuilder.Build.Bar(bar).BuildFoo();
}

In this example it doesn’t make much difference in terms of readability but when there are more dependencies it works quite well for driving the test into a state where all we see are the important details that form part of the ‘Arrange – Act – Assert’ pattern.
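
For reference, a builder like ‘BarBuilder’ is typically implemented along these lines. This is a sketch which assumes ‘Bar’ has a constructor taking its ‘baz’ value:

public class BarBuilder
{
	private string baz = "defaultBaz";

	public static BarBuilder Build
	{
		get { return new BarBuilder(); }
	}

	public BarBuilder WithBaz(string value)
	{
		baz = value;
		return this;
	}

	public Bar BuildBar()
	{
		return new Bar(baz);
	}
}

Each ‘With’ method returns the builder itself so that calls can be chained, and sensible defaults mean a test only has to mention the values it actually cares about.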

New up object under test inline

Object initialisation is often not worthy of a variable in our test since it doesn’t really add anything to our understanding of the test.

I only really break this rule when we need to call one method on the object under test and then need to call another method to verify whether or not the expected behaviour happened.

More often than not this is only the case when dealing with framework code. It’s much easier to avoid this in our own code.

In the example that I started with we can inline the creation of ‘SomeObject’ without losing any of the intent of the test:

[Test]
public void SomeTest() 
{
	var bar = BarBuilder.Build.WithBaz("baz").BuildBar();
 
	var result = new SomeObject().SomeMethod(FooWith(bar));
 
	Assert.That(result.Baz, Is.EqualTo("baz"));
}

The only time I don’t do this is when the constructor takes in a lot of dependencies and keeping it all inlined would take the code off the right side of the screen.

In any case it’s a sign that something’s gone wrong and the object probably has too many dependencies so we need to try and fix that.

Pull up static dependencies into fields

Another technique I’ve been trying is pulling static dependencies (i.e. ones whose values are not mutated in the test) up into fields and initialising them there.

A typical example would be in tests that have a clock.

[Test]
public void ShouldShowFoosOlderThanToday()
{
	var clock = new ControlledClock(new DateTime(2010,1,16));
	var fooService = MockRepository.GenerateStub<IFooService>();
 
	var fooFromYesterday = new Foo { Date = 1.DayBefore(clock) };
	var aCollectionOfFoos = new List<Foo> { fooFromYesterday };
	fooService.Stub(f => f.GetFoos()).Return(aCollectionOfFoos);
 
	var oldFoos = new FooFinder(clock, fooService).GetFoosFromEarlierThanToday();
 
	Assert.That(oldFoos.Count, Is.EqualTo(1));
	// and so on
}
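
The ‘ControlledClock’ used here is a hand rolled test double. A minimal sketch, assuming the interface exposes the current time via a ‘Now’ property, might look like this:

public interface IClock
{
	DateTime Now { get; }
}

public class ControlledClock : IClock
{
	private readonly DateTime currentTime;

	public ControlledClock(DateTime currentTime)
	{
		this.currentTime = currentTime;
	}

	// always returns the fixed time it was constructed with so tests are deterministic
	public DateTime Now
	{
		get { return currentTime; }
	}
}

…and ‘1.DayBefore(clock)’ would then be an extension method on int along these lines:

public static class IntegerExtensions
{
	public static DateTime DayBefore(this int days, IClock clock)
	{
		return clock.Now.AddDays(-days);
	}
}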

I would pull the clock variable up to be a field since the value we want to return for it is going to be the same for the whole test fixture.

[TestFixture]
public class TheTestFixture
{
	private readonly IClock Clock = new ControlledClock(new DateTime(2010,1,16));
 
	[Test]
	public void ShouldShowFoosOlderThanToday()
	{
		var fooService = MockRepository.GenerateStub<IFooService>();
 
		var fooFromYesterday = new Foo { Date = 1.DayBefore(Clock) };
		var aCollectionOfFoos = new List<Foo> { fooFromYesterday };
		fooService.Stub(f => f.GetFoos()).Return(aCollectionOfFoos);
 
		var oldFoos = new FooFinder(Clock, fooService).GetFoosFromEarlierThanToday();
 
		Assert.That(oldFoos.Count, Is.EqualTo(1));
		// and so on
	}
}

I’m less certain what I would do with ‘fooService’. I’ve run into problems previously by pulling these types of dependencies up into a setup method if we’ve also moved the corresponding ‘Stub’ or ‘Mock’ call as well. With that setup the intent of the test is now in two places which makes it more difficult to understand.

In Summary

It’s really interesting to read about the way that others are trying to write better tests and Brian Marick also has a post where he describes how he is able to create even more intention revealing tests in a dynamic language.

It’d be cool to hear some more ideas around this.

Written by Mark Needham

January 24th, 2010 at 1:13 am

Posted in Testing
