Mark Needham

Thoughts on Software Development

Archive for the ‘Testing’ Category

Testing XML generation with vimdiff

without comments

A couple of weeks ago I spent some time writing a Ruby DSL to automate the setup of load balancer, firewall and NAT rules through the vCloud API.

The vCloud API deals primarily in XML, so the DSL is just a thin layer which creates the appropriate markup.

When we started out we configured everything manually through the web console and then exported the XML, so the first thing the DSL needed to do was create XML that matched what we already had.

My previous experience with testing frameworks that do this is that they’ll tell you whether the XML you’ve generated is equivalent to the expected XML, but when the two differ it isn’t easy to work out what’s different.

I therefore decided to use a poor man’s approach where I first copied one rule into an XML file, attempted to replicate that in the DSL, and then used vimdiff to compare the files.

Although I had to manually verify whether or not the code was working I found this approach useful as any differences between the two pieces of XML were very easy to see.

90% of the rules were almost identical so I focused on the 10% that were different and once I’d got those working it was reasonably plain sailing.

My vimdiff command read like this:

ruby generate_networking_xml.rb > bar && vimdiff -c 'set diffopt+=iwhite' bar initialFirewall.xml

After I was reasonably confident that I understood the way the XML should be generated, I created an RSpec test which checked that we could correctly create all of the existing configurations using the DSL.
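The whitespace-insensitive comparison that `iwhite` gives vimdiff can also be reproduced inside a test. Here's a minimal sketch in plain Ruby — the rule XML and the `normalize_xml` helper are made up for illustration, not taken from the real DSL:

```ruby
# Compare generated XML against expected XML while ignoring formatting
# differences between tags - the same effect as vimdiff's 'iwhite' option.
# The firewall rule XML below is invented for illustration.
def normalize_xml(xml)
  xml.gsub(/>\s+</, "><").strip  # collapse whitespace between tags
end

generated = "<FirewallRule>\n  <IsEnabled>true</IsEnabled>\n</FirewallRule>"
expected  = "<FirewallRule><IsEnabled>true</IsEnabled></FirewallRule>"

puts normalize_xml(generated) == normalize_xml(expected)  # prints "true"
```

An assertion on the two normalised strings then gives the same pass/fail signal as eyeballing the vimdiff output.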

While discussing this approach with Jen, she suggested that an alternative would be to start with an RSpec test with most of the rules hard coded in XML and then replace them one by one with the DSL.

I think that probably does make more sense but I still quite like my hacky approach as well!

Written by Mark Needham

September 30th, 2012 at 3:48 pm

Posted in Testing


Testing: Trying not to overdo it

with 3 comments

The design of the code which contains the main logic of the application that I’m currently working on looks a bit like this diagram:

[Diagram: Orchestration code]

We load a bunch of stuff from an Oracle database, construct some objects from the data and then invoke a sequence of methods on those objects in order to execute our domain logic.

Typically we might expect to see unit-level tests against all the classes described in this diagram, but we’ve actually been trying out an approach where we don’t test the orchestration code directly, only via the resource which makes use of it.

We originally started off writing some tests around that code but they ended up being really similar to our database and resource tests.

Having them around also made it difficult to change the way the orchestration worked since we’d end up breaking most of the tests when we tried to change anything.

One disadvantage of not testing this code is that we end up using the debugger more when trying to work out why resource tests aren’t working, since we now have more code that isn’t directly tested.

[Diagram: Orchestration tests]

On the other hand we’ve been forced to drive logic into the domain objects as a result since we don’t have any other place to test that functionality from.

Testing directly against the domain objects is much easier since everything’s in memory and we can easily set up the data to be how we want it and inject it into the objects.

Another approach we could have taken would be to mock out the dependencies of the orchestration code, but since this code is mostly coordinating other classes there are a lot of dependencies, and those tests ended up being quite complicated and brittle.

Initially I was of the opinion that it wasn’t a good idea to leave the orchestration code untested, but looking back a month later I think it’s working reasonably well: putting this constraint on ourselves has made the code easier to change while still being well tested.

Written by Mark Needham

March 28th, 2012 at 12:10 am

Posted in Testing


Kent Beck’s Test Driven Development Screencasts

with 4 comments

Following the recommendations of Corey Haines, Michael Guterl, James Martin and Michael Hunger I decided to get Kent Beck’s screencasts on Test Driven Development which have been published by the Pragmatic Programmers.

I read Kent’s ‘Test Driven Development By Example’ book a couple of years ago and remember enjoying it, so I was intrigued as to what it would be like to see some of those ideas put into practice in real time.

As I expected a lot of Kent’s approach wasn’t that surprising to me but there were a few things which stood out:

  • Kent wrote the code inside the first test and didn’t pull that out into its own class until the first test case was working. I’ve only used this approach in coding dojos when we followed Keith Braithwaite’s ‘TDD as if you meant it’ idea. Kent wasn’t as stringent about writing all the code inside the test though – he only did this when he was getting started with the problem.

    The goal seemed to be to keep the feedback loop as tight as possible, and this approach was the easiest way to achieve that when starting out.

  • He reminded me of the ‘calling the shots’ technique when test driving a piece of code. We should predict what’s going to happen when we run the test rather than just blindly running it. Kent pointed out that this is a good way for us to learn something – if the test doesn’t fail/pass the way that we expect it to then we have a gap in our understanding of how the code works. We can then do something about closing that gap.
  • I was quite surprised that Kent copied and pasted part of an existing test almost every time he created a new one – I thought that was just something that we did because we’re immensely lazy!

    I’m still unsure about this practice because although Ian Cartwright points out the dangers of doing this it does seem to make for better pairing sessions. The navigator doesn’t have to wait twiddling their thumbs while their pair types out what is probably a fairly similar test to one of the others in the same file. Having said that it could be argued that if your tests are that similar then perhaps there’s a better way to write them.

    For me the main benefit of not copy/pasting is that it puts us in a mindset where we have to think about the next test that we’re going to write. I got the impression that Kent was doing that anyway so it’s probably not such a big deal.

  • Kent used the ‘present tense’ in his test names rather than prefixing each test with ‘should’. This is an approach I came across when working with Raph at the end of last year.

    To use Esko Luontola’s lingo I think the tests follow the specification style as each of them seems to describe a particular behaviour for part of the API.

    I found it interesting that he includes the method name as part of the test name. For some reason I’ve tried to avoid doing this and often end up with really verbose test names when a more concise name with the method name included would have been way more readable.

    A couple of examples are ‘getRetrievesWhatWasPut’ and ‘getReturnsNullIfKeyNotFound’ which both describe the intent of their test clearly and concisely. The code and tests are available to download from the Prag Prog website.

  • One thing which I don’t think I quite yet grasp is something Kent pointed out in his summary at the end of the 4th screencast. To paraphrase, he suggested that the order in which we write our tests/code can have quite a big impact on the way that the code evolves.

    He described the following algorithm to help find the best order:

    • Write some code
    • Erase it
    • Write it in a different order

    And repeat.

    I’m not sure if Kent intended for that cycle to be followed just when practicing or if it’s something he’d do with real code too. An interesting idea either way and since I haven’t ever used that technique I’m intrigued as to how it would impact the way code evolved.

  • There were also a few good reminders across all the episodes:
    • Don’t parameterise code until you actually need to.
    • Follow the Test – Code – Cleanup cycle.
    • Keep a list of tests to write and cross them off as you go.
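The present-tense, method-name-included naming style is easy to mirror in other languages too. As a small illustration, here's a sketch using Ruby's bundled Minitest with a made-up in-memory store (Kent's original examples were Java):

```ruby
require "minitest/autorun"

# A trivial in-memory store, standing in for the class under test.
class Store
  def initialize
    @data = {}
  end

  def put(key, value)
    @data[key] = value
  end

  def get(key)
    @data[key]
  end
end

# Present-tense test names that include the method under test, in the
# style of Kent's 'getRetrievesWhatWasPut' and 'getReturnsNullIfKeyNotFound'.
class StoreTest < Minitest::Test
  def test_get_retrieves_what_was_put
    store = Store.new
    store.put("a", 1)
    assert_equal 1, store.get("a")
  end

  def test_get_returns_nil_if_key_not_found
    assert_nil Store.new.get("missing")
  end
end
```

Each name reads as a concise specification of one behaviour of the API, which is exactly what made Kent's examples so readable.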

Overall it was an interesting series of videos to watch and there were certainly some good reminders and ideas for doing TDD more effectively.

Written by Mark Needham

July 28th, 2010 at 10:44 am

Posted in Testing

Tagged with

TDD: Call your shots

with 4 comments

One of the other neat ideas I was reminded of when watching Kent Beck’s TDD screencasts is the value of ‘calling your shots’, i.e. writing a test and then saying what’s going to happen when you run that test.

It reminds me of an exercise we used to do in tennis training when I was younger.

The coach would feed the ball to you and just before you hit it you had to say exactly where on the court you were going to place it – cross court/down the line and short/deep.

The point of this exercise is that it’s relatively easy to just hit the ball over the net, but having control over exactly where the ball’s going to go is far more powerful, albeit more difficult.

This applies to TDD as well – it’s easy to write a failing test but much more useful to write a failing test which fails in exactly the way that we expect it to.

I’ve written previously about the value of commentating when pair programming and calling your shots is a really easy way of keeping the conversation flowing in a pair.

I like to ask not only how a test is going to fail but why it’s going to fail in that way. I think it’s a pretty good way of making sure that you as a pair understand exactly what you’re doing. It also slows the pace just enough that you’re not coding without thinking about what you’re doing.

I quite like Bryan Liles’ take on this, which he described in a talk at acts_as_conference 2009. Bryan suggests that we first write a test and then either make it green or change the error message that we’re getting.

We should know, before we run the test, whether it’s going to go green or whether the error message is going to change; the idea is to either get the test to pass in one go or at least get a step closer to a passing test.

Written by Mark Needham

July 28th, 2010 at 7:39 am

Posted in Testing


TDD: Testing collections

with 5 comments

I’ve been watching Kent Beck’s TDD screencasts and in the 3rd episode he reminded me of a mistake I used to make when I was first learning how to test drive code.

The mistake happens when testing collections: I would write a test which passed even if the collection had nothing in it.

The code would look something like this:

public void SomeTestOfACollection()
{
	var someObject = new SomeObject();
	var aCollection = someObject.Collection;
	foreach(var anItem in aCollection)
	{
		Assert.That(anItem.Value, Is.EqualTo(...));
	}
}

If the collection returned by someObject is empty then the test will still pass, because the assertion inside the loop is never executed.

In Kent’s example he is using an iterator rather than a collection so he creates a local ‘count’ variable which he increments inside the for loop and then writes an assertion outside the loop that the ‘count’ variable has a certain value.
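That counting idea translates directly into other languages. A sketch in Ruby — the collection and the expected value are invented for illustration:

```ruby
# Count the elements while asserting on them, then assert on the count
# afterwards so that an empty iterator fails the test.
# The collection and expected value here are made up for illustration.
collection = [10, 10, 10]

count = 0
collection.each do |item|
  raise "unexpected value: #{item}" unless item == 10
  count += 1
end
raise "expected 3 items, iterated over #{count}" unless count == 3
puts "passed"  # prints "passed"
```

If `collection` were empty, the loop body would never run and the final count assertion would fail, which is exactly the safety the lone in-loop assertion lacks.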

In my example we can just check that the length of the collection is non-zero before we iterate through it.

public void SomeTestOfACollection()
{
	var someObject = new SomeObject();
	var aCollection = someObject.Collection;
	Assert.That(aCollection.Count(), Is.GreaterThan(0));
	foreach(var anItem in aCollection)
	{
		Assert.That(anItem.Value, Is.EqualTo(...));
	}
}

It’s a relatively simple problem to fix but it’s certainly one that’s caught me out before!

Written by Mark Needham

July 28th, 2010 at 6:05 am

Posted in Testing


TDD, small steps and no need for comments

with 7 comments

I recently came across a blog post written by Matt Ward describing some habits to make you a better coder and while he presented a lot of good ideas I found myself disagreeing with his 2nd tip:

2. Write Your Logic through Comments

When it comes to coding, there are many tenets and ideas I stand by. One of these is that code is 95% logic. Another is that logic doesn’t change when translated from human language into a programming language.

What this means is that if you can write it in code, you can write it in a spoken language like English or French.

Instead of just jumping into coding the function, I could step back and write the logic in plain English as comments.

I’ve tried this approach before and although it can be useful I found that quite frequently I ended up with a more complicated solution than if I’d driven the code out with a test-first approach.

The advantage of driving the code from examples/tests is that it helps to put you in a mindset where you only need to care about one specific way that a particular object may be used.

As a result we often end up with simpler code than if we’d tried to imagine the whole design of that object up front.

I find it much more difficult to keep a lot of code in my head, but holding just one example there is relatively easy. The less code I have to think about at any one time, the fewer mistakes I make.

As we add additional examples which describe different ways that the object may be used I’ve often found that the code ends up becoming more generalised and we end up with a simpler solution than we might have imagined when we started.

Matt goes on to say:

This way, I can think through the full logic and try to iron out the wrinkles before I actually get into writing the code. I’ve found this to be an incredibly valuable habit that tends to result in fewer bugs.

Using a TDD approach “allows us to describe in code what we want to achieve before we consider how” so the examples that we write provide an executable specification of what we expect the code to do.

I don’t have a problem with making mistakes when coding. I make mistakes all the time but having the safety net of tests helps me fix them pretty quickly.

Matt ends this section with the following:

As a bonus, since I will rarely actually delete the comments, writing the logic through comments also means that my code will already be documented, making it easier for others to follow my logic if they ever have to work on it, or even just for myself, if I have to come back to it several months or years down the road!

There are other ways of documenting code which don’t involve peppering it with comments. We can write our code in a way that reveals intent such that instead of having this:

// FUNCTION: Lock On Time
// This function will accept two time values, indicating the range through
// which it should return an unlocked status.
  // Create a new data object
  // Using the data object, get the current time
  // IF the current time falls within the range passed to the function
    // Return false – meaning that we are currently unlocked
  // ELSE
    // Return true – meaning that we are currently locked.
  // ENDIF

We have something closer to this:

public bool ShouldBeLockedBasedOn(DateTime startOfTimeRange, DateTime endOfTimeRange)
{
	var dataObject = CreateDataObject();
	var currentTime = dataObject.CurrentTime;
	if(currentTime.IsBetween(startOfTimeRange, endOfTimeRange))
	{
		return false;
	}
	return true;
}

…where ‘IsBetween’ would be an extension method on DateTime. We could have that as a private method but I think it reads better this way.

In my experience comments don’t tend to be maintained in the same way that code is, so as soon as the code around them changes they quickly become misleading rather than helpful.

There are certainly times when it makes sense to put comments in code, but using them as a substitute for intention-revealing code isn’t one of those!

Written by Mark Needham

July 23rd, 2010 at 2:52 am

Posted in Testing


TDD: I hate deleting unit tests

with 10 comments

Following on from my post about the value we found in acceptance tests on our project when doing a large scale refactoring I had an interesting discussion with Jak Charlton and Ben Hall about deleting unit tests when they’re no longer needed.

The following is part of our discussion:


@JakCharlton @markhneedham a lot (not all) of the unit tests created can be deleted once the acceptance tests are passing…


@Ben_Hall @markhneedham yep I agree, but that isn’t what TDD really advocates – its a balance, unit tests work well in some places


@Ben_Hall @JakCharlton gotta be courageous to do that. It’s like you’re ripping away the safety net. Even if it might be an illusion of safety


@markhneedham one of the XP principles … Courage 🙂


While Jak and Ben are probably right I do find myself feeling way more anxious about deleting test code than I would deleting production code.

I think that this is mostly because when I delete production code we usually have some tests around that code so there is a degree of safety in doing so.

Deleting tests seems a bit more risky because there’s much more judgement involved in working out whether we’re removing the safety net that we created by writing those tests in the first place.

The diagram on the right hand side shows the way I see the various safety nets that we create to protect us from making breaking changes in production code.

In this case it might seem that a unit test is providing safety but it’s now an illusion of safety and in actual fact it’s barely protecting us at all.

I find it much easier to delete a unit test if it’s an obvious duplicate or if we’ve completely changed the way a piece of code works such that the test will never pass again anyway…

…and I find it more difficult to judge when we end up with tests which overlap while testing similar bits of functionality.

Do others feel like this as well or am I just being overly paranoid?

Either way, does anyone have any approaches that give you more confidence that you’re not deleting something that will come back to haunt you later?

Written by Mark Needham

July 15th, 2010 at 11:15 pm

Posted in Testing


A new found respect for acceptance tests

with 8 comments

On the project that I’ve been working on over the past few months one of the key benefits of the application was its ability to perform various calculations based on user input.

In order to check that these calculators are producing the correct outputs we created a series of acceptance tests that ran directly against one of the objects in the system.

We did this by defining the inputs and expected outputs for each scenario in an Excel spreadsheet, which we converted into a CSV file before reading it into an NUnit test.

It looked roughly like this:


We found that testing the calculations like this gave us a quicker feedback cycle than going through UI tests, both in the time taken to run the tests and in how quickly we could narrow in on problematic areas of the code.
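The CSV-driven style can be sketched like this — in Ruby rather than the project's C#/NUnit, with a made-up `calculate` function standing in for the real calculator:

```ruby
require "csv"

# A made-up calculator standing in for the real calculation code under test.
def calculate(principal, rate)
  principal * rate
end

# Each row defines one scenario: the inputs and the expected output,
# just as each spreadsheet row did on the project.
scenarios = CSV.parse(<<~ROWS, headers: true)
  principal,rate,expected
  100,0.5,50.0
  200,0.25,50.0
ROWS

scenarios.each do |row|
  actual = calculate(row["principal"].to_f, row["rate"].to_f)
  raise "#{row.to_h} gave #{actual}" unless actual == row["expected"].to_f
end
puts "all scenarios passed"
```

Because the scenarios live in data rather than code, domain experts can add new cases to the spreadsheet without touching the test itself.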

As I mentioned on a previous post we’ve been trying to move the creation of the calculators away from the ‘CalculatorProvider’ and ‘CalculatorFactory’ so that they’re all created in one place based on a DSL which describes the data required to initialise a calculator.

In order to introduce this DSL into the code base these acceptance tests acted as our safety net as we pulled out the existing code and replaced it with the new DSL.


We had to completely rewrite the ‘CalculationService’ unit tests so those unit tests didn’t provide us much protection while we made the changes I described above.

The acceptance tests, on the other hand, were invaluable and saved us from incorrectly changing the code even when we were certain we’d taken such small steps along the way that we couldn’t possibly have made a mistake.

This is certainly an approach I’d use again in a similar situation, although it could probably be improved by removing the step where we convert the data from the spreadsheet to a CSV file.

Written by Mark Needham

July 11th, 2010 at 5:08 pm

Posted in Testing


TDD: Driving from the assertion up

with 4 comments

About a year ago I wrote a post about a book club we ran in Sydney covering ‘The readability of tests’ from Steve Freeman and Nat Pryce’s book in which they suggest that their preferred way of writing tests is to drive them from the assertion up:

Write Tests Backwards

Although we stick to a canonical format for test code, we don’t necessarily write tests from top to bottom. What we often do is: write the test name, which helps us decide what we want to achieve; write the call to the target code, which is the entry point for the feature; write the expectations and assertions, so we know what effects the feature should have; and, write the setup and teardown to define the context for the test. Of course, there may be some blurring of these steps to help the compiler, but this sequence reflects how we tend to think through a new unit test. Then we run it and watch it fail.
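That sequence is easy to see on a small example. Here's a sketch in Ruby with a stand-in repository (the real one queried NHibernate); the numbered comments show the order in which the lines would be written, as opposed to the order in which they're read:

```ruby
require "minitest/autorun"
require "date"

# A stand-in repository; the real one queried NHibernate for FooRecords
# and filtered them down to the one we wanted.
class FooRepository
  def initialize(records)
    @records = records
  end

  def find(on_date:, key:)
    record = @records.find { |r| r[:key] == key && r[:on_date] == on_date }
    record && record[:foo]
  end
end

class FooRepositoryTest < Minitest::Test
  def test_find_returns_the_foo_for_a_key_and_date   # 1. the test name
    repository = FooRepository.new(                  # 4. the setup, written last
      [{ key: "rate", on_date: Date.new(2010, 6, 14), foo: "foo-1" }]
    )
    foo = repository.find(on_date: Date.new(2010, 6, 14), key: "rate")  # 2. the call to the target code
    assert_equal "foo-1", foo                        # 3. the expectation
  end
end
```

Starting from the name and the assertion forces the shape of the API into view before any setup is written, which is exactly the benefit Freeman and Pryce claim for the technique.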

At the time I wasn’t necessarily convinced that this was the best way to drive but we came across an interesting example today where that approach might have been beneficial.

The test in question was an integration test and we were following the approach of saving the test object directly through the NHibernate session and then loading it again through a repository.

We started the test from the setup of the data and decided to get the mappings and table set up in order to successfully persist the test object first. We didn’t initially write the assertion or the repository call in the test.

Having got all that working correctly we went back to our test and wrote the rest of it, only to realise as we drove out the repository code that we actually needed to create a new object which would be a composition of several objects, including our original test object.

We wanted to retrieve a ‘Foo’ by providing a key and a date – we would retrieve different values depending on the values we provided for those parameters.

This is roughly what the new object looked like:

public class FooRecord
{
   public Foo Foo { get; set; }
   public FooKey FooKey { get; set; }
   public DateTime OnDate { get; set; }
}

‘FooRecord’ would need to be saved to the session although we would still retrieve ‘Foo’ from the repository having queried the database for the appropriate one.

public class FooRepository
{
   public Foo Find(DateTime onDate, FooKey fooKey)
   {
      // code to query NHibernate which retrieves FooRecords
      // and then filters those to find the one we want
   }
}

We wouldn’t necessarily have discovered this more quickly if we’d driven from the assertion because we’d still have had to start driving the implementation with an incomplete test to avoid any re-work.

I think it would have been more likely that we’d have seen the problem though.

Written by Mark Needham

June 14th, 2010 at 10:46 pm

Posted in Testing


Selenium, Firefox and HTTPS pages

without comments

A fairly common scenario that we come across when building automated test suites using Selenium is the need to get past the security exception that Firefox pops up when you try to access a self-signed HTTPS page.

Luckily there is quite a cool plugin for Firefox called ‘Remember Certificate Exception’ which automatically clicks through the exception and allows the automated tests to keep running rather than getting stuck on the certificate exception page.

One other thing to note is that if the first time you hit an HTTPS page is on an HTTP POST then the automated test will still get stuck, because after the plugin has accepted the certificate exception it will try to refresh the page, which leads to the ‘Do you want to resend the data’ pop-up.

We’ve previously got around this by writing an AutoIt script which waits for that specific pop-up and then presses the spacebar, but another way is to ensure that you hit an HTTPS page with a GET request at the beginning of the build so that the certificate exception is accepted for the rest of the test run.

To use the plugin in the build we need to add it to the Firefox profile that we use to run the build.

In Windows you need to run this command (having first ensured that all instances of Firefox are closed):

firefox.exe --ProfileManager

We then need to create a profile which points to the ‘/path/to/selenium/profile’ directory that we will use when launching Selenium Server. There is a much more detailed description of how to do that on this blog post.

After that we need to launch Firefox with that profile and then add the plugin to the profile.

Having done that we need to tell Selenium Server to use that profile whenever it runs any tests which can be done like so:

java -jar selenium-server.jar -firefoxProfileTemplate /path/to/selenium/profile

Written by Mark Needham

March 25th, 2010 at 8:09 am

Posted in Testing
