Mark Needham

Thoughts on Software Development

Archive for May, 2009

CouchDB/Futon: ‘_all_dbs’ call returns databases with leading ‘c/’

with one comment

As I mentioned in my previous post I’ve been playing around with CouchDB, and one of the problems I’ve been having is that although I can access my database through the REST API perfectly fine, whenever I went to the Futon page (‘http://localhost:5984/_utils/’ in my case) to view my list of databases I got the following JavaScript error:

Database information could not be retrieved: missing

I thought I’d have a quick look with Firebug to see if I could work out what was going on and saw several requests being made to the following URLs, each resulting in a 404:

  • http://localhost:5984/c%2Fsharpcouch/
  • http://localhost:5984/c%2Fmark_erlang/

The value ‘c/’ was being added to the front of each of my database names, which meant that Futon was unable to display the various attributes for each of them.

Tracing this further I realised that the call to ‘http://localhost:5984/_all_dbs’ was actually the one failing, and calling the equivalent function directly from ‘erl’ showed the same ‘c/’ prefix on each database name:

> couch_server:all_databases().
 
{ok,["c/mark_erlang","c/sharpcouch","c/test_suite_db"]}

I don’t know Erlang well enough to change the code to fix this myself, but I came across a bug report on the CouchDB website describing exactly the problem I’d been having.

Apparently there is a problem when you use an upper case ‘C’ for the ‘DbRootDir’ property in ‘couch.ini’. Changing that to a lower case ‘c’ so that my ‘couch.ini’ file now looks like this solved the problem:

DbRootDir=c:/couchdb/db

Written by Mark Needham

May 31st, 2009 at 11:28 pm

Posted in CouchDB


SharpCouch: Use anonymous type to create JSON objects

without comments

I’ve been playing around with CouchDB a bit today and in particular making use of SharpCouch, a library which acts as a wrapper around CouchDB calls. It is included in the CouchBrowse library which is recommended as a good starting point for interacting with CouchDB from C# code.

I decided to work out how the API worked by writing an integration test that saves a document to the database.

The API is reasonably easy to understand and I ended up with the following test:

[Test]
public void ShouldAllowMeToSaveADocument()
{
    var server = "http://localhost:5984";
    var databaseName = "sharpcouch";
    var sharpCouchDb = new SharpCouch.DB();
 
    sharpCouchDb.CreateDocument(server, databaseName, "{ key : \"value\"}");
}

In theory that should save the JSON object { key: "value" } to the database, but it actually throws a 500 internal server error at line 275 of SharpCouch.cs:

HttpWebResponse resp = req.GetResponse() as HttpWebResponse;

Debugging into that line, the Status property was set to ‘ProtocolError’, and a bit of Googling led me to think that I probably had a malformed client request.
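A plausible cause of that malformed request is the document string itself: the JSON specification requires keys to be double-quoted, so ‘{ key : "value" }’ isn’t strictly valid JSON and CouchDB may well be rejecting it. Quoting the key by hand would presumably also have fixed the test:

```csharp
// Same call as above, but with the key double-quoted so the body is
// valid JSON -- effectively what serialising with a JSON library
// produces automatically.
sharpCouchDb.CreateDocument(server, databaseName, "{ \"key\": \"value\" }");
```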

I tried the same test but this time created the document to save by creating an anonymous type and then converted it to a JSON object using the LitJSON library:

[Test]
public void ShouldAllowMeToSaveADocumentWithAnonymousType()
{
    var server = "http://localhost:5984";
    var databaseName = "sharpcouch";
    var sharpCouchDb = new SharpCouch.DB();
 
    var savedDocument = new { key = "value"};
    sharpCouchDb.CreateDocument(server, databaseName, JsonMapper.ToJson(savedDocument));
}

That works much better and actually saves the document to the database, which I was able to verify by adding a new method to SharpCouch.cs that creates a document and returns its ID, allowing me to reload it afterwards.

[Test]
public void ShouldAllowMeToSaveAndRetrieveADocument()
{
    var server = "http://localhost:5984";
    var databaseName = "sharpcouch";
    var sharpCouchDb = new SharpCouch.DB();
 
    var savedDocument = new {key = "value"};
    var documentId = sharpCouchDb.CreateDocumentAndReturnId(server, databaseName, JsonMapper.ToJson(savedDocument));
 
    var retrievedDocument = sharpCouchDb.GetDocument(server, databaseName, documentId);
 
    Assert.AreEqual(savedDocument.key, JsonMapper.ToObject(retrievedDocument)["key"].ToString());
}

public string CreateDocumentAndReturnId(string server, string db, string content)
{
    var response = DoRequest(server + "/" + db, "POST", content, "application/json");
    return JsonMapper.ToObject(response)["id"].ToString();
}

I’m not sure how well anonymous types work for more complicated JSON objects but for the simple cases it seems to do the job.
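For what it’s worth, a slightly more involved anonymous type also seems to serialize the way you’d expect, since LitJSON just reflects over the public properties. The document shape here is my own made-up example:

```csharp
// A nested anonymous type with an array property; LitJSON walks the
// object graph, so this produces nested JSON. Property names are
// fixed at compile time, which is the main limitation.
var document = new
{
    title = "a blog post",
    tags = new[] { "couchdb", ".net" },
    author = new { name = "mark" }
};
sharpCouchDb.CreateDocument(server, databaseName, JsonMapper.ToJson(document));
```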

Written by Mark Needham

May 31st, 2009 at 8:59 pm

Posted in .NET,CouchDB


F#: Testing asynchronous calls to MailBoxProcessor

without comments

Continuing with my attempts to test some of the code in my twitter application, I’ve been trying to work out how to test the Erlang-style message passing which I set up to process tweets after capturing them with the TweetSharp API.

The problem is that the processing is done asynchronously, so we can’t test it in our normal sequential way.

Chatting with Dave about this he suggested that what I really needed was a latch which could be triggered when the asynchronous behaviour had completed, thus informing the test that it could proceed.

In the .NET library we have two classes which do this, AutoResetEvent and ManualResetEvent. The main difference that I can see between them is that AutoResetEvent will automatically reset itself after one call to Set whereas ManualResetEvent lets any number of calls go through and doesn’t reset its state unless you explicitly call the Reset method.

In terms of what I wanted to do it doesn’t actually make a big difference which one is used so I decided to use AutoResetEvent since that seems a bit simpler.
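The difference between the two is easy to see in a couple of lines (this is just an illustration, not part of the tests below):

```fsharp
open System.Threading

// AutoResetEvent: Set releases a single waiter and the event resets itself.
let auto = new AutoResetEvent(false)
auto.Set() |> ignore
auto.WaitOne(0) |> ignore    // returns true and consumes the signal
// a second WaitOne(0) here would return false

// ManualResetEvent: Set leaves the gate open until Reset is called.
let manual = new ManualResetEvent(false)
manual.Set() |> ignore
// every WaitOne(0) returns true until manual.Reset() is called
```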

This is the code I’m trying to test:

    open Dimebrain.TweetSharp.Model
 
    type Message = Phrase of TwitterStatus | Stop
 
    type ILinkProcessor =
        abstract Send : TwitterStatus -> Unit
        abstract Stop : Unit -> Unit
 
    type LinkProcessor(callBack) =      
      let agent = MailboxProcessor.Start(fun inbox ->
        let rec loop () =
          async {
                  let! msg = inbox.Receive()
                  match msg with
                  | Phrase item ->
                    callBack item
                    return! loop()
                  | Stop ->
                    return ()
                }
        loop()
      ) 
      interface ILinkProcessor with 
            member x.Send(status:TwitterStatus) = agent.Post(Phrase(status))       
            member x.Stop() = agent.Post(Stop)
 
    type MainProcessor(linkProcessor:ILinkProcessor) =
      // 'hasLink' (defined elsewhere) checks whether the status text contains a link
      let agent = MailboxProcessor.Start(fun inbox ->
        let rec loop () =
          async {
                  let! msg = inbox.Receive()
                  match msg with
                  | Phrase item when item |> hasLink -> 
                    linkProcessor.Send(item)
                    return! loop()
                  | Phrase item ->
                    return! loop()
                  | Stop ->
                    return ()
                }
        loop()
      )
 
       member x.Send(statuses:seq<TwitterStatus>) =  statuses |> Seq.iter (fun status -> agent.Post(Phrase(status)))       
       member x.Stop() = agent.Post(Stop)

In particular I want to test that when I send a message to the MainProcessor it gets sent to the LinkProcessor if the message contains a link.

The idea is to stub out the LinkProcessor and then trigger the ‘Set’ method of our latch when we are inside the ‘Send’ method of our stubbed LinkProcessor:

[<Fact>]
let should_send_message_to_link_processor_if_it_contains_link_mutable () = 
    let latch = new AutoResetEvent(false)
    let messageWithLink = (new MessageBuilder(message = "a message with http://www.google.com link")).Build()
 
    let mutable interceptedMessage = ""
 
    let linkProcessorStub = { 
        new ILinkProcessor  with
            member x.Send (message) =
                 interceptedMessage <- message.Text
                 latch.Set() |> ignore
            member x.Stop() = ()  }
 
    (new MainProcessor(linkProcessorStub)).Send(seq { yield messageWithLink }) 
 
    let wasTripped = latch.WaitOne(1000)
 
    Assert.True(wasTripped)
    Assert.Equal(messageWithLink.Text, interceptedMessage)

The problem is that this doesn’t actually compile; we get an error on the line where we assign to ‘interceptedMessage’:

The mutable variable 'interceptedMessage' is used in an invalid way. Mutable variables may not be captured by closures. Consider eliminating this use of mutation or using a heap-allocated mutable reference cell via 'ref' and '!'.

I wasn’t immediately sure how to get rid of the mutation so as the error message suggested I decided to try and use a heap allocated reference cell:

 [<Fact>]
 let should_send_message_to_link_processor_if_it_contains_link_ref () = 
     let latch = new AutoResetEvent(false)
     let messageWithLink = (new MessageBuilder(message = "a message with http://www.google.com link")).Build()
 
     let interceptedMessage = ref ""
 
     let linkProcessorStub = { 
         new ILinkProcessor  with
             member x.Send (message) =
                  interceptedMessage := message.Text
                  latch.Set() |> ignore
             member x.Stop() = ()  } 
 
     (new MainProcessor(linkProcessorStub)).Send(seq { yield messageWithLink }) 
 
     let wasTripped = latch.WaitOne(1000)
 
     Assert.True(wasTripped)
     Assert.Equal(messageWithLink.Text, !interceptedMessage)

Now this works, but I don’t really like the look of the line where we set up the reference cell; it feels a little hacky.

My next attempt was to try to get rid of the mutation by testing the message inside the linkProcessorStub:

[<Fact>]
let should_send_message_to_link_processor_if_it_contains_link_closure () = 
    let latch = new AutoResetEvent(false)
    let messageWithLink = (new MessageBuilder(message = "a message with http://www.google.com link")).Build()
 
    let linkProcessorStub =
        { new ILinkProcessor with   
            member x.Send (message) =
                 Assert.Equal(messageWithLink.Text, message.Text)
                 latch.Set() |> ignore
            member x.Stop() = ()  } 
 
    (new MainProcessor(linkProcessorStub)).Send(seq { yield messageWithLink }) 
 
    let wasTripped = latch.WaitOne(1000)
 
    Assert.True(wasTripped)

This seems like it should work the same as the previous example, but the Assert.Equal call inside ‘Send’ is executed on another thread since it is part of the asynchronous operation. This means that when the assertion fails the test runner never hears about it.

I’m still trying to work out if there is a better way of doing this, perhaps by wrapping the AutoResetEvent in a custom type:

type AutoResetEvent with
    member x.WasTripped = x.WaitOne(1000)
 
type MyOneTimeLatch (autoResetEvent: AutoResetEvent) =
    let mutable savedMessage =  None
    member x.MessageReceived (message:TwitterStatus) = 
        savedMessage <- Some(message)
        autoResetEvent.Set() |> ignore
    member x.WasTripped = autoResetEvent.WasTripped
    member x.RetrieveMessage = 
        if savedMessage.IsSome then savedMessage.Value.Text else ""

Our test would then read like this:

[<Fact>]
let should_send_message_to_link_processor_if_it_contains_link_custom_type () = 
    let latch = new MyOneTimeLatch(autoResetEvent = new AutoResetEvent(false))
    let messageWithLink = (new MessageBuilder(message = "a message with http://www.google.com link")).Build()
 
    let linkProcessorStub = { 
        new ILinkProcessor  with
            member x.Send (message) =
                 latch.MessageReceived(message)
            member x.Stop() = ()  } 
 
    (new MainProcessor(linkProcessorStub)).Send(seq { yield messageWithLink }) 
 
    Assert.True(latch.WasTripped)
    Assert.Equal(messageWithLink.Text, latch.RetrieveMessage)

This does the job, and maybe it’s fine to have this as a stub for testing purposes. I’d be interested in hearing if anyone’s found any good ways to do this kind of thing.

Written by Mark Needham

May 30th, 2009 at 8:38 pm

Posted in F#


xUnit.NET: Running tests written in Visual Studio 2010

with 7 comments

I’ve been playing around with F# in Visual Studio 2010 after the Beta 1 release last Wednesday and in particular I’ve been writing some xUnit.NET tests around the twitter application I’ve been working on.

A problem I ran into when attempting to run my tests against ‘xunit.console.exe’ is that xUnit.NET is linked to run against version 2.0 of the CLR and right now you can’t actually change the ‘targetframework’ for a project compiled in Visual Studio 2010.

> xunit.console.exe ..\..\TwitterTests\bin\Debug\TwitterTests.dll
 
System.BadImageFormatException: Could not load file or assembly 'C:\Playbox\FSharpPlayground\Twitter\TwitterTests\bin\Debug\TwitterTests.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.
File name: 'C:\Playbox\FSharpPlayground\Twitter\TwitterTests\bin\Debug\TwitterTests.dll'
   at System.Reflection.AssemblyName.nGetFileInformation(String s)
   at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
   at Xunit.Sdk.Executor..ctor(String assemblyFilename)
   at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
   at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
   at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename, Boolean shadowCopy)
   at Xunit.ConsoleClient.Program.Main(String[] args)

I wasn’t really sure how to fix this but luckily Dave pointed out that the way to do this is to add a ‘requiredRunTime’ tag in the ‘startup’ section of the configuration file (xunit.console.exe.config):

<configuration>
...
	<startup>
      <requiredRuntime version="v4.0.20506" safemode="true"/>
	</startup>
...
</configuration>

And all is good:

> xunit.console.exe ..\..\TwitterTests\bin\Debug\TwitterTests.dll
 
xUnit.net console runner (xunit.dll version 1.4.9.1416)
Test assembly: C:\Playbox\FSharpPlayground\Twitter\TwitterTests\bin\Debug\TwitterTests.dll
........
Total tests: 8, Failures: 0, Skipped: 0, Time: 0.231 seconds

Written by Mark Needham

May 30th, 2009 at 11:51 am

Posted in .NET


Coding Dojo #16: Reading SUnit code

with 2 comments

Continuing on from last week’s look at Smalltalk, in our latest coding dojo we spent some time investigating the SUnit testing framework: how we would use it to write some tests, and how it actually works.

The Format

We had 3 people for the dojo this week and the majority of the time was spent looking at the code on a big screen, trying to understand between us what was going on. We only had the dojo for about 90 minutes this week; normally we go for around 3 hours.

What We Learnt

  • An interesting thing I noticed during this session was the idea of protocols, which are used to organise the set of methods that a class’s instances respond to. These seem intended more for humans than for the computer, which I think is a really cool idea – I strongly believe that programming languages are a mechanism for communicating our intent to ourselves and to the other people on our team.
  • As a result of looking at the description of the ‘initialize-release’ protocol for Object I became intrigued about the way objects are removed by garbage collection. We weren’t able to find out exactly how VisualWorks does garbage collection, but I learnt a little about how garbage collection works in general via three different approaches – mark and sweep, copying collection and reference counting.
  • Another thing I found interesting was the way Smalltalk handles exceptions – we came across this when looking at how SUnit handles passing and failing test cases. Since Smalltalk only has messages and objects there is no explicit concept of an exception, so as I understand it objects respond to an error signal being sent to them.

    TestResult>>runCase: aTestCase

    	| testCasePassed |
    	testCasePassed := [[aTestCase runCase.
    	true]
    		sunitOn: self class failure
    		do:
    			[:signal | 
    			self failures add: aTestCase.
    			signal sunitExitWith: false]]
    		sunitOn: self class error
    		do:
    			[:signal | 
    			self errors add: aTestCase.
    			signal sunitExitWith: false].
    	testCasePassed ifTrue: [self passed add: aTestCase]

    The above is the code block we spent a bit of time looking at which I think in C# world would look a bit like this:

    bool testCasePassed = true;
    try 
    {
    	aTestCase.runCase();
    }
    catch (FailureException) 
    {
    	this.Failures.Add(aTestCase);
    	testCasePassed = false;
    }
    catch (ErrorException)
    {
    	this.Errors.Add(aTestCase);
    	testCasePassed = false;
    }
    if (testCasePassed) 
    {
    	this.Passed.Add(aTestCase);
    }

    Having exceptions as a language construct seems easier for me to understand, but I haven’t done much Smalltalk so maybe that’s just a preference for what’s familiar.

  • It took us a bit of Googling to work out how to start the SUnit TestRunner in VisualWorks but the way to do it eventually turned out to be quite simple. Typing the following code into the workspace window does it:
    TestRunner open

For next time

  • If we continue in the Smalltalk world for another week then we’ll probably play around with SUnit a bit more and perhaps get onto Seaside. If not then we’ll be back to the Java modelling, I imagine.

Written by Mark Needham

May 29th, 2009 at 9:23 am

Posted in Coding Dojo


The 5 dysfunctions of teams in code

with 5 comments

I recently came across an interesting post by my colleague Pat Kua where he talks about how some patterns he’s noticed in code can be linked to Conway’s law which suggests that the structure of systems designed in organisations will mirror the communication structure of that organisation.

I recently read a book called ‘The Five Dysfunctions of a Team‘, which describes some behaviours of teams that aren’t working in an effective way.

Playing the devil’s advocate, I became intrigued as to whether there is some sort of link between these dysfunctions and how they manifest themselves in our code as anti-patterns.

The five dysfunctions are:

  1. Absence of Trust – team members are unwilling to be vulnerable within the group
  2. Fear of Conflict – team cannot engage in unfiltered and passionate debate of ideas
  3. Lack of Commitment – team members rarely have buy in or commit to decisions
  4. Avoidance of Accountability – team members don’t call their peers on actions/behaviours which hurt the team
  5. Inattention to Results – team members put their individual needs before those of the team

Absence of Trust

I think having null checks all over the code is the most obvious indicator that people don’t trust the code that they are working with.

If the person writing the code had faith in the colleagues who wrote the code they now need to interact with, they would be more likely to trust it to do the right thing and wouldn’t feel driven to such a defensive approach.

Fear of Conflict

Fear of conflict seems to manifest itself most obviously in code as duplication – duplication happens for several reasons, but one of them is that people who disagree with something a colleague has written don’t engage in a discussion about it and instead end up writing their own version of something that’s already been done.

This probably manifests itself even more obviously when you end up with multiple different frameworks all in the same code base and all doing the same thing just because people don’t want to engage in a conversation to choose which one the team is going to use.

Lack of Commitment

This one seems to overlap a lot with the previous two, although one specific way it might manifest itself in code is through sloppy mistakes or a lack of care – for example, renaming a class but not ensuring that all the variables which used the old name were changed accordingly.

This leaves the code in a half-baked state which is quite difficult for other people to work with; they have to do some clean-up before they can effectively make changes.

Avoidance of Accountability

The coding anti pattern that stands out for me here is when we allow people to write code without tests and then check those into source control.

From my experience so far this never seems to work out well and I think it shows a lack of respect for the rest of the team since we don’t have an easy way of verifying whether this code actually works and other people can’t make use of it elsewhere in the code base with any degree of confidence.

Inattention to Results

Team members putting their individual needs before the team manifests itself in code when we end up with code that has been written in such a way that only the person who wrote it is really able to understand it.

I think this manifests itself in ‘clever code‘ which is fine in your own projects but in a team context is very detrimental as you become a bit of a bottleneck when people want to make changes in this area of the code and can’t do it because they can’t understand what’s going on.

Something else that falls under this dysfunction is when there is a convention for how to do certain things in the code but we decide to go off and do it our own way. Granted, sometimes that’s fine if you’re working the code towards a better state and the rest of the team knows you’re trying to reach that goal, but otherwise it’s not a very effective approach.

In Summary

I found it quite intriguing that in my mind at least some of the problems we see in code do seem to have some correlation to the problems that we see in teams.

One thing I remember from reading Gerald Weinberg’s ‘The Secrets of Consulting‘ is his claim that ‘no matter what the problem is it’s always a people problem‘ – if indeed this is true then in theory problems that we see in code should be indicative of a people problem which I think probably to an extent is true.

Certainly not all problems in code are linked to the dysfunctions of teams – some anti-patterns creep in because team members lack the experience to know how to do things better, but then again maybe that’s indicative of the senior members of the team not working closely enough with their colleagues!

Maybe we can therefore identify ways to improve our team by starting with a look at the code.

Written by Mark Needham

May 28th, 2009 at 5:44 am

Posted in Coding


Pair Programming: Refactoring

with 5 comments

One area of development where I have sometimes wondered about the value that we can get from pair programming is when we’re spending time doing some major refactoring of our code base.

The reason I felt that pairing on big refactoring tasks might be more difficult than pairing on a story is that a story tends to have a well-defined goal, set by the business, whereas the goal of a refactoring task is often less clear and people have much more widely differing opinions about the approach that should be taken.

Having spent most of the last week pairing on some refactoring work, I have changed my mind: pairing on refactoring tasks is actually extremely valuable – certainly as valuable as pairing on story work, and perhaps more so.

What I noticed while pairing on this task is that although there is more conflict over the approach to take, this actually works out reasonably well in terms of driving an approach that is somewhere between pragmatic and dogmatic.

From my experience, the mentality when you’re the driver in a refactoring session is pretty much to want to fix everything that you come across but having someone else alongside you helps to rein in this desire and focus on making the fixes that add value to the code right now.

I was quite surprised to notice that within just one keyboard switch we had each suggested to the other that a particular refactoring should probably go on our ‘refactoring list’ to look at later, rather than tackling it right away and potentially getting into a yak-shaving situation.

Another thing I noticed was that the navigator was often able to point out things that the other person didn’t see – sometimes making a certain change to the code had a bigger impact than the driver had expected and the navigator was able to spot this quickly and initiate a discussion about what the next step should be before we had gone too far down the wrong path.

Refactoring code effectively, without making mistakes along the way, is one of the more difficult skills to learn as a developer, and I think it is very difficult to learn on your own since the approach of refactoring in very small steps is not necessarily obvious.

While there are certainly books which explain how to do refactorings on our code a lot of the approaches that I like to use have been taught to me by more experienced people that I’ve had the chance to pair with. Pairing creates the opportunity for these skills to be passed on to other members of the team.

Written by Mark Needham

May 26th, 2009 at 11:44 pm

Refactoring: Removing duplication more safely

with 5 comments

One of the most important things that I’ve learnt from the coding dojo sessions that we’ve been running over the last six months is the importance of small step refactorings.

Granted, we have been trying to take some of the practices to the extreme, but the basic idea of keeping the tests green for as much of the time as possible, as well as keeping our code in a state where it still compiles (in a static language), is very useful no matter what code we’re working on.

Recently a colleague and I were doing some work on our code base to try and remove some duplication – we had two classes which were doing pretty much the same thing but they were named just slightly differently.

The implementation was also slightly different – one was a list containing objects with Key and Value properties on and the other was a dictionary/map of key/value pairs.

We spent a bit of time checking that we hadn’t misunderstood what these two different classes were doing and having convinced ourselves that we knew what was going on decided to get rid of one of them – one was used in around 50% more places than the other so we decided to keep that one.

We now needed to replace the usages of the other one.

My pre-coding-dojo refactoring approach would have been to delete the class we wanted to replace and then let the compiler guide me to each of its usages in turn, fixing them one by one.

The problem with this approach is that the code would probably not have compiled for maybe half an hour, leaving us unable to run our tests during that time and eroding our confidence that the changes we were making actually worked.

The approach we therefore took was to add in the class we wanted to keep side by side with the one we wanted to get rid of and slowly move our tests across to setup data for that.

We therefore ended up with code looking a bit like this to start with:

public class IHaveUsages
{
	public IDictionary<OldType, string> OldType { get; set; }
	public IList<NewType> OldType2 { get; set; }
}

When changing tests we commented out the old set-up code to start with, so that we could see exactly what was going on, and deleted it once we had. I’ve written previously about my dislike of commented-out code, but here we were using it as a mechanism to guide our refactoring and we never checked the code in with those comments, so I think it was a reasonable approach.

[Test]
public void SomeTest()
{
	// some test stuff
	new IHaveUsages 
	{
		// OldType = new Dictionary<OldType, string> {{new OldType("key"), "value"}},
		OldType2 = new List<NewType> { new NewType { Key = "key", Value = "value" } } 
	}
}

The intention was to try and reduce the number of places that OldType was being used until the point where there were none left which would allow us to safely remove it.

Once we had made that change to the test setup we needed to make the changes in the class using that data to get our green bar back.

On a couple of occasions we found methods in the production code which took in the OldType as a parameter. In order to refactor these areas we decided to take a copy of the method and renamed it slightly before re-implementing it using the NewType.

private void OldMethod(OldType oldType) 
{
	// Do some stuff
}

private void OldMethod2(NewType newType) 
{
	// replicate what's being done in OldMethod
}

We then looked for the usages of ‘OldMethod’ and replaced them with calls to ‘OldMethod2’, also ensuring that we passed in a NewType instead of an OldType.

I’m intrigued as to whether there is an even better way to perform this refactoring – when I chatted with Nick about this he suggested that it might have been even easier to create a temporary inheritance hierarchy with NewType extending OldType. We could then just change any calls which use OldType to use NewType before eventually getting rid of OldType.

I haven’t tried this approach out yet but if it makes our feedback cycle quicker and chances of failing less then I’m all for it.
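A sketch of what Nick’s suggestion might look like, using the same placeholder names as above (and assuming the two types can be made shape-compatible):

```csharp
// Temporary bridge: NewType extends OldType, so every signature that
// accepts an OldType also accepts a NewType. Call sites can then be
// migrated one at a time while the code keeps compiling.
public class OldType
{
    // existing behaviour stays here for now
}

public class NewType : OldType
{
    public string Key { get; set; }
    public string Value { get; set; }
}

// Existing methods are left untouched:
//   private void OldMethod(OldType oldType) { ... }
// but callers can start passing the new type immediately:
//   OldMethod(new NewType { Key = "key", Value = "value" });
// Once nothing constructs an OldType directly, the base class and the
// inheritance link can both be deleted.
```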

Written by Mark Needham

May 26th, 2009 at 1:20 pm

The value of a fresh mind

without comments

I recently read a post by my colleague Sai Venkatakrishnan where he talks about some of the disadvantages of overworking on a project, and it reminded me of something I’ve noticed a lot recently: after taking a break from a problem – looking at it again the next day, after lunch, or after any kind of break – I end up solving it significantly more quickly than if I’d kept trying to solve it without one.

I’m not sure exactly why that is, but it seems to tie in nicely with Tip 16 from Pragmatic Learning and Thinking:

Step away from the keyboard to solve hard problems.

The idea that Andy Hunt is promoting is that we should take some time away from the keyboard and keep the problem in mind but don’t focus on it – the rich mode in our brain (used for intuition, problem solving and creativity) will take over from the linear mode (used for working through the details and making things happen) and probably lead us to the insight that helps us solve the problem.

I think it comes down to the fact that when you are really close to a problem for a long period of time and you’re feeling tired you stop seeing the bigger picture and therefore perhaps miss what would be a much simpler way of solving the problem. This has happened on several occasions for me and each time I am surprised that I couldn’t see the obvious solution the day before.

Another thing I’ve noticed is that not only do I start to not see solutions to problems but I start to make more mistakes than I normally would and end up spending time the next day fixing these, meaning the extra time spent didn’t really add any value at all – very frustrating!

I think it’s fine to sometimes spend extra time working on problems, especially if they are critical ones that need to be fixed within a certain time frame – usually production issues or when close to release – but apart from those occasions it really does seem to be true that maintaining a sustainable pace works out more productive overall than over working and being in a consistently tired state.

A somewhat related idea that I’m keen to try out is the Pomodoro Technique, which I’ve seen the pairwithus guys using during their pairing sessions. From what I’ve read about it, this technique might take the idea of keeping our mind fresh and focused to the next level.

Whatever technique we choose, the value of stepping away from the keyboard to solve problems more quickly seems counter-intuitive, but it somehow keeps proving true – for me at least.

Written by Mark Needham

May 26th, 2009 at 12:51 am

TDD: Making the test green quickly

with 5 comments

Although I pointed out some things that I disagreed with in Nick’s post about pair programming, one thing I really liked in that post was that he emphasised the importance of getting tests from red to green as quickly as possible.

Some of the best programming sessions I remember were with Stacy Curl, now an ex-ThoughtWorker and who I believe was also a chess player. He would always look to make my tests pass quickly, even if that just meant echoing the output my tests expected.

The idea that we don’t have to implement the real functionality immediately after writing the first test for a method really freed me up when doing TDD – previously I had often spent a lot of time thinking about exactly how I wanted to implement the code to make the test pass, and I was never entirely satisfied with that approach.

Kent Beck refers to the patterns used to go from red to green quickly as the Green Bar Patterns in his book Test-Driven Development: By Example.

Fake It

This was an idea I was first introduced to a couple of years ago while pairing with Dan North – if you don’t know how to implement the real code to make a test pass, you just do the minimum to make it pass by inserting a fake implementation.

For example, if we have a test checking the calculation of the factorial of 5:

[Test]
public void ShouldCalculateFactorial() 
{
	Assert.AreEqual(120, CalculateFactorial(5));
}

The fake implementation of this would be to return the value ‘120’:

public int CalculateFactorial(int value) 
{
	return 120;
}

I’ve found that this approach actually makes pair programming quite fun, as it becomes a bit of a game between the two people, each trying to come up with a test case that forces the other to write the real implementation code.

The thing to be careful of here is that sometimes faking the code for specific inputs can end up being more complicated than the real thing, in which case it may be easier to just implement the solution.

Triangulate

For me, triangulation is the next step we take after inserting a fake implementation, driving towards a more generic solution.

The guidance in the book is that we should look to triangulate when we have two or more examples/tests.

In the factorial example, we might write some example tests that force us to implement the two parts of the recursive function.

Given that we started with CalculateFactorial(5), our next step would probably be to calculate the factorial of 0.

[Test]
public void ShouldCalculateFactorial() 
{
	Assert.AreEqual(120, CalculateFactorial(5));
	Assert.AreEqual(1, CalculateFactorial(0));
}

Leading to an implementation like this:

public int CalculateFactorial(int value) 
{
	if(value == 0) return 1;
	return 120;
}

We now have one example for each branch. The next test would look to drive the implementation of the else side of the expression.

[Test]
public void ShouldCalculateFactorial() 
{
	Assert.AreEqual(1, CalculateFactorial(0));
 
	Assert.AreEqual(24, CalculateFactorial(4));
	Assert.AreEqual(120, CalculateFactorial(5));
}

We could still try to fake the implementation here…

public int CalculateFactorial(int value) 
{
	if(value == 0) return 1;
	else
	{
		if(value == 4) return 24;
		return 120;
	}
}

…although at this stage it’s probably simpler to just put in the obvious implementation rather than keep branching on different input values.

public int CalculateFactorial(int value) 
{
	if(value == 0) return 1;
	return value * CalculateFactorial(value - 1);
}

Obvious implementation

As I alluded to above, obvious implementation is where you know exactly how to implement the production code to make your test pass, so you might as well just go ahead and write that code straight away.

The trap I’ve run into many times is thinking I know the ‘obvious implementation’ and then writing way too much code in an attempt to get there – it wasn’t as obvious as I had thought!

This is where the other two approaches can be helpful.
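For completeness, here is the end state that the three patterns converge on for the factorial example, as a self-contained sketch. (Java rather than the C# used above, but it mirrors the code one-for-one.)

```java
public class Factorial {

    public static int calculateFactorial(int value) {
        // Base case, driven out by triangulating on CalculateFactorial(0)
        if (value == 0) return 1;
        // The obvious implementation that replaced the faked return values
        return value * calculateFactorial(value - 1);
    }

    public static void main(String[] args) {
        // The same examples the tests in the post assert against
        System.out.println(calculateFactorial(0)); // 1
        System.out.println(calculateFactorial(4)); // 24
        System.out.println(calculateFactorial(5)); // 120
    }
}
```

The point of arriving here via Fake It and Triangulate rather than writing this straight away is that each test forced exactly one branch of the function into existence.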

Written by Mark Needham

May 24th, 2009 at 11:43 pm

Posted in Testing

Tagged with