Mark Needham

Thoughts on Software Development

Archive for May, 2010

C#: Using a dictionary instead of if statements

with 22 comments

A problem we had to solve on my current project is how to handle form submission where the user can click on a different button depending on whether they want to go to the previous page, save the form or go to the next page.

An imperative approach to this problem might yield code similar to the following:

public class SomeController
{
	public ActionResult TheAction(string whichButton, UserData userData)
	{
		if(whichButton == "Back")
		{
			// do the back action
		}
		else if(whichButton == "Next")
		{
			// do the next action
		}
		else if(whichButton == "Save")
		{
			// do the save action
		}
 
		throw new Exception("");
	}
}

A neat design idea which my colleague Dermot Kilroy introduced on our project is to use a dictionary to map button values to the different actions instead of using if statements.

public class SomeController
{
	private readonly Dictionary<string, Func<UserData,ActionResult>> handleAction;
 
	// the dictionary is initialised in the constructor because a C# field initialiser
	// can't refer to the instance methods it maps to
	public SomeController()
	{
		handleAction = new Dictionary<string, Func<UserData,ActionResult>>
		{ { "Back", BackAction },
		  { "Next", NextAction },
		  { "Save", SaveAction } };
	}
 
	public ActionResult TheAction(string whichButton, UserData userData)
	{
		if(handleAction.ContainsKey(whichButton))
		{
			return handleAction[whichButton](userData);
		}
 
		throw new Exception("");
	}
 
	private ActionResult NextAction(UserData userData)
	{
		// do cool stuff
	}
 
	// BackAction and SaveAction would be defined in the same way
}
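
As a small aside, Dictionary’s TryGetValue would let us avoid looking the key up twice; a sketch of how ‘TheAction’ could look if we went that way (a drop-in replacement for the method above):

	public ActionResult TheAction(string whichButton, UserData userData)
	{
		Func<UserData,ActionResult> action;
		if (handleAction.TryGetValue(whichButton, out action))
		{
			return action(userData);
		}
 
		throw new Exception("");
	}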

It’s quite similar in a way to a problem we had on another project where we needed to deal with user inputs and then create an object appropriately.
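
The same dictionary-of-functions idea translates pretty directly to that situation too. Here’s a rough sketch with made-up message types standing in for whatever objects that project actually created from the user input:

using System;
using System.Collections.Generic;
 
// hypothetical stand-in types for whatever the real project created
public abstract class Message { }
public class EmailMessage : Message { }
public class SmsMessage : Message { }
 
public class MessageFactory
{
	private static readonly Dictionary<string, Func<Message>> createMessage =
		new Dictionary<string, Func<Message>>
		{ { "email", () => new EmailMessage() },
		  { "sms",   () => new SmsMessage() } };
 
	public Message CreateFrom(string userInput)
	{
		// an unknown input will throw a KeyNotFoundException here
		return createMessage[userInput]();
	}
}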

Reading the code is a bit more indirect than with the original approach since you now need to click through to the individual methods for each action.

On the other hand I like the fact that we don’t have if statements all over the place anymore.

* Updated * – updated to take Dhananjay Goyani’s comments into account

Written by Mark Needham

May 30th, 2010 at 11:13 pm

Posted in .NET


Evolving a design: Some thoughts

with 3 comments

Phil wrote an interesting post recently about the Ubuntu decision making process with respect to design and suggested that we should look to follow something similar on agile software development teams.

The Ubuntu design process basically comes down to this:

This is not a democracy. Good feedback, good data, are welcome. But we are not voting on design decisions.

Phil suggests the following:

That doesn’t mean that there is an Architect (capital A, please), designing the system for the less-skilled developers to write. Architecture is an ongoing thing; team members make architecture decisions every day.

What that means is that, every time a design decision has to be made, the decision must come from an individual or very small group of people that have the skills and experience required –excluding coaching and teaching, of course.

I agree with this to an extent.

At a higher level it does make sense that the Tech Lead of the team makes the decisions on the general direction of the code, but when it comes down to the specifics of a certain part of an application I don’t know how well this works.

If the tech lead isn’t actually working in that part of the code then it can be very difficult for them to make a decision about how it should be designed.

Evolving a design

From my experience it’s often the case that the pair working in that area is best placed to evolve the design.

The tech lead on a team can make a suggestion about the way they want that part of the system to be designed, but if it evolves slightly differently then I think that’s still fine and we don’t need to re-discuss it and check that they agree with that evolution.

For example, Dermot and I were working on a bit of code used to validate forms, and the original design we had quite tightly coupled form creation with the adding of validators to each of the fields.

public class FormFactory 
{
	public AForm CreateWithValidation(UserData userData)
	{
		...
	}
}

We then had a requirement to validate the same form in a different context, something which had never happened previously, so we had to change the design.

When we talked about this problem abstractly it seemed that the easiest way to do this was to add another method to the object which would create a form with validation for the new context.

We started trying to do that but as we did so it became really obvious that it would make our life easier if we split the form creation and adding of validators – we would be driving towards the builder pattern.
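
The shape we were driving towards was roughly a builder where creating the form and attaching validators are separate steps that each context can compose as it needs. The following is only a sketch with invented names and internals rather than our actual code:

using System;
using System.Collections.Generic;
 
// a made-up sketch of the builder direction, not the project's code
public class Field
{
	public string Name;
	public List<Func<string, bool>> Validators = new List<Func<string, bool>>();
}
 
public class AForm
{
	public List<Field> Fields = new List<Field>();
}
 
public class FormBuilder
{
	private readonly AForm form = new AForm();
 
	public FormBuilder WithField(string name)
	{
		form.Fields.Add(new Field { Name = name });
		return this;
	}
 
	public FormBuilder WithValidator(string fieldName, Func<string, bool> isValid)
	{
		form.Fields.Find(f => f.Name == fieldName).Validators.Add(isValid);
		return this;
	}
 
	public AForm Build()
	{
		return form;
	}
}

Each context then decides which validators it wants, e.g. new FormBuilder().WithField("email").WithValidator("email", v => v.Contains("@")).Build().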

That was a decision that was made while in the code and it wasn’t clear that the code would end up going that way when we discussed it beforehand.

I think it’s fine for a pair to make this type of decision.

Tech lead facilitating design

I spoke to Phil about this and he suggested that what will often happen is that the tech lead will act as a facilitator and help the pair address all the issues of the part of the system that they’re designing.

I like this way of describing the approach better as it ensures that the design we come up with will be compatible with any upcoming requirements – which the tech lead would be more aware of – and the other developers still have input on the way that the code is designed.

From my experience if the tech lead makes these decisions without being seen to take into consideration the opinions of the others on the team then it will quickly create a culture where developers on the team stop thinking about design problems and just delegate their thinking to the tech lead.

Equally, as they won’t have any buy-in, they’ll probably be secretly hoping the design fails rather than looking for ways that it can evolve successfully.

The facilitation approach helps address this.

Written by Mark Needham

May 13th, 2010 at 7:00 am

Agile: Chasing a points total

with 6 comments

I’ve previously written about the danger of using velocity as a goal but on almost every project I’ve worked on at some stage we do actually end up chasing a points total.

Something I find quite interesting towards the end of an iteration is that if there is a choice of two stories to pick up then the project manager will nearly always press for one which can be completed within the remaining time in order to get the points total for that iteration higher.

This might even mean that we pick something which is lower priority because we might not finish a higher priority but larger story in time.

If we did pick the larger story, its points would not be recognised until the next iteration, which would leave the points total for the current iteration lower than expected.

Most of the time this doesn’t make a great amount of difference and if it helps take some pressure off and create an impression of ‘success’ then it seems a reasonable trade off to make.

It does still seem less than ideal to have that type of decision dictated by such a limited metric though.

Dermot pointed me to a blog post by Tim Ross titled ‘Are burndowns evil?’ where he discusses this in more detail. Although I agree with the points Tim makes, it often seems that a product owner ends up with the impression that the project is ‘failing’ or that we are ‘behind’ if the velocity ‘target’ is not being met each iteration.

It seems to take product owners who are new to a more agile approach a bit of time to get used to the idea that achieving the points total isn’t the most important thing to focus on and that it’s more useful to focus on other things which actually give them value.

I’ve worked on projects where we’ve got to the stage where the points total is a side show, but it takes quite a bit of consistent delivery before we get to that level of trust.

Written by Mark Needham

May 11th, 2010 at 10:28 pm

Posted in Agile


F#: Tacit programming

with one comment

I recently came across the idea of tacit programming which is described as such:

Tacit programming is a programming paradigm in which a function definition does not include information regarding its arguments, using combinators and function composition (but not λ-abstraction) instead of variables.

The simplicity behind this idea allows its use on several programming languages, such as J programming language and APL and especially in stack or concatenative languages, such as PostScript, Forth, Joy or Factor. Outside of the APL and J communities, tacit programming is referred to as point-free style.

I realised that this approach quite closely describes what I’ve been trying to drive towards in my most recent playing around with F#. It’s actually quite fun trying to drive any intermediate state or storing of data in variables out of a program and rely completely on function composition and higher order functions.

It seems like we need to define the signatures of some of the functions more explicitly but once we have a group of these functions we can combine them quite effectively elsewhere in our program.

Moving towards function composition

I’ve been trying to do this with the F# test builder that I’ve been working on and in a few cases it’s helped to reduce the amount of code required.

The build function originally read roughly like this:

and build (t:Type) = 
     let shortestConstructor = getShortestConstructor t
 
     shortestConstructor |>
     invokeWith (fun c -> getParameters c |> Array.map (fun p -> valueFor { Type = p.ParameterType; Name = p.Name })) |>
     andApply (fun t -> getWriteableProperties t |> Array.iter(setValueOn t))

In order to drive that code toward a ‘point-free style’ we need to make use of the function composition operator which allows us to get the code to the stage where we don’t need to specify the signature of ‘build’, it can be inferred:

and build = getShortestConstructor >>
            invokeWith (getParameters >> Array.map (fun p -> valueFor { Type = p.ParameterType; Name = p.Name })) >>
            andApply (fun t -> getWriteableProperties t |> Array.iter(setValueOn t))

I think that code is pretty much identical to the first version but I’m getting the following warning message pointing to the ‘valueFor’ call on line 2:

Warning	1	This and other recursive references to the object(s) being defined will be checked for initialization-soundness at runtime through the use of a delayed reference. This is because you are defining one or more recursive objects, rather than recursive functions. This warning may be suppressed by using #nowarn "40" or --nowarn 40.	C:\Playbox\yab\yab\Builder.fs	40	66	yab

I can’t figure out how I can change the code to get rid of that warning. I also haven’t worked out whether it’s possible to fix the ‘andApply’ line so that we can use functional composition throughout.

It would be cool if it could be written in such a way that ‘t’ wouldn’t have to be explicitly specified. I can’t quite figure out how to do it because I need to call ‘getWriteableProperties’ and then iterate through that sequence and set a value on ‘t’ using each one.

Is there a way to write that bit of code so that ‘t’ could be inferred?

Some functions need to define signatures?

In order to write a ‘build’ function which heavily uses functional composition I’ve pulled out several helper functions which all currently explicitly define their signatures:

    let getWriteableProperties (t:obj) = t.GetType().GetProperties() |> Array.filter (fun p -> p.CanWrite)
 
    let invokeWith f (aConstructor:ConstructorInfo) = f(aConstructor) |> aConstructor.Invoke

If we want to call C# libraries like this then I don’t think we have a choice but to explicitly define function signatures. It is possible to push the place at which we need to do this by writing F# functions to wrap those C# method calls but at some stage we’ll explicitly define a function signature:

let getWriteableProperties = getTypeProperties >> Array.filter writeableProperty

where the extra helper functions are defined like so:

let getTypeProperties t = t.GetType().GetProperties()
let writeableProperty (p:PropertyInfo) = p.CanWrite

I can’t see a way around this so again I’d be interested if there is one. I don’t think it’s a problem anyway, just intrigued as to how far this programming approach can be taken.

Written by Mark Needham

May 10th, 2010 at 11:24 pm

Posted in F#


Learnings from my first project of 2010

with one comment

Pat Kua recently wrote a retrospective of his time working at ThoughtWorks and since I recently finished the first project I’ve worked on in 2010 I thought it would be interesting to have a look at what I’d learned and observed while working on it.

“Perfect” code

I’d previously believed that driving for the cleanest code with the least duplication and the best structured object oriented design was the way to go, but on this project we favoured a simpler design which felt quite procedural in comparison to some of the code bases I’ve worked on.

I initially felt that this would be a problem but the code was still pretty easy to change and everyone on the team was able to understand it without any problem so that seemed to outweigh any other concerns.

There’s always a story to be told

The client initially had a very stringent change process where if a developer wanted to implement a new feature they would have to get a change request signed off before they were able to check out a branch of the code base to work on it.

This had been changed before I started working there such that Subversion was being used by the development teams although the code still had to go through quite a stringent change management process before it could be released into production.

We didn’t really understand why this was the case because it just seemed to make it more difficult to get anything live.

As it turned out, that change process was introduced to get around a problem they’d experienced where code was being manually changed in the live environment. This meant that when there was a bug it was difficult to reproduce because that version of the code wasn’t in source control.

Of course there are other ways to achieve this goal, but it was interesting that the approach being taken wasn’t completely illogical; it had been arrived at through a series of steps which did make sense. Dan North talks of the need to have a project shaman who can tell the story of why certain decisions were made and the history behind them and it’s always useful to have someone who can do that.

Theory of constraints

The other interesting thing that I learnt, which is directly linked to the above, is that there is always a constraint within a system and no matter how much you optimise other parts of the system, it won’t make any difference to the overall throughput.

In this case no matter how quickly the team was able to code new requirements it didn’t make that much difference because there was still quite a big wait at the end pushing the release through the release management system so it could go into production.

We were starting to work on trying to improve this part of the process when I rolled off the project, but a lesson for me here is to try to identify where the constraint is in any given system and work out how we can address it.

Written by Mark Needham

May 9th, 2010 at 10:17 pm

Coding: Paying attention

with one comment

Jeremy Miller tweeted earlier in the week about the dangers of using an auto mocking container and how it can encourage sloppy design:

That whole “Auto Mocking Containers encourage sloppy design” meme that I blew off last week? Seeing an example in our code.

I haven’t used an auto mocking container but it seems to me that although that type of tool might be useful for reducing the amount of code we have to write in our tests, it also hides the actual problem that we have – an object with too many dependencies.

By hiding the creation of stubs/mocks for those dependencies in our test we are addressing the effect and not the cause, i.e. we are bear shaving.

Jeremy followed this up with a couple of quite insightful comments:

You know though, I still think it comes down to you being responsible for paying attention.

It’s no different than still having to worry about DB optimization even though the ORM is shielding you from the details

Another somewhat related situation where I’ve noticed a similar problem is when we have several tests which require a certain method to be stubbed out and in the interests of reducing duplication we pull that up into a setup method.

While this achieves that goal it also means that there is information that is hidden away from us when we read each of our tests.

One approach that I’ve seen encouraged is that we should never use a setup method so that we have to create everything we need for our test in the test body.

I quite like this approach because it makes the problems we’re creating with respect to writing difficult to test code more visible, but quite often what ends up happening is that we copy and paste tests because there’s more code to write for each one.
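
To make that concrete, here’s a contrived NUnit example (using Moq for the stubbing here, though any isolation framework would do; the service, prices and discount are all invented). The first fixture hides the stubbing in a setup method; the second repeats it in the test body so that everything the test relies on is visible:

using Moq;
using NUnit.Framework;
 
// invented types for the example
public interface IPriceService
{
	decimal PriceFor(string sku);
}
 
public class Basket
{
	private readonly IPriceService prices;
 
	public Basket(IPriceService prices)
	{
		this.prices = prices;
	}
 
	public decimal TotalFor(string sku, decimal discount)
	{
		return prices.PriceFor(sku) * (1 - discount);
	}
}
 
[TestFixture]
public class WithASetupMethod
{
	private Mock<IPriceService> priceService;
 
	[SetUp]
	public void SetUp()
	{
		// the stubbing that every test relies on is hidden up here
		priceService = new Mock<IPriceService>();
		priceService.Setup(s => s.PriceFor("book")).Returns(10m);
	}
 
	[Test]
	public void ShouldApplyDiscount()
	{
		// reading just this test, it isn't obvious where the price of 10 comes from
		var basket = new Basket(priceService.Object);
		Assert.That(basket.TotalFor("book", 0.1m), Is.EqualTo(9m));
	}
}
 
[TestFixture]
public class WithoutASetupMethod
{
	[Test]
	public void ShouldApplyDiscount()
	{
		// everything the test depends on is created in the test body
		var priceService = new Mock<IPriceService>();
		priceService.Setup(s => s.PriceFor("book")).Returns(10m);
 
		var basket = new Basket(priceService.Object);
		Assert.That(basket.TotalFor("book", 0.1m), Is.EqualTo(9m));
	}
}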

I’m coming to the conclusion that there’s no one approach that will stop us making design mistakes and that as Jeremy says, we need to make sure that we pay attention to what we’re doing.

Written by Mark Needham

May 9th, 2010 at 1:04 pm

Posted in Coding


F#: My current coding approach

with 2 comments

I spent a bit of time over the weekend coding a simple generic builder for test objects in F# and I noticed that although there were similarities with the way I drive code in C# or Java, my approach didn’t seem to be exactly the same.

I’ve previously written about the importance of getting quick feedback when programming and how I believe that this can often be achieved faster by using the REPL rather than unit testing.

Still driving from the outside in

This time I decided to apply some of my recent learnings on the value of driving from the outside in so I started by writing an acceptance test which described the fluent interface of the code at a high level.

I want to be able to use the builder from C# code so I was driving the code from C# tests:

[Test]
public void ShouldCreateAFoo()
{
	var foo = Build.A<Foo>().Build();
 
	Assert.That(foo.Bar, Is.EqualTo("Bar"));
}

I wrote enough code to make that test compile, at which stage the test was failing with a null reference exception because I hadn’t instantiated ‘Foo’ yet.

At this stage if I was coding in C# I would probably work out what object I needed to create to do that and then I would write a test directly against that object and then write the code to make that pass.

In this case I didn’t do that but instead I had a rough idea of the functions that I needed to glue together to get that test to pass. I just wrote those functions and combined them together without writing any tests against those individual functions.

I extended my initial test to cover all the different types being used by the objects in our code base and then added the code to make each new case pass.

Approach to learning an API

Since it’s reflection code and I don’t know those APIs that well I spent a bit of time tinkering in the REPL until I knew which methods I needed to call.

If I was writing C# then I’d have probably spent less time trying out different methods and more time reading through the list of available methods before choosing the one that I wanted.

I wrote quite messy code initially until I had the first test passing and then I went back and tidied it up so that it was a bit more readable.

At this stage common functions started to reveal themselves and it made sense to have some unit tests directly against those as documentation if nothing else. The tests were all immediately green so those bits of code weren’t test driven at that level.

Do we need really granular tests in F#?

My overall feeling at the moment is that while it’s useful to have tests around code written in F#, we don’t need to go as granular with our tests as we might do in C#.

Due to the conciseness of the language it’s often so obvious that the code written is correct that I don’t think tests at the individual function level would add that much value. Equally I’m not necessarily convinced that the design would be any different if the individual functions were test driven.

I found the high level test gave me enough protection for any changes that I wanted to make and it did protect me a couple of times when I made breaking changes.

Having said all that, I’m still only writing toy code so it would be interesting to see if my approach would be any different if I was working on an F# application with a team of other developers.

Written by Mark Needham

May 6th, 2010 at 11:36 pm

Posted in F#


Consistency in the code base and incremental refactoring

with 6 comments

I wrote a post a while ago about keeping consistency in the code base where I covered some of the reasons that you might want to rewrite parts of a code base and the potential impact of those changes. An interesting side to this discussion which I didn’t cover that much, but which seems to play a big role, is incremental refactoring.

In our code base we recently realised that the naming of the fields in some parts of a form doesn’t really make sense. I wanted to start naming new fields with the new naming style and then go back and change the existing ones incrementally when it was a good time to do so.

Richard and Christian suggested that this wouldn’t be such a good approach because it would make the naming issue even more confusing until all the fields had been renamed.

In order to avoid this problem we had to either go and change every single field to follow the new naming approach immediately or settle on the old names even though they might not be as descriptive.

Since doing the former would involve changing the names of around 15-20 fields across several different objects, in Hibernate mapping code, probably in the database, on HTML forms and in WatiN tests, we decided not to do that – the project only runs for a short amount of time so the investment probably wouldn’t be worthwhile.

Although in this case it makes sense not to make the improvement it doesn’t strike me as being entirely satisfactory that we would need to make this type of change in a big bang fashion.

From my experience there are often insights into the code or improvements in the ubiquitous language as time goes on and while consistency is of course an important thing in any code base it’s not the only thing.

When do we decide that actually gradually moving to a better approach is worth the temporary pain that having this inconsistency will cause?

Written by Mark Needham

May 5th, 2010 at 10:34 pm

Posted in Coding, Incremental Refactoring


F#: The Kestrel Revisited

with 3 comments

A couple of days ago I wrote about a ‘returning’ function that I’d written to simplify a bit of F# code that I’ve been working on.

It’s defined like so:

let returning t f = f(t); t

And can then be used like this:

let build (t:Type) =
    returning (Activator.CreateInstance(t)) (fun t -> 
        t.GetType().GetProperties() |> Array.iter (fun p -> p.SetValue(t, createValueFor p, null)))

While I quite like this function it didn’t quite feel like idiomatic F# to me.

With idiomatic F# we would tend to design our functions in such a way that we can pipe data through a series of them by making use of the forward pipe or function composition operators.

With that idea in mind I decided to try switching the arguments to ‘returning’ around and renaming it so that the code would read more naturally:

let andApply f t = f(t); t
let build (t:Type) =
    Activator.CreateInstance(t) |>
    andApply (fun t ->  t.GetType().GetProperties() |> Array.iter (fun p -> p.SetValue(t, createValueFor p, null)))

We partially apply the ‘andApply’ function to create a new function which takes in the object provided on the left hand side of the ‘|>’ operator.

I think this now reads better than the original version.

What I find interesting is that when writing functions which I intend to be used like this the way I name them is different and the name only makes sense if it’s used in that context.

Using the ‘andApply’ function on its own without making use of partial function application wouldn’t read as cleanly as it does at the moment.

Written by Mark Needham

May 4th, 2010 at 6:36 pm

Posted in F#


Coding: Make the mutation obvious

with 7 comments

Although I’m generally quite opposed to coding approaches whereby we mutate objects, sometimes the way a framework is designed seems to make this a preferable option.

We came across a situation like this last week when we wanted to hydrate an object with data coming back from the browser.

The signature of the action in question looked like this:

public class SomeController
{
	public ActionResult SomeAction(string id, UserData userData)
	{
		...
	}
}

We were able to automatically bind most of the values onto ‘UserData’ except for the ‘id’ which was coming in from the URL.

We could have included the ‘id’ in a hidden field in the page to get around this problem but that seemed like a more complicated/hacky solution.

Instead what we needed to do was the following:

public class SomeController
{
	public ActionResult SomeAction(string id, UserData userData)
	{
		userData.Id = id.ParseIntoId();
		...
	}
}

‘ParseIntoId’ is an extension method which converts a string representation of an ‘id’ into an object representation.
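
For illustration, an extension method along those lines might look something like this (the ‘Id’ type and the parsing rule are invented for the example; the real code creates whatever identifier type the domain uses):

// a hypothetical sketch of an extension method like 'ParseIntoId'
public class Id
{
	public int Value { get; private set; }
 
	public Id(int value)
	{
		Value = value;
	}
}
 
public static class IdExtensions
{
	public static Id ParseIntoId(this string rawId)
	{
		return new Id(int.Parse(rawId));
	}
}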

While the action method is fine as it is, I find that as code gets added it can sometimes be less than obvious that we’ve actually mutated ‘UserData’ so I now prefer to explicitly call that out like so:

public class SomeController
{
	public ActionResult SomeAction(string id, UserData userData)
	{
		var userDataWithId = UpdateUserDataWithId(userData, id);
		...
	}
 
	private UserData UpdateUserDataWithId(UserData userData, string id)
	{
		userData.Id = id.ParseIntoId();
		return userData;
	}
}

We would then want to use ‘userDataWithId’ after this point in the function.

Of course the original ‘UserData’ has still been mutated and it doesn’t seem worth the extra code that it would take to avoid this so we’ll just trust that anyone reading the code would realise that they should use ‘userDataWithId’ instead!

Written by Mark Needham

May 4th, 2010 at 6:32 pm

Posted in Coding
