Mark Needham

Thoughts on Software Development

Archive for February, 2009

Coding: Implicit vs Explicit modeling

When it comes to object modeling there seem to be two distinct approaches that I have come across.

Implicit modeling

The first approach is where we do what I like to think of as implicit modeling.

With this approach we would probably use fewer objects than in the explicit approach and we would have objects being populated as we moved through the workflow of our application.

I call it implicit modeling because we need to infer where we are in the workflow from the internal state of our objects – we can typically work this out by seeing what is and is not set to null.

The disadvantage of this approach is that sometimes data which should have been set isn’t set because an error occurred somewhere, so we then end up with our object in an invalid state – it has one more null value than it was supposed to.

We need to understand more context to work out whether this was intentionally set to null or whether there was a problem somewhere which caused it to happen. I find myself doing a lot of debugging with this approach.

Explicit modeling

The alternate approach is to add more objects into our code which describe the work flow and current state more explicitly.

With this approach we will probably end up writing more code than with the implicit approach and there will be more ‘mapping’ code to transition data between our objects.

The advantage of doing this is that it becomes much easier to work out what is going on when reading the code without necessarily having to dive deep into the logic behind it.

We spend a lot more time reading code than writing it so I’m happy to write more code if it helps to save time when others have to look at it later on.

A contrived example

To give a somewhat contrived example let’s say we have a Foo in our application which we only have an identifier for when we first get it but will have a Bar and Baz later on in our application.

public class Foo 
{
	public string Id { get; set; }
	public string Bar { get; set; }
	public string Baz { get; set; }
}

The implicit approach to modeling would involve setting the value of, say, Id and leaving Bar and Baz undefined until we have them.
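
In code that might look something like this – a small sketch using the Foo above, with made-up values:

// early in the workflow only the Id is known
var foo = new Foo { Id = "1" };

// ...later on, once we have the data...
foo.Bar = "bar";
foo.Baz = "baz";

// anywhere in between, a null Bar or Baz is our only clue
// as to which stage of the workflow we have reached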

The more explicit approach might involve having another object called ‘FooReference’ which just has the Id and then we can load the actual Foo from that:

public class FooReference 
{
	public string Id { get; set; }
}
 
public class Foo 
{
	private readonly FooReference fooReference;
	private readonly string bar;
	private readonly string baz;
 
	public Foo(FooReference fooReference, string bar, string baz)
	{
		this.fooReference = fooReference;
		this.bar = bar;
		this.baz = baz;
	}
}

This way we can tell from just reading the code when we have a real Foo and when we just have a placeholder for it, which I think makes the code much more expressive.

Combining the two approaches

An approach which is halfway between the two extremes involves being able to state explicitly when we are deliberately not setting values on an object – by introducing the concept of an optional or blank object, for example.

I haven’t tried this approach myself (I’ve only been told about it) but it sounds like a pretty good compromise for avoiding over complicating the code while also maintaining the expressiveness.
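
As I imagine it, the idea might look something like this – a minimal sketch, and the names (‘Optional’, ‘Blank’) are my guesses rather than anything I’ve actually used:

public class Optional<T> where T : class
{
	private readonly T underlyingValue;
 
	private Optional(T value)
	{
		underlyingValue = value;
	}
 
	public static Optional<T> Of(T value)
	{
		return new Optional<T>(value);
	}
 
	// explicitly states 'we deliberately have no value here'
	public static Optional<T> Blank()
	{
		return new Optional<T>(null);
	}
 
	public bool HasValue
	{
		get { return underlyingValue != null; }
	}
 
	public T Value
	{
		get { return underlyingValue; }
	}
}

A Foo whose Bar is Optional<string>.Blank() tells the reader the value is deliberately missing, whereas a plain null leaves them guessing whether something went wrong.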

Written by Mark Needham

February 28th, 2009 at 9:50 am

Posted in Coding

Coding: Using ‘ToString’

An interesting conversation I’ve had recently with some of my colleagues is around the use of the ToString method available on all objects created in Java or C#. It was also pointed out in the comments on my recent post about wrapping DateTimes in our code.

I think the original intention of this method was to create a string representation of an object, but developers have overloaded its use to the point where it is expected to act as a mechanism for creating nice output when debugging the code or viewing unit test failures.

The nice thing about it in C# at least is that if you are using an object in your UI you can just put the object into the view and the ToString method will be implicitly called when the object needs to be rendered.
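
For example, in a view something like this will render the object via an implicit ToString() call – a contrived snippet where ‘Customer’ is a hypothetical property on the view model:

<%-- ToString() is called on Customer behind the scenes --%>
<p>Logged in as: <%= Model.Customer %></p>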

The problem with doing that is its implicitness – other developers might change the ToString method when debugging some code to give more useful output, and the display logic of our application has now changed, potentially without us realising until a higher level functional test stops working.

The method name itself is not really that intention revealing anyway – what string format is it creating? A display string? A debugging string? It’s not that clear.

The approach we are now taking involves having a more explicitly named method on our objects to achieve the same end result.

Method names such as ‘ToDisplayFormat’ or ‘ToServiceFormat’ help us to explain a bit more clearly what we are doing while still getting our object to render a different representation of itself.
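
As a rough sketch of how that might look – ‘Money’ here is a made-up example rather than something from our code base:

public class Money
{
	private readonly decimal amount;
 
	public Money(decimal amount)
	{
		this.amount = amount;
	}
 
	// what the user sees on the page
	public string ToDisplayFormat()
	{
		return amount.ToString("£0.00");
	}
 
	// what a downstream service expects to receive
	public string ToServiceFormat()
	{
		return amount.ToString("0.00");
	}
}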

Written by Mark Needham

February 26th, 2009 at 11:43 pm

Posted in Coding

C#: Wrapping DateTime

I think it was Darren Hobbs who first introduced me to the idea of wrapping dates in our system to describe what that date actually means in our context, and after suffering the pain of passing some unwrapped dates around our code I think I can safely say that wrapping them is the way to go.

The culprit was a date of birth which was sometimes being created from user input and sometimes being retrieved from another system.

The initial (incorrect) assumption was that we would be passing around the date in the same string format and there was no point wrapping the value as we were never doing anything with the data.

It proved to be a bit of a nightmare trying to work out which state the date of birth was in across various parts of the application and we ended up doing conversions to the wrong format and then undoing those and losing the formatting in other places!

Step 1 here was clearly not to pass around the date as a string but instead to convert it to a DateTime as soon as possible.

This is much more expressive but we can take this one step further by wrapping that date time in a DateOfBirth class.

public class DateOfBirth 
{
	private readonly DateTime? dateOfBirth;
 
	public DateOfBirth(DateTime? dateOfBirth) 
	{
		this.dateOfBirth = dateOfBirth;
	}
 
	public string ToDisplayFormat()
	{
		return dateOfBirth == null ? "" : dateOfBirth.Value.ToString("dd MMM yyyy");
	}
}

When we want to display this object on the page we just have to call ToDisplayFormat() and if that date format needs to change then we have only one place to make that change. Creating this class removed at least 3 or 4 ‘DateTime.Parse(…)’ and ‘DateTime.ToString(…)’ calls throughout the code.

Now we could achieve the same functionality using an extension method on DateTime? but it’s not as expressive as this in my opinion. It is also really obvious when looking at the code to know what type we are dealing with and it is really obvious when reading this class which method we will use to get the format to display to the user.
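
For comparison, the extension method version would look something like this – a sketch of the alternative rather than what we actually did:

public static class DateTimeExtensions
{
	public static string ToDisplayFormat(this DateTime? dateOfBirth)
	{
		return dateOfBirth == null ? "" : dateOfBirth.Value.ToString("dd MMM yyyy");
	}
}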

I will certainly be looking to wrap any DateTimes I come across in future.

Written by Mark Needham

February 25th, 2009 at 11:12 pm

Posted in .NET

C#: Wrapping collections vs Extension methods

Another interesting thing I’ve noticed in the C# world is that there seems to be a trend towards using extension methods as much as possible. One area where this is particularly prevalent is when working with collections.

From reading Object Calisthenics and working with Nick I have got used to wrapping collections and defining methods on the wrapped class for interacting with the underlying collection.

For example, given that we have a collection of Foos that we need to use in our system we might wrap that in an object Foos.

public class Foos
{
    private readonly IEnumerable<Foo> foos;
 
    public Foos(IEnumerable<Foo> foos)
    {
        this.foos = foos;
    }
 
    public Foo FindBy(string id)
    {
        return foos.Where(foo => foo.Id == id).First();
    }
 
    // some other methods to apply on the collection
}

Extension methods provide another way of achieving the same thing without needing to wrap the collection.

public static class FooExtensions
{
    public static Foo FindBy(this IEnumerable<Foo> foos, string id)
    {
        return foos.Where(foo => foo.Id == id).First();
    }
}

It seems like there isn’t much difference in wrapping the collection compared to just using an extension method to achieve the same outcome.

The benefit I see in wrapping is that we take away the ability to do anything to the collection that we don’t want to allow – you only have the public API of the wrapper to interact with.

The benefit of the extension method approach is that we don’t need to create the object Foos – we can just call a method on the collection.
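
The difference shows up at the call site – something like this, where ‘fooCollection’ is whichever IEnumerable<Foo> we already have to hand:

// with the wrapper we go through Foos
var foo = new Foos(fooCollection).FindBy("1");

// with the extension method we call straight onto the collection
var sameFoo = fooCollection.FindBy("1");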

I’m not sure which is the better approach – languages which provide the ability to open classes certainly seem to favour that over wrapping, but I still think it’s nice to have the wrapper as it means you don’t have to pass raw collections all around the code.

But maybe that’s just me.

Written by Mark Needham

February 23rd, 2009 at 8:24 pm

Posted in .NET

C#: Implicit Operator

Since it was pointed out in the comments on an earlier post I wrote about using the builder pattern how useful the implicit operator can be in this context, we’ve been using it wherever it makes sense.

The main benefit that using this approach provides is that our test code becomes more expressive since we don’t need to explicitly call a method to complete the building of our object.

public class FooBuilder 
{
	private string bar = "defaultBar";
 
	public FooBuilder Bar(string value)
	{
		bar = value;
		return this;
	}
 
	public static implicit operator Foo(FooBuilder builder) 
	{
		return new Foo { Bar = builder.bar };
	}
}
public class Foo 
{
	public string Bar { get; set; }
}

We can then create a ‘Foo’ in our tests like this:

var foo = new FooBuilder().Bar("bar");

The type of ‘foo’ is actually ‘FooBuilder’ but it will be implicitly converted to Foo when needed.

Alternatively we can force it to Foo earlier by explicitly defining the type:

Foo foo = new FooBuilder().Bar("bar");

While playing around with the specification pattern to try and create a cleaner API for querying collections, I tried to create a specification builder to chain together several specifications.

public interface IFooSpecification
{
    bool SatisfiedBy(Foo foo);
    IFooSpecification And(IFooSpecification fooSpecification);
}
public abstract class BaseFooSpecification : IFooSpecification
{
    public abstract bool SatisfiedBy(Foo foo);
    public IFooSpecification And(IFooSpecification fooSpecification)
    {
        return new AndSpecification(this, fooSpecification);   
    }
}
public class FooBar : BaseFooSpecification
{
    private readonly string bar;
 
    public FooBar(string bar)
    {
        this.bar = bar;
    }
 
    public override bool SatisfiedBy(Foo foo)
    {
        return foo.Bar == bar;
    }
}
public class FooQuery 
{
    private FooBar fooBarSpecification;
    private FooBaz fooBazSpecification;
 
    public FooQuery Bar(string value)
    {
        fooBarSpecification = new FooBar(value);
        return this;
    }
 
    public FooQuery Baz(string value)
    {
        fooBazSpecification = new FooBaz(value);
        return this;
    }
 
 
    public static implicit operator IFooSpecification(FooQuery fooQuery)
    {
        // this won't compile - Resharper reports a user-defined
        // conversion to an interface type here
    }
}

The intention was to be able to filter a collection of foos with code like the following:

foos.FindBy(new FooQuery().Bar("bar").Baz("baz"));

Unfortunately the C# language specification explicitly doesn’t allow this:

A class or struct is permitted to declare a conversion from a source type S to a target type T provided all of the following are true:

  • Neither S nor T is object or an interface-type.

User-defined conversions are not allowed to convert from or to interface-types. In particular, this restriction ensures that no user-defined transformations occur when converting to an interface-type, and that a conversion to an interface-type succeeds only if the object being converted actually implements the specified interface-type.

I tried casting to the BaseFooSpecification abstract class instead and although that does compile it seemed to be leading me down a path where I would need to change the ‘FindBy’ signature to take in a BaseFooSpecification which I wasn’t keen on.

It didn’t prove possible to implicitly convert to a BaseFooSpecification when the signature for the method expected an IFooSpecification even though BaseFooSpecification implements IFooSpecification.

I don’t think there is a way to get around this in C# 3.0 so I just ended up creating an explicit method to convert between the two – not quite as nice to read but the best I could come up with.
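
It ended up looking something like this – ‘Build’ is just the name I’d give such a method here, the point being that the conversion now happens through an ordinary method call:

// on FooQuery, in place of the implicit operator
// (ignoring null handling for brevity)
public IFooSpecification Build()
{
    return fooBarSpecification.And(fooBazSpecification);
}

The call site then becomes:

foos.FindBy(new FooQuery().Bar("bar").Baz("baz").Build());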

Written by Mark Needham

February 22nd, 2009 at 10:20 pm

Posted in .NET

ASP.NET MVC: Driving partials by convention

I like to have conventions in the code I write – I find it makes the code much cleaner while still providing flexibility.

One of the conventions that Jeremy Miller coined for working with ASP.NET MVC applications is that of using one model per controller method aka “The Thunderdome principle”. I think we can take this further by having one model per partial that we use inside our views.

The benefit of having a model specifically for a partial is that we remove confusion about which data is available to populate our controls by restricting the amount of data we actually have. It also makes more sense from a conceptual point of view.

Given this approach it started to become quite annoying having to type the following code all the time.

<% Html.RenderPartial("_Foo", new FooModel()); %>

We realised that a neater approach would be if we could just pass in the model and it would work out which partial needed to be rendered, assuming the convention that each model is only used on one partial.

We are using strongly typed models in our views so the code behind in each of the partials extends ViewPage<T>, making it possible to work out the partial we want to load by looking up the model type.

public static class HtmlHelperExtensions
{
    // generic so that T is the model's compile-time type - with a
    // plain 'object' parameter T would infer as object and the
    // ViewPage<T> lookup below would never match
    public static void RenderPartialFrom<T>(this HtmlHelper htmlHelper, T model) where T : class
    {
        var thePartial = FindMeThePartial(model);
        htmlHelper.RenderPartial(thePartial, model);
    }
 
    private static string FindMeThePartial<T>(T model) where T : class
    {
        var projectAssembly = Assembly.Load("Project");
        var types = projectAssembly.GetTypes();
 
        foreach (var type in types)
        {
            if (type.BaseType == typeof(ViewPage<T>))
            {
                return type.Name;
            }
        }
        return string.Empty;
    }
}

You can then refer to this in views like so:

<% Html.RenderPartialFrom(new FooModel()); %>

Obviously it takes the first result it finds, so the convention we have is that each model should only be used on one partial, which I think is a reasonable idea.

Written by Mark Needham

February 21st, 2009 at 10:39 am

Posted in .NET

Coding Dojo #10: Isola III

In our latest coding dojo we continued working on Isola with a focus on adding functionality following on from last week’s refactoring effort.

The Format

We used the Randori approach with four people participating for the whole session.

What We Learnt

  • Our real aim for this session was to try and get the code into a state where we could reject an invalid move i.e. a move to a square that wasn’t adjacent to the one the player was currently on. As we still had the whole board represented as a string this proved to be quite tricky but we eventually came up with an approach which calculated the difference between the last and current moves and was able to tell us whether or not it was valid. This didn’t cover diagonal moves, however. We found it pretty difficult to drive this functionality due to the way the board was represented.
  • What I’ve found surprising is how long we’ve been able to get away with having the board represented like this. Ideally we would have it represented in a structure that made it easy for us to make changes. This would require quite a big refactoring effort which we shied away from, I think due to the fact that we would be working without a green bar for quite a while during the refactoring. It wasn’t obvious to me how we could refactor the code in small steps.
  • Halvard pointed out that while we don’t want to do Big Design Up Front, what we did in the first week of Isola was No Design Up Front which was equally harmful. Finding a happy medium i.e. Enough Design Up Front is necessary to avoid the problems we have run into here.

For next time

  • We’re planning to try and implement Isola in JavaScript next week. Most of the Dojo regulars are working with JavaScript on their projects so it makes sense to give it a go.

Written by Mark Needham

February 19th, 2009 at 11:09 pm

Posted in Coding Dojo

C#: Extension methods != Open classes

When I first heard about extension methods in C# it sounded like a pretty cool idea but I wasn’t sure how they differed from the idea of open classes that I had seen when doing a bit of Ruby.

After a bit of a struggle recently to try and override some extension methods on HtmlHelper in ASP.NET MVC it’s clear to me that we don’t quite have the same power that open classes would provide.

To start with we can’t access private variables of a class in an extension method – as far as I understand, an extension method is just syntactic sugar over an ordinary static method call which the compiler resolves to make our code look pretty.

We therefore only have access to public fields, methods and properties from extension methods. Anything protected is also out of our reach.

I’m no expert on Ruby’s open classes but from what I have read and remember you can override private methods, use private variables and even remove methods if you want to. We could also add methods to a specific instance of a class, which we can’t do using extension methods.

We therefore tend to be limited to using C#’s extension methods for applying operations directly to an object – applying different types of formatting to strings is one common use.

Before extension methods existed this type of code would typically have gone on a StringUtils class so this is a definite improvement.
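
For example, what might once have lived on a StringUtils class can now hang off the string itself (a trivial sketch):

public static class StringExtensions
{
    // note that we can only use the public API of string here -
    // private and protected members are out of reach
    public static string Capitalise(this string value)
    {
        if (string.IsNullOrEmpty(value)) return value;
        return char.ToUpper(value[0]) + value.Substring(1);
    }
}

Calling "mark".Capitalise() then reads much better than StringUtils.Capitalise("mark").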

We can ‘add’ more functional methods to a class as long as they only use data accessible from the class’ API but we haven’t tended to do this so much on the projects I’ve worked on.

The ability to override methods added to a class away from its original definition is something that we don’t have using extension methods.

As I mentioned, we had some problems with this recently when trying to work out how to override some calls to HtmlHelper methods.

In this case it would have been nice to be able to open up the HtmlHelper class and change these methods. Unfortunately since they were defined as extension methods, extending HtmlHelper didn’t give access to them so we ended up coming up with a solution which feels a bit too hacky for my liking.

As Dare Obasanjo points out towards the end of his post we also don’t have the ability to create extension properties. Seeing as properties are compiled to get/set methods under the hood I wouldn’t have thought it would be that difficult to add this in the next release.

Overall though extension methods are a nice addition but they still don’t quite give us the full power that open classes would provide.

Written by Mark Needham

February 19th, 2009 at 6:22 am

Posted in .NET

Collective Code Ownership: Some Thoughts

Collective code ownership is one of the things we practice on projects using extreme programming and Mike Bria’s post on the subject makes me wonder if code ownership exists on more than one level.

Kent Beck’s definition of collective code ownership is that

Anyone can change anything at anytime

Mike also gives an alternative definition which goes beyond that:

From a more measurable POV, CoCO states that everyone on the team (developer-wise) must be able to describe the design of anything the team is working on in no more than 5 minutes.

For me this second definition goes beyond just describing a belief system about code and seems to be heading more towards the benefits we achieve from close collaboration on a code base using techniques such as pair programming.

I’ve worked on several different teams over the last few years and although we’ve practiced collective code ownership on all of them, I’ve noticed that only on the projects where the whole team practiced pair programming all the time did everyone have a good understanding of all areas of the code.

In those teams where we didn’t pair all the time we tended to end up with people becoming specialised in certain areas of the code, leading to them being asked to do the next story in that area, and so on until other people found it difficult to make changes without consulting them.

It’s almost like the benefits of collective code ownership are lost without the belief system of the team actually changing.

We can still change the code if we want to but we don’t have the confidence to do so since we haven’t done any work in that area.

Even if someone does explain the code to the rest of the team after they have written it, I don’t think it’s the same as living through the process of writing it, seeing the tradeoffs, limitations and the reasons why decisions were made.

From my experience this happens far more frequently when pair programming as we get to work on a lot more areas of the code as well as working with people who can provide insight on other areas of the code that we might not have worked on yet.

The belief of collective code ownership + pair programming is what leads to that second definition, which surely is the ideal on any software development team.

Written by Mark Needham

February 17th, 2009 at 10:32 pm

Posted in Coding

C#: Object Initializer and The Horse Shoe

The object initializer syntax introduced in C# 3.0 makes it easier for us to initialise our objects in one statement but I think we need to remember that they are not named parameters and that there is still a place (a very good one actually) for creating objects from constructors or factory methods.

Unfortunately what I think the cleaner syntax does is encourage us to create objects with half the fields populated and half of them null by default.

When we didn’t have the object initializer syntax we would have to set properties on objects like so:

var foo = new Foo();
 
var bar = new Bar();
bar.Baz = new Baz();
 
foo.Bar = bar;

It takes a lot of extra boilerplate code to achieve this and it looks terrible, hopefully driving us towards using the constructor to initialise our objects.

Object initializers make it much easier to achieve the same thing but at their worst we end up with code similar to the following, which to me looks a bit like a horse shoe – an anti pattern in my opinion.

new Foo 
{ 
	Bar = new Bar 
	{ 
		Baz = new Baz 
		{
			Other = new Other
			{
				Value = "value",
				OtherValue = "otherValue"
			}
		}
	}
}

I don’t think we should write code like this – to me it’s not expressive and it’s difficult to understand why certain fields are set or not set. You end up having to think how this code fits into the bigger picture in order to understand it – extra context which shouldn’t be necessary.

From experience we also end up in the debugger much more frequently than should be the case, trying to work out why certain fields are set. I feel this leads to very implicit code where you have to work out what is going on/where you are in the work flow by checking the state of our objects.

Of course the problem here is the reliance on properties (i.e. getters/setters) to instantiate our objects rather than the object initializer syntax in itself, but the new syntax has made it much easier for us to do it.
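
A sketch of the constructor-driven alternative, reusing the Foo and Bar from earlier:

public class Foo
{
	private readonly Bar bar;
 
	// a Foo can no longer exist in a half-initialised state
	public Foo(Bar bar)
	{
		this.bar = bar;
	}
}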

Certainly there are some times when it’s quite nice to have the object initializer syntax but as with most things we need to be careful not to overdo it.

Written by Mark Needham

February 16th, 2009 at 10:04 pm

Posted in .NET
