Mark Needham

Thoughts on Software Development

Archive for the ‘.net’ tag

IronRuby: ‘uninitialized constant…NameError’

without comments

I’ve been playing around a bit with IronRuby and cucumber, following Rupak Ganguly’s tutorial, and I tried to change the .NET example provided in the 0.4.2 release of cucumber to call a class wrapping Castle’s WindsorContainer.

The step definitions file now looks like this:

# 'MyAssembly.dll' is in the 'C:/Ruby/lib/ruby/gems/1.8/gems/cucumber-0.6.4/examples/cs' folder
require 'MyAssembly'
...
Before do
  @container = Our::Namespace::OurContainer.new.Container
end

The class is defined roughly like this:

public class OurContainer : IContainerAccessor
{
    private WindsorContainer container = new WindsorContainer();
 
    public OurContainer()
    {
        container.RegisterControllers(Assembly.GetExecutingAssembly()).AddComponent<IFoo, Foo>();
        // and so on
    }
 
    public IWindsorContainer Container
    {
        get { return container; }
    }
}

When I tried to run the feature like so:

icucumber features

I kept getting the following error:

uninitialized constant Our::Namespace::OurContainer (NameError)
C:/Ruby/lib/ruby/gems/1.8/gems/cucumber-0.6.4/examples/cs/features/step_definitons/calculator_steps.rb:13:in `Before'

I’ve come across a few posts where people described the same error and they all suggested that IronRuby was unable to find the class that I was trying to call in the code.

I decided to try calling another class from the assembly to see if that was the problem but that worked fine so there wasn’t a problem with locating the class.

Somewhat by coincidence I was looking at the assembly again in Reflector and when I tried to look at the constructor of the ‘OurContainer’ class I was asked to give the location of the ‘Castle.Windsor’ assembly, which it uses internally.

I didn’t have that assembly or any of its dependencies in the ‘C:/Ruby/lib/ruby/gems/1.8/gems/cucumber-0.6.4/examples/cs’ folder but once I’d included those it all worked fine again!

Written by Mark Needham

April 25th, 2010 at 5:27 pm

Posted in .NET,Ruby


Functional C#: An imperative to declarative example

with 6 comments

I wrote previously about how we’ve been working on some calculations on my current project and one thing we’ve been trying to do is write this code in a fairly declarative way.

Since we’ve been test driving the code it initially started off being quite imperative and looked a bit like this:

public class TheCalculator
{
	...
	public double CalculateFrom(UserData userData)
	{
		return Calculation1(userData) + Calculation2(userData) + Calculation3(userData);
	}
 
	public double Calculation1(UserData userData)
	{
		// do calculation stuff here
	}
 
	public double Calculation2(UserData userData)
	{
		// do calculation stuff here
	}
	...
}

What we have in the body of ‘CalculateFrom’ is a series of calculations which we can put in a collection and then sum together:

public class TheCalculator
{
	...
	public double CalculateFrom(UserData userData)
	{
		var calculations = new Func<UserData, double>[] { Calculation1, Calculation2, Calculation3 };
 
		return calculations.Sum(calculation => calculation(userData));
	}
 
	public double Calculation1(UserData userData)
	{
		// do calculation stuff here
	}
	...
}

We can pull out a ‘Calculation’ delegate to make that a bit more readable:

public class TheCalculator
{
	private delegate double Calculation(UserData userData);
 
	public double CalculateFrom(UserData userData)
	{
		var calculations = new Calculation[] { Calculation1, Calculation2, Calculation3 };
 
		return calculations.Sum(calculation => calculation(userData));
	}
	...	
}

One of the cool things about structuring the code like this is that if we want to add a new Calculation we can just go to the end of the array, type in the name of the method and ReSharper will create it for us with the proper signature.

We eventually came across some calculations which needed to be subtracted from the other ones, which seems like quite an imperative thing to do!

Luckily Christian saw a way to wrap these calculations in a ‘Subtract’ function so that we could stay in declarative land:

public class TheCalculator
{
	private delegate double Calculation(UserData userData);
 
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4) };
 
		return calculations.Sum(calculation => calculation(userData));
	}
	...	
	private Calculation Subtract(Calculation calculation)
	{
		return userData => calculation(userData) * -1;
	}
}

Having a method which explicitly has the ‘Calculation’ signature allows us to drop the explicit type from the array declaration, which is pretty neat.

We can also change the method signature of ‘Subtract’ to take in a variable number of calculations if we need to:

public class TheCalculator
{
	...	
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4, Calculation5) };
 
		return calculations.Sum(calculation => calculation(userData));
	}
 
	private Calculation Subtract(params Calculation[] calculations)
	{
		return userData => calculations.Sum(calculation =>  calculation(userData)) * -1;
	}
}

The other nice thing about coding it this way showed up when we fed real data through the code: we were getting the wrong values back and wanted to understand where the calculation was falling down.

We could easily temporarily add in a ‘Console.WriteLine’ statement like this to help us out:

public class TheCalculator
{
	...	
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4, Calculation5) };
 
		return calculations
			.Select(calculation =>
					{
						Console.WriteLine(calculation.Method.Name + " = " + calculation(userData));
						return calculation;
					})
			.Sum(calculation => calculation(userData));
	}
	...
}

It then printed the results down the page like so:

Calculation1 = 23.34
Calculation2 = 45.45
...

Written by Mark Needham

April 20th, 2010 at 7:08 am

Posted in .NET


C#: Java-ish enums

with 5 comments

We’ve been writing quite a bit of code on my current project to encapsulate user-selected values from drop-down menus, where we then want to look something up in another system based on the value the user selects.

Essentially we have the need for some of the things that a Java Enum would give us but which a C# one doesn’t!

Right now we have several classes similar to the following in our code base to achieve this:

public class CustomerType
{
    public static readonly CustomerType Good = new CustomerType("Good", "GoodKey");
    public static readonly CustomerType Bad = new CustomerType("Bad", "BadKey");
 
    private readonly string displayValue;
    private readonly string key;
    private static readonly CustomerType[] all = new[] { Good, Bad };
 
    private CustomerType(string displayValue, string key)
    {
        this.displayValue = displayValue;
        this.key = key;
    }
 
    public static CustomerType From(string customerType)
    {
        return all.First(c => c.displayValue == customerType);
    }
 
    public static string[] Values
    {
        get { return all.Select(c => c.DisplayValue).ToArray(); }
    }
 
    public string DisplayValue
    {
        get { return displayValue; }
    }
 
    public string Key
    {
        get { return key; }
    }
}

To get values to display in drop downs on the screen we call the following property in our controllers:

CustomerType.Values

And to map from user input to our type we do this:

CustomerType.From("Good")

Right now that will blow up if you try to parse a value that doesn’t have an instance associated with it.
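If unknown values ever become a valid input rather than a bug, one option would be a non-throwing lookup alongside ‘From’ – a minimal sketch (the ‘TryFrom’ name is mine and it isn’t in our code base) which would sit inside ‘CustomerType’:

public static CustomerType TryFrom(string customerType)
{
    // Hypothetical addition: returns null instead of throwing when the display
    // value has no matching instance.
    return all.FirstOrDefault(c => c.displayValue == customerType);
}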

Christian came up with the idea of storing each of the public static fields in an array and then using ‘First’ inside the ‘From’ method to select the one that we want.

We previously had a somewhat over complicated dictionary with the display value as the key and type as the value and looked it up that way.

So far this does all that we need to do and the only annoying thing is that if we add a new instance then we need to manually add it to the ‘all’ array.

An alternative would be to do that using reflection which I think would work but it’s simple enough like this so we haven’t taken that approach yet.
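For reference, the reflection version would probably look something like this – a sketch we haven’t actually used, and it relies on ‘all’ being declared after the instances so that ‘Good’ and ‘Bad’ are initialised first:

// Sketch: gather every public static CustomerType field via reflection so that
// new instances don't need to be added to 'all' by hand (requires System.Reflection).
private static readonly CustomerType[] all =
    typeof(CustomerType)
        .GetFields(BindingFlags.Public | BindingFlags.Static)
        .Where(field => field.FieldType == typeof(CustomerType))
        .Select(field => (CustomerType) field.GetValue(null))
        .ToArray();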

Written by Mark Needham

April 17th, 2010 at 10:33 am

Posted in .NET


Functional C#: Using Join and GroupJoin

with 7 comments

An interesting problem which I’ve come across a few times recently is where we have two collections which we want to use together in some way and get a result which could either be another collection or some other value.

In one example Chris and I were playing around with, we had a collection of years and a collection of cars (each with a year), and the requirement was to show every year on the page alongside the first car we found for that year, or an empty value if there was no car for that year.

We effectively needed to do a left join on the cars collection.

This is an imperative way of solving the problem:

public class Car
{
     public int Year { get; set; }
     public string Description { get; set; }
}
var years = new[] { 2000, 2001, 2002, 2003 };
var cars = new[] { new Car { Year = 2000, Description = "Honda" }, new Car { Year = 2003, Description = "Ford" } };
 
var newCars = new List<Car>();
foreach (var year in years)
{
    var car = cars.Where(x => x.Year == year).FirstOrDefault() ?? new Car  { Year = year, Description = ""};
    newCars.Add(car);
}

We can actually achieve the same result in a more declarative way by making use of ‘GroupJoin‘:

var newCars = years.GroupJoin(cars, 
                              year => year, 
                              car => car.Year,
                              (year, theCars) =>  theCars.FirstOrDefault() ??  new Car { Year = year, Description = ""  });

‘GroupJoin’ is useful if we want to keep all of the items in the first collection and get a collection of the items in the second collection which match for the specified keys.

In this case it allows us to identify where there are no matching cars for a specific year and then just set a blank description for those years.

One nice side effect is that if we later want to include multiple cars for a year then we shouldn’t have to change the code too much to achieve that.
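For example, keeping every matching car rather than just the first one should only mean changing the result selector – a rough sketch using the same ‘years’ and ‘cars’ as above:

// Sketch: 'theCars' is the (possibly empty) group of cars whose Year matches,
// so we keep the whole group instead of taking the first car.
var carsByYear = years.GroupJoin(cars,
                                 year => year,
                                 car => car.Year,
                                 (year, theCars) => new { Year = year, Cars = theCars.ToList() });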

Another example which I came across is where one collection contains filter criteria which we need to apply against the other collection.

We have a collection of years and need to indicate whether there is a matching car for each of those years.

[Test]
public void JoinExample()
{
    var years = new[] { 2000, 2003 };
    var cars = new[] { new Car { Year = 2000, Description = "Honda" },
                       new Car { Year = 2003, Description = "Ford" },
                       new Car { Year = 2003, Description = "Mercedes"}};
 
    Assert.That(AreThereMatchingCars(years, cars), Is.True);
}
public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    foreach (var year in years)
    {
        if(cars.Where(c => c.Year == year).Count() == 0)
        {
            return false;
        }
    }
    return true;
}

We can rewrite this function like so:

public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    var distinctCars = cars.GroupBy(x => x.Year).Select(x => x.First());
    return years.Join(distinctCars, y => y, c => c.Year, (y, c) => c).Count() == years.Count();
}

This actually became more complicated than we expected because we were working out whether there were matching cars for each of the specified years by counting the filter items and comparing that to the number of items when we joined that collection with our collection of cars.

If we have more than one car for the same year that logic falls down so we needed to get just one car per year which is what the first line of the function does.
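An arguably simpler version which avoids the join (and the distinct step) altogether would be to express the check directly with ‘All’ and ‘Any’ – a sketch:

public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    // For every requested year there must be at least one car with that year.
    return years.All(year => cars.Any(car => car.Year == year));
}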

I can’t decide whether or not the code is easier to read and understand by making use of these functions but it’s an approach that I picked up when playing around with F# so it’s interesting that it can still be applied in C# code as well.

Written by Mark Needham

March 4th, 2010 at 6:55 pm

Posted in .NET


C#: Overcomplicating with LINQ

with 11 comments

I recently came across an interesting bit of code which was going through a collection of strings, keeping only the first ‘x’ characters of each one and discarding the rest.

The code looked roughly like this:

var words = new[] {"hello", "to", "the", "world"};
var newWords = new List<string>();
foreach (string word in words)  
{
    if (word.Length > 3)
    {
        newWords.Add(word.Substring(0, 3));
        continue;
    }
    newWords.Add(word);
}

For this initial collection of words we would expect ‘newWords’ to contain [“hel”, “to”, “the”, “wor”].

In a way it’s quite annoying that ‘Substring’ throws an exception if you try to take the first 3 characters of a string which contains fewer than 3 characters. If it didn’t do that then we would have an easy ‘Select’ call on the collection.

Instead we have an annoying if statement which stops us from treating the collection as a whole – we do two different things depending on whether or not the string contains more than 3 characters.
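One way to get back to a single ‘Select’ would be to push that check into an extension method – a sketch, where ‘SafeSubstring’ is a name I’ve made up:

public static class StringExtensions
{
    // Hypothetical helper: take at most 'length' characters without throwing
    // when the string is shorter than that.
    public static string SafeSubstring(this string value, int length)
    {
        return value.Length <= length ? value : value.Substring(0, length);
    }
}

With that in place the loop collapses to ‘words.Select(w => w.SafeSubstring(3))’, although arguably that just hides the if statement rather than removing it.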

In the spirit of the transformational mindset I tried to write some code using functional collection parameters which didn’t make use of an if statement.

Following this idea we pretty much have to split the collection into two resulting in this initial attempt:

var newWords = words
    .Where(w => w.Length > 3)
    .Select(w => w.Substring(0, 3))
    .Union(words.Where(w => w.Length <= 3).Select(w => w));

This resulted in a collection containing [“hel”, “wor”, “to”, “the”] which is now in a different order to the original!

To keep the original order I figured that we needed to keep track of the original index position of the words, resulting in this massively overcomplicated version:

var wordsWithIndex = words.Select((w, index) => new { w, index });
 
var newWords = wordsWithIndex
               .Where(a => a.w.Length >= 3)
               .Select((a, index) => new {w = a.w.Substring(0, 3), a.index})
               .Union(wordsWithIndex.Where(a => a.w.Length < 3).Select(a => new { a.w, a.index }))
               .OrderBy(a => a.index);

We end up with a collection of anonymous types from which we can get the transformed words but it’s a far worse solution than any of the others because it takes way longer to understand what’s going on.

I couldn’t see a good way to make use of functional collection parameters to solve this problem but luckily at this stage Chris Owen came over and pointed out that we could just do this:

var newWords = words.Select(w => w.Length > 3 ? w.Substring(0, 3) : w);

I’d been trying to avoid doing what is effectively an if statement inside a ‘Select’ but I think in this case it makes a lot of sense and results in a simple and easy to read solution.

Written by Mark Needham

February 21st, 2010 at 12:01 pm

Posted in .NET


Functional C#: Writing a ‘partition’ function

with 12 comments

One of the more interesting higher order functions that I’ve come across while playing with F# is the partition function which is similar to the filter function except it returns the values which meet the predicate passed in as well as the ones which don’t.

I came across an interesting problem recently where we needed to do exactly this and had ended up taking a more imperative, foreach-style approach to solve it because this function doesn’t exist in C# as far as I know.

In F# the function makes use of a tuple to do this so if we want to create the function in C# then we need to define a tuple object first.

public class Tuple<TFirst, TSecond>
{
	private readonly TFirst first;
	private readonly TSecond second;
 
	public Tuple(TFirst first, TSecond second)
	{
		this.first = first;
		this.second = second;
	}
 
	public TFirst First
	{
		get { return first; }
	}
 
	public TSecond Second
	{
		get { return second; }
	}
}
public static class IEnumerableExtensions
{
	public static Tuple<IEnumerable<T>, IEnumerable<T>> Partition<T>(this IEnumerable<T> enumerableOf, Func<T, bool> predicate)
	{
		var positives = enumerableOf.Where(predicate);
		var negatives = enumerableOf.Where(e => !predicate(e));
		return new Tuple<IEnumerable<T>, IEnumerable<T>>(positives, negatives);
 
	}
}

I’m not sure of the best way to write this function – at the moment we end up creating two iterators to cover the two different filters that we’re running over the collection which seems a bit strange.

In F# ‘partition’ is on List so the whole collection would be evaluated whereas in this case we’re still only evaluating each item as it’s needed so maybe there isn’t a way to do it without using two iterators.
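If the double enumeration ever became a problem we could trade that laziness for a single pass which fills two lists – a sketch of an alternative body for the same extension method:

public static Tuple<IEnumerable<T>, IEnumerable<T>> Partition<T>(this IEnumerable<T> enumerableOf, Func<T, bool> predicate)
{
	// Single pass over the source; both halves come back as fully evaluated lists.
	var positives = new List<T>();
	var negatives = new List<T>();
	foreach (var item in enumerableOf)
	{
		(predicate(item) ? positives : negatives).Add(item);
	}
	return new Tuple<IEnumerable<T>, IEnumerable<T>>(positives, negatives);
}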

If we wanted to use this function to get the evens and odds from a collection we could write the following code:

var evensAndOdds = Enumerable.Range(1, 10).Partition(x => x % 2 == 0);
 
var evens = evensAndOdds.First;
var odds = evensAndOdds.Second;

The other thing that’s nice about F# is that we can assign the result of the expression to two separate values in one go and I don’t know of a way to do that in C#.

let evens, odds = [1..10] |> List.partition (fun x -> x % 2 = 0)

We don’t need to have the intermediate variable ‘evensAndOdds’ which doesn’t really add much to the code.
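The closest workaround I can think of in C# is an overload with ‘out’ parameters, which at least gets rid of the intermediate variable – a sketch, not something we’ve actually needed:

// Hypothetical overload: hand both halves back through out parameters.
public static void Partition<T>(this IEnumerable<T> enumerableOf, Func<T, bool> predicate,
                                out IEnumerable<T> positives, out IEnumerable<T> negatives)
{
	var result = enumerableOf.Partition(predicate);
	positives = result.First;
	negatives = result.Second;
}
 
// Usage:
IEnumerable<int> evens, odds;
Enumerable.Range(1, 10).Partition(x => x % 2 == 0, out evens, out odds);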

I’d be interested in knowing if there’s a better way to do this than what I’m trying out.

Written by Mark Needham

February 1st, 2010 at 11:34 pm

Posted in .NET


Functional collection parameters: Some thoughts

with 6 comments

I’ve been reading through a bit of Steve Freeman and Nat Pryce’s ‘Growing Object-Oriented Software, Guided by Tests’ book and I found the following observation in chapter 7 quite interesting:

When starting a new area of code, we might temporarily suspend our design judgment and just write code without attempting to impose much structure.

It’s interesting that they don’t try to write perfect code the first time around, which is something I assumed experienced developers did until I came across Uncle Bob’s Clean Code book, where he suggests something similar.

One thing I’ve noticed when working with collections is that if we want to do something more complicated than just doing a simple map or filter then I find myself initially trying to work through the problem in an imperative hacky way.

When pairing it sometimes also seems easier to talk through the code in an imperative way and then, once we’ve got that figured out, work out how to solve the problem in a more declarative way by making use of functional collection parameters.

An example of this which we came across recently was while looking to parse a file which had data like this:

some,random,data,that,i,made,up

The file was being processed later on and the values inserted into the database in field order. The problem was that we had removed two database fields so we needed to get rid of the 2nd and 3rd values from each line.

var stringBuilder = new StringBuilder();
using (var sr = new StreamReader("c:\\test.txt"))
{
    string line;
 
    while ((line = sr.ReadLine()) != null)
    {
        var values = line.Split(',');
 
        var localBuilder = new StringBuilder();
        var count = 0;
        foreach (var value in values)
        {
            if (!(count == 1 || count == 2))
            {
                localBuilder.Append(value);
                localBuilder.Append(",");
            }
            count++;
        }
 
        stringBuilder.AppendLine(localBuilder.ToString().Remove(localBuilder.ToString().Length - 1));
    }
}
 
using(var writer = new StreamWriter("c:\\newfile.txt"))
{
    writer.Write(stringBuilder.ToString());
    writer.Flush();
}

If we wanted to refactor that to use a more declarative style then the first thing we’d look to change is the foreach loop populating the localBuilder.

We have a temporary ‘count’ variable which keeps track of which column we’re up to, which suggests that we should be able to use one of the higher order functions over collections that lets us refer to the index of each item.

In this case we can use the ‘Where’ function to achieve this:

...
while ((line = sr.ReadLine()) != null)
{
    var localBuilder = line.Split(',').
                        Where((_, index) => !(index == 1 || index == 2)).
                        Aggregate(new StringBuilder(), (builder, v) => builder.Append(v).Append(","));
 
    stringBuilder.AppendLine(localBuilder.ToString().Remove(localBuilder.ToString().Length - 1));
}

I’ve been playing around with ‘Aggregate’ a little bit and it seems like it’s quite easy to overcomplicate code using that. It also seems that when using ‘Aggregate’ it makes sense if the method that we call on our seed returns itself rather than void.

I didn’t realise that ‘Append’ did that so my original code was like this:

    var localBuilder = line.Split(',').
                        Where((_, index) => !(index == 1 || index == 2)).
                        Aggregate(new StringBuilder(), (builder, v) => {
                           builder.Append(v);
                           builder.Append(",");
                           return builder;
                        });

I think if we end up having to call functions which return void or some other type then it would probably make sense to add on an extension method which allows us to use the object in a fluent interface style.
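Something like the following would do it – a sketch of a hypothetical ‘Tap’ style extension which runs an action and hands the object straight back:

public static class ObjectExtensions
{
	// Hypothetical helper: perform a side effect and return the object so that
	// void-returning calls can still be chained inside 'Aggregate'.
	public static T Tap<T>(this T value, Action<T> action)
	{
		action(value);
		return value;
	}
}

The seed could then be threaded through as ‘builder.Tap(b => b.Append(v))’ even if ‘Append’ returned void.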

Of course this isn’t the best solution since we would ideally avoid the need to remove the last character to get rid of the trailing comma, which could be done by creating an array of values and then using ‘String.Join’ on that.
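That version might look something like this – a sketch rather than the code we actually had:

// Sketch: build each line with String.Join so there's no trailing comma to trim off.
while ((line = sr.ReadLine()) != null)
{
    var values = line.Split(',')
                     .Where((_, index) => !(index == 1 || index == 2))
                     .ToArray();
 
    stringBuilder.AppendLine(String.Join(",", values));
}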

Having said that, I still think the solution written using functional collection parameters is easier to follow since we’ve managed to get rid of two variable assignments which weren’t interesting as part of what we wanted to do but were details of that specific implementation.

Written by Mark Needham

January 20th, 2010 at 10:45 pm

Posted in Coding,Hibernate


C#: Removing duplication in mapping code with partial classes

with one comment

One of the problems that we’ve come across while writing the mapping code for our anti corruption layer is that there is quite a lot of duplication when mapping similar types, because each service has different auto generated classes representing the same data structure.

We are making SOAP web service calls and generating classes to represent the requests and responses to those end points using SvcUtil. We then translate from those auto generated classes to our domain model using various mapper classes.

One example of duplication which really stood out is the creation of a ‘ShortAddress’ which is a data structure consisting of a postcode, suburb and state.

In order to map addresses we have a lot of code similar to this:

private ShortAddress MapAddress(XsdGeneratedAddress xsdGeneratedAddress)
{
	return new ShortAddress(xsdGeneratedAddress.Postcode, xsdGeneratedAddress.Suburb, xsdGeneratedAddress.State);
}
private ShortAddress MapAddress(AnotherXsdGeneratedAddress xsdGeneratedAddress)
{
	return new ShortAddress(xsdGeneratedAddress.Postcode, xsdGeneratedAddress.Suburb, xsdGeneratedAddress.State);
}

Where the XsdGeneratedAddress might be something like this:

public class XsdGeneratedAddress
{
	string Postcode { get; }
	string Suburb { get; }
	string State { get; }
	// random other code
}

It’s really quite boring code to write and it’s pretty much exactly the same apart from the class name.

I realise here that if we were using a dynamic language we wouldn’t have a problem since we could just write the code as if the object being passed into the method had those properties on it.

Sadly we are in C# which doesn’t yet have that capability!

Luckily for us the SvcUtil generated classes are partial classes so (as Dave pointed out) we can create another partial class which implements an interface that we define. We can then refer to types which implement this interface in our mapping code, helping to reduce the duplication.

In this case we create a ‘ShortAddressDTO’ interface with properties that match those on the auto generated class:

public interface ShortAddressDTO 
{
	string Postcode { get; }
	string Suburb { get; }
	string State { get; }
}

We then need to make the generated classes inherit from this:

public partial class XsdGeneratedAddress : ShortAddressDTO {}

Which means in our mapping code we can now do the following:

private ShortAddress MapAddress(ShortAddressDTO shortAddressDTO)
{
	return shortAddressDTO.ConvertToShortAddress();
}

Which uses this extension method:

public static class ServiceDTOExtensions 
{
	public static ShortAddress ConvertToShortAddress(this ShortAddressDTO shortAddressDTO)
	{
		return new ShortAddress(shortAddressDTO.Postcode, shortAddressDTO.Suburb, shortAddressDTO.State);
	}
}

Which seems much cleaner than what we had to do before.

Written by Mark Needham

July 7th, 2009 at 6:11 pm

Posted in .NET


Brownfield Application Development in .NET: Book Review

without comments

The Book

Brownfield Application Development in .NET by Kyle Baley and Donald Belcham

The Review

I asked Manning to send me this book to review as I was quite intrigued to see how well it would complement Michael Feathers’ Working Effectively with Legacy Code, the other book I’m aware of which covers approaches to dealing with non-greenfield applications.

What did I learn?

  • The authors provide a brief description of the two different approaches to unit testing – state based and behaviour based – I’m currently in favour of the latter approach and Martin Fowler has a well known article which covers pretty much anything you’d want to know about this topic area.
  • I really like the section of the book which talks about ‘Zero Defect Count’, whereby the highest priority should be to fix any defects that are found in work done previously rather than racing ahead onto the next new piece of functionality:

    Developers are geared towards driving to work on, and complete, new features and tasks. The result is that defect resolution subconsciously takes a back seat in a developer’s mind.

    I think this is quite difficult to achieve when the team is under pressure to complete new features, but then again it will take longer to fix defects if we leave them until later since we need to regain the context around them, and that context is freshest the sooner we fix them.

  • Another cool idea is that of time boxing efforts at fixing technical debt in the code base – that way we spend a certain amount of time fixing one area and when the time’s up we stop. I think this will work well as an approach since, when trying to fix code, we can either get into the mindset of not fixing anything at all because it will take too long or end up shaving the yak in an attempt to fix a particularly problematic area of code.
  • I like the definition of abstraction that the authors give:

    From the perspective of object-oriented programming, it is the method in which we simplify a complex “thing”, like an object, a set of objects, or a set of services.

    I often end up over complicating code in an attempt to create ‘abstractions’ but by this definition I’m not really abstracting since I’m not simplifying but complicating! This seems like a useful definition to keep in mind when looking to make changes to code.

  • Maintainability of code is something which is seriously undervalued – I think it’s very important to write your code in such a way that the next person who works with it can actually understand what’s going on. The authors have a fantastic quote from Perl Best Practices:

    Always code as if the guy who ends up maintaining your code is a violent psychopath who knows where you live.

    Writing code that is easy for the next person to understand is much harder than I would expect it to be although on teams which pair programmed frequently I’ve found the code easier to understand. I recently read a blog post by Jaibeer Malik where he claims that it is harder to read code than to write code which I think is certainly true in some cases.

  • There is a discussion of some of the design patterns and whether or not we should explicitly call out their use in our code, the suggestion being that we should only do so if it makes our intent clearer.
  • While describing how to refactor some code to loosen its dependencies, it’s pointed out that when the responsibilities of a class are a bit fuzzy the name of the class will probably be quite fuzzy too – it seems like this would serve as quite a useful indicator for refactoring code towards the single responsibility principle. The authors also suggest trying not to append the suffix ‘Service’ to classes since it tends to be a very overloaded term and a lot of the time doesn’t add much value to our code.
  • It is constantly pointed out how important it is to do refactoring in small steps so that we don’t break the rest of our code and to allow us to get rapid feedback on whether the refactoring is actually working or not. This is something that we’ve practiced in coding dojos and Kent mentions it as being one of his tools when dealing with code – I’ve certainly found that the overall time is much less when doing small step refactorings than trying to do everything in one go.

    I’m quite interested in trying out an idea called ‘Bowling Scorecards‘ which my former colleague Bernardo Heynemann wrote about – the idea is to have a card with a certain number of squares, each square representing a task that needs to be done. These are then crossed off as members of the team do them.

  • An interesting point which is made when talking about how to refactor data access code is to try and make sure that we are getting all the data from a single entry point – this is something which I noticed on a recent project where we were cluttering the controller with two calls to different repositories to retrieve some data when it probably could have been encapsulated into a single call.
  • Although they are talking specifically about poor encapsulation in data access layers, I think the following section about this applies to anywhere in our code base where we expose the inner workings of classes by failing to encapsulate properly:

    Poor encapsulation will lead to the code changes requiring what is known as the Shotgun Effect. Instead of being able to make one change, the code will require you to make changes in a number of scattered places, similar to how the pellets of a shotgun hit a target. The cost of performing this type of change quickly becomes prohibitive and you will see developers pushing to not have to make changes where this will occur.

  • The creation of an anti corruption layer to shield us from 3rd party dependency changes is suggested and I think this is absolutely vital otherwise whenever there is a change in the 3rd party code our code breaks all over the place. The authors also adeptly point out:

    The reality is that when you rely on another company’s web service, you are ultimately at their mercy. It’s the nature of third-party dependencies. You don’t have control over them.

    Even if we do recognise that we are completely reliant on a 3rd party service for our model I think there is still a need for an anti corruption layer even if it is very thin to protect us from changes.

    The authors also describe run time and compile time 3rd party dependencies – I think it’s preferable if we can have compile time dependencies since this gives us much quicker feedback and this is an approach we used on a recent project I worked on by making use of generated classes to interact with a SOAP service rather than using WCF message attributes which only provided us feedback at runtime.

In Summary

This book starts off with the very basics of any software development project covering things such as version control, continuous integration servers, automated testing and so on but it gets into some quite interesting areas later on which I think are applicable to any project and not necessarily just ‘brownfield’ ones.

There is a lot of useful advice about making use of abstractions to protect the code against change both from internal and external dependencies, and I particularly like the fact that there are code examples showing the progression of the code through each of the refactoring ideas suggested by the authors.

Definitely worth reading although if you’ve been working on any type of agile projects then you’re probably better off skim reading the first half of the book but paying more attention to the second half.

Written by Mark Needham

July 6th, 2009 at 12:43 am

Posted in Books


Functional Collection Parameters: Handling the null collection

with 5 comments

One of the interesting cases where I’ve noticed we tend to avoid functional collection parameters in our code base is when there’s the possibility of the collection being null.

The code is on the boundary of our application’s interaction with another service so it is actually a valid scenario that we could receive a null collection.

When using extension methods, although we wouldn’t get a null pointer exception by calling one on a null collection, we would get a ‘source is null’ exception when the expression is evaluated, so we need to protect ourselves against this.

As a result of defending against this scenario we have quite a lot of code that looks like this:

public IEnumerable<Foo> MapFooMessages(IEnumerable<FooMessage> fooMessages)
{
	var result = new List<Foo>();
	if(fooMessages != null)
	{
		foreach(var fooMessage in fooMessages)
		{
			result.Add(new Foo(fooMessage));
		}
	}
	return result;
}

The method that we want to apply here is ‘Select’ and even though we can’t just apply that directly to the collection we can still make use of it.

private IEnumerable<Foo> MapFooMessages(IEnumerable<FooMessage> fooMessages)
{
	if(fooMessages == null) return new List<Foo>();
	return fooMessages.Select(eachFooMessage => new Foo(eachFooMessage));
}

There’s still duplication doing it this way though so I pulled it up into a ‘SafeSelect’ extension method:

public static class ICollectionExtensions
{
       public static IEnumerable<TResult> SafeSelect<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector)
       {
               return source == null ? new List<TResult>() : source.Select(selector) ;
       }
}

We can then make use of this extension method like so:

private IEnumerable<Foo> MapFooMessages(IEnumerable<FooMessage> fooMessages)
{
	return fooMessages.SafeSelect(eachFooMessage => new Foo(eachFooMessage));
}

The extension method is a bit different to the original way that we did this as I’m not explicitly converting the result into a list at the end which means that it will only be evaluated when the data is actually needed.

In this particular case I don’t think that decision will have a big impact but it’s something interesting to keep in mind.
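If the deferred execution ever did cause a surprise, forcing evaluation at the call site is a one-liner – for example:

// Force evaluation up front if lazy evaluation ever becomes a problem.
var foos = MapFooMessages(fooMessages).ToList();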

Written by Mark Needham

June 16th, 2009 at 8:29 pm