Mark Needham

Thoughts on Software Development

Archive for the ‘c#’ tag

C#: StackTrace

with 5 comments

Dermot and I were doing a bit of work on a mini testing DSL that we’ve been writing to try and make some of our interaction tests a bit more explicit. One of the things we wanted to do was find out which method was being called on one of our collaborators.

We have a stub collaborator which gets injected into our system under test. It looks roughly like this:

public class StubCollaborator : IGotForcedToCollaborate
{
	public double Method1()
	{
		return CannedValue();
	}
 
	public double Method2()
	{
		return CannedValue();
	}
 
	private double CannedValue()
	{
		return 10;
	}
 
}

We wanted to try and capture which of the methods on that object had been called by our system under test and then assert on that value from our test.

While trying to work out how to do this we came across the ‘StackTrace’ object. We use it like so to work out which public method on that object has been called:

public class StubCollaborator : IGotForcedToCollaborate
{
	private MethodBase methodCalled;
 
	..
 
	private double CannedValue()
	{
		// walk up the stack and grab the first public method declared on this class
		methodCalled = new StackTrace().GetFrames()
				.Select(f => f.GetMethod())
				.Where(m => m.DeclaringType.Name == GetType().Name)
				.Where(m => m.IsPublic)
				.First();
		return 10;
	}
 
	public string GetMethodCalled()
	{
		return methodCalled.Name;
	}
 
}

We needed to restrict the search to public methods because ‘CannedValue’ itself was showing up in the stack trace, and since it’s private this was an easy way to exclude it.
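To give an idea of how this gets used, a test can then assert on the captured name. The system under test and its method in this sketch are hypothetical names rather than our real code:

[Test]
public void ShouldCallMethod1OnTheCollaborator()
{
	var collaborator = new StubCollaborator();
	var systemUnderTest = new SystemUnderTest(collaborator); // hypothetical class taking an IGotForcedToCollaborate
 
	systemUnderTest.DoSomething(); // hypothetical call which should end up using Method1
 
	Assert.That(collaborator.GetMethodCalled(), Is.EqualTo("Method1"));
}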

I’m sure there are other ways to get this type of information but we were able to solve our problem really quickly with this solution.

Written by Mark Needham

June 22nd, 2010 at 10:27 pm

Posted in .NET


C#: A failed attempt at F#-ish pattern matching

with 2 comments

A few weeks ago we had some C# code around calculations which had got a bit too imperative in nature.

The code looked roughly like this:

public class ACalculator
{
	public double CalculateFrom(UserData userData)
	{
		if(userData.Factor1 == Factor1.Option1)
		{
			return 1.0;
		}
 
		if(userData.Factor2 == Factor2.Option3)
		{
			return 2.0;
		}
 
		if(userData.Factor3 == Factor3.Option2)
		{
		return 3.0;
		}
		return 0.0;
	}
}

I think there should be a more object oriented way to write this code whereby we push some of the logic onto the ‘UserData’ object but it struck me that it reads a little bit like pattern matching code you might see in F#.

I decided to drive the code to use a dictionary which would store functions representing each of the conditions in the if statements:

public class ACalculator
{
	private Dictionary<Func<UserData, bool>, double> calculations;
 
	public ACalculator()
	{
    		calculations = new Dictionary<Func<UserData,bool>,double>
                       {
                           {u => u.Factor1 == Factor1.Option1, 1.0},
                           {u => u.Factor2 == Factor2.Option3, 2.0},                                       
                           {u => u.Factor3 == Factor3.Option2, 3.0}                                 
                       };	
	}	
 
	public double CalculateFrom(UserData userData)
	{
    		var calculation = calculations.Keys.FirstOrDefault(calc => calc(userData));
    		if(calculation != null)
    		{
        		return calculations[calculation];
    		}
 
    		return 0.0;
	}
}

It’s less readable than it was before, and it’s not obvious that the functions need to be added to the dictionary in that particular order for it to work.
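If the ordering genuinely matters, it’s also worth remembering that Dictionary doesn’t guarantee enumeration order. A sketch like the following (not what we actually did) would at least make the ordering explicit by using a list of pairs:

private readonly List<KeyValuePair<Func<UserData, bool>, double>> calculations =
	new List<KeyValuePair<Func<UserData, bool>, double>>
	{
		new KeyValuePair<Func<UserData, bool>, double>(u => u.Factor1 == Factor1.Option1, 1.0),
		new KeyValuePair<Func<UserData, bool>, double>(u => u.Factor2 == Factor2.Option3, 2.0),
		new KeyValuePair<Func<UserData, bool>, double>(u => u.Factor3 == Factor3.Option2, 3.0)
	};
 
public double CalculateFrom(UserData userData)
{
	// the first predicate that matches wins, in the order the pairs were added
	var match = calculations.FirstOrDefault(pair => pair.Key(userData));
	return match.Key == null ? 0.0 : match.Value;
}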

I’ve simplified the real example a bit to show the idea. Either way I don’t think this works as the best abstraction in this situation, although it was an interesting experiment.

Written by Mark Needham

June 13th, 2010 at 10:35 pm

Posted in .NET,F#


C#: Using a dictionary instead of if statements

with 22 comments

A problem we had to solve on my current project is how to handle form submission where the user can click on a different button depending on whether they want to go to the previous page, save the form or go to the next page.

An imperative approach to this problem might yield code similar to the following:

public class SomeController
{
	public ActionResult TheAction(string whichButton, UserData userData)
	{
		if(whichButton == "Back")
		{
			// do the back action
		}
		else if(whichButton == "Next")
		{
			// do the next action
		}
		else if(whichButton == "Save")
		{
			// do the save action
		}
 
		throw new Exception("");
	}
}

A neat design idea which my colleague Dermot Kilroy introduced on our project is the idea of using a dictionary to map to the different actions instead of using if statements.

public class SomeController
{
	private readonly Dictionary<string, Func<UserData, ActionResult>> handleAction;
 
	public SomeController()
	{
		// map each button name to the handler for that action
		handleAction = new Dictionary<string, Func<UserData, ActionResult>>
			{ { "Back", BackAction },
			  { "Next", NextAction },
			  { "Save", SaveAction } };
	}
 
	public ActionResult TheAction(string whichButton, UserData userData)
	{
		if(handleAction.ContainsKey(whichButton))
		{
			return handleAction[whichButton](userData);
		}
 
		throw new Exception("");
	}
 
	private ActionResult NextAction(UserData userData)
	{
		// do cool stuff
	}
}

It’s quite similar in a way to a problem we had on another project where we needed to deal with user inputs and then create an object appropriately.

The way we have to read the code is a bit more indirect than with the original approach since you now need to click through to the individual methods for each action.

On the other hand I like the fact that we don’t have if statements all over the place anymore.
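A small refinement we could make (this isn’t in the code above) is to use ‘TryGetValue’ so the dictionary is only looked up once:

public ActionResult TheAction(string whichButton, UserData userData)
{
	Func<UserData, ActionResult> action;
	if (handleAction.TryGetValue(whichButton, out action))
	{
		return action(userData);
	}
 
	throw new Exception("");
}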

* Updated * – updated to take Dhananjay Goyani’s comments into account

Written by Mark Needham

May 30th, 2010 at 11:13 pm

Posted in .NET


Functional C#: An imperative to declarative example

with 6 comments

I wrote previously about how we’ve been working on some calculations on my current project and one thing we’ve been trying to do is write this code in a fairly declarative way.

Since we’ve been test driving the code it initially started off being quite imperative and looked a bit like this:

public class TheCalculator
{
	...
	public double CalculateFrom(UserData userData)
	{
		return Calculation1(userData) + Calculation2(userData) + Calculation3(userData);
	}
 
	public double Calculation1(UserData userData)
	{
		// do calculation stuff here
	}
 
	public double Calculation2(UserData userData)
	{
		// do calculation stuff here
	}
	...
}

What we have in ‘CalculateFrom’ is a series of calculations which we can put in a collection and then sum together:

public class TheCalculator
{
	...
	public double CalculateFrom(UserData userData)
	{
		var calculations = new Func<UserData, double>[] { Calculation1, Calculation2, Calculation3 };
 
		return calculations.Sum(calculation => calculation(userData));
	}
 
	public double Calculation1(UserData userData)
	{
		// do calculation stuff here
	}
	...
}

We can pull out a ‘Calculation’ delegate to make that a bit more readable:

public class TheCalculator
{
	private delegate double Calculation(UserData userData);
 
	public double CalculateFrom(UserData userData)
	{
		var calculations = new Calculation[] { Calculation1, Calculation2, Calculation3 };
 
		return calculations.Sum(calculation => calculation(userData));
	}
	...	
}

One of the cool things about structuring the code like this is that if we want to add a new Calculation we can just go to the end of the array, type in the name of the method and then Resharper will create it for us with the proper signature.

We eventually came across some calculations which needed to be subtracted from the other ones, which seems like quite an imperative thing to do!

Luckily Christian saw a way to wrap these calculations in a ‘Subtract’ function so that we could stay in declarative land:

public class TheCalculator
{
	private delegate double Calculation(UserData userData);
 
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4) };
 
		return calculations.Sum(calculation => calculation(userData));
	}
	...	
	public Calculation Subtract(Calculation calculation)
	{
		return userData => calculation(userData) * -1;
	}
}

Having a method which explicitly returns a ‘Calculation’ allows us to drop the explicit type from the array declaration, which is pretty neat.
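To spell that out a little (this snippet is mine rather than from the project code): method groups on their own don’t have a type the compiler can infer an array from, but one element that really is a ‘Calculation’ is enough to anchor the inference:

// won't compile - no best type can be found from the method groups alone
var broken = new[] { Calculation1, Calculation2 };
 
// compiles - Subtract(...) returns a Calculation, so the array is inferred as Calculation[]
var calculations = new[] { Calculation1, Calculation2, Subtract(Calculation3) };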

We can also change the method signature of ‘Subtract’ to take in a variable number of calculations if we need to:

public class TheCalculator
{
	...	
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4, Calculation5) };
 
		return calculations.Sum(calculation => calculation(userData));
	}
 
	public Calculation Subtract(params Calculation[] calculations)
	{
		return userData => calculations.Sum(calculation =>  calculation(userData)) * -1;
	}
}

The other nice thing about coding it this way showed up when we fed real data through the code: we were getting the wrong values returned and wanted to understand where it was falling down.

We could easily temporarily add in a ‘Console.WriteLine’ statement like this to help us out:

public class TheCalculator
{
	...	
	public double CalculateFrom(UserData userData)
	{
		var calculations = new [] { Calculation1, Calculation2, Calculation3, Subtract(Calculation4, Calculation5) };
 
		return calculations
			.Select(calculation =>
					{
						Console.WriteLine(calculation.Method.Name + " = " + calculation(userData));
						return calculation;
					})
			.Sum(calculation => calculation(userData));
	}
	...
}

It then printed the results down the page like so:

Calculation1 = 23.34
Calculation2 = 45.45
...

Written by Mark Needham

April 20th, 2010 at 7:08 am

Posted in .NET


Functional C#: Using custom delegates to encapsulate Funcs

without comments

One of the problems that I’ve frequently run into when writing C# code in a more functional way is that we can often end up with ‘Funcs’ all over the place which don’t really describe what concept they’re encapsulating.

We had some code similar to this where it wasn’t entirely obvious what the Func being stored in the dictionary was actually doing:

public class Calculator
{
	private Dictionary<string, Func<double, double, double>> lookups = new Dictionary<string, Func<double, double, double>>();
 
	public Calculator()
	{
		lookups.Add("key", (input1, input2) => (input1 * 0.1) / input2);
	}
	...
}

Christian pointed out that we can create a named delegate (something I had completely forgotten!) so that it’s a bit more obvious what exactly we’re storing in the dictionary just by looking at the code.

We’d then end up with this code:

public class Calculator
{
	private delegate double PremiumCalculation(double input1, double input2);
 
	private Dictionary<string, PremiumCalculation> lookups = new Dictionary<string, PremiumCalculation>();
 
	public Calculator()
	{
		lookups.Add("key", (input1, input2) => (input1 * 0.1) / input2);
	}
	...
}

Now it’s pretty clear from just reading the declaration of ‘lookups’ what it’s being used for without needing to go further into the code to understand that.
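And calling whatever is stored in the dictionary (the key and the numbers below are made up) reads just as it did with the raw Func:

PremiumCalculation calculation = lookups["key"];
double premium = calculation(1000.0, 2.0);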

This is just a simple example, but the problem becomes more obvious the more angle brackets we end up with in our dictionary definition, and pulling those Funcs out into a named delegate makes the code much easier to understand.

Written by Mark Needham

April 17th, 2010 at 12:16 pm

Posted in .NET


Functional C#: Continuation Passing Style

with 4 comments

Partly inspired by my colleague Alex Scordellis’ recent post about lambda passing style, I spent some time trying out a continuation passing style on some of the code in one of our controllers to see how different it would look compared to its current top to bottom imperative style.

We had code similar to the following:

public ActionResult Submit(string id, FormCollection form)
{
	var shoppingBasket = CreateShoppingBasketFrom(id, form);
 
	if (!validator.IsValid(shoppingBasket, ModelState))
	{
	    return RedirectToAction("index", "ShoppingBasket", new { shoppingBasket.Id });
	}
	try
	{
	    shoppingBasket.User = userService.CreateAccountOrLogIn(shoppingBasket);
	}
	catch (NoAccountException)
	{
	    ModelState.AddModelError("Password", "User name/email address was incorrect - please re-enter");
	    return RedirectToAction("index", "ShoppingBasket", new { Id = new Guid(id) });
	}
 
	UpdateShoppingBasket(shoppingBasket);
	return RedirectToAction("index", "Purchase", new { Id = shoppingBasket.Id });
}

The user selects some products that they want to buy, and then just before they click through to the payment page we give them the option to log in if they are an existing user.

This code handles the server side validation of the shopping basket and tries to log the user in. If that fails then we return to the original page with an error message. If not then we forward them to the payment page.

With a continuation passing style we move away from the imperative approach, where we call a function, get a result and then do something with that result, to a style where we call a function and pass it a continuation (a block of code which represents the rest of the program). After the function has calculated its result it calls that continuation.
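As a tiny standalone illustration of the idea (this snippet is mine and has nothing to do with the shopping basket code), a function written in this style takes the rest of the program as an extra parameter instead of returning a value:

// direct style: calculate a result and return it to the caller
private int Add(int a, int b)
{
	return a + b;
}
 
// continuation passing style: calculate the result and hand it straight to 'the rest of the program'
private void Add(int a, int b, Action<int> continuation)
{
	continuation(a + b);
}
 
// usage: Add(1, 2, result => Console.WriteLine(result));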

If we apply that style to the above code we end up with the following:

public ActionResult Submit(string id, FormCollection form)
{
	var shoppingBasket = CreateShoppingBasketFrom(id, form);
	return IsValid(shoppingBasket, ModelState,
					() => RedirectToAction("index", "ShoppingBasket", new { shoppingBasket.Id} ),
					() => LoginUser(shoppingBasket, 
							() =>
								{
									ModelState.AddModelError("Password", "User name/email address was incorrect - please re-enter");
									return RedirectToAction("index", "ShoppingBasket", new { Id = new Guid(id) });
								},
							user => 
								{
									shoppingBasket.User = user;
									UpdateShoppingBasket(shoppingBasket);
									return RedirectToAction("index", "Purchase", new { Id = shoppingBasket.Id });
								}));
}
 
 
private RedirectToRouteResult IsValid(ShoppingBasket shoppingBasket, ModelStateDictionary modelState, Func<RedirectToRouteResult> failureFn, Func<RedirectToRouteResult> successFn)
{
 
	return validator.IsValid(shoppingBasket, modelState) ? successFn() : failureFn();
}
 
private RedirectToRouteResult LoginUser(ShoppingBasket shoppingBasket, Func<RedirectToRouteResult> failureFn, Func<User,RedirectToRouteResult> successFn)
{
	User user = null;
	try
	{
	    user = userService.CreateAccountOrLogIn(shoppingBasket);
	}
 
	catch (NoAccountException)
	{
		return failureFn();
	}
 
	return successFn(user);
}

The common theme in this code seemed to be that there were both success and failure paths for the code to follow depending on the result of a function, so I passed in both success and failure continuations.

I quite like the fact that the try/catch block is no longer in the main method and the different things that are happening in this code now seem grouped together more than they were before.

In general though the way that I read the code doesn’t seem that different.

Instead of following the flow of logic in the code from top to bottom we now need to follow it from left to right, and since that’s not as natural the code is more complicated than it was before.

I do understand how the code works better than I did before I started playing around with it, but I’m not yet convinced this is a better approach to designing code in C#.

Written by Mark Needham

March 19th, 2010 at 7:48 am

Posted in .NET


Functional C#: Using Join and GroupJoin

with 7 comments

An interesting problem which I’ve come across a few times recently is where we have two collections which we want to use together in some way and get a result which could either be another collection or some other value.

In one which Chris and I were playing around with, we had a collection of years and a collection of cars with corresponding years. The requirement was to show all the years on the page, along with the first car we found for each year or an empty value if there was no car for that year.

We effectively needed to do a left join on the cars collection.

This is an imperative way of solving the problem:

public class Car
{
     public int Year { get; set; }
     public string Description { get; set; }
}
var years = new[] { 2000, 2001, 2002, 2003 };
var cars = new[] { new Car { Year = 2000, Description = "Honda" }, new Car { Year = 2003, Description = "Ford" } };
 
var newCars = new List<Car>();
foreach (var year in years)
{
    var car = cars.Where(x => x.Year == year).FirstOrDefault() ?? new Car  { Year = year, Description = ""};
    newCars.Add(car);
}

We can actually achieve the same result in a more declarative way by making use of ‘GroupJoin’:

var newCars = years.GroupJoin(cars, 
                              year => year, 
                              car => car.Year,
                              (year, theCars) =>  theCars.FirstOrDefault() ??  new Car { Year = year, Description = ""  });

‘GroupJoin’ is useful if we want to keep all of the items in the first collection and get a collection of the items in the second collection which match for the specified keys.

In this case it allows us to identify where there are no matching cars for a specific year and then just set a blank description for those years.

One nice side effect is that if we later want to include multiple cars for a year then we shouldn’t have to change the code too much to achieve that.
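For what it’s worth, the same left join can also be written with LINQ’s query syntax using ‘join … into’, which some people find easier to read:

var newCars = from year in years
              join car in cars on year equals car.Year into carsForYear
              select carsForYear.FirstOrDefault() ?? new Car { Year = year, Description = "" };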

Another example I came across is where one collection contains filter criteria which need to be applied against the other collection.

We have a collection of years and need to indicate whether there is a matching car for each of those years.

[Test]
public void JoinExample()
{
    var years = new[] { 2000, 2003 };
    var cars = new[] { new Car { Year = 2000, Description = "Honda" },
                       new Car { Year = 2003, Description = "Ford" },
                       new Car { Year = 2003, Description = "Mercedes"}};
 
    Assert.That(AreThereMatchingCars(years, cars), Is.True);
}
public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    foreach (var year in years)
    {
        if(cars.Where(c => c.Year == year).Count() == 0)
        {
            return false;
        }
    }
    return true;
}

We can rewrite this function like so:

public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    var distinctCars = cars.GroupBy(x => x.Year).Select(x => x.First());
    return years.Join(distinctCars, y => y, c => c.Year, (y, c) => c).Count() == years.Count();
}

This actually became more complicated than we expected because we were working out whether there were matching cars for each of the specified years by counting the filter items and then comparing that to the number of items we got when we joined that collection with our collection of cars.

If we have more than one car for the same year that logic falls down, so we needed to get just one car per year, which is what the first line of the function does.
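As an aside, if we only care about the boolean answer then the counting (and the de-duplication) can be avoided altogether with ‘All’ and ‘Any’, although that loses the Join flavour this post is about:

public bool AreThereMatchingCars(IEnumerable<int> years, IEnumerable<Car> cars)
{
    return years.All(year => cars.Any(car => car.Year == year));
}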

I can’t decide whether or not the code is easier to read and understand by making use of these functions but it’s an approach that I picked up when playing around with F# so it’s interesting that it can still be applied in C# code as well.

Written by Mark Needham

March 4th, 2010 at 6:55 pm

Posted in .NET


C#: Overcomplicating with LINQ

with 11 comments

I recently came across an interesting bit of code which was going through a collection of strings and keeping only the first ‘x’ characters of each one, discarding the rest.

The code looked roughly like this:

var words = new[] {"hello", "to", "the", "world"};
var newWords = new List<string>();
foreach (string word in words)  
{
    if (word.Length > 3)
    {
        newWords.Add(word.Substring(0, 3));
        continue;
    }
    newWords.Add(word);
}

For this initial collection of words we would expect ‘newWords’ to contain [“hel”, “to”, “the”, “wor”].

In a way it’s quite annoying that the API for ‘Substring’ throws an exception if you try to get just the first 3 characters of a string which contains fewer than 3 characters. If it didn’t do that then we would have an easy ‘Select’ call on the collection.

Instead we have an annoying if statement which stops us from treating the collection as a whole – we do two different things depending on whether or not the string contains more than 3 characters.

In the spirit of the transformational mindset I tried to write some code using functional collection parameters which didn’t make use of an if statement.

Following this idea we pretty much have to split the collection into two resulting in this initial attempt:

var newWords = words
    .Where(w => w.Length > 3)
    .Select(w => w.Substring(0, 3))
    .Union(words.Where(w => w.Length <= 3).Select(w => w));

This resulted in a collection containing [“hel”, “wor”, “to”, “the”] which is now in a different order to the original!

To keep the original order I figured that we needed to keep track of the original index position of the words, resulting in this massively overcomplicated version:

var wordsWithIndex = words.Select((w, index) => new { w, index });
 
var newWords = wordsWithIndex
               .Where(a => a.w.Length > 3)
               .Select(a => new { w = a.w.Substring(0, 3), a.index })
               .Union(wordsWithIndex.Where(a => a.w.Length <= 3).Select(a => new { a.w, a.index }))
               .OrderBy(a => a.index);

We end up with a collection of anonymous types from which we can get the transformed words but it’s a far worse solution than any of the others because it takes way longer to understand what’s going on.

I couldn’t see a good way to make use of functional collection parameters to solve this problem but luckily at this stage Chris Owen came over and pointed out that we could just do this:

var newWords = words.Select(w => w.Length > 3 ? w.Substring(0, 3) : w);

I’d been trying to avoid doing what is effectively an if statement inside a ‘Select’ but I think in this case it makes a lot of sense and results in a simple and easy to read solution.
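Another variation along the same lines (mine, not something we used) is to clamp the length with ‘Math.Min’, which sidesteps the Substring exception without any conditional at all:

var newWords = words.Select(w => w.Substring(0, Math.Min(3, w.Length)));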

Written by Mark Needham

February 21st, 2010 at 12:01 pm

Posted in .NET


C#: A lack of covariance with generics example

with 10 comments

One of the things I find most confusing when reading about programming languages is the idea of covariance and contravariance. I’d previously read that covariance is not possible when using generics in C#, and I recently came across an example which showed me that this was true.

I came across this problem while looking at how to refactor some code which has been written in an imperative style:

public interface IFoo 
{
	string Bar { get; set; } 
}
 
public class Foo : IFoo 
{ 
	public string Bar { get; set; }
}
private IEnumerable<IFoo> GetMeFoos()
{
    var someStrings = new[] { "mike", "mark" };
 
    var someFoos = new List<IFoo>();
    foreach (var s in someStrings)
    {
        someFoos.Add(new Foo { Bar = s });
    }
    return someFoos;
}

I changed the code to read like so:

private IEnumerable<IFoo> GetMeFoos()
{
    var someStrings = new[] { "mike", "mark" };
    return someStrings.Select(s => new Foo { Bar = s });
}

Which fails with the following compilation error:

Error	1	Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<Test.Foo>' to 'System.Collections.Generic.IEnumerable<Test.IFoo>'. An explicit conversion exists (are you missing a cast?)

I thought the compiler would infer that I actually wanted a collection of ‘IFoo’ given that I was returning from the method directly after the call to Select but it doesn’t.

As I understand it, the reason we can’t treat an IEnumerable of ‘Foo’ as an IEnumerable of ‘IFoo’ is that we would run into problems if, later on in our program, we worked off the assumption that our original collection only contained Foos.

For example it would be possible to add any item which implemented the ‘IFoo’ interface into the collection even if it wasn’t a ‘Foo’:

// this code won't compile - but if the assignment were allowed...
 
List<Foo> foos = new List<Foo>();
// add some foos
 
List<IFoo> ifoos = foos;
 
// ...we could add a non-Foo to our List<Foo> through the IFoo-typed reference
ifoos.Add(new SomeOtherTypeThatImplementsIFoo());

It’s not possible to convert ‘SomeOtherTypeThatImplementsIFoo’ to ‘Foo’ so we would run ourselves into problems.

Rick Byers has a post from a few years ago where he explains how this works in more detail and also points out that covariance of generics is actually supported by the CLR, just not by C#.

In the case I described we can get around the problem by casting ‘Foo’ to ‘IFoo’ inside the ‘Select’:

private IEnumerable<IFoo> GetMeFoos()
{
    var someStrings = new[] { "mike", "mark" };
    return someStrings.Select(s => (IFoo) new Foo { Bar = s });
}
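Another way around it (not from the original code) is to let LINQ do the cast for us with ‘Cast<IFoo>()’ at the end of the query:

private IEnumerable<IFoo> GetMeFoos()
{
    var someStrings = new[] { "mike", "mark" };
    return someStrings.Select(s => new Foo { Bar = s }).Cast<IFoo>();
}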

Written by Mark Needham

February 20th, 2010 at 12:17 pm

Posted in .NET


C#: Causing myself pain with LINQ’s delayed evaluation

with 3 comments

I recently came across some code which was imperatively looping through a collection and mapping each value to something else by using an injected dependency.

I thought I’d make use of functional collection parameters to try and simplify the code a bit but actually ended up breaking one of the tests.

About a month ago I wrote about how I’d written a hand rolled stub to simplify a test and this was actually where I caused myself the problem!

The hand rolled stub was defined like this:

public class AValueOnFirstCallThenAnotherValueService : IService
{
	private int numberOfCalls = 0;
 
	public string SomeMethod(string parameter)
	{
		if(numberOfCalls == 0)
		{
			numberOfCalls++;
			return "aValue";
		}
		else
		{
			numberOfCalls++;
			return "differentValue";
		}
	}
}

The test was something like this:

[Test]
public void SomeTest()
{
	var fooOne = new Foo { Bar = "barOne" };
	var fooTwo = new Foo { Bar = "barTwo" }; 
	var aCollectionOfFoos = new List<Foo> { fooOne, fooTwo };
 
	var service = new AValueOnFirstCallThenAnotherValueService();
 
	var someObject = new SomeObject(service);
 
	var fooBars = someObject.Method(aCollectionOfFoos);
 
	Assert.That(fooBars.First().Other, Is.EqualTo("aValue"));
	// and so on
}

The object under test looked something like this:

public class SomeObject 
{
	private IService service;
 
	public SomeObject(IService service)
	{
		this.service = service;
	}
 
	public IEnumerable<FooBar> Method(List<Foo> foos)
	{
		var fooBars = new List<FooBar>();
		foreach(var foo in foos)
		{
			fooBars.Add(new FooBar { Bar = foo.Bar, Other = service.SomeMethod(foo.Bar) });
		}
 
		// a bit further down
 
		var sortedFooBars = fooBars.OrderBy(f => f.Other);
 
		return fooBars;
	}
}

I decided to try and incrementally refactor the code like so:

public class SomeObject 
{
	...
 
	public IEnumerable<FooBar> Method(List<Foo> foos)
	{
		var fooBars = foos.Select(f => new FooBar { Bar = f.Bar, Other = service.SomeMethod(f.Bar) });
 
		// a bit further down
 
		var sortedFooBars = fooBars.OrderBy(f => f.Other);
 
		return fooBars;
	}
}

I ran the tests after doing this and the test I described above failed – it was expecting a return value for ‘Other’ of ‘aValue’ but was actually returning ‘differentValue’.

I was a bit confused about what was going on until I watched what the test was doing through the debugger and realised that the ‘OrderBy’ call was causing the earlier ‘Select’ to be re-evaluated. That meant the value returned by ‘service.SomeMethod’ was ‘differentValue’, since by that point it was being called for the 3rd and 4th time and it’s set up to return ‘aValue’ only on the 1st call.

The way to get around this problem was to force the evaluation of ‘fooBars’ to happen immediately by calling ‘ToList()’:

public class SomeObject 
{
	...
 
	public IEnumerable<FooBar> Method(List<Foo> foos)
	{
		var fooBars = foos.Select(f => new FooBar { Bar = f.Bar, Other = service.SomeMethod(f.Bar) }).ToList();
 
		...
	}
}

In this case it was fairly easy to identify the problem, but I’ve written similar code before which has ended up reordering collections with thousands of items in them because it was being lazily evaluated every time the collection was needed.

In Jeremy Miller’s article about functional C# he suggests memoization as an optimisation technique to stop expensive calls being made more times than they need to be, so perhaps that would be another way to solve the problem, although I haven’t tried that approach before.
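A minimal sketch of what that might look like here (the ‘Memoize’ helper is a hypothetical name, not something from the project):

// caches the result of the wrapped function for each distinct argument
private static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> calculate)
{
	var cache = new Dictionary<TArg, TResult>();
	return arg =>
		{
			TResult result;
			if (!cache.TryGetValue(arg, out result))
			{
				result = calculate(arg);
				cache.Add(arg, result);
			}
			return result;
		};
}
 
// usage: var cachedSomeMethod = Memoize<string, string>(service.SomeMethod);
//        then call cachedSomeMethod(f.Bar) inside the Select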

Written by Mark Needham

February 18th, 2010 at 10:28 pm

Posted in .NET
