Mark Needham

Thoughts on Software Development

Archive for the ‘.net’ tag

C#/F#: Using .NET framework classes

with 2 comments

I was recently discussing F# with a couple of colleagues and one thing that came up is the slightly different ways that we might choose to interact with certain .NET framework classes compared to how we use those same classes in C# code.

One class where I see potential for different usage is Dictionary.

In C# code when we’re querying a dictionary to check that a value exists before we try to extract it we might typically do this:

public string GetValueBy(string key) 
{
	var dictionary = new Dictionary<string, string>() { {"key", "value" } };
 
	if(dictionary.ContainsKey(key)) 
	{
		return dictionary[key];
	}	 
	return ""; // maybe we'd do something else here but for the sake of this we return the empty string
}

There is an alternative way to do this but it makes use of the ‘out’ keyword so it’s generally frowned upon.

public string GetValueBy(string key)
{
	var dictionary = new Dictionary<string, string>() {{"key", "value"}};
	string value = "";
	dictionary.TryGetValue(key, out value);
	return value;
}

In F#, when we make use of a method which effectively defines a second return parameter by using the ‘out’ keyword, the return value of that method becomes a tuple of the original return value and the ‘out’ value.

For example when querying Dictionary:

open System.Collections.Generic
open System
open System.Globalization
 
let testDictionary = 
    let builder = new Dictionary<string, string>()
    builder.Add("key1", "value")
    builder
 
let getByKey key =
    let result, value = testDictionary.TryGetValue(key)
    if result then value else ""

The return type of TryGetValue is ‘bool * string’ in this case and by assigning that result to two different values we can get the appropriate values. This is certainly a time when tuples are really useful for simplifying our code.

We could have made use of an ‘out’ parameter as we did in C# but I think it’s much easier to just use the tuple. Dustin Campbell describes the other ways we could extract a value from a dictionary on his blog.

It’s not significantly more concise code-wise than the C# way of doing things, although from a brief look in Reflector at how the Dictionary code works, we are theoretically making fewer calls to the underlying data structure to get the value out.

I tried timing 100,000 accesses to a dictionary with each approach to see if the total time would be significantly different but there wasn’t any noticeable difference.
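For reference, the timing test had roughly this shape – this is my reconstruction rather than the original code, and micro-benchmarks like this are only indicative:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var dictionary = new Dictionary<string, string> { { "key", "value" } };
        const int iterations = 100000;

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Two lookups against the data structure: ContainsKey, then the indexer
            if (dictionary.ContainsKey("key"))
            {
                var value = dictionary["key"];
            }
        }
        stopwatch.Stop();
        Console.WriteLine("ContainsKey + indexer: " + stopwatch.ElapsedMilliseconds + "ms");

        stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // A single lookup via TryGetValue
            string value;
            dictionary.TryGetValue("key", out value);
        }
        stopwatch.Stop();
        Console.WriteLine("TryGetValue: " + stopwatch.ElapsedMilliseconds + "ms");
    }
}
```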

There are other ‘Try…’ methods defined in the base class library – DateTime for example has some more – which I’ve never used in C# so I’d be intrigued to see whether other languages that run on the CLR will make use of these methods.
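DateTime.TryParseExact is one example of those – it follows the same bool-plus-out-parameter shape, which means F# would expose it as a tuple too. The C# side looks like this:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // Try... methods return a bool and write the parsed value to an out parameter
        DateTime parsed;
        bool success = DateTime.TryParseExact("16 Jun 2009", "dd MMM yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed);

        Console.WriteLine(success);     // True
        Console.WriteLine(parsed.Year); // 2009
    }
}
```

In F# the same call would come back as a ‘bool * DateTime’ tuple, just as TryGetValue did above.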

Written by Mark Needham

June 16th, 2009 at 6:55 pm

Posted in .NET,F#


Real World Functional Programming: Book Review

with 2 comments

The Book

Real World Functional Programming by Tomas Petricek with Jon Skeet (corresponding website)

The Review

I decided to read this book after being somewhat inspired to learn more about functional programming after talking with Phil about his experiences learning Clojure. I’m currently working on a .NET project so it seemed to make sense that F# was the language I picked to learn.

What did I learn?

  • I’ve worked with C# 3.0 since around July 2008 so I had a bit of experience with some of the functional features in C# before picking up this book. I therefore found it very interesting to read about the history of the lambda calculus and the different functional languages and how they came into being. Having this as an opening chapter was a nice way to introduce the functional approach to programming.
  • Immutable state is one of the key ideas in functional programming – this reminded me of a Joe Armstrong video I watched last year where he spoke of his reduced need to use a debugger when coding Erlang due to the fact that there was only one place where state could have been set rather than several as is the case with a more imperative approach. We have been trying to code with immutable state in mind in our coding dojos and while it takes a bit more thinking up front, the code is much easier to read when written that way.
  • Separating the operations from the data is important for allowing us to write code that can be parallelised, focusing on what to do to the data rather than how to do it. Sadek Drobi has a nice illustration of what he calls mosquito programming vs functional programming on page 14 of the slides of his QCon presentation. It describes this idea quite nicely.
  • A cool technique that Phil taught me when reading language related books is to have the PDF of the book on one side of the screen and the REPL (in this case F# interactive) on the other side so that you can try out the examples in real time. The book encourages this approach and all the examples follow on from previous ones which I think works quite well for gradually introducing concepts.
  • Functions are types in functional programming – I have had a bit of exposure to this idea with Funcs in C# but partial function application is certainly a new concept to me. I can certainly see the value in this although it took me a while to get used to the idea. I am intrigued as to where we should use a functional approach and where an OO approach when working in C#. I think both have a place in well written code.
  • F#’s type inference is one of my favourite things about the language – you get safety at compile time but you don’t waste a lot of code writing out type information that the compiler can work out for you. It has the strongest type inference of any language that I’ve worked with and it was quite nice that it was able to work things out for me instead of me having to type them all out.
  • I really like the idea of option types which I first learnt about from the book. Having the ability to explicitly define when a query hasn’t worked is far superior to having to do null checks in our code or the various strategies we use to get around this.
  • I thought it was cool that in the early chapters the focus of the F# code is on providing examples that you can get running straight away, without having to worry about structuring your code in a maintainable way. Once I had a reasonable grasp of that, the chapter about using record types to structure code in an OO way came up. I still prefer the C# style of structuring code in objects – it just feels more natural to me at the moment and manages the complexity more easily. It is quite easy to switch between the two styles using features like member augmentation, so I think it’s probably possible to mix the two quite freely.
  • We can use modules to make F# functions which don’t fit onto any class available from C# code. The code is not as clean as if we were writing just for it to be used by other F# code but it’s not too bad:
    module Tests =                                               
        let WithIncome (f:Func<_, _>) client =                  
            { client with Income = f.Invoke(client.Income) }

    We can then call this in our C# code like so:

    Tests.WithIncome(income => income + 5000, client);

    Dave Cameron has written more about this.

  • Although I studied data structures at university I don’t normally pay a great deal of attention to their performance characteristics, so it was interesting to see the massive performance hit that you take when appending a value to the end of an F# list compared to prepending it to the beginning. F# lists are immutable linked lists, so prepending is O(1) but appending a single element means rebuilding the whole list, which is O(N) where N is the length of the list – appending M elements one at a time therefore costs O(N*M).
  • Chapter 13 is about parallel processing of data for which I found I needed to download the Microsoft Parallel Extensions to .NET Framework 3.5, June 2008 Community Technology Preview and then add a reference to ‘C:\Program Files\Microsoft Parallel Extensions Jun08 CTP\System.Threading.dll’ in order to make use of those features.
  • The author provides a nice introduction to continuations and how you can make use of them in F# by using continuation passing style. I’m intrigued as to how we can make use of these in our code – we do a bit already by making use of callbacks which get fired at a later point in our code – but from what I’ve read it sounds like we should be able to do even more especially when writing web applications.
  • Asynchronous workflows are also made very accessible in this book – I had previously struggled a bit with them but the author covers the various API methods available to you and then explains what is going on behind the syntactic sugar that F# provides. I have made some use of these in the little twitter application that I’ve been working on now and again.
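As a rough C# sketch of the partial function application idea mentioned in the bullets above (my own example, not one from the book):

```csharp
using System;

class Program
{
    static void Main()
    {
        // A curried function: applying the first argument yields a new function
        Func<int, Func<int, int>> add = x => y => x + y;

        // Partially apply 'add' to 5 to get a one-argument function
        Func<int, int> addFive = add(5);

        Console.WriteLine(addFive(3));  // 8
        Console.WriteLine(addFive(10)); // 15
    }
}
```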

In Summary

I really enjoyed reading this book – it’s my first real foray into the world of functional programming since university and I think I understand the functional approach to programming much better than I did back then from reading this book.

It takes an approach of introducing various functional programming concepts before showing examples of where that concept might come in useful when coding. It’s also particularly useful that examples are shown in C# and F# as this made it much easier for me to understand what the F# code was doing by comparing it with the code in a more familiar language.

I’d certainly recommend this to any .NET developers curious about learning how to apply ideas derived from functional programming to their C# code and indeed to any developers looking to start out learning about functional programming.

Written by Mark Needham

May 24th, 2009 at 7:25 pm

Posted in Books


C#: Using virtual leads to confusion?

with 7 comments

A colleague and I were looking through some code that I worked on a couple of months ago where I had created a one level hierarchy using inheritance to represent the response status that we get back from a service call.

The code was along these lines:

public class ResponseStatus
{
    public static readonly ResponseStatus TransactionSuccessful = new TransactionSuccessful();
    public static readonly ResponseStatus UnrecoverableError = new UnrecoverableError();
 
    public virtual bool RedirectToErrorPage
    {
        get { return true; }
    }
}
 
public class UnrecoverableError : ResponseStatus
{
 
}
 
public class TransactionSuccessful : ResponseStatus
{
    public override bool RedirectToErrorPage
    {
        get { return false; }
    }
}

Looking at it now it does seem a bit over-engineered, but the main confusion with this code is that when you click through to the definition of ‘RedirectToErrorPage’ it goes to the ResponseStatus version of that property, and it’s not obvious that it is being overridden in a subclass – this being possible due to my use of the virtual keyword.

You therefore need to look in two places to work out what’s going on which isn’t so good.

A solution which we came up with which is a bit cleaner is like so:

public abstract class ResponseStatus
{
    public static readonly ResponseStatus TransactionSuccessful = new TransactionSuccessful();
    public static readonly ResponseStatus UnrecoverableError = new UnrecoverableError();
 
    public abstract bool RedirectToErrorPage { get; }
}
 
public class UnrecoverableError : ResponseStatus
{
    public override bool RedirectToErrorPage
    {
        get { return true; }
    }
}
 
public class TransactionSuccessful : ResponseStatus
{
    public override bool RedirectToErrorPage
    {
        get { return false; }
    }
}

When you have more response statuses there is admittedly a bit more duplication, but it’s traded off against the improved ease of use and readability that we get.

It’s generally considered good practice to favour composition over inheritance and from what I can tell the virtual keyword is only ever going to be useful if you’re creating an inheritance hierarchy.

An interesting lesson learned.

Written by Mark Needham

May 6th, 2009 at 7:30 pm

Posted in .NET


F#: A day of writing a little twitter application

with 13 comments

I spent most of the bank holiday Monday here in Sydney writing a little application to scan through my twitter feed and find me just the tweets which have links in them since for me that’s where a lot of the value of twitter lies.

I’m sure someone has done this already but it seemed like a good opportunity to try and put a little of the F# that I’ve learned from reading Real World Functional Programming to use. The code I’ve written so far is at the end of this post.

What did I learn?

  • I didn’t really want to write a wrapper on top of the twitter API so I put out a request for suggestions for a .NET twitter API. It pretty much seemed to be a choice of either Yedda or tweetsharp and since the latter seemed easier to use I went with that. In the code you see at the end I have added the ‘Before’ method to the API because I needed it for what I wanted to do.
  • I found it really difficult writing the ‘findLinks’ method – the way I’ve written it at the moment uses pattern matching and recursion which isn’t something I’ve spent a lot of time doing. Whenever I tried to think how to solve the problem my mind just wouldn’t move away from the procedural approach of going down the collection, setting a flag depending on whether we had a ‘lastId’ or not and so on.

    Eventually I explained the problem to Alex and working together through it we realised that there are three paths that the code can take:

    1. When we have processed all the tweets and want to exit
    2. The first call to get tweets when we don’t have a ‘lastId’ starting point – I was able to get 20 tweets at a time through the API
    3. Subsequent calls to get tweets when we have a ‘lastId’ from which we want to work backwards from

    I think it is probably possible to reduce the code in this function to follow just one path by passing in the function to find the tweets but I haven’t been able to get this working yet.

  • I recently watched an F# video from Alt.NET Seattle featuring Amanda Laucher where she spoke of the need to explicitly state the types that we import from C# into our F# code. You can see that I needed to do that in my code when referencing the TwitterStatus class – I guess it would be pretty difficult for the use of that class to be inferred, but it still made the code a bit more clunky than any of the other simple problems I’ve played with before.
  • I’ve not used any of the functions on ‘Seq’ until today – from what I understand these are available for applying operations to any collections which implement IEnumerable – which is exactly what I had!
  • I had to use the following code to allow F# interactive to recognise the Dimebrain namespace:
    #r @"\path\to\Dimebrain.Tweetsharp.dll"

    I thought it would be enough to reference the assembly in my Visual Studio project and open the namespace but apparently not.

The code

This is the code I have at the moment – there are certainly some areas that it can be improved but I’m not exactly sure how to do it.

In particular:

  • What’s the best way to structure F# code? I haven’t seen any resources on how to do this so it’d be cool if someone could point me in the right direction. The code I’ve written is just a collection of functions which doesn’t really have any structure at all.
  • Reducing duplication – I hate the fact I’ve basically got the same code twice in the ‘getStatusesBefore’ and ‘getLatestStatuses’ functions – I wasn’t sure of the best way to refactor that. Maybe putting the common code up to the ‘OnFriendsTimeline’ call into a common function and then call that from the other two functions? I think a similar approach can be applied to findLinks as well.
  • The code doesn’t feel that expressive to me – I was debating whether or not I should have passed a type into the ‘findLinks’ function – right now it’s only possible to tell what each part of the tuple means by reading the pattern matching code which feels wrong. I think there may also be some opportunities to use the function composition operator but I couldn’t quite see where.
  • How much context should we put in the names of functions? Most of my programming has been in OO languages where whenever we have a method its context is defined by the object on which it resides. When naming functions such as ‘findOldestStatus’ and ‘oldestStatusId’ I wasn’t sure whether or not I was putting too much context into the function name. I took the alternative approach with the ‘withLinks’ function since I think it reads more clearly like that when it’s actually used.
#light
 
open Dimebrain.TweetSharp.Fluent
open Dimebrain.TweetSharp.Extensions
open Dimebrain.TweetSharp.Model
open Microsoft.FSharp.Core.Operators 
 
let getStatusesBefore (statusId:int64) = FluentTwitter
                                            .CreateRequest()
                                            .AuthenticateAs("userName", "password")
                                            .Statuses()
                                            .OnFriendsTimeline()
                                            .Before(statusId)
                                            .AsJson()
                                            .Request()
                                            .AsStatuses()
 
let withLinks (statuses:seq<Dimebrain.TweetSharp.Model.TwitterStatus>) = 
    statuses |> Seq.filter (fun eachStatus -> eachStatus.Text.Contains("http"))
 
let print (statuses:seq<Dimebrain.TweetSharp.Model.TwitterStatus>) =
    for status in statuses do
        printfn "[%s] %s" status.User.ScreenName status.Text    
 
let getLatestStatuses  = FluentTwitter
                            .CreateRequest()
                            .AuthenticateAs("userName", "password")
                            .Statuses()
                            .OnFriendsTimeline()
                            .AsJson()
                            .Request()
                            .AsStatuses()                                    
 
let findOldestStatus (statuses:seq<Dimebrain.TweetSharp.Model.TwitterStatus>) = 
    statuses |> Seq.sort_by (fun eachStatus -> eachStatus.Id) |> Seq.hd
 
let oldestStatusId = (getLatestStatuses |> findOldestStatus).Id  
 
let rec findLinks (args:int64 * int * int) =
    match args with
    | (_, numberProcessed, recordsToSearch) when numberProcessed >= recordsToSearch -> ()
    | (0L, numberProcessed, recordsToSearch) -> 
        let latestStatuses = getLatestStatuses
        (latestStatuses |> withLinks) |> print
        findLinks(findOldestStatus(latestStatuses).Id, numberProcessed + 20, recordsToSearch)    
    | (lastId, numberProcessed, recordsToSearch) ->  
        let latestStatuses = getStatusesBefore lastId
        (latestStatuses |> withLinks) |> print
        findLinks(findOldestStatus(latestStatuses).Id, numberProcessed + 20, recordsToSearch)
 
 
let findStatusesWithLinks recordsToSearch =
    findLinks(0L, 0, recordsToSearch) |> ignore

And to use it to find the links contained in the most recent 100 statuses of the people I follow:

findStatusesWithLinks 100;;

Any advice on how to improve this will be gratefully received. I’m going to continue working this into a little DSL which can print me up a nice summary of the links that have been posted during the times that I’m not on twitter watching what’s going on.

Written by Mark Needham

April 13th, 2009 at 10:09 pm

Posted in .NET,F#


Functional C#: The hole in the middle pattern

with 6 comments

While reading Real World Functional Programming I came across an interesting pattern that I have noticed in some code bases recently which I liked but didn’t know had been given a name!

The hole in the middle pattern, coined by Brian Hurt, shows a cool way of using higher order functions in order to reuse code in cases where the code typically looks something like this:

public void SomeServiceCall() 
{
	var serviceClient = CreateServiceClient();
 
	try 
	{
		serviceClient.MakeMethodCall();
	}
	catch(SomeServiceException someServiceException) 
	{
		// Handle exception
	}
	finally 
	{
		serviceClient.Close();
	}
}

The first and the third blocks (initialisation and finalisation) are always the same but the serviceClient.MakeMethodCall() varies depending on which service we are using. The more services we have the more boring it gets writing out the same code over and over again.

In C# 3.0 we could reuse code in this situation by passing in a lambda expression which calls that method, allowing us to vary the important part of the method call while keeping the scaffolding the same.

public void SomeServiceCall(Action<ServiceClient> callService) 
{
	var serviceClient = CreateServiceClient();
 
	try 
	{
		callService(serviceClient);
	}
	catch(SomeServiceException someServiceException) 
	{
		// Handle exception
	}
	finally 
	{
		serviceClient.Close();
	}
}
SomeServiceCall(service => service.MakeMethodCall());

One of the things I’ve noticed with the ability to pass functions into methods is that we can sometimes make the code really difficult to read by doing so. When we’re dealing with services, though, this seems to be one of the best and most obvious uses of Actions/Funcs in C# 3.0, and it leads to a more reusable and easier to understand API.
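To make the pattern concrete, here is a self-contained version of the sketch above, using a hypothetical FakeServiceClient so that the scaffolding actually runs:

```csharp
using System;

// Hypothetical client standing in for a real service proxy
class FakeServiceClient
{
    public string MakeMethodCall() { return "result"; }
    public void Close() { Console.WriteLine("closed"); }
}

class ServiceCaller
{
    // The initialisation and finalisation are fixed; the 'hole' is filled by the delegate
    static void SomeServiceCall(Action<FakeServiceClient> callService)
    {
        var serviceClient = new FakeServiceClient();
        try
        {
            callService(serviceClient);
        }
        finally
        {
            serviceClient.Close();
        }
    }

    static void Main()
    {
        // prints "result" then "closed"
        SomeServiceCall(client => Console.WriteLine(client.MakeMethodCall()));
    }
}
```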

Written by Mark Needham

April 4th, 2009 at 11:41 am

Posted in .NET


F#: Forcing type to unit for Assert.ShouldThrow in XUnit.NET

with 2 comments

I’ve started playing around with F# again and decided to try and create some unit tests around the examples I’m following from Real World Functional Programming. After reading Matt Podwysocki’s blog post about XUnit.NET I decided that would probably be the best framework for me to use.

The example I’m writing tests around is:

let convertDataRow(str:string) =
    let cells = List.of_seq(str.Split([|','|]))
    match cells with 
    | label::value::_ -> 
        let numericValue = Int32.Parse(value)
        (label, numericValue)
    | _ -> failwith "Incorrect data format!"

I started driving that out from scratch but ran into a problem trying to assert the error case when an invalid data format is passed in.

The method to use for the assertion is ‘Assert.Throws’, which takes in an Assert.ThrowsDelegate, which in turn takes an argument of type unit -> unit.

The code that I really want to write is this:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = convertDataRow "blah"
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

which doesn’t compile giving the error ‘This expression has type string*int but is used here with type unit->unit’.

I got around the input unit by wrapping convertDataRow in a function which takes no arguments, but the output was proving tricky. I realised that putting in a call to printfn would solve that problem, leaving me with this truly hacky solution:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = fun () -> (convertDataRow "blah";printfn "")
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

Truly horrible, and luckily there is a way to avoid that printfn, which I came across on the hubfs forum:

[<Fact>]
let should_throw_exception_given_invalid_data () =
    let methodCall = (fun () -> convertDataRow "blah" |> ignore)
    Assert.Throws<FailureException>(Assert.ThrowsDelegate(methodCall))

The ignore function provides a neat way of ignoring the passed value i.e. it throws away the result of computations.

Written by Mark Needham

March 28th, 2009 at 2:35 am

Posted in F#


NUnit: Tests with Context/Spec style assertions

with 2 comments

I recently started playing around with Scott Bellware’s Spec-Unit and Aaron Jensen’s MSpec, two frameworks which both provide a way of writing Context/Spec style tests/specifications.

What I particularly like about this approach to writing tests is that we can divide assertions into specific blocks and have them all evaluated even if an earlier one fails.

NUnit is our testing tool of choice at the moment and we wanted to try and find a way to test the mapping between the domain and service layers of the application.

Testing in the normal way was resulting in a test that was absolutely massive and a bit of a nightmare to debug when something changed.

Luckily Dave came up with the idea of using the TestFixtureSetUp attribute on a method which would setup the test data and then call the appropriate method on the object under test.

We could then have smaller tests which asserted various parts of the mapping.

[TestFixture]
public class FooAdaptorTest 
{
	private Foo foo;
	private FooMessage fooMessage;
 
	[TestFixtureSetUp]
	public void GivenWeTransformAFoo()
	{
		foo = new Foo { Bar = "bar", Baz = "baz" };
		fooMessage = new FooAdaptor().MapFrom(foo);	
	}
 
	[Test]
	public void ShouldMapBar() 
	{
		Assert.AreEqual(foo.Bar, fooMessage.Bar);	
	}
 
	[Test]
	public void ShouldMapBaz() 
	{
		Assert.AreEqual(foo.Baz, fooMessage.Baz);		
	}
}

Of course this is a very simple example, and in a real example we would test more than just one property per test.

The Setup method does get pretty big depending on how much mapping needs to be done but it seems a reasonable trade off for the increased readability we get in the smaller size of each of the tests.

I know this isn’t the normal way of using NUnit but I think it’s cool to try and think outside the normal approach to find something that works better for us.

Written by Mark Needham

March 1st, 2009 at 4:43 pm

Posted in .NET


C#: Wrapping DateTime

with 5 comments

I think it was Darren Hobbs who first introduced me to the idea of wrapping dates in our system to describe what that date actually means in our context, and after suffering the pain of passing some unwrapped dates around our code I think I can safely say that wrapping them is the way to go.

The culprit was a date of birth which was sometimes being created from user input and sometimes being retrieved from another system.

The initial (incorrect) assumption was that we would be passing around the date in the same string format and there was no point wrapping the value as we were never doing anything with the data.

It proved to be a bit of a nightmare trying to work out which state the date of birth was in at various parts of the application, and we ended up converting to the wrong format, undoing those conversions and losing the formatting in other places!

Step 1 here was clearly not to pass around the date as a string but instead to convert it to a DateTime as soon as possible.

This is much more expressive but we can take this one step further by wrapping that date time in a DateOfBirth class.

public class DateOfBirth 
{
	private DateTime? dateOfBirth;
 
	public DateOfBirth(DateTime? dateOfBirth) 
	{
		this.dateOfBirth = dateOfBirth;
	}
 
	public string ToDisplayFormat()
	{
		return dateOfBirth == null ? "" : dateOfBirth.Value.ToString("dd MMM yyyy");
	}
}

When we want to display this object on the page we just have to call the ToDisplayFormat() and if that date format needs to change then we have only one place to make that change. Creating this class removed at least 3 or 4 ‘DateTime.Parse(…)’ and ‘DateTime.ToString(…)’ calls throughout the code.

Now we could achieve the same functionality using an extension method on DateTime? but it’s not as expressive as this in my opinion. It is also really obvious when looking at the code to know what type we are dealing with and it is really obvious when reading this class which method we will use to get the format to display to the user.
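For comparison, the extension-method version mentioned above might look something like this – a sketch of the alternative, with the culture pinned so the output is deterministic:

```csharp
using System;
using System.Globalization;

static class DateTimeExtensions
{
    // The extension-method alternative to the DateOfBirth wrapper; the culture
    // is fixed here so the formatted string doesn't vary with machine settings
    public static string ToDisplayFormat(this DateTime? dateOfBirth)
    {
        return dateOfBirth == null
            ? ""
            : dateOfBirth.Value.ToString("dd MMM yyyy", CultureInfo.InvariantCulture);
    }
}

class Program
{
    static void Main()
    {
        DateTime? dateOfBirth = new DateTime(2009, 2, 25);
        Console.WriteLine(dateOfBirth.ToDisplayFormat()); // 25 Feb 2009

        // Extension methods may be called on a null 'this', so the empty case works too
        DateTime? missing = null;
        Console.WriteLine(missing.ToDisplayFormat() == ""); // True
    }
}
```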

I will certainly be looking to wrap any DateTimes I come across in future.

Written by Mark Needham

February 25th, 2009 at 11:12 pm

Posted in .NET


C#: Wrapping collections vs Extension methods

with 3 comments

Another interesting thing I’ve noticed in C# world is that there seems to be a trend towards using extension methods as much as possible. One area where this is particularly prevalent is when working with collections.

From reading Object Calisthenics and working with Nick I have got used to wrapping collections and defining methods on the wrapped class for interacting with the underlying collection.

For example, given that we have a collection of Foos that we need to use in our system we might wrap that in an object Foos.

public class Foos
{
    private readonly IEnumerable<Foo> foos;
 
    public Foos(IEnumerable<Foo> foos)
    {
        this.foos = foos;
    }
 
    public Foo FindBy(string id)
    {
        return foos.Where(foo => foo.Id == id).First();
    }
 
    // some other methods to apply on the collection
}

Extension methods provide another way of achieving the same thing while not needing to wrap it.

public static class FooExtensions
{
    public static Foo FindBy(this IEnumerable<Foo> foos, string id)
    {
        return foos.Where(foo => foo.Id == id).First();
    }
}

It seems like there isn’t much difference in wrapping the collection compared to just using an extension method to achieve the same outcome.

The benefit I see in wrapping is that we take away the ability to do anything to the collection that we don’t want to happen. You only have the public API of the wrapper to interact with.

The benefit of the extension method approach is that we don’t need to create the object Foos – we can just call a method on the collection.

I’m not sure which is the better approach – certainly languages which provide the ability to open classes seem to favour that approach over wrapping – but I still think it’s nice to have the wrapper as it means you don’t have to explicitly pass raw collections all around the code.

But maybe that’s just me.

Written by Mark Needham

February 23rd, 2009 at 8:24 pm

Posted in .NET


C#: Implicit Operator

with 3 comments

Since it was pointed out in the comments on an earlier post I wrote about using the builder pattern how useful the implicit operator could be in this context, we’ve been using it wherever it makes sense.

The main benefit that using this approach provides is that our test code becomes more expressive since we don’t need to explicitly call a method to complete the building of our object.

public class FooBuilder 
{
	private string bar = "defaultBar";
 
	public FooBuilder Bar(string value)
	{
		bar = value;
		return this;
	}
 
	public static implicit operator Foo(FooBuilder builder) 
	{
		return new Foo { Bar = builder.bar };
	}
}
public class Foo 
{
	public string Bar { get; set; }
}

We can then create a ‘Foo’ in our tests like this:

var foo = new FooBuilder().Bar("bar");

The type of ‘foo’ is actually ‘FooBuilder’ but it will be implicitly converted to Foo when needed.

Alternatively we can force it to Foo earlier by explicitly defining the type:

Foo foo = new FooBuilder().Bar("bar");

While playing around with the specification pattern to try and create a cleaner API for some querying of collections I tried to create a specification builder to chain together several specifications.

public interface IFooSpecification
{
    bool SatisfiedBy(Foo foo);
    IFooSpecification And(IFooSpecification fooSpecification);
}
public abstract class BaseFooSpecification : IFooSpecification
{
    public abstract bool SatisfiedBy(Foo foo);
    public  IFooSpecification And(IFooSpecification fooSpecification)
    {
        return new AndSpecification(this, fooSpecification);   
    }
}
public class FooBar : BaseFooSpecification
{
    private readonly string bar;
 
    public FooBar(string bar)
    {
        this.bar = bar;
    }
 
    public override bool SatisfiedBy(Foo foo)
    {
        return foo.Bar == bar;
    }
}
public class FooQuery 
{
    private FooBar fooBarSpecification;
    private FooBaz fooBazSpecification;
 
    public FooQuery Bar(string value)
    {
        fooBarSpecification = new FooBar(value);
        return this;
    }
 
    public FooQuery Baz(string value)
    {
        fooBazSpecification = new FooBaz(value);
        return this;
    }
 
 
    public static implicit operator IFooSpecification(FooQuery fooQuery)
    {
        // won't compile: Resharper displays a 'user-defined conversion to or from an interface' error here
 
    }
}

The intention was to be able to filter a collection of foos with code like the following:

foos.FindBy(new FooQuery().Bar("bar").Baz("baz"));

Unfortunately the C# language specification explicitly doesn’t allow this:

A class or struct is permitted to declare a conversion from a source type S to a target type T provided all of the following are true:

  • Neither S nor T is object or an interface-type.

User-defined conversions are not allowed to convert from or to interface-types. In particular, this restriction ensures that no user-defined transformations occur when converting to an interface-type, and that a conversion to an interface-type succeeds only if the object being converted actually implements the specified interface-type.

I tried casting to the BaseFooSpecification abstract class instead and although that does compile it seemed to be leading me down a path where I would need to change the ‘FindBy’ signature to take in a BaseFooSpecification which I wasn’t keen on.

It didn’t prove possible to implicitly convert to a BaseFooSpecification when the signature for the method expected an IFooSpecification even though BaseFooSpecification implements IFooSpecification.

I don’t think there is a way to get around this in C# 3.0 so I just ended up creating an explicit method to convert between the two – not quite as nice to read but the best I could come up with.
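The explicit conversion method I ended up with looked roughly like this – a simplified, self-contained sketch with hypothetical names rather than the actual project code:

```csharp
using System;

interface ISpec { bool SatisfiedBy(int value); }

class GreaterThan : ISpec
{
    private readonly int threshold;
    public GreaterThan(int threshold) { this.threshold = threshold; }
    public bool SatisfiedBy(int value) { return value > threshold; }
}

class Query
{
    private int threshold;

    public Query Above(int value)
    {
        threshold = value;
        return this;
    }

    // C# forbids user-defined conversions to interface types,
    // so an explicit conversion method is the fallback
    public ISpec ToSpecification() { return new GreaterThan(threshold); }
}

class Program
{
    static void Main()
    {
        ISpec spec = new Query().Above(5).ToSpecification();
        Console.WriteLine(spec.SatisfiedBy(10)); // True
        Console.WriteLine(spec.SatisfiedBy(3));  // False
    }
}
```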

Written by Mark Needham

February 22nd, 2009 at 10:20 pm

Posted in .NET
