Mark Needham

Thoughts on Software Development

Archive for October, 2009

Coding: Copy/Paste then refactor

with 5 comments

We’re currently reading Michael Feathers’ ‘Working Effectively With Legacy Code’ in our technical book club and one interesting technique he describes in the Test Driven Development section is copying and pasting some existing code, changing the appropriate part to make the test pass, and then refactoring to remove the duplication we just created.

I can’t remember coming across this approach previously but I found myself using it to solve a Scala problem last week.

I wanted to find the smallest number in a list, having previously written code to find the largest number in a list.

This was the code I already had:

  def max(values : List[Int]) : Int = {
    def inner(remainingValues : List[Int], currentMax : Int) : Int = remainingValues match {
      case Nil => currentMax
      case head::tail if head > currentMax => inner(tail, head)
      case _::tail => inner(tail, currentMax)
    }
    inner(values, Math.MIN_INT)
  }

I figured finding the smallest value would be quite similar to that but I couldn’t quite see where the duplication between the two functions was going to be.

I somewhat reluctantly found myself copying and pasting my way to the following code:

  def min(values : List[Int]) : Int = {
    def inner(remainingValues : List[Int], currentMin : Int) : Int = remainingValues match {
      case Nil => currentMin
      case head::tail if head < currentMin => inner(tail, head)
      case _::tail => inner(tail, currentMin)
    }
    inner(values, Math.MAX_INT)
  }

Having done that it became clearer where the duplication between the two functions was: in the guard clause where we compare each value against the current max/min, and in the choice of the initial value that we pass into ‘inner’.

My first attempt at refactoring was to try and pull the comparison in that guard clause out into a ‘comparator’ function:

def comparator(x : Int, y : Int, f : Function2[Int, Int, Boolean]) : Boolean = f(x,y)

Which resulted in the ‘max’ function looking like this:

  def max(values : List[Int]) : Int = {
    def inner(remainingValues : List[Int], currentMax : Int) : Int = remainingValues match {
      case Nil => currentMax
      case head::tail if comparator(head, currentMax, (x,y) => x > y) => inner(tail, head)
      case _::tail => inner(tail, currentMax)
    }
    inner(values, Math.MIN_INT)
  }

At this point I realised that I was doing the refactoring the wrong way around and that I should be trying to make ‘max’ and ‘min’ call another function while passing in the differences as parameters to that function.

I ended up with this common function:

   def findValue(values : List[Int], comparisonFunction : Function2[Int, Int, Boolean], startingValue : Int) : Int = {
    def inner(remainingValues : List[Int], current : Int) : Int = remainingValues match {
      case Nil => current
      case head::tail if comparisonFunction(head, current) => inner(tail, head)
      case _::tail => inner(tail, current)
    }
    inner(values, startingValue)
  }
  def max(values : List[Int]) : Int = findValue(values, (x,y) => x>y, Math.MIN_INT)
  def min(values : List[Int]) : Int = findValue(values, (x,y) => x<y, Math.MAX_INT)

I’m not sure whether the name ‘findValue’ is a very good one for the common function but I can’t think of a better one at the moment.

The neat thing about this approach is that it made it much easier to see where the duplication was, and I was able to have both functions working before attempting the refactoring.

I’m not sure if there are built-in functions to do these calculations. I looked quickly and couldn’t find any, and thought it would be an interesting exercise to write the code anyway.

It worked out much better than I thought it would.

Written by Mark Needham

October 31st, 2009 at 5:54 pm

Posted in Coding


Coding: Invariant checking on dependency injected components

with 6 comments

I’ve written a couple of times previously about invariant checking in constructors and I had an interesting discussion with some colleagues recently around doing this type of defensive programming when the object in question has its dependencies injected by a container.

Quite often we would see code similar to this in a controller:

public class SomeController 
{
	public SomeController(Dependency1 valueOne, Dependency2 valueTwo)
	{
		AssertThat.isNotNull(valueOne);
		AssertThat.isNotNull(valueTwo);
		// and so on
	}
}

Where ‘SomeController’ would have ‘Dependency1’ and ‘Dependency2’ set up in a Spring configuration file in this example.

I can’t really see much benefit in this type of precondition checking although Raph pointed out that if we got the configuration for this bean wrong then having these assertions would give us quicker feedback.

While that is true, it seems to me that we would gain feedback only a few seconds sooner in return for the clutter that we end up creating in our code. We have to read those assertions every time we come to this class.

I would prefer to have some specific tests for the dependency injection container if we’re interested in getting quick feedback in that area.
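
As a rough sketch, such a test might just try to resolve each top-level component out of the real container. Here I'm assuming an NUnit-style test and a container exposed through something like the ‘UnityFactory’ that appears later in these posts; the names are hypothetical:

[Test]
public void ShouldResolveSomeControllerFromTheContainer()
{
	// If 'Dependency1' or 'Dependency2' is wired up incorrectly this test
	// fails straight away, giving us the same fast feedback without
	// cluttering the production constructor with assertions.
	var controller = UnityFactory.Container.Resolve<SomeController>();

	Assert.IsNotNull(controller);
}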

That seems to be a cleaner solution than having these types of checks in production code, or am I missing something?

Written by Mark Needham

October 31st, 2009 at 3:00 am

Posted in Coding


Coding: Consistency when invariant checking

with 6 comments

I wrote a while ago about reading the ASP.NET MVC source code and noticing that it makes use of code inside its constructors to ensure that null values can’t be passed in. While I’m still not convinced this is the way to go, I think that if we do take this approach then we need to apply it consistently.

Something which happens quite often is that you’ll come across code which makes use of defensive programming in one of its constructors like so:

public class SomeObject 
{
	public SomeObject(Dependency1 valueOne, Dependency2 valueTwo)
	{
		AssertThat.isNotNull(valueOne);
		AssertThat.isNotNull(valueTwo);
		// and so on
	}
}

This is fine in itself but you’ll often find another constructor defined close by which looks like this:

public class SomeObject 
{
	public SomeObject()
	{
		this.valueOne = null;
		this.valueTwo = null;
	}
}

Depending on which constructor you use the outcome is quite different, and you probably wouldn’t expect the default constructor to allow nulls seeing as the other one prevents you from passing them in.

This inconsistency can happen for a variety of reasons.

Test-only constructor

It might be a test-only constructor which is not intended to be called in production code. It’s much easier to get an object under test if we don’t have to supply any parameters.

In that case though we’re testing the object in a quite different state than the one it will be in when we really use it, so the value of this type of testing is questionable.

It’s probably best to look at why we need that test-only constructor and see if we can change the code so that we can avoid the problem.

An implicit domain concept

Another reason is that the original precondition of the object might have changed so that it is no longer always the case that we need to provide ‘Dependency1’ and ‘Dependency2’.

In this case I think it would make more sense to make that more explicit, perhaps by making use of the null object pattern or by finding another abstraction which doesn’t require nulls to be passed around.
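
As a minimal sketch of the null object idea (all of the names here are hypothetical):

public interface IDependency
{
	void DoSomething();
}

public class NullDependency : IDependency
{
	// Deliberately does nothing so that callers can treat a 'missing'
	// dependency like any other - no null checks required.
	public void DoSomething() { }
}

A caller which genuinely has nothing to provide then passes in a ‘NullDependency’ and both constructors can safely insist on non-null arguments.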

Either way we should ensure that the constructors are consistent: either both allow nulls or both prevent them from being set.

Didn’t realise preconditions were being checked

Although this one might seem a bit far-fetched, it’s pretty easy for it to happen when there are different people working on the code base.

Someone might glance at the class, see that it doesn’t quite allow them to construct the object the way they want, and create another constructor without realising how the original one was working.

In this case talking more about the way that we’re solving this type of problem across the code base should be enough to ensure that we get some kind of consistency.

Written by Mark Needham

October 29th, 2009 at 11:06 pm

Posted in Coding


Coding: Connascence – Some examples

with one comment

I’ve been reading Meilir Page-Jones’ ‘Fundamentals of Object-Oriented Design in UML’ recently and one of the chapters I found most interesting is the one where he talks about ‘connascence’.

Connascence describes the relationship between two bits of code: they are said to be connascent if a change to one would require a change to the other, or if some change to a third piece of code would require both to change, in order for the program to remain correct.

I think this principle is quite similar to the idea of coupling, which we seem to talk about more frequently these days, but I found it quite compelling that as I read through the different types of connascence that Page-Jones describes I was easily able to identify mistakes I’ve made and seen made in code.

There are many different types of connascence and Jim Weirich goes through many of them in his presentation from MountainWest RubyConf 2009 titled ‘The building blocks of modularity’.

I’ll just cover a couple of the ones that seem, in my experience, to cause the most pain.

Connascence of execution

This describes the situation where two different lines of code have to be executed in a certain order for the program to work correctly.

A typical example of this type of connascence occurs when we make use of setter methods to construct objects:

public void SomeMethod()
{
	var someObject = new SomeObject();
	someObject.Property1 = new Property1();
 
	// somewhere else, far far away
 
	SomeOtherMethod(someObject);
}
 
private void SomeOtherMethod(SomeObject someObject)
{
	someObject.Property1.AnotherProperty = new AnotherProperty();
}

In this example we need the line which assigns ‘Property1’ to be executed before the line in ‘SomeOtherMethod’ which dereferences it, otherwise we’ll get a null reference exception. Those two lines therefore have connascence of execution.

Quite often the second of those lines would be hidden away in a chain of method calls and it wouldn’t be at all obvious that it relies on a line further up being executed first.

We eventually end up making what we think is an innocuous reordering of the method calls and suddenly our code doesn’t work any more.

In this case, to reduce this type of connascence we might look at using a constructor to create our objects, use one of the builder/factory patterns, or at least try to keep the related code in the same method so that the potential for confusion is reduced.
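
For example, a constructor-based version of the object in the sketch above might look something like this (still using the same hypothetical names):

public class SomeObject
{
	public SomeObject(Property1 property1)
	{
		Property1 = property1;
	}

	public Property1 Property1 { get; private set; }
}

Now ‘new SomeObject(new Property1())’ can’t be separated from the assignment, so ‘SomeOtherMethod’ can rely on ‘Property1’ always having been set.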

Connascence of algorithm

This typically describes the situation where we are making use of a data structure in a specific way such that all the pieces of code which interact with that data structure need to know exactly how that data structure works in order to make use of it.

This problem seems to happen particularly when we’re overusing lists where perhaps another level of abstraction is required.

One example of this might be where a piece of code assumes that another piece of code inserts a certain value into a list.

public class AClassWhichPutsASpecificItemInAList
{
	public List<SomeObject> CreateTheEvil()
	{
		var myEvilList = new List<SomeObject>();
 
		myEvilList.Add(new SomeObject("SomeSpecialName"));
		// and so on
 
		return myEvilList;
	}
}
public class SomeOtherClassFarFarAway
{
	public void SomeMethod(List<SomeObject> someObjects)
	{
		var speciallyNamedObject = someObjects.Where(s => s.Name == "SomeSpecialName").First();
	}
}

Despite the fact that these two classes never refer to each other they have connascence of algorithm: if ‘AClassWhichPutsASpecificItemInAList’ decides not to put that value into the list then ‘SomeOtherClassFarFarAway’ may stop working, so we would need to change it as well.

This type of connascence is much more implicit than some of the other types and it may not be immediately obvious that two pieces of code are related.

We could get around this problem by encapsulating this type of logic in its own type so that at least we only have to deal with it once.
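
A sketch of what that encapsulation might look like, reusing the hypothetical names from the example:

public class SomeObjects
{
	private readonly List<SomeObject> someObjects;

	public SomeObjects(List<SomeObject> someObjects)
	{
		this.someObjects = someObjects;
	}

	// The "SomeSpecialName" knowledge now lives in exactly one place.
	public SomeObject SpeciallyNamedObject()
	{
		return someObjects.First(s => s.Name == "SomeSpecialName");
	}
}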

The goal is to try and make an implicit piece of code more explicit.

Connascence of convention

This describes the situation where there is an implicit convention about the meaning of a certain value, such that every bit of code that touches it needs to know the convention. This is quite similar to connascence of algorithm.

An example I came across recently concerned how we passed around the value of an option the user had selected on a form.

The user could select ‘Yes’ or ‘No’, or they could choose not to answer the question at all. If they selected ‘Yes’ then they would need to answer another question immediately afterwards.

Later on we needed to pass this data to the service layer: a value of ‘No’ if they selected that, the answer to the follow-up question if they answered ‘Yes’, and ‘None’ if they didn’t answer at all.

We ended up having code similar to this in the service call and then again in the binder:

public class TheServiceThatUsesThatValue
{
	public void CallTheService(string theAnswer)
	{
		var theRequest = new ServiceRequest();
 
		if(theAnswer == "None")
		{
			theRequest.TheAnswer = "None";
		}
		else if(theAnswer == "No")
		{
			theRequest.TheAnswer = "No";
		}
		else
		{
			theRequest.TheAnswer = LookUpTheCodeFor(theAnswer);
		}
 
	}	
}

We eventually gave up on the idea of passing ‘None’ around because that was the bit causing the most confusion. Instead we stored the answer in a nullable data type and then did the conversion to ‘None’ when necessary in the Service class.
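
A simplified sketch of where that left the service (not the actual project code):

public class TheServiceThatUsesThatValue
{
	// 'theAnswer' is now null when the user didn't answer the question,
	// so the 'None' convention is applied in this one place only.
	public void CallTheService(string theAnswer)
	{
		var theRequest = new ServiceRequest();

		if (theAnswer == null)
		{
			theRequest.TheAnswer = "None";
		}
		else if (theAnswer == "No")
		{
			theRequest.TheAnswer = "No";
		}
		else
		{
			theRequest.TheAnswer = LookUpTheCodeFor(theAnswer);
		}
	}
}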

In summary

These are just a couple of the examples that Page-Jones outlines but the general idea is that we want to try and minimise the connascence in our system by creating well encapsulated objects.

Within those objects it makes sense for connascence to be high; in fact, if it isn’t then that might suggest we have another object waiting to get out.

Written by Mark Needham

October 28th, 2009 at 10:43 pm

Posted in Coding


Book Club: Working Effectively With Legacy Code – Chapters 6 & 7 (Michael Feathers)

with 2 comments

In our latest technical book club we covered chapters 6 & 7 – ‘I Don’t Have Much Time And I Have To Change It’ and ‘It Takes Forever To Make A Change’ – of Michael Feathers’ ‘Working Effectively With Legacy Code’.

The first chapter discusses various techniques that we can use to add new code to a legacy code base. These include:

  • Sprout method – create a new method for our new functionality and make a call to it from existing code.
  • Sprout class – create a new class for our new functionality and call to that class from existing code.
  • Wrap method – rename an existing method; create a new method with the old one’s name; add the new functionality in the new method and then delegate to the newly named existing method (there’s a rough sketch of this after the list).
  • Wrap class – create a new class which takes in the old one in its constructor and then delegates to the original for most methods but also implements new functionality. Typically the decorator pattern.
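
As a rough sketch of the wrap method technique (hypothetical names, loosely based on Feathers’ example):

public class Employee
{
	// The new method keeps the name that existing callers already use...
	public void Pay()
	{
		LogPayment();
		DispatchPay();
	}

	// ...the renamed original method is left untouched...
	private void DispatchPay()
	{
		// existing, untested payment logic
	}

	// ...and the new functionality can be developed test-first.
	private void LogPayment()
	{
	}
}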

The second chapter discusses some common problems we may experience while trying to make changes.

These are some of my thoughts from our discussion of these chapters:

  • The thing that stood out for me in our discussion was the realisation that applying any of these techniques is probably going to make the code worse in the short term but hopefully lead us to a situation where we can make it better. If we use the ‘sprout class’ technique, for example, then we will end up with a new class which just does our new bit of functionality. Our code is therefore in an inconsistent state.

    I would actually prefer to leave the code in an inconsistent state if we are driving towards a better solution, although I have worked with colleagues who prefer to keep things consistent instead. I can certainly see why you might want to do that on a short-running project where there may not be time to make all the changes you’d like to.

  • Tom also pointed out that we need to remember that what we are doing at this stage is not design – that can come later, once the code is testable. Using these techniques is an alternative to rewriting the code, an approach which often doesn’t work out as well as we’d hope.
  • I quite liked the following observation:

    Typically, changes cluster in systems. If you are changing it today, chances are, you’ll have a change close by pretty soon.

    On the projects I’ve worked on there are often lots of areas in the code base that require refactoring, but we tend to focus on the area we’re currently working in as that tends to give us the biggest payback for the time spent.

    Having said that, I quite like Fabio’s idea of mapping various items of technical debt in terms of the pain they’re causing against the effort it would cost to fix them. I wonder if the code we’re working on now would be more likely to fall into the high ‘pain’ areas of that type of matrix.

  • Cam pointed out that with the sprout method and sprout class techniques it’s quite cool that Feathers suggests driving their API by making a call to them from the existing method. That way we can see what values the new method will need to take in based on how it will actually be used.

    While discussing this Alex pointed out something I hadn’t quite grasped – the techniques described are useful for minimising the amount of change to the original method as well as making the new pieces of code easier to test.

    It’s really easy to make mistakes when coding and when there is no safety net to save us we should look to avoid tinkering with that code too much until we’ve created one!

Written by Mark Needham

October 26th, 2009 at 11:10 pm

Posted in Book Club


Scala: Converting an input stream to a string

with 8 comments

I was playing around with Scala over the weekend and one of the things I wanted to do was to get the data from an HTTP response as a string so that I could parse the XML returned.

The data source is fairly small so loading the stream into memory wasn’t a problem.

Carlos pointed me to a bit of Java code that did this and I converted it as literally as possible into Scala.

 def convertStreamToString(is: InputStream): String = {
    val reader = new BufferedReader(new InputStreamReader(is));
    val sb = new StringBuilder();
 
    var line : String = null;
    try {
      while ((line = reader.readLine()) != null) {
        sb.append(line + "\n");
      }
    } catch {
      case e: IOException => e.printStackTrace();
    } finally {
      try {
        is.close();
      } catch {
        case e: IOException => e.printStackTrace();
      }
    }
 
    sb.toString();
  }

The problem with this bit of code becomes clear when we try to run it through the REPL:

7:warning: comparing values of types Unit and Null using `!=' will always yield true
             while ((line = reader.readLine()) != null) {

In Java the expression ‘(line = reader.readLine())’ returns the value that was assigned, i.e. the result of ‘reader.readLine()’, which we can then compare to null.

In Scala the same assignment expression evaluates to ‘Unit’, which is never null, so if we ever run this function we end up in an infinite loop and eventually run out of memory.

I rewrote this function making use of recursion to get around the problem:

  def convertStreamToString(is : InputStream) : String = {
    def inner(reader : BufferedReader, sb : StringBuilder) : String = {
      val line = reader.readLine()
      if(line != null) {
        try {
          inner(reader, sb.append(line + "\n"))
        } catch {
          case e : IOException => e.printStackTrace()
        } finally {
          try {
            is.close()
          } catch {
            case e : IOException => e.printStackTrace()
          }
        }
 
      }
      sb.toString()
    }
 
    inner(new BufferedReader(new InputStreamReader(is)), new StringBuilder())
  }

The code is still peppered with error handling statements but it now allows us to convert the stream into a string.

Is there a better/cleaner/more Scala-esque way to do this?

Written by Mark Needham

October 26th, 2009 at 6:32 am

Posted in Scala


Testing End Points: Integration tests vs Contract tests

with 3 comments

We recently changed the way that we test against our main integration point on the project I’ve been working on: in our tests we now retrieve the service object from our dependency injection container instead of ‘newing’ one up.

Our tests therefore went from looking like this:

[Test]
public void ShouldTestSomeService()
{
	var someService = new SomeService();
	// and so on
}

To something more like this:

[Test]
public void ShouldTestSomeService()
{
	var someService = UnityFactory.Container.Resolve<ISomeService>();
	// and so on
}

This actually happened as a side effect of another change we made to inject users into our system via our dependency injection container.

We have some ‘authenticated services’ which require the request to contain a SAML token for a valid user, so it seemed to make sense to use the container in the tests instead of having to duplicate this piece of behaviour for every test.

We needed to add our fake authorised user into the container for our tests but apart from this the container being used is the same as the one being used in the production code.
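
The test set-up therefore only has to override the user registration, something like this sketch (the names here are hypothetical):

[TestFixtureSetUp]
public void RegisterFakeAuthorisedUser()
{
	// Everything else in the container is exactly what production uses;
	// we only swap in a user whose SAML token will pass authentication.
	UnityFactory.Container.RegisterInstance<IUser>(new FakeAuthorisedUser());
}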

Our tests are therefore now calling the services in a way which is much closer to the way they are called in production than was previously the case.

I think this is good as it was previously possible to have the tests working but then have a problem calling the services in production because something in the container wasn’t configured properly.

The downside is that these tests now have more room for failure than they did previously, and they are no longer just testing the end point, which was their original purpose.

In a way what we have done is convert these tests from being contract tests to integration tests.

I like the new way but I’m not completely convinced that it’s a better approach.

Written by Mark Needham

October 25th, 2009 at 12:04 am

Posted in Testing


Value objects: Immutability and Equality

with 13 comments

A couple of weeks ago I was working on some code where I wanted to create an object composed of the attributes of several other objects.

The object that I wanted to construct was a read only object so it seemed to make sense to make it a value object. The object would be immutable and once created none of the attributes of the object would change.

This was my first attempt at writing the code for this object:

public class MyValueObject
{
    private readonly string otherValue;
    private readonly SomeMutableEntity someMutableEntity;
 
    public MyValueObject(SomeMutableEntity someMutableEntity, string otherValue)
    {
        this.someMutableEntity = someMutableEntity;
        this.otherValue = otherValue;
    }
 
    public string SomeValue { get { return someMutableEntity.SomeValue; } }
 
    public int SomeOtherValue { get { return someMutableEntity.SomeOtherValue; } }
 
    public string OtherValue { get { return otherValue; } }
 
    public bool Equals(MyValueObject obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        return Equals(obj.OtherValue, OtherValue) && Equals(obj.SomeOtherValue, SomeOtherValue) && Equals(obj.SomeValue, SomeValue);
    }
 
    public override bool Equals(object obj)
    {
		// other equals stuff here
    }
}

It wasn’t immediately obvious to me what the problem was with this solution but it felt really weird to be making use of properties in the equals method.

When I discussed this strangeness with Dave he pointed out that ‘MyValueObject’ is not immutable: although the reference to ‘SomeMutableEntity’ inside the object cannot be changed, the object it refers to has lots of setters and can therefore be changed from outside this class.

There are two ways to get around this problem:

  1. We still inject ‘SomeMutableEntity’ but we extract the values from it in the constructor and don’t keep a reference to it.
  2. The client that creates ‘MyValueObject’ constructs the object with the appropriate values.

The first solution would work but it feels really weird to pass in the whole object when we only need a small number of its attributes – it’s a case of stamp coupling.

It also seems quite misleading as an API because it suggests that ‘MyValueObject’ is made up of ‘SomeMutableEntity’ which isn’t the case.

My preferred solution is therefore to allow the client to construct ‘MyValueObject’ with all the parameters individually.

public class MyValueObject
{
    private readonly string otherValue;
    private readonly int someOtherValue;
    private readonly string someValue;
 
    public MyValueObject(string someValue, string otherValue, int someOtherValue)
    {
        this.someValue = someValue;
        this.otherValue = otherValue;
        this.someOtherValue = someOtherValue;
    }
 
    public string SomeValue { get { return someValue; } }
 
    public int SomeOtherValue { get { return someOtherValue; } }
 
    public string OtherValue { get { return otherValue; } }
}

The constructor takes more parameters now but the API feels more intuitive and ‘MyValueObject’ lives up to its role as a value object.

The equals method can now just compare equality on the fields of the object.

public class MyValueObject
{
	...
 
	public bool Equals(MyValueObject obj)
	{
	    if (ReferenceEquals(null, obj)) return false;
	    if (ReferenceEquals(this, obj)) return true;
	    return Equals(obj.otherValue, otherValue) && Equals(obj.someValue, someValue) && obj.someOtherValue == someOtherValue;
	}
}
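
One detail not shown in these snippets: having overridden Equals we should keep GetHashCode consistent with it so that the object behaves correctly in hash-based collections. A typical implementation over the same three fields might look like this:

public override int GetHashCode()
{
	unchecked
	{
		var result = otherValue != null ? otherValue.GetHashCode() : 0;
		result = (result * 397) ^ (someValue != null ? someValue.GetHashCode() : 0);
		result = (result * 397) ^ someOtherValue;
		return result;
	}
}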

A client might create this object like so:

var myValueObject = new MyValueObject(someMutableEntity.SomeValue, otherValue, someMutableEntity.SomeOtherValue);

Written by Mark Needham

October 23rd, 2009 at 11:39 pm

Coding: The primitive obsession

with 5 comments

I recently came across an interesting post by Naresh Jain where he details a discussion at SDTConf 2009 about the code smells that hurt people the most.

Naresh describes the ‘primitive obsession’ anti-pattern as being the crux of poor design:

I would argue that I’ve seen code which does not have much duplication but its very difficult to understand what’s going on. Hence I claim, “only if the code had better abstractions it would be a lot easier to understand and evolve the code”. Also when you try to eliminate duplicate code, at one level, there is no literal code duplication, but there is conceptual duplication and creating a high order abstraction is an effective way to solve the problem. Hence I conclude that looking back, Primitive Obsession is at the crux of poor design. a.k.a Biggest Stinker.

When I first read this post I thought Naresh was only talking about primitives such as strings, integers, decimals and so on, but after discussing the pattern at today’s book club Tom pointed out that the problem might also be defined as ‘describing high level domain concepts with low level types’.

I’ve noticed that this pattern often shows up when the abstractions in our code are very generic and don’t express the domain very clearly.

I think the reason people code in this way comes from a desire to minimise the amount of code being written – as I understand it the theory is that if we write less code then it will be more maintainable.

I really like the way Dave Cameron describes the disadvantages of doing this in a comment on Naresh’s post:

Naresh, I agree strongly with your assertion that we need *better* abstractions. Eliminating duplication by introduction a mediocre abstraction leads to pain. Removing duplication and parceling it up in an awkward way can make it more difficult to see a more elegant abstraction later. The short-term benefit of eliminating duplication leads to a long-term pain of awkward design.

The other motivator behind this style of coding is to try and create classes which are really reusable.

Udi Dahan explains why this often doesn’t work, but the problem I see with this attempt at writing reusable code is that the class will get reused in places where the concept doesn’t quite make sense.

As a result of this we end up writing a lot of very hacky code to try and make it fit.

The most frequent offender seems to be the creation of collections of things which we then query to find specific items elsewhere in the code. More often than not we’ll also perform some custom logic on that particular item.

At its most extreme we might extract each of the items in a collection and make use of them separately. This somewhat defeats the purpose of the collection.

When this happens it often seems to me that what we actually have is a series of attributes of an object rather than a collection.

While discussing why this type of coding happens it was pointed out that it’s much easier to make use of the primitives than to spend the time looking for a helpful abstraction – I think this is an example of bad laziness.

We might be spending less effort writing the code now but it will be more difficult for anyone else who reads it in future, which is not a good trade-off.

Another similar pattern I’ve seen quite frequently is where we extend one of the classes in the collection API with our ‘domain concept’.

While this seems better than just passing a raw collection around, the problem is that clients of our class have access to all the methods of the collection class we extend, which provides more flexibility than we would like and can leave our object in an unexpected state if those methods get used.
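
A sketch of the difference, with hypothetical domain names; inheriting exposes the whole collection API while wrapping exposes only the operations the domain actually needs:

// Extending: every client can now call Add, Remove, Clear and so on.
public class RecentOrders : List<Order> { }

// Wrapping: the list is hidden behind the behaviour we actually want.
public class OrderHistory
{
	private readonly List<Order> orders;

	public OrderHistory(List<Order> orders)
	{
		this.orders = orders;
	}

	public Order MostRecentOrder()
	{
		return orders.Last();
	}
}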

In summary

I think the way to solve this problem comes down to spending more time thinking about our domain and the best way to model it and discussing this regularly with other people on the team.

That way I believe our knowledge of how the system actually works will be more explicit in the code which will make it easier for people to understand the code base.

Written by Mark Needham

October 23rd, 2009 at 12:08 am

Posted in Coding


The effect of adding new people to project teams

with 10 comments

I’ve read quite frequently about the challenges we experience when adding new people to teams, most notably in Fred Brooks’ ‘The Mythical Man-Month’, but having seen quite a few new people join the project I’ve been working on over the last few months I think there are actually some significant benefits they can bring.

I think new people can be particularly useful on a challenging project, where they may be able to make a much more immediate impact.

From my observations there seem to be two main ways that new people can positively impact a team:

Driving for simplicity

I think that new people joining a team may well be best placed to judge the quality of a code base. Since they have little or no context of the decisions that were made, they only have the code to tell them the story of the project so far.

As a result of this they are best placed to tell whether or not the code has been written in an obvious way, because they will probably have trouble understanding what’s going on if that isn’t the case.

Although following certain design approaches can ease the reading of code for everyone on a team, I think that after we’ve spent a while dealing with a level of complexity it becomes normal to us and we don’t feel the pain as much.

Having someone new come in and point this out is invaluable as it helps us to drive towards a more maintainable and easier to work with solution.

Enthusiasm

Something else I noticed is that the level of enthusiasm new people bring is often much higher than that of people who have been working on a project for an extended period of time and could be feeling a bit jaded.

They haven’t fought the battles or experienced the pain that current team members may have done, so they arrive with few preconceptions and a desire to improve the way that things are being done.

When combined with their desire to simplify the code, the impact these people have can be quite significant.

An added benefit is that enthusiasm is contagious so some of it is bound to rub off on other members of the team thereby leading to improved morale.

In Summary

I actually found it quite surprising to notice the impact that new people can have because I’d previously been more inclined to believe that people who had worked on a project for longer would provide much more value.

From what I’ve seen new members to a team can provide a lot of value very quickly so it’s certainly an approach worth keeping in mind.

Written by Mark Needham

October 21st, 2009 at 6:06 pm