Mark Needham

Thoughts on Software Development

Archive for January, 2009

YAGNI: Some thoughts

with 2 comments

If you hang around a team practicing XP for long enough, one of the phrases you are bound to hear is YAGNI (You Ain’t Gonna Need It).

Although, as Ian points out, it can sometimes be used as an excuse to ignore things we don’t want to think about, in general the aim is to stop people working on code that isn’t currently required.

So, assuming our team isn’t being lazy and trying to avoid decisions they don’t want to think about, why do we hear the YAGNI call and, perhaps more importantly, what happens when we don’t heed it?

Jack of all trades, master of none

One of the problems with designing APIs based on potential future use (by multiple clients) is that the API ends up being what no client actually wants – it does its job, but not in a way that makes life easy for anyone.

From my experience the easiest way to design usable APIs is to drive the design with examples: we only write code when we have an example or test to drive it out.

At a higher level this means that we should drive the design of our code by working out how the client is going to use it rather than trying to anticipate how it might be used. This is sometimes known as client driven development as opposed to assumption driven development.
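
As a rough illustration of example-driven design (a minimal Java sketch with hypothetical names, not code from any real project), we write the call we wish the client could make first, then implement only what that example demands:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CustomerLookupTest {
    // The example/test comes first: this is the API the client actually wants.
    @Test
    public void findsACustomerByAccountNumber() {
        CustomerLookup lookup = new CustomerLookup();
        assertEquals("12345", lookup.byAccountNumber("12345").accountNumber());
    }
}

// Just enough implementation to satisfy the example above -
// no speculative overloads for clients that don't exist yet.
class Customer {
    private final String accountNumber;
    Customer(String accountNumber) { this.accountNumber = accountNumber; }
    String accountNumber() { return accountNumber; }
}

class CustomerLookup {
    Customer byAccountNumber(String accountNumber) {
        return new Customer(accountNumber);
    }
}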

Joe Walnes’ XStream library is often cited as an easy to use library because he only added features that he needed (as a client of the library) rather than trying to imagine what features people might want.

Change becomes difficult

If we have driven our code based on assumptions of how we think it might be used in the future then it becomes more difficult to change it because we need to ensure that the changes we make won’t cause problems for these potential future clients.

Code driven this way rather than by examples tends to be much more complicated because we don’t know which cases we need to handle and which we don’t – we end up trying to handle them all.

Making changes to code after it has been written is quite common but we have now made this more difficult for ourselves. Changes end up taking longer and we can’t be sure that the change will work for anyone beyond our current clients anyway.

Takes longer

When we only develop an API for the clients that currently exist we write much less code than when we try to code for the generic case, so we can accomplish our task much more quickly. Conversely, when we don’t, we spend a lot of time writing a solution that veers more and more towards being a framework.

This doesn’t mean that we should completely tie our API to that client – instead we should ensure that our solution is flexible and easy to change in the future.

Written by Mark Needham

January 17th, 2009 at 9:01 pm

Posted in Coding


The danger of commenting out code

with 19 comments

An idea which most developers consider common sense, but which is not always adhered to, is that we shouldn’t comment out code.

Code is nearly always under source control anyway, so commenting out code which is not being used doesn’t serve any positive purpose and can have quite a few negative effects.

Clutter

Ideally we should be able to read through the code without too much confusion – each method’s name being descriptive enough that we can work out what is going on.

Commented-out code stops the flow of your eyes as you go down the file – it’s a distraction, and you want to know what it is.

Even more time is then wasted trying to work out why the code was commented out and whether it might still be useful or not.

Misleading

Unless we trawl through the source control history it is difficult to know when and why a piece of code was commented out.

Sometimes when we discover a bug in our code we eventually end up debugging an area of the code which has some code commented out.

If that bug didn’t previously exist then the natural thought is that perhaps commenting out that code is what caused the bug to appear. Clearly that is not always the case!

I have debugged code before which had parts commented out where uncommenting the code actually made the situation even worse.

Leaving commented-out code in seems fairly harmless but it can waste quite a bit of time if someone misunderstands why it was commented out in the first place.

Broken Window Theory

The Broken Window Theory proposes that if we do one bad thing in our code (i.e. break a window) then it becomes more likely that the next person who encounters that code will do something bad as well, eventually leading to the degeneration of the code.

The same applies to commenting out code – if people see that there is code commented out then it becomes more acceptable for them to also comment out code, and before you know it there are large chunks of code commented out and no one really knows why.

So is commenting out code ever ok…

I think commenting out code in the sense I describe here only really makes sense in the short term, i.e. commenting out some code to see whether it is actually needed, running the tests, and then deleting it permanently if they pass.

If our aim is to produce expressive and easy to understand code then removing code when it is no longer needed can go a long way to helping us achieve this.

Written by Mark Needham

January 17th, 2009 at 4:02 pm

Posted in Coding


Coding Dojo #6: Web Driver

with one comment

We ran a session this evening that was part coding dojo, part playing around with WebDriver – coding some tests in Java to drive Planet TW from the code.

The Format

We had the same setup as for our normal coding dojos, but only one person was driving at a time while the others watched and offered tips on different approaches. I think only a couple of us drove during the session.

What We Learnt

  • This was an interesting way to start learning about a tool that I hadn’t previously used. Two of my colleagues had used it before and were able to share best practices, such as the Page Object pattern. I finally got the value of this pattern today after seeing how we can use the PageFactory to cut out a lot of the boilerplate code usually needed to get the elements on each page into a class (see the first sketch after this list).
  • WebDriver seems simpler to set up than Selenium from my experience tonight. We don’t have to worry about a reverse proxy like we do when using Selenium, which makes things much easier. The tests, especially when using the HtmlUnit driver, ran fairly rapidly.
  • We worked with the Safari driver for most of the time but had to put in a lot of sleeps because the calls to pages didn’t seem to wait for the page to load before going on to the next step. A quick browse of the mailing list suggests that this is an area that will be worked on soon. The HtmlUnit driver worked really well though.
  • I learnt about the idea of LiFT style APIs – we can write web driver tests in this style by using the correct context wrapper. Effectively an acceptance testing DSL:

    LiFT allows writing automated tests in a style that makes them very readable, even for non-programmers. Using the LiFT API, we can write tests that read almost like natural language, allowing business requirements to be expressed very clearly. This aids communication amongst developers and customers, helping give all stakeholders confidence that the right things are being tested.

  • Liz mentioned an earlier discussion she had been having around the creation of strings using literals (“string”) or by using the constructor (new String(“string”)). The latter is discouraged because it always creates a new object instead of reusing the instance in the string pool (see the second sketch after this list). There is more discussion of the two approaches to creating strings on the Code Ranch forums and on Ethan Nicholas’ blog.
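
A minimal sketch of the Page Object pattern with PageFactory (hypothetical page and element names – not the code we wrote in the session):

import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// One class per page, exposing user intent rather than HTML details.
public class SearchPage {
    // PageFactory populates these fields for us, cutting out the usual
    // boilerplate of looking up each element by hand.
    @FindBy(name = "q")
    private WebElement searchBox;

    @FindBy(name = "submit")
    private WebElement searchButton;

    public void searchFor(String term) {
        searchBox.sendKeys(term);
        searchButton.click();
    }
}

The test then just asks the factory for the page (where driver is any WebDriver instance):

SearchPage page = PageFactory.initElements(driver, SearchPage.class);
page.searchFor("coding dojo");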
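
And a quick sketch of the string creation point from that discussion – literals are interned in the string pool, while the constructor always allocates a fresh object:

public class StringPoolExample {
    public static void main(String[] args) {
        String a = "string";              // taken from the string pool
        String b = "string";              // reuses the same pooled instance
        String c = new String("string");  // forces a new object on the heap

        System.out.println(a == b);       // true  - same pooled instance
        System.out.println(a == c);       // false - different objects
        System.out.println(a.equals(c));  // true  - same character content
    }
}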

Next Time

  • Next week we are going to explore the Retlang concurrency library. I think the plan is to take a concurrency problem and try to solve it with the library.
  • I’m still not sure how well the dojo format works for learning or exploring things that are new to most of the group. This week’s session certainly wasn’t as intense as last week’s, although I still learnt about things I previously didn’t know.

Written by Mark Needham

January 15th, 2009 at 12:37 am

Posted in Coding Dojo


F#: Partial Function Application with the Function Composition Operator

with 9 comments

In my continued reading of F# one of the ideas I’ve come across recently is that of partial function application.

This is a way of allowing us to combine different functions together and allows some quite powerful syntax to be written.

The term ‘currying’ is perhaps a better known term for describing this idea, although as I understand it the two are not exactly the same.

Currying transforms a function which takes several arguments into a chain of functions which each take a single argument; partial application is where we supply only some of a function’s arguments and get back a function which expects the rest.
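
A minimal sketch of the distinction (my own example): F# functions are curried by default, so supplying fewer arguments than a function expects partially applies it and hands back a function which takes the rest.

// 'add' has type int -> int -> int: a function taking one int
// which returns a function expecting the next int (currying).
let add x y = x + y

// Supplying only the first argument partially applies 'add',
// giving a new function of type int -> int.
let addFive = add 5

printfn "%d" (addFive 10)  // prints 15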

I first came across this idea with the forward piping operator when reading about Matthew Podwysocki’s FsTest project but there is an even cleaner way of chaining functions together using Function Composition.

The function composition operator (>>) is defined thus:

> (>>);;
val it : (('a -> 'b) -> ('b -> 'c) -> 'a -> 'c)

We take in two functions ('a -> 'b and 'b -> 'c) and one value ('a).
We evaluate the first function ('a -> 'b) with the argument 'a and then pass the result to the second function ('b -> 'c).

The way I understand this:

  • The first function takes in 'a (which is the 3rd argument passed to >>) and returns 'b
  • The second function takes in 'b (which is the return value of the first function) and returns 'c (which is the return value of >>)

Chris Smith perhaps best explains this as follows:

> let inline (>>) f g x = g(f x)
val inline ( >> ) : ('a -> 'b) -> ('b -> 'c) -> 'a -> 'c

Given two functions (f, g) and a value x compute the result of f(x) and pass the result to g.

From my reading so far this operator makes it even easier to write code in a declarative way (although I suppose the functional programming approach does encourage that in the first place).

We can achieve a lot of this nice declarative style by using the forward piping operator (|>) but the function composition operator takes it one step further:

Say we want to take a list of numbers, square all of them and then keep only the odd squares:

let findOddSquares = List.map (fun x -> x * x) >> List.filter (fun x -> x % 2 <> 0);;

If we did this using the forward piping operator it would read like this – note that we now need to explicitly name the list parameter:

let findOddSquares list = list |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 <> 0);;

The piped version is not that much more verbose, but the function composition operator lets us do the same thing with less code. I still think the forward piping operator works nicely when we just want to switch the order of the function and the data though:

> [1..10] |> findOddSquares
val it : int list = [1; 9; 25; 49; 81]

As with all my F# posts I’m still learning, so please point out anything I have wrong. Chris Smith has a closer-to-real-life example of partial function application which probably shows the benefits better than I have here.

* Update *

As pointed out in the comments I’m actually finding the odd squares not the negative ones as I originally posted.

Written by Mark Needham

January 12th, 2009 at 10:22 pm

Posted in .NET,F#


How does the user language fit in with the ubiquitous language?

with 2 comments

We’ve been doing some work this week around trying to ensure that we have a ubiquitous language to describe aspects of the domain across the various different systems on my project.

It’s not easy as there are several different teams involved but one thing we realised while working on the language is that the language of the business is not the same as the language of the user.

Although this is the first time I recall working on a project where the language of the user differs from the language of the domain, I’m sure there must be other domains where this is the case as well.

In our case the language is simplified so that it makes sense to the user – the terms used by the business make sense in that context but would be completely alien if we used them on the interfaces through which our users interact with the system.

At the moment our domain model represents the business terminology and then when we show the data to the user we refer to it by a different name. The problem with this approach is that there is a mental translation step in trying to remember which business term maps to which user term.

We can probably solve this problem somewhat by having the user terms represented in our Presentation Model but this still doesn’t help remove the translation problem when it comes to discussions away from the code.
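
As a rough sketch of that idea (with invented terms – say a business ‘instrument’ which users know as a ‘product’), the presentation model keeps the mapping between the two languages in one place:

// Domain model: speaks the business language.
class Instrument {
    private final String isin;
    Instrument(String isin) { this.isin = isin; }
    String isin() { return isin; }
}

// Presentation model: translates to the terms users actually see,
// so the mapping isn't scattered across the views.
class ProductView {
    private final Instrument instrument;
    ProductView(Instrument instrument) { this.instrument = instrument; }

    // Users know an 'instrument' as a 'product' and its 'ISIN' as a 'product code'.
    String productCode() { return instrument.isin(); }
}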

At the moment there aren’t that many terms which differ, but I’m not sure what the approach should be if more appear in the future. Should we have a whole user language as well as a business-specific ubiquitous one, or should our ubiquitous language be the user language?

Written by Mark Needham

January 10th, 2009 at 3:38 pm

Finding the value in fixing technical debt

with 6 comments

Technical debt is a term coined by Ward Cunningham (and written about extensively by Martin Fowler) which we tend to use to describe a number of different situations on our projects, as Ian Cartwright points out in his post on the subject.

Ian covers it in more detail, but to summarise my understanding of what technical debt actually is:

Technical debt is where we know that something we choose not to take care of now is going to affect us in the future.

The latter part of the sentence is interesting because it is somewhat subjective. There are clearly different levels of what ‘affect us’ means.

Phantom Debt

The most interesting area of this is around the area of what Ian describes as Phantom debt.

This is ‘technical debt’ as invoked by a developer who has decided they don’t like part of the code base and hence want to rewrite it. Technical Debt certainly sounds better than ‘egotistical refactoring session’ ;-)

Since reading Uncle Bob’s Clean Code I’ve become a bit fanatical in my approach to trying to make code as readable as possible, mainly by extracting code into methods which describe what is going on.

Whenever I come across code that doesn’t make sense to me I try to break it into methods which make it easier for me to understand, and hopefully easier for the next person to read too.
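
A small sketch of the kind of refactoring I mean (an invented example): extracting a condition into a method whose name says what is going on.

interface Order { int total(); }
interface Customer { int yearsActive(); }

class DiscountPolicy {
    private static final int QUALIFYING_TOTAL = 1000;
    private static final int QUALIFYING_YEARS = 2;

    // Before, callers wrote:
    //   if (order.total() > 1000 && customer.yearsActive() >= 2) { ... }
    // After, the condition is named, so the reader doesn't have to decode it.
    boolean qualifiesForLoyaltyDiscount(Order order, Customer customer) {
        return order.total() > QUALIFYING_TOTAL
            && customer.yearsActive() >= QUALIFYING_YEARS;
    }
}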

I don’t think that’s technical debt in the typical sense because it is difficult to put a value on how it is going to hurt us in the future – I am only trying to make my life easier the next time I come across that piece of code. The problem is that it is only my opinion that structuring code this way is preferable; I have certainly worked with people who consider it overkill.

The benefit of handling this type of ‘debt’ is not as great as, for example, taking care of a concurrency issue which could make the application behave non-deterministically under a high number of users. On the other hand, the amount of time needed to fix it is much less.

Clearly, if I went through the whole code base and applied this refactoring everywhere, that would not be adding value. My approach tends to be that I’ll only refactor if I’m working with that particular piece of code and its current state is hindering my ability to do so.

This is fairly similar to the advice I have been given around the best approach to getting tests around legacy code – only put tests around that code when you have to work with that particular piece of code otherwise you’ll be there all day.

Looking at the value we’re adding

There is a balance to strike between making the code perfect and adding value to the customer.

One of the ideas of lean is that we should always look at the value that we are adding to the customer and in removing some kinds of technical debt I suppose we are not actually adding tangible value. I don’t think it’s completely wasted time though because we are (hopefully) helping to reduce the time wasted trying to read difficult to understand code, making debugging easier etc.

It’s definitely a tricky balance to find though.

Written by Mark Needham

January 10th, 2009 at 2:04 pm

Coding Dojo #5: Uno

without comments

We ran our 5th coding dojo on Thursday night, writing the card game Uno in Java. We didn’t all know the rules, so this video explained them – surely a parody, but you never know!

The Format

We used the Randori approach again with 6 people participating for the majority of the session. Everyone paired with everyone else at least once and sometimes a couple of times.

We had the pair driving at the front of the room and everyone else further back to stop the tendency of observers to whiteboard stuff.

What We Learnt

  • Modeling games is really good for practicing design skills. Most people had played the game so we had domain experts who could use their knowledge to help drive out the API of the various classes. We didn’t get to the scoring part of the game in the time available, but it was quite cool to see our code using all the terms detailed in Wikipedia’s entry on the game.
  • We managed to drive the design much more effectively than we have done on previous sessions. The flexibility to move between classes depending on where it made most sense to test from next was finally there and we didn’t end up with the problem we’ve had on previous sessions where we ended up with coarsely grained tests and then tried to code the whole application in one go.
  • It was quite painful for me personally having to manually perform operations on collections in Java rather than having the selection of functional operators available in C# 3.0 (see the sketch after this list).
  • It wasn’t a new learning but I’ve noticed in my project work that I’ve become a lot more keen to keep the steps really small – there is a bit of pressure on you to do this in a dojo situation and I think it’s just continued over from there. Every time I try to be too clever and take a big step something inevitably doesn’t work and I end up doing the small steps anyway. It’s also a lot of fun coding in this type of environment and watching how others approach problems and how they pair with each other. If you get a chance to attend a dojo I think it’d definitely be worthwhile.
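
For example (a hypothetical fragment rather than our actual dojo code), finding the playable cards in a hand means writing the loop by hand in Java, where C# 3.0 would let us write a single Where(...) call:

import java.util.ArrayList;
import java.util.List;

interface Card {
    boolean matches(Card topOfPile);
}

class Hand {
    private final List<Card> cards = new ArrayList<Card>();

    void add(Card card) {
        cards.add(card);
    }

    // The manual equivalent of C#'s cards.Where(card => card.matches(topOfPile))
    List<Card> playableCards(Card topOfPile) {
        List<Card> playable = new ArrayList<Card>();
        for (Card card : cards) {
            if (card.matches(topOfPile)) {
                playable.add(card);
            }
        }
        return playable;
    }
}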

Other Dojo Thoughts

  • Some ideas for future coding dojos that we discussed were:
    • Concurrency – using the Retlang/Jetlang libraries
    • Do some stuff with Web Driver
    • Modeling games
    • Taking an open source project and refactoring it
  • I notice there are a couple of sessions of coding/coding dojos planned for Jason Gorman’s Software Craftsmanship conference. It will be interesting to see how those work out, especially if there are high numbers of participants. We’ve always had a fairly small number of people involved which I think has helped to keep everyone involved. I’m not convinced it would be effective with many more participants.

Written by Mark Needham

January 8th, 2009 at 11:41 pm

Posted in Coding Dojo,Java


Javascript Dates – Be aware of mutability

without comments

It seems that, much like in Java, dates in Javascript are mutable, meaning that it is possible to change a date after it has been created.

We had this painfully shown to us when using the datejs library to manipulate some dates.

The erroneous code was similar to this:

var jan312009 = new Date(2009, 1-1, 31);
var oneMonthFromJan312009 = new Date(jan312009.add(1).month());

See the subtle error? Outputting these two values gives the following:

Sat Feb 28 2009 00:00:00 GMT+1100 (EST)
Sat Feb 28 2009 00:00:00 GMT+1100 (EST)

The error is in how we have created ‘oneMonthFromJan312009’:

var oneMonthFromJan312009 = new Date(jan312009.add(1).month());

We created a new Date, but we also changed the value of ‘jan312009’ in the process – add(1).month() mutates the date it is called on before the result is passed to the Date constructor.

It was a case of having the bracket in the wrong place: the Date constructor’s closing bracket should come straight after ‘jan312009’ rather than at the end of the statement.

This is the code we wanted:

var jan312009 = new Date(2009, 1-1, 31);
var oneMonthFromJan312009 = new Date(jan312009).add(1).month();

Which gives the results we expected:

Sat Jan 31 2009 00:00:00 GMT+1100 (EST)
Sat Feb 28 2009 00:00:00 GMT+1100 (EST)

Written by Mark Needham

January 7th, 2009 at 11:17 pm

Posted in Javascript


Javascript: Add a month to a date

with 5 comments

We’ve been doing a bit of date manipulation in Javascript on my current project and one of the things that we wanted to do is add 1 month to a given date.

We can kind of achieve this using the standard Date functions, but it doesn’t work for the edge cases.

For example, say we want to add one month to January 31st 2009. We would expect one month from this date to be February 28th 2009:

var jan312009 = new Date(2009, 1-1, 31);
var oneMonthFromJan312009 = new Date(new Date(jan312009).setMonth(jan312009.getMonth()+1));

The output of these two variables is:

Sat Jan 31 2009 00:00:00 GMT+1100 (EST)
Tue Mar 03 2009 00:00:00 GMT+1100 (EST)

Not quite what we want!
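
One way to handle this edge case by hand – a rough sketch of my own, assuming we always want to clamp to the last day of the shorter target month:

// Add months to a date, clamping so Jan 31 + 1 month = Feb 28, not Mar 3.
// (Assumes a non-negative number of months.)
function addMonths(date, months) {
    var result = new Date(date); // copy first - Date is mutable!
    var targetMonth = (result.getMonth() + months) % 12;
    result.setMonth(result.getMonth() + months);
    // If the day overflowed into the following month, step back to day 0,
    // i.e. the last day of the month we were aiming for.
    if (result.getMonth() !== targetMonth) {
        result.setDate(0);
    }
    return result;
}

var jan312009 = new Date(2009, 0, 31);
console.log(addMonths(jan312009, 1)); // Sat Feb 28 2009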

Luckily there is a library called datejs which has taken care of this problem for us. It provides a really nice DSL which makes it very easy for us to do what we want.

We can add a month to a date very easily now:

var jan312009 = new Date(2009, 1-1, 31);
var oneMonthFromJan312009 = new Date(jan312009).add(1).month();

The output of these two variables is now:

Sat Jan 31 2009 00:00:00 GMT+1100 (EST)
Sat Feb 28 2009 00:00:00 GMT+1100 (EST)

There are loads of other useful date manipulation functions which you can read about in the API documentation. Just don’t forget that dates in Javascript are mutable, so any manipulation done to a date stored in a var will change the original value.

Written by Mark Needham

January 7th, 2009 at 11:00 pm

Posted in Javascript


Outliers: Book Review

with 7 comments

The Book

Outliers by Malcolm Gladwell

The Review

I came across this book following recommendations by Jason Yip and Steven ‘Doc’ List on Twitter.

I’ve previously read The Tipping Point and Blink and I like Gladwell’s easy-going style, so it was a no-brainer that I was going to read this one.

I found that this book complemented Talent is Overrated quite nicely. Outliers tells the story of how people became the best at what they do, whereas Talent is Overrated focuses more on what you need to do if you want to become one of those people.

What did I learn?

  • Both this book and Talent is Overrated refer to the paper written by K. Anders Ericsson about Expert Performance and Deliberate Practice, which is complemented quite nicely by a discussion on the Software Craftsmanship user group. Gladwell doesn’t cover the ideas of Deliberate Practice in much depth: in a lot of the examples he refers to the need for 10,000 hours of practice to become an expert, but doesn’t focus on the fact that this practice needs to be on tasks just above our level of competence. This type of practice is very mentally exhausting and we probably can’t do it for more than 4 or 5 hours a day at the most.
  • There are various examples of the luck involved in being given the opportunity to practice the skills which make us successful, ranging from Bill Gates and Bill Joy to Canadian ice hockey players. The lesson is that not everyone gets the same opportunities, and in some cases society can change that. One example is the Canadian ice hockey leagues, where the vast majority of players are born in the first few months of the year: they had a strength advantage over children born later in the year, got picked for province teams, and so widened the gap over the others. This problem could be solved by creating 3 intakes so that kids born later can compete against others their own size.

    I think it’s important to note that even if you get the opportunity it still needs to be seized upon and the hard work put in if you are to achieve something.

  • There was an intriguing chapter in the book which talked about the Power Distance Index (PDI) and how it varies between cultures. To paraphrase, the PDI describes the willingness of people to speak up to those higher in the hierarchy than themselves: the higher the value, the less likely people are to challenge authority. This proved particularly problematic on airplanes, where clear communication is necessary between crew members who are at different levels in the hierarchy.

    I think this idea applies in software development too – when developing in an agile way I have certainly noticed that there is less hierarchy than in a more waterfall team, where the architect would dictate how the work is going to be done and the developers would just follow those instructions. It can also manifest itself in pair programming, where more junior team members are afraid to challenge the ideas of senior team members, although I haven’t noticed this so much.

  • To again reference the chapter about airplane disasters, it was interesting to note that the planes generally weren’t crashing because of technical problems, but because of failure to communicate effectively. This is exactly the same in software development where the ability to communicate effectively is at least as important as having good technical skills. As Ayende points out, a lot of our job is social engineering.
  • Meaningful work is described as work which has autonomy, complexity and a clear link between effort and reward. This is the type of work which motivates people to put in the effort to become successful.

    This idea of a connection between effort and reward seems to link to the ideas around Risk/Reward contracts discussed in Lean Software Development in that there needs to be some sort of motivation for improvement i.e. a financial reward.

  • There is an interesting discussion around Maths and how quickly people tend to give up if they can’t immediately solve a problem. Maths is considered to be a skill that you either have or don’t have, but examples are given of a different learning approach taken to the subject which allows students to take their time to grasp the ideas.

    I am trying to temper my tendency to do this in my F# learning. Luckily for me there is no rush for me to learn it so I’m taking my time to try and really understand how everything works.

  • Concerted cultivation is described as a difference in the way middle class and lower class families raise their children. Parents of the former give their children a sense of entitlement and encourage them to “customise the environment for their best purposes”. This sounds very much like the parents acting as a teacher/coach to their children and guiding them on their journey.

    We can apply this in software development by adapting our principles to different situations. For example we can use OO techniques in many more situations if we take the time to consider the problem we are trying to solve. We shouldn’t sacrifice our approach just because the problem seems too different to use it.

In Summary

The book is very easy to read, but a lot of it just provides example after example to back up a point made earlier on. It would have been nice to see more ideas about how we can grasp the opportunities that come our way rather than focusing so much on the element of luck.

There are a few other reviews of the book that I found quite interesting to read. Each review approaches the book from a slightly different angle and takes different ideas out.

Written by Mark Needham

January 6th, 2009 at 11:23 pm

Posted in Books
