Mark Needham

Thoughts on Software Development

Archive for July, 2010

Writing off a badly executed practice

with 8 comments

I recently came across an interesting post about pair programming by Paritosh Ranjan where he outlines some of the problems he’s experienced with this practice.

While some of the points he raises are certainly valid, I think they're more evidence of pair programming being done ineffectively than of a problem with the idea itself.

To take one example:

Generally people don’t think a lot while pair programming as the person who wants to think about the pros and cons will be considered inefficient (as he will slow down the coding speed). So, generally people show fake confidence on the effectiveness of the proposed solution.

While it’s certainly possible to end up with this scenario, I’ve also taken part in pairing sessions where we were able to think through a problem and come up with a design much more effectively than either of us would have been able to alone.

Something I've noticed I do too frequently is see a practice being executed in a sub-optimal way and conclude that there's a problem with the practice itself.

For example, I wrote a post about a month ago outlining some of the problems we'd been having with retrospectives on the projects I've been working on, and at one stage towards the beginning of the year I was wondering whether there was even much value in having a retrospective at all.

What was pointed out in the comments on that post, and subsequently in threads on our internal mailing list, is that to keep people engaged we should vary the way we run the retrospective rather than using the same format every time.

I’m told Esther Derby and Diana Larsen’s ‘Agile Retrospectives’ has a lot of ideas on how to run retrospectives more effectively but I haven’t quite got around to reading it just yet.

Another practice I’ve been doubting is the Iteration Kick Off meeting, which often seems to be a session where we read through every single upcoming story while the majority of the people in the room are completely disengaged. These meetings often drag on for an hour or more.

When I discussed this with a business analyst colleague last week, he pointed out that he runs these meetings in a completely different way. His goal is to communicate what functionality is coming up so that any coding decisions can be made with that in mind.

That communication doesn’t necessarily have to happen in a meeting – it could just be a conversation over lunch. If it is a meeting then he’d look to keep it short and to the point.

The underlying trend in all of these examples is that we saw a practice being done in a sub-optimal way and concluded that there must be a problem with the practice itself.

I’m coming to the conclusion that it would be more effective to look first at the goal the practice is trying to achieve and see whether we can change the way we’re executing it to get what we want.

If not then perhaps the practice is at fault and we need to look for another way to achieve our goal.

Written by Mark Needham

July 17th, 2010 at 11:13 am

Posted in Agile

Tagged with

TDD: I hate deleting unit tests

with 10 comments

Following on from my post about the value we found in acceptance tests when doing a large scale refactoring on our project, I had an interesting discussion with Jak Charlton and Ben Hall about deleting unit tests once they’re no longer needed.

The following is part of our discussion:

Ben:

@JakCharlton @markhneedham a lot (not all) of the unit tests created can be deleted once the acceptance tests are passing…

Jak:

@Ben_Hall @markhneedham yep I agree, but that isn’t what TDD really advocates – its a balance, unit tests work well in some places

Me:

@Ben_Hall @JakCharlton gotta be courageous to do that.Its like you’re ripping away the safety net. Even if it might be an illusion of safety

Jak:

@markhneedham one of the XP principles … Courage :)

[Diagram: the various safety nets that protect us from making breaking changes in production code]

While Jak and Ben are probably right, I find myself feeling much more anxious about deleting test code than I would about deleting production code.

I think this is mostly because when I delete production code we usually have some tests around it, so there is a degree of safety in doing so.

Deleting tests seems a bit more risky because there’s much more judgement involved in working out whether we’re removing the safety net that we created by writing those tests in the first place.

The diagram above shows the way I see the various safety nets we create to protect ourselves from making breaking changes in production code.

In this case it might seem that a unit test is providing safety, but it’s now an illusion of safety – in actual fact it’s barely protecting us at all.

I find it much easier to delete a unit test if it’s an obvious duplicate or if we’ve completely changed the way a piece of code works such that the test will never pass again anyway…

…and I find it more difficult to judge when we end up with tests which overlap while testing similar bits of functionality.

Do others feel like this as well or am I just being overly paranoid?

Either way, does anyone have any approaches that give you more confidence that you’re not deleting something that will come back to haunt you later?

Written by Mark Needham

July 15th, 2010 at 11:15 pm

Posted in Testing

Tagged with

Drive – Dan Pink

without comments

One of the more interesting presentations doing the rounds on Twitter and on our internal mailing lists is the following one by Dan Pink, titled ‘Drive – The surprising truth about what motivates us’.

This topic generally interests me anyway, but it’s quite intriguing that the research Dan has gathered supports what I imagine many people intrinsically knew.

Incentives

The presentation dispels the myth that money always works as a motivator for getting people to do what we want them to do. Rather, its value is in preventing demotivation – Pink suggests that organisations need to pay people enough that money is no longer an issue.

He goes on to detail research showing that money doesn’t work as a motivator when a task requires even rudimentary cognitive skill, which suggests it wouldn’t be all that effective in a software development context.

Following this logic, Pink suggests that before he did his research he would have thought that the best way to get innovation in an organisation was to give out an innovation bonus. He now suggests this wouldn’t be effective.

We had an interesting example of this recently where the ThoughtWorks UK and Australia offices tried to run an iPad App competition with the prize of an iPad for the winning entry.

This worked fine in Australia but not at all in the UK. Since the ‘incentive’ was the same in both cases, I’m inclined to believe it worked better in Australia because the guys there managed to create a spirit of competition between the Melbourne and Sydney offices and between the participants.

How do we get better performance?

Pink suggests that there are three main factors which lead to better performance & personal satisfaction – autonomy, mastery and purpose – and I think that to an extent we can get all of these with the agile approach to software development.

Autonomy

Pink suggests that people like to be self directed and to direct their own lives, and he gives Atlassian’s FedEx days as an example of developers being given the freedom to do whatever they want for 24 hours and coming up with some really useful products at the end of it.

I think we have autonomy to an extent on software projects. We might not have the ability to choose exactly what problem we’re solving for our customer but we do mostly have the ability to choose how we’re going to solve it.

We can choose how we’re going to design the solution, discussing various trade offs with our customer if there’s more than one way to solve it…

Mastery

Pink points out that many people play musical instruments at the weekend even though they have little chance of ever earning money from doing so, and he suggests the reason for this is that it’s fun and we have an urge to get better.

Software development is almost a perfect fit for mastery because there are so many different areas for improvement no matter how good you are.

Corey Haines has been pushing the idea of ‘Learn To Type Week’, which is just one example of this.

I thought I was pretty good at touch typing, but after trying out the different exercises he’s been linking to I realise there are still areas in which I can improve, and it’s quite fun trying out the exercises and trying to get better.

Apart from that there are code katas, code koans, new languages and many other ways in which we can improve ourselves.

Purpose

Pink describes purpose as “what gets you up in the morning racing to go to work” and suggests that we need something more than just making profit.

I mostly don’t feel that I’m actually “going to work” on the projects I’ve been on because the majority of the time the other two factors are being met and I have the opportunity to work with passionate people who want to deliver high quality software together.

I’m not sure if that quite gets to the essence of purpose but it certainly doesn’t feel that what I’m working on is pointless – we’re building something which helps solve a problem for our customer and that seems like a valuable thing to do to me.

Overall

I found this presentation really interesting, and the way it’s illustrated with the cartoons is really cool as well.

I haven’t yet read Dan Pink’s book ‘Drive’, which presumably expands upon this talk, but I’ve read ‘A Whole New Mind’, which was interesting reading, so I’m sure I’ll read this one at some stage.

Written by Mark Needham

July 15th, 2010 at 12:21 am

J: Tacit Programming

with one comment

A couple of months ago I wrote about tacit programming with respect to F#, a term which I first came across while reading about the J programming language.

There’s a good introduction to tacit programming on the J website which shows the evolution of a function which originally has several local variables into a state where it has none at all.

I’ve been having a go at writing Roy Osherove’s TDD Kata in J and while I haven’t got very far yet I saw a good opportunity to move the code I’ve written so far into a more tacit style.

From my understanding, two of the ways we can drive towards a tacit style are by removing explicitly passed arguments and by removing local variables from our code.

Remove explicitly passed arguments

The second part of the kata requires us to allow new line characters separating numbers as well as commas, so with the help of the guys on the J mailing list I wrote a function which converts all new line characters into commas:

replaceNewLines =: 3 : 0
	y rplc ('\n';',')
)

‘rplc’ takes an input as its left hand argument and a 2 column boxed matrix as its right hand argument.

In this case the left hand argument is ‘y’, which gets passed to ‘replaceNewLines’, and the boxed matrix contains ‘\n’ and ‘,’. The left item in the matrix gets replaced by the right item.

We want to get to the point where we don’t have to explicitly pass y – it should just be inferred from the function definition.

As is the case in F#, it seems that if we want to do this then the inferred value (i.e. y) needs to be passed as the last argument to a function, which in this case means it needs to be passed as the right hand argument.

‘rplc’ is actually the same as another function, ‘stringreplace’, which takes its arguments the other way around – which is exactly what we need.

replaceNewLines =: 3 : 0
	('\n';',') stringreplace y
)

The next step is to apply the left hand argument of ‘stringreplace’ but infer the right hand argument.

We can use the bond conjunction (&) to do this. The bond conjunction creates a new function (or verb in J speak) which has partially applied the left argument to the verb passed as the right hand argument.
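As a throwaway illustration of bond (‘double’ here is just a name I’ve made up, not part of the kata), partially applying 2 to the multiplication verb gives us a new monadic verb:

double =: 2 & *
double 7
> 14

Applying the same idea to our function: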

replaceNewLines =:  ('\n' ; ',') & stringreplace

‘replaceNewLines’ now represents the partial application of ‘stringreplace’. We can pass a string to ‘replaceNewLines’ and it will replace the new line characters with commas.
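As a quick check – assuming the numbers are separated by the literal two-character sequence ‘\n’, as in the definitions above – we’d expect something like this:

replaceNewLines '1\n2,3'
> 1,2,3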

Remove local variables

My ‘add’ function currently looks like this:

1 add =: 3 : 0
2 	newY =. replaceNewLines y
3 	+/ ". newY
4 )

We want to try and drive ‘add’ to the point where it’s just a composition of different functions.

At the moment on line 3 we have ‘+/’ which can be used to get the sum of a list of numbers.

e.g.

+/ 1 2 3
> 6

We also have ‘".’ which converts a character array into numbers.

e.g.

". '1,2,3'
> 1 2 3

In order to compose our three functions together without explicitly passing in ‘y’ or ‘newY’ we need to make use of the atop conjunction (@), which “combines two verbs into a derived verb that applies the right verb to its argument and then applies the left verb to that result.”

It works in the same way as Haskell’s function composition operator i.e. it applies the functions starting with the one furthest right.

We end up with this:

add =: +/ @ ". @ replaceNewLines

or if we want to inline the whole thing:

add =:  +/ @ ". @ ( ('\n' ; ',') & stringreplace )
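As a quick sanity check – again assuming the literal ‘\n’ separator – the tacit version should behave exactly like the original explicit definition:

add '1\n2,3'
> 6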

Overall

I’m still getting the hang of this language – it’s taking much longer than any other language I’ve played with – but so far the ideas I’ve come across are very interesting and it seems like it massively reduces the amount of code required to solve certain types of problems.

Written by Mark Needham

July 13th, 2010 at 2:47 pm

Posted in J

Tagged with

Linchpin: Book Review

with 5 comments

I’ve read a couple of Seth Godin’s other books – Tribes and The Dip – and found them fairly readable so I figured his latest offering, Linchpin, would probably be worth a read too.


This is the first book that I’ve read on the iPad’s Kindle application and it was a reasonably good reading experience – I particularly like the fact that you can make notes and highlight certain parts of the text.

It’s also possible to see which areas of the book other Kindle users have highlighted and you can see your highlights/notes via the Kindle website.

The Review

The sub-title of the book – ‘Are you indispensable? How to drive your career and create a remarkable future’ – didn’t really appeal to me all that much, but I’ve found that these things are often just for provocation and don’t necessarily detract from the message.

As with the other Godin books I’ve read, he spends a lot of time making his original point and in this one it felt like I’d already agreed with what he was saying for at least 50 pages before we moved onto something else.

Despite that I think there were some good points made in the book. These were some of the more interesting ones for me:

  • One of my favourite quotes from the book is the following about fire:

    Fire is hot. That’s what it does. If you get burned by fire, you can be annoyed at yourself, but being angry at the fire doesn’t do you much good. And trying to teach the fire a lesson so it won’t be hot next time is certainly not time well spent.

    Our inclination is to give fire a pass, because it’s not human. But human beings are similar, in that they’re not going to change any time soon either.

One of the things I’ve noticed over time is that the core way a person behaves is very unlikely to change, regardless of the feedback they get from other people.

    Pat touches on this in a post where he points out that we need to be prepared for feedback to not be accepted.

    I don’t think this means we should stop giving feedback to people if we think it will help them be more effective but it does seem useful to keep in mind and help us avoid frustration when we can’t change a person to behave in the way we’d like them to.

  • Godin makes an interesting point about the value of the work that we do:

    A day’s work for a day’s pay (work <=> pay). I hate this approach to life. It cheapens us.

    This is exactly the consulting model – billing by the hour or by the day – and although I’ve come across a couple of alternative approaches I’m not sure that they work better than this model.

Value based pricing is something I’ve read about, but it seems to me that there needs to be a certain degree of trust between the two parties for it to work out. The other is risk/reward pricing, and I’ve heard both good and bad stories about this approach.

It seems to me that we’d need a situation where both parties were really invested in the long term outcome of the project/system, so if one party is only involved for a small percentage of that time then it’s going to be difficult to make it work.

  • Shipping is another area he touches on in a way that makes sense to me:

    I think the discipline of shipping is essential in the long-term path to becoming indispensable.

    The only purpose of starting is to finish, and while the projects we do are never really finished, they must ship. Shipping means hitting the publish button on your blog, showing a presentation to the sales team, answering the phone, selling the muffins, sending out your references. Shipping is the collision between your work and the outside world.

While this is generally applicable, in software terms it’s the sort of thing that my colleagues Rolf Russell and Andy Duncan cover in their talk ‘Rapid and Reliable Releases’.

I also had an interesting discussion with Jon Spalding about the importance of just getting something out there with respect to the ‘Invisible Hand’ browser plugin that he’s currently working on. Jon pointed out that it’s often best to ship and then roll back if there are problems rather than spending a lot of time trying to make sure something is perfect before shipping.

  • Godin spends a lot of time pointing out how important human interactions and relationships are and I think this is something that is very important for software delivery teams. One of the most revealing quotes is this:

    You might be parroting the words from that negotiation book or the public-speaking training you went to, but every smart person you encounter knows that you’re winging it or putting us on.

    Virtually all of us make our living engaging directly with other people. When the interactions are genuine and transparent, they usually work. When they are artificial or manipulative, they fail.

    I attended a consulting training course when I first started working at ThoughtWorks 4 years ago and I’ve always found it impossible to actually use any of the ideas because it doesn’t feel natural. It’s interesting that Godin points out why this is!

Overall this book feels to me like a more general version of Chad Fowler’s ‘The Passionate Programmer’, which is probably a more applicable book for software developers.

This one is still an interesting read although it primarily points out stuff that you probably already knew in a very clear and obvious way.

Written by Mark Needham

July 12th, 2010 at 4:07 pm

Posted in Books

Tagged with ,

The Internet Explorer 6 dilemma

with 10 comments

A couple of weeks ago Dermot and I showcased a piece of functionality that we’d been working on – namely, hiding some options in a drop down list.

We showcased this piece of functionality to the rest of the team in Firefox and it all worked correctly.

Our business analyst, who was also acting as QA, then had a look at the story in Internet Explorer 6 and we promptly realised that the way we’d solved the problem didn’t actually work in IE6.

In retrospect we should have showcased the story in IE6 in order to shorten the feedback cycle, but if we take that logic even further then it’s clear that we should be running our application in IE6 frequently as we’re developing functionality.

It’s a dilemma that we’ve faced on nearly every project I’ve worked on recently.

We know in the back of our minds that we need to make it work in Internet Explorer 6, but because of Firebug the speed of development is significantly quicker if we use Firefox.

It’s almost as if we’re trading off the longer term safety we’d get from using IE6 all the time against the quick feedback cycle we get from the Firebug console/CSS editor when we’re fiddling around with JavaScript and CSS.

The way that we’re working at the moment is to continue using Firefox for local development but trying to make sure that we test and showcase in IE6.

It’s not a foolproof approach, as can be seen by the example I gave at the beginning of this post, so I’d be interested if anyone has any clever ideas for dealing with the situation where we have a requirement to make our application IE6 compatible.

Written by Mark Needham

July 11th, 2010 at 7:31 pm

A new found respect for acceptance tests

with 8 comments

On the project that I’ve been working on over the past few months one of the key benefits of the application was its ability to perform various calculations based on user input.

In order to check that these calculators are producing the correct outputs we created a series of acceptance tests that ran directly against one of the objects in the system.

We did this by defining the inputs and expected outputs for each scenario in an Excel spreadsheet, which we converted into a CSV file before reading it into an NUnit test.

It looked roughly like this:

[Diagram: scenarios defined in a spreadsheet, converted to CSV and fed into an NUnit test which runs against the calculation code]
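In NUnit terms the test looked something like the following sketch – the file name, column layout and ‘Calculator’ API here are my assumptions for illustration rather than the real code:

using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CalculatorAcceptanceTests
{
    // each CSV row holds a scenario's inputs followed by its expected output
    private static IEnumerable<string[]> Scenarios()
    {
        return File.ReadAllLines("calculator-scenarios.csv")
                   .Skip(1) // skip the header row
                   .Select(line => line.Split(','));
    }

    [TestCaseSource("Scenarios")]
    public void ShouldCalculateTheExpectedOutput(string input1, string input2, string expected)
    {
        // 'Calculator' stands in for the real object the tests ran against
        var result = new Calculator().Calculate(decimal.Parse(input1), decimal.Parse(input2));
        Assert.AreEqual(decimal.Parse(expected), result);
    }
}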

We found that testing the calculations like this gave us a quicker feedback cycle than testing them through UI tests, both in terms of the time taken to run the tests and because we were able to narrow in on problematic areas of the code more quickly.

As I mentioned in a previous post, we’ve been trying to move the creation of the calculators away from the ‘CalculatorProvider’ and ‘CalculatorFactory’ so that they’re all created in one place, based on a DSL which describes the data required to initialise a calculator.

In order to introduce this DSL into the code base these acceptance tests acted as our safety net as we pulled out the existing code and replaced it with the new DSL.

[Diagram: the acceptance tests acting as a safety net while the calculator creation code is replaced with the DSL]

We had to completely rewrite the ‘CalculationService’ unit tests, so those unit tests didn’t provide us with much protection while we made the changes described above.

The acceptance tests, on the other hand, were invaluable and saved us from incorrectly changing the code even when we were certain we’d taken such small steps along the way that we couldn’t possibly have made a mistake.

This is certainly an approach I’d use again in a similar situation, although it could probably be improved by removing the step where we convert the data from the spreadsheet to a CSV file.

Written by Mark Needham

July 11th, 2010 at 5:08 pm

Posted in Testing

Tagged with ,

Performance: Do it less or find another way

with 3 comments

One thing that we tried to avoid on the project I’ve been working on is making use of C# expression trees in production code.

We found that the areas of the code where we compiled these expression trees frequently showed up as the least performant parts of the code base when run through a performance profiler.

In a discussion about ways to improve the performance of an application, Christian pointed out that once we’ve identified an area for improvement there are two ways to do this:

  • Do it less
  • Find another way to do it

We were able to find another way to achieve what we wanted without using them, and we favoured this approach because it was much easier to do.

An alternative would have been to cache the compiled expression so that the compilation wouldn’t happen every single time a request passed through that code path.
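To give a rough idea of what that caching might have looked like – the names here are mine rather than from our code base – we could have kept the compiled delegates in a dictionary keyed by whatever identifies the expression, so that ‘Compile’ runs once per expression rather than once per request:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class CompiledExpressionCache
{
    private static readonly Dictionary<string, Func<decimal, decimal>> cache =
        new Dictionary<string, Func<decimal, decimal>>();

    public static Func<decimal, decimal> GetOrCompile(string key, Expression<Func<decimal, decimal>> expression)
    {
        Func<decimal, decimal> compiled;
        if (!cache.TryGetValue(key, out compiled))
        {
            // Compile is the expensive call that shows up in the profiler, so pay for it once
            compiled = expression.Compile();
            cache[key] = compiled;
        }
        return compiled;
    }
}

A real version would also need to think about thread safety, but the idea is simply that the expensive call happens once.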

A lot of the time it seems like it’s actually possible to just do something less frequently rather than changing our approach…

For example:

  • Caching an HTTP response so that not every request has to go all the way to the server.
  • Grouping together database inserts into a bulk query rather than executing each one individually.

… and most of the time this would probably be a simpler way to solve our problem.

I quite like this heuristic for looking at performance problems but I haven’t done a lot of work in this area so I’d be interested in hearing other approaches as well.

Written by Mark Needham

July 10th, 2010 at 10:49 pm

Posted in Software Development

Tagged with

Installing Ruby 1.9.2 with RVM on Snow Leopard

with 64 comments

Yesterday evening I decided to try and upgrade the Ruby installation on my Mac from 1.8.7 to 1.9.2, and went on the yak shaving mission that doing just that turned out to be.

RVM seems to be the way to install Ruby these days so I started off by installing that with the following command from the terminal:

bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head )

That bit worked fine for me but there are further instructions on the RVM website if that doesn’t work.

My colleague David Santoro pointed me to a post on ASCIIcasts detailing the various steps to follow to get Ruby installed.

Unfortunately my first attempt…

rvm install 1.9.2

…resulted in the following error in the log file:

yaml.h is missing. Please install libyaml.
readline.c: In function ‘username_completion_proc_call’:
readline.c:1292: error: ‘username_completion_function’ undeclared (first use in this function)
readline.c:1292: error: (Each undeclared identifier is reported only once
readline.c:1292: error: for each function it appears in.)
make[1]: *** [readline.o] Error 1
make: *** [mkmain.sh] Error 1

I thought that perhaps ‘libyaml’ was the problem, but Michael Guterl pointed me to a post on the RVM mailing list which suggested this was a red herring and that the real problem was ‘readline’.

That led me back to a post on the RVM website which explains how to install ‘readline’ and then tell RVM to use that version of readline when installing Ruby.

I tried that and then ran the following command as suggested on Mark Turner’s blog post:

rvm install 1.9.2-head -C --enable-shared,--with-readline-dir=/opt/local,--build=x86_64-apple-darwin10

That resulted in this error:

ld: in /usr/local/lib/libxml2.2.dylib, file was built for i386 which is not the architecture being linked (x86_64)
collect2: ld returned 1 exit status
make[1]: *** [../../.ext/x86_64-darwin10/tcltklib.bundle] Error 1
make: *** [mkmain.sh] Error 1

Michael pointed out that I needed to recompile ‘libxml2.2’ to run on a 64 bit O/S as I’m running Snow Leopard.

I hadn’t previously used the ‘file’ function which allows you to see which architecture a file has been compiled for.

e.g.

file /usr/local/lib/libxml2.2.dylib
 
/usr/local/lib/libxml2.2.dylib: Mach-O dynamically linked shared library i386

I recompiled ‘libxml2.2’ by downloading the distribution from the xmlsoft website and then running the following commands:

tar xzvf libxml2-2.7.3.tar.gz 
cd libxml2-2.7.3
./configure --with-python=/System/Library/Frameworks/Python.framework/Versions/2.3/
make
sudo make install

Re-running the RVM Ruby installation command, I then got this error instead:

tcltklib.c:9539: warning: implicit conversion shortens 64-bit value into a 32-bit value
ld: in /usr/local/lib/libsqlite3.dylib, file was built for i386 which is not the architecture being linked (x86_64)
collect2: ld returned 1 exit status
make[1]: *** [../../.ext/x86_64-darwin10/tcltklib.bundle] Error 1
make: *** [mkmain.sh] Error 1

I downloaded the sqlite3 distribution and, having untarred the file, ran the following commands as detailed on this post:

CFLAGS='-arch i686 -arch x86_64' LDFLAGS='-arch i686 -arch x86_64' \
./configure --disable-dependency-tracking
make
sudo make install

Next I needed to recompile ‘libxslt’, which I also downloaded from the xmlsoft website, before untarring it and running the following:

./configure
make
sudo make install

Having done that I re-ran the RVM Ruby installation command:

rvm install 1.9.2-head -C --enable-shared,--with-readline-dir=/opt/local,--build=x86_64-apple-darwin10

And it finally worked!
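At that point it’s worth checking that the new version has actually taken effect, using RVM’s standard commands:

rvm list
rvm use 1.9.2-head
ruby -v

‘rvm list’ shows the installed rubies and ‘ruby -v’ should now report 1.9.2.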

The magnificent yak is borrowed under the Creative Commons Licence from LiminalMike’s Flickr Stream.

Written by Mark Needham

July 8th, 2010 at 1:10 pm

Posted in Ruby

Tagged with

Group feedback

with 3 comments

On an internal mailing list my colleague David Pattinson recently described a feedback approach he’d used on a project, where everyone on the team went into a room and took turns giving direct feedback to each person.


Since we were finishing the project that we’ve been working on for the past few months, Christian, Dermot and I decided to give it a try last week.

One thing to note is that this feedback wasn’t linked to any performance review – it was just between the three of us, to help us find ways to be more effective on the projects we work on in the future.

Much like David I found this approach to feedback to be the most useful that I’ve seen in nearly 4 years working at ThoughtWorks.

We took it in turns to receive feedback from the other two guys and each person first explained what they wanted the feedback to focus on.

I’ve participated in face to face feedback before, but what I liked about this approach was that someone could make an observation about something you’d done and it then became a discussion point between the three of us.

In general it seems to promote a more conversational style of feedback than often seems to happen when it’s just one on one.

I think it was really good being able to get two opinions on each behaviour as people often have different takes on the same situation. Taking both viewpoints together along with your own seemed to make it easier to narrow in on the behaviour and see how it could be improved.

When giving feedback it was useful to have someone else doing so at the same time, as it helped remind me about things that I’d forgotten.

I still need to improve the way I give and receive feedback – Pat Kua details a series of tips for extracting behaviours from the feedback that people give, and he has several other posts on the topic. That’s the best resource I’ve come across, but I’d be interested to know of any others.

The key thing I’ve noticed when giving feedback is to point out only your observation and the impact it had on you, rather than making assumptions about why the person might have done it – you’re nearly always wrong!

Overall, though, I found the group feedback approach useful and it’s something I’ll look to encourage on projects I work on in the future, although I’m unsure how well it would scale in a larger team.

Photo taken from AmyZZZ1’s Flickr stream under the Creative Commons licence.

Written by Mark Needham

July 7th, 2010 at 12:17 am

Posted in Feedback

Tagged with