Mark Needham

Thoughts on Software Development

Archive for November, 2008

Saff Squeeze: First Thoughts

with one comment

While practicing some coding on the Roman numerals conversion last weekend I came across an article by Kent Beck which describes a method he uses to narrow down problems without needing to reach for the debugger.

He calls the method the ‘Saff Squeeze’ and the basic idea, as I understand it, is to start from the original failing test, inline the pieces of code that it calls, and keep adding assertions earlier in the code until the actual point of failure is found.

The thinking behind this is that we end up with another, much smaller test which we can add to our test suite, although he did point out that it may take longer to solve the problem this way than by using the debugger.

I’m not a fan of debugging through code and I believe if we are using TDD effectively then it should reduce the need to use the debugger.

When I got a bit stuck with the parsing of the input for my Roman numerals conversion I decided this would be a good time to give the approach a trial run.

The actual problem was that I was trying to parse a value such as ‘XI’ by calling String.split(“”), which looking back was of course ridiculous. This resulted in an array with 3 values – X, I and “” – which then gave a NullPointerException when I tried to convert the empty string to a numeric value.
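As a rough illustration of the kind of test the squeezing produced – this is a reconstruction rather than my actual code, the names are invented, and the extra empty element from split(“”) is how the Java versions of the time behaved:

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class RomanNumeralSqueezeTest {
    private static final Map<String, Integer> VALUES = new HashMap<String, Integer>();
    static {
        VALUES.put("X", 10);
        VALUES.put("I", 1);
    }

    // a simplified version of the conversion I was attempting
    private int toDecimal(String numeral) {
        int total = 0;
        for (String character : numeral.split("")) {
            total += VALUES.get(character); // NullPointerException when character is ""
        }
        return total;
    }

    // the original failing test
    @Test
    public void convertsElevenFromRomanNumerals() {
        assertEquals(11, toDecimal("XI"));
    }

    // the squeezed test: the body of toDecimal inlined with an assertion added
    // earlier on, which pinpoints the extra empty string produced by split("")
    @Test
    public void squeezedDownToTheSplit() {
        String[] characters = "XI".split("");
        assertEquals(2, characters.length); // fails: the length is 3
    }
}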

Applying the Saff Squeeze allowed me to narrow down this problem and change the implementation when I realised my approach was never going to work.

Although I wasn’t able to keep the test which I had created from all the inlining, it did become clear to me from this exercise that my tests around certain areas of the code were not fine-grained enough. I find I lose the discipline a bit when I test from the outside in, which is something I am trying to improve. This method proved to be a good way of keeping me honest.

Kent suggested that it would take much longer to debug using this approach but I found I was able to solve my problem almost as quickly as if I had debugged through it with the added benefit that I wrote a few extra tests while I was narrowing down the problem.

It is certainly an interesting approach although one which I need more practice with before trying to introduce it into a work environment.

Written by Mark Needham

November 21st, 2008 at 12:58 am

Posted in Coding


Debugging ASP.NET MVC source code

with one comment

We’ve been doing some work with the ASP.NET MVC framework this week and one of the things we wanted to be able to do was to debug through the source code to see how it works.

Our initial idea was to bin deploy the ASP.NET MVC assemblies with the corresponding PDBs. Unfortunately this didn’t work and we got a conflict with the assemblies deployed in the GAC:

Compiler Error Message: CS0433: The type 'System.Web.Mvc.FormMethod' exists in both 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\8553427a\c1d1b9c6\assembly\dl3\898a195a\60680eb9_3349c901\System.Web.Mvc.DLL' and 'c:\WINDOWS\assembly\GAC_MSIL\System.Web.Mvc\1.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll'

We attempted to uninstall the System.Web.Mvc assembly from the GAC but were unable to do so because it has other dependencies.

The next idea was to uninstall ASP.NET MVC using the MSI uninstaller. This worked in terms of getting rid of the assembly from the GAC but it meant that we no longer had support for ASP.NET MVC in Visual Studio.

Luckily Troy pointed out James Kovacs’ blog post about debugging the .NET framework source and we were able to debug the ASP.NET MVC code by hooking up the PDBs that come with the source code download.

Some other approaches were pointed out on the ALT.NET mailing list although the above is what worked for us.

Written by Mark Needham

November 19th, 2008 at 9:30 pm

Posted in .NET


The Toyota Way: Book Review

with 9 comments

The Book

The Toyota Way by Jeffrey Liker

The Review

I was initially very skeptical about the value of lean in software development but became intrigued about its potential after listening to Jason championing it. Since The Toyota Way is the book where many of the ideas originated, I thought it made sense for this to be my first port of call for learning about lean.

What did I want to learn?

  • What are the similarities and differences between the Lean and Agile principles?
  • How do we apply ideas around quality in software development?
  • How important are leaders to making it all happen?
  • If lean is not about tools what is it about?
  • Is waste ever acceptable in our process?

What did I learn?

  • My doubts about lean come from my perception of an overly tool-based focus when it comes to applying it. As Dan points out, those just learning a technique need tools and practices to apply, but I don’t believe this can work in the long run. I was therefore very pleased that early on the author recognised this problem and pointed out that to achieve proper results from applying Toyota’s approach we need more than tools and that lean needs to ‘permeate an organisation’s culture’.
  • The concept of learning by doing came across as very important and it was emphasised constantly throughout the book. This can certainly be applied in software development, and it is actually often quicker just to try things out and see what happens rather than spending huge amounts of time reading up on the best approach. This is something I have learnt several times over the last few weeks, and it reminded me of something Scott Young posted questioning the value of reading when we’re not actually doing the things we’re reading about.
  • It was interesting to note the similarities between mass production thinking and waterfall compared to the Toyota/lean/agile approach to solving the same problem. My favourite quote referring to this was:

    What is the ideal way to organize your equipment and processes? In traditional mass production thinking…, the answer seems obvious: group similar machines and similarly skilled people together.

    This is a completely different approach to the cross functional teams that we assemble for agile projects and which are used in Toyota.

  • I liked the idea of poka yoke – devices used to make it impossible for an operator to make a mistake. The emphasis is on fixing the process rather than blaming the person. I think this is quite a useful idea to take into the world of software development although I think in the agile world at least there is more emphasis on working together rather than blaming each other for mistakes.
  • Throughout the book there was an emphasis on working out how we can add value to the customer – the idea being that the next stage or process is our customer. Often when we think of the customer it’s only the customer right at the end of the process so this was an interesting difference. In general the book emphasises Toyota’s process focus over the final outcome. I like this idea because there is a lot more to be learnt when it comes to reflecting on our process instead of just the result from my experience.
  • Although most of the book, and indeed most of the previous material I have come across about lean and Toyota, talks about removing waste and non-value-added processes, the underlying idea of continuous improvement was what I found the most intriguing. Ideas such as stopping the line if there’s a problem in order to improve that part of the process – and expecting the line to be stopped, because otherwise there will never be any improvement – are concepts that I imagine would be quite unusual to many western businesses. I think the ‘stopping the line’ idea can be applied in software development to raise problems when they arise as well. This way we can get everyone involved in trying to solve the problem rather than just letting one person struggle with it. I think this idea is partially applied with regards to the continuous integration build status, but it would be unusual to see the whole team stop their work because it was red.
  • Early on in the book the TPS House was mentioned – the idea being that everything must be strong from the use of the tools to the belief in the philosophy. It seemed to me from reading this that applying Toyota’s ideas successfully is an all or nothing game – i.e. it would be very difficult to be successful in the long term by just picking and choosing certain ideas without having the other ones to support them.
  • Again I came across the idea that it takes 10 years to become an expert, or 10 years to really get The Toyota Way and use it in a sustainable way.

In Summary

The lean terminology (inventory, waste, etc.) has become much more widely used on the last couple of projects I have worked on, so it is good to have read this book as I now have a better understanding of what people are talking about. It served as a very good introductory book to this area.

I was worried that the book wouldn’t be that applicable to me as a software developer but I was able to see the parallels between what we do and what is done in manufacturing. The next book for my lean learning will be the Poppendiecks’ ‘Lean Software Development’.

Written by Mark Needham

November 19th, 2008 at 6:53 am

Posted in Books


Standups: Pair stand together

with 3 comments

One of the common trends I have noticed in the stand-ups of teams which practice pair programming is that very often the first person in the pair describes what they have been working on and what they will be doing today, and then when it comes to the other person they say ‘ditto’.

After I dittoed one too many times on a project earlier this year it was pointed out to me that this was not a valuable way of contributing to the stand-up and that I should describe my view of our progress as it may differ from my pair’s.

I had thought I was avoiding repetition by not contributing information which had already been covered, but I went along with this idea and immediately something was mentioned which hadn’t been previously covered. Instant vindication!

More recently I noticed a similar trend on another project I was working on and we came up with the idea of each pair standing together in the standup and effectively speaking as one.

This allowed us to avoid the ‘repetition problem’ and the ‘ditto problem’: when it was the second person’s turn to speak they could just add any details that their pair hadn’t covered without having to repeat the whole context again.

I think this worked reasonably well and it seemed to make our standups move along more quickly.

Written by Mark Needham

November 17th, 2008 at 10:16 pm

Posted in Agile


Agile – Should everyone have to learn all the roles?

with 3 comments

In my final year of university a few years ago when I was applying for jobs I was really keen to join the (then) Reuters Graduate Technology program.

The thing that appealed to me the most was that over the 2 years you were on the graduate program you would have the opportunity to be placed in 4 different roles within the business. The website gives some examples:

Technical architect, Project manager, Infrastructure service manager, Business analyst, Product & development manager, Software engineer, Implementation engineer, Desktop design consultant, Technical specialist, Deployment project manager, Training

What I really liked about this idea was:

  1. It gave you an opportunity to try different things and see what you liked best.
  2. Regardless of what you eventually chose to do you now had a broader perspective of the world from having done the other 3 roles.

I was reminded of this a couple of weeks ago when I was having a conversation with a colleague about whether everyone on an agile team should have to learn how to do all the roles (Developer, BA, QA, Iteration Manager, Project Manager).

Why learn all the roles?

The main advantage of having people who have the ability to work in different roles is that it provides a team with great flexibility and helps to increase the team’s bus factor.

From my experience on agile teams it is often the case that there is a single Business Analyst, and if they are on holiday or absent then there is not necessarily someone else on the team with the skill set to cover them. When this type of situation arises it would be useful to have someone who could play that role for the day.

Having experience across different roles gives you a different perspective when playing your current role. Certainly developers would be much more inquisitive about the business value a particular story is adding if they had experience working in an analyst role, and analysts who have been developers have a better idea about the types of things that developers need to know to complete a story.

As individuals we learn much more quickly when we are operating in beginner’s mind, a state we would be able to achieve with much more frequency if we are learning different roles. The natural inquisitiveness we display when learning a new skill can also be beneficial to the rest of the team as it will bring up assumptions which may not necessarily be correct but which wouldn’t have been exposed otherwise.

I’ve heard the argument that the skill set for BAs/QAs is quite similar and I have certainly worked with colleagues who are able to perform both roles. Many project managers I have worked with have performed another role previously, so maybe it is only natural that you will end up with some people with multi role skill sets on any given team.

Why not?

I think if we are to become skilled across multiple roles in a team then we need to be careful that we don’t end up as a jack of all trades and a master of none.

In agile teams the idea of having generalising specialists is encouraged, so we would need to make sure that we retain some areas of specialism while also having the ability to fill other roles, rather than spreading ourselves too thinly across all of them.

This seems to indicate that a certain amount of depth is required in our principal role, which mainly comes from spending time doing it. I think the same principle holds when learning programming languages – from my experience it’s better to gain a solid understanding of one type of language before trying to learn others.

The idea of being skilled in multiple roles also seems to go against the idea of playing to your strengths encouraged by Marcus Buckingham in his book ‘Go Put Your Strengths to Work’. If we have already found what we love to do and can do it well then there may not be that much benefit in trying to change, although it is always useful to step outside your comfort zone once in a while.

In Summary

I think it is good to have people with the ability to play multiple roles on a team but I’m not sure whether every single team member needs to be able to do this – it is certainly useful to have some people on a team who are purely experts in one role.

I’m sure I haven’t covered all the potential arguments here, but I find it interesting to consider how far the generalising specialist idea should stretch – at what point does having greater depth in one area become preferable to spreading our abilities across several?

*Update*
As Lachlan points out in the comment we should be looking to increase the bus factor on the project rather than reduce it. My mistake.

Written by Mark Needham

November 17th, 2008 at 12:14 am

Posted in Agile


Build: Red/Green for local build

without comments

One thing I’m learning from reading The Toyota Way is that visual indicators are a very important part of the Toyota Production System, and certainly my experience working in agile software development is that the same is true there.

We have certainly learnt this lesson with regards to continuous integration – the build is either red or green and it’s a very obvious visual indicator of the state of the code base at any moment in time.

On my last two projects we have set up build lights which display the build status even more visibly, without us even having to go to the reporting page.

I have never thought about applying this principle when running the build locally, but my colleague James Crisp has set up our build to light up the whole background of the command prompt window red or green depending on the exit code we get back from running Nant.

James has promised to post the code we used from the command prompt to do this on his blog – I’ve tried to work out how to achieve the same effect in the Unix shell but I haven’t figured out how to do it yet. I’ve worked out how to set the background colour for the prompt but not for the whole window – if anyone knows how to do it, let me know!

The idea is fairly simple though – there’s a rough sketch of it after the list:

  • Run the build
  • If the exit status is 1, make the background red
  • If the exit status is 0, make the background green
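I haven’t worked out the whole-window version, but here is a rough sketch of the idea in Java – this isn’t the code James wrote, the BuildLight name is my own, and it only colours the status line using ANSI escape codes rather than the whole window:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class BuildLight {
    // ANSI escape codes: 41 = red background, 42 = green background, 0 = reset
    private static final String RED = "\u001B[41m";
    private static final String GREEN = "\u001B[42m";
    private static final String RESET = "\u001B[0m";

    public static void main(String[] args) throws IOException, InterruptedException {
        // args is the build command to run, e.g. "nant" or "ant test"
        Process build = new ProcessBuilder(args).redirectErrorStream(true).start();

        // echo the build output so we can still see what happened
        BufferedReader reader = new BufferedReader(new InputStreamReader(build.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }

        int exitStatus = build.waitFor();
        String colour = exitStatus == 0 ? GREEN : RED;
        System.out.println(colour + (exitStatus == 0 ? " BUILD PASSED " : " BUILD FAILED ") + RESET);
    }
}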

Written by Mark Needham

November 15th, 2008 at 8:26 am

Posted in Build


Coding Dojo #2: Bowling Game & Object Calisthenics Continued

with 3 comments

We ran another Coding Dojo on Wednesday night as part of ThoughtWorks Geek Night where we continued working on the Bowling Game problem from last week, keeping the Object Calisthenics approach broadly in mind but not sticking to it as strictly.

The Format

This time we followed the Randori approach, with a projector beaming the code onto the wall, 2 people pairing on the problem and everyone else watching.

We rotated one of the pair every 7 minutes, using the Minutes OS X widget to keep track of time. There were 6 of us and everyone had around 6 or 7 turns at the keyboard.

Each rotation involved the current driver stepping out, the current navigator taking over as the driver, and one of the people from the audience coming in as the new navigator.

What We Learnt

  • A couple of problems became apparent early on. One, which Danilo also pointed out in his paper, was that keyboard shortcuts are a bit of an issue, both in terms of us developing on a Mac and using IntelliJ. My UK keyboard layout also provided an area of difficulty – we’ll probably try to use one with an Australian layout next time.
  • We started off using the Hamcrest library for our assertions but eventually resorted to the JUnit assert methods as these were better known by the group (there’s a brief comparison of the two styles after this list).
  • We didn’t stick strictly to the navigator/driver roles when pairing – as we were able to adopt a TDD approach most pairs used the ping pong pairing approach to keep both people engaged.
  • After 2 or 3 goes each we noticed that people were always working with the same pairs when they were at the keyboard – we started to mix it up so that everyone got a chance to pair with the others. I think this helped to make it a bit more interesting and allowed us to achieve the goal of working with as many of the group as possible.
  • We had the Object Calisthenics rules written up on the board which worked much better for keeping them in mind. We didn’t keep strictly to them but one idea suggested was that if we violated them three times that could serve as a signal to refactor to the rules.
  • The problem was a bit too big to allow us to complete it although we managed to get much further than last time. We are going to look for some smaller problems for the next Dojo.
  • It was really hard to come into the code afresh at times and try to be productive straight away. I think some of this was down to not being fully engaged when in the audience – certainly for the pairing session just before your go you need to be completely aware of what is going on. It was also pointed out that it is much easier to come into a pair if we are getting the green bar all the time – it makes it much safer.
  • We had 2 or 3 pairing sessions in the middle where there was only discussion and no coding – luckily this served as a wake-up call and we managed to make quicker progress afterwards. Calls from the audience to code and stop designing helped drive this.
  • The necessity of taking small steps was again obvious. Often we adopted the approach of trying to implement the solution from a very high level test instead of drilling down into smaller tests and then making these pass. Eventually the small steps approach won through and by the end we were chalking off much smaller tests with greater frequency.
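For anyone who hasn’t seen the two assertion styles side by side, the difference is roughly this – a made-up test rather than our actual dojo code:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;

import org.junit.Test;

public class AssertionStylesTest {
    // stand-in for the dojo's bowling game scoring; a perfect game scores 300
    private int score() {
        return 300;
    }

    @Test
    public void hamcrestStyle() {
        assertThat(score(), is(300)); // reads left to right as a sentence
    }

    @Test
    public void plainJUnitStyle() {
        assertEquals(300, score()); // the assert methods the group knew better
    }
}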

For Next Time

  • When rotating pairs bring the new person in as the driver with the person who stayed on from the previous pair taking a navigating role to start with to guide the direction of the code.
  • Keep a list of the next tasks/tests to write on the whiteboard next to the code. Anyone in the audience can add to this list. The idea is to try and keep the audience engaged while not distracting the focus of the current pair.
  • A bit more up front design and discussion of the problem before diving into the code. We still had the situation where we thought we understood the problem but struggled to implement it until we drew it up on the whiteboard about half way through.
  • Get the audience to follow the current pair more closely. We often had the situation where the audience was discussing one part of the problem while the pair at the keyboard was coding another part. Trying to get the two parties more aligned is a challenge for next time.
  • Get the pair to be more vocal about exactly what they are trying to do. The need to articulate ideas is even greater when there are others in the room trying to follow your train of thought so putting extra effort into this when at the keyboard may work better.
  • Try a smaller problem next time – probably one from the Online Judge website.

Written by Mark Needham

November 13th, 2008 at 10:39 pm

Technical/Code Base Retrospective

with one comment

We decided to run a technical retrospective on our code base yesterday afternoon but apart from one blog post on the subject and a brief mention on Pat Kua’s blog I couldn’t find much information with regards to how to run one.

We therefore decided to take a fairly similar approach to our weekly retrospectives in terms of having one column for ‘Like’ and one for ‘Dislike’. In addition we had columns for ‘Want To Know More About’ and ‘Patterns’. We kept this retrospective purely about the code base because we tend to cover development best practices and process in our normal retrospectives.

Since the code base is only a couple of months old and has been kept in fairly good shape with regular paying off of technical debt, there weren’t that many areas of the code base that people didn’t like.

We did have some interesting discussions around the best way to get data onto the page.

We are currently getting domain objects to render themselves into a ViewData object which is then available for consumption from our Freemarker templates.

The problem expressed with this approach is that when we are writing the code in the Freemarker template we need to know exactly how the domain object has rendered itself to the ViewData object in order to display the data on the page.

The alternative approach is to expose the data by adding getters to the domain objects, but this seems wrong to me because it breaks encapsulation and, even if we only mean to use these getters from the view, the fact that the data is now exposed increases the chance of it being misused elsewhere.
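To make the difference between the two approaches a bit more concrete, here’s a simplified sketch – the class names and keys are invented rather than taken from our code base:

import java.util.HashMap;
import java.util.Map;

public class ViewDataExample {
    static class ViewData {
        private final Map<String, Object> values = new HashMap<String, Object>();

        public void add(String key, Object value) {
            values.put(key, value);
        }

        public Object get(String key) {
            return values.get(key);
        }
    }

    static class Customer {
        private final String name;

        Customer(String name) {
            this.name = name;
        }

        // current approach: the domain object decides how it appears to the view,
        // but the template has to know which key ("customerName") it chose
        public void renderTo(ViewData viewData) {
            viewData.add("customerName", name);
        }

        // alternative approach: expose a getter for the template to call directly,
        // which breaks encapsulation and leaves the data open to misuse elsewhere
        // public String getName() { return name; }
    }

    public static void main(String[] args) {
        ViewData viewData = new ViewData();
        new Customer("Mark").renderTo(viewData);
        System.out.println(viewData.get("customerName")); // prints "Mark"
    }
}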

The majority of the retrospective was taken up discussing the items in the ‘Want To Know More About’ column. Although we have been rotating pairs quite frequently there are still some areas of the code base where some people are stronger than others, so this gave them an opportunity to share their knowledge.

One useful thing about this discussion was that a pair were able to explain why they had done something in the code base rather than just what they had done, which is often the case when explaining things in stand-ups for example. Hopefully this will help to create a greater understanding of the code base.

I don’t tend to notice patterns in code bases, so I put that column into the retrospective more out of curiosity to see what others had noticed. We managed to come up with 5 or 6; a lot of them were around the use of Pico Container, Servlet Filters and Restlet Filters, although we do have some of the Domain Driven Design patterns appearing in the code as well.

Overall this was an interesting exercise to have undertaken and one which I first came across from Sarah Taraporewalla. It will be interesting to see how things change in the code base if another one is run in a month’s time.

Written by Mark Needham

November 12th, 2008 at 11:50 pm

Posted in Learning


Agile: The Client/User dilemma

without comments

While reading Marc’s post about the Customer or Client naming dilemma I was reminded of another situation I have noticed in software development – the Client/User dilemma.

From my experience of agile projects it tends to be much more likely that we can get easy access to our client than to the users of the system we are writing.

Alistair Cockburn mentions in Crystal Clear that having an expert user sit with the team can be very useful, but it is not something that I have experienced on all the projects that I have worked on.

I think the problem is that it is very difficult to get access to these people. All of the projects I have worked on have been internal systems, and it can be difficult to get access to the users because they are very busy doing their day job and don’t have the time to spend talking about what they want in the system.

We therefore end up with the client conveying the users’ wishes either by taking a best guess or by engaging some users and working out what they want.

Sometimes the client is a former user of the system in which case their guess is likely to be fairly accurate and we don’t have a problem.

The problem I have noticed is that sometimes the client can come up with requirements which don’t seem aligned with what the user would actually want.

In these situations it is very difficult to know what to do: on the one hand the client is the one paying for the system and therefore you need to keep them happy, but on the other hand the success or failure of the project will probably rest on the users’ reaction to the system.

I’m not sure what the best solution is for these situations – it would be interesting to hear others’ opinions on this.

Written by Mark Needham

November 12th, 2008 at 7:22 am

Posted in Agile


Logging with Pico Container

without comments

One thing that we’ve been working on recently is the logging for our current code base.

Nearly all the objects in our system are being created by Pico Container, so we decided that writing an interceptor which hooked into Pico Container would be the easiest way to intercept and log any exceptions thrown from our code.

Our initial Googling led us to the AOP Style Interception page on the Pico website which detailed how we could create a static proxy for a class that we put in the container.

The code to do this was as follows:

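        // wrap the container with the Intercepting behaviour, then register the same
        // reporter to run before and after each call on the Interceptable component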
        DefaultPicoContainer pico = new DefaultPicoContainer(new Intercepting());
        pico.addComponent(Interceptable.class, ConcreteInterceptable.class);
        Intercepted intercepted = pico.getComponentAdapter(Interceptable.class).findAdapterOfType(Intercepted.class);
        intercepted.addPreInvocation(Interceptable.class, new InterceptableReporter(intercepted.getController()));
        intercepted.addPostInvocation(Interceptable.class, new InterceptableReporter(intercepted.getController()));
 
        Interceptable a1 = pico.getComponent(Interceptable.class);
        a1.methodThatThrowsException();
public interface Interceptable {
    void methodThatThrowsException();
}
    private static class InterceptableReporter implements Interceptable {
        private Intercepted.Controller controller;
 
        public InterceptableReporter(Intercepted.Controller controller) {
            this.controller = controller;
        }
 
 
        public void methodThatThrowsException() {
            System.out.println("error happened");
 
        }
    }

While this approach works, the problem is that we need to define an individual proxy for every class that we want to intercept. It works as a strategy if we just need to intercept a few classes but not on a larger scale.

Luckily it is possible to create a dynamic proxy on the container so that we can intercept all the objects without having to create a static proxy for each one.

The code to do this was as follows:

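        // the LoggingAwareByDefault behaviour factory wraps every component added to
        // this container, so no per-component proxy setup is needed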
        DefaultPicoContainer pico = new DefaultPicoContainer(new LoggingAwareByDefault());
        pico.addComponent(Interceptable.class, ConcreteInterceptable.class);
 
        Interceptable interceptable = pico.getComponent(Interceptable.class);
        interceptable.methodThatThrowsException();
import org.apache.commons.logging.LogFactory;
import org.picocontainer.Characteristics;
import org.picocontainer.ComponentAdapter;
import org.picocontainer.ComponentMonitor;
import org.picocontainer.LifecycleStrategy;
import org.picocontainer.Parameter;
import org.picocontainer.behaviors.AbstractBehaviorFactory;
 
import java.util.Properties;
 
public class LoggingAwareByDefault extends AbstractBehaviorFactory {
    private static final String DO_NOT_LOG_NAME = "support-team-opt-out";
    public static final Properties DO_NOT_LOG = Characteristics
            .immutable(DO_NOT_LOG_NAME, Characteristics.TRUE);
 
 
    public <T> ComponentAdapter<T> createComponentAdapter(ComponentMonitor componentMonitor,
                                                          LifecycleStrategy lifecycleStrategy,
                                                          Properties componentProperties,
                                                          Object componentKey, Class<T> componentImplementation,
                                                          Parameter... parameters) {
        if (removePropertiesIfPresent(componentProperties, DO_NOT_LOG)) {
            return super.createComponentAdapter(componentMonitor, lifecycleStrategy, componentProperties, componentKey,
                    componentImplementation, parameters);
        } else {
            return new LoggingAware<T>(super.createComponentAdapter(componentMonitor,
                    lifecycleStrategy, componentProperties, componentKey,
                    componentImplementation, parameters));
        }
 
    }
}
import org.apache.commons.logging.Log;
import org.picocontainer.ComponentAdapter;
import org.picocontainer.ComponentMonitor;
import org.picocontainer.PicoContainer;
import org.picocontainer.behaviors.HiddenImplementation;
 
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
 
public class LoggingAware<T> extends HiddenImplementation {
    public LoggingAware(ComponentAdapter delegate) {
        super(delegate);
    }
 
    protected Object invokeMethod(Object componentInstance, Method method, Object[] args, PicoContainer container)
            throws Throwable {
        ComponentMonitor componentMonitor = currentMonitor();
        try {
            componentMonitor.invoking(container, this, method, componentInstance);
            long startTime = System.currentTimeMillis();
            Object object = method.invoke(componentInstance, args);
            componentMonitor.invoked(container,
                                     this,
                                     method, componentInstance, System.currentTimeMillis() - startTime);
            return object;
        } catch (final InvocationTargetException ite) {
            componentMonitor.invocationFailed(method, componentInstance, ite);
 
            // log the error
 
            throw ite.getTargetException();
        }
 
    }
}

From what I recall from looking at the source code, in order to create a proxy around an object it needs to implement an interface – which makes sense given that JDK dynamic proxies can only proxy interfaces – otherwise the proxy will not be created.

Written by Mark Needham

November 11th, 2008 at 12:08 am

Posted in Java
