Mark Needham

Thoughts on Software Development

Archive for the ‘xp2011’ tag

XP 2011: How complex is software?

with 4 comments

The last session I attended at XP 2011 was a workshop run by John McFadyen where he introduced us to Dave Snowden’s Cynefin model, which is used to make sense of problems, situations and systems.

I’d come across the model previously and it had been all over my Twitter stream a couple of weeks ago as a result of Dave Snowden giving a keynote at the Lean Software and Systems Conference.

These are some of the things I learnt from the workshop:

  • The model is based around understanding the correlation between cause and effect:
    • Simple – there’s an obvious correlation between the two
    • Complicated – there’s a correlation but it’s only clear after some analysis, probably by an expert in that domain
    • Complex – we can only see the correlation in retrospect, not in advance
    • Chaotic – there is no correlation between cause and effect
    • Disorder – there probably is a correlation but we don’t know what it is
  • My original understanding of the model was that you would try to work out where a whole system fitted into it, and I assumed that most software projects would be considered ‘complex’.

    After a couple of hours in the workshop it became clear that we can actually break the bigger system down and see where its parts (sub-systems) fit into the model as well.

    Since a different approach is required in each domain, knowing where a sub-system sits in the model tells us how to act on it.

    For example, a software project as a whole might be considered complex but the design of the architecture might only be complicated because some of the team members have worked on something similar previously.

  • Another interesting point that John made is that things don’t have to live in one domain forever – they can move between them!

    For example, on a lot of the projects I’ve worked on it often feels that if we’d known at the beginning everything we knew by the end, we could have finished significantly quicker.

    Effectively the project starts off being complex, but if we got to do exactly the same thing again and it played out in exactly the same way then it might only be complicated.

  • We did a couple of exercises where we had to place different items onto the model – first non software related and then software related.

    It was interesting to note that in the first exercise we had quite a few items in the disorder section but in the latter we had none.

    As you would expect, having experience around the domain in the group helps us understand the problems in that domain better.

    If we take this one step further it’s also beneficial to have diversity in the group because then we get a variety of perspectives and one person’s strengths can help make up for another’s weaknesses.

There’s an interesting article in the Harvard Business Review titled ‘A Leader’s Framework for Decision Making’ where Dave Snowden explains the framework in more detail using scenarios which might fit into the different areas of the model.

‘The origins of Cynefin’ is another nice write-up in which Snowden describes how he came up with the model.

Written by Mark Needham

May 19th, 2011 at 9:44 am

Posted in XP 2011


“In what world does that make sense”

without comments

In her keynote at XP 2011 Esther Derby encouraged us to ask the question “in what world does that make sense?” whenever we encounter something which we consider to be stupid or ridiculous.

I didn’t think much of it at the time, but my colleague Pat Kua has been asking me the question whenever I describe something to him that I find confusing.

After about the third time I noticed that it’s quite a nice tool for getting us to reflect on the systems and feedback loops that may be encouraging the behaviour we’ve witnessed.

In one of our conversations I expressed confusion at the way something had been communicated in an email.

Answering the question made me think about why the person would go for that approach and allowed me to see that what I initially thought was obvious actually wasn’t.

A common source of frustration for consultants at ThoughtWorks is travel and hotel booking, which is handled centrally.

People are often frustrated that they end up with a different hotel/flight than they would prefer.

Asking the question in that case helped us understand that the people doing the booking reported to the finance manager and had been told to ensure costs didn’t exceed a certain level.

In ‘Thinking In Systems’ the author points out that people (mainly me) often have a very narrow view of the world, which from my experience leads to us committing the fundamental attribution error:

…the fundamental attribution error describes the tendency to over-value dispositional or personality-based explanations for the observed behaviours of others while under-valuing situational explanations for those behaviours.

The fundamental attribution error is most visible when people explain the behaviour of others. It does not explain interpretations of one’s own behaviour—where situational factors are often taken into consideration. This discrepancy is called the actor–observer bias.

Asking this question seems to help us avoid falling into this trap.

Now I just need to remember to ask myself the question instead of jumping up the ladder of inference to conclusions!

Written by Mark Needham

May 14th, 2011 at 9:12 pm

Posted in XP 2011


XP 2011: Esther Derby – Still no silver bullets

with 3 comments

The first keynote at XP 2011 was one given by Esther Derby titled ‘Still no silver bullets’ where she talked about some of the reasons why agile adoption seems to work in the small but often fails in the large.

Esther quoted Donella Meadows, the author of ‘Thinking in Systems’, a few times, which was an interesting coincidence for me as I’m currently reading her book.

One of the first quotes from that book was the following:

The original purpose of a hierarchy is always to help its originating subsystems do their job better

Esther pointed out that a lot of times it seems like it’s the other way around and that the hierarchy is the point of the system in the first place.

Quite a lot of the rest of the talk was around the idea of looking at the systems in which we’re building software and seeing how we can ensure that people throughout the hierarchy have information that will help them make decisions.

People closer to the ground have the day-to-day information, but more senior management will have contextual or system information which is also useful for decision making.

Esther referred to this as the diamond of shared information.

My observation is that while people on the ground criticise management for not having day-to-day information, we don’t always make the effort or realise that we’re missing the system information.

In his lightning talk on systems thinking my colleague Pat Kua said that one of his roles as a consultant is to help people see the systems in their organisations.

I think this is an astute observation, as Deming pointed out that it’s difficult to see a system if you are inside it:

A system cannot understand itself. The transformation requires a view from outside.

…which means it’s quite a useful thing for an outsider to do.

Another interesting observation made was that it’s useful if everyone within an organisation knows how the organisation makes money as this will help influence the way that they make decisions.

Esther used Amazon as an example of this and suggested that they make all their money off the float (if I understood correctly), which means that their people want to do whatever they can to ensure that books get to customers as quickly as possible.

I know roughly how ThoughtWorks makes money but there’s certainly more that I could learn about that.

There were some other stories around people gaming systems in order to meet their targets, which reminded me of Liz Keogh’s ‘Evil Hat’ talk from QCon and is something which John Seddon points out frequently in ‘Freedom from Command and Control’.

Esther suggested that while it’s useful to measure things to increase our learning, it becomes problematic when we start recording those metrics and using them as targets.

Relative improvement contracts, where we measure our year on year performance versus ourselves or our competition, were suggested as an alternative to having targets. I could easily see those being used in a similar way to targets though so I’m not entirely sure of the difference.

I quite liked the section of the talk where Esther talked about metaphors and the paths they take people down when they hear one being used.

The following chain of events was suggested:

Metaphor -> Story -> Possible actions

If our metaphor isn’t leading to useful possible actions then perhaps we need to pick a different one.

George Lakoff’s work on metaphors was suggested for additional reading.

Olaf has also written a blog post about the talk.

Written by Mark Needham

May 13th, 2011 at 12:26 pm

Posted in XP 2011


XP 2011: Michael Feathers – Brutal Refactoring

with 5 comments

The second session that I attended at XP 2011 was Michael Feathers’ tutorial ‘Brutal Refactoring’ where he talked through some of the things that he’s learned since he finished writing ‘Working Effectively With Legacy Code’.

I’ve found some of Michael’s recent blog posts about analysing the data in our code repositories quite interesting to read, and part of this tutorial was based on the research he’s done in that area.

Clean code vs Understandable code

The session started off discussing the difference between clean code and understandable code.

Feathers gave a definition of clean code as code which is simple and has no hidden surprises, and suggested that this is quite a high bar to attain.

Understandable code, on the other hand, is judged by the amount of thinking we need to do in order to understand a piece of code, and can be a more achievable goal.

He also pointed out that it would be very rare for us to take a code base and refactor the whole thing, and that we should get used to having our code bases in a state where not every part is perfect.

Behavioural economics

Next we discussed behavioural economics, where Feathers suggested that the ‘incentive’ structure of programming leads to code ending up in a bad state:

i.e. it’s much easier and less time-consuming for people to add code to an existing method or class than to create a new method or class.

The main barrier to creating a new method or class is the difficulty of choosing a name for it, rather than the mechanical effort of doing so in a text editor.

Feathers suggested there are probably also things that we don’t know about ourselves and the way we program.

The shape of code

Next we covered the shape of code in systems and the research Feathers has been doing, which he has written a few blog posts about.

Feathers suggested that most code bases will follow a power curve whereby most files are hardly changed at all but there will be a small number of files which get changed a lot.

I tried this out on a couple of code bases that I’ve worked on and noticed the same trend.
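For anyone who wants to try the same thing, this is roughly how the numbers can be pulled out of a git repository: count how often each file name appears in the commit log and sort. This is a sketch of my own, not Feathers’ tooling.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

// Counts per-file churn by parsing 'git log': each time a file name appears,
// that file was touched by a commit. Run from the root of a repository.
public class ChurnCounter {
    public static void main(String[] args) throws Exception {
        Process git = new ProcessBuilder("git", "log", "--pretty=format:", "--name-only").start();

        Map<String, Integer> churn = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(git.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.isEmpty()) {
                    churn.merge(line, 1, Integer::sum);
                }
            }
        }

        // Print the 20 most frequently changed files - the top of the power curve.
        churn.entrySet().stream()
             .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
             .limit(20)
             .forEach(entry -> System.out.println(entry.getValue() + "\t" + entry.getKey()));
    }
}
```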

Feathers showed a graph with code churn on one axis and code complexity on the other and suggested that we really need to focus our refactoring efforts on code which is changed frequently and is complex!

When chatting afterwards Feathers suggested that it would be interesting to look at the way that certain classes increased in complexity over time and see whether we could map those changes to events that happened to the code base.

Finding the design

Feathers talked about the idea of grouping together clumps of code to try and see if they make sense as a domain concept.

For example, if three parameters always seem to be used together throughout the code base then perhaps those three parameters together represent a domain concept.

He described it as an inside-out way of deriving domain concepts: we work from what we already have in the code and see if it makes sense, rather than deriving code from domain concepts.
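As a hypothetical illustration (the names here are mine, not from the session), the clump of parameters gets a name and the behaviour that was scattered around then has an obvious home:

```java
import java.math.BigDecimal;

public class ParameterClumpExample {

    // Before: the same three values travel through the code base together.
    static BigDecimal grossPrice(BigDecimal netAmount, String currency, BigDecimal taxRate) {
        return netAmount.add(netAmount.multiply(taxRate));
    }

    // After: the clump is named and the calculation moves onto the concept.
    static final class Price {
        private final BigDecimal netAmount;
        private final String currency;
        private final BigDecimal taxRate;

        Price(BigDecimal netAmount, String currency, BigDecimal taxRate) {
            this.netAmount = netAmount;
            this.currency = currency;
            this.taxRate = taxRate;
        }

        BigDecimal gross() {
            return netAmount.add(netAmount.multiply(taxRate));
        }

        @Override
        public String toString() {
            return gross() + " " + currency;
        }
    }

    public static void main(String[] args) {
        // Prints 120.0000 GBP
        System.out.println(new Price(new BigDecimal("100.00"), "GBP", new BigDecimal("0.20")));
    }
}
```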

Rapid scratch refactoring

I’d previously read about scratch refactoring in Working Effectively With Legacy Code (the idea is that we refactor code towards how we want it to be without worrying about keeping it working), and Feathers gave an example of doing this on a piece of code.

The most interesting thing about this for me was that he did the refactoring in notepad rather than in the IDE.

He said this was because the IDE’s compile warnings were distracting from the goal of the exercise which is to understand how we could improve the code.

Deciding upon architectural rules

Feathers said that when he first starts working with a team he gets people to explain to him how the system works and that as a side effect of doing this exercise they start to see ways that it could be simplified.

My colleague Pat Kua talks about something similar in his onboarding series, and one of the benefits of adding new people to teams is that we end up having these discussions.

It helps to have a shared understanding of what the system is so that people will know where to put things.

We could do this by stating some rules about the way code will be designed, e.g. receivers will never talk to repositories.
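As an aside, a rule like this can also be enforced mechanically as a test; here’s a sketch using the ArchUnit library, where the package names are hypothetical placeholders for a real code base:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// Turns the design rule "receivers never talk to repositories" into a check
// that fails loudly when someone violates it.
public class ReceiverRepositoryRule {
    public static void main(String[] args) {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");

        ArchRule rule = noClasses()
                .that().resideInAPackage("..receiver..")
                .should().dependOnClassesThat().resideInAPackage("..repository..");

        rule.check(classes); // throws AssertionError listing any violations
    }
}
```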

This seems somehow related to a recent observation of mine that it’s much easier to work when we have to do so within some sort of constraints rather than having free rein to do whatever we want.

Systemic concerns

Feathers gave an example of a place where he’d worked where an upstream team decided to lock down a service they were providing by using the Java ‘final’ keyword on their methods so that those methods couldn’t be overridden.

He pointed out that although we can use languages to enforce certain things, people will find ways around them, which in this case meant that the downstream team created another class wrapping the service which did what they wanted.

He also observed that the code around hard boundaries is likely to be very messy.
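A hypothetical reconstruction of that situation (the names are mine): the upstream team seals the method, so the downstream team wraps rather than subclasses.

```java
// Upstream: 'final' prevents the method from being overridden.
class PricingService {
    public final double priceFor(String productCode) {
        return 100.0; // the canonical pricing logic
    }
}

// Downstream: the lockdown is routed around by wrapping instead of subclassing.
class AdjustablePricingService {
    private final PricingService delegate = new PricingService();

    public double priceFor(String productCode) {
        double price = delegate.priceFor(productCode);
        return price * 0.9; // the local behaviour that 'final' was meant to prevent
    }
}
```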

We also covered some other topics, such as wrapping global variables/singletons in classes and passing those classes around the system, and the idea of putting hypotheses/assertions into production code.
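A minimal sketch of the first of those ideas, assuming a typical static singleton (again, the names are hypothetical):

```java
// The global everyone reaches for directly.
class Configuration {
    public static final Configuration INSTANCE = new Configuration();

    public String get(String key) {
        return "value-for-" + key;
    }
}

// Wrapping the global in a class turns it into an ordinary collaborator...
class Settings {
    private final Configuration configuration;

    Settings(Configuration configuration) {
        this.configuration = configuration;
    }

    String get(String key) {
        return configuration.get(key);
    }
}

// ...which can then be passed around the system and substituted in tests.
class ReportGenerator {
    private final Settings settings;

    ReportGenerator(Settings settings) {
        this.settings = settings;
    }

    String title() {
        return settings.get("report.title");
    }
}
```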

Pat’s also written some notes about this session on his blog.

Written by Mark Needham

May 11th, 2011 at 1:35 pm

Posted in XP 2011


XP 2011: J.B. Rainsberger – A Simple Approach to Modular Design

with 4 comments

After finishing my own session at XP 2011 I attended the second half of J.B. Rainsberger’s tutorial on modular design.

For most of the time that I was there he drove out the design for a point of sale system in Java while showing how architectural patterns can emerge in the code just by focusing on improving names and removing duplication.

The second half of the session was much more interesting to watch as this was when J.B. had set all the context with the code and we could start to see the points that he was trying to make.

These were some of the interesting bits that I picked up:

  • J.B. talked a lot about being able to detect smells in code both mechanically and intuitively. The latter comes from our general feel of code based on our experience while the former comes from following a set of rules/heuristics. He wrote about this earlier in the year.

    For example we might feel intuitively that our tests are unclear to read while mechanically we can see that there is duplication between our tests and code which is what’s leading to the unclearness.

  • By removing duplication from the point of sale example code we ended up with the MVC pattern, albeit with the responsibilities in the wrong place, e.g. the model/controller both had some logic that would typically belong in the view.

    I’m curious as to whether other types of code would naturally lead towards another architectural pattern without us noticing.

    It would make sense if they did seeing as patterns are typically extracted when people see a common way of solving a particular problem.

  • J.B. encouraged us to use long names to help us see problems in the code. For example, naming something ‘priceAsText’ might help us see that we have primitive obsession, which we may or may not want to do something about.

    It was interesting how using longer/more descriptive names made it easier to see which bits of code were similar to each other even though it wasn’t initially obvious.

  • I hadn’t heard of temporal duplication which was defined as ‘unnecessarily repeating a step across the run time of a program’.

    In the example code we were creating a map of bar code -> price every time we called the method to scan the bar code, which was unnecessary; that map could be injected into the class and therefore only be created once (there’s a sketch of this after the list).

  • J.B. described his 3 values of software which he suggested he uses to explain why we need to keep our code in good shape:
    1. Features – lead to the customer making money in some shape or form
    2. Design – we want to try and keep the marginal cost of features low i.e. we want to build a code base where there is a similar cost no matter which feature we decide to implement next.
    3. Feedback – we want to get the customer to say “not what I meant” as soon as possible since they’re bound to say it at some stage i.e. we want to reduce the cost of a wrong decision
  • He drew an exponential curve showing the cost of software if we only focus on features. It was interesting to note that if you finish the project early enough then it might not be such a big deal.
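The temporal duplication bullet is the easiest one to see in code; here’s a reconstruction from memory, where the class and method names are mine rather than J.B.’s actual code:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

class BarCodeScanner {

    // Before: the bar code -> price map was rebuilt on every single scan.
    BigDecimal priceForRebuiltEachTime(String barCode) {
        Map<String, BigDecimal> prices = new HashMap<>();
        prices.put("12345", new BigDecimal("7.95"));
        prices.put("23456", new BigDecimal("12.50"));
        return prices.get(barCode);
    }

    // After: the map is injected, so it is created once rather than per scan.
    private final Map<String, BigDecimal> prices;

    BarCodeScanner(Map<String, BigDecimal> prices) {
        this.prices = prices;
    }

    BigDecimal priceFor(String barCode) {
        return prices.get(barCode);
    }
}
```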

I think it’s very easy to dismiss the importance of naming in our code because it seems so trivial.

After this session I can now see that I should be spending much more time than I currently do on naming, and that I have plenty of room for improvement.

Written by Mark Needham

May 11th, 2011 at 12:11 pm

Posted in XP 2011
