Mark Needham

Thoughts on Software Development

Archive for the ‘systems-thinking’ tag

Gaming the system: Some project examples

with 2 comments

Earlier this year Liz Keogh gave a talk at QCon London titled ‘Learning and Perverse Incentives: The Evil Hat’, in which she eventually encouraged people to try gaming the systems they take part in.

Over the last month or so we’ve had two different metrics visibly on show, making them prime targets for being gamed.

The first metric, which we included on our build radiator, shows how many commits each person has made to the git repository that day.

We originally created the metric to see which people were embracing git and committing locally, and which were still treating it like Subversion, only committing when they had something to push to the central repository.
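
To give an idea of how a radiator might calculate this, here’s a minimal sketch in Python. This isn’t our actual radiator code; I’m assuming a simple script that parses ‘git log’ on the central repository:

```python
import subprocess
from collections import Counter

def commits_today(repo_path="."):
    """Count today's commits per author by parsing 'git log'.

    An illustrative sketch, not the real radiator script: run against
    the central repository it only sees commits that have been pushed.
    """
    output = subprocess.check_output(
        ["git", "log", "--since=midnight", "--pretty=format:%an"],
        cwd=repo_path,
        text=True,
    )
    return Counter(line for line in output.splitlines() if line)

if __name__ == "__main__":
    for author, count in commits_today().most_common():
        print(f"{author}: {count}")
```

One wrinkle with any central script like this is that it only counts commits once they’ve been pushed, which is exactly why local commits are interesting to measure separately.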

We also wanted to encourage lots of small commits, since they make it easier for someone browsing ‘git log’ to see what’s happened over time just by glancing at the commit messages.

Bigger commits tend to mean that changes have been made in multiple places and perhaps not all those changes are related to each other.

Since we made that metric visible the number of commits has visibly increased, and it’s mostly been positive because people tend to push to the central repository quite frequently.

There have, however, been a couple of occasions where people have made 10-15 commits locally over the day, pushed them all at the end of the day and gone straight to the top of the leader board.


The disadvantage of this approach is that the rest of the team isn’t integrating with your changes until right at the end of the day, which can lead to merge hell for them.

There have also been times when people’s counts have artificially increased because they’ve checked in, broken the build and then checked in again to fix it.

Our next trick will be to find a way to combine local commits and remote pushes into a single metric.

Another metric which we’ve recently made visible is the number of points that we’ve completed so far in the iteration.

Previously this data lived in our Project Manager’s head and in Mingle, but since a big part of how the team is judged is based on the number of points ‘achieved’, the team asked for the score to be made visible.

Since that happened, from my observation, we’ve ‘achieved’ or got very close to the planned velocity every week, whereas before it was a bit hit and miss.

I think that, subconsciously, the estimates made on stories have started to veer towards the cautious side, whereas previously they were probably more optimistic.

Another change in behaviour I’ve noticed is that when we’re near the end of an iteration people tend to postpone any technical tasks they have to do, instead keeping their focus on the story to ensure it gets completed in time.

We’ve also seen a couple of occasions where people stayed 2-3 hours longer on the last day of the iteration to ensure that stories got signed off so the points could be counted.

It’s been quite interesting to observe how behaviour can change when the visibility of metrics is increased, even though in the first case the metric is actually irrelevant to how the team is perceived.

Written by Mark Needham

October 26th, 2011 at 11:55 pm

Posted in Systems Thinking

Bounded Rationality

without comments

In ‘Thinking In Systems: A Primer’ one of the most interesting ideas that Donella Meadows describes is what Herbert Simon coined ‘bounded rationality’:

Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system.

Later on in the chapter the following idea is suggested:

If you become a manager, you probably will stop seeing labour as a deserving partner in production, and start seeing it as a cost to be minimised.

This helps explain something that I’ve noticed happen quite frequently.

Someone who was previously outside management gets pulled into a management position and ‘mysteriously’ starts acting exactly like all the others in that type of role rather than keeping a holistic view.

The strange thing is that we don’t expect this to happen. The person was on ‘our’ side very recently so surely they should be able to see both perspectives!

Esther Derby referred to this problem in her keynote at XP2011 where she talked about two different types of information that occur in a system:

  • Day to day information – this is possessed by people ‘on the ground’
  • System information – this is possessed by people ‘in management’

When the people who recently moved into a management position are challenged on this, they will often point out that “you can’t see the bigger picture”, which is true but still doesn’t account for the fact that they probably aren’t seeing it either!

We’re both just seeing different parts of the system.

Meadows goes on to point out that the design of the system tends to encourage this type of behaviour:

Seeing how individual decisions are rational within the bounds of the information available does not provide an excuse for narrow-minded behaviour. It provides an understanding of why that behaviour arises.

Taking out one individual from a position of bounded rationality and putting in another person is not likely to make much difference. Blaming the individual rarely helps create a more desirable outcome.

Meadows finishes this section of the book with the following suggestion, which I think is especially useful in a consulting environment where both consultants and management quite obviously tend to suffer from bounded rationality:

It’s amazing how quickly and easily behaviour changes can come, with even slight enlargement of bounded rationality, by providing better, more complete, timelier information.

I’ve seen various attempts at trying to help people enlarge their bounded rationality at ThoughtWorks, such as:

  • Presentations by the finance director showing where the revenue of the company gets spent
  • Management team members taking the time to have one on one discussions with consultants
  • Discussions about the sales pipeline and the types of work available in the market

I think if this type of thing happened more frequently then you’d probably see an enlargement of everyone’s bounded rationality, which would be useful for all involved!

Written by Mark Needham

June 26th, 2011 at 5:05 pm

Posted in Systems Thinking

“In what world does that make sense”

without comments

In her keynote at XP 2011 Esther Derby encouraged us to ask the question “in what world does that make sense?” whenever we encounter something which we consider to be stupid or ridiculous.

I didn’t think much of it at the time, but my colleague Pat Kua has been asking me the question whenever I describe something to him that I find confusing.

After about the third time I noticed that it’s quite a nice tool for getting us to reflect on the systems and feedback loops that may be encouraging the behaviour we’ve witnessed.

In one of our conversations I expressed confusion at the way something had been communicated in an email.

Answering the question made me think about why the person would go for that approach, and allowed me to see why what I initially thought was obvious actually wasn’t.

A common source of frustration for consultants at ThoughtWorks is travel and hotel arrangements, which are booked centrally.

People are often frustrated that they end up with a different hotel or flight than they would have preferred.

Asking the question in that case helped me understand that the people doing the booking reported to the finance manager and had been told to ensure costs didn’t exceed a certain level.

In ‘Thinking In Systems’ the author points out that people (mainly me) often have a very narrow view of the world, which from my experience leads to us committing the fundamental attribution error:

…the fundamental attribution error describes the tendency to over-value dispositional or personality-based explanations for the observed behaviours of others while under-valuing situational explanations for those behaviours.

The fundamental attribution error is most visible when people explain the behaviour of others. It does not explain interpretations of one’s own behaviour—where situational factors are often taken into consideration. This discrepancy is called the actor–observer bias.

Asking this question seems to help us avoid falling into this trap.

Now I just need to remember to ask myself the question instead of jumping up the ladder of inference to conclusions!

Written by Mark Needham

May 14th, 2011 at 9:12 pm

Posted in XP 2011

System Traps: Rule Beating

with one comment

Section five of ‘Thinking In Systems’ focuses on systems which produce “truly problematic behaviour”, and one of these so-called system traps is known as ‘rule beating’.

Rule beating occurs when the agents in a system take evasive action to get around the intent of rules in a system:

The letter of the law is met, the spirit of the law is not.

A common example of this in organisations is the way training budgets work.

Each individual is given a certain amount of money to spend each year, and if they don’t spend it then they lose it.

The tendency, therefore, is for people to ensure that they spend their budget even if it’s on a training course that they might not have otherwise been interested in.

In a way they are gaming the system.

As I understand it, the system was originally designed this way because the organisation wants to have a predictable cash flow for the year.

In a 200-person organisation where each person is given £2,000 to spend, that amounts to £400,000 over a 12-month period.

If the majority of people decided to not spend their training budget during one year and all decided to use it the next year then the organisation would lose the ability to predict cash flow accurately.

There could be £100,000 spent on training one year and then £700,000 the next, which could result in the organisation having to borrow money from the bank to cover it.
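
To make that swing concrete, here’s a quick back-of-the-envelope calculation in Python. The 150-person deferral figure is my own assumption, chosen to reproduce the £100,000/£700,000 numbers above:

```python
PEOPLE = 200
BUDGET = 2_000  # £ per person per year

# Assumption: 150 of the 200 people defer their budget for a year.
deferring = 150
spending_now = PEOPLE - deferring

year_one = spending_now * BUDGET          # only 50 budgets get spent
year_two = (PEOPLE + deferring) * BUDGET  # everyone, plus the deferred budgets

print(f"Year one spend: £{year_one:,}")  # Year one spend: £100,000
print(f"Year two spend: £{year_two:,}")  # Year two spend: £700,000
```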

In this case it’s not really a big system problem, but the system doesn’t encourage people to act in a way which is in their interests or the organisation’s.

If a person doesn’t feel the need to spend the budget one year but then suddenly gets really interested in a topic and wants to attend some conferences on it, the year-by-year system won’t let them.

As an aside, when Pat and I were discussing this system I was curious why not every agent in the system behaves in the way that the system seems to encourage, i.e. some people won’t spend the training budget just for the sake of it.

Pat pointed out that this is why Deming said 95% of the problems are caused by the system and not 100% – there is still space for the individual to behave differently regardless of the system they’re in.

I understand the logic and don’t do that particular thing myself, but I do make sure I take all my vacation time each year because that also doesn’t roll over!

At XP 2011 Brian Marick spoke about gift-based and transaction-based economies. At the moment the training budget would be the latter, but Pat suggested it would be interesting to see if the former approach would work better.

If that approach was followed then it would be more trust-based, i.e. people would be trusted to use the ‘gift’ of training however they saw fit, without a rule restricting how or when they could do so.

There would of course need to be some sort of checks/measurements in place to ensure that people didn’t abuse the system.

In the book ‘Maverick’, Ricardo Semler suggested that only 3% of people would be problematic, and that you could deal with those people individually instead of putting rules in place for everyone.

I would be really interested to see whether a trust-based system would actually work, but I guess it’s probably considered a bit of a risk for an organisation to try it out.

Written by Mark Needham

May 14th, 2011 at 9:02 pm

Posted in Systems Thinking

Feedback Loops: Human Decisions

without comments

I’ve been reading Donella Meadows’ ‘Thinking In Systems: A Primer’, an introductory text on systems thinking, and after 30 pages or so the author poses the following challenge:

Sometimes I challenge my students to try to think of any human decision that occurs without a feedback loop – that is, a decision that is made without regard to any information about the level of stock that it influences

Meadows has quite a nice way of guiding us to thinking about systems by referring to ‘stocks’ and ‘flows’.

A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A system stock is just what it sounds like: a store, a quantity, an accumulation of material or information that has built up over time.

Stocks change over time through the actions of a flow. Flows are filling and draining, births and deaths, purchases and sales, growth and decay, deposits and withdrawals, successes and failures.

For example, the following diagram represents the flows into and out of a water reservoir:

[Diagram: a water reservoir stock, with rain and river inflows and evaporation and discharge outflows]

The ‘water in reservoir’ is the stock in this system, while rain and river inflow act as the inflows and evaporation and discharge are the outflows.

This is a reasonably simple example because it doesn’t show any of the factors which might influence the system’s inflows or outflows.
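
To see how a stock accumulates its flows over time, here’s a minimal sketch of the reservoir in Python. The flow rates are invented for illustration; the book doesn’t give numbers for this diagram:

```python
# A minimal stock-and-flow sketch of the reservoir example.
stock = 1000.0  # water in reservoir (the stock)

for day in range(1, 8):
    rain, river_in = 20.0, 50.0          # inflows
    evaporation, discharge = 10.0, 55.0  # outflows

    # The stock only ever changes through its flows.
    stock += (rain + river_in) - (evaporation + discharge)
    print(f"day {day}: water in reservoir = {stock:.0f}")
```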

I talk with friends of mine reasonably frequently on instant messenger, so I thought it’d be interesting to see how that system fits into this model:

[Diagram: instant messenger conversations modelled as a stock and flow system]

The curved arrows in the diagram are known as information links and they direct the action in the system.

In this example ‘desired knowledge level’ and ‘knowledge of friends’ lives’ are compared to each other, and any discrepancy between them can be fixed through instant messenger conversations, which increase the ‘communication with friends’ inflow.

I think the ‘stock’ in this system is the desire to know more about what my friends are up to but I’m sure there are other ways of looking at this relationship as well.
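
As a rough sketch of that balancing loop, again with invented numbers, the discrepancy between desired and actual knowledge drives the inflow, while the knowledge itself decays as friends’ lives move on:

```python
# A balancing feedback loop: the information link compares the stock
# against a goal, and the gap drives the inflow. Numbers are invented.
desired_knowledge = 10.0
knowledge = 4.0  # knowledge of friends' lives (the stock)

for week in range(1, 6):
    discrepancy = desired_knowledge - knowledge
    if discrepancy > 1.0:
        knowledge += 0.5 * discrepancy  # conversations close the gap
    knowledge *= 0.9  # knowledge goes stale over time
    print(f"week {week}: knowledge = {knowledge:.1f}")
```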

It feels a little bit ‘unhuman’ to think of things in terms of stocks and flows, and I doubt most people consciously think about stock levels when deciding to have a conversation! The feedback loop is much more implicit.

When I discussed this with Sat, he pointed out that a more accurate systems diagram would include other feedback loops as well, covering things like how busy you are, how interesting the conversation is, how tired you are and so on.

The book does get onto that but I hadn’t realised that such a simple human decision was in fact being influenced by a feedback loop!

Written by Mark Needham

May 5th, 2011 at 6:04 pm

Chris Argyris: Espoused Theory vs Theory in Action

with 6 comments

Via some combination of Christian Blunden, Pat Kua, David Joyce and Benjamin Mitchell I’ve been spending some time lately reading about the work of Chris Argyris.

I’d previously come across his name while reading ‘The Fifth Discipline’ but didn’t realise how interesting his work actually is.

One of the interesting concepts I’ve come across so far is the difference between espoused theory and theory in use:

Espoused theory

The world view and values people believe their behaviour is based on.

Theory-in-use

The world view and values implied by their behaviour, or the maps they use to take action.

There are two areas that really stood out for me when I read these definitions.

Interviews

In face-to-face interviews the candidate is likely to give answers based on their espoused theory of the world, and hence can come across as very good if they know the type of answers you’re looking for.

I’ve interviewed a couple of people over the last few years where I couldn’t find fault with any of the answers given, yet I was convinced they weren’t giving me an accurate picture of the candidate. The answers were too perfect.

Luckily we have an opportunity to get a closer look at a candidate’s theory in action in the pair programming interview that we do.

I believe HashRocket take this even further by having candidates pair with their team for a week before they potentially get hired.

Knowledge vs Experience

In a recent conversation with Dave Cameron I was telling him about the difficulties I’d been having in applying the ideas I’d read in ‘Crucial Confrontations’, ‘Agile Coaching’ and ‘Fearless Change’.

I understand the ideas that the books suggest, but in a real-life situation I nearly always make a mistake.

Dave pointed out that this is the difference between knowledge and experience – just because you know what to do doesn’t mean that you will do it unless you’ve had some experience of the situation before.

This sounds pretty similar to the difference between espoused theory and theory in action – I know what I want to do in a situation, but at the moment that isn’t what I actually do.

Written by Mark Needham

January 13th, 2011 at 8:02 pm

Posted in Systems Thinking
