Archive for the ‘Systems Thinking’ Category
Earlier this year Liz Keogh gave a talk at QCon London titled ‘Learning and Perverse Incentives: The Evil Hat’ where she eventually encouraged people to try and game the systems that they take part in.
Over the last month or so we’ve had two different metrics visibly on show, making them prime targets for being gamed.
The first metric is one we included on our build radiator which shows how many commits to the git repository each person has for that day.
We originally created the metric to try and see which people were embracing git and committing locally and which were still treating it like Subversion and only committing when they had something to push to the central repository.
The other behaviour we wanted to encourage is lots of small commits, since that makes it easier for someone browsing ‘git log’ to see what’s happened over time just from glancing at the commit messages.
Bigger commits tend to mean that changes have been made in multiple places and perhaps not all those changes are related to each other.
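The post doesn’t show how the radiator actually calculated this, but a minimal sketch of the underlying count might look something like the following; the function name and the use of ‘git log --since=midnight’ are my assumptions, not the team’s real implementation:

```python
import subprocess
from collections import Counter

def commits_today_by_author(repo_path="."):
    """Count today's commits per author by parsing 'git log' output."""
    log = subprocess.check_output(
        ["git", "log", "--all", "--since=midnight", "--pretty=format:%an"],
        cwd=repo_path, text=True)
    return Counter(name for name in log.splitlines() if name)

if __name__ == "__main__":
    # Print a simple leader board, most commits first
    for author, count in commits_today_by_author().most_common():
        print(f"{count:4d}  {author}")
```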
Since we made that metric visible the number of commits has visibly increased, and it’s mostly been positive because people tend to push to the central repository quite frequently.
There have, however, been a couple of occasions where people have made 10-15 commits locally over the day, pushed them all at the end of the day, and gone straight to the top of the leader board.
The disadvantage of this approach is that it means other people on the team aren’t integrating with your changes until right at the end of the day which can lead to merge hell for them.
There have also been some times when people’s count has artificially increased because they’ve checked in, broken the build, and then checked in again to fix it.
We’re going to try to find a way to combine local commits and remote pushes into a single metric as our next trick.
Another metric which we’ve recently made visible is the number of points that we’ve completed so far in the iteration.
Previously we’ve had this data available in our Project Manager’s head and in Mingle but since a big part of how the team is judged is based on the number of points ‘achieved’ the team asked for the score to be made visible.
Since that happened, from my observation, we’ve ‘achieved’ or got very close to the planned velocity every week, whereas before it was a bit hit and miss.
I think subconsciously the estimates made on stories have started to veer towards the cautious side, whereas previously they were probably more optimistic.
Another change in behaviour I’ve noticed is that people tend to postpone any technical tasks they have to do when we’re near the end of an iteration and instead keep focus on the story to ensure it gets completed in time.
We’ve also seen a couple of occasions where people stayed 2-3 hours longer on the last day of the iteration to ensure that stories got signed off so the points could be counted.
It’s been quite interesting to observe how behaviour can change based on increasing the visibility of metrics, even when, as in the first case, the metric is actually irrelevant to how the team is perceived.
Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system.
Later on in the chapter the following idea is suggested:
If you become a manager, you probably will stop seeing labour as a deserving partner in production, and start seeing it as a cost to be minimised.
This helps explain something that I’ve noticed happen quite frequently.
Someone who was previously in a non-management role gets pulled into a management position and ‘mysteriously’ starts acting exactly like all the others in that type of role rather than having a holistic view.
The strange thing is that we don’t expect this to happen. The person was on ‘our’ side very recently so surely they should be able to see both perspectives!
Esther Derby referred to this problem in her keynote at XP2011 where she talked about two different types of information that occur in a system:
- Day to day information – this is possessed by people ‘on the ground’
- System information – this is possessed by people ‘in management’
When the people who recently moved into a management position are challenged on this they will often point out that “you can’t see the bigger picture” which is true but still doesn’t account for the fact that they probably aren’t seeing it either!
We’re both just seeing different parts of the system.
Meadows goes on to point out that the design of the system tends to encourage this type of behaviour:
Seeing how individual decisions are rational within the bounds of the information available does not provide an excuse for narrow-minded behaviour. It provides an understanding of why that behaviour arises.
Taking out one individual from a position of bounded rationality and putting in another person is not likely to make much difference. Blaming the individual rarely helps create a more desirable outcome.
Meadows finishes this section of the book with the following suggestion which I think is especially useful in a consulting environment where both consultants and management quite obviously tend to suffer from bounded rationality.
It’s amazing how quickly and easily behaviour changes can come, with even slight enlargement of bounded rationality, by providing better, more complete, timelier information.
I’ve seen various attempts at trying to help people enlarge their bounded rationality at ThoughtWorks, such as:
- Presentations by the finance director showing where the revenue of the company gets spent
- Management team members taking the time to have one on one discussions with consultants
- Discussions about the sales pipeline and the types of work available in the market
I think if this type of thing happened more frequently then you’d probably see an enlargement of everyone’s bounded rationality which would be useful for all involved!
In ‘Thinking In Systems’, section five focuses on systems which produce “truly problematic behaviour”, and one of these so-called system traps is known as ‘rule beating’.
Rule beating occurs when the agents in a system take evasive action to get around the intent of rules in a system:
The letter of the law is met, the spirit of the law is not.
A common system where we see this in organisations is around training budgets.
Each individual will be given a certain amount of money to spend each year and if they don’t spend it then they lose it.
The tendency, therefore, is for people to ensure that they spend their budget even if it’s on a training course that they might not have otherwise been interested in.
In a way they are gaming the system.
As I understand it, the system was originally designed this way because the organisation wants to have a predictable cash flow for the year.
In a 200 person organisation where each person is given £2,000 to spend, that amounts to £400,000 over a 12 month period.
If the majority of people decided to not spend their training budget during one year and all decided to use it the next year then the organisation would lose the ability to predict cash flow accurately.
There could be £100,000 being spent on training one year and then £700,000 the next which could result in the organisation having to borrow money from the bank in order to cover it.
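To make that swing concrete, here’s a toy calculation using the numbers above; the figure of 150 people deferring their budget is invented purely to reproduce the £100,000/£700,000 example:

```python
# The steady state: 200 people x £2,000 = £400,000 per year
people, budget = 200, 2000
steady_spend = people * budget
print(steady_spend)        # 400000

# Hypothetical: 150 people defer their budget by a year
deferrers = 150
year_one = (people - deferrers) * budget      # only 50 people spend
year_two = steady_spend + deferrers * budget  # everyone spends, plus the backlog
print(year_one, year_two)  # 100000 700000
```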
In this case it’s not really a big system problem, but the system doesn’t encourage people to act in a way which is in their interests or the organisation’s.
If a person doesn’t feel the need to spend the budget one year but then suddenly gets really interested in a topic and wants to attend some conferences on it then they won’t be able to under the year by year system.
As an aside, when Pat and I were discussing this system I was curious why every agent in the system wouldn’t behave in the way that the system seems to encourage, i.e. some people won’t spend the training budget just for the sake of it.
Pat pointed out that this is why Deming said 95% of the problems are caused by the system and not 100% – there is still space for the individual to behave differently regardless of the system they’re in.
I understand the logic and don’t do that particular thing myself, but I do make sure that I take all my vacation time each year because that also doesn’t roll over!
At XP 2011 Brian Marick spoke about gift based and transaction based economies. At the moment the training budget would be the latter but Pat suggested it would be interesting to see if the former approach would work better.
If that approach was followed then it would be more trust based i.e. people would be trusted to use the ‘gift’ of training however they saw fit without the need for a rule restricting how/when they could do so.
There would of course need to be some sort of checks/measurements in place to ensure that people didn’t abuse the system.
In the book ‘Maverick’, Ricardo Semler suggested that only 3% of people would be problematic, so you could deal with those people instead of putting rules in place for everyone.
I would be really interested to see whether a trust based system would actually work but I guess it’s probably considered a bit of a risk for an organisation to try it out.
I’ve been reading Donella Meadows’ ‘Thinking In Systems: A Primer’, an introductory text on systems thinking, and after 30 pages or so the author poses the following challenge:
Sometimes I challenge my students to try to think of any human decision that occurs without a feedback loop – that is, a decision that is made without regard to any information about the level of stock that it influences.
Meadows has quite a nice way of guiding us to thinking about systems by referring to ‘stocks’ and ‘flows’.
A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A system stock is just what it sounds like: a store, a quantity, an accumulation of material or information that has built up over time.
Stocks change over time through the actions of a flow. Flows are filling and draining, births and deaths, purchases and sales, growth and decay, deposits and withdrawals, successes and failures.
For example the following diagram represents the flows into and out from a water reservoir:
The ‘water in reservoir’ is the stock in this system, while rain/river inflow act as the inflows and evaporation/discharge as the outflows.
This is a reasonably simple example because it doesn’t show any of the factors which might impact the system’s inflows or outflows.
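The book describes this with a diagram rather than code, but the same idea can be expressed as a rough discrete-time simulation; all the flow rates here are made-up illustrative numbers, not figures from the book:

```python
# A minimal sketch of the reservoir system: the stock changes each
# step by (inflows - outflows).
def simulate_reservoir(stock=1000.0, steps=10,
                       rain=5.0, river_in=20.0,
                       evaporation=3.0, discharge=18.0):
    for _ in range(steps):
        inflow = rain + river_in
        outflow = evaporation + discharge
        stock += inflow - outflow    # a stock simply accumulates its flows
    return stock

print(simulate_reservoir())  # net +4 per step -> 1040.0 after 10 steps
```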
I talk with friends of mine reasonably frequently on instant messenger so I thought it’d be interesting to try and see how that system would fit into this model:
The curved arrows in the diagram are known as information links and they direct the action in the system.
In this example we have “desired knowledge level” and “knowledge of friends’ lives”, which are compared to each other and lead to a discrepancy; that discrepancy can be fixed through instant messenger conversations, which increase the ‘communication with friends’ inflow.
I think the ‘stock’ in this system is the desire to know more about what my friends are up to but I’m sure there are other ways of looking at this relationship as well.
It feels a little bit ‘unhuman’ to think of things in terms of stocks/flows, and I doubt most people think about stock levels consciously when deciding to have a conversation! The feedback loop is much more implicit.
When discussing this with Sat he pointed out that in a more accurate systems diagram there would also be other feedback loops which might include how busy we are, how interesting the conversation is, how tired you are and so on.
The book does get onto that but I hadn’t realised that such a simple human decision was in fact being influenced by a feedback loop!
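Purely as an illustration, a minimal sketch of that balancing loop might look like this; the numbers and the linear ‘bigger gap means more conversation’ rule are my own assumptions rather than anything from the book:

```python
# The gap between desired and actual knowledge drives conversations,
# which top the stock back up while it also decays over time.
def simulate_knowledge(desired=10.0, actual=2.0, decay=0.2, weeks=8):
    for week in range(weeks):
        discrepancy = desired - actual
        conversation = 0.5 * discrepancy   # bigger gap -> more chat (my assumption)
        actual += conversation - decay * actual
        print(f"week {week}: knowledge of friends' lives = {actual:.1f}")

simulate_knowledge()
```

The stock never quite reaches the desired level here because knowledge keeps draining away, which is exactly why the conversations keep happening.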
I’ve previously come across Chris Argyris’ name while reading The Fifth Discipline but I didn’t realise how interesting his work actually is.
One of the interesting concepts I’ve come across so far is the difference between espoused theory and theory in use:
Espoused theory – the world view and values people believe their behaviour is based on.
Theory in use – the world view and values implied by their behaviour, or the maps they use to take action.
There are two areas that really stood out for me when I read these definitions.
Interviews
In face to face interviews the candidate is likely to give an answer based on their espoused theory of the world and hence can come across as being very good if they know the type of answers you’re looking for.
I’ve interviewed a couple of people over the last few years where I couldn’t find fault with any of the answers being given but I was convinced that they weren’t giving me an accurate picture of the candidate. The answers were too perfect.
Luckily we have an opportunity to get a closer look at a candidate’s theory in use in the pair programming interview that we do.
I believe HashRocket take this even further by having candidates pair with their team for a week before they potentially get hired.
Knowledge vs Experience
I understand the ideas that the books are suggesting but in a real life situation I nearly always make a mistake.
Dave pointed out that this is the difference between knowledge and experience – just because you know what to do doesn’t mean that you will do it unless you’ve had some experience of the situation before.
This sounds pretty similar to the difference between espoused theory and theory in use – I know what I want to do in a situation but at the moment that isn’t what I actually do.
Something which I’ve become fairly convinced about recently is that the environment that someone works in has far more impact on their perceived performance than their own individual skills.
Given that belief, I’ve often got stuck trying to answer why some people are better able to handle a difficult environment than others – in terms of accepting the situation and finding a way of being productive regardless.
Does this mean that they’re better than people who can’t work in that environment as effectively?
That’s certainly a judgement that I’ve made previously but after discussing this with Danilo Sato over instant messenger and Pat Kua & Esther Derby over twitter I can now see that I’m more than likely wrong.
Danilo pointed out that it doesn’t actually mean that they’re better, it just means that they’re better at coping.
If we work on improving the system then perhaps we can allow everyone to work more productively.
Esther has a similar view:
@markhneedham Sure. ppl have different ways of coping. Why not improve the system so everyone can do better?
@markhneedham different strengths and interest at play. Emergent behaviour based on individual and environment
@markhneedham also people have different coping mechanisms and thresholds for tolerance/intolerance
Prior to this conversation I somehow hadn’t considered the benefits we can get from putting people in environments which allow them to play to their strengths.
I think we do this reasonably well when interviewing where one of the key criteria is to consider whether the candidate would enjoy working in the organisation’s environment.
Beyond that perhaps not so well because it’s implicitly assumed that whoever is hired should be able to operate effectively regardless of the environment.
Pat also pointed out that while it is good to work out how to get people into their optimal environment we shouldn’t forget that, as difficult as it may be, improving the system we’re currently working in can also be effective:
@patkua ok so it sounds like what you’re saying is we should look to try and place people in environments which are best suited to them
@markhneedham I’m saying that’s one possibility. Changing the system is another. We should be pursuing both strategies.
One of the things that I’ve noticed while working with various colleagues over the last few years is that the more experienced ones are much more skilled at making slight adjustments to their approach based on feedback that they receive from the environment.
I’ve been reading a couple of books on systems thinking over the last few months, and one of the takeaways for me has been that we need to be careful when reacting to feedback we get from a system, to ensure that we don’t overcompensate and end up creating a new problem for ourselves instead.
The idea of overcompensating is also known as ‘chasing the gauges’ in the airline business, where it describes the following situation:
When you roll an aircraft left or right, pitch it up or down, change the throttle or the brakes, it takes time for the plane to “settle out”.
Good pilots fly by making a change, then waiting a couple of seconds to see the results. If you don’t you’ll just “chase gauges” that are themselves still changing.
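As a rough illustration of that idea, here’s a toy simulation (all numbers invented) of correcting against a gauge that lags the true state of the system we’re adjusting; an aggressive correction on every tick overshoots and oscillates, while smaller steps settle calmly:

```python
def simulate(gain, value=10.0, lag=0.5, steps=12):
    reading, history = value, []
    for _ in range(steps):
        reading += lag * (value - reading)   # the gauge only slowly catches up
        value -= gain * reading              # our correction, aimed at zero
        history.append(round(value, 1))
    return history

print(simulate(gain=2.0))   # chases the gauge: overshoots to -10 and oscillates
print(simulate(gain=0.3))   # small steps: settles with barely any overshoot
```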
In terms of software development a mistake that I’ve made before is to see something go ‘wrong’ in a situation and then come up with a ‘solution’ to that which effectively means doing something completely different to what we’re doing now.
We had an example of this on a project I worked on where we ended up doing a lot of re-work in one iteration because several people were working in a similar part of the code base and ended up trampling all over each other.
In the next iteration we had some stories which would touch a similar part of the code base and I thought it would make sense to split the work by front and back end rather than having each pair work on one story.
While this removed the re-work problem the unfortunate side effect was that it meant we had no visibility about either of the stories until one day before the end of the iteration.
A colleague pointed out that it might have been more effective to wait for a pattern to emerge before trying to make a correction.
In this case the solution probably didn’t need to be as dramatic as ensuring that people didn’t touch the same areas of the code base.
A more effective approach might have been to just have the pairs sitting next to each other and ensure that they communicated which bits of the code they were changing.
Tying that in with frequent commits would probably have removed the problem of re-work and still allowed us to keep the visibility of individual stories.
It takes a bit more discipline not to overcompensate, and I think in a way it goes against the human instinct to try and fix a problem as soon as we see it.
I’m trying to move more towards the following approach:
- Wait and watch situations for longer before taking action
- Ensure any action isn’t too dramatic i.e. small steps
- Watch to see how the system responds to the change
- Try something else if necessary
It is difficult though and I certainly make a lot of mistakes.