Archive for the ‘Agile’ Category
At the start of most of the retrospectives I’ve been part of, we’ve followed the safety check ritual, whereby each participant writes a number from 1-5 on a sticky note describing how they’ll be participating in the retrospective.
1 means you’ll probably keep quiet and not say much, 5 means you’re perfectly comfortable saying anything and the other numbers fall in between those two extremes.
In my experience it’s a bit of a fruitless exercise because it’s viewed that a higher number is ‘better’, so the minimum people will tend to write down is ‘3’ because they don’t want to stand out or cause a problem.
I’ve been in retrospectives where the majority of people were a ‘4’ but then when it came to a contentious topic only 1 or 2 people contributed, so I’ve become a bit sceptical of this safety check approach!
An alternative we used recently is the ESVP check-in described in Derby and Larsen’s ‘Agile Retrospectives’, where each person identifies with one of four roles:
- Explorers are eager to discover new ideas and insights. They want to learn everything they can about the iteration/release/project.
- Shoppers will look over all the available information, and will be happy to go home with one useful new idea.
- Vacationers aren’t interested in the work of the retrospective, but are happy to be away from the daily grind. They may pay attention some of the time, but are mostly glad to be out of the office.
- Prisoners feel that they’ve been forced to attend and would rather be doing something else.
Phil pointed out that in this case each of the possible categories describes how people will contribute to the retrospective whereas the traditional safety check is more about their degree of comfort with the environment.
I don’t have enough data points to declare that this is better than the 1-5 approach but from my experiences it seems that there’s less pressure on people to fit in one of the categories since none is ‘better’ than the other, just different.
In a recent retrospective where we used this it led to a conversation where people explained why they had chosen a particular category, which I thought was quite fascinating and something I haven’t seen happen with the numbers approach.
My instinct is that someone who actually felt like a ‘Prisoner’ might not categorise themselves as one, but I haven’t seen this approach used enough to say that for certain!
A couple of weeks ago Joshua Kerievsky wrote a post describing how he and his teams don’t use story points anymore because of the problems they’d had with them which included:
- Story Point Inflation – inflating estimates of stories so that the velocity for an iteration is higher
- Comparing teams by points – judging comparative performance of teams by how many points they’re able to complete
On the team I’m currently working on we still estimate the relative size of stories using points but we don’t use velocity per iteration to keep score – most of the time it’s barely even mentioned.
Instead for the past couple of months we’ve just been using the velocity to see whether or not we were going to achieve the minimum viable infrastructure (MVI) that we needed to have done before launch.
The nice thing on this occasion was that the MVI was actually reasonably flexible: some bits of it were ‘nice to have’ which would make our lives easier if we had the time, but didn’t have to be there for launch.
As it turned out we ended up finishing the cards required for the MVI a couple of weeks early which left us some flexibility/time to do things which had been forgotten or cropped up late on because something else didn’t work.
We did have a few weeks where our velocity fluctuated massively but an interesting observation which Phil made at the time was that despite the points totals being different we’d actually completed roughly the same number of cards.
Joshua describes the same thing in his post when detailing an email sent to the Extreme Programming list by Don Wells:
We have been counting items done. Each week we just choose the most important items and sign up for them up to the number from last week. It turns out that we get about the same number of them done regardless of estimated effort.
This approach was described to me a few years ago by Julio Maia but, as I understand it, it works on the assumption that we can break the work down into chunks which are roughly the same size, which in my experience is very difficult to do.
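Don Wells’ observation can be illustrated with a quick sketch (all numbers here are invented, not from any real team): if the number of cards completed per week stays fairly stable while the point totals bounce around, forecasting by card count is about as informative as summing estimates.

```python
# Illustrative only: invented per-week data for a hypothetical team.
# Each inner list holds the point estimates of the cards finished that week.
weeks = [
    [2, 3, 1, 2, 5],     # 5 cards, 13 points
    [1, 2, 2, 3, 3],     # 5 cards, 11 points
    [5, 3, 2, 1, 1, 2],  # 6 cards, 14 points
    [2, 2, 3, 1],        # 4 cards,  8 points
]

cards_per_week = [len(w) for w in weeks]    # throughput, ignoring estimates
points_per_week = [sum(w) for w in weeks]   # classic velocity

avg_cards = sum(cards_per_week) / len(weeks)
avg_points = sum(points_per_week) / len(weeks)

remaining_cards = 30  # invented backlog size
print(f"Forecast by card count: {remaining_cards / avg_cards:.1f} weeks")
print(f"Average velocity:       {avg_points:.1f} points/week")
```

The point totals here range from 8 to 14 while the card counts only range from 4 to 6, so a forecast based on card throughput is at least as steady as one based on points.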
Whatever approach we end up using I think it’s important that we don’t praise or criticise the team based on the velocity achieved that week.
I’ve worked on numerous teams where the project manager will praise the team in the standup for achieving a velocity higher than normal even though nothing has changed which would account for the increase.
In fact the change in velocity can probably be accounted for by normal variance.
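To make the ‘normal variance’ point concrete, here’s a sketch (with invented velocity numbers) of a simple control-chart-style check: a week’s velocity only deserves comment if it falls outside a band of roughly two standard deviations around the recent mean.

```python
import statistics

# Invented weekly velocities for a hypothetical team.
velocities = [21, 18, 25, 19, 23, 17, 24, 20]

mean = statistics.mean(velocities)
stdev = statistics.stdev(velocities)

# Anything inside mean +/- 2 standard deviations is ordinary fluctuation,
# not cause for praise or criticism.
lower, upper = mean - 2 * stdev, mean + 2 * stdev

this_week = 26  # invented: higher than usual, but is it *unusually* high?
unusual = not (lower <= this_week <= upper)
print(f"band: {lower:.1f}..{upper:.1f}, this week: {this_week}, unusual: {unusual}")
```

With these numbers, 26 points looks like a good week but still sits inside the band, so praising the team for it would just be praising noise.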
If we don’t do that and just use the story points as a tool for roughly judging how much work we can complete in a time period then they’re not so horrendous.
Easier said than done of course!
A few weeks ago a slide deck from an Esther Derby presentation on retrospectives was doing the rounds on Twitter, and one thing that I found interesting in the deck was the suggestion that a retrospective needs to be focused in some way.
I’ve participated in a few focused retrospectives over the past 7/8 months and I think there are some things to be careful about when we decide to focus on something specific rather than just looking back at a time period in general.
In a retrospective about 6 months ago or so we focused on the analysis part of our process as we’d been struggling to know when a story was complete and what exactly its scope was.
The intention wasn’t to victimise the people working in that role, but since there were very few of them compared to people in other roles, they were forced onto the defensive as people criticised their work.
It was a very awkward retrospective and it felt like a retrospective was probably the wrong place to address the problem.
It might have been better for the analysts to have been given the feedback privately and then perhaps worked on a solution with a smaller group of people.
Looking for a problem when there isn’t one
I had an interesting conversation with a colleague about whether very focused retrospectives lead us to look for something to change even when there’s no specific pain point which necessitates change.
The problem with this is that there’s a thin line between following the status quo because it works and getting complacent and not looking for ways to improve.
It is interesting to keep in mind though that if it doesn’t seem like there is something to change in an area then perhaps that’s the wrong thing to be focusing on at the moment, which nicely leads into…
Let the team choose the area of focus
There can be a tendency in the teams I’ve worked on for people in managementy roles to dictate what the focus of the retrospective will be which makes sense in a way since they may be able to see something which the team can’t.
On the other hand it can mean that we end up focusing on the wrong thing and team members probably won’t be that engaged in the retrospective since they don’t really get to dictate what’s talked about.
Esther points this out on slide 23 of the presentation – “Choose a focus that reflects what’s going on for the team“. This could perhaps be determined by having a vote beforehand based on some topics that seem prominent.
There are lots of other useful tips in Esther’s slide deck which are worth a look, and I’m sure most of the potential problems I’ve listed don’t happen when there’s a highly skilled/experienced facilitator.
A few weeks ago Chris Matts wrote an interesting blog post ‘the language of risk‘ in which he describes an approach he used to explain the processes his team uses to an auditor.
Why did the auditor like what I said?
Because I explained everything we did in terms of risk. When they asked for a “process”, I explained the risk the process was meant to address. I then explained how our different process addressed the risk more effectively.
This seems like a pretty cool idea to me and it got me thinking of the different ‘processes’ we’ve used in teams I’ve worked on and what risks they might be addressing:
- Pair Programming
- Becoming dependent on one person with respect to knowledge of part of the code base.
- Having someone new working on an area of the code that they don’t know well and making a mistake.
- Making the same mistakes repeatedly/working in a way that (indirectly) wastes money.
- Story Kick Off
- Building the wrong thing
- Solving the business problem in an inefficient way
- Building something which is very difficult to test
- Stand Up
- Someone getting stuck on something which someone else in the group might be able to help with.
- People going down rabbit holes and getting stuck on things that don’t really matter
- Show Case
- Building the wrong thing for too long
- Automated testing
- The application regresses as new functionality is added
- Humans make mistakes when manually going through scenarios
That’s just a first attempt at this, I’m sure others could come up with something better!
In coming up with the list I’ve been working from a process which I’ve seen used and trying to work out what risk that might be addressing.
Chris seems to look at risks/processes the other way around, i.e. we think about what risks we need to address and then work out whether we need a process to address them and, if so, which one.
Taking that approach would help to explain why some teams don’t necessarily need a lot of process – the risks might be catered for in different ways or maybe they just don’t exist in specific contexts.
For example a lot of risks around communication go away if the product owner and the team are sitting in the same physical location and can easily just turn and talk to each other if they have any questions.
Even with this new way of looking at risks/process I still think it’s useful to keep checking whether or not a process is still necessary because as our team/product changes the risks we face probably do as well.
Last week my colleague Pat Fornasier ran our team’s fortnightly retrospective and one of the exercises we did was ‘the 5 whys’.
I’ve always wanted to see how the 5 whys would pan out but could never see how to fit it into a normal retrospective.
Pat was able to do this by using the data gathered by an earlier timeline exercise where the team had to plot the main events that had happened over the last 6 months.
We ended up with 5 key areas and split into groups to explore those topics further.
The 5 Whys is a question-asking method used to explore the cause/effect relationships underlying a particular problem. Ultimately, the goal of applying the 5 Whys method is to determine a root cause of a defect or problem.
My group had to investigate the topic ‘Why are we so obsessed with points?’.
These were some of my observations from the exercise:
- It’s very easy to lose focus on the exercise and start talking about solutions or ideas when only a couple of whys have been followed.
Pat suggested that this problem could be solved by having a facilitator who helps keep the discussion on track.
- We went down a dead-end a few times where our 5th why ended up being something quite broad which we couldn’t do anything about.
We ended up going back up the chain of whys to see whether we could branch off a different way on any of them, and it was actually reasonably easy to think of other whys the further up you went.
- By going beyond surface reasons for things you actually end up with much more interesting conversations although I think it does also become a little bit more uncomfortable for people.
For example we ended up discussing what ‘minimum viable product’ actually means for us and a couple of the group had a much different opinion to the product owner. It would have been interesting if we’d been able to continue the discussion for longer.
- For our particular topic we ended up discussing why the deadline we have was set when it was and couldn’t really come up with any reason for why it couldn’t be changed other than we’d been told it couldn’t.
It would have been more interesting to have had the people external to the team who set the deadline in the room, so that we could understand whether there was more to it.
I tried looking for a video to see a real life example of a 5 whys discussion being facilitated but I wasn’t able to find one.
Perryn pointed me to a chat log on the cucumber wiki where Aslak asks the 5 whys to someone trying to articulate why they want to have a login feature in their application but I’d be interested in seeing more examples if anyone knows any.
I’ve attended a lot of different retrospectives over the last few years and one thing that seems to happen quite frequently is that a problem will be raised and there’s suddenly a massive urgency to find an action to match that problem.
As a result of this we don’t tend to go very deep into working out why that problem happened in the first place and how we can stop it happening again.
Any discussion tends to be quite shallow and doesn’t delve very far beyond the surface of the problem.
I’ve noticed that this tends to happen more when there are a lot of people in the retrospective and there’s a desire not to ‘waste’ everyone’s time which is understandable to some extent.
We recently had an iteration where there were a lot of stories going back and forth between the developers and testers which was leading to a lot of context switching for some developers.
Since it had felt very disruptive we tried to find some way of deciding when we should or shouldn’t context switch from the current story to fix bugs on earlier stories.
In hindsight it would have been more interesting to look at why that problem existed in the first place rather than directly addressing the problem.
In this case, as my colleague Chris pointed out, it might make more sense for a developer (pair) to go and work with a tester on the story until it was ready to be signed off rather than switching back and forth.
I’ve read about other retrospective formats such as the ‘five whys‘ which might help a team to dig deeper into the problems they’re facing but I’m curious whether it’d make sense to follow such a format with over 30 people attending.
We’d need to pick a sufficiently general problem to analyse so that everyone remained engaged.
I’d be curious whether anyone else has made a similar observation and how they made their retrospectives more effective.
I facilitated the latest retrospective my team had last week and decided to try The 4 L’s technique which I’d come across while browsing the ‘retrospectives’ tag on del.icio.us.
We had 4 posters around the room representing each of the L’s:
- Liked
- Learned
- Lacked
- Longed for
I’m not really a fan of the majority of a retrospective being dominated by a full group discussion as many people aren’t comfortable giving their opinions to that many people and therefore end up not participating at all.
I’ve seen much more participation if the facilitator tries to encourage less vocal people to give their opinions and if the first part of the retrospective is done in smaller groups.
We therefore started in groups of three where people discussed the previous iteration and came up with ideas which they stuck under each section. That lasted for around 15 minutes.
After that we split into groups of about 5 – one for each of the L’s – and each group spent 6/7 minutes grouping together the stickies and looking for any trends.
One member of each group then presented a summary of their section to the rest of the group and suggested what they thought the most important thing to discuss was.
Having gone around all of the groups we now had 30 minutes to discuss the 4 topics we’d identified. In fact two of them were the same so we only had 3!
My observation of this style of retrospective is that it seemed to achieve the goal of getting more people to participate. At least 2 or 3 people who have never spoken in one of our retrospectives before were giving their opinions to the whole group.
I was curious to see whether we’d cover all the topics that people wanted to discuss as I cut the whole group voting system which I’ve seen used in most retrospectives I’ve attended.
After we’d finished discussing the 3 main topics a couple of other points were raised which had both been on the ‘longed for’ wall.
We ended up just quickly agreeing to give these things a try for an iteration instead of having a prolonged discussion about the advantages/disadvantages of the idea.
Facilitation-wise, I think I could have been clearer with my instructions, as people were at times a bit confused about what exactly they were supposed to be doing.
I think it’s vital to get everyone in the group involved early on or they just zone out and their insight is lost.
I’d be interested in hearing other types of retrospectives people have run which allow you to do that.
I’ve been thinking a bit about Parkinson’s Law recently and its applicability in software development.
Parkinson’s law is defined as follows:
Parkinson’s Law is the adage first articulated by Cyril Northcote Parkinson as the first sentence of a humorous essay published in The Economist in 1955:
“Work expands so as to fill the time available for its completion”
My colleagues quite frequently reference this law with respect to stories taking the amount of time that reflects the story point estimate assigned to them.
I haven’t noticed this so much but I think we’re more susceptible to the law when what we’re working on doesn’t have a clear goal or doesn’t have a fast feedback loop.
One of the times where I think we run into this problem is in the agile ‘iteration zero’.
Iteration Zero is an iteration reserved at the start of a project for setting up project infrastructure, building a walking skeleton and other such tasks. It’s typically a week long.
While I think it’s useful to do some up-front work like this, what I’ve noticed from participating in several of these is that we probably don’t need a week for this type of work, even though it will easily fill a week if allowed to.
Despite my belief that the work fills the time, it’s interesting to notice that we still don’t get everything perfectly set up in iteration zero, and probably wouldn’t even if it were 2 weeks long instead.
We need to actually start driving some user functionality end to end and getting it deployed across our environments so that we can get proper feedback on the work that we’ve done.
For example one of the tasks of an iteration zero might be to have all the developer work stations ready to go by the time we start.
It’s certainly possible to get the workstations to a stage where we think we’ve got everything set up correctly, but it’s not until people are actually using them that we know for sure.
The idea of an iteration zero is still useful but we should try to keep it as short as possible and accept that we’re still going to be learning/spending time in the first few iterations on iteration zero type stuff and expect a ‘slower velocity’ accordingly.
One of the approaches that I like the best in retrospectives is when the facilitator splits the team into smaller groups during the brainstorming part of the retrospective.
I decided to try this out in a retrospective we ran after one week of ThoughtWorks University, using The Retrospective Starfish to provide a framework in which people could frame their thoughts.
Usually what I’ve seen happen in these mini groups is that everyone will write down their own ideas on stickies and then discuss them as a group but still put up all the stickies even if the group didn’t agree with everything.
Frankie observed that in this retrospective that hadn’t really happened and the mini groups had been reaching a consensus on their ideas before one or two people went and put them up on the wall.
I think this partly came down to my failure to explain the intent of the exercise clearly i.e. it is a brainstorming exercise rather than an analytical one.
An interesting side effect was that when people came and looked at the wall to see what the other groups had come up with they noticed quite a few things that they agreed with but hadn’t come up with in their group.
I think next time I’ll try and explain the exercise a bit more clearly to help keep people in the mindset of coming up with ideas.
It’s difficult to say if any important points were missed by people analysing so early but there was certainly less duplication of ideas than normal!
After a few recent conversations with colleagues as well as my observations of several projects I’m coming to the conclusion that the way that people react in situations often differs significantly depending on whether they’re working in a large or small team.
One of the most obvious ways that this manifests itself is when there comes a need for someone to volunteer to take care of something – be it a particular functional area, communication with the onshore team or something else.
In larger teams there often seems to be a noticeable silence as no-one, or only one of a very few people, offers to do it.
There seem to be two theories about why this happens:
- People assume that because the team is so big someone else will almost certainly take care of it so they needn’t bother.
- People assume that because the team is so big someone else is probably better placed than them to take care of it.
The problem that arises from people not taking care of things is that they don’t tend to feel as if they’re an important part of the team which invariably means that they don’t contribute as much as they could.
From what I’ve noticed this type of thing doesn’t seem to happen as much on smaller teams – there are just things to do and they tend to get distributed amongst the people on the team.
The concept/stigma of ‘volunteering’ to do something isn’t there.
I recently came across Lewin’s Equation which suggests the following:
Lewin’s Equation, B=ƒ(P,E), is [...] a heuristic designed by psychologist Kurt Lewin. It states that Behavior is a function of the Person and his or her Environment.
I think this is reasonably accurate and I’ve noticed people who were fairly anonymous in larger teams become amazingly effective when they were put on a much smaller one.
The ‘agile’ approach to software delivery tends to encourage smaller team sizes and the idea of creating collective responsibility seems to be a key part of why you’d want to do that.
A typical approach to achieving that would be to split the building of a system into smaller teams where each covered a specific stream of work.
The collective team will still need to work together at some stage to build the entire system but at least within their streams they can feel a sense of ownership.
From my experience it can sometimes be quite difficult to do that because it seems that all the streams of work are tightly coupled.
I think we need to really endeavour to find a way to break them up though because it will lead to a much happier team and most likely a more productive one.