A couple of weeks ago Joshua Kerievsky wrote a post describing how he and his teams don’t use story points anymore because of the problems they’d had with them, which included:
- Story Point Inflation – inflating estimates of stories so that the velocity for an iteration is higher
- Comparing teams by points – judging comparative performance of teams by how many points they’re able to complete
On the team I’m currently working on we still estimate the relative size of stories using points but we don’t use velocity per iteration to keep score – most of the time it’s barely even mentioned.
Instead for the past couple of months we’ve just been using the velocity to see whether or not we were going to achieve the minimum viable infrastructure (MVI) that we needed to have done before launch.
The nice thing on this occasion was that the MVI was actually reasonably flexible and there were bits of it which were ‘nice to have’ if we had the time and would make our life easier but didn’t have to be there to launch.
As it turned out we ended up finishing the cards required for the MVI a couple of weeks early which left us some flexibility/time to do things which had been forgotten or cropped up late on because something else didn’t work.
We did have a few weeks where our velocity fluctuated massively but an interesting observation which Phil made at the time was that despite the points totals being different we’d actually completed roughly the same number of cards.
Joshua describes the same thing in his post when detailing an email sent to the Extreme Programming list by Don Wells:
We have been counting items done. Each week we just choose the most important items and sign up for them up to the number from last week. It turns out that we get about the same number of them done regardless of estimated effort
This approach was described to me a few years ago by Julio Maia but, as I understand it, it works on the assumption that we can break the work down into chunks which are roughly the same size, which in my experience is very difficult to do.
Whatever approach we end up using I think it’s important that we don’t praise or criticise the team based on the velocity achieved that week.
I’ve worked on numerous teams where the project manager will praise the team in the standup for achieving a velocity higher than normal even though nothing has changed which would account for the increase.
In fact the change in velocity can probably be accounted for by normal variance.
If we don’t do that and just use the story points as a tool for roughly judging how much work we can complete in a time period then they’re not so horrendous.
Easier said than done of course!
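Phil’s observation (and Don Wells’ email) can be sketched with some iteration data – the numbers below are purely hypothetical, but they show the shape of it: points completed fluctuate a lot while card counts stay roughly stable, and even a ‘high’ velocity can sit comfortably within normal variance.

```python
from statistics import mean, stdev

# Hypothetical iteration history: (points completed, cards completed).
history = [(21, 8), (34, 9), (18, 8), (29, 7), (40, 8), (23, 9)]
points = [p for p, _ in history]
cards = [c for _, c in history]

def cv(xs):
    """Coefficient of variation: spread relative to the mean."""
    return stdev(xs) / mean(xs)

# Cards completed per iteration vary far less than points completed.
print(f"points cv: {cv(points):.2f}, cards cv: {cv(cards):.2f}")

# Even the 'spike' of 40 points sits inside mean +/- 2 standard
# deviations, i.e. normal variance rather than a real improvement.
assert max(points) < mean(points) + 2 * stdev(points)
```

So before praising (or worrying about) a single week’s number, it’s worth checking whether it even falls outside the team’s usual spread.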
A few weeks ago a slide deck from an Esther Derby presentation on retrospectives was doing the rounds on twitter and one thing that I found interesting in the deck was the suggestion that a retrospective needs to be focused in some way.
I’ve participated in a few focused retrospectives over the past 7/8 months and I think there are some things to be careful about when we decide to focus on something specific rather than just looking back at a time period in general.
In a retrospective about 6 months ago or so we focused on the analysis part of our process as we’d been struggling to know when a story was complete and what exactly its scope was.
The intention wasn’t to victimise the people working in that role but since there were very few of them compared to people in other roles they were forced onto the defensive as people criticised their work.
It was a very awkward retrospective and it felt like a retrospective was probably the wrong place to address the problem.
It might have been better for the analysts to have been given the feedback privately and then perhaps worked on a solution with a smaller group of people.
Looking for a problem when there isn’t one
I had an interesting conversation with a colleague about whether, with very focused retrospectives, we end up looking for something to change rather than responding to a specific pain point which necessitates change.
The problem with this is that there’s a thin line between following the status quo because it works and getting complacent and not looking for ways to improve.
It is interesting to keep in mind though that if it doesn’t seem like there is something to change in an area then perhaps that’s the wrong thing to be focusing on at the moment, which nicely leads into…
Let the team choose the area of focus
There can be a tendency in the teams I’ve worked on for people in managementy roles to dictate what the focus of the retrospective will be which makes sense in a way since they may be able to see something which the team can’t.
On the other hand it can mean that we end up focusing on the wrong thing and team members probably won’t be that engaged in the retrospective since they don’t really get to dictate what’s talked about.
Esther points this out on slide 23 of the presentation – “Choose a focus that reflects what’s going on for the team“. This can perhaps be determined by having a vote beforehand based on some topics that seem prominent.
There are lots of other useful tips in Esther’s slide deck which are worth having a look at, and I’m sure most of the potential problems I’ve listed probably don’t happen when we have a highly skilled/experienced facilitator.
A few weeks ago Chris Matts wrote an interesting blog post ‘the language of risk‘ in which he describes an approach he used to explain the processes his team uses to an auditor.
Why did the auditor like what I said?
Because I explained everything we did in terms of risk. When they asked for a “process”, I explained the risk the process was meant to address. I then explained how our different process addressed the risk more effectively.
This seems like a pretty cool idea to me and it got me thinking of the different ‘processes’ we’ve used in teams I’ve worked on and what risks they might be addressing:
- Pair Programming
- Becoming dependent on one person with respect to knowledge of part of the code base.
- Having someone new working on an area of the code that they don’t know well and making a mistake.
- Making the same mistakes repeatedly/working in a way that (indirectly) wastes money.
- Story Kick Off
- Building the wrong thing
- Solving the business problem in an inefficient way
- Building something which is very difficult to test
- Stand Up
- Someone getting stuck on something which someone else in the group might be able to help with.
- People going down rabbit holes and getting stuck on things that don’t really matter
- Show Case
- Building the wrong thing for too long
- Automated testing
- The application regresses as new functionality is added
- Humans make mistakes when manually going through scenarios
That’s just a first attempt at this, I’m sure others could come up with something better!
In coming up with the list I’ve been working from a process which I’ve seen used and trying to work out what risk it might be addressing.
Chris seems to look at risks/processes the other way around, i.e. we think about which risks we need to address and then work out whether we need a process to address them and, if so, which one.
Taking that approach would help to explain why some teams don’t necessarily need a lot of process – the risks might be catered for in different ways or maybe they just don’t exist in specific contexts.
For example a lot of risks around communication go away if the product owner and the team are sitting in the same physical location and can easily just turn and talk to each other if they have any questions.
Even with this new way of looking at risks/process I still think it’s useful to keep checking whether or not a process is still necessary because as our team/product changes the risks we face probably do as well.
I’ve attended a lot of different retrospectives over the last few years and one thing that seems to happen quite frequently is that a problem will be raised and there will be a massive urgency to find an action to match with that problem.
As a result of this we don’t tend to go very deeply into working out why that problem happened in the first place and how we can stop it happening again.
Any discussion tends to be quite shallow and doesn’t delve very far beyond the surface of the problem.
I’ve noticed that this tends to happen more when there are a lot of people in the retrospective and there’s a desire not to ‘waste’ everyone’s time which is understandable to some extent.
We recently had an iteration where there were a lot of stories going back and forth between the developers and testers which was leading to a lot of context switching for some developers.
Since it had felt very disruptive we tried to find some way of deciding when we should or shouldn’t context switch from the current story to fix bugs on earlier stories.
In hindsight it would have been more interesting to look at why that problem existed in the first place rather than directly addressing the problem.
In this case, as my colleague Chris pointed out, it might make more sense for a developer (pair) to go and work with a tester on the story until it was ready to be signed off rather than switching back and forth.
I’ve read about other retrospective formats such as the ‘five whys‘ which might help a team to dig deeper into the problems they’re facing but I’m curious whether it’d make sense to follow such a format with over 30 people attending.
We’d need to pick a sufficiently general problem to analyse so that everyone remained engaged.
I’d be curious whether anyone else has made a similar observation and how they made their retrospectives more effective.
After a few recent conversations with colleagues as well as my observations of several projects I’m coming to the conclusion that the way that people react in situations often differs significantly depending on whether they’re working in a large or small team.
One of the most obvious ways that this manifests itself is when there comes a need for someone to volunteer to take care of something – be it a particular functional area, communication with the onshore team or something else.
In larger teams there often seems to be a noticeable silence as no-one, or only one of a very few people, offers to do it.
There seem to be two theories about why this happens:
- People assume that because the team is so big someone else will almost certainly take care of it so they needn’t bother.
- People assume that because the team is so big someone else is probably better placed than them to take care of it.
The problem that arises from people not taking care of things is that they don’t tend to feel as if they’re an important part of the team which invariably means that they don’t contribute as much as they could.
From what I’ve noticed this type of thing doesn’t seem to happen as much on smaller teams – there are just things to do and they tend to get distributed amongst the people on the team.
The concept/stigma of ‘volunteering’ to do something isn’t there.
I recently came across Lewin’s Equation which suggests the following:
Lewin’s Equation, B=ƒ(P,E), is […] a heuristic designed by psychologist Kurt Lewin. It states that Behavior is a function of the Person and his or her Environment.
I think this is reasonably accurate and I’ve noticed people who were fairly anonymous in larger teams become amazingly effective when they were put on a much smaller one.
The ‘agile’ approach to software delivery tends to encourage smaller team sizes and the idea of creating collective responsibility seems to be a key part of why you’d want to do that.
A typical approach to achieving that would be to split the building of a system into smaller teams where each covered a specific stream of work.
The collective team will still need to work together at some stage to build the entire system but at least within their streams they can feel a sense of ownership.
From my experience it can sometimes be quite difficult to do that because it seems that all the streams of work are tightly coupled.
I think we need to really endeavour to find a way to break them up though because it will lead to a much happier team and most likely a more productive one.
Although the majority of the teams that I’ve worked on over the past few years have been relatively small in size I have worked on a few where the team size has been pretty big and perhaps inevitably the productivity has felt much lower.
I think this is somewhat inevitable: although the overall throughput of these teams may be higher than that of smaller teams, problems such as difficulty parallelising the work mean that not every pair is working at maximum productivity.
This tends to mean that you’ll end up with some pairs who have development work to do for only part of the iteration and will then be unable to pick up a new story because there are dependencies on other stories which are currently in progress.
As a result the people in this position will get extremely bored and eventually they’ll start to distract the other people on the team which means that their productivity goes down as well.
Another side effect of knowing that there isn’t anything else for you to pick up is that you subconsciously won’t finish the story you’re working on as quickly as you otherwise might be able to.
One suggestion I’ve heard is that people should pick up technical debt when they find themselves in this situation but from what I’ve noticed, unless you were the one who spotted the technical debt, both context and motivation are lacking.
Another possibility in this type of situation is for the developers to try and help out elsewhere perhaps by doing some technical analysis on upcoming stories or helping out testers with their work.
The former seems to work reasonably well but the problem when doing this offshore is that you can often only get up to a certain point at which you need some help from guys onshore.
As a result technical analysis often isn’t enough to keep a pair occupied for that long.
I’ve previously seen the latter work reasonably well whereby a tester would help the developer to work out which unhappy path scenarios needed to be automated and the developer could then write the automation code.
I haven’t seen that work particularly well on my current team and I think the reason is probably down to the fact that we keep separate sets of ‘dev’ and ‘QA’ functional tests.
I can’t currently think of a good solution to this problem other than matching the amount of work and the number of pairs more carefully so that everyone is occupied for the majority of the time.
I was recently reading an article about how to write meaningful user stories and towards the end of it the author mentioned the INVEST acronym, which suggests that stories should be Independent, Negotiable, Valuable, Estimable, Small and Testable.
From what I’ve seen the most difficult one to achieve in a distributed context is that stories should be ‘negotiable’, in particular when it comes to negotiating the way that the UX of a bit of functionality should work.
On most of the projects that I’ve worked on the people designing the UX tend to work slightly detached from the development team and then send their designs over as wire frames.
It’s not an ideal setup even if you’re working onshore but it becomes even more challenging when you’re working offshore.
Typically onshore if a particular user flow was very difficult to implement then one of the developers might go and talk with the UX guys and then explain the problem and give another potential solution.
The trade off between the cost of implementation and the user experience that the UX person has suggested is clearly outlined in these conversations.
When that feedback comes from offshore it has much less impact and I think it comes across much more as a criticism of someone’s work rather than an attempt to be pragmatic in helping the client to deliver a product within a time frame.
One possible solution to this problem if you have some onshore colleagues is to have them go and talk about the problem but we’ve found it difficult to do that because we try to split onshore/offshore stories so that we’re working on different parts of the code base.
Therefore it is pretty difficult for an onshore developer to go and discuss it for you.
To add to the problem we tend to realise the difficulty of implementing a certain UX during the middle of our day which means we need to wait half a day to see if we can get it changed.
Given that time lag we often just end up designing it the way that’s been specified so that we don’t ‘waste’ time.
It’s clearly not an ideal situation so I’d be keen to hear if anyone has come up with ideas to get around this.
Earlier this year I wrote about some of the problems that we can run into when we have implicit assumptions in stories, and another problematic approach I’ve seen in this area is where we end up with stories that are very heavily focused on technical implementation.
Initially this seems like it will work out pretty well since all the developer then needs to do is follow the steps that have been outlined for them but from my experience it seems to create more problems than it solves.
The first problem is that it seems to create a non-thinking mindset – the more detail there is, the less it feels like you need to think.
This isn’t a great position to end up in since there are frequently multiple ways to solve any problem and we will probably exclude those from our thinking if we’re just following instructions.
As a developer part of the skill-set is knowing how to implement a solution but an equally important part is being able to suggest alternative approaches which may be more appropriate in a given context.
Technically heavy stories only encourage the former as the developer may now assume that any other options have somehow already been considered.
In addition to this it tends to take a lot of time to explain these types of stories which presumably also means that it takes longer to write them up as well.
I’ve also found that it’s more difficult to understand these types of stories since it’s not initially obvious why you need to implement the technical approach that has been described.
Another thing which I hadn’t appreciated until quite recently is that it’s very boring/demotivating to implement stories if all the problems have already been ‘solved’ before you even start.
Given that it’s somewhat inevitable that we may end up with a story that is shaped like this the key seems to be not to get frustrated by the story but rather to try and find out what the underlying intent of the requirement actually is.
One of the trickiest things to do when working in bigger teams is ensuring that it is possible to parallelise the work we have across the number of pairs that we have available.
From my experience this problem happens much less frequently in smaller teams. Perhaps inevitably it’s much easier to find 2 or 3 things that can be worked on in parallel than it is to find 6 or 7 or more.
As a result we can sometimes end up with the situation where 2 stories are played in parallel with a slight dependency between them.
It may be possible to start those two stories at the same time but one may rely on an implementation of the other in order for it to be considered complete.
It’s not an ideal situation but it still seems doable if we ensure that the two pairs work closely together both metaphorically and in terms of their physical proximity.
I think a more useful strategy is to look at how many things can be worked on in parallel and then decide the team size, rather than choosing the team size and then trying to find a way to make the way the stories are written/played fit around that decision.
This is rarely what happens since budgets/need to get to market quickly often take precedence so we end up working in a sub optimal way.
While this is a trade off that the business may be happy to make I still think it’s useful to identify the risk we’re assuming by taking this approach as well as recognising that the amount of work we can flow through the system is limited by how much we can process in parallel.
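The limit can be made concrete with a toy model – all the numbers here are hypothetical, but they illustrate the constraint: once we have more pairs than independently workable streams, the extra pairs add no throughput.

```python
def throughput(pairs, parallel_streams, cards_per_pair_per_week=2):
    """Toy model: only as many pairs as there are independent
    streams of work can actually be busy at any one time."""
    busy_pairs = min(pairs, parallel_streams)
    return busy_pairs * cards_per_pair_per_week

# With only 3 parallelisable streams, a 7-pair team delivers the
# same as a 3-pair team - the other 4 pairs are idle or blocked.
assert throughput(3, 3) == throughput(7, 3) == 6
```

Crude as it is, it makes the risk discussion with the business easier: paying for the extra pairs only buys velocity if we can also create the extra independent streams for them to work on.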
A fairly common trend on nearly every project I’ve worked on is that at some stage the client will ask for more people to be added to the team in order to ‘improve’ the velocity.
Some of the most common arguments against doing so are that it will initially slow down the team’s velocity as the new members learn the domain and code base and get to know the other members of the team.
My colleague Frank Trindade wrote a blog post about 18 months ago where he described his observations of the above happening on a team he’d been working on.
Frank also identifies the following consequence of increasing a team’s size which I think is perhaps even more important:
the decrease of communication levels brought by the addition of new nodes
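One way to see why those “new nodes” hurt so much (a back-of-the-envelope sketch, not something from Frank’s post) is that the number of potential communication paths between n people grows quadratically, as n(n-1)/2:

```python
def communication_paths(n):
    """Number of distinct pairs in a team of n people."""
    return n * (n - 1) // 2

# Going from a 10-person team to a 35-person one doesn't add 3.5x
# the communication overhead - it adds more than 13x.
assert communication_paths(10) == 45
assert communication_paths(35) == 595
```

Each path is something that either has to be maintained or, more likely, quietly decays – which is exactly the drop in communication levels Frank describes.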
The majority of teams that I’ve worked on have had 15 or fewer people but there have been a couple of exceptions, including my current team.
We currently have somewhere around 25 people working in Pune and then perhaps another 10 in Chicago so it’s the biggest team that I’ve worked on. Having said that I do recognise that it’s quite small in size compared to some of the other projects in India.
The following are some of the consequences with respect to communication that I’ve noticed. These are applicable as we increase team size and also just generally when having larger teams.
Standup takes longer
This is somewhat inevitable since there are more people taking part. It’s therefore much easier to lose focus on what other people are saying.
We’ve managed to reduce the time to something more reasonable by not having any discussions in the stand up, instead taking them offline.
Other meetings are much more difficult to control with more people in and I think it requires quite a skilful facilitator to allow everyone to take part and still ensure that the meeting doesn’t overrun.
Technical decision consensus
Getting consensus on any technical approach is much more difficult than when there are just a few developers on the team.
Technical discussions seem to take way longer than I’m used to because we try to ensure that everyone gets to express their opinion.
There are also more people available to then disagree with that opinion which tends to mean that we go around in circles more frequently.
I’ve found that there are rarely more than 3 or 4 ways to solve any problem so a lot of the time people are expressing similar opinions in a slightly different way.
When the team gets this big I don’t think it makes sense to include everyone in all technical decisions. My current thinking is that having a group of 3 or 4 people involved in each one is more than enough.
Greater distance between team members
I think the optimal setup for a software team is to have all the team members working on a single table – this means we can have around 12 members per team.
That way everyone is within talking distance of each other and communication remains smooth.
Once we have more people than that we need another table which means that there are 2 rows of people in between people on the far side of each table.
In The Organisation and Architecture of Innovation the authors include a diagram, which I think is quite revealing, showing how the frequency of communication drops off as the distance between people increases.
Although the authors are talking about bigger distances, I think the amount of communication is less if we are separated even by this little amount of space.
You now need to take into account the number of people that you might be disrupting by shouting across the tables which was less of an issue before.
Adding people to a team may increase the velocity but it’s unlikely to lead to an improvement which is directly proportional to the number of people added.
From my experience it might make more sense to take a bit longer building the product or application rather than aiming at a strict time deadline with more people.
These are just surface observations – I’m sure others who have worked for longer in big teams would be able to point out a whole range of other consequences.